How to scale parallel event processing when Event Hub's partitions are fixed at creation time - Azure Event Hubs

Ryan C 1 Reputation point
2022-03-17T21:44:10.127+00:00

Hi everyone!

I'm looking for guidance on event hub scaling strategy. Specifically, how to scale parallel event processing for a single event hub in an event hub namespace as throughput grows.

Context:
The throughput going through my event hub and consumed by my application is currently small: ~70 events/sec at absolute peak, processed by a single event receiver/consumer.

However, there is a future where the number of events per second to be processed is orders of magnitude higher than it is now, and one receiver isn't going to cut it.

Takeaways from the docs (Notice the conflict between 2 and 3):

  1. When multiple receivers listen to the same partition, they will each receive and process the same set of events independently (this results in duplicate event processing, which is not desirable for my app)
  2. For events to be evenly distributed across multiple receiver processes, create an event hub with multiple partitions and assign each receiver process to its own partition (my current receiver setup is sketched after this list)
  3. The number of partitions for an event hub is set at creation time and cannot be changed.
  4. Too many partitions can decrease performance
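
For concreteness, here's roughly what my single receiver looks like today, as a minimal sketch using the Python azure-eventhub SDK (connection strings, container, and hub names are placeholders, not my real setup). My understanding is that with a blob-backed checkpoint store, additional copies of this same process would coordinate and split the hub's partitions between themselves:

```python
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Placeholders -- substitute real connection strings and names.
STORAGE_CONN_STR = "<storage-connection-string>"
EVENTHUB_CONN_STR = "<eventhub-connection-string>"

# Blob-backed checkpoint store: multiple instances of this process use it
# to coordinate partition ownership and record progress.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    STORAGE_CONN_STR, "checkpoints"
)

client = EventHubConsumerClient.from_connection_string(
    EVENTHUB_CONN_STR,
    consumer_group="$Default",
    eventhub_name="my-hub",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    # Application-specific processing goes here.
    print(partition_context.partition_id, event.body_as_str())
    partition_context.update_checkpoint(event)

with client:
    # Blocks and receives from every partition this instance owns.
    client.receive(on_event=on_event, starting_position="-1")
```

With only 1 partition this is trivially a single receiver, which is exactly my worry: adding receiver instances won't help unless the partition count itself can grow.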

The Problem
I want to set the number of partitions and event receivers appropriate to the current scale of my application (1 partition/1 receiver), AND I want to be able to scale in the future as traffic increases. But given 2 and 3, I don't see a straightforward solution for increasing parallel event processing without creating a new event hub. I would be fine with this if I could migrate unprocessed messages over to that new hub, but there doesn't seem to be an easy mechanism for that.

What's the recommended strategy for when a user wants to scale event receivers as their throughput grows? If this is impossible, I'd settle for an adequate migration solution from old->new event hub!

Thanks in advance y'all. :)


2 answers

  1. HimanshuSinha-msft 19,486 Reputation points Microsoft Employee Moderator
    2022-03-18T18:45:28.377+00:00

    Hello @Ryan C ,
    Thanks for the question and using MS Q&A platform.

    As we understand it, the ask here is how to make a decision now so that you can manage when the event volume increases in the future. Please do let us know if that's not accurate.

    Partition scale-up is supported on Premium namespaces. If you see a high probability of a volume increase in the future, then you should be on a Premium namespace instead.
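
    For example, a rough sketch of what the partition scale-up could look like with the azure-mgmt-eventhub Python package (subscription and resource names here are placeholders; note that the partition count can only be increased, not decreased):

    ```python
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.eventhub import EventHubManagementClient
    from azure.mgmt.eventhub.models import Eventhub

    # Placeholder identifiers -- substitute your own.
    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    NAMESPACE = "<premium-namespace>"  # scale-up requires Premium (or Dedicated)
    EVENT_HUB = "<event-hub-name>"

    client = EventHubManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # create_or_update with a higher partition_count scales the existing hub
    # out in place; existing partitions and their events are unaffected.
    client.event_hubs.create_or_update(
        RESOURCE_GROUP,
        NAMESPACE,
        EVENT_HUB,
        Eventhub(partition_count=8),
    )
    ```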

    If you are concerned about pricing: when I ran the numbers in the pricing calculator, at 10 MB/sec the Premium SKU came out cheaper than Standard.

    Please do let us know if you have any queries.
    Thanks
    Himanshu



  2. Ryan C 1 Reputation point
    2022-03-22T16:14:54.99+00:00

    Hi @HimanshuSinha-msft

    Thanks for the info! I was a bit concerned about the price. What does the hours/minutes/days multiplier represent in the pricing calculator for the event hub? Could you help me understand how to calculate it appropriately? For example, what would be an appropriate price for, say, 100 events/sec that are roughly 500 KB in size?
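
    For context, here's my own back-of-envelope math (assuming ~500 KB per event, i.e. kilobytes, which drives the numbers below):

    ```python
    # Rough ingress estimate -- assumed workload numbers, not a quote.
    events_per_sec = 100
    event_size_kb = 500

    ingress_mb_per_sec = events_per_sec * event_size_kb / 1000
    print(f"~{ingress_mb_per_sec:.0f} MB/sec aggregate ingress")  # ~50 MB/sec

    # For comparison, one Standard-tier throughput unit allows 1 MB/sec
    # (or 1,000 events/sec) of ingress, so this load would need far more
    # than a single TU.
    ```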

