New receiver with higher epoch of '0' is created hence current receiver with epoch '0' is getting disconnected

Sebastian 20 Reputation points
2025-04-28T10:04:58.21+00:00

Hello,

We are receiving the following error whenever new instances of our application are starting up and the partitions need to be rebalanced:

"New receiver '2643943a-a143-43ef-ad1d-858bdec9c686' with higher epoch of '0' is created hence current receiver 'fd1b4f53-2ac4-4f83-b8d0-a30e29eed7f0' with epoch '0' is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used."

After that error is thrown, the application's behaviour is correct: the partitions are distributed across instances as expected and they receive the correct events.

Our current setup is that we have 4 partitions on each event hub. Each application has its own consumer group, but there may be more than one active instance of each application, which is why we rely on automatic rebalancing. We're using the Event Hubs SDK for JS: @azure/event-hubs version 6.0.0 and @azure/eventhubs-checkpointstore-blob version 2.0.0.
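
For context, here is a minimal sketch of how such a consumer is typically wired up with these two packages (plus @azure/storage-blob for the container client); the connection strings, names, and container below are placeholders rather than our real configuration:

```typescript
import { EventHubConsumerClient, earliestEventPosition } from "@azure/event-hubs";
import { ContainerClient } from "@azure/storage-blob";
import { BlobCheckpointStore } from "@azure/eventhubs-checkpointstore-blob";

// Placeholder configuration - substitute real values.
const eventHubConnectionString = "<event-hubs-namespace-connection-string>";
const eventHubName = "<event-hub-name>";
const consumerGroup = "<application-consumer-group>";
const storageConnectionString = "<storage-account-connection-string>";
const containerName = "<checkpoint-container>";

async function main(): Promise<void> {
  // The blob container stores checkpoints and partition-ownership records,
  // which the SDK uses to balance the partitions across running instances.
  const containerClient = new ContainerClient(storageConnectionString, containerName);
  await containerClient.createIfNotExists();
  const checkpointStore = new BlobCheckpointStore(containerClient);

  const consumerClient = new EventHubConsumerClient(
    consumerGroup,
    eventHubConnectionString,
    eventHubName,
    checkpointStore
  );

  consumerClient.subscribe(
    {
      processEvents: async (events, context) => {
        for (const event of events) {
          console.log(`Partition ${context.partitionId}:`, event.body);
        }
        // Checkpoint the last event so another instance can resume from here
        // after a rebalance.
        if (events.length > 0) {
          await context.updateCheckpoint(events[events.length - 1]);
        }
      },
      processError: async (err, context) => {
        console.error(`Error on partition ${context.partitionId}:`, err);
      },
    },
    { startPosition: earliestEventPosition }
  );
}

main().catch(console.error);
```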

Our question is: is this error expected when the rebalance is happening, or is something wrong in our setup?

Azure Event Hubs

Accepted answer
  1. Smaran Thoomu 23,260 Reputation points Microsoft External Staff Moderator
    2025-04-28T10:35:30.6833333+00:00

    Hi @Sebastian
    Based on your description, the message you're seeing during rebalancing - "New receiver with higher epoch of '0' is created hence current receiver with epoch '0' is getting disconnected." - is expected when multiple instances consume from the same Event Hub partitions.

    A few clarifications:

    • Epoch (exclusive) receivers are designed to enforce ownership of a partition within a consumer group. When a new receiver with the same or a higher epoch connects to a partition, the existing one gets disconnected - this is intentional and ensures that only one active reader processes a partition at a time, so competing instances don't process the same events.
    • Since you're using automatic rebalancing across multiple instances and event processing is correct after the disconnection, the load balancing built into EventHubConsumerClient (backed by your blob checkpoint store) is working as expected.
    • In this context the log entry is informational rather than an actual error: it records that partition ownership changed hands during rebalancing. If you prefer, you can detect it in your processError handler and log it at a lower severity, as sketched after this list.
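
    If the noise in your logs is a concern, here is a minimal sketch of that filtering. It assumes the ownership change surfaces in processError as a MessagingError whose code is "ReceiverDisconnectedError" (the SDK's mapping of the AMQP "amqp:link:stolen" condition) - please verify the code you actually receive before relying on it:

    ```typescript
    import { MessagingError } from "@azure/event-hubs";
    import type { PartitionContext } from "@azure/event-hubs";

    // processError handler that downgrades expected ownership-change errors to
    // informational logging while still reporting real failures.
    async function processError(err: Error, context: PartitionContext): Promise<void> {
      if (err instanceof MessagingError && err.code === "ReceiverDisconnectedError") {
        // Another instance claimed this partition during rebalancing - benign.
        console.info(`Partition ${context.partitionId} was taken over by another instance.`);
        return;
      }
      console.error(`Error on partition ${context.partitionId}:`, err);
    }
    ```

    You would then pass this function as the processError handler in consumerClient.subscribe({ processEvents, processError }).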

    Please let me know if you would like me to also share best practices for scaling Event Hub consumers for larger workloads!


    Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.

