Hi @Sebastian
Based on your description, the message you're seeing during rebalancing - "New receiver with higher epoch of '0' is created hence current receiver with epoch '0' is getting disconnected." - is expected behavior when multiple instances consume from the same Event Hub partitions.
A few clarifications:
- Epoch receivers are designed to enforce exclusive ownership of a partition within a consumer group. When a new receiver with the same or higher epoch connects to a partition, the existing receiver is disconnected. This is intentional: it ensures only one active reader owns a partition at a time, so two instances never process the same partition in parallel.
- Since you're using automatic rebalancing across multiple instances and events continue to be processed correctly after the disconnection, the load-balancing logic (the Event Processor Host used internally by the SDK) is working as expected - see the sketch after this list.
- In this context, the log entry is informational rather than an actual error; it simply indicates that partition ownership changed during rebalancing.
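
For reference, here is a minimal sketch of how multiple consumer instances typically coordinate partition ownership through a shared checkpoint store, which is what produces the ownership-change messages you observed. This assumes the Python `azure-eventhub` SDK with the blob checkpoint store (if you're on another language SDK, the equivalent is its EventProcessorClient/EventProcessorHost); the connection strings, container name, and Event Hub name below are placeholders.

```python
from azure.eventhub import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblob import BlobCheckpointStore

# Shared checkpoint store: all instances point at the same blob container,
# which the SDK uses to track partition ownership and checkpoints.
checkpoint_store = BlobCheckpointStore.from_connection_string(
    "<STORAGE_CONNECTION_STRING>", "<CONTAINER_NAME>"
)

client = EventHubConsumerClient.from_connection_string(
    "<EVENT_HUB_CONNECTION_STRING>",
    consumer_group="$Default",
    eventhub_name="<EVENT_HUB_NAME>",
    checkpoint_store=checkpoint_store,
)

def on_event(partition_context, event):
    # Process the event, then checkpoint so another instance can resume
    # from this position if it takes over the partition.
    print(f"Partition {partition_context.partition_id}: {event.body_as_str()}")
    partition_context.update_checkpoint(event)

with client:
    # Blocks and receives; with multiple instances running, partitions are
    # balanced between them automatically.
    client.receive(on_event=on_event, starting_position="-1")
```

Running two or more copies of this against the same Event Hub and checkpoint container will reproduce the ownership-change messages as partitions are rebalanced between the instances, which is the same mechanism behind the log line you quoted.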
Please let me know if you would like me to also share best practices for scaling Event Hub consumers for larger workloads!
Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.