@ARAVETI, MAHESH - Thanks for the question and using MS Q&A platform.
To answer your first question: in the Azure Event Hubs Premium tier, processing units (PUs) are assigned at the namespace level and are shared across all event hubs in that namespace. The number of PUs determines the namespace's maximum throughput capacity, and you can scale the PU count up to raise that ceiling.
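As a rough illustration, scaling the PU count can be done from the Azure CLI. This is only a sketch: the resource group `my-rg` and namespace `my-namespace` are placeholders, and it assumes an existing Premium-tier namespace and a CLI version that exposes the `--capacity` flag on `az eventhubs namespace update`.

```shell
# Check the current SKU and capacity (PUs) of the Premium namespace
az eventhubs namespace show \
  --resource-group my-rg \
  --name my-namespace \
  --query "sku"

# Scale the namespace to 2 PUs (capacity = processing units on the Premium tier)
az eventhubs namespace update \
  --resource-group my-rg \
  --name my-namespace \
  --capacity 2
```

You can also change the PU count from the Scale blade of the namespace in the Azure portal.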
Regarding your second question, there are a few things you can do to improve the performance of your Logstash Kafka consumer:
- Increase the number of partitions for your event hubs: The partition count caps the number of concurrent readers per consumer group. By increasing the number of partitions, you allow more consumers to read in parallel and improve the overall throughput of your event hub.
- Increase the number of PUs assigned to your namespace: As I mentioned earlier, the number of PUs assigned to a namespace determines the maximum throughput capacity of the namespace. By increasing the number of PUs, you can increase the maximum throughput capacity and improve the overall performance of your event hub.
- Tune your Logstash Kafka consumer configuration: Make sure the Kafka input is configured for your use case. Settings such as the number of consumer threads (ideally matching the partition count), the minimum fetch size, the maximum number of records returned per poll, and the maximum fetch size per partition can all have a significant impact on throughput.
- Monitor your event hub and Logstash Kafka consumer: Use Azure Monitor to monitor the performance of your event hub and Logstash Kafka consumer. This will help you identify any bottlenecks or performance issues and take corrective action.
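To tie the tuning points above together, here is a sketch of a Logstash `kafka` input pointed at an Event Hubs Kafka endpoint. The namespace name, event hub (topic) name, and connection string are placeholders; the option names are from the Logstash Kafka input plugin, and the exact values shown are illustrative starting points, not recommendations:

```
input {
  kafka {
    # Event Hubs exposes a Kafka endpoint on port 9093 of the namespace
    bootstrap_servers => "mynamespace.servicebus.windows.net:9093"
    # The event hub name maps to the Kafka topic name
    topics => ["my-event-hub"]
    group_id => "logstash"
    # Match consumer_threads to the partition count of the event hub
    consumer_threads => 4
    # Fetch/poll tuning - adjust for your message sizes and latency needs
    fetch_min_bytes => "1048576"
    max_poll_records => "500"
    max_partition_fetch_bytes => "2097152"
    # Event Hubs requires SASL_SSL with the PLAIN mechanism;
    # the username is the literal string "$ConnectionString"
    security_protocol => "SASL_SSL"
    sasl_mechanism => "PLAIN"
    sasl_jaas_config => 'org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://...";'
  }
}
```

Note that running more consumer threads (across all Logstash instances in the same consumer group) than there are partitions leaves the extra threads idle, which is why the partition count and `consumer_threads` are usually tuned together.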
Hope this helps. If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.