Hello @Amar-Azure-Practice, apologies for the delayed response here. After going through the additional details above and looking at some similar issues internally, I have a question: have you configured the TCP idle timeout in this scenario (supported only on Standard Load Balancer)? I ask because the Load Balancer's default behavior is to silently drop flows when a flow's idle timeout is reached. If this setting is already enabled, you can try configuring TCP keep-alives with an interval shorter than the idle timeout, or increasing the idle timeout value, and see whether you observe the same behavior.
There are some additional cases to consider here as well. When using Azure Load Balancer, the TCP handshake occurs between the client and the selected backend VM, so any delay or pause in connectivity due to high load that exceeds the timeout setting on the host VM can cause timeout errors. You can try increasing the timeout duration on the host VMs, and setting TCP keep-alives may also help in this scenario.
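To illustrate the keep-alive suggestion above, here is a minimal Python sketch of enabling TCP keep-alive on a socket so that probes fire before the Load Balancer's idle timeout expires. The timing values (180 s idle, 60 s probe interval, 4 probes) are illustrative assumptions chosen to sit under a 4-minute idle timeout, not Azure-mandated settings, and the `TCP_KEEP*` options are platform-specific (Linux), hence the `hasattr` guards:

```python
import socket

def enable_keepalive(sock, idle=180, interval=60, count=4):
    """Turn on TCP keep-alive so the OS sends probes on an otherwise
    idle connection, keeping the flow alive from the Load Balancer's
    point of view. Timings are example values, not Azure defaults."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Fine-grained knobs are Linux-specific; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        # Seconds of idleness before the first probe is sent.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        # Seconds between subsequent probes.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        # Number of unanswered probes before the connection is dropped.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, count)
    return sock

sock = enable_keepalive(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # non-zero when enabled
sock.close()
```

The key point is that `idle + interval * count` should stay below the configured Load Balancer idle timeout so a probe is always sent before the flow is considered idle.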
To help validate this scenario, you can perform packet captures with Wireshark on either of these VMs, which may give us more insight.
If this does not help resolve the issue, I think a deeper analysis of the setup and configuration will be required to determine where the requests are being throttled. You can create a support request for this if you have a support plan. If you do not have a support plan, please refer to the private message I will post here shortly.
Please let me know if there are any concerns. Thank you!