
Azure Container Apps outbound connections have intermittent delays

Laz6598 0 Reputation points
2026-02-15T21:56:43.7933333+00:00

Hi,

I am experiencing an intermittent 40 to 60-second delay between my application in Azure Container Apps and my database. The latency occurs after the request is fired but before it actually reaches the database. The same container image runs without any delay when deployed in AKS. There is no high CPU on either the container app or the database. The attached screenshot shows the timestamp of the log right before making a query and when the query actually executed.

[Attached screenshot: log timestamps immediately before the query and when it executed]

Azure Container Apps
An Azure service that provides a general-purpose, serverless container platform.

2 answers

Sort by: Most helpful
  1. Siddhesh Desai 3,860 Reputation points Microsoft External Staff Moderator
    2026-02-16T19:42:36.49+00:00

    Hi @Laz6598

    Thank you for reaching out to Microsoft Q&A.

    You are experiencing intermittent 40–60 second delays in outbound connections from Azure Container Apps (ACA) to your database. This delay occurs before the request even reaches the database, which means the stall is happening inside the Azure Container Apps outbound networking layer. This issue is commonly seen when ACA uses shared SNAT ports, which can become exhausted under certain workloads. When SNAT ports are unavailable, outbound TCP connections can hang until they time out, typically causing the 40–60 second delay you are observing. Since the same container image works fine on AKS, the root cause is tied to ACA’s underlying infrastructure and not your application or the database. Other possible contributors include DNS latency inside ACA or cold starts if replicas scale from zero, but the symptoms strongly match SNAT exhaustion in ACA’s shared environment.

    Refer to the points below to resolve this issue, or use them as workarounds:

    1. Deploy Azure Container Apps inside a VNet and use a NAT Gateway. This gives your environment a dedicated outbound IP and a large pool of SNAT ports, eliminating the shared SNAT bottleneck. Steps:

    • Create a VNET and subnet for ACA
    • Create a NAT Gateway
    • Attach the NAT Gateway to the subnet
    • Deploy ACA Environment into this VNET
        # The NAT Gateway needs a standard-SKU public IP; create one first
        az network public-ip create \
          --name MyPublicIP \
          --resource-group MyRG \
          --sku Standard

        # Create the NAT Gateway with the public IP attached
        az network nat gateway create \
          --name MyNatGateway \
          --resource-group MyRG \
          --public-ip-addresses MyPublicIP \
          --idle-timeout 10

        # Route the ACA subnet's outbound traffic through the NAT Gateway
        az network vnet subnet update \
          --vnet-name MyVnet \
          --name ACASubnet \
          --resource-group MyRG \
          --nat-gateway MyNatGateway
      
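    Once the environment is deployed into that subnet, you can confirm egress now uses the NAT Gateway's static IP. The resource and app names below are illustrative, and the in-container check assumes curl is available in your image:

```shell
# Show the static outbound IP attached to the NAT Gateway's public IP resource
az network public-ip show \
  --name MyPublicIP \
  --resource-group MyRG \
  --query ipAddress --output tsv

# From inside a running replica, check which IP external services actually see;
# it should match the value printed above
az containerapp exec \
  --name my-container-app \
  --resource-group MyRG \
  --command "curl -s https://ifconfig.me"
```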

    2. Enable and optimize database connection pooling. If your app creates new outbound connections frequently, reduce connection churn by enabling pooling. This avoids repeated SNAT allocations and TCP handshakes.

    • For .NET use default ADO.NET pooling
    • For Node.js mssql enable pool settings
    • For EF Core ensure Max Pool Size is tuned
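    As a sketch: for ADO.NET/EF Core, pooling is controlled through the connection string, which you can pass to the container app as an environment variable. The server, database, and pool sizes below are illustrative placeholders, not recommended values:

```shell
# Illustrative: supply a pooled connection string to the app.
# Pooling=true is the ADO.NET default; Max Pool Size caps concurrent connections
# so the app reuses sockets instead of burning SNAT ports on every query.
az containerapp update \
  --name my-container-app \
  --resource-group MyRG \
  --set-env-vars 'ConnectionStrings__Default=Server=tcp:myserver.database.windows.net,1433;Database=mydb;Pooling=true;Min Pool Size=5;Max Pool Size=100;Connection Timeout=15'
```

For sensitive values, prefer `--secrets` plus `secretref:` in the env var rather than a plain value.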

    3. Use a private endpoint + private DNS zone (if applicable). If your database supports private endpoints, moving traffic to an internal VNet path will:

    • Bypass public SNAT completely
    • Avoid DNS fallback delays
    • Improve reliability and latency
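    A rough CLI sketch for an Azure SQL server, with placeholder names throughout. Note that a private endpoint usually needs a subnet that is not delegated to ACA, so a separate subnet (here `PESubnet`) is assumed:

```shell
# Illustrative: private endpoint for an Azure SQL server in a non-delegated subnet
az network private-endpoint create \
  --name MySqlPrivateEndpoint \
  --resource-group MyRG \
  --vnet-name MyVnet \
  --subnet PESubnet \
  --private-connection-resource-id $(az sql server show -g MyRG -n myserver --query id -o tsv) \
  --group-id sqlServer \
  --connection-name MySqlConnection

# Private DNS zone so the db FQDN resolves to the private IP from inside the VNet
az network private-dns zone create \
  --resource-group MyRG \
  --name privatelink.database.windows.net

az network private-dns link vnet create \
  --resource-group MyRG \
  --zone-name privatelink.database.windows.net \
  --name MyDnsLink \
  --virtual-network MyVnet \
  --registration-enabled false
```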

  2. Alex Burlachenko 19,525 Reputation points Volunteer Moderator
    2026-02-16T08:15:37.43+00:00

    Laz6598 hello,

    this kind of 40 to 60 sec delay before the request even hits the db is almost never CPU related and usually points to networking or connection-setup behaviour inside Azure Container Apps. since the same image works fine in AKS, this strongly suggests an environment-level difference, not an app bug. in ACA, outbound traffic goes through managed networking, and if your app resolves DNS on every new connection, a slow lookup or retries can easily add 30 to 60 sec of delay. so first look at how fast the db FQDN resolves from inside the container and make sure connection pooling is enabled in your db client. also check for SNAT port exhaustion or aggressive short-lived outbound connections without pooling, because the ACA NAT layer can queue new TCP connects under pressure even when CPU looks fine.

    if your ACA environment is VNet integrated, also check whether NSG, firewall, or private endpoint rules cause initial TCP retries before the handshake completes. compare the outbound path in AKS vs ACA, especially if AKS uses a NAT gateway and ACA uses default egress. enable diagnostic logs and check TCP connect time, TLS handshake time, and DNS resolution timing to see exactly where the 40 to 60 sec gap happens. this pattern is almost always connection setup or networking-layer latency, not actual db execution latency.
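    to split the gap into DNS vs TCP phases, a curl timing probe from inside the container (via `az containerapp exec` or the debug console) is usually enough. the hostname below is a placeholder for your db FQDN; `telnet://` makes curl do only a raw TCP connect, so no TLS or db protocol is involved:

```shell
# TCP-level probe to the db port; we only time the setup, not a real query.
# A large time_namelookup points at DNS; time_connect minus time_namelookup
# points at the SNAT/TCP layer. --max-time aborts once the connection is up,
# and curl still prints the -w timings on exit.
curl -o /dev/null -s --max-time 5 -w \
  'dns: %{time_namelookup}s  tcp: %{time_connect}s\n' \
  telnet://myserver.database.windows.net:1433
```

running the same probe against a well-known host like example.com gives you a baseline to compare against.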

    rgds,

    Alex

