Azure Container Apps internal ingress unreachable from same VNet (APIM and VM TCP timeout to ACA internal static IP)

Kaustubh Bhal 20 Reputation points
2026-02-24T00:22:16.79+00:00

We are experiencing a connectivity issue in East US where Azure API Management (APIM) cannot connect to our backend Azure Container App hosted in an internal Azure Container Apps managed environment. APIM traces show request processing proceeds normally (including successful JWT validation), but backend forwarding fails with a TCP connect timeout: connection timed out: 10.0.7.118:443 (after ~20 seconds). The backend Container App FQDN resolves to 10.0.7.118, which matches the Azure Container Apps managed environment static IP.

We independently reproduced the same behavior from a separate Linux VM in the same VNet. From the VM, DNS resolution for the production Container App FQDN returns 10.0.7.118 correctly, but curl/nc to 10.0.7.118:443 time out. We also captured packets with tcpdump and observed repeated outbound TCP SYN packets to 10.0.7.118:443 with no SYN-ACK returned. This confirms the failure occurs before TCP handshake completion and before any TLS/HTTP/application processing.
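The VM-side reproduction can be sketched roughly as follows. The Container App FQDN and the capture interface name are placeholders; only the static IP 10.0.7.118 comes from the environment above.

```shell
# Resolve the internal Container App FQDN (placeholder hostname)
dig +short <container-app-fqdn>

# Pure TCP connect test to the environment static IP, with a 10 s timeout
nc -vz -w 10 10.0.7.118 443

# TLS/HTTP attempt with an explicit connect timeout
curl -v --connect-timeout 10 https://<container-app-fqdn>/

# Capture the handshake attempt: repeated outbound SYNs with no SYN-ACK
# confirm the failure is below TLS/HTTP (interface name is a placeholder)
sudo tcpdump -ni eth0 'host 10.0.7.118 and tcp port 443'
```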

Environment and network details:

  • Region: East US
  • VNet: vnet-openhand-prod-eastus
  • APIM: openhand-apim (Developer SKU), VNet-integrated (virtualNetworkType=External), subnet snet-apim
  • Backend Container App: ca-openhand-synapse-production
  • ACA managed environment: ca-openhand-synapse-internal

ACA managed environment settings include publicNetworkAccess=Disabled, vnetConfiguration.internal=true, staticIp=10.0.7.118
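For completeness, these environment settings can be read back with the Azure CLI; the resource group name below is a placeholder.

```shell
# Show the managed environment's network configuration
# (resource group name is a placeholder)
az containerapp env show \
  --name ca-openhand-synapse-internal \
  --resource-group <rg> \
  --query "{publicNetworkAccess: properties.publicNetworkAccess, internal: properties.vnetConfiguration.internal, staticIp: properties.staticIp}"
```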

Checks completed (no obvious blocking issue found):

  • snet-apim: NSG attached (apim-nsg), no route table
  • snet-debug-vm: no NSG, no route table
  • snet-container-apps-internal: delegated to Microsoft.App/environments, no NSG, no route table
  • APIM subnet NSG reviewed; no obvious rule preventing VNet egress to backend
  • ACA internal subnet has no NSG/UDR attached

A no-op Container App revision rollout was also attempted and did not restore connectivity.

Control test:

A separate dev Container App with public ingress is reachable from the same VM, indicating the VM and general networking are functioning.

Based on APIM trace + same-VNet VM reproduction + packet capture (SYN out, no SYN-ACK), this appears to be an Azure Container Apps internal managed environment ingress / load balancer dataplane reachability issue (or related Azure-managed network path issue), not an APIM policy/authentication or application-code issue.

I'd like to see if someone could investigate ACA internal ingress/dataplane reachability to 10.0.7.118:443 for managed environment ca-openhand-synapse-internal in East US. This configuration was working previously and began failing suddenly.

Azure Container Apps

An Azure service that provides a general-purpose, serverless container platform.

Answer accepted by question author
  1. Alex Burlachenko 19,530 Reputation points Volunteer Moderator
    2026-02-25T12:34:38.8866667+00:00

    Kaustubh Bhal hi,

    given your diagnostics, this is almost certainly not APIM policy, not JWT, and not application code. It is either routing asymmetry or an ACA managed environment internal load balancer dataplane issue.

    On the VM, run az network nic show-effective-route-table and check whether the route covering 10.0.7.118 shows as VNet local. If it points to a virtual appliance or anything else, you have a UDR / forced-tunneling issue; fix the route.
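    A minimal sketch of that check, assuming you know the debug VM's NIC and resource group names (both placeholders here):

```shell
# Effective routes applied to the debug VM's NIC; find the prefix that
# covers 10.0.7.118 and verify its next hop type is VnetLocal,
# not VirtualAppliance or None (names are placeholders)
az network nic show-effective-route-table \
  --name <vm-nic> \
  --resource-group <rg> \
  --output table
```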

    I would check effective NSG rules on both the APIM subnet and the VM NIC. Even if you think nothing blocks it, verify there is no outbound deny to 10.0.7.0/24 or port 443; use az network nic list-effective-nsg. Also check whether the Container App ingress target port matches what you are hitting. If the app listens on 80 internally and you hit 443, ACA handles TLS at the environment level, but misconfigured ingress can still blackhole traffic. Double-check that ingress is internal and enabled.
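    Those two checks can be sketched like this (NIC and resource group names are placeholders; the app name is taken from the question):

```shell
# Effective NSG rules actually applied to the VM NIC
# (repeat the review for the NSG on the APIM subnet)
az network nic list-effective-nsg \
  --name <vm-nic> \
  --resource-group <rg>

# Confirm the Container App's ingress is enabled, internal
# (external=false), and that targetPort is what you expect
az containerapp ingress show \
  --name ca-openhand-synapse-production \
  --resource-group <rg>
```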

    I'm pretty sure you did, but please verify that no firewall/NVA exists anywhere in the path. Even if the ACA subnet has no UDR, another subnet with forced tunneling can break the return path and cause SYN without SYN-ACK. If all routing and NSGs are clean and you still see SYN with no SYN-ACK from both the VM and APIM, stop troubleshooting locally: that strongly indicates an ACA managed environment internal load balancer issue, so open a Microsoft support ticket. In short, this is either asymmetric routing or an ACA internal dataplane issue. If routing is clean, it looks platform-side, and only Microsoft can confirm and fix it.

    rgds,

    Alex


0 additional answers
