Hello Botond Beres, I completely understand the frustration caused by the posting restrictions; you're not alone in running into them.
I have shared a fully working, platform-agnostic workaround on your original post, tested end-to-end, that gets around the exact issues you described, including the ASPNETCORE_URLS discrepancy, the DNS resolution failures, and the isolated container networking. Please check it out and let me know whether it works for you.
In the meantime, I'll answer the three questions from your original post here.
Q1. Why are containers being assigned IP addresses in different subnets within the link-local range?
Ans- This happens when containers are deployed in environments that do not connect them to a shared virtual network, such as Azure App Service or some Container Apps configurations without VNet integration. In those cases, each container runs in isolation and is assigned a link-local (APIPA) address from the 169.254.0.0/16 range, but from different subnets within that range, with no routing between them.
This behavior is expected in these environments: there is no shared overlay network between containers, no internal DNS or service discovery, and no automatic routing between the different link-local subnets. These containers run in isolation, by design.
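To see why such addresses can't reach each other, here is a minimal sketch using Python's standard ipaddress module; the two container IPs are hypothetical examples, not values from your actual deployment:

```python
import ipaddress

# Hypothetical addresses of the kind these platforms hand out to
# two isolated containers (not taken from your actual deployment).
container_a = ipaddress.ip_address("169.254.1.10")
container_b = ipaddress.ip_address("169.254.2.10")

link_local = ipaddress.ip_network("169.254.0.0/16")

# Both addresses fall in the link-local (APIPA) range ...
print(container_a in link_local, container_b in link_local)   # True True
print(container_a.is_link_local, container_b.is_link_local)   # True True

# ... but they sit in different /24 subnets within that range,
# and link-local traffic is never routed between subnets.
subnet_a = ipaddress.ip_network("169.254.1.0/24")
subnet_b = ipaddress.ip_network("169.254.2.0/24")
print(container_b in subnet_a)  # False: no shared subnet, no route
```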
Q2. Is there a configuration option to ensure all containers receive IP addresses in the same subnet?
Ans- Not in Azure App Service or basic Azure Container Apps without VNet integration. These platforms expose no control over container IP allocation, and they guarantee neither subnet-level co-location nor routable IPs. In Kubernetes-based environments (AKS, EKS, GKE), however, the CNI (Container Network Interface) plugin automatically assigns Pods IPs from the same routable cluster subnet and ensures all containers can talk to each other using internal DNS and IP routing, regardless of node placement. If you need full control over IP assignment and DNS between containers, Kubernetes is currently the only platform-agnostic way to achieve that consistently across clouds and on-prem. Check out my comment on your original post.
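As a contrast with the link-local case above, here is a quick sanity check you could run against your Pod IPs. The 10.244.0.0/16 pod CIDR and the Pod IPs below are hypothetical placeholders; your cluster's actual values depend on its CNI configuration:

```python
import ipaddress

# Hypothetical pod CIDR; substitute your cluster's actual value.
pod_cidr = ipaddress.ip_network("10.244.0.0/16")

# Hypothetical Pod IPs as reported by `kubectl get pods -o wide`.
pod_ips = ["10.244.0.12", "10.244.1.7", "10.244.3.21"]

# Unlike the APIPA case, every Pod lands in the same routable CIDR,
# so the CNI guarantees Pod-to-Pod reachability across nodes.
for ip in pod_ips:
    assert ipaddress.ip_address(ip) in pod_cidr
print("all pod IPs are in", pod_cidr)
```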
Q3. How can we resolve this without relying on Azure-specific services or network configurations?
Ans- Use a platform-agnostic Kubernetes setup (e.g., AKS, EKS, GKE, or on-prem k8s).
With Kubernetes, each container runs in a Pod with its own IP address, and all Pods can communicate with each other using cluster-wide DNS names (e.g., http://app-b:8080). There is no reliance on Azure App Service DNS, WEBSITE_PRIVATE_IP, or other environment-specific workarounds. Most importantly, the exact same code and manifests work on any Kubernetes distribution, cloud or on-prem, which satisfies your original goal: direct container-to-container communication that works identically across all environments. A minimal sketch of that portability follows.
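Here is a sketch of a client that relies only on the cluster DNS name from the example above. It assumes a Service named app-b exposing port 8080, matching the http://app-b:8080 example; the /health endpoint path is a hypothetical placeholder:

```python
import urllib.request

# Runs identically in any Pod on any Kubernetes distribution:
# "app-b" resolves through the cluster DNS to the Service's ClusterIP,
# with no cloud-specific env vars or DNS suffixes involved.
URL = "http://app-b:8080/health"  # hypothetical endpoint path

with urllib.request.urlopen(URL, timeout=5) as resp:
    print(resp.status, resp.read().decode())
```

The same snippet works on AKS, EKS, GKE, or on-prem, because name resolution and routing are handled by the cluster, not the hosting platform.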
I’ve shared a step-by-step repro that demonstrates this working in AKS here.