Is this a joke?

Botond Beres 0 Reputation points
2025-04-03T21:45:56.8466667+00:00

Not only did you remove email and phone support from the developer subscription, you've replaced it with a publicly accessible Q&A forum where developers are expected to share problems about your platform while running potentially proprietary software. Additionally, you've imposed arbitrary limits on the information one can provide. I attempted to append information to my previous ticket (2243264) but received an error stating: 'You are not authorized to make this response.' Now I'm forced to use my personal account to communicate this issue. This situation reminds me of when I had to get my $30 developer subscription fee reimbursed because I couldn't report a problem (for which Microsoft was at fault) without first paying for said subscription. Is this the quality of support I can expect going forward?

So I guess I have to use this "workaround". @Alekhya Vaddepally, see below for more information:

As far as I can see, there is nothing set that should block communication between instances. As for what the container is listening on, in Azure Kudu I can see this environment variable: ASPNETCORE_URLS = 0.0.0.0:8181. That causes even more confusion, because when I try to reach the other instance internally, nothing replies on 8181. In fact, netstat comes back with:

    tcp6  0  0 :::8080  :::*  LISTEN  1097/dotnet

So it's listening on IPv6 and port 8080 instead of IPv4 and 8181, and is ignoring the env var? There is nothing in the project that would tell it to bind to port 8080, no launchSettings.json, no appsettings.json, nothing, so this must be Azure's doing.
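
For reference, the project also contains nothing like the following explicit Kestrel binding (a minimal sketch of what that would look like in Program.cs, assuming the standard Microsoft.NET.Sdk.Web template with implicit usings on .NET 8; port 8181 just mirrors the env var):

    // Program.cs - sketch of an explicit Kestrel binding (not present in the project);
    // this is what it would take to pin the app to port 8181 in code.
    // Assumes the standard web SDK template, so ASP.NET Core namespaces are implicit.
    var builder = WebApplication.CreateBuilder(args);

    // Listen on all interfaces on port 8181 instead of relying on
    // ASPNETCORE_URLS or the container image's default port.
    builder.WebHost.ConfigureKestrel(options => options.ListenAnyIP(8181));

    var app = builder.Build();
    app.MapGet("/ping", () => "pong"); // trivial endpoint for reachability checks
    app.Run();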

Azure App Service

1 answer

  1. Arko 2,130 Reputation points Microsoft External Staff
    2025-04-05T11:35:54.8666667+00:00

    Hello Botond Beres, I completely understand the frustration you've experienced with the restrictions on posting. You're not alone in running into these limitations.

    I have shared a fully working, platform-agnostic workaround on your original post that I've tested end-to-end, which gets around the exact issues you described — including the ASPNETCORE_URLS discrepancy, DNS resolution failures, and isolated container networking. Please check that out and let me know if that works for you.

    Meanwhile, here I will answer the three questions you asked initially:

    Q1. Why are containers being assigned IP addresses in different subnets within the link-local range?

    Ans- This happens when containers are deployed in environments that do not connect them to a shared virtual network, such as Azure App Service or some Container Apps configurations without VNet integration. In those cases, each container runs in isolation and is assigned a link-local (APIPA) IP in the 169.254.0.0/16 range, but in different subnets within that range, with no routing between them.

    This behavior is expected in these environments: there is no shared overlay network between containers, no internal DNS or service discovery, and no automatic routing between the different link-local subnets. These containers run in isolation by design.
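
    As a quick check from inside each container, a small diagnostic like this (hypothetical console code, not part of your project or any Azure SDK) lists the container's IPv4 addresses and flags the link-local ones:

        // Hypothetical diagnostic: print each interface's IPv4 addresses and flag
        // link-local (169.254.0.0/16) ones, which are not routable between containers.
        using System;
        using System.Net.NetworkInformation;
        using System.Net.Sockets;

        foreach (var nic in NetworkInterface.GetAllNetworkInterfaces())
        {
            foreach (var info in nic.GetIPProperties().UnicastAddresses)
            {
                if (info.Address.AddressFamily != AddressFamily.InterNetwork)
                    continue; // IPv4 only, for brevity

                var bytes = info.Address.GetAddressBytes();
                var linkLocal = bytes[0] == 169 && bytes[1] == 254;
                var label = linkLocal ? " (link-local)" : "";
                Console.WriteLine($"{nic.Name}: {info.Address}{label}");
            }
        }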

    Q2. Is there a configuration option to ensure all containers receive IP addresses in the same subnet?

    Ans- Not in Azure App Service or in basic Azure Container Apps without VNet integration. These platforms do not expose control over container IP allocation, nor do they guarantee subnet-level co-location or routable IPs.

    However, in Kubernetes-based environments (like AKS, EKS, or GKE), the Kubernetes CNI (Container Network Interface) plugin automatically assigns Pods IPs from the same routable overlay subnet and ensures all containers can talk to each other using internal DNS and IP routing, regardless of node placement. If you need full control over IP assignment and DNS between containers, Kubernetes is currently the only platform-agnostic way to achieve that consistently across clouds and on-prem. Check out my comment on your original post.
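
    To illustrate the internal DNS part, from inside any Pod a plain DNS lookup resolves a Service name to a routable cluster IP. A sketch ("app-b" is an assumed Service name in the same namespace):

        // Sketch: resolve an assumed Kubernetes Service name ("app-b") to its
        // cluster IP from inside a Pod, using ordinary DNS.
        using System;
        using System.Net;

        var addresses = await Dns.GetHostAddressesAsync("app-b");
        foreach (var ip in addresses)
        {
            Console.WriteLine($"app-b -> {ip}"); // typically the Service's ClusterIP
        }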

    Q3. How can we resolve this without relying on Azure-specific services or network configurations?

    Ans- Use a platform-agnostic Kubernetes setup (e.g., AKS, EKS, GKE, or on-prem k8s).

    With Kubernetes, each container runs in a Pod with its own IP address, all Pods can communicate with each other using cluster-wide DNS names (e.g., http://app-b:8080), and there is no reliance on Azure App Service DNS, WEBSITE_PRIVATE_IP, or environment-specific workarounds.
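
    For example, one service calling another inside the cluster needs nothing beyond the Service name (a minimal sketch; "app-b" and port 8080 are the assumed name and port from the example URL above):

        // Minimal sketch of container-to-container communication over cluster DNS;
        // "app-b" and port 8080 match the assumed example URL above.
        using System;
        using System.Net.Http;

        using var client = new HttpClient();
        var response = await client.GetAsync("http://app-b:8080/");
        Console.WriteLine($"app-b responded: {(int)response.StatusCode}");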

    Most importantly, the exact same code and manifests work on any Kubernetes distribution, cloud or on-prem.

    This satisfies your original goal, i.e. direct container-to-container communication that works identically across all environments.
    I’ve shared a step-by-step repro that demonstrates this working in AKS here.

