Allow networking between scaled-out instances

Botond Beres 0 Reputation points
2025-04-02T13:21:40.22+00:00
We're experiencing a networking issue with containers running in Azure. Our application consists of multiple containers that need to communicate with each other directly. However, we've discovered that these containers are being assigned IP addresses in different link-local networks:
- Container A: 169.254.254.2:8080
- Container B: 169.254.130.3:8080

These IP addresses are in different subnets within the APIPA/link-local range, making direct communication impossible without additional routing.
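The subnet mismatch can be checked with Python's `ipaddress` module. As a minimal sketch: the /24 prefix below is an assumption about how the platform carves up the link-local range, not something confirmed by Azure:

```python
import ipaddress

# IPs observed on the two containers (from the question above).
a = ipaddress.ip_address("169.254.254.2")
b = ipaddress.ip_address("169.254.130.3")

# Assume a /24 mask per container network (unconfirmed assumption).
net_a = ipaddress.ip_network("169.254.254.2/24", strict=False)
net_b = ipaddress.ip_network("169.254.130.3/24", strict=False)

print(net_a, net_b)   # 169.254.254.0/24 169.254.130.0/24
print(b in net_a)     # False: B is unreachable from A without a route
print(a.is_link_local and b.is_link_local)  # True: both are APIPA addresses
```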

Business Impact:
This is blocking deployment of our product which needs to be platform-agnostic. We cannot rely on Azure-specific networking services to solve this as our solution must work identically across multiple cloud and on-premises environments.

Technical Details:
- The containers need direct network communication without intermediaries
- We are NOT using AKS or other orchestration-specific solutions
- Our product must maintain identical networking behavior across different environments (AWS, GCP, on-premises, etc.)

Expected Behavior:
Containers should either:
1. Be assigned IP addresses in the same subnet, allowing direct communication, OR
2. Have proper routing configured automatically to allow cross-subnet communication within the 169.254.x.x range

Questions:
1. Why are containers being assigned IP addresses in different subnets within the link-local range?
2. Is there a configuration option to ensure all containers receive IP addresses in the same subnet?
3. How can we resolve this without relying on Azure-specific services or network configurations?

We need a solution that works solely at the container/network level without relying on Azure-specific infrastructure, as this application must run identically in multiple environments.

Thank you for your assistance.
Azure App Service

2 answers

  1. Botond Beres 0 Reputation points
    2025-04-07T17:57:22.6233333+00:00

    Thank you all for your responses. I appreciate your efforts, but I need to clarify a few important points:

    1. I'm specifically using Azure App Service with scale-out capability, not Azure Container Apps.
    2. I do not wish to migrate to Kubernetes or AKS. Suggesting a completely different service architecture is not a solution to my current issue. If Azure App Service officially supports scale-out functionality (which it clearly does, as per the UI), then there should be a way to enable direct communication between those scaled instances.
    3. To reiterate my original question: Why are container instances within the same App Service and VNet being assigned IP addresses in different subnets that cannot communicate with each other? This seems like a fundamental networking configuration issue.
    4. I've already confirmed:
    • We have a VNet configured
    • Environment variables like HOSTNAME don't resolve (DNS error: "Name or service not known")
    • WEBSITE_PRIVATE_IP produces "Connection refused" errors
    • The container is listening on IPv6 port 8080 (via netstat), despite ASPNETCORE_URLS being set to 0.0.0.0:8181

    It looks to me like the HOSTNAME:PORT environment variables provided in the container, which gave us something like "app-name-6fd8287f:8080" (and should in turn resolve to a 10.0.1.xxx IP), would work if it weren't for the DNS resolution error: "Name or service not known".
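    The resolution check described above can be reproduced with a short sketch; the hostname is the example value from this thread and is hypothetical:

    ```python
    import socket

    # Hypothetical instance hostname of the form reported above.
    hostname = "app-name-6fd8287f"

    try:
        infos = socket.getaddrinfo(hostname, 8080, proto=socket.IPPROTO_TCP)
        print("resolved:", sorted({info[4][0] for info in infos}))
    except socket.gaierror as exc:
        # On the instances this raises "Name or service not known" (EAI_NONAME).
        print("resolution failed:", exc)
    ```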

    What I need is specific guidance on how to properly configure the VNet or App Service networking settings to allow direct communication between instances. This is a basic networking requirement for any scaled application and should be possible without requiring architectural changes.

    Could someone from your team please address the specific networking configuration required, rather than suggesting we switch to entirely different Azure services?

    Thank you.


  2. NGANDU-BISEBA Gabriel 35 Reputation points
    2025-04-10T05:42:53.56+00:00

    @Botond Beres

    Please read my comment again. What you are looking at is vnet integration, a feature that only works for outbound communication.

    The IPs bound to vnet integration can only receive traffic from TCP connections they initiated. This means it will not help you make instances of the same app on the same App Service plan talk to each other. Think of vnet integration as a NAT gateway inside your vnet, if you will.

    To have an inbound private IP inside a vnet that any other resource can connect to privately, you must use private endpoints. However, private endpoints are shared between all instances of the same app, meaning that you cannot pin a private endpoint to a specific instance of the web app.

    If you have 2 private endpoints and 2 scaled-out instances of the same app, private endpoint 1 may transfer traffic to either of the two instances, and private endpoint 2 will do the same. This is why I say that giving your individual instances a unique private IP is impossible. For outbound, you have a NAT gateway; for inbound, you have a pool of IP addresses through private endpoints that cannot be pinned.

    Now, obviously you know your requirements better than I do. But from what I understand, what you should be looking for to solve your issue is private endpoints, not vnet integration.
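    A toy model of the behavior described above (all names are made up; this only illustrates that neither endpoint has affinity to a specific instance):

    ```python
    import random

    instances = ["instance-0", "instance-1"]  # two scaled-out instances

    def connect_via_private_endpoint(endpoint: str) -> str:
        # Either private endpoint may hand the connection to any instance;
        # there is no pinning between an endpoint and an instance.
        return random.choice(instances)

    random.seed(0)
    hits = {ep: {connect_via_private_endpoint(ep) for _ in range(100)}
            for ep in ("pe-1", "pe-2")}
    print(hits)  # both endpoints end up reaching both instances
    ```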

