Issues with Azure Kubernetes CNI Overlay Connectivity to On-Premises Devices

Max Ricketts 40 Reputation points
2024-10-30 15:28 UTC

An AKS cluster is configured with Azure CNI Overlay on a custom virtual network in the range 10.10.48.0/20. The cluster service address range is 172.16.0.0/16, and the kube-dns service IP is 172.16.0.10.
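For reference, this is roughly how a cluster with that networking would be created with the Azure CLI; the resource names and the 192.168.0.0/16 pod CIDR below are placeholders, and only the service range, DNS service IP, and node subnet come from the setup described above.

```bash
# Sketch only: my-rg, my-aks, my-vnet and aks-subnet are hypothetical names,
# and 192.168.0.0/16 is an assumed overlay pod CIDR.
az aks create \
  --resource-group my-rg \
  --name my-aks \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --service-cidr 172.16.0.0/16 \
  --dns-service-ip 172.16.0.10 \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aks-subnet"
```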

A working VPN allows communication between a VM in Azure and an on-premises VM. However, when a pod scheduled on the cluster attempts to connect to an on-premises host, the connection fails.

Switching the cluster networking to Azure CNI Node Subnet, where pods get IPs directly from the VNet subnet, resolves the issue.

According to this document, communication outside the cluster should work using the node IP through NAT, as Azure CNI Overlay translates the pod's overlay IP to the node VM's primary IP address.
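One way to check whether that NAT is actually happening for VPN-bound traffic is to watch which source address the on-premises host sees. A rough sketch, where the on-premises address 192.168.100.50 and port 8080 are placeholders and netshoot is just a convenient troubleshooting image:

```bash
# Start a throwaway pod in the cluster and try to reach the on-premises host.
kubectl run net-test --rm -it --image=nicolaka/netshoot -- bash
#   ip addr                            # pod's overlay IP (outside 10.10.48.0/20)
#   curl -v http://192.168.100.50:8080/
#   traceroute 192.168.100.50

# On the on-premises host, inspect the source of the incoming packets:
# a node IP from 10.10.50.0/23 means SNAT to the node happened, while an
# overlay pod IP would not be routable back over the VPN.
sudo tcpdump -ni any 'net 10.10.50.0/23 or port 8080'
```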

What could be causing the connectivity issue despite this expected behaviour? The remote VPN endpoint is configured to accept traffic from 10.10.48.0/20, and the AKS node subnet is 10.10.50.0/23.
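To rule out a route-advertisement problem, it may also help to compare what the gateway and the node NICs actually see; a hedged sketch, where the resource group, gateway, and NIC names are placeholders:

```bash
# Which address prefixes are defined for the on-premises side of the VPN.
az network local-gateway show \
  --resource-group my-rg --name my-local-gateway \
  --query localNetworkAddressSpace

# Effective routes on an AKS node NIC, to confirm the on-premises prefix is
# routed through the virtual network gateway (MC_* is the default AKS node
# resource group naming; the NIC name must be looked up).
az network nic show-effective-route-table \
  --resource-group MC_my-rg_my-aks_westeurope \
  --name <aks-node-nic-name> -o table
```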

For context, the cluster is used as a GitLab runner, which creates build agent pods per job, and those pods need to deploy to an on-premises server.
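As a reproduction aid, this is the kind of check a build job could run from inside one of those agent pods; the on-premises address and ports are placeholders:

```bash
# Connectivity check intended to run as the script of a GitLab CI job on the
# Kubernetes runner; 192.168.100.50 and the ports are placeholders.
curl -v --connect-timeout 10 http://192.168.100.50:8080/health

# If deployment goes over SSH instead, verify the TCP path only:
nc -vz -w 10 192.168.100.50 22
```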
