Issues with Azure Kubernetes CNI Overlay Connectivity to On-Premises Devices
An AKS cluster is configured with Azure CNI Overlay on a custom virtual network with the address space 10.10.48.0/20. The cluster's service address range is 172.16.0.0/16, and the DNS service IP is 172.16.0.10.
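For reference, the address layout can be sanity-checked with a minimal sketch using Python's `ipaddress` module (the node subnet value is the one given further down in the question; everything else is from the configuration above):

```python
import ipaddress

# Address ranges from the cluster configuration described above.
vnet         = ipaddress.ip_network("10.10.48.0/20")   # custom VNet address space
node_subnet  = ipaddress.ip_network("10.10.50.0/23")   # AKS node subnet (see below)
service_cidr = ipaddress.ip_network("172.16.0.0/16")   # cluster service address range
dns_ip       = ipaddress.ip_address("172.16.0.10")     # DNS service IP

# The node subnet sits inside the VNet, the service range does not overlap it,
# and the DNS IP falls inside the service range.
assert node_subnet.subnet_of(vnet)
assert not service_cidr.overlaps(vnet)
assert dns_ip in service_cidr
print("address layout is consistent")
```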
A working VPN allows a VM in Azure to communicate with an on-premises VM. However, when a pod scheduled on the AKS cluster attempts to connect to an on-premises host, the connection fails.
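For illustration, the kind of check that fails from inside a pod (but succeeds from the Azure VM) is a plain TCP connection along these lines; the target address and port are placeholders, not values from the actual setup:

```python
import socket
import sys

# Hypothetical on-premises target; replace with the real host and port.
ONPREM_HOST = "10.20.0.15"
ONPREM_PORT = 22

def check(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connection and report whether it succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"connection to {host}:{port} failed: {exc}", file=sys.stderr)
        return False

if __name__ == "__main__":
    ok = check(ONPREM_HOST, ONPREM_PORT)
    print("reachable" if ok else "unreachable")
    sys.exit(0 if ok else 1)
```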
Switching the cluster networking to Azure CNI Node Subnet resolves the issue.
According to the Azure CNI Overlay documentation, external communication should work via NAT using the node IP, since Azure CNI translates the pod's overlay IP to the node VM's primary IP address.
What could be causing the connectivity failure despite this documented behavior? The remote VPN is configured to accept traffic from 10.10.48.0/20, and the AKS node subnet is 10.10.50.0/23, which sits inside that range.
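To make the puzzle concrete: if egress really were SNATed to the node IP, the source address would fall inside the VPN's traffic selector; only an untranslated overlay source would fall outside it. A quick check (the overlay pod CIDR is not stated above, so the value here is an assumed example):

```python
import ipaddress

vpn_selector = ipaddress.ip_network("10.10.48.0/20")   # range the remote VPN accepts
node_subnet  = ipaddress.ip_network("10.10.50.0/23")   # AKS node subnet
# Assumption: the overlay pod CIDR is not given in the question; a typical
# private range is used here purely for illustration.
pod_cidr     = ipaddress.ip_network("192.168.0.0/16")

# SNATed to a node IP, the source would be accepted by the VPN...
print(node_subnet.subnet_of(vpn_selector))   # True  -> node IPs are inside the selector
# ...whereas an untranslated overlay source would be rejected.
print(pod_cidr.overlaps(vpn_selector))       # False -> overlay IPs are outside the selector
```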
For context, the cluster is intended to serve as a GitLab Runner, which spawns a build agent pod per CI job, and those pods need to deploy to an on-premises server.