For an AKS cluster with API Server VNet Integration (Preview) enabled and access to the API server established via a VPN Gateway, enabling the "API server authorized IP ranges" feature breaks any access coming through the VPN Gateway.
We have an AKS cluster configured with API Server VNet Integration (Preview). Cluster mode is set to "public".
We would like to provide access to the API server to admin users via a P2S VPN Gateway using the API server internal VIP and peering the VPN Gateway VNet and the cluster VNet. This works great.
Now, to reduce the attack surface, we would also like to limit access from the internet down to the Azure DevOps agents (Microsoft-hosted) where we run our pipelines. Once we start entering authorized IP ranges, access to the API server internal VIP via the VPN Gateway breaks. We added
to the authorized IP ranges, but still cannot connect.
What IP range(s) do we need to authorize in this setup? Is this combination of features designed to work together?
Thank you for any insights.
(N.B. We are aware of Azure DevOps self-hosted agents as a possible solution, but one we would like to avoid for now to keep operational complexity low.)
Hello Tobias Babin
From the above diagram, only the Azure DevOps agent IPs are needed in the authorized IP ranges. All other clients use the internal IP to access the cluster.
Hope this helps.
You cannot use authorized IP ranges when the API server is injected into the VNet, because the API server is no longer reachable from outside the network it is injected into, as per the docs:
To access the API server from outside the cluster network, utilize either VNet peering or AKS run command.
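For reference, the "AKS run command" mentioned in the docs lets you run kubectl against the cluster without network reachability to the API server. A typical invocation (resource group and cluster names below are placeholders) looks like:

```shell
# Placeholder names; requires the Azure CLI and appropriate RBAC on the cluster.
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get pods -A"
```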
Hello @vipullag-MSFT ,
Following your confirmation, we re-examined our setup and it now works. Turns out we got a setting wrong in the VNet peering between the VPN Gateway VNet and the AKS Cluster VNet. For anyone reading this in the future, make sure to have these options activated in the peerings:
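The specific options were shown in a screenshot in the original post. Assuming the usual gateway-transit setup between a gateway (hub) VNet and the cluster (spoke) VNet, the equivalent Azure CLI configuration would look roughly like this (all resource names are placeholders):

```shell
# Hub side (VPN Gateway VNet): allow peered VNets to use this VNet's gateway.
az network vnet peering update \
  --resource-group rg-hub --vnet-name vnet-gateway --name to-aks \
  --set allowGatewayTransit=true allowForwardedTraffic=true

# Spoke side (AKS cluster VNet): route through the remote VNet's gateway.
az network vnet peering update \
  --resource-group rg-aks --vnet-name vnet-aks --name to-gateway \
  --set useRemoteGateways=true allowForwardedTraffic=true
```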
I can now access the AKS API server via kubectl from my workstation, using the regular kubeconfig generated via the az aks get-credentials command, while the P2S VPN connection is active. 🎉
To make that work, I need to tweak my local DNS and map the hostname of the API server to the frontend IP of the internal LoadBalancer, e.g. by modifying the hosts file. It's a bit hacky but does the trick.
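The hosts-file tweak boils down to one line. The FQDN and IP below are hypothetical; take the real FQDN from the kubeconfig's server URL and the real IP from the internal LoadBalancer frontend in the cluster VNet:

```shell
# Hypothetical values - replace with your cluster's API server FQDN and ILB frontend IP.
API_FQDN="myaks-abc123.hcp.westeurope.azmk8s.io"
ILB_IP="10.1.0.4"
HOSTS_LINE="${ILB_IP} ${API_FQDN}"
# Append this line to /etc/hosts (Windows: C:\Windows\System32\drivers\etc\hosts):
echo "${HOSTS_LINE}"
```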
To make the API server accessible for our Microsoft-hosted Azure DevOps (ADO) agents, we configured the IP ranges published by Microsoft (see here) into the "Authorized IP ranges" for the AKS cluster.
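Since that published list changes weekly, it helps to extract the prefixes programmatically. This is a minimal sketch, assuming the structure of Microsoft's downloadable "Azure IP Ranges and Service Tags" JSON (the tag name and prefixes below are illustrative samples, not real published ranges):

```python
import json

# Minimal sample mimicking the structure of the weekly service-tags JSON download.
# Prefixes here are hypothetical placeholders.
service_tags = {
    "values": [
        {"name": "AzureDevOps",
         "properties": {"addressPrefixes": ["13.107.6.0/24", "13.107.9.0/24"]}},
        {"name": "AzureCloud.westeurope",
         "properties": {"addressPrefixes": ["40.68.0.0/14"]}},
    ]
}

def prefixes_for(tag_name, data):
    """Return the address prefixes published for one service tag."""
    for tag in data["values"]:
        if tag["name"] == tag_name:
            return tag["properties"]["addressPrefixes"]
    return []

print(prefixes_for("AzureDevOps", service_tags))
```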
We are aware this will still allow agents run by other ADO organizations to physically reach the API server, but it keeps out the rest of the internet - a compromise between security and effort saved, i.e. not having to maintain self-hosted ADO agents.
The last thing we are struggling with is that the total number of IP ranges required for the ADO agents in our geography exceeds the maximum of 200 entries allowed as authorized IP ranges. We worked around this by manually condensing the list, but that is far from perfect, especially since the list is dynamic.
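The condensing step can at least be automated losslessly: Python's `ipaddress.collapse_addresses` merges adjacent and overlapping CIDRs without admitting any address outside the original list (the ranges below are hypothetical examples):

```python
import ipaddress

# Hypothetical agent ranges; adjacent/overlapping CIDRs collapse into fewer entries.
ranges = ["20.37.158.0/23", "20.37.160.0/23", "20.37.162.0/23", "20.37.163.0/24"]
nets = [ipaddress.ip_network(r) for r in ranges]

# collapse_addresses merges sibling networks and drops subsumed ones, losslessly.
collapsed = [str(n) for n in ipaddress.collapse_addresses(nets)]
print(collapsed)  # -> ['20.37.158.0/23', '20.37.160.0/22']
```

Note that lossless collapsing may still leave more than 200 entries; getting further below the limit would require lossy supernetting, i.e. deliberately authorizing broader ranges than published.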
Is there any way to have that limit increased?