AKS nodes don't require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
First, create a new resource group.
az group create --name <resourceGroup> --location <region>
Create a new AKS cluster and attach a public IP for your nodes. Each of the nodes in the node pool receives a unique public IP. You can verify this by looking at the Virtual Machine Scale Set instances.
az aks create \
    --resource-group <resourceGroup> \
    --name <aksClusterName> \
    --location <region> \
    --enable-node-public-ip \
    --generate-ssh-keys
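For example, to inspect the scale set instances, first look up the auto-generated node resource group. A minimal sketch, assuming a bash-style shell:
# Find the node resource group (MC_...) that contains the cluster's scale sets
NODE_RG=$(az aks show --resource-group <resourceGroup> --name <aksClusterName> --query nodeResourceGroup -o tsv)
# List the Virtual Machine Scale Sets backing the node pools
az vmss list --resource-group $NODE_RG --query "[].name" -o tsv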
For existing AKS clusters, you can also add a new node pool and attach a public IP for your nodes.
az aks nodepool add --resource-group <resourceGroup> --cluster-name <aksClusterName> --name <newNodePool> --enable-node-public-ip
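To confirm the setting on the new pool, you can query it with az aks nodepool show; the enableNodePublicIp output field is an assumption about the CLI's response shape:
az aks nodepool show \
    --resource-group <resourceGroup> \
    --cluster-name <aksClusterName> \
    --name <newNodePool> \
    --query enableNodePublicIp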
Using a public IP prefix has a number of benefits, such as reserving a predictable, contiguous range of addresses. AKS supports using addresses from an existing public IP prefix for your nodes by passing the resource ID with the flag --node-public-ip-prefix-id when creating a new cluster or adding a node pool.
First, create a public IP prefix using az network public-ip prefix create:
az network public-ip prefix create --length 28 --location <region> --name <publicIPPrefixName> --resource-group <resourceGroup>
View the output and take note of the id for the prefix:
{
...
"id": "/subscriptions/<subscription-id>/resourceGroups/<resourceGroup>/providers/Microsoft.Network/publicIPPrefixes/<publicIPPrefixName>",
...
}
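Instead of copying the ID by hand, you can capture it with az network public-ip prefix show; a minimal sketch assuming a bash-style shell:
PREFIX_ID=$(az network public-ip prefix show \
    --resource-group <resourceGroup> \
    --name <publicIPPrefixName> \
    --query id -o tsv)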
Finally, when creating a new cluster or adding a new node pool, use the flag --node-public-ip-prefix-id and pass in the prefix's resource ID:
az aks create \
    --resource-group <resourceGroup> \
    --name <aksClusterName> \
    --location <region> \
    --enable-node-public-ip \
    --node-public-ip-prefix-id /subscriptions/<subscription-id>/resourceGroups/<resourceGroup>/providers/Microsoft.Network/publicIPPrefixes/<publicIPPrefixName> \
    --generate-ssh-keys
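The same flag works with az aks nodepool add for an existing cluster; a sketch combining the flags shown above:
az aks nodepool add \
    --resource-group <resourceGroup> \
    --cluster-name <aksClusterName> \
    --name <newNodePool> \
    --enable-node-public-ip \
    --node-public-ip-prefix-id /subscriptions/<subscription-id>/resourceGroups/<resourceGroup>/providers/Microsoft.Network/publicIPPrefixes/<publicIPPrefixName>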
You can locate the public IPs for your nodes in various ways. One option is the az vmss list-instance-public-ips command.
Important
The node resource group contains the nodes and their public IPs. Use the node resource group when executing commands to find the public IPs for your nodes.
az vmss list-instance-public-ips --resource-group <MC_resourceGroup_aksClusterName_region> --name <virtualMachineScaleSetName>
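Another option is to list the public IP resources in the node resource group directly; the JMESPath projection below is illustrative:
az network public-ip list \
    --resource-group <MC_resourceGroup_aksClusterName_region> \
    --query "[].{name:name, ip:ipAddress}" -o table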
You can apply public IP tags to node public IPs to take advantage of the Azure routing preference feature.
az aks create \
    --name <aksClusterName> \
    --location <region> \
    --resource-group <resourceGroup> \
    --enable-node-public-ip \
    --node-public-ip-tags RoutingPreference=Internet \
    --generate-ssh-keys
az aks nodepool add \
    --cluster-name <aksClusterName> \
    --name <nodePoolName> \
    --location <region> \
    --resource-group <resourceGroup> \
    --enable-node-public-ip \
    --node-public-ip-tags RoutingPreference=Internet
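To verify the tag took effect, you can inspect the ipTags property on the node public IPs; a sketch run against the node resource group:
az network public-ip list \
    --resource-group <MC_resourceGroup_aksClusterName_region> \
    --query "[].{name:name, ipTags:ipTags}" -o json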
AKS nodes that use node public IPs and host services on their host address need an NSG rule added to allow the traffic. Adding the desired ports in the node pool configuration creates the appropriate allow rules in the cluster network security group.
If a network security group is in place on the subnet of a cluster using a bring-your-own virtual network, an allow rule must be added to that network security group. This can be limited to the nodes in a given node pool by adding the node pool to an application security group (ASG). If allowed host ports are specified, a managed ASG is created by default in the managed resource group. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the ASG(s) in the node pool parameters.
When specifying the list of ports to allow, use a comma-separated list with entries in the format port/protocol or startPort-endPort/protocol.
Examples:
az aks create \
    --resource-group <resourceGroup> \
    --name <aksClusterName> \
    --nodepool-name <nodePoolName> \
    --nodepool-allowed-host-ports 80/tcp,443/tcp,53/udp,40000-60000/tcp,40000-50000/udp \
    --nodepool-asg-ids "<asgId>,<asgId>" \
    --generate-ssh-keys
az aks nodepool add \
    --resource-group <resourceGroup> \
    --cluster-name <aksClusterName> \
    --name <nodePoolName> \
    --allowed-host-ports 80/tcp,443/tcp,53/udp,40000-60000/tcp,40000-50000/udp \
    --asg-ids "<asgId>,<asgId>"
az aks nodepool update \
    --resource-group <resourceGroup> \
    --cluster-name <aksClusterName> \
    --name <nodePoolName> \
    --allowed-host-ports 80/tcp,443/tcp,53/udp,40000-60000/tcp,40000-50000/udp \
    --asg-ids "<asgId>,<asgId>"
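To review the allowed host ports and ASG associations configured on a pool, you can query it; the networkProfile property path is an assumption about the CLI output shape:
az aks nodepool show \
    --resource-group <resourceGroup> \
    --cluster-name <aksClusterName> \
    --name <nodePoolName> \
    --query networkProfile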
When public IPs are configured on nodes, host ports can be used to allow pods to receive traffic directly without configuring a load balancer service. This is especially useful in scenarios like gaming, where the ephemeral nature of the node IP and port isn't a problem because a matchmaker service at a well-known hostname can provide the correct host and port to use at connection time. However, because only one process on a host can listen on a given port, applications that use host ports can run into scheduling conflicts. To avoid this issue, AKS can dynamically assign an available port at scheduling time, preventing conflicts.
Warning
Pod host port traffic will be blocked by the default NSG rules in place on the cluster. Combine this feature with allowing host ports on the node pool so that traffic can flow.
Important
AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
Register the PodHostPortAutoAssignPreview feature flag by using the az feature register command, as shown in the following example:
az feature register --namespace "Microsoft.ContainerService" --name "PodHostPortAutoAssignPreview"
It takes a few minutes for the status to show Registered. Verify the registration status by using the az feature show command:
az feature show --namespace "Microsoft.ContainerService" --name "PodHostPortAutoAssignPreview"
When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:
az provider register --namespace Microsoft.ContainerService
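The registration refresh can also take a few minutes; a quick way to check is az provider show. Using AKS preview features generally requires the aks-preview Azure CLI extension as well, included below as an assumed prerequisite:
# Check the resource provider registration state
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv
# Install or update the aks-preview extension (assumed prerequisite for preview features)
az extension add --name aks-preview
az extension update --name aks-preview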
Triggering host port auto-assignment is done by deploying a workload without any host ports and applying the kubernetes.azure.com/assign-hostports-for-containerports annotation with the list of ports that need host port assignments. The value of the annotation should be specified as a comma-separated list of entries like port/protocol, where the port is an individual port number defined in the Pod spec and the protocol is tcp or udp.
Ports will be assigned from the range 40000-59999 and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine which ports were assigned. The environment variable name will be in the format <deployment name>_PORT_<port number>_<protocol>_HOSTPORT, so an example would be mydeployment_PORT_8080_TCP_HOSTPORT: 41932.
Here is an example echoserver deployment, showing the mapping of host ports for ports 8080 and 8443:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver-hostport
  labels:
    app: echoserver-hostport
spec:
  replicas: 3
  selector:
    matchLabels:
      app: echoserver-hostport
  template:
    metadata:
      annotations:
        kubernetes.azure.com/assign-hostports-for-containerports: 8080/tcp,8443/tcp
      labels:
        app: echoserver-hostport
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: echoserver-hostport
        image: k8s.gcr.io/echoserver:1.10
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        - name: https
          containerPort: 8443
          protocol: TCP
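To try it, save the manifest and apply it with kubectl (the filename here is illustrative):
kubectl apply -f echoserver-hostport.yaml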
When the deployment is applied, the hostPort entries will be in the YAML of the individual pods:
$ kubectl describe pod echoserver-hostport-75dc8d8855-4gjfc
<cut for brevity>
Containers:
  echoserver-hostport:
    Container ID:   containerd://d0b75198afe0612091f412ee7cf7473f26c80660143a96b459b3e699ebaee54c
    Image:          k8s.gcr.io/echoserver:1.10
    Image ID:       k8s.gcr.io/echoserver@sha256:cb5c1bddd1b5665e1867a7fa1b5fa843a47ee433bbb75d4293888b71def53229
    Ports:          8080/TCP, 8443/TCP
    Host Ports:     46645/TCP, 49482/TCP
    State:          Running
      Started:      Thu, 12 Jan 2023 18:02:50 +0000
    Ready:          True
    Restart Count:  0
    Environment:
      echoserver-hostport_PORT_8443_TCP_HOSTPORT:  49482
      echoserver-hostport_PORT_8080_TCP_HOSTPORT:  46645
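The running pods can also confirm their assigned ports by reading the injected environment variables; a sketch, assuming the container image ships printenv:
kubectl exec deploy/echoserver-hostport -- printenv | grep HOSTPORT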
Learn about using multiple node pools in AKS.
Learn about using standard load balancers in AKS.