# Create an Azure Kubernetes Service (AKS) cluster that uses availability zones
This article shows you how to create an AKS cluster and distribute the node components across availability zones.
## Before you begin
- You need the Azure CLI version 2.0.76 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see Install Azure CLI.
- Read the overview of availability zones in AKS to understand the benefits and limitations of using availability zones in AKS.
## Azure Resource Manager templates and availability zones
Keep the following details in mind when creating an AKS cluster with availability zones using an Azure Resource Manager template:
- If you explicitly define a null value in a template, for example, `"availabilityZones": null`, the template treats the property as if it doesn't exist. This means your cluster doesn't deploy in an availability zone.
- If you don't include the `"availabilityZones":` property in the template, your cluster doesn't deploy in an availability zone.
- You can't update settings for availability zones on an existing cluster, as the behavior is different when you update an AKS cluster with Azure Resource Manager templates. If you explicitly set a null value in your template for availability zones and update your cluster, it doesn't update your cluster for availability zones. However, if you omit the availability zones property with syntax such as `"availabilityZones": []`, the deployment attempts to disable availability zones on your existing AKS cluster and fails.
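As an illustration, the `availabilityZones` property belongs to each entry in the `agentPoolProfiles` array of the `Microsoft.ContainerService/managedClusters` resource. The following sketch shows only that fragment; the pool name, count, and VM size are placeholder values you would adapt to your template:

```json
"agentPoolProfiles": [
    {
        "name": "nodepool1",
        "count": 3,
        "vmSize": "Standard_DS2_v2",
        "type": "VirtualMachineScaleSets",
        "mode": "System",
        "availabilityZones": ["1", "2", "3"]
    }
]
```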
## Create an AKS cluster across availability zones
When you create a cluster using the `az aks create` command, the `--zones` parameter specifies the availability zones to deploy agent nodes into. This parameter doesn't control the availability zones that the managed control plane components are deployed into. Those components are automatically spread across all availability zones (if present) in the region during cluster deployment.
The following example commands show how to create a resource group and an AKS cluster with a total of three nodes: one agent node in zone 1, one in zone 2, and one in zone 3.
1. Create a resource group using the `az group create` command.

    ```azurecli
    az group create --name $RESOURCE_GROUP --location $LOCATION
    ```

2. Create an AKS cluster using the `az aks create` command with the `--zones` parameter.

    ```azurecli
    az aks create \
        --resource-group $RESOURCE_GROUP \
        --name $CLUSTER_NAME \
        --generate-ssh-keys \
        --vm-set-type VirtualMachineScaleSets \
        --load-balancer-sku standard \
        --node-count 3 \
        --zones 1 2 3
    ```

    It takes a few minutes to create the AKS cluster.
When deciding which zone a new node should belong to, an AKS node pool uses the best-effort zone balancing offered by the underlying Azure Virtual Machine Scale Set. The node pool is "balanced" when each zone has the same number of VMs, or differs by at most one VM from every other zone in the scale set.
## Verify node distribution across zones
When the cluster is ready, list which availability zone each agent node in the scale set is in.
1. Get the AKS cluster credentials using the `az aks get-credentials` command:

    ```azurecli
    az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
    ```

2. List the nodes in the cluster using the `kubectl describe` command and filter on the `topology.kubernetes.io/zone` value.

    ```bash
    kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
    ```

    The following example output shows the three nodes distributed across the specified region and availability zones, such as `eastus2-1` for the first availability zone and `eastus2-2` for the second availability zone:

    ```output
    Name:       aks-nodepool1-28993262-vmss000000
                topology.kubernetes.io/zone=eastus2-1
    Name:       aks-nodepool1-28993262-vmss000001
                topology.kubernetes.io/zone=eastus2-2
    Name:       aks-nodepool1-28993262-vmss000002
                topology.kubernetes.io/zone=eastus2-3
    ```
As you add more nodes to an agent pool, the Azure platform automatically distributes the underlying VMs across the specified availability zones.
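The same applies when you add another node pool to the cluster: the `az aks nodepool add` command accepts the same `--zones` parameter as `az aks create`. The following sketch assumes the environment variables used earlier in this article; the pool name `zonedpool` is illustrative:

```azurecli
# Add a user node pool whose nodes are spread across zones 1, 2, and 3.
az aks nodepool add \
    --resource-group $RESOURCE_GROUP \
    --cluster-name $CLUSTER_NAME \
    --name zonedpool \
    --node-count 3 \
    --zones 1 2 3
```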
With Kubernetes versions 1.17.0 and later, AKS uses the newer `topology.kubernetes.io/zone` label and the deprecated `failure-domain.beta.kubernetes.io/zone` label. You can get the same result as the `kubectl describe nodes` command in the previous example using the following command:

```bash
kubectl get nodes -o custom-columns=NAME:'{.metadata.name}',REGION:'{.metadata.labels.topology\.kubernetes\.io/region}',ZONE:'{.metadata.labels.topology\.kubernetes\.io/zone}'
```

The following example shows the resulting output:

```output
NAME                                REGION   ZONE
aks-nodepool1-34917322-vmss000000   eastus   eastus-1
aks-nodepool1-34917322-vmss000001   eastus   eastus-2
aks-nodepool1-34917322-vmss000002   eastus   eastus-3
```
## Verify pod distribution across zones
As documented in Well-Known Labels, Annotations and Taints, Kubernetes uses the `topology.kubernetes.io/zone` label to automatically distribute pods in a replication controller or service across the different zones available. In this example, you test the label and scale your cluster from three to five nodes to verify that the pods spread correctly.
1. Scale your AKS cluster from three to five nodes using the `az aks scale` command with `--node-count` set to `5`.

    ```azurecli
    az aks scale \
        --resource-group $RESOURCE_GROUP \
        --name $CLUSTER_NAME \
        --node-count 5
    ```

2. When the scale operation completes, verify the node distribution across the zones using the following `kubectl describe` command:

    ```bash
    kubectl describe nodes | grep -e "Name:" -e "topology.kubernetes.io/zone"
    ```

    The following example output shows the five nodes distributed across the specified region and availability zones, such as `eastus2-1` for the first availability zone and `eastus2-2` for the second availability zone:

    ```output
    Name:       aks-nodepool1-28993262-vmss000000
                topology.kubernetes.io/zone=eastus2-1
    Name:       aks-nodepool1-28993262-vmss000001
                topology.kubernetes.io/zone=eastus2-2
    Name:       aks-nodepool1-28993262-vmss000002
                topology.kubernetes.io/zone=eastus2-3
    Name:       aks-nodepool1-28993262-vmss000003
                topology.kubernetes.io/zone=eastus2-1
    Name:       aks-nodepool1-28993262-vmss000004
                topology.kubernetes.io/zone=eastus2-2
    ```

3. Deploy an NGINX application with three replicas using the following `kubectl create deployment` and `kubectl scale` commands:

    ```bash
    kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    kubectl scale deployment nginx --replicas=3
    ```

4. Verify the pod distribution across the zones using the following `kubectl describe` command:

    ```bash
    kubectl describe pod | grep -e "^Name:" -e "^Node:"
    ```

    The following example output shows the three pods distributed across the specified region and availability zones, such as `eastus2-1` for the first availability zone and `eastus2-2` for the second availability zone:

    ```output
    Name:         nginx-6db489d4b7-ktdwg
    Node:         aks-nodepool1-28993262-vmss000000/10.240.0.4
    Name:         nginx-6db489d4b7-v7zvj
    Node:         aks-nodepool1-28993262-vmss000002/10.240.0.6
    Name:         nginx-6db489d4b7-xz6wj
    Node:         aks-nodepool1-28993262-vmss000004/10.240.0.8
    ```

    As you can see from the previous output, the first pod runs on node 0, located in the availability zone `eastus2-1`. The second pod runs on node 2, corresponding to `eastus2-3`, and the third on node 4, in `eastus2-2`. Without any extra configuration, Kubernetes spreads the pods correctly across all three availability zones.
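This spreading is best effort. If your workload needs a firmer guarantee, Kubernetes lets you declare `topologySpreadConstraints` on the pod template yourself. The following sketch (the deployment name `nginx-spread` and `maxSkew: 1` are illustrative choices, not values from this article) asks the scheduler to keep per-zone pod counts within one of each other and to refuse to schedule rather than violate that:

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-spread
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-spread
  template:
    metadata:
      labels:
        app: nginx-spread
    spec:
      topologySpreadConstraints:
        # Keep the difference in pod count between any two zones at most 1.
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: nginx-spread
      containers:
        - name: nginx
          image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
EOF
```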
## Next steps
This article described how to create an AKS cluster using availability zones. For more considerations on highly available clusters, see Best practices for business continuity and disaster recovery in AKS.