Reduce latency with proximity placement groups
Note
When using proximity placement groups on AKS, colocation only applies to the agent nodes. Node-to-node latency, and therefore the latency between pods hosted on those nodes, is improved. The colocation does not affect the placement of the cluster's control plane.
When deploying your application in Azure, spreading Virtual Machine (VM) instances across regions or availability zones creates network latency, which may impact the overall performance of your application. A proximity placement group is a logical grouping used to make sure Azure compute resources are physically located close to each other. Some applications like gaming, engineering simulations, and high-frequency trading (HFT) require low latency and tasks that complete quickly. For high-performance computing (HPC) scenarios such as these, consider using proximity placement groups (PPG) for your cluster's node pools.
Before you begin
This article requires Azure CLI version 2.14 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
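For reference, the version check mentioned above looks like this:
# Check the installed Azure CLI version
az --version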
Limitations
- A proximity placement group can map to at most one availability zone.
- A node pool must use Virtual Machine Scale Sets to associate a proximity placement group.
- A node pool can associate a proximity placement group at node pool create time only.
Node pools and proximity placement groups
The first resource you deploy with a proximity placement group attaches to a specific data center. Additional resources deployed with the same proximity placement group are colocated in the same data center. Once all resources using the proximity placement group have been stopped (deallocated) or deleted, it's no longer attached.
- Many node pools can be associated with a single proximity placement group.
- A node pool may only be associated with a single proximity placement group.
Configure proximity placement groups with availability zones
Note
While proximity placement groups require a node pool to use at most one availability zone, the baseline Azure VM SLA of 99.9% is still in effect for VMs in a single zone.
Proximity placement groups are a node pool concept and are associated with each individual node pool. Using a PPG resource has no impact on AKS control plane availability. It can, however, affect how a cluster should be designed with zones. To ensure a cluster is spread across multiple zones, the following design is recommended.
- Provision a cluster with the first system node pool using three zones and no proximity placement group associated. This ensures the system pods land in a dedicated node pool that spreads across multiple zones.
- Add additional user node pools, each with a unique zone and its own proximity placement group. For example, nodepool1 in zone 1 with PPG1, nodepool2 in zone 2 with PPG2, and nodepool3 in zone 3 with PPG3 (see the sketch after this list). This ensures that, at the cluster level, nodes are spread across multiple zones, while each individual node pool is colocated in its designated zone with a dedicated PPG resource.
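As a rough sketch of that layout (the resource names, the centralus location, and the PPG resource IDs are illustrative, and the three zone-scoped proximity placement groups are assumed to already exist):
# Create the cluster with a three-zone system node pool and no proximity placement group
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --nodepool-name systempool \
    --zones 1 2 3

# Add a user node pool pinned to zone 1 and its own proximity placement group
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1 \
    --zones 1 \
    --ppg myPPG1ResourceID

# Repeat for nodepool2 (--zones 2, myPPG2ResourceID) and nodepool3 (--zones 3, myPPG3ResourceID)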
Create a new AKS cluster with a proximity placement group
The following example uses the az group create command to create a resource group named myResourceGroup in the centralus region. An AKS cluster named myAKSCluster is then created using the az aks create command.
Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups in conjunction with accelerated networking. By default, AKS uses accelerated networking on supported virtual machine instances, which include most Azure virtual machine sizes with two or more vCPUs.
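If you want to check which VM sizes in a region report accelerated networking support, one possible query (assuming the AcceleratedNetworkingEnabled capability name exposed by the compute SKU API) is:
# List VM sizes in centralus and whether each reports accelerated networking support
az vm list-skus \
    --location centralus \
    --resource-type virtualMachines \
    --query "[].{size:name, acceleratedNetworking:capabilities[?name=='AcceleratedNetworkingEnabled'].value | [0]}" \
    --output table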
Create a new AKS cluster with a proximity placement group associated to the first system node pool:
# Create an Azure resource group
az group create --name myResourceGroup --location centralus
Run the following command, and store the ID that is returned:
# Create proximity placement group
az ppg create -n myPPG -g myResourceGroup -l centralus -t standard
The command produces output, which includes the id value you need for upcoming CLI commands:
{
"availabilitySets": null,
"colocationStatus": null,
"id": "/subscriptions/yourSubscriptionID/resourceGroups/myResourceGroup/providers/Microsoft.Compute/proximityPlacementGroups/myPPG",
"location": "centralus",
"name": "myPPG",
"proximityPlacementGroupType": "Standard",
"resourceGroup": "myResourceGroup",
"tags": {},
"type": "Microsoft.Compute/proximityPlacementGroups",
"virtualMachineScaleSets": null,
"virtualMachines": null
}
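If you prefer not to copy the id from the output by hand, you can also capture it in a shell variable with az ppg show (the variable name PPG_ID is just an example):
# Store the proximity placement group resource ID for later commands
PPG_ID=$(az ppg show --name myPPG --resource-group myResourceGroup --query id --output tsv)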
Use the proximity placement group resource ID for the myPPGResourceID value in the following command:
# Create an AKS cluster that uses a proximity placement group for the initial system node pool only. The PPG has no effect on the cluster control plane.
az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--ppg myPPGResourceID
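Once the cluster is up, you can confirm that the node pool's Virtual Machine Scale Set landed in the group by querying the proximity placement group resource; the virtualMachineScaleSets field, which was null in the earlier output, should now list the scale set:
# Show the scale sets associated with the proximity placement group
az ppg show \
    --name myPPG \
    --resource-group myResourceGroup \
    --query virtualMachineScaleSets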
Add a proximity placement group to an existing cluster
You can add a proximity placement group to an existing cluster by creating a new node pool. You can then optionally migrate existing workloads to the new node pool and delete the original node pool (see the sketch after the following command).
Use the same proximity placement group that you created earlier to ensure that agent nodes in both node pools of your AKS cluster are physically located in the same data center.
Use the resource ID from the proximity placement group you created earlier, and add a new node pool with the az aks nodepool add command:
# Add a new node pool that uses a proximity placement group; use --node-count 1 for testing
az aks nodepool add \
--resource-group myResourceGroup \
--cluster-name myAKSCluster \
--name mynodepool \
--node-count 1 \
--ppg myPPGResourceID
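If you then want to move existing workloads onto the new node pool and remove the original one, a minimal sketch is to cordon and drain the old nodes before deleting the pool. The node pool name nodepool1 and the node name shown are assumptions; substitute the names reported by kubectl get nodes:
# Cordon and drain each node in the original node pool (repeat per node)
kubectl cordon aks-nodepool1-12345678-vmss000000
kubectl drain aks-nodepool1-12345678-vmss000000 --ignore-daemonsets --delete-emptydir-data

# Delete the original node pool after its workloads reschedule onto the new pool
az aks nodepool delete \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name nodepool1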
Clean up
To delete the cluster, use the az group delete command to delete the AKS resource group:
az group delete --name myResourceGroup --yes --no-wait
Next steps
- Learn more about proximity placement groups.