Create and configure an Azure Kubernetes Service (AKS) cluster to use virtual nodes

To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. Virtual nodes provision pods quickly, and you pay per second only for their execution time. You don't need to wait for the Kubernetes cluster autoscaler to deploy VM compute nodes to run more pods. Virtual nodes are only supported with Linux pods and nodes.
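Workloads opt in to running on a virtual node through a node selector and matching tolerations in the pod spec. The following is a minimal sketch, assuming a placeholder pod name and sample image, and using the scheduling labels and tolerations commonly documented for the Virtual Kubelet based add-on; adjust them to match your cluster.

```bash
# Minimal sketch: schedule a pod onto a virtual node.
# The pod name and image are placeholders; the nodeSelector and tolerations
# follow the pattern used by the Virtual Kubelet based virtual nodes add-on.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: aci-sample
spec:
  containers:
  - name: aci-sample
    image: mcr.microsoft.com/azuredocs/aci-helloworld
    ports:
    - containerPort: 80
  nodeSelector:
    kubernetes.io/role: agent
    beta.kubernetes.io/os: linux
    type: virtual-kubelet
  tolerations:
  - key: virtual-kubelet.io/provider
    operator: Exists
  - key: azure.com/aci
    effect: NoSchedule
EOF
```

If the pod schedules successfully, `kubectl get pod aci-sample -o wide` shows it placed on the virtual node rather than on a VM-backed node.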

The virtual nodes add-on for AKS is based on the open-source project Virtual Kubelet.

This article gives you an overview of the regional availability and networking requirements for using virtual nodes, as well as the known limitations.

Regional availability

Virtual node deployments are supported in all regions where ACI supports virtual network (VNET) SKUs. For more information, see Resource availability for Azure Container Instances in Azure regions.

For the CPU and memory SKUs available in each region, see Resource availability for Azure Container Instances in Azure regions - Linux container groups.

Network requirements

Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To support this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using advanced networking (Azure CNI). By default, AKS clusters are created with basic networking (kubenet).
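As an illustration, the sketch below uses placeholder names (myResourceGroup, myVnet, myAKSSubnet, myVirtualNodeSubnet, myAKSCluster) to create a virtual network with a dedicated subnet for virtual nodes, then create an AKS cluster that uses Azure CNI and enables the virtual nodes add-on. Address ranges and other parameters are examples only and may differ for your environment.

```bash
# Create a virtual network with a subnet for the AKS nodes (placeholder names/ranges)
az network vnet create \
    --resource-group myResourceGroup \
    --name myVnet \
    --address-prefixes 10.0.0.0/8 \
    --subnet-name myAKSSubnet \
    --subnet-prefix 10.240.0.0/16

# Create an additional subnet that the virtual nodes add-on delegates to ACI
az network vnet subnet create \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name myVirtualNodeSubnet \
    --address-prefixes 10.241.0.0/16

# Look up the AKS subnet ID to pass to the cluster
SUBNET_ID=$(az network vnet subnet show \
    --resource-group myResourceGroup \
    --vnet-name myVnet \
    --name myAKSSubnet \
    --query id -o tsv)

# Create the cluster with advanced networking (Azure CNI) and the virtual nodes add-on
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id "$SUBNET_ID" \
    --enable-addons virtual-node \
    --aci-subnet-name myVirtualNodeSubnet \
    --generate-ssh-keys
```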

Pods running in ACI need access to the AKS API server endpoint in order to configure networking.
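To verify which endpoint those pods must reach, you can look up the cluster's API server address. The sketch below assumes the same placeholder resource group and cluster names used above.

```bash
# Print the cluster's API server FQDN (placeholder names); pods scheduled to
# virtual nodes in the ACI subnet must be able to reach this endpoint.
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query fqdn -o tsv
```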

Limitations

Virtual node functionality is heavily dependent on the ACI feature set. In addition to the quotas and limits for Azure Container Instances, the following scenarios either aren't supported with virtual nodes or require additional deployment considerations:

Next steps

Configure virtual nodes for your clusters:

Virtual nodes are often one component of a scaling solution in AKS. For more information on scaling solutions, see the following articles: