Node autoprovisioning (preview)

When you deploy workloads onto AKS, you need to decide which VM size each node pool should use. As your workloads become more complex and require different amounts of CPU, memory, and capabilities to run, the overhead of designing a VM configuration for numerous resource requests becomes difficult.

Node autoprovisioning (NAP) (preview) inspects the resource requirements of pending pods and decides the optimal VM configuration to run those workloads in the most efficient and cost-effective manner.

NAP is based on the open-source Karpenter project, and the AKS provider is also open source. NAP automatically deploys, configures, and manages Karpenter on your AKS clusters.


Node autoprovisioning (NAP) for AKS is currently in PREVIEW. See the Supplemental Terms of Use for Microsoft Azure Previews for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

Before you begin

Install the aks-preview CLI extension

  1. Install the aks-preview CLI extension using the az extension add command.

    az extension add --name aks-preview
  2. Update the extension to ensure you have the latest version installed using the az extension update command.

    az extension update --name aks-preview

Register the NodeAutoProvisioningPreview feature flag

  1. Register the NodeAutoProvisioningPreview feature flag using the az feature register command.

    az feature register --namespace "Microsoft.ContainerService" --name "NodeAutoProvisioningPreview"

    It takes a few minutes for the status to show Registered.

  2. Verify the registration status using the az feature show command.

    az feature show --namespace "Microsoft.ContainerService" --name "NodeAutoProvisioningPreview"
  3. When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider using the az provider register command.

    az provider register --namespace Microsoft.ContainerService


Limitations

  • Windows and Azure Linux node pools aren't supported yet
  • Kubelet configuration through node pool configuration isn't supported
  • NAP can currently only be enabled on new clusters
  • Node pools and clusters that use NAP can't currently be stopped

Enable node autoprovisioning

To enable node autoprovisioning, create a new cluster using the az aks create command and set --node-provisioning-mode to Auto. You also need to use overlay networking and the Cilium network dataplane.

az aks create --name karpuktest --resource-group karpuk --node-provisioning-mode Auto --network-plugin azure --network-plugin-mode overlay --network-dataplane cilium

Node pools

Node autoprovisioning uses a list of VM SKUs as a starting point to decide which SKU is best suited for the workloads in a pending state. Having control over the SKUs in the initial pool lets you specify particular SKU families or VM types, and the maximum amount of resources a provisioner uses.

If you have specific VM SKUs that are reserved instances, for example, you may wish to only use those VMs as the starting pool.
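As a sketch of that approach, a NodePool requirement can pin provisioning to an explicit set of SKU names using the karpenter.azure.com/sku-name label. The pool name and SKU values below are placeholders; substitute the SKUs covered by your reservations:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: reserved-instances   # placeholder name for illustration
spec:
  template:
    spec:
      nodeClassRef:
        name: default
      requirements:
      # Only provision SKUs you hold reservations for (placeholder values)
      - key: karpenter.azure.com/sku-name
        operator: In
        values:
        - Standard_D4s_v5
        - Standard_D8s_v5
```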

You can have multiple node pool definitions in a cluster, but AKS deploys a default node pool definition that you can modify:

apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    consolidationPolicy: WhenUnderutilized
    expireAfter: Never
  template:
    spec:
      nodeClassRef:
        name: default

      # Requirements that constrain the parameters of provisioned nodes.
      # These requirements are combined with pod.spec.affinity.nodeAffinity rules.
      # Operators { In, NotIn, Exists, DoesNotExist, Gt, and Lt } are supported.
      requirements:
      - key: kubernetes.io/arch
        operator: In
        values:
        - amd64
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
      - key: karpenter.sh/capacity-type
        operator: In
        values:
        - on-demand
      - key: karpenter.azure.com/sku-family
        operator: In
        values:
        - D

Supported node provisioner requirements

SKU selectors with well known labels

| Selector | Description | Example |
|---|---|---|
| karpenter.azure.com/sku-family | VM SKU family | D, F, L etc. |
| karpenter.azure.com/sku-name | Explicit SKU name | Standard_A1_v2 |
| karpenter.azure.com/sku-version | SKU version (without "v", can use 1) | 1, 2 |
| karpenter.sh/capacity-type | VM allocation type (Spot / On Demand) | spot or on-demand |
| karpenter.azure.com/sku-cpu | Number of CPUs in VM | 16 |
| karpenter.azure.com/sku-memory | Memory in VM in MiB | 131072 |
| karpenter.azure.com/sku-gpu-name | GPU name | A100 |
| karpenter.azure.com/sku-gpu-manufacturer | GPU manufacturer | nvidia |
| karpenter.azure.com/sku-gpu-count | GPU count per VM | 2 |
| karpenter.azure.com/sku-networking-accelerated | Whether the VM has accelerated networking | [true, false] |
| karpenter.azure.com/sku-storage-premium-capable | Whether the VM supports Premium IO storage | [true, false] |
| karpenter.azure.com/sku-storage-ephemeralos-maxsize | Size limit for the Ephemeral OS disk in Gb | 92 |
| topology.kubernetes.io/zone | The Availability Zone(s) | [uksouth-1,uksouth-2,uksouth-3] |
| kubernetes.io/os | Operating System (Linux only during preview) | linux |
| kubernetes.io/arch | CPU architecture (AMD64 or ARM64) | [amd64, arm64] |

To list the VM SKU capabilities and allowed values, use the vm list-skus command from the Azure CLI.

az vm list-skus --resource-type virtualMachines --location <location> --query '[].name' --output table
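These well-known labels can also be used directly as workload scheduling constraints, which NAP takes into account when choosing a VM. As a minimal sketch, a pod that asks for spot capacity via a nodeSelector (the pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spot-example   # placeholder name
spec:
  nodeSelector:
    karpenter.sh/capacity-type: spot   # request a spot VM
  containers:
  - name: app
    image: mcr.microsoft.com/azuredocs/aks-helloworld:v1   # placeholder image
```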

Node pool limits

By default, NAP attempts to schedule your workloads within your available Azure quota. You can also cap the resources a node pool consumes by specifying limits within the node pool spec.

  # Resource limits constrain the total size of the cluster.
  # Limits prevent Karpenter from creating new instances once the limit is exceeded.
  limits:
    cpu: "1000"
    memory: 1000Gi
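For context, a sketch of where the limits block sits within a full NodePool definition (using the default pool from earlier):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  # NAP stops provisioning new nodes for this pool once the
  # total of provisioned resources reaches these limits.
  limits:
    cpu: "1000"
    memory: 1000Gi
```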

Node pool weights

When you have multiple node pools defined, it's possible to set a preference of where a workload should be scheduled. Define the relative weight on your Node pool definitions.

  # Priority given to the node pool when the scheduler considers which to select. Higher weights indicate higher priority when comparing node pools.
  # Specifying no weight is equivalent to specifying a weight of 0.
  weight: 10
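As an illustrative sketch, two node pools where one is preferred and the other acts as a fallback; the pool names and SKU family are placeholders:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: preferred    # placeholder name
spec:
  weight: 100        # higher weight: considered first
  template:
    spec:
      nodeClassRef:
        name: default
      requirements:
      - key: karpenter.azure.com/sku-family
        operator: In
        values: ["D"]
---
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: fallback     # placeholder name
spec:
  weight: 10         # used when the preferred pool can't satisfy the pod
  template:
    spec:
      nodeClassRef:
        name: default
```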

Kubernetes and node image updates

AKS with NAP manages the Kubernetes version upgrades and VM OS disk updates for you by default.

Kubernetes upgrades

Kubernetes upgrades for NAP node pools follow the control plane Kubernetes version. When you perform a cluster upgrade, your NAP nodes are automatically updated to the same version.

Node image updates

By default NAP node pool virtual machines are automatically updated when a new image is available. If you wish to pin a node pool at a certain node image version, you can set the imageVersion on the node class:

kubectl edit aksnodeclass default

Within the node class definition, set the imageVersion to one of the published releases listed in the AKS release notes. You can also check the availability of images in each region by referring to the AKS release tracker.

The imageVersion is the date portion of the node image name. Because only Ubuntu 22.04 is supported during the preview, the image "AKSUbuntu-2204-202311.07.0" has an imageVersion of "202311.07.0".

apiVersion: karpenter.azure.com/v1alpha2
kind: AKSNodeClass
metadata:
  annotations:
    kubernetes.io/description: General purpose AKSNodeClass for running Ubuntu2204 nodes
    meta.helm.sh/release-name: aks-managed-karpenter-overlay
    meta.helm.sh/release-namespace: kube-system
  creationTimestamp: "2023-11-16T23:59:06Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    helm.toolkit.fluxcd.io/name: karpenter-overlay-main-adapter-helmrelease
    helm.toolkit.fluxcd.io/namespace: 6556abcb92c4ce0001202e78
  name: default
  resourceVersion: "1792"
  uid: 929a5b07-558f-4649-b78b-eb25e9b97076
spec:
  imageFamily: Ubuntu2204
  imageVersion: 202311.07.0
  osDiskSizeGB: 128

Removing the imageVersion spec reverts the node pool to being updated to the latest node image version.

Node disruption

When the workloads on your nodes scale down, NAP uses disruption rules on the Node pool specification to decide when and how to remove those nodes and potentially reschedule your workloads to be more efficient.

You can remove a node manually using kubectl delete node, but NAP can also control when it should optimize your nodes.

  disruption:
    # Describes which types of Nodes NAP should consider for consolidation
    consolidationPolicy: WhenUnderutilized | WhenEmpty

    # 'WhenUnderutilized', NAP considers all nodes for consolidation and attempts to remove or replace nodes when it discovers that a node is underutilized and could be changed to reduce cost
    # 'WhenEmpty', NAP only considers nodes for consolidation that contain no workload pods

    # The amount of time NAP should wait after discovering a consolidation decision
    # This value can currently only be set when the consolidationPolicy is 'WhenEmpty'
    # You can choose to disable consolidation entirely by setting the string value 'Never'
    consolidateAfter: 30s
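For example, a sketch of a node pool that only consolidates empty nodes and waits five minutes before doing so (the delay value is illustrative):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  disruption:
    consolidationPolicy: WhenEmpty
    consolidateAfter: 300s   # wait 5 minutes after a node becomes empty
  template:
    spec:
      nodeClassRef:
        name: default
```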

Monitoring selection events

Node autoprovision produces cluster events that can be used to monitor deployment and scheduling decisions being made. You can view events through the Kubernetes events stream.

kubectl get events -A --field-selector source=karpenter -w