Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster

A Spot node pool is a node pool backed by an Azure Spot Virtual Machine scale set. Using Spot VMs for nodes with your AKS cluster allows you to take advantage of unutilized capacity in Azure at significant cost savings. The amount of available unutilized capacity varies based on many factors, including node size, region, and time of day.

When you deploy a Spot node pool, Azure will allocate the Spot nodes if there's capacity available. There's no SLA for the Spot nodes. A Spot scale set that backs the Spot node pool is deployed in a single fault domain and offers no high availability guarantees. At any time when Azure needs the capacity back, the Azure infrastructure will evict Spot nodes.

Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to schedule on a Spot node pool.

In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.

This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see Kubernetes core concepts for Azure Kubernetes Service (AKS).

If you don't have an Azure subscription, create a free account before you begin.

Before you begin

When you create a cluster to use a Spot node pool, that cluster must use Virtual Machine Scale Sets for node pools and the Standard SKU load balancer. You must also add another node pool after you create your cluster, which is covered in a later step.
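
If you need to create such a cluster first, the following command is a minimal sketch. The resource group and cluster names match the examples later in this article, and the --vm-set-type and --load-balancer-sku values shown are typically already the defaults in current Azure CLI versions; they're included here only to make the requirements explicit.

# myResourceGroup and myAKSCluster are placeholder names; these settings are typically the defaults in current Azure CLI versions.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --node-count 1 \
    --vm-set-type VirtualMachineScaleSets \
    --load-balancer-sku standard \
    --generate-ssh-keys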

This article requires that you're running the Azure CLI version 2.14 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

Limitations

The following limitations apply when you create and manage AKS clusters with a Spot node pool:

  • A Spot node pool can't be the cluster's default node pool. A Spot node pool can only be used for a secondary pool.
  • The control plane and node pools can't be upgraded at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time.
  • A Spot node pool must use Virtual Machine Scale Sets.
  • You can't change ScaleSetPriority or SpotMaxPrice after creation.
  • When setting SpotMaxPrice, the value must be -1 or a positive value with up to five decimal places.
  • A Spot node pool will have the label kubernetes.azure.com/scalesetpriority:spot, the taint kubernetes.azure.com/scalesetpriority=spot:NoSchedule, and system pods will have anti-affinity.
  • You must add a corresponding toleration and affinity to schedule workloads on a Spot node pool.

Add a Spot node pool to an AKS cluster

You must add a Spot node pool to an existing cluster that has multiple node pools enabled. For more details on creating an AKS cluster with multiple node pools, see Use multiple node pools.

Create a node pool using the az aks nodepool add command:

az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price -1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3 \
    --no-wait

By default, you create a node pool with a priority of Regular in your AKS cluster when you create a cluster with multiple node pools. The above command adds an auxiliary node pool to an existing AKS cluster with a priority of Spot. The priority of Spot makes the node pool a Spot node pool.

The eviction-policy parameter is set to Delete in the above example, which is the default value. When you set the eviction policy to Delete, nodes in the underlying scale set of the node pool are deleted when they're evicted. You can also set the eviction policy to Deallocate, in which case nodes in the underlying scale set are set to the stopped-deallocated state upon eviction. Nodes in the stopped-deallocated state count against your compute quota and can cause issues with cluster scaling or upgrading.

The priority and eviction-policy values can only be set during node pool creation and can't be updated later.
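
If you prefer deallocation instead, the following sketch shows how the same command might look with the Deallocate eviction policy; the pool name spotdealloc is only a placeholder. Keep in mind that deallocated nodes continue to count against your compute quota.

# spotdealloc is a placeholder node pool name.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotdealloc \
    --priority Spot \
    --eviction-policy Deallocate \
    --spot-max-price -1 \
    --no-wait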

The command also enables the cluster autoscaler, which we recommend using with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes in the node pool up and down. For Spot node pools, the cluster autoscaler scales up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the maxCount value associated with the cluster autoscaler. If you don't use a cluster autoscaler, the Spot pool will eventually shrink to zero upon eviction and require a manual operation to receive any additional Spot nodes.
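
For example, to adjust the autoscaler limits on an existing Spot node pool, you can use az aks nodepool update; the min and max counts below are placeholder values.

# The min and max counts are placeholder values; adjust them to your workload.
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --update-cluster-autoscaler \
    --min-count 1 \
    --max-count 5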

Important

Only schedule workloads on Spot node pools that can handle interruptions, such as batch processing jobs and testing environments. We recommend setting up taints and tolerations on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on it. For example, the above command by default adds a taint of kubernetes.azure.com/scalesetpriority=spot:NoSchedule, so only pods with a corresponding toleration are scheduled on those nodes.

Verify the Spot node pool

To verify your node pool has been added as a Spot node pool:

az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool

Confirm scaleSetPriority is Spot.
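
For example, you can return just that value by using the standard Azure CLI --query and --output arguments; the command should print Spot.

az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --query scaleSetPriority \
    --output tsv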

To schedule a pod to run on a Spot node, add a toleration and node affinity that correspond to the taint applied to your Spot node. The following example shows a portion of a YAML file that defines a toleration corresponding to the kubernetes.azure.com/scalesetpriority=spot:NoSchedule taint and a node affinity corresponding to the kubernetes.azure.com/scalesetpriority=spot label used in the previous step.

spec:
  containers:
  - name: spot-example
  tolerations:
  - key: "kubernetes.azure.com/scalesetpriority"
    operator: "Equal"
    value: "spot"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "kubernetes.azure.com/scalesetpriority"
            operator: In
            values:
            - "spot"
   ...

When a pod with this toleration and node affinity is deployed, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.
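
For example, assuming you save a complete pod manifest containing this toleration and node affinity as spot-example.yaml (a placeholder file name), you can apply it and check which node the pod landed on:

# spot-example.yaml is a placeholder file name for your full pod manifest.
kubectl apply -f spot-example.yaml
kubectl get pods --output wide

The NODE column in the output should show a node from the Spot node pool.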

Upgrade a Spot node pool

Upgrading Spot node pools was previously unsupported but is now available. When upgrading a Spot node pool, AKS internally issues a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, the behavior when upgrading Spot node pools is consistent with that of other node pool types.
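
For example, you can upgrade the Spot node pool with az aks nodepool upgrade; the Kubernetes version shown is a placeholder and must be a version supported by your cluster's control plane.

# 1.28.9 is a placeholder version; use a version supported by your control plane.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --kubernetes-version 1.28.9 \
    --no-wait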

For more information on upgrading, see Upgrade an AKS cluster and the Azure CLI command az aks upgrade.

Max price for a Spot pool

Pricing for Spot instances is variable, based on region and SKU. For more information, see pricing for Linux and Windows.

With variable pricing, you have the option to set a max price, in US dollars (USD), using up to five decimal places. For example, the value 0.98765 would be a max price of $0.98765 USD per hour. If you set the max price to -1, the instance won't be evicted based on price. The price for the instance will be the current price for Spot or the price for a standard instance, whichever is less, as long as there's capacity and quota available.
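
For example, the following sketch creates a Spot node pool with a max price of 0.98765 USD per hour; the pool name spotcapped is only a placeholder.

# spotcapped is a placeholder node pool name.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotcapped \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price 0.98765 \
    --no-wait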

Next steps

In this article, you learned how to add a Spot node pool to an AKS cluster. For more information about how to control pods across node pools, see Best practices for advanced scheduler features in AKS.