Use Azure ultra disks on Azure Kubernetes Service

Azure ultra disks offer high throughput, high IOPS, and consistent low latency disk storage for your stateful applications. One major benefit of ultra disks is the ability to dynamically change the performance of the SSD along with your workloads without the need to restart your agent nodes. Ultra disks are suited for data-intensive workloads.

Before you begin

  • This feature can only be set at cluster creation or node pool creation time.
  • Azure ultra disks require node pools deployed in availability zones and regions that support these disks, and they're available only on specific VM series. Review the Ultra disks GA scope and limitations.
  • Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review Ultra disks GA scope and limitations before proceeding.
  • The supported size range for ultra disks is between 100 GiB and 1,500 GiB.

Create a new cluster that can use ultra disks

Create an AKS cluster that can use Azure ultra disks by using the following CLI commands. Use the --enable-ultra-ssd flag to set the EnableUltraSSD feature.

Create an Azure resource group:

az group create --name myResourceGroup --location westus2

Create an AKS cluster with support for ultra disks:

az aks create -g myResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd

If you want to create clusters without ultra disk support, you can do so by omitting the --enable-ultra-ssd parameter.
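After the cluster is created, you can confirm that the ultra SSD capability is enabled on its node pools. This is an optional check, sketched against the enableUltraSsd property in the az aks show output:

```shell
# Check whether ultra SSD support is enabled on the cluster's node pools
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query "agentPoolProfiles[].{name:name, ultraSsd:enableUltraSsd}" \
    --output table
```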

Enable ultra disks on an existing cluster

You can enable ultra disks on an existing cluster by adding a new node pool that supports ultra disks. Configure the new node pool to use ultra disks with the --enable-ultra-ssd flag.

az aks nodepool add --name ultradisk --cluster-name myAKSCluster --resource-group myResourceGroup --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd

If you want to create new node pools without support for ultra disks, you can do so by omitting the --enable-ultra-ssd parameter.
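You can verify the setting on the new node pool afterwards; this sketch assumes the enableUltraSsd property exposed by az aks nodepool show:

```shell
# Confirm the ultra SSD flag on the newly added node pool
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name ultradisk \
    --query enableUltraSsd
```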

Use ultra disks dynamically with a storage class

To use ultra disks in your deployments or stateful sets, use a storage class for dynamic provisioning.

Create the storage class

A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see Kubernetes Storage Classes.

In this case, we'll create a storage class that references ultra disks. Create a file named azure-ultra-disk-sc.yaml, and copy in the following manifest.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ultra-disk-sc
provisioner: disk.csi.azure.com # replace with "kubernetes.io/azure-disk" if aks version is less than 1.21
volumeBindingMode: WaitForFirstConsumer # optional, but recommended if you want to wait until the pod that will use this disk is created
parameters:
  skuname: UltraSSD_LRS
  kind: managed
  cachingMode: None
  diskIopsReadWrite: "2000"  # minimum value: 2 IOPS/GiB
  diskMbpsReadWrite: "320"   # minimum value: 0.032/GiB

Create the storage class with the kubectl apply command and specify your azure-ultra-disk-sc.yaml file:

kubectl apply -f azure-ultra-disk-sc.yaml

The output from the command resembles the following example:

storageclass.storage.k8s.io/ultra-disk-sc created

Create a persistent volume claim

A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use the previously created storage class to create an ultra disk.

Create a file named azure-ultra-disk-pvc.yaml, and copy in the following manifest. The claim requests a disk named ultra-disk that is 1000 GB in size with ReadWriteOnce access. The ultra-disk-sc storage class is specified as the storage class.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ultra-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: ultra-disk-sc
  resources:
    requests:
      storage: 1000Gi

Create the persistent volume claim with the kubectl apply command and specify your azure-ultra-disk-pvc.yaml file:

kubectl apply -f azure-ultra-disk-pvc.yaml

The output from the command resembles the following example:

persistentvolumeclaim/ultra-disk created
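Because the storage class uses volumeBindingMode: WaitForFirstConsumer, the claim stays in a Pending state until a pod that references it is scheduled. You can observe this with:

```shell
# The PVC remains Pending until a consuming pod is scheduled
# (expected behavior with the WaitForFirstConsumer binding mode)
kubectl get pvc ultra-disk
```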

Use the persistent volume

Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named ultra-disk to mount the Azure disk at the path /mnt/azure.

Create a file named nginx-ultra.yaml, and copy in the following manifest.

kind: Pod
apiVersion: v1
metadata:
  name: nginx-ultra
spec:
  containers:
  - name: nginx-ultra
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
    volumeMounts:
    - mountPath: "/mnt/azure"
      name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: ultra-disk

Create the pod with the kubectl apply command, as shown in the following example:

kubectl apply -f nginx-ultra.yaml

The output from the command resembles the following example:

pod/nginx-ultra created
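As an optional check beyond the walkthrough above, you can exec into the pod to confirm the disk is mounted and writable:

```shell
# Verify that the ultra disk is mounted at /mnt/azure and accepts writes
kubectl exec nginx-ultra -- df -h /mnt/azure
kubectl exec nginx-ultra -- sh -c 'echo test > /mnt/azure/test.txt && cat /mnt/azure/test.txt'
```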

You now have a running pod with your Azure disk mounted in the /mnt/azure directory. This configuration can be seen when inspecting your pod via kubectl describe pod nginx-ultra, as shown in the following condensed example:

kubectl describe pod nginx-ultra

[...]
Volumes:
  volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  ultra-disk
    ReadOnly:   false
  default-token-smm2n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-smm2n
    Optional:    false
[...]
Events:
  Type    Reason                 Age   From                               Message
  ----    ------                 ----  ----                               -------
  Normal  Scheduled              2m    default-scheduler                  Successfully assigned nginx-ultra to aks-nodepool1-79590246-0
  Normal  SuccessfulMountVolume  2m    kubelet, aks-nodepool1-79590246-0  MountVolume.SetUp succeeded for volume "default-token-smm2n"
  Normal  SuccessfulMountVolume  1m    kubelet, aks-nodepool1-79590246-0  MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
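As noted earlier, one benefit of ultra disks is that their provisioned performance can be adjusted while they're attached, without restarting agent nodes. For a dynamically provisioned disk, locate the underlying Azure disk in the cluster's node resource group and update it with az disk update. The resource group and disk name below are placeholders; substitute the values from your own cluster:

```shell
# Adjust the provisioned performance of an attached ultra disk without
# restarting the node. The node resource group follows the
# MC_<resource-group>_<cluster>_<region> naming convention; replace the
# disk name with the PVC's underlying disk from your cluster.
az disk update \
    --resource-group MC_myResourceGroup_myAKSCluster_westus2 \
    --name <disk-name> \
    --disk-iops-read-write 4000 \
    --disk-mbps-read-write 500
```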

Using Azure tags

For more details on using Azure tags, see Use Azure tags in Azure Kubernetes Service (AKS).

Next steps