Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS)

A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.

Note

An Azure disk can only be mounted with the ReadWriteOnce access mode, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use Azure Files.

This article shows you how to:

  • Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure managed disks to attach to a pod.
  • Work with a static PV by creating one or more Azure managed disks, or use an existing one, and attach it to a pod.

For more information on Kubernetes volumes, see Storage options for applications in AKS.

Before you begin

  • An Azure storage account.

  • The Azure CLI version 2.0.59 or later installed and configured. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.

The Azure Disks CSI driver has a per-node limit on the number of volumes that can be attached, and that limit varies with the size of the node/node pool. Run the kubectl get command to determine the number of volumes that can be allocated per node:

kubectl get CSINode <nodename> -o yaml
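
For example, to see only the allocatable volume count reported by the Azure Disks CSI driver on a node (assuming the driver is registered under the name disk.csi.azure.com), you can filter the output with a JSONPath expression:

kubectl get CSINode <nodename> -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'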

Dynamically provision a volume

This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of Azure Disks storage for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Disk.

Dynamic provisioning parameters

| Name | Meaning | Available Value | Mandatory | Default value |
| --- | --- | --- | --- | --- |
| skuName | Azure Disks storage account type (alias: storageAccountType) | Standard_LRS, Premium_LRS, StandardSSD_LRS, PremiumV2_LRS, UltraSSD_LRS, Premium_ZRS, StandardSSD_ZRS | No | StandardSSD_LRS |
| fsType | File system type | ext4, ext3, ext2, xfs, btrfs for Linux, ntfs for Windows | No | ext4 for Linux, ntfs for Windows |
| cachingMode | Azure Data Disk host cache setting | None, ReadOnly, ReadWrite | No | ReadOnly |
| location | Azure region where the Azure Disks are created | eastus, westus, etc. | No | If empty, the driver uses the same location as the current AKS cluster |
| resourceGroup | Resource group where the Azure Disks are created | Existing resource group name | No | If empty, the driver uses the same resource group as the current AKS cluster |
| DiskIOPSReadWrite | UltraSSD disk IOPS capability (minimum: 2 IOPS/GiB) | 100~160000 | No | 500 |
| DiskMBpsReadWrite | UltraSSD disk throughput capability (minimum: 0.032/GiB) | 1~2000 | No | 100 |
| LogicalSectorSize | Logical sector size in bytes for ultra disk | 512, 4096 | No | 4096 |
| tags | Azure Disk tags | Tag format: key1=val1,key2=val2 | No | "" |
| diskEncryptionSetID | Resource ID of the disk encryption set to use for enabling encryption at rest | format: /subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name} | No | "" |
| diskEncryptionType | Encryption type of the disk encryption set | EncryptionAtRestWithCustomerKey (by default), EncryptionAtRestWithPlatformAndCustomerKeys | No | "" |
| writeAcceleratorEnabled | Write Accelerator on Azure Disks | true, false | No | "" |
| networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | AllowAll, DenyAll, AllowPrivate | No | AllowAll |
| diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | "" |
| enableBursting | Enable on-demand bursting beyond the provisioned performance target of the disk. On-demand bursting applies only to Premium disks larger than 512 GB. Ultra and shared disks aren't supported. Bursting is disabled by default. | true, false | No | false |
| useragent | User agent used for customer usage attribution | | No | Generated user agent formatted driverName/driverVersion compiler/version (OS-ARCH) |
| enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel. While this parameter can speed up disk attachment, you may encounter Azure API throttling when there are a large number of volume attachments. | true, false | No | false |
| subscriptionID | Azure subscription ID where the Azure Disks are created | Azure subscription ID | No | If not empty, resourceGroup must be provided. |
| Following parameters are only for v2 | | | | |
| enableAsyncAttach | The v2 driver uses a different strategy to manage Azure API throttling and ignores this parameter. | | No | |
| maxShares | The total number of shared disk mounts allowed for the disk. Setting the value to 2 or more enables attachment replicas. | Supported values depend on the disk size. See Share an Azure managed disk for supported values. | No | 1 |
| maxMountReplicaCount | The number of replica attachments to maintain. | This value must be in the range [0..(maxShares - 1)] | No | If accessMode is ReadWriteMany, the default is 0. Otherwise, the default is maxShares - 1 |
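
As a sketch of how these parameters fit together, the following custom storage class (the class name and tag value are illustrative, not built in to AKS) provisions Premium SSD disks with host caching disabled, allows volume expansion, and applies a tag to each disk:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom-managed-premium
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
  cachingMode: None
  tags: dept=engineering
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true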

Built-in storage classes

A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see Kubernetes Storage Classes.

Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks:

  • The default storage class provisions a standard SSD Azure Disk.
    • Standard storage is backed by Standard SSDs and provides cost-effective storage while still delivering reliable performance.
  • The managed-csi-premium storage class provisions a premium Azure Disk.
    • Premium disks are backed by SSD-based high-performance, low-latency disks, which are ideal for VMs running production workloads. If the AKS nodes in your cluster use premium storage, select the managed-csi-premium class.

If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size, add the line allowVolumeExpansion: true to one of the default storage classes, or create your own custom storage class. Reducing the size of a PVC isn't supported (to prevent data loss). You can edit an existing storage class using the kubectl edit sc command.
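
For example, you could make the same change without opening an editor by patching the class in place (shown here against the managed-csi class; verify the change isn't reverted by cluster reconciliation in your environment):

kubectl patch sc managed-csi -p '{"allowVolumeExpansion": true}'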

For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines cachingMode: None, because disk caching isn't supported for disks 4 TiB and larger.

For more information about storage classes and creating your own storage class, see Storage options for applications in AKS.

Use the kubectl get sc command to see the pre-created storage classes. The following example shows the pre-created storage classes available within an AKS cluster:

kubectl get sc

The output of the command resembles the following example:

NAME                PROVISIONER                AGE
default (default)   disk.csi.azure.com         1h
managed-csi         disk.csi.azure.com         1h

Note

Persistent volume claims are specified in GiB, but Azure managed disks are billed by SKU for a specific size. These SKUs range from 32 GiB for S4 or P4 disks to 32 TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. For more information, see Pricing and performance of managed disks.

Create a persistent volume claim

A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.

Create a file named azure-pvc.yaml, and copy in the following manifest. The claim requests a disk named azure-managed-disk that is 5 GiB in size with ReadWriteOnce access. The managed-csi storage class is specified as the storage class.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi
  resources:
    requests:
      storage: 5Gi

Tip

To create a disk that uses premium storage, use storageClassName: managed-csi-premium rather than managed-csi.

Create the persistent volume claim with the kubectl apply command and specify your azure-pvc.yaml file:

kubectl apply -f azure-pvc.yaml

The output of the command resembles the following example:

persistentvolumeclaim/azure-managed-disk created
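
To confirm the claim is bound, run the kubectl get command (the built-in CSI storage classes typically use the WaitForFirstConsumer binding mode, so the claim may show Pending until a pod that uses it is scheduled):

kubectl get pvc azure-managed-disk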

Use the persistent volume

Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named azure-managed-disk to mount the Azure Disk at the path /mnt/azure. For Windows Server containers, specify a mountPath using the Windows path convention, such as 'D:'.

Create a file named azure-pvc-disk.yaml, and copy in the following manifest.

kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
    volumeMounts:
    - mountPath: "/mnt/azure"
      name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: azure-managed-disk

Create the pod with the kubectl apply command, as shown in the following example:

kubectl apply -f azure-pvc-disk.yaml

The output of the command resembles the following example:

pod/mypod created

You now have a running pod with your Azure Disk mounted in the /mnt/azure directory. This configuration can be seen when inspecting your pod using the kubectl describe command, as shown in the following condensed example:

kubectl describe pod mypod

The output of the command resembles the following example:

[...]
Volumes:
  volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  azure-managed-disk
    ReadOnly:   false
  default-token-smm2n:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-smm2n
    Optional:    false
[...]
Events:
  Type    Reason                 Age   From                               Message
  ----    ------                 ----  ----                               -------
  Normal  Scheduled              2m    default-scheduler                  Successfully assigned mypod to aks-nodepool1-79590246-0
  Normal  SuccessfulMountVolume  2m    kubelet, aks-nodepool1-79590246-0  MountVolume.SetUp succeeded for volume "default-token-smm2n"
  Normal  SuccessfulMountVolume  1m    kubelet, aks-nodepool1-79590246-0  MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
[...]
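
To confirm the disk is mounted and writable from inside the container, you can run a command in the pod. For example:

kubectl exec mypod -- df -h /mnt/azure
kubectl exec mypod -- touch /mnt/azure/test.txt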

Use Azure ultra disks

To use Azure ultra disks, see Use ultra disks on Azure Kubernetes Service (AKS).

Back up a persistent volume

To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach to pods as a means of restoring the data.

First, get the volume name with the kubectl get command, such as for the PVC named azure-managed-disk:

kubectl get pvc azure-managed-disk

The output of the command resembles the following example:

NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
azure-managed-disk   Bound     pvc-faf0f176-8b8d-11e8-923b-deb28c58d242   5Gi        RWO            managed-premium   3m

This volume name forms part of the underlying Azure disk name. Query for the disk ID with az disk list and provide your PVC volume name, as shown in the following example:

az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv

/subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
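
To avoid copying the ID by hand, you can capture it in a shell variable and reuse it as the --source value in the next step (the volume name shown here is an example; substitute your own):

DISK_ID=$(az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv)
echo $DISK_ID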

Use the disk ID to create a snapshot disk with az snapshot create. The following example creates a snapshot named pvcSnapshot in the same resource group as the AKS cluster MC_myResourceGroup_myAKSCluster_eastus. You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to.

az snapshot create \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name pvcSnapshot \
    --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242

Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
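
If you want to confirm the snapshot is ready before continuing, you can check its provisioning state. For example:

az snapshot show \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name pvcSnapshot \
    --query provisioningState -o tsv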

Restore and use a snapshot

To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with az disk create. This operation preserves the original snapshot, so you can still access the original data if needed. The following example creates a disk named pvcRestored from the snapshot named pvcSnapshot:

az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot

To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the az disk show command. The following example gets the disk ID for pvcRestored created in the previous step:

az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv

Create a pod manifest named azure-restored.yaml and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at /mnt/azure:

kind: Pod
apiVersion: v1
metadata:
  name: mypodrestored
spec:
  containers:
  - name: mypodrestored
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
    volumeMounts:
    - mountPath: "/mnt/azure"
      name: volume
  volumes:
    - name: volume
      azureDisk:
        kind: Managed
        diskName: pvcRestored
        diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored

Create the pod with the kubectl apply command, as shown in the following example:

kubectl apply -f azure-restored.yaml

The output of the command resembles the following example:

pod/mypodrestored created

You can use kubectl describe pod mypodrestored to view details of the pod, such as the following condensed example that shows the volume information:

kubectl describe pod mypodrestored

The output of the command resembles the following example:

[...]
Volumes:
  volume:
    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
    DiskName:     pvcRestored
    DiskURI:      /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
    Kind:         Managed
    FSType:       ext4
    CachingMode:  ReadWrite
    ReadOnly:     false
[...]
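
To confirm the restored data is available in the pod, you can list the contents of the mounted path. For example:

kubectl exec mypodrestored -- ls /mnt/azure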

Using Azure tags

For more information on using Azure tags, see Use Azure tags in Azure Kubernetes Service (AKS).

Statically provision a volume

This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of Azure Disks storage for use by a workload.

Static provisioning parameters

| Name | Meaning | Available Value | Mandatory | Default value |
| --- | --- | --- | --- | --- |
| volumeHandle | Azure disk URI | /subscriptions/{sub-id}/resourcegroups/{group-name}/providers/microsoft.compute/disks/{disk-id} | Yes | N/A |
| volumeAttributes.fsType | File system type | ext4, ext3, ext2, xfs, btrfs for Linux, ntfs for Windows | No | ext4 for Linux, ntfs for Windows |
| volumeAttributes.partition | Partition number of the existing disk (only supported on Linux). Make sure the partition is formatted like -part1. | 1, 2, 3 | No | Empty (no partition) |
| volumeAttributes.cachingMode | Disk host cache setting | None, ReadOnly, ReadWrite | No | ReadOnly |

Create an Azure disk

When you create an Azure disk for use with AKS, you can create the disk resource in the node resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the Contributor role on the disk's resource group. In this exercise, you're going to create the disk in the same resource group as your cluster.
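
If you do create the disk in a separate resource group, a role assignment similar to the following sketch grants that access. It assumes your cluster uses a system-assigned managed identity and uses a placeholder resource group name (myDiskResourceGroup) for the disk:

# Get the principal ID of the cluster's managed identity (assumes a system-assigned identity)
PRINCIPAL_ID=$(az aks show --resource-group myResourceGroup --name myAKSCluster --query identity.principalId -o tsv)

# Grant Contributor on the resource group that contains the disk (placeholder name)
az role assignment create \
    --assignee $PRINCIPAL_ID \
    --role Contributor \
    --scope $(az group show --name myDiskResourceGroup --query id -o tsv)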

  1. Identify the resource group name using the az aks show command and add the --query nodeResourceGroup parameter. The following example gets the node resource group for the AKS cluster name myAKSCluster in the resource group name myResourceGroup:

    az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
    
    MC_myResourceGroup_myAKSCluster_eastus
    
  2. Create a disk using the az disk create command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as myAKSDisk. The following example creates a 20GiB disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the --os-type windows parameter to correctly format the disk.

    az disk create \
      --resource-group MC_myResourceGroup_myAKSCluster_eastus \
      --name myAKSDisk \
      --size-gb 20 \
      --query id --output tsv
    

    Note

    Azure Disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See Pricing and Performance of Managed Disks.

    The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.

    /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
    

Mount disk as a volume

  1. Create a pv-azuredisk.yaml file with a PersistentVolume. Update volumeHandle with disk resource ID from the previous step. For example:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-azuredisk
    spec:
      capacity:
        storage: 20Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: managed-csi
      csi:
        driver: disk.csi.azure.com
        readOnly: false
        volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
        volumeAttributes:
          fsType: ext4
    
  2. Create a pvc-azuredisk.yaml file with a PersistentVolumeClaim that uses the PersistentVolume. For example:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: pvc-azuredisk
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      volumeName: pv-azuredisk
      storageClassName: managed-csi
    
  3. Use the kubectl apply commands to create the PersistentVolume and PersistentVolumeClaim, referencing the two YAML files created earlier:

    kubectl apply -f pv-azuredisk.yaml
    kubectl apply -f pvc-azuredisk.yaml
    
  4. To verify your PersistentVolumeClaim is created and bound to the PersistentVolume, run the following command:

    kubectl get pvc pvc-azuredisk
    

    The output of the command resembles the following example:

    NAME            STATUS   VOLUME         CAPACITY    ACCESS MODES   STORAGECLASS   AGE
    pvc-azuredisk   Bound    pv-azuredisk   20Gi        RWO                           5s
    
  5. Create an azure-disk-pod.yaml file to reference your PersistentVolumeClaim. For example:

    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
        name: mypod
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        volumeMounts:
          - name: azure
            mountPath: /mnt/azure
      volumes:
        - name: azure
          persistentVolumeClaim:
            claimName: pvc-azuredisk
    
  6. Run the kubectl apply command to apply the configuration and mount the volume, referencing the YAML configuration file created in the previous steps:

    kubectl apply -f azure-disk-pod.yaml
    
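  7. To verify the disk is mounted and writable, check the pod status and write a test file (the file name is arbitrary). For example:

    kubectl get pod mypod
    kubectl exec mypod -- touch /mnt/azure/test.txt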

Next steps