MountVolume.MountDevice failed for volume

Nikos Fotiou 0 Reputation points
2023-05-14T22:36:35.54+00:00

Hello,

I am getting an error while trying to mount a volume to an AKS pod. The error message is:

MountVolume.MountDevice failed for volume "pvc-......" : rpc error: code = Internal desc = volume(myvolume#myaccount#pvc-........r#) mount "myaccount.blob.core.windows.net:/myaccount/pvc-........" on "/var/lib/kubelet/plugins/kubernetes.io/csi/blob.csi.azure.com/.........../globalmount" failed with mount failed: exit status 32...

The PV has been created successfully and the PVC is bound to it. I am using Blob NFS for storage, and I can see the PVC created in it.

From what I can understand, the problem is indeed the actual mount from the blob storage onto the worker node; however, I have not been able to identify anything more.

Could you provide me with a lead?

Thank you in advance.

Tags: Azure Blob Storage · Azure Kubernetes Service (AKS)

5 answers

  1. Konstantinos Passadis 19,381 Reputation points MVP
    2023-05-15T16:18:02.5566667+00:00

    Hello @Nikos Fotiou !

    Thank you for the update!

    Here are some additional commands to try for debugging:

    kubectl describe pod <pod-name>

    kubectl get pvc azure-blob-storage

    Regarding /etc/exports (for a self-managed NFS server):

    Add the client for the exported directory in the /etc/exports file:

    /path/to/directory xx.xx.xx.xx(rw,sync,no_subtree_check)

    If you modify the file in any way, you also need to restart the service:

    sudo systemctl restart nfs-kernel-server

    An example of an entry in /etc/exports that allows all hosts:

    /path/to/directory *(rw,sync,no_subtree_check)
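    If you are running your own NFS server, you can sanity-check the export like this (a sketch; the hostname is a placeholder, and none of this applies to Azure Blob NFS itself, which is managed by Azure):

    ```shell
    # Re-export everything listed in /etc/exports without a full service restart:
    sudo exportfs -ra

    # Show what the server is currently exporting, with options:
    sudo exportfs -v

    # From a client, query the server's export list (placeholder hostname):
    showmount -e nfs-server.example.com
    ```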

    Can you also check the Blob Storage logs in Azure?

    I hope this helps!

    Kindly mark the answer as Accepted and Upvote in case it helped!

    Regards

    1 person found this answer helpful.

  2. Konstantinos Passadis 19,381 Reputation points MVP
    2023-05-14T22:54:42.9166667+00:00

    Hello @Nikos Fotiou !

    Welcome to Microsoft QnA!

    I understand you are having trouble mounting your NFS volume, with exit status 32 as the error.

    Some suggestions (source: https://stackoverflow.com/questions/34113569/kubernetes-nfs-volume-mount-fail-with-exit-status-32):

    1. Execute the following on each master and node (RHEL/CentOS):

    sudo yum install nfs-utils -y

    2. On every K8s node (Debian/Ubuntu):

    sudo apt-get install -y nfs-common

    3. Either:

    a. allow non-root users to mount NFS (on the server), or

    b. add mount options in the PersistentVolume:

    mountOptions:
    - nfsvers=4.1
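    For context, option 3b would look like the following in a PersistentVolume manifest. This is only a sketch with placeholder names and paths; note that Azure Blob NFS speaks NFS 3.0, so nfsvers=4.1 is mainly relevant for generic NFS servers:

    ```yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-nfs-pv                    # placeholder name
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      mountOptions:
        - nfsvers=4.1                    # the option suggested above
      nfs:
        server: nfs-server.example.com   # placeholder NFS server
        path: /exported/path             # placeholder export path
    ```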
    

    Also (source: https://learn.microsoft.com/en-us/troubleshoot/azure/azure-kubernetes/mounting-azure-blob-storage-container-fail#nfs-error2):

    NFS 3.0 error 2: Exit status 32, access denied by server while mounting

    Cause: AKS's VNET and subnet aren't allowed for the storage account

    If the storage account's network is limited to selected networks, but the VNET and subnet of the AKS cluster aren't added to the selected networks, the mounting operation will fail.

    Solution: Allow AKS's VNET and subnet for the storage account

    1. Identify the node that hosts the faulty pod by running the following command:

    kubectl get pod <pod-name> -n <namespace> -o wide

    Check the node in the command output.

    2. Go to the AKS cluster in the Azure portal, select Properties > Infrastructure resource group, open the VMSS associated with the node, and then check the Virtual network/subnet to identify the VNET and subnet.

    3. Access the storage account in the Azure portal, and then select Networking. If Public network access is set to "Enabled from selected virtual networks" or "Disabled", and connectivity isn't through a private endpoint, check whether the VNET and subnet of the AKS cluster are allowed under Firewalls and virtual networks.

    4. If the VNET and subnet of the AKS cluster aren't added, select Add existing virtual network. On the Add networks page, enter the VNET and subnet of the AKS cluster, and then select Add > Save.

    It may take a few moments for the changes to take effect. After the VNET and subnet are added, check whether the pod status changes from ContainerCreating to Running.
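    The portal steps above can also be done with the Azure CLI. A sketch, where all resource names are placeholders (the subnet also needs the Microsoft.Storage service endpoint enabled):

    ```shell
    # Placeholders throughout -- substitute your own resource group, cluster,
    # storage account, VNET, and subnet names.

    # Find the subnet used by the AKS node pools:
    az aks show -g myResourceGroup -n myAKSCluster \
      --query "agentPoolProfiles[].vnetSubnetId" -o tsv

    # Make sure the subnet has the Microsoft.Storage service endpoint:
    az network vnet subnet update -g myResourceGroup \
      --vnet-name myVnet --name mySubnet \
      --service-endpoints Microsoft.Storage

    # Allow that subnet on the storage account firewall:
    az storage account network-rule add -g myResourceGroup \
      --account-name mystorageaccount \
      --vnet-name myVnet --subnet mySubnet
    ```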


    If these did not help, could you share some logs or your general configuration so we can dig deeper?

    I hope this helps!

    Kindly mark the answer as Accepted and Upvote in case it helped!

    Regards


  3. Nikos Fotiou 0 Reputation points
    2023-05-15T10:19:22.49+00:00

    Here are some more detailed kubelet logs:

    Output: mount.nfs: mounting myaccount.blob.core.windows.net:/myaccount/pvc-xxxxxxx, reason given by server: No such file or directory

    I am troubled by this because I can see such a container name in the storage account, but it is in a pending state.

    Where should I check for more logs in that case?
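    A place to look next (a sketch; the label and container names here follow the upstream blob-csi-driver troubleshooting docs and may differ in your deployment) is the Blob CSI node-plugin pods in kube-system, plus the pod's own events:

    ```shell
    # Blob CSI driver node-plugin logs across the cluster:
    kubectl logs -n kube-system -l app=csi-blob-node -c blob --tail=100

    # Kubelet-side events for the failing pod:
    kubectl describe pod <pod-name> -n <namespace>
    ```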


  4. frank Agyemang 0 Reputation points
    2023-10-06T13:04:17.0933333+00:00

    Hi @Konstantinos Passadis and team, I am also having a similar issue after upgrading our AKS cluster from v1.19.11 to v1.25. After the upgrade, our Kubernetes pods would not initialise; they show the following error: "Warning FailedMount 35s (x8 over 103s) kubelet MountVolume.SetUp failed for volume "init-var-volume" : rpc error: code = Unknown desc = mount failed: exit status 32"

    Worth noting, we seemed to run out of IP addresses during the upgrade, so we had to shut down one of the node pools on the cluster to free up some IP addresses while we updated the other two node pools.

    In the end, some of the original IPs appear to have been taken over by other resources, but we managed to configure and resolve that.

    I am not sure if that has an effect on the pods' mount issue as well.

    Any ideas will be appreciated.


  5. mayank sharma 20 Reputation points
    2024-01-21T17:40:46.3733333+00:00

    We faced the same issue recently. The error also included "access denied by server while mounting xxxx.file.core.windows.net". For us, the reason was that the node pool subnet was not in the allowed network list of the storage account that is automatically created by AKS. We could trace back the cause as follows:

    1. We had created an AKS cluster with a custom VNET and a subnet that had a small IP range.
    2. Due to a scalability issue, we needed to add more capacity, so we added a new node pool with the same VNET but a different subnet, and removed the old node pool.
    3. When creating a new PVC, AKS does not automatically add the subnet of the new node pool to the allowed network range of the storage account; it still registers only the subnet of the old node pool (which is deleted). We were able to resolve the issue using the steps below:
      1. Delete the storage account (automatically created by AKS).
      2. Run 'az aks update -n <cluster_name> -g <resource_group>' without any other parameter.
      3. Reapply the PVC configuration.
      P.S. Your AKS identity will not have read access over the new subnet scope, which needs to be granted.
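    The recovery steps above, sketched in CLI form (all names are placeholders; the exact role your AKS identity needs on the new subnet depends on your setup, and Network Contributor here is an assumption):

    ```shell
    # 1. Delete the AKS-managed storage account first (portal or CLI), then
    #    let AKS reconcile its managed resources:
    az aks update -n <cluster_name> -g <resource_group>

    # 2. Reapply the PVC manifest:
    kubectl apply -f pvc.yaml

    # 3. Grant the AKS identity access on the new subnet (role is an assumption):
    az role assignment create \
      --assignee <aks-identity-object-id> \
      --role "Network Contributor" \
      --scope <subnet-resource-id>
    ```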
