How to disable recursive group change (fsGroupPolicy) in Azure File CSI Driver on AKS

Pablo Zambrano 0 Reputation points
2025-05-22T05:58:01.5+00:00

We are experiencing excessive delays (e.g., 30 minutes) during volume mounting due to the managed Azure File CSI driver performing recursive chgrp operations when fsGroup is set in pod securityContext.

This appears to be due to fsGroupPolicy being set to ReadWriteOnceWithFSType by default in the managed driver (file.csi.azure.com).
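
For reference, the current setting can be confirmed directly on the cluster with a quick kubectl check:

kubectl get csidriver file.csi.azure.com -o jsonpath='{.spec.fsGroupPolicy}'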

We would like to:

Understand whether it's possible to configure fsGroupPolicy: None in the managed CSI driver.

If not, request official guidance or support on safely replacing the managed CSI driver with the open-source version that allows setting fsGroupPolicy.

Platform: AKS

Region: Australia East

Kubernetes Version: 1.31.7

Driver Version: v1.30.6


1 answer

  1. Akshay kumar Mandha 3,390 Reputation points Microsoft External Staff Moderator
    2025-05-22T09:02:40.1+00:00

    Hi Pablo Zambrano,
    Based on your query, the documentation does not explicitly mention the ability to change this policy. I attempted to modify it on my side using the command below, but after changing it to None it was automatically reverted to the default, ReadWriteOnceWithFSType, after a short period:

    kubectl edit csidriver file.csi.azure.com

    As an alternative, if file permissions are already correct, removing fsGroup from the pod's securityContext prevents Kubernetes from triggering recursive ownership changes (chgrp), effectively reducing mount delays.
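
    As a rough illustration (the file, pod, image, and PVC names below are placeholders), a pod that mounts the Azure Files PVC without fsGroup in its securityContext avoids the recursive chgrp pass on mount. Save it as a manifest and apply it with kubectl apply -f:

    # azurefile-test-pod.yaml -- hypothetical example, adjust names to your environment.
    apiVersion: v1
    kind: Pod
    metadata:
      name: azurefile-test
    spec:
      # Note: no securityContext.fsGroup here, so Kubernetes skips the
      # recursive ownership change on the mounted share.
      containers:
        - name: app
          image: busybox:1.36              # placeholder image
          command: ["sleep", "3600"]
          volumeMounts:
            - name: data
              mountPath: /mnt/azurefile
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-azurefile-pvc    # placeholder PVC name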

    I have added steps below for installing the open-source CSI driver.
    To install the open-source CSI driver instead of the Azure-managed version, you must first disable the managed driver. Before doing so, you need to properly detach the PVC and PV; otherwise they may misbehave or become stuck, remaining bound to the existing configuration where they cannot be overridden.

    If your pod is using the storage, delete the pod or remove the mount configuration from its YAML before detaching the PVC and PV. If they still cannot be detached, delete the PVC and PV. A rough sequence is sketched below; once the PVC and PV are no longer attached to any pod, you can safely disable the Azure-managed CSI driver using the az command that follows.
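
    A minimal sketch of the detach steps, assuming the volume is mounted by a Deployment (all names are placeholders for your own resources):

    # Placeholder names throughout -- substitute your own deployment, PVC, and PV.
    # 1. Stop the pods that mount the Azure Files volume.
    kubectl scale deployment my-app --replicas=0

    # 2. Back up the PVC/PV definitions if you plan to recreate them later.
    kubectl get pvc my-azurefile-pvc -o yaml > my-azurefile-pvc.yaml
    kubectl get pv my-azurefile-pv -o yaml > my-azurefile-pv.yaml

    # 3. Delete them once nothing references the volume.
    kubectl delete pvc my-azurefile-pvc
    kubectl delete pv my-azurefile-pv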

    az aks update \
      --name <cluster-name> \
      --resource-group <resource-group> \
      --disable-file-driver
    

    To verify the driver has been disabled, use:

    kubectl get csidriver
    

    After that, you can install the open-source Azure File CSI driver using Helm:
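
    The chart is published from the upstream kubernetes-sigs project. If its repository has not been added to Helm yet, register it first (the repository URL is my assumption based on the project's install docs; double-check it against the GitHub link at the end of this answer):

    helm repo add azurefile-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/charts
    helm repo update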
    
    helm install azurefile-csi azurefile-csi-driver/azurefile-csi-driver \
    --namespace kube-system \
    --set linux.enabled=true \
    --set windows.enabled=false \
    --set controller.replicas=1
    

    Then check whether the driver was installed successfully with: kubectl get csidriver

    To verify whether the driver is managed by Azure or installed manually, run the command below and inspect the output (labels and annotations):

    kubectl describe csidriver file.csi.azure.com
    

    Then edit the CSI driver to change fsGroupPolicy to None, as sketched below. Once applied, it should work as expected; on the open-source driver the change persisted for me.
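
    A minimal sketch of the edit (the spec fragment in the comments is illustrative; only fsGroupPolicy needs to change):

    # Open the CSIDriver object created by the Helm chart and set the policy:
    kubectl edit csidriver file.csi.azure.com
    #   spec:
    #     fsGroupPolicy: None        # previously ReadWriteOnceWithFSType
    # Confirm the value persisted:
    kubectl get csidriver file.csi.azure.com -o jsonpath='{.spec.fsGroupPolicy}'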


    After modifying the CSI driver, recreate or update the PVC and PV YAML files in the same way as you initially did for the Azure-managed version. Redeploy the workload and, once the pod is running, verify that you can see the mounted storage files within the pod. If you can access the files from the pod, the setup is working correctly; after following the above steps in my lab, I was able to see the mounted storage files from the pod.
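
    To spot-check from inside a running pod (pod name and mount path are placeholders from the earlier sketch):

    kubectl exec -it azurefile-test -- ls -l /mnt/azurefile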

    Then check whether operations still take excessive time after setting fsGroupPolicy to None. If delays persist, consider removing the security context from your pod configuration to avoid unnecessary overhead. If performance issues continue, reverting to the Azure-managed CSI driver may be necessary.
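
    If you do roll back, a rough sequence is to remove the self-managed chart and re-enable the managed driver (the release name matches the Helm install command above):

    helm uninstall azurefile-csi --namespace kube-system
    az aks update --name <cluster-name> --resource-group <resource-group> --enable-file-driver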

    Disclaimer
    If you face any issues, note that deploying this driver manually is not an officially supported Microsoft configuration. For a fully managed and supported Kubernetes experience, use AKS with the Azure-managed File CSI driver.

    Please refer to the documentation below for more information:
    https://github.com/kubernetes-sigs/azurefile-csi-driver

    Important note: Before performing any activity using these steps, make sure to take a backup of the configuration and data. It is recommended to use a test environment to avoid potential issues.

    Please let me know if you have any further queries, and I will be happy to help as needed! If this was helpful, please give it an upvote and let us know.

