Hi Pablo Zambrano,
Based on your query, the documentation does not explicitly mention the ability to change this policy. I attempted to modify the policy on my side using the command below, but after a certain period it reverted to the default setting, ReadWriteOnceWithFSType. When I changed it to None, it was automatically reset to the default:
kubectl edit csidriver file.csi.azure.com
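As a quick, non-interactive way to observe the current value (and the reversion), the policy can be read with jsonpath; the field path is standard for CSIDriver objects:

```shell
# Read the current fsGroupPolicy of the Azure File CSI driver
kubectl get csidriver file.csi.azure.com -o jsonpath='{.spec.fsGroupPolicy}'
# On a managed AKS cluster, this is expected to show ReadWriteOnceWithFSType
# again shortly after a manual edit, since the addon is reconciled.
```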
As an alternative, if file permissions are already correct, removing fsGroup from the pod's securityContext prevents Kubernetes from triggering recursive ownership changes (chgrp) on the volume, effectively reducing mount delays.
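For illustration, this is the kind of securityContext change meant here; the pod name, image, PVC name, and mount path are all placeholders, not from your setup:

```shell
# Before: a pod-level "fsGroup: 2000" triggers a recursive ownership change
# on every mount. After: omit fsGroup entirely if share permissions are
# already correct, so no recursive chgrp runs at mount time.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                      # placeholder name
spec:
  securityContext: {}                 # no fsGroup -> no recursive chgrp on mount
  containers:
    - name: app
      image: nginx                    # placeholder image
      volumeMounts:
        - name: data
          mountPath: /mnt/azurefile   # placeholder mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-azurefile-pvc   # placeholder PVC name
EOF
```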
Below are the steps to install the open-source CSI driver:
To install the open-source CSI driver instead of the Azure-managed version, you must first disable the Azure CSI driver. Before doing so, you need to properly detach the PVC and PV; otherwise, they may misbehave or become stuck. If the PVC and PV are not detached correctly, they will remain bound to the existing configuration and cannot be overridden.
If your pod is using the storage, delete the pod or remove the mount configuration from its YAML file before detaching the PVC and PV. Once the PVC and PV are no longer attached to any pod (if they cannot be detached, delete the PV and PVC), you can safely disable the Azure-managed CSI driver using the following command:
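The detach sequence described above can be sketched as follows; all resource names are placeholders:

```shell
# 1. Stop consumers of the volume first
kubectl delete pod <pod-name>
# 2. Then remove the claim and the volume so nothing stays bound
kubectl delete pvc <pvc-name>
kubectl delete pv <pv-name>
# 3. Only after this, disable the managed driver
```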
az aks update \
  --name <cluster-name> \
  --resource-group <resource-group> \
--disable-file-driver
To verify the driver has been disabled, use:
kubectl get csidriver
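A small check along these lines can confirm the driver entry is gone (the grep usage is an illustration, not part of the original steps):

```shell
# Expect no matching line for file.csi.azure.com once the managed driver is disabled
kubectl get csidriver | grep file.csi.azure.com || echo "Azure File CSI driver is disabled"
```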
After that, you can install the open-source Azure File CSI driver using Helm:
helm install azurefile-csi azurefile-csi-driver/azurefile-csi-driver \
--namespace kube-system \
--set linux.enabled=true \
--set windows.enabled=false \
--set controller.replicas=1
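Note that the helm install above assumes the chart repository has already been added. The repo name matches the chart reference in the command; the URL below is the one published in the project's GitHub repository, so please verify it against the linked documentation:

```shell
# Add the open-source azurefile-csi-driver chart repo, then refresh the index
helm repo add azurefile-csi-driver https://raw.githubusercontent.com/kubernetes-sigs/azurefile-csi-driver/master/charts
helm repo update
```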
Then check whether the driver was installed successfully with:
kubectl get csidriver
To verify whether the driver is managed by Azure or installed manually, inspect its details with:
kubectl describe csidriver file.csi.azure.com
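A Helm-installed driver will also appear as a release, which is another quick way to distinguish it from the Azure-managed addon (the release name here is assumed from the install command above):

```shell
# The manually installed driver shows up as a Helm release; the managed addon does not
helm list -n kube-system | grep azurefile-csi
```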
Then edit the CSI driver (kubectl edit csidriver file.csi.azure.com) to change fsGroupPolicy to None. Once applied, as shown in the screenshot, it should work as expected; this time the change persists because it is applied to the open-source CSI driver.
After modifying the CSI driver, recreate or update the PVC and PV YAML files in the same way as you initially did for the Azure-managed version. Deploy them and, once the pod is running, verify that you can see the mounted storage files within the pod. If you can access the files from the pod, the setup is working correctly. After following the above steps in my lab, I was able to see the mounted storage files from the pod, as shown in the screenshot below.
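To reproduce that verification from inside the pod, something like the following can be used; the pod name and mount path are placeholders:

```shell
# List files on the mounted Azure File share from within the running pod
kubectl exec -it <pod-name> -- ls -l /mnt/azurefile
```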
On your side, check whether operations are still taking excessive time after setting fsGroupPolicy to None. If delays persist, consider removing the security context from your pod configuration to avoid unnecessary overhead. If performance issues continue, reverting to the Azure-managed CSI driver may be necessary.
Disclaimer
Deploying this driver manually is not an officially supported Microsoft approach, so if you face any issues, support may be limited. For a fully managed and supported Kubernetes experience, use AKS with the Azure-managed File CSI driver. Screenshot for your reference:
Please refer to the documentation below for more information:
https://github.com/kubernetes-sigs/azurefile-csi-driver
Important note: Before performing any activity using these steps, make sure to take a backup of the configuration and data. It is recommended to use a test environment to avoid potential issues.
Please let me know if you have any further queries, and I will be happy to help as needed! If this was helpful, please give it an upvote and let us know.