Hi folks.
I have an AKS cluster with the Azure Disk CSI driver enabled and the cluster autoscaler enabled.
I deployed a StatefulSet and it triggered a node scale-up. A pod of this StatefulSet was then scheduled on the new node. The PVC and PV were created and both are in Bound status, but the volume is never attached to the pod.
Here is the relevant event output from describing the pod:
Warning FailedMount 7m15s (x5 over 27m) kubelet Unable to attach or mount volumes: unmounted volumes=[state], unattached volumes=[kube-api-access-tl55s state]: timed out waiting for the condition
Warning FailedMount 31s (x11 over 34m) kubelet Unable to attach or mount volumes: unmounted volumes=[state], unattached volumes=[state kube-api-access-tl55s]: timed out waiting for the condition
Warning FailedAttachVolume 13s (x13 over 34m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-27274f7e-9690-45e0-8ada-6e031b51d07f" : timed out waiting for external-attacher of disk.csi.azure.com CSI driver to attach volume /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mc_xxx_yyy-aks_eastus/providers/Microsoft.Compute/disks/pvc-27274f7e-9690-45e0-8ada-6e031b51d07f
I checked the azuredisk container logs of the csi-azuredisk-node pods, but there are no error or warning entries.
All nodes are Ready, and in the cluster's node resource group I can see the disk was created correctly. All instances in the VM scale set are healthy as well, but no data disk is attached to any of the instances.
I tried to find related logs in the AKS log viewer, but I am not sure how to check the external-attacher's logs.
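Based on the upstream azuredisk-csi-driver manifests, I assume the external-attacher runs as the csi-attacher sidecar of the csi-azuredisk-controller pods in kube-system, so I tried pulling its logs like this (the label and container names are my assumption from those manifests, not something I confirmed for the AKS-managed deployment):

```shell
# Assumption: on AKS the controller pods carry the upstream label
# app=csi-azuredisk-controller and run the external-attacher as the
# "csi-attacher" sidecar container.
kubectl get pods -n kube-system -l app=csi-azuredisk-controller

# Tail the attacher sidecar's logs from all controller replicas
kubectl logs -n kube-system -l app=csi-azuredisk-controller \
  -c csi-attacher --tail=200

# Check whether a VolumeAttachment object was created for the PV
kubectl get volumeattachment
```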
I tried recreating the PV several times, but got the same result.
I used the default managed-csi storage class, and the Kubernetes version is 1.26.3.
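For reference, the StatefulSet's volume configuration is essentially this (simplified; the names other than the `state` volume and `managed-csi` storage class are placeholders, not my exact manifest):

```yaml
# Simplified sketch of the StatefulSet's volume setup
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app          # placeholder name
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-image        # placeholder image
          volumeMounts:
            - name: state        # matches the unmounted volume in the events
              mountPath: /data   # placeholder path
  volumeClaimTemplates:
    - metadata:
        name: state
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-csi   # default AKS CSI storage class
        resources:
          requests:
            storage: 8Gi               # placeholder size
```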
Thanks.