My AKS Cluster is having issues. The node is not ready.

pamarthir 1 Reputation point
2022-11-01T10:58:41.543+00:00

My AKS Cluster (nihiamazapigwdev1) is having issues.

The node reports NotReady. I am not able to create a node because the cluster needs to be upgraded.
I am not able to upgrade the cluster because it says the route table is not associated. Even after associating the route table, it gives the same error. We have other AKS clusters that don't have a route table associated, and they work perfectly fine.
The pods are in a Pending state.

Error details:
Kubernetes may be unavailable during cluster upgrades.
Are you sure you want to perform this operation? (y/N): y
Since control-plane-only argument is specified, this will upgrade only the control plane to 1.22.11. Node pool will not change. Continue? (y/N): y
(ExistingRouteTableNotAssociatedWithSubnet) An existing route table has not been associated with subnet /subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxfc14/resourceGroups/CIT-IAM-Network/providers/Microsoft.Network/virtualNetworks/xxx-xxx-xxx-1/subnets/xxx-xxx-xxx-LOGIN-DEV. Please update the route table association
Code: ExistingRouteTableNotAssociatedWithSubnet
Message: An existing route table has not been associated with subnet /subscriptions/5e0d990c-d4ef-48b7-a171-110fdd29fc14/resourceGroups/xxx-xxx-Network/providers/Microsoft.Network/virtualNetworks/CIT-IAM-NET-1/subnets/xxx-xxx-xxx-xxx-DEV. Please update the route table association

After associating the route table, I am still getting this error.
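
For reference, the association can be made and verified with the Azure CLI. This is only a minimal sketch with placeholder names; if this is a kubenet cluster, the route table AKS expects is often the one it created in the node resource group (MC_*):

# Associate the route table with the node subnet (placeholder names).
az network vnet subnet update \
  --resource-group <vnet-resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --route-table <route-table-name-or-id>

# Verify the association afterwards.
az network vnet subnet show \
  --resource-group <vnet-resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --query routeTable.id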

The pods in the kube-system namespace are not running.

kubectl get pods -n kube-system
NAME                                 READY   STATUS        RESTARTS   AGE
azure-ip-masq-agent-8lzv8            1/1     Terminating   0          112d
coredns-57dd64d696-c46fs             0/1     Pending       0          28d
coredns-57dd64d696-dtbdk             0/1     Pending       0          28d
coredns-autoscaler-5f85dc856b-fxbqz  0/1     Terminating   0          49d
coredns-autoscaler-5f85dc856b-n4ggk  0/1     Pending       0          19d
coredns-autoscaler-5f85dc856b-vrd4f  1/1     Terminating   1          244d
coredns-autoscaler-75fccbc7bc-gq699  0/1     Pending       0          28d
coredns-dc97c5f55-d4dw9              0/1     Terminating   0          49d
coredns-dc97c5f55-fdrgf              1/1     Terminating   0          157d
coredns-dc97c5f55-lcf42              1/1     Terminating   0          157d
coredns-dc97c5f55-rch8b              0/1     Terminating   0          49d
coredns-dc97c5f55-zq8vq              0/1     Pending       0          19d
csi-azuredisk-node-lqvxx             3/3     Terminating   0          63d
csi-azurefile-node-7vwmf             3/3     Terminating   0          87d
kube-proxy-24mc4                     1/1     Terminating   0          63d
metrics-server-79f9556b5b-v7sg2      0/1     Terminating   0          49d
metrics-server-79f9556b5b-wfw6h      1/1     Terminating   21         172d
metrics-server-bc5788dcb-vmm4v       0/1     Pending       0          28d
tunnelfront-78748f97db-99j8t         0/1     Pending       0          19d
tunnelfront-78748f97db-jnmx8         0/1     Terminating   0          49d
tunnelfront-78748f97db-vtn5z         1/1     Terminating   3          143d
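
For completeness, the NotReady condition on the node can be inspected with kubectl; a minimal sketch, where the node name is a placeholder:

# List nodes with their status and version.
kubectl get nodes -o wide

# Show the conditions and recent events for the affected node (placeholder name).
kubectl describe node <node-name>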

Please advise the steps to fix the error.

Thanks


1 answer

  1. Daniel Candela 21 Reputation points
    2023-05-26T21:38:23.9033333+00:00

    Hello there,

    If you are behind a firewall or proxy, the cluster needs to be able to reach the required Microsoft endpoints. Here is the documentation for outbound connections: https://learn.microsoft.com/en-us/azure/aks/outbound-rules-control-egress
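
    If egress goes through Azure Firewall, here is a minimal sketch of allowing the AKS-required FQDNs with the AzureKubernetesService FQDN tag (the firewall, resource group, and rule-collection names are placeholders, and the command requires the azure-firewall CLI extension):

    # Placeholder names; adjust to your firewall and resource group.
    az network firewall application-rule create \
      --resource-group <firewall-rg> \
      --firewall-name <firewall-name> \
      --collection-name aksfwar \
      --name allow-aks-fqdns \
      --source-addresses '*' \
      --protocols 'http=80' 'https=443' \
      --fqdn-tags AzureKubernetesService \
      --action allow \
      --priority 100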

    Also, when you use kubenet, there are a few considerations:

    • kubenet:
      • Conserves IP address space.
      • Uses Kubernetes internal or external load balancers to reach pods from outside of the cluster.
      • You manually manage and maintain user-defined routes (UDRs).
      • Maximum of 400 nodes per cluster.

    If you are using a firewall, then in addition to the FQDN requirements you must add a 0.0.0.0/0 route that points to the network virtual appliance (NVA), as sketched below.
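
    A minimal sketch of adding that default route to the route table with the Azure CLI (the resource group, route table name, and NVA IP are placeholders):

    # Point the default route at the private IP of your firewall/NVA (placeholder values).
    az network route-table route create \
      --resource-group <rg-name> \
      --route-table-name <route-table-name> \
      --name default-to-nva \
      --address-prefix 0.0.0.0/0 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address <nva-private-ip>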

    You can connect to the node (https://learn.microsoft.com/en-us/azure/aks/node-access) and check the connectivity to those endpoints with curl or telnet.
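
    For example, a minimal sketch using kubectl debug for node access (the node name is a placeholder, and the debug image may differ from the one the documentation above recommends):

    # Start a debug pod on the node and switch to the host filesystem.
    kubectl debug node/<node-name> -it --image=mcr.microsoft.com/cbl-mariner/busybox:2.0
    chroot /host

    # Test connectivity to required endpoints from the node.
    curl -v https://mcr.microsoft.com/v2/
    nslookup management.azure.com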

    Ensure you have the right connectivity, and also check the pod events and logs to find out why they are stuck in a Terminating or Pending state, for example as sketched below.
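
    A minimal sketch of inspecting the kube-system pods (the pod name is one from the output in the question):

    # Scheduling events for a Pending pod usually explain why it cannot be placed.
    kubectl describe pod coredns-57dd64d696-c46fs -n kube-system

    # Recent events in the namespace, oldest first.
    kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp

    # Container logs, if the pod ever started.
    kubectl logs coredns-57dd64d696-c46fs -n kube-system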

    Looking forward to your response.

    Best Regards,

    Daniel
