@Rajaniesh Kaushikk There has been some discussion on this issue here: https://github.com/Azure/AKS/issues/274
We've seen a few folks that were actually describing the by-design experience of upgrade as described in this article and below: https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster#upgrade-an-aks-cluster
Here is the outline:
With a list of available versions for your AKS cluster, use the az aks upgrade command to upgrade. During the upgrade process, AKS adds a new node to the cluster that runs the specified Kubernetes version, then carefully cordons and drains one of the old nodes to minimize disruption to running applications. When the new node is confirmed as running application pods, the old node is deleted. This process repeats until all nodes in the cluster have been upgraded.
This means that during an upgrade you will see nodes flipping between Ready and NotReady, and that is normal, since we always keep n+1 nodes in the Ready state for your applications.
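As a quick sketch of the flow described above (the resource group and cluster names `myResourceGroup` / `myAKSCluster` are placeholders, and the version shown is only an example — substitute whatever `az aks get-upgrades` reports for your cluster):

```shell
# List the Kubernetes versions your cluster can upgrade to
az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table

# Start the upgrade to one of the listed versions (example version shown)
az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.28.5

# In a second terminal, watch the surge node appear and the old nodes
# get cordoned/drained one by one — this is the expected Ready/NotReady flipping
kubectl get nodes --watch
```
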
Nonetheless, we've seen a few folks describing behaviors that are not expected. If you believe what you are seeing is not expected, please open a support ticket if you have the ability to do so.
If not, please let me know and I can enable a free support ticket for you. Thanks.