Using the same Valkey cluster on Azure Kubernetes Service (AKS) that you deployed in the previous article with Locust running, you can validate the resiliency of the Valkey cluster during an AKS node pool upgrade.
Upgrade the AKS cluster
List the available versions for the AKS cluster and identify the target version you're upgrading to.
az aks get-upgrades --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --output table
Upgrade the AKS control plane only. In this example, the target version is 1.30.0:
az aks upgrade --resource-group $MY_RESOURCE_GROUP_NAME --name $MY_CLUSTER_NAME --control-plane-only --kubernetes-version 1.30.0
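Before moving on, you can confirm that the control plane now reports the target version. This is a minimal check, reusing the same environment variables as the commands above:

```shell
# Query only the control plane's Kubernetes version; expect 1.30.0 in this example.
az aks show \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --name $MY_CLUSTER_NAME \
    --query kubernetesVersion \
    --output tsv
```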
Verify that the Locust client you started in the previous article is still running. The Locust dashboard shows the impact of the AKS node pool upgrade on the Valkey cluster.
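One way to verify the client is still up is to list its pods. This sketch assumes the Locust deployment from the previous article is labeled `app=locust`; adjust the selector or namespace to match your deployment:

```shell
# List Locust pods (label selector is an assumption; match it to your deployment).
kubectl get pods -l app=locust
```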
Upgrade the Valkey node pool.
az aks nodepool upgrade \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --cluster-name $MY_CLUSTER_NAME \
    --kubernetes-version 1.30.0 \
    --name valkey
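The node pool upgrade can take several minutes. You can check its state from a second terminal; the node pool reports `Upgrading` while in progress and `Succeeded` when complete:

```shell
# Show the provisioning state of the valkey node pool.
az aks nodepool show \
    --resource-group $MY_RESOURCE_GROUP_NAME \
    --cluster-name $MY_CLUSTER_NAME \
    --name valkey \
    --query provisioningState \
    --output tsv
```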
While the upgrade process is running, you can monitor the Locust dashboard to see the status of the client requests. Ideally, the dashboard should be similar to the following screenshot:
Locust is running with 100 users making 50 requests per second. During the upgrade process, a master pod is evicted four times. Each time, the affected shard is unavailable for a few seconds, but the Valkey cluster continues to serve requests for the other shards.
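Alongside the Locust dashboard, you can watch the evictions and pod rescheduling directly. This sketch assumes the Valkey pods carry the label `app.kubernetes.io/name=valkey`, which depends on how the cluster was deployed in the previous article:

```shell
# Stream pod status changes for the Valkey cluster during the node pool upgrade
# (label selector is an assumption; adjust to your deployment's labels).
kubectl get pods -l app.kubernetes.io/name=valkey --watch
```

Press Ctrl+C to stop watching once all pods return to the `Running` state.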