Tutorial - Scale applications in Azure Kubernetes Service (AKS)

If you followed the previous tutorials, you have a working Kubernetes cluster and Azure Store Front app.

In this tutorial, part six of seven, you scale out the pods in the app, try pod autoscaling, and scale the number of Azure VM nodes to change the cluster's capacity for hosting workloads. You learn how to:

  • Manually scale Kubernetes pods that run your application.
  • Configure autoscaling pods that run the app front end.
  • Manually scale the Kubernetes nodes.

Before you begin

In previous tutorials, you packaged an application into a container image, uploaded the image to Azure Container Registry, created an AKS cluster, deployed an application, and used Azure Service Bus to redeploy an updated application. If you haven't completed these steps and want to follow along, start with Tutorial 1 - Prepare application for AKS.

This tutorial requires Azure CLI version 2.34.1 or later. Run az --version to find the version. If you need to install or upgrade, see Install Azure CLI.
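
For example, the following commands print the installed CLI version and, if your environment allows in-place upgrades, update it:

    az --version
    az upgrade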

Manually scale pods

  1. View the pods in your cluster using the kubectl get command.

    kubectl get pods
    

    The following example output shows the pods running the Azure Store Front app:

    NAME                               READY     STATUS     RESTARTS   AGE
    order-service-848767080-tf34m      1/1       Running    0          31m
    product-service-4019737227-2q2qz   1/1       Running    0          31m
    store-front-2606967446-2q2qz       1/1       Running    0          31m
    
  2. Manually change the number of pods in the store-front deployment using the kubectl scale command.

    kubectl scale --replicas=5 deployment.apps/store-front
    
  3. Verify the additional pods were created using the kubectl get pods command.

    kubectl get pods
    

    The following example output shows the additional pods running the Azure Store Front app:

    NAME                              READY     STATUS    RESTARTS   AGE
    store-front-2606967446-2q2qzc     1/1       Running   0          15m
    store-front-3309479140-2hfh0      1/1       Running   0          3m
    store-front-3309479140-bzt05      1/1       Running   0          3m
    store-front-3309479140-fvcvm      1/1       Running   0          3m
    store-front-3309479140-hrbf2      1/1       Running   0          15m
    store-front-3309479140-qphz8      1/1       Running   0          3m
    
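You can also check the scale operation at the deployment level. The following optional command, which isn't part of the tutorial steps, reports the desired and ready replica counts for the store-front deployment and should show 5/5 once all new pods are running:

    kubectl get deployment store-front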

Autoscale pods

To use the horizontal pod autoscaler, all containers must have CPU requests and limits defined, and pods must have requests specified. In the aks-store-quickstart deployment, the front-end container requests 1m CPU with a limit of 1000m CPU.

These resource requests and limits are defined for each container, as shown in the following condensed example YAML:

...
  containers:
  - name: store-front
    image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
    ports:
    - containerPort: 8080
      name: store-front
...
    resources:
      requests:
        cpu: 1m
...
      limits:
        cpu: 1000m
...
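
If you want to confirm the requests and limits that are actually set on the running deployment, rather than reading them from the manifest, one option is to query them with a JSONPath expression. This is an optional check, shown here only as an example:

    kubectl get deployment store-front -o jsonpath='{.spec.template.spec.containers[0].resources}'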

Autoscale pods using a manifest file

  1. Create a manifest file to define the autoscaler behavior and resource limits, as shown in the following condensed example manifest file aks-store-quickstart-hpa.yaml:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: store-front-hpa
    spec:
      maxReplicas: 10 # define max replica count
      minReplicas: 3  # define min replica count
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: store-front
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
    
  2. Apply the autoscaler manifest file using the kubectl apply command.

    kubectl apply -f aks-store-quickstart-hpa.yaml
    
  3. Check the status of the autoscaler using the kubectl get hpa command.

    kubectl get hpa
    

    After a few minutes, with minimal load on the Azure Store Front app, the number of pod replicas decreases to three. You can use kubectl get pods again to see the unneeded pods being removed.
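
As an alternative to the manifest file, you can create an equivalent autoscaler imperatively with the kubectl autoscale command. The following one-liner mirrors the manifest above (a CPU utilization target of 50 percent and between 3 and 10 replicas) and is shown as an optional shortcut. Use it instead of, not in addition to, the manifest so you don't create two autoscalers for the same deployment:

    kubectl autoscale deployment store-front --cpu-percent=50 --min=3 --max=10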

Note

You can enable the Kubernetes Event-driven Autoscaling (KEDA) AKS add-on on your cluster to drive scaling based on the number of events that need to be processed. For more information, see Enable simplified application autoscaling with the Kubernetes Event-Driven Autoscaling (KEDA) add-on (Preview).
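
If you want to try the add-on, enabling it on an existing cluster is typically a single Azure CLI call. The following sketch assumes the resource group and cluster names used earlier in this tutorial series and that your CLI version supports the --enable-keda flag:

    az aks update --resource-group myResourceGroup --name myAKSCluster --enable-keda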

Manually scale AKS nodes

If you created your Kubernetes cluster using the commands in the previous tutorials, your cluster has two nodes. If you want to increase or decrease this count, you can manually adjust the number of nodes.

The following example increases the number of nodes to three in the Kubernetes cluster named myAKSCluster. The command takes a couple of minutes to complete.

  • Scale your cluster nodes using the az aks scale command.

    az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
    

    Once the cluster successfully scales, your output will be similar to the following example output:

    "aadProfile": null,
    "addonProfiles": null,
    "agentPoolProfiles": [
      {
        ...
        "count": 3,
        "mode": "System",
        "name": "nodepool1",
        "osDiskSizeGb": 128,
        "osDiskType": "Managed",
        "osType": "Linux",
        "ports": null,
        "vmSize": "Standard_DS2_v2",
        "vnetSubnetId": null
        ...
      }
      ...
    ]
    
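You can then confirm from the Kubernetes side that the new node has joined the cluster and is in the Ready state:

    kubectl get nodes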

You can also autoscale the nodes in your cluster. For more information, see Use the cluster autoscaler with node pools.
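
As a sketch of what that looks like, the following command enables the cluster autoscaler on the existing cluster with a node range of 1 to 3. It reuses the resource group and cluster names from this tutorial series; treat the minimum and maximum counts as example values to adapt rather than recommended settings:

    az aks update --resource-group myResourceGroup --name myAKSCluster --enable-cluster-autoscaler --min-count 1 --max-count 3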

Next steps

In this tutorial, you used different scaling features in your Kubernetes cluster. You learned how to:

  • Manually scale Kubernetes pods that run your application.
  • Configure autoscaling pods that run the app front end.
  • Manually scale the Kubernetes nodes.

In the next tutorial, you learn how to upgrade Kubernetes in your AKS cluster.