Quickstart: Deploy an Azure Linux Container Host for an AKS cluster using Azure PowerShell
Get started with the Azure Linux Container Host by using Azure PowerShell to deploy an Azure Linux Container Host for an AKS cluster. After installing the prerequisites, you create a resource group, create an AKS cluster, connect to the cluster, and run a sample multi-container application in the cluster.
Prerequisites
- If you don't have an Azure subscription, create an Azure free account before you begin.
- Use the PowerShell environment in Azure Cloud Shell. For more information, see Azure Cloud Shell Quickstart.
- If you're running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the Connect-AzAccount cmdlet (a short installation sketch follows this list). For more information about installing the Az PowerShell module, see Install Azure PowerShell.
- Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see Access and identity options for Azure Kubernetes Service (AKS).
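If you're installing the Az PowerShell module for the first time, the following sketch shows one way to install it from the PowerShell Gallery and sign in; adjust the scope and options to suit your environment.

# Install the Az module from the PowerShell Gallery for the current user
Install-Module -Name Az -Repository PSGallery -Scope CurrentUser -Force

# Sign in to your Azure account; a browser prompt opens for authentication
Connect-AzAccount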
Create a resource group
An Azure resource group is a logical group in which Azure resources are deployed and managed. When creating a resource group, you need to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation.
The following example creates a resource group named testAzureLinuxResourceGroup in the eastus region.
Create a resource group using the New-AzResourceGroup cmdlet.
New-AzResourceGroup -Name testAzureLinuxResourceGroup -Location eastus
The following example output shows successful creation of the resource group:
ResourceGroupName : testAzureLinuxResourceGroup
Location          : eastus
ProvisioningState : Succeeded
Tags              :
ResourceId        : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testAzureLinuxResourceGroup
Note
The above example uses eastus, but Azure Linux Container Host clusters are available in all regions.
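If you want to confirm which regions are available to your subscription before choosing a location, an optional check with the Get-AzLocation cmdlet lists them; this step isn't part of the quickstart itself.

# List the regions available to the current subscription
Get-AzLocation | Select-Object -ExpandProperty Location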
Create an Azure Linux Container Host cluster
The following example creates a cluster named testAzureLinuxCluster with one node.
Create an AKS cluster using the New-AzAksCluster cmdlet with the -NodeOsSKU flag set to AzureLinux.
New-AzAksCluster -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster -NodeOsSKU AzureLinux
After a few minutes, the command completes and returns JSON-formatted information about the cluster.
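If you want to confirm the cluster deployed successfully before connecting to it, one optional check is to query it with the Get-AzAksCluster cmdlet; the property names shown here are typical of the returned cluster object and worth verifying in your environment.

# Return the cluster object; ProvisioningState should read "Succeeded"
Get-AzAksCluster -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster |
    Select-Object Name, ProvisioningState, KubernetesVersion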
Connect to the cluster
To manage a Kubernetes cluster, use the Kubernetes command-line client, kubectl. kubectl is already installed if you use Azure Cloud Shell.
Install kubectl locally using the Install-AzAksCliTool cmdlet.
Install-AzAksCliTool
Configure kubectl to connect to your Kubernetes cluster using the Import-AzAksCredential cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
Import-AzAksCredential -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster
Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster pods.
kubectl get pods --all-namespaces
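As an optional check that the node pool is running Azure Linux, you can also list the nodes with wider output; the OS-IMAGE column should report the Azure Linux (CBL-Mariner) image, though the exact string can vary by release.

# Show node details, including OS image and kernel version
kubectl get nodes -o wide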
Deploy the application
To deploy the application, you use a manifest file to create all the objects required to run the AKS Store application. A Kubernetes manifest file defines a cluster's desired state, such as which container images to run. The manifest includes the following Kubernetes deployments and services:
- Store front: Web application for customers to view products and place orders.
- Product service: Shows product information.
- Order service: Places orders.
- RabbitMQ: Message queue for an order queue.
Note
We don't recommend running stateful containers, such as RabbitMQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus.
Create a file named aks-store-quickstart.yaml and copy in the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: rabbitmq
        image: mcr.microsoft.com/mirror/docker/library/rabbitmq:3.10-management-alpine
        ports:
        - containerPort: 5672
          name: rabbitmq-amqp
        - containerPort: 15672
          name: rabbitmq-http
        env:
        - name: RABBITMQ_DEFAULT_USER
          value: "username"
        - name: RABBITMQ_DEFAULT_PASS
          value: "password"
        resources:
          requests:
            cpu: 10m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        volumeMounts:
        - name: rabbitmq-enabled-plugins
          mountPath: /etc/rabbitmq/enabled_plugins
          subPath: enabled_plugins
      volumes:
      - name: rabbitmq-enabled-plugins
        configMap:
          name: rabbitmq-enabled-plugins
          items:
          - key: rabbitmq_enabled_plugins
            path: enabled_plugins
---
apiVersion: v1
data:
  rabbitmq_enabled_plugins: |
    [rabbitmq_management,rabbitmq_prometheus,rabbitmq_amqp1_0].
kind: ConfigMap
metadata:
  name: rabbitmq-enabled-plugins
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  selector:
    app: rabbitmq
  ports:
  - name: rabbitmq-amqp
    port: 5672
    targetPort: 5672
  - name: rabbitmq-http
    port: 15672
    targetPort: 15672
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: order-service
        image: ghcr.io/azure-samples/aks-store-demo/order-service:latest
        ports:
        - containerPort: 3000
        env:
        - name: ORDER_QUEUE_HOSTNAME
          value: "rabbitmq"
        - name: ORDER_QUEUE_PORT
          value: "5672"
        - name: ORDER_QUEUE_USERNAME
          value: "username"
        - name: ORDER_QUEUE_PASSWORD
          value: "password"
        - name: ORDER_QUEUE_NAME
          value: "orders"
        - name: FASTIFY_ADDRESS
          value: "0.0.0.0"
        resources:
          requests:
            cpu: 1m
            memory: 50Mi
          limits:
            cpu: 75m
            memory: 128Mi
      initContainers:
      - name: wait-for-rabbitmq
        image: busybox
        command: ['sh', '-c', 'until nc -zv rabbitmq 5672; do echo waiting for rabbitmq; sleep 2; done;']
        resources:
          requests:
            cpu: 1m
            memory: 50Mi
          limits:
            cpu: 75m
            memory: 128Mi
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  selector:
    app: order-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: product-service
        image: ghcr.io/azure-samples/aks-store-demo/product-service:latest
        ports:
        - containerPort: 3002
        resources:
          requests:
            cpu: 1m
            memory: 1Mi
          limits:
            cpu: 1m
            memory: 7Mi
---
apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 3002
    targetPort: 3002
  selector:
    app: product-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: store-front
  template:
    metadata:
      labels:
        app: store-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: store-front
        image: ghcr.io/azure-samples/aks-store-demo/store-front:latest
        ports:
        - containerPort: 8080
          name: store-front
        env:
        - name: VUE_APP_ORDER_SERVICE_URL
          value: "http://order-service:3000/"
        - name: VUE_APP_PRODUCT_SERVICE_URL
          value: "http://product-service:3002/"
        resources:
          requests:
            cpu: 1m
            memory: 200Mi
          limits:
            cpu: 1000m
            memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: store-front
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: store-front
  type: LoadBalancer
If you create and save the YAML file locally, you can upload the manifest file to your default directory in Cloud Shell by selecting the Upload/Download files button and selecting the file from your local file system.
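Before applying the manifest, you can optionally validate it with a client-side dry run; this sketch only checks that the YAML parses and the objects are well formed and doesn't create anything in the cluster.

# Validate the manifest without sending changes to the cluster
kubectl apply -f aks-store-quickstart.yaml --dry-run=client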
Deploy the application using the kubectl apply command and specify the name of your YAML manifest.
kubectl apply -f aks-store-quickstart.yaml
The following example output shows the deployments and services:
deployment.apps/rabbitmq created
service/rabbitmq created
deployment.apps/order-service created
service/order-service created
deployment.apps/product-service created
service/product-service created
deployment.apps/store-front created
service/store-front created
Test the application
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
Check the status of the deployed pods using the kubectl get pods command. Make sure all pods are Running before proceeding.
kubectl get pods
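If you'd rather block until the pods are ready than poll manually, one option is the kubectl wait command; the five-minute timeout shown here is only an example value.

# Wait up to five minutes for every pod in the default namespace to become Ready
kubectl wait --for=condition=Ready pod --all --timeout=300s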
Check for a public IP address for the store-front application. Monitor progress using the kubectl get service command with the --watch argument.
kubectl get service store-front --watch
The EXTERNAL-IP output for the store-front service initially shows as pending:
NAME          TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
store-front   LoadBalancer   10.0.100.10   <pending>     80:30025/TCP   4h4m
Once the EXTERNAL-IP address changes from pending to an actual public IP address, use CTRL-C to stop the kubectl watch process. The following example output shows a valid public IP address assigned to the service:
NAME          TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)        AGE
store-front   LoadBalancer   10.0.100.10   20.62.159.19   80:30025/TCP   4h5m
Open a web browser to the external IP address of your service to see the Azure Store app in action.
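If you prefer to capture the external IP address and open the browser from PowerShell, a minimal sketch follows; the $storeFrontIp variable name is just an example, and the jsonpath expression assumes the service exposes a single load balancer IP.

# Read the external IP of the store-front service into a variable
$storeFrontIp = kubectl get service store-front -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# Open the default browser at the store-front address
Start-Process "http://$storeFrontIp"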
Delete the cluster
If you don't plan on continuing through the following tutorials, remove the created resources to avoid incurring Azure charges.
Remove the resource group and all related resources using the Remove-AzResourceGroup cmdlet.
Remove-AzResourceGroup -Name testAzureLinuxResourceGroup
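If you want the deletion to run without a confirmation prompt, or as a background job, the Remove-AzResourceGroup cmdlet also supports the -Force and -AsJob parameters; using them is optional.

# Delete the resource group without prompting, as a background job
Remove-AzResourceGroup -Name testAzureLinuxResourceGroup -Force -AsJob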
Next steps
In this quickstart, you deployed an Azure Linux Container Host AKS cluster. To learn more about the Azure Linux Container Host and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial.