Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (preview)

The AI toolchain operator (KAITO) is a managed add-on that simplifies the experience of running open-source and private AI models on your AKS cluster. KAITO reduces the time to onboard models and provision resources, enabling faster AI model prototyping and development rather than infrastructure management.

This article shows you how to enable the AI toolchain operator add-on and deploy an AI model for inferencing on AKS.

Important

AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the following support articles:

Before you begin

  • This article assumes a basic understanding of Kubernetes concepts. For more information, see Kubernetes core concepts for AKS.
  • For all hosted model preset images and default resource configuration, see the KAITO GitHub repository.
  • The AI toolchain operator add-on currently supports KAITO version 0.4.4. Keep this version in mind when choosing a model from the KAITO model repository.

Prerequisites

Install the Azure CLI preview extension

  1. Install the Azure CLI preview extension using the az extension add command.

    az extension add --name aks-preview
    
  2. Update the extension to make sure you have the latest version using the az extension update command.

    az extension update --name aks-preview
    

Register the AI toolchain operator add-on feature flag

  1. Register the AIToolchainOperatorPreview feature flag using the az feature register command.

    az feature register --namespace "Microsoft.ContainerService" --name "AIToolchainOperatorPreview"
    

    It takes a few minutes for the registration to complete.

  2. Verify the registration using the az feature show command.

    az feature show --namespace "Microsoft.ContainerService" --name "AIToolchainOperatorPreview"
    
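Once the feature state shows Registered, refresh the Microsoft.ContainerService resource provider registration so the change propagates to your subscription, using the az provider register command:

```shell
# Refresh the resource provider registration after the feature flag is registered.
az provider register --namespace Microsoft.ContainerService
```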

Export environment variables

  • To simplify the configuration steps in this article, you can define environment variables using the following commands. Make sure to replace the placeholder values with your own.

    export AZURE_SUBSCRIPTION_ID="mySubscriptionID"
    export AZURE_RESOURCE_GROUP="myResourceGroup"
    export AZURE_LOCATION="myLocation"
    export CLUSTER_NAME="myClusterName"
    

Enable the AI toolchain operator add-on on an AKS cluster

The following sections describe how to create an AKS cluster with the AI toolchain operator add-on enabled and deploy a default hosted AI model.

Create an AKS cluster with the AI toolchain operator add-on enabled

  1. Create an Azure resource group using the az group create command.

    az group create --name $AZURE_RESOURCE_GROUP --location $AZURE_LOCATION
    
  2. Create an AKS cluster with the AI toolchain operator add-on enabled using the az aks create command with the --enable-ai-toolchain-operator flag.

    az aks create --location $AZURE_LOCATION \
        --resource-group $AZURE_RESOURCE_GROUP \
        --name $CLUSTER_NAME \
        --enable-ai-toolchain-operator \
        --generate-ssh-keys
    
  3. Alternatively, on an existing AKS cluster, you can enable the AI toolchain operator add-on using the az aks update command.

    az aks update --name $CLUSTER_NAME \
        --resource-group $AZURE_RESOURCE_GROUP \
        --enable-ai-toolchain-operator
    

Connect to your cluster

  1. Configure kubectl to connect to your cluster using the az aks get-credentials command.

    az aks get-credentials --resource-group $AZURE_RESOURCE_GROUP --name $CLUSTER_NAME
    
  2. Verify the connection to your cluster using the kubectl get command.

    kubectl get nodes
    

Deploy a default hosted AI model
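The preset you apply in the next step is a KAITO Workspace custom resource. As a rough sketch (field names follow the examples published in the KAITO repository; the GPU instance type shown is the repository default for this preset and may differ in the version you're running), a minimal workspace manifest looks like:

```yaml
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b-instruct
resource:
  instanceType: "Standard_NC12s_v3"   # default GPU size for this preset; verify against the repo
  labelSelector:
    matchLabels:
      apps: falcon-7b-instruct
inference:
  preset:
    name: "falcon-7b-instruct"
```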

  1. Deploy the Falcon 7B-instruct model preset from the KAITO model repository using the kubectl apply command.

    kubectl apply -f https://raw.githubusercontent.com/Azure/kaito/main/examples/inference/kaito_workspace_falcon_7b-instruct.yaml
    
  2. Track the live resource changes in your workspace using the kubectl get command.

    kubectl get workspace workspace-falcon-7b-instruct -w
    

    Note

    As you track the KAITO workspace deployment, note that machine readiness can take up to 10 minutes, and workspace readiness up to 20 minutes depending on the size of your model.

  3. Check your inference service and get the service IP address using the kubectl get svc command.

    export SERVICE_IP=$(kubectl get svc workspace-falcon-7b-instruct -o jsonpath='{.spec.clusterIP}')
    
  4. Test the Falcon 7B-instruct inference service with a sample input of your choice using the OpenAI completions API format:

    kubectl run -it --rm --restart=Never curl --image=curlimages/curl -- curl -X POST http://$SERVICE_IP/v1/completions -H "Content-Type: application/json" \
      -d '{
            "model": "falcon-7b-instruct",
            "prompt": "What is Kubernetes?",
            "max_tokens": 10
           }'
    
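The completions endpoint returns an OpenAI-style JSON body. As a purely local illustration (the response text here is a made-up sample, not real model output), you can extract the generated text from such a body as follows:

```shell
# Illustrative only: a made-up completions-style response body, not real model output.
RESPONSE='{"choices":[{"text":"Kubernetes is an open-source container orchestration platform."}]}'

# Extract the generated text from the first choice.
echo "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["choices"][0]["text"])'
```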

Clean up resources

If you no longer need these resources, you can delete them to avoid incurring extra Azure compute charges.

  1. Delete the KAITO workspace using the kubectl delete workspace command.

    kubectl delete workspace workspace-falcon-7b-instruct
    
  2. You need to manually delete the GPU node pools provisioned by the KAITO deployment. Use the node label created by the Falcon 7B-instruct workspace to get the node pool name using the az aks nodepool list command. In this example, the node label is "kaito.sh/workspace": "workspace-falcon-7b-instruct".

    az aks nodepool list --resource-group $AZURE_RESOURCE_GROUP --cluster-name $CLUSTER_NAME
    
  3. Delete the node pool with this name from your AKS cluster, and repeat the steps in this section for each KAITO workspace you want to remove.
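As a sketch of steps 2 and 3 combined (the JMESPath filter assumes the node pool carries the kaito.sh/workspace label under nodeLabels, as in the example above; verify the field name in your own az aks nodepool list output first):

```shell
# Look up the KAITO-provisioned node pool by its workspace label.
NODE_POOL_NAME=$(az aks nodepool list \
  --resource-group $AZURE_RESOURCE_GROUP \
  --cluster-name $CLUSTER_NAME \
  --query "[?nodeLabels.\"kaito.sh/workspace\"=='workspace-falcon-7b-instruct'].name | [0]" \
  --output tsv)

# Delete that node pool from the cluster.
az aks nodepool delete \
  --resource-group $AZURE_RESOURCE_GROUP \
  --cluster-name $CLUSTER_NAME \
  --name $NODE_POOL_NAME
```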

Common troubleshooting scenarios

After you apply the KAITO inference workspace, the resource readiness and workspace conditions might not update to True for the following reasons:

  • Your Azure subscription doesn't have quota for the minimum GPU instance type specified in your KAITO workspace. You'll need to request a quota increase for the GPU VM family in your Azure subscription.
  • The GPU instance type isn't available in your AKS region. Confirm the GPU instance availability in your specific region and switch the Azure region if your GPU VM family isn't available.
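To check whether your subscription has remaining quota for a GPU VM family in a region, one option (a sketch, assuming the AZURE_LOCATION variable exported earlier; the grep pattern for NC-series families is illustrative) is the az vm list-usage command:

```shell
# List current usage against quota for VM families in the target region,
# filtered to NC-series (GPU) families.
az vm list-usage --location $AZURE_LOCATION --output table | grep -i "NC"
```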

Next steps

Learn more about KAITO model deployment options in the KAITO GitHub repository.