Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters

With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall.

Access to the apiserver of the Azure Arc-enabled Kubernetes cluster enables the following scenarios:

  • Interactive debugging and troubleshooting.
  • Cluster access for Azure services, enabling custom locations and other resources created on top of the cluster.

A conceptual overview of this feature is available in Cluster connect - Azure Arc-enabled Kubernetes.

Prerequisites

  • An Azure account with an active subscription. Create an account for free.

  • Install or update Azure CLI to version >= 2.16.0.

  • Install the connectedk8s Azure CLI extension of version >= 1.2.5:

    az extension add --name connectedk8s
    

    If you've already installed the connectedk8s extension, update the extension to the latest version:

    az extension update --name connectedk8s
    
  • An existing Azure Arc-enabled Kubernetes connected cluster.

  • Enable the following endpoints for outbound access, in addition to those listed under connecting a Kubernetes cluster to Azure Arc:

    Endpoint                                                                   Port
    *.servicebus.windows.net                                                   443
    guestnotificationservice.azure.com, *.guestnotificationservice.azure.com   443

    Note

    To translate the *.servicebus.windows.net wildcard into specific endpoints, use the command GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>, replacing the <location> placeholder with your cluster's region.
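
    The allowlist call in the note is a plain HTTPS GET. A minimal sketch, using a hypothetical eastus region (substitute your cluster's actual region):

    ```shell
    # "eastus" is a placeholder; use your cluster's Azure region.
    LOCATION='eastus'
    URL="https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=${LOCATION}"
    echo "$URL"
    # Fetch the allowlist over outbound HTTPS, for example:
    #   curl -s "$URL"
    ```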

  • Replace the placeholders and run the following commands to set the environment variables used in this document:

    CLUSTER_NAME=<cluster-name>
    RESOURCE_GROUP=<resource-group-name>
    ARM_ID_CLUSTER=$(az connectedk8s show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query id -o tsv)
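
    The resource ID that `az connectedk8s show` returns into ARM_ID_CLUSTER has a predictable shape, which is useful for sanity-checking the value. A sketch with placeholder subscription, resource group, and cluster names:

    ```shell
    # Placeholder values for illustration only.
    SUBSCRIPTION_ID='00000000-0000-0000-0000-000000000000'
    RESOURCE_GROUP='my-resource-group'
    CLUSTER_NAME='my-arc-cluster'
    # Shape of a connected-cluster ARM resource ID:
    ARM_ID_CLUSTER="/subscriptions/${SUBSCRIPTION_ID}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.Kubernetes/connectedClusters/${CLUSTER_NAME}"
    echo "$ARM_ID_CLUSTER"
    ```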
    

Azure Active Directory authentication option

  1. Get the identifier of your Azure AD entity (the user principal name for a user account, or the objectId for an application).

    • For an Azure AD user account:

      AAD_ENTITY_OBJECT_ID=$(az ad signed-in-user show --query userPrincipalName -o tsv)
      
    • For an Azure AD application:

      AAD_ENTITY_OBJECT_ID=$(az ad sp show --id <id> --query id -o tsv)
      
  2. Authorize the entity with appropriate permissions.

    • If you use Kubernetes-native ClusterRoleBinding or RoleBinding for authorization checks on the cluster, create a binding mapped to the Azure AD entity (service principal or user) that needs to access this cluster, with the kubeconfig file pointing to the apiserver of your cluster for direct access. Example:

      kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --user=$AAD_ENTITY_OBJECT_ID
      
    • If you are using Azure RBAC for authorization checks on the cluster, you can create an Azure role assignment mapped to the Azure AD entity. Example:

      az role assignment create --role "Azure Arc Kubernetes Viewer" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
      az role assignment create --role "Azure Arc Enabled Kubernetes Cluster User Role" --assignee $AAD_ENTITY_OBJECT_ID --scope $ARM_ID_CLUSTER
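
The imperative `kubectl create clusterrolebinding` command in the Kubernetes-native option above is equivalent to applying a manifest like the following sketch (the binding and role names match that example; the subject name is the value of $AAD_ENTITY_OBJECT_ID):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: demo-user-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <AAD entity identifier>   # value of $AAD_ENTITY_OBJECT_ID
```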
      

Service account token authentication option

  1. With the kubeconfig file pointing to the apiserver of your Kubernetes cluster, create a service account in any namespace (the following command creates it in the default namespace):

    kubectl create serviceaccount demo-user
    
  2. Create a ClusterRoleBinding to grant this service account the appropriate permissions on the cluster. Example:

    kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
    
  3. Create a service account token:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: demo-user-secret
      annotations:
        kubernetes.io/service-account.name: demo-user
    type: kubernetes.io/service-account-token
    EOF
    
    TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\\\n/g')
    
  4. Display the token in the console:

    echo $TOKEN
    

Access your cluster

  1. Set up the cluster connect kubeconfig needed to access your cluster based on the authentication option used:

    • If using Azure AD authentication, sign in to Azure CLI with the Azure AD entity of interest, then get the cluster connect kubeconfig needed to communicate with the cluster from anywhere (even from outside the firewall surrounding the cluster):

      az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP
      
    • If using service account authentication, get the cluster connect kubeconfig needed to communicate with the cluster from anywhere:

      az connectedk8s proxy -n $CLUSTER_NAME -g $RESOURCE_GROUP --token $TOKEN
      
  2. Use kubectl to send requests to the cluster:

    kubectl get pods
    

You should now see a response from the cluster containing the list of all pods in the default namespace.

Known limitations

When making requests to the Kubernetes cluster, if the Azure AD entity used is a part of more than 200 groups, you may see the following error:

You must be logged in to the server (Error:Error while retrieving group info. Error:Overage claim (users with more than 200 group membership) is currently not supported.

This is a known limitation. To work around this error:

  1. Create a service principal, which is less likely to be a member of more than 200 groups.
  2. Sign in to Azure CLI with the service principal before running the az connectedk8s proxy command.

Next steps