Quickstart: Connect an existing Kubernetes cluster to Azure Arc

Get started with Azure Arc-enabled Kubernetes by using Azure CLI or Azure PowerShell to connect an existing Kubernetes cluster to Azure Arc.

For a conceptual look at connecting clusters to Azure Arc, see Azure Arc-enabled Kubernetes agent overview. To try things out in a sample/practice experience, visit the Azure Arc Jumpstart.

Prerequisites

To complete this quickstart, you need an existing Kubernetes cluster that is up and running, a kubeconfig file (and context) pointing to that cluster, and the latest version of Azure CLI with the connectedk8s extension installed (az extension add --name connectedk8s).

Important

In addition to these prerequisites, be sure to meet all network requirements for Azure Arc-enabled Kubernetes.

Register providers for Azure Arc-enabled Kubernetes

  1. Enter the following commands:

    az provider register --namespace Microsoft.Kubernetes
    az provider register --namespace Microsoft.KubernetesConfiguration
    az provider register --namespace Microsoft.ExtendedLocation
    
  2. Monitor the registration process. Registration may take up to 10 minutes.

    az provider show -n Microsoft.Kubernetes -o table
    az provider show -n Microsoft.KubernetesConfiguration -o table
    az provider show -n Microsoft.ExtendedLocation -o table
    

    Once registered, the RegistrationState value for these namespaces changes to Registered.
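
If you'd rather script the wait than rerun these commands by hand, a minimal sketch like the following polls each provider until it reports Registered (the provider names come from the step above; the 30-second interval is an arbitrary choice):

# Poll each resource provider until registrationState reports "Registered".
for ns in Microsoft.Kubernetes Microsoft.KubernetesConfiguration Microsoft.ExtendedLocation; do
  while [ "$(az provider show -n "$ns" --query registrationState -o tsv)" != "Registered" ]; do
    echo "Waiting for $ns to register..."
    sleep 30
  done
  echo "$ns is registered."
done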

Create a resource group

Run the following command:

az group create --name AzureArcTest --location EastUS --output table

Output:

Location    Name
----------  ------------
eastus      AzureArcTest

Connect an existing Kubernetes cluster

Run the following command to connect your cluster. This command deploys the Azure Arc agents to the cluster and installs Helm v3.6.3 to the .azure folder of the deployment machine. This Helm 3 installation is used only for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine.
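
Because az connectedk8s connect operates on your current kubeconfig context, it's worth confirming that kubectl points at the intended cluster before running it. These are standard kubectl commands, nothing Arc-specific; if needed, the connect command also accepts --kube-config and --kube-context to target a specific cluster:

kubectl config current-context
kubectl cluster-info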

In this example, the cluster's name is AzureArcTest1.

az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest

Output:

Helm release deployment succeeded

    {
      "aadProfile": {
        "clientAppId": "",
        "serverAppId": "",
        "tenantId": ""
      },
      "agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx",
      "agentVersion": null,
      "connectivityStatus": "Connecting",
      "distribution": "gke",
      "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1",
      "identity": {
        "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
        "type": "SystemAssigned"
      },
      "infrastructure": "gcp",
      "kubernetesVersion": null,
      "lastConnectivityTime": null,
      "location": "eastus",
      "managedIdentityCertificateExpirationTime": null,
      "name": "AzureArcTest1",
      "offering": null,
      "provisioningState": "Succeeded",
      "resourceGroup": "AzureArcTest",
      "tags": {},
      "totalCoreCount": null,
      "totalNodeCount": null,
      "type": "Microsoft.Kubernetes/connectedClusters"
    }
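
Notice that connectivityStatus starts out as Connecting in the output above. If you'd like to wait until it flips to Connected without rerunning the full command, here's a small sketch using az connectedk8s show (the 30-second interval is arbitrary):

while [ "$(az connectedk8s show -n AzureArcTest1 -g AzureArcTest --query connectivityStatus -o tsv)" != "Connected" ]; do
  echo "Still connecting..."
  sleep 30
done
echo "Cluster is connected."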

Tip

If you run the command above without specifying the location parameter, the Azure Arc-enabled Kubernetes resource is created in the same location as the resource group. To create the resource in a different location, specify either --location <region> or -l <region> when running the az connectedk8s connect command.

Important

If deployment fails due to a timeout error, see our troubleshooting guide for details on how to resolve this issue.

Connect using an outbound proxy server

If your cluster is behind an outbound proxy server, requests between the cluster and Azure must be routed through that proxy.

  1. On the deployment machine, set the environment variables needed for Azure CLI to use the outbound proxy server:

    export HTTP_PROXY=<proxy-server-ip-address>:<port>
    export HTTPS_PROXY=<proxy-server-ip-address>:<port>
    export NO_PROXY=<cluster-apiserver-ip-address>:<port>
    
  2. On the Kubernetes cluster, run the connect command with the --proxy-https and --proxy-http parameters specified. If your proxy server is set up with both HTTP and HTTPS, be sure to use --proxy-http for the HTTP proxy and --proxy-https for the HTTPS proxy. If your proxy server uses only HTTP, you can use that value for both parameters.

    az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file>
    

Note

  • Some network requests, such as in-cluster service-to-service communication, need to be separated from the traffic that is routed through the proxy server for outbound communication. Use the --proxy-skip-range parameter to specify CIDR ranges and endpoints, comma-separated, so that communication from the agents to those endpoints doesn't go through the outbound proxy. At a minimum, specify the cluster's service CIDR range as a value for this parameter. For example, if kubectl get svc -A returns services whose ClusterIP values all fall within the range 10.0.0.0/16, specify --proxy-skip-range 10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc.
  • --proxy-http, --proxy-https, and --proxy-skip-range are expected for most outbound proxy environments. --proxy-cert is required only if you need to inject trusted certificates expected by the proxy into the trusted certificate store of the agent pods.
  • The outbound proxy has to be configured to allow websocket connections.
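
As a concrete illustration, suppose the proxy listens at 192.0.2.10:3128 (a hypothetical HTTP-only proxy address) and the cluster's service CIDR is 10.0.0.0/16, as in the example above. The connect command might then look like this:

az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest \
  --proxy-https http://192.0.2.10:3128 \
  --proxy-http http://192.0.2.10:3128 \
  --proxy-skip-range 10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc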

For outbound proxy servers, if you're only providing a trusted certificate, you can run az connectedk8s connect with just the --proxy-cert parameter specified:

az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>

If there are multiple trusted certificates, combine the certificate chain (leaf, intermediate, and root certificates) into a single file and pass that file in the --proxy-cert parameter.
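
One way to produce that combined file is simple concatenation in leaf-to-root order (the file names here are hypothetical):

cat leaf.crt intermediate.crt root.crt > proxy-cert-chain.pem
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert proxy-cert-chain.pem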

Note

  • --custom-ca-cert is an alias for --proxy-cert; the two parameters can be used interchangeably. If both are passed in the same command, the one passed last is honored.

Verify cluster connection

Run the following command:

az connectedk8s list --resource-group AzureArcTest --output table

Output:

Name           Location    ResourceGroup
-------------  ----------  ---------------
AzureArcTest1  eastus      AzureArcTest
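
The table view shows only a few columns; the underlying JSON carries more fields. If you also want each cluster's connectivity status, a JMESPath query along these lines should work (the column labels are arbitrary, and this assumes connectivityStatus is populated in the list output):

az connectedk8s list --resource-group AzureArcTest --query "[].{Name:name, Status:connectivityStatus}" --output table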

For help troubleshooting connection problems, see Diagnose connection issues for Azure Arc-enabled Kubernetes clusters.

Note

After onboarding the cluster, it takes up to ten minutes for cluster metadata (such as cluster version and number of nodes) to appear on the overview page of the Azure Arc-enabled Kubernetes resource in the Azure portal.

View Azure Arc agents for Kubernetes

Azure Arc-enabled Kubernetes deploys several agents into the azure-arc namespace.

  1. View these deployments and pods using:

    kubectl get deployments,pods -n azure-arc
    
  2. Verify that all pods are in a Running state. (A scripted check is sketched after the output below.)

    Output:

     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
     deployment.apps/cluster-metadata-operator   1/1     1            1           13d
     deployment.apps/clusterconnect-agent        1/1     1            1           13d
     deployment.apps/clusteridentityoperator     1/1     1            1           13d
     deployment.apps/config-agent                1/1     1            1           13d
     deployment.apps/controller-manager          1/1     1            1           13d
     deployment.apps/extension-manager           1/1     1            1           13d
     deployment.apps/flux-logs-agent             1/1     1            1           13d
     deployment.apps/kube-aad-proxy              1/1     1            1           13d
     deployment.apps/metrics-agent               1/1     1            1           13d
     deployment.apps/resource-sync-agent         1/1     1            1           13d
    
     NAME                                            READY   STATUS    RESTARTS   AGE
     pod/cluster-metadata-operator-9568b899c-2stjn   2/2     Running   0          13d
     pod/clusterconnect-agent-576758886d-vggmv       3/3     Running   0          13d
     pod/clusteridentityoperator-6f59466c87-mm96j    2/2     Running   0          13d
     pod/config-agent-7cbd6cb89f-9fdnt               2/2     Running   0          13d
     pod/controller-manager-df6d56db5-kxmfj          2/2     Running   0          13d
     pod/extension-manager-58c94c5b89-c6q72          2/2     Running   0          13d
     pod/flux-logs-agent-6db9687fcb-rmxww            1/1     Running   0          13d
     pod/kube-aad-proxy-67b87b9f55-bthqv             2/2     Running   0          13d
     pod/metrics-agent-575c565fd9-k5j2t              2/2     Running   0          13d
     pod/resource-sync-agent-6bbd8bcd86-x5bk5        2/2     Running   0          13d
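
If you'd rather script this verification than eyeball the table, kubectl wait can block until every pod in the namespace reports Ready (the five-minute timeout is an arbitrary choice):

kubectl wait pods --all --namespace azure-arc --for=condition=Ready --timeout=300s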
    

For more information about these agents, see Azure Arc-enabled Kubernetes agent overview.

Clean up resources

You can delete the Azure Arc-enabled Kubernetes resource, any associated configuration resources, and any agents running on the cluster by using the following command:

az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest

If the deletion process fails, use the following command to force deletion (adding -y if you want to bypass the confirmation prompt):

az connectedk8s delete -n AzureArcTest1 -g AzureArcTest --force

You can also use this command if you run into issues when creating a new cluster deployment because previously created resources weren't completely removed.
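
Once the delete finishes, you can confirm that the agents were removed by checking that the azure-arc namespace no longer exists; kubectl should eventually report NotFound (namespace termination can take a few minutes):

kubectl get namespace azure-arc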

Note

Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but does not remove any agents running on the cluster. Because of this, we recommend deleting the Azure Arc-enabled Kubernetes resource using az connectedk8s delete rather than deleting the resource in the Azure portal.

Next steps