Quickstart: Connect an existing Kubernetes cluster to Azure Arc
Get started with Azure Arc-enabled Kubernetes by using Azure CLI or Azure PowerShell to connect an existing Kubernetes cluster to Azure Arc.
For a conceptual look at connecting clusters to Azure Arc, see Azure Arc-enabled Kubernetes agent overview.
Prerequisites
An Azure account with an active subscription. Create an account for free.
A basic understanding of Kubernetes core concepts.
An identity (user or service principal) that can be used to sign in to Azure CLI and connect your cluster to Azure Arc.
Important
- The identity must have 'Read' and 'Write' permissions on the Azure Arc-enabled Kubernetes resource type (Microsoft.Kubernetes/connectedClusters).
- If connecting the cluster to an existing resource group (rather than a new one created by this identity), the identity must have 'Read' permission for that resource group.
- The Kubernetes Cluster - Azure Arc Onboarding built-in role can be used for this identity. This role is useful for at-scale onboarding, as it has only the granular permissions required to connect clusters to Azure Arc, and doesn't have permission to update, delete, or modify any other clusters or other Azure resources.
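For example, the built-in role can be assigned to the onboarding identity at resource group scope with a command along these lines (the object ID, subscription ID, and resource group shown are placeholders to fill in):
az role assignment create --assignee <identity-object-id> --role "Kubernetes Cluster - Azure Arc Onboarding" --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>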
Install or upgrade Azure CLI to the latest version.
Install the latest version of connectedk8s Azure CLI extension:
az extension add --name connectedk8s
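If the extension is already installed, update it to the latest version instead:
az extension update --name connectedk8s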
An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster; for example, a self-managed Kubernetes cluster using Cluster API.
Note
The cluster needs to have at least one node of operating system and architecture type linux/amd64. Clusters with only linux/arm64 nodes aren't yet supported.
At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU. For a multi-node Kubernetes cluster environment, pods can get scheduled on different nodes.
A kubeconfig file and context pointing to your cluster.
Install Helm 3. Ensure that the Helm 3 version is < 3.7.0.
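Before proceeding, you can sanity-check these last prerequisites with standard kubectl and Helm commands, for example:
# Confirm kubectl is pointing at the intended cluster
kubectl config current-context
# Confirm at least one node is linux/amd64 (-L adds the label values as columns)
kubectl get nodes -L kubernetes.io/os -L kubernetes.io/arch
# Confirm the Helm 3 client version (should be below 3.7.0)
helm version --short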
Register providers for Azure Arc-enabled Kubernetes
Enter the following commands:
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation
Monitor the registration process. Registration may take up to 10 minutes.
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
az provider show -n Microsoft.ExtendedLocation -o table
Once registered, you should see the RegistrationState state for these namespaces change to Registered.
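To poll a single provider's state rather than reading the full table, a JMESPath query works, for example:
az provider show -n Microsoft.Kubernetes --query registrationState -o tsv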
Meet network requirements
Generally, connectivity requirements include these principles:
- All connections are TCP unless otherwise specified.
- All HTTP connections use HTTPS and SSL/TLS with officially signed and verifiable certificates.
- All connections are outbound unless otherwise specified.
To use a proxy, verify that the agents meet the network requirements in this article.
Important
Azure Arc agents require the following outbound URLs on https://:443 to function. For *.servicebus.windows.net, websockets need to be enabled for outbound access on firewall and proxy.
Endpoint (DNS) | Description |
---|---|
https://management.azure.com | Required for the agent to connect to Azure and register the cluster. |
https://<region>.dp.kubernetesconfiguration.azure.com | Data plane endpoint for the agent to push status and fetch configuration information. |
https://login.microsoftonline.com, https://<region>.login.microsoft.com, login.windows.net | Required to fetch and update Azure Resource Manager tokens. |
https://mcr.microsoft.com, https://*.data.mcr.microsoft.com | Required to pull container images for Azure Arc agents. |
https://gbl.his.arc.azure.com | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
https://*.his.arc.azure.com | Required to pull system-assigned Managed Identity certificates. |
https://k8connecthelm.azureedge.net | az connectedk8s connect uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for the Helm client download to facilitate deployment of the agent helm chart. |
guestnotificationservice.azure.com, *.guestnotificationservice.azure.com, sts.windows.net, https://k8sconnectcsp.azureedge.net | For Cluster Connect and for Custom Location based scenarios. |
*.servicebus.windows.net | For Cluster Connect and for Custom Location based scenarios. |
https://graph.microsoft.com/ | Required when Azure RBAC is configured. |
*.arc.azure.net | Required to manage connected clusters in the Azure portal. |
To translate the *.servicebus.windows.net wildcard into specific endpoints, use the command:
GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<region>
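For example, with curl, using eastus2 as the region (see the region naming rule below):
curl "https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=eastus2"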
To get the region segment of a regional endpoint, remove all spaces from the Azure region name. For example, for the East US 2 region, the region segment is eastus2. The endpoint san-af-<region>-prod.azurewebsites.net therefore becomes san-af-eastus2-prod.azurewebsites.net in the East US 2 region.
To see a list of all regions, run this command:
az account list-locations -o table
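The Name column in that output is already the no-space form used in regional endpoints; to list only those names, a query such as this can be used:
az account list-locations --query "[].name" -o tsv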
For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see Azure Arc network requirements (Consolidated).
Create a resource group
Run the following command:
az group create --name AzureArcTest --location EastUS --output table
Output:
Location Name
---------- ------------
eastus AzureArcTest
Connect an existing Kubernetes cluster
Run the following command:
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
Note
If you are signed in to Azure CLI using a service principal, an additional parameter needs to be set to enable the custom location feature on the cluster.
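In recent versions of the connectedk8s extension, this is the --custom-locations-oid parameter, which takes the object ID of the Custom Locations app in your tenant; treat the exact flag as version-dependent and confirm with az connectedk8s connect --help. A sketch:
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest --custom-locations-oid <custom-locations-app-object-id>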
Output:
Helm release deployment succeeded
{
"aadProfile": {
"clientAppId": "",
"serverAppId": "",
"tenantId": ""
},
"agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx",
"agentVersion": null,
"connectivityStatus": "Connecting",
"distribution": "gke",
"id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1",
"identity": {
"principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"type": "SystemAssigned"
},
"infrastructure": "gcp",
"kubernetesVersion": null,
"lastConnectivityTime": null,
"location": "eastus",
"managedIdentityCertificateExpirationTime": null,
"name": "AzureArcTest1",
"offering": null,
"provisioningState": "Succeeded",
"resourceGroup": "AzureArcTest",
"tags": {},
"totalCoreCount": null,
"totalNodeCount": null,
"type": "Microsoft.Kubernetes/connectedClusters"
}
Tip
The above command, without the location parameter specified, creates the Azure Arc-enabled Kubernetes resource in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either --location <region> or -l <region> when running the az connectedk8s connect command.
Important
In some cases, deployment may fail due to a timeout error. See the troubleshooting guide for details on how to resolve this issue.
Connect using an outbound proxy server
If your cluster is behind an outbound proxy server, requests must be routed via the outbound proxy server.
Set the environment variables needed for Azure CLI to use the outbound proxy server:
export HTTP_PROXY=<proxy-server-ip-address>:<port>
export HTTPS_PROXY=<proxy-server-ip-address>:<port>
export NO_PROXY=<cluster-apiserver-ip-address>:<port>
Run the connect command with the --proxy-https and --proxy-http parameters specified. If your proxy server is set up with both HTTP and HTTPS, be sure to use --proxy-http for the HTTP proxy and --proxy-https for the HTTPS proxy. If your proxy server only uses HTTP, you can use that value for both parameters.
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file>
Note
- Some network requests, such as those involving in-cluster service-to-service communication, need to be separated from the traffic that is routed via the proxy server for outbound communication. The --proxy-skip-range parameter can be used to specify CIDR ranges and endpoints in a comma-separated way so that communication from the agents to these endpoints does not go via the outbound proxy. At a minimum, the CIDR range of the services in the cluster should be specified as the value for this parameter. For example, if kubectl get svc -A returns a list of services where all the services have ClusterIP values in the range 10.0.0.0/16, then the value to specify for --proxy-skip-range is 10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc.
- --proxy-http, --proxy-https, and --proxy-skip-range are expected for most outbound proxy environments. --proxy-cert is only required if you need to inject trusted certificates expected by the proxy into the trusted certificate store of agent pods.
- The outbound proxy has to be configured to allow websocket connections.
For outbound proxy servers where only a trusted certificate needs to be provided, without the proxy server endpoint inputs, az connectedk8s connect can be run with just the --proxy-cert input specified. If multiple trusted certificates are expected, the combined certificate chain can be provided in a single file using the --proxy-cert parameter.
Note
--custom-ca-cert is an alias for --proxy-cert. Either parameter can be used interchangeably. If both parameters are passed in the same command, the one passed last is honored.
Run the connect command with the --proxy-cert parameter specified:
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>
Verify cluster connection
Run the following command:
az connectedk8s list --resource-group AzureArcTest --output table
Output:
Name Location ResourceGroup
------------- ---------- ---------------
AzureArcTest1 eastus AzureArcTest
Note
After onboarding the cluster, it takes around 5 to 10 minutes for the cluster metadata (cluster version, agent version, number of nodes, etc.) to surface on the overview page of the Azure Arc-enabled Kubernetes resource in Azure portal.
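You can also watch the connection from the CLI by querying the connectivityStatus field shown in the earlier output, for example:
az connectedk8s show --name AzureArcTest1 --resource-group AzureArcTest --query connectivityStatus -o tsv
The value should change from Connecting to Connected once the agents have established the connection.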
Tip
For help troubleshooting problems while connecting your cluster, see Diagnose connection issues for Azure Arc-enabled Kubernetes clusters.
View Azure Arc agents for Kubernetes
Azure Arc-enabled Kubernetes deploys a few agents into the azure-arc namespace.
View these deployments and pods using:
kubectl get deployments,pods -n azure-arc
Verify all pods are in a Running state.
Output:
NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-metadata-operator    1/1     1            1           13d
deployment.apps/clusterconnect-agent         1/1     1            1           13d
deployment.apps/clusteridentityoperator      1/1     1            1           13d
deployment.apps/config-agent                 1/1     1            1           13d
deployment.apps/controller-manager           1/1     1            1           13d
deployment.apps/extension-manager            1/1     1            1           13d
deployment.apps/flux-logs-agent              1/1     1            1           13d
deployment.apps/kube-aad-proxy               1/1     1            1           13d
deployment.apps/metrics-agent                1/1     1            1           13d
deployment.apps/resource-sync-agent          1/1     1            1           13d

NAME                                            READY   STATUS    RESTARTS   AGE
pod/cluster-metadata-operator-9568b899c-2stjn   2/2     Running   0          13d
pod/clusterconnect-agent-576758886d-vggmv       3/3     Running   0          13d
pod/clusteridentityoperator-6f59466c87-mm96j    2/2     Running   0          13d
pod/config-agent-7cbd6cb89f-9fdnt               2/2     Running   0          13d
pod/controller-manager-df6d56db5-kxmfj          2/2     Running   0          13d
pod/extension-manager-58c94c5b89-c6q72          2/2     Running   0          13d
pod/flux-logs-agent-6db9687fcb-rmxww            1/1     Running   0          13d
pod/kube-aad-proxy-67b87b9f55-bthqv             2/2     Running   0          13d
pod/metrics-agent-575c565fd9-k5j2t              2/2     Running   0          13d
pod/resource-sync-agent-6bbd8bcd86-x5bk5        2/2     Running   0          13d
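Instead of scanning the list by eye, you can block until all agent pods report Ready using standard kubectl (the timeout value here is an arbitrary choice):
kubectl wait pods --all -n azure-arc --for=condition=Ready --timeout=300s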
For more information about these agents, see Azure Arc-enabled Kubernetes agent overview.
Clean up resources
You can delete the Azure Arc-enabled Kubernetes resource, any associated configuration resources, and any agents running on the cluster by using the following Azure CLI command:
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest
If the deletion process fails, use the following command to force deletion (adding -y if you want to bypass the confirmation prompt):
az connectedk8s delete -g AzureArcTest -n AzureArcTest1 --force
This command can also be used if you experience issues when creating a new cluster deployment (due to previously created resources not being completely removed).
Note
Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but does not remove any agents running on the cluster. Best practice is to delete the Azure Arc-enabled Kubernetes resource using az connectedk8s delete rather than deleting the resource in the Azure portal.
Next steps
Advance to the next article to learn how to deploy configurations to your connected Kubernetes cluster using GitOps.