Azure Kubernetes Service for Edge (preview)
Azure Kubernetes Service (AKS) for Edge provides a set of capabilities that simplify deploying and operating a fully managed Kubernetes cluster in an edge computing scenario.
Important
AKS preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. AKS previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use. For more information, see the AKS support articles.
What are Edge Zones and Azure public multi-access edge compute?
Edge Zones are small, localized footprints of Azure in a metropolitan area designed to provide low latency connectivity for applications that require the highest level of performance.
Azure public multi-access edge compute (MEC) sites are a type of Edge Zone that are placed in or near mobile operators' data centers in metro areas, and are designed to run workloads that require low latency while being attached to the mobile network. Azure public MEC is offered in partnership with the operators. The placement of the infrastructure offers lower latency for applications that are accessed from mobile devices connected to the 5G mobile network.
Some of the industries and use cases where Azure public MEC can provide benefits are:
- Media streaming and content delivery
- Real-time analytics and inferencing via artificial intelligence and machine learning
- Rendering for mixed reality
- Connected automobiles
- Healthcare
- Immersive gaming experiences
- Low latency applications for the retail industry
To learn more, see the Azure public MEC Overview.
What is AKS for Edge?
Edge Zones provide a suite of Azure services for managing and deploying applications in edge computing environments. One of the key services offered is Azure Kubernetes Service (AKS) for Edge. AKS for Edge enables organizations to meet the unique needs of edge computing while leveraging the container orchestration and management capabilities of AKS, making the deployment and management of edge applications much simpler.
Just like a typical AKS deployment, the Azure platform is responsible for maintaining the AKS control plane and providing the infrastructure, while your organization retains control over the worker nodes that run the applications.
An AKS for Edge cluster uses an architecture optimized to meet the unique needs and requirements of edge-based applications and workloads. The control plane of the cluster is created, deployed, and configured in the closest Azure region, while the agent nodes and node pools attached to the cluster are located in an Azure public MEC Edge Zone.
The components present in an AKS for Edge cluster are identical to those in a typical cluster deployed in an Azure region, ensuring that the same level of functionality and performance is maintained. For more information on these components, see [Kubernetes core concepts for AKS][concepts-cluster-workloads].
Edge Zone and parent region locations
Azure public MEC Edge Zone sites are associated with a parent Azure region that hosts all the control plane functions for the services running in the Azure public MEC. The following table lists the Azure public MEC sites where AKS cluster deployment is generally available, along with their Edge Zone ID and associated parent region:
| Telco provider | Azure public MEC name | Edge Zone ID | Parent region |
| --- | --- | --- | --- |
| AT&T | ATT Atlanta A | attatlanta1 | East US 2 |
| AT&T | ATT Dallas A | attdallas1 | South Central US |
| AT&T | ATT Detroit A | attdetroit1 | Central US |
For the latest available public MEC Edge Zones, see Azure public MEC locations.
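As an illustration, the mapping above can be captured in a small lookup when tooling needs to resolve the parent region for a given Edge Zone ID. This is a hypothetical helper, not part of any Azure SDK, and it covers only the sites listed in the table:

```python
# Parent-region lookup for the Azure public MEC sites listed above.
# Covers only the three AT&T sites in the table; extend it as new zones ship.
EDGE_ZONE_PARENT_REGION = {
    "attatlanta1": "East US 2",
    "attdallas1": "South Central US",
    "attdetroit1": "Central US",
}

def parent_region(edge_zone_id: str) -> str:
    """Return the parent Azure region for a known Edge Zone ID."""
    try:
        return EDGE_ZONE_PARENT_REGION[edge_zone_id]
    except KeyError:
        raise ValueError(f"unknown Edge Zone ID: {edge_zone_id!r}") from None
```

For example, `parent_region("attdallas1")` resolves to `"South Central US"`, the region that hosts that site's control plane functions.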
Deploy a cluster in an Edge Zone location
Prerequisites
- Before you can deploy an AKS for Edge cluster, your subscription needs access to the targeted Edge Zone location. This access is granted through an onboarding process: create a support request via the Azure portal or fill out the Azure public MEC sign-up form.
- Your cluster must be running Kubernetes version 1.24 or later.
- The identity you're using to create your cluster must have the appropriate minimum permissions. For more information on access and identity for AKS, see Access and identity options for Azure Kubernetes Service (AKS).
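As a sketch, a pre-deployment script could verify the Kubernetes version requirement by comparing the major and minor components. The parsing below assumes a plain `major.minor[.patch]` version string, optionally prefixed with `v`:

```python
def meets_minimum_version(version: str, minimum: tuple = (1, 24)) -> bool:
    """Check whether a Kubernetes version string meets the AKS for Edge minimum (1.24)."""
    # Strip an optional leading "v" and compare (major, minor) tuples.
    major, minor = version.lstrip("v").split(".")[:2]
    return (int(major), int(minor)) >= minimum
```

For instance, `meets_minimum_version("1.23.12")` is `False`, while `"v1.25.0"` passes.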
Limitations
- AKS for Edge allows for autoscaling only up to 100 nodes in a node pool
Resource constraints
While AKS is fully supported in Azure public MEC Edge Zones, resource constraints may still apply:
- In all Edge Zones, the maximum node count is 100.
- In Azure public MEC Edge Zones, only selected VM SKUs are offered. See the list of available SKUs, as well as additional constraints and limitations, in Azure public MEC key concepts.
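To make the node-count constraint concrete, a deployment script might validate a requested autoscaler range against the 100-node ceiling before submitting it. This is a hypothetical helper, not part of any Azure SDK:

```python
EDGE_ZONE_MAX_NODES = 100  # maximum node count per node pool in an Edge Zone

def validate_autoscale_range(min_count: int, max_count: int) -> None:
    """Raise ValueError if an autoscaler range is invalid for an Edge Zone node pool."""
    if not 1 <= min_count <= max_count:
        raise ValueError("min_count must be >= 1 and <= max_count")
    if max_count > EDGE_ZONE_MAX_NODES:
        raise ValueError(
            f"max_count {max_count} exceeds the Edge Zone limit of {EDGE_ZONE_MAX_NODES}"
        )
```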
Deploying an AKS cluster in an Edge Zone is similar to deploying an AKS cluster in any other region. All resource providers offer a field named extendedLocation, which you can use to deploy resources in an Edge Zone, enabling precise and targeted placement of your AKS cluster. Specify the desired Edge Zone with the extendedLocation parameter:
"extendedLocation": {
"name": "<edge-zone-id>",
"type": "EdgeZone",
},
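If you generate resource definitions programmatically, the same block can be attached from a script. This is a minimal sketch; `with_edge_zone` is a hypothetical helper, not an Azure SDK function:

```python
def make_extended_location(edge_zone_id: str) -> dict:
    """Build the extendedLocation block for an Edge Zone deployment."""
    return {"name": edge_zone_id, "type": "EdgeZone"}

def with_edge_zone(resource: dict, edge_zone_id: str) -> dict:
    """Return a copy of an ARM resource definition targeting the given Edge Zone."""
    tagged = dict(resource)  # shallow copy so the input is left untouched
    tagged["extendedLocation"] = make_extended_location(edge_zone_id)
    return tagged
```

For example, tagging a managedClusters resource with `attdallas1` yields the same `extendedLocation` shape shown above.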
The following example is an Azure Resource Manager template (ARM template) that deploys a new cluster in an Edge Zone. Provide your own values for the following template parameters:
- Subscription: Select an Azure subscription.
- Resource group: Select Create new. Enter a unique name for the resource group, such as myResourceGroup, then choose OK.
- Location: Select a location, such as East US.
- Cluster name: Enter a unique name for the AKS cluster, such as myAKSCluster.
- DNS prefix: Enter a unique DNS prefix for your cluster, such as myakscluster.
- Linux Admin Username: Enter a username to connect using SSH, such as azureuser.
- SSH RSA Public Key: Copy and paste the public part of your SSH key pair (by default, the contents of the ~/.ssh/id_rsa.pub file).
If you're unfamiliar with ARM templates, see the tutorial on deploying a local ARM template.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"metadata": {
"_generator": {
"name": "bicep",
"version": "0.9.1.41621",
"templateHash": "2637152180661081755"
}
},
"parameters": {
"clusterName": {
"type": "string",
"defaultValue": "myAKSCluster",
"metadata": {
"description": "The name of the Managed Cluster resource."
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "The location of the Managed Cluster resource."
}
},
"edgeZoneName": {
"type": "String",
"metadata": {
"description": "The name of the Edge Zone"
}
},
"dnsPrefix": {
"type": "string",
"metadata": {
"description": "Optional DNS prefix to use with hosted Kubernetes API server FQDN."
}
},
"osDiskSizeGB": {
"type": "int",
"defaultValue": 0,
"maxValue": 1023,
"minValue": 0,
"metadata": {
"description": "Disk size (in GB) to provision for each of the agent pool nodes. This value ranges from 0 to 1023. Specifying 0 will apply the default disk size for that agentVMSize."
}
},
"agentCount": {
"type": "int",
"defaultValue": 3,
"maxValue": 50,
"minValue": 1,
"metadata": {
"description": "The number of nodes for the cluster."
}
},
"agentVMSize": {
"type": "string",
"defaultValue": "standard_d2s_v3",
"metadata": {
"description": "The size of the Virtual Machine."
}
},
"linuxAdminUsername": {
"type": "string",
"metadata": {
"description": "User name for the Linux Virtual Machines."
}
},
"sshRSAPublicKey": {
"type": "string",
"metadata": {
"description": "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'"
}
}
},
"resources": [
{
"type": "Microsoft.ContainerService/managedClusters",
"apiVersion": "2022-05-02-preview",
"name": "[parameters('clusterName')]",
"location": "[parameters('location')]",
"extendedLocation": {
"name": "[parameters('edgeZoneName')]",
"type": "EdgeZone"
},
"identity": {
"type": "SystemAssigned"
},
"properties": {
"dnsPrefix": "[parameters('dnsPrefix')]",
"agentPoolProfiles": [
{
"name": "agentpool",
"osDiskSizeGB": "[parameters('osDiskSizeGB')]",
"count": "[parameters('agentCount')]",
"vmSize": "[parameters('agentVMSize')]",
"osType": "Linux",
"mode": "System"
}
],
"linuxProfile": {
"adminUsername": "[parameters('linuxAdminUsername')]",
"ssh": {
"publicKeys": [
{
"keyData": "[parameters('sshRSAPublicKey')]"
}
]
}
}
}
}
],
"outputs": {
"controlPlaneFQDN": {
"type": "string",
"value": "[reference(resourceId('Microsoft.ContainerService/managedClusters', parameters('clusterName'))).fqdn]"
}
}
}
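If you deploy the template from a script rather than the portal, the parameter values above can be written to a standard ARM parameters file. This is a sketch; the function and file names are placeholders:

```python
import json

def write_parameters_file(path: str, values: dict) -> None:
    """Write ARM template parameter values in the standard parameters-file shape."""
    doc = {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
        "contentVersion": "1.0.0.0",
        # Each parameter is wrapped in a {"value": ...} object, per the ARM format.
        "parameters": {name: {"value": value} for name, value in values.items()},
    }
    with open(path, "w") as f:
        json.dump(doc, f, indent=2)
```

You could then deploy with the Azure CLI, for example `az deployment group create --resource-group myResourceGroup --template-file azuredeploy.json --parameters @params.json` (file names here are placeholders).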
Monitoring
After deploying an AKS for Edge cluster, you can check the status and monitor the cluster's metrics. Monitoring capability is similar to what is available in Azure regions.
Edge Zone availability
High availability is critical at the edge for a variety of reasons. Edge devices are typically deployed in remote or hard-to-reach locations, making maintenance and repair more difficult and time-consuming. Additionally, these devices handle a large volume of latency-sensitive data and transactions, so any downtime can result in significant losses for businesses. By incorporating traffic management with failover capabilities, organizations can ensure that their edge deployment remains up and running even in the event of disruption, helping to minimize the impact of downtime and maintain business continuity.
For increased availability in the Azure public MEC Edge Zone, it's recommended to deploy your workload with an architecture that incorporates traffic management using Azure Traffic Manager routing profiles. This can help ensure failover to the closest Azure region in the event of a disruption. To learn more, see Azure Traffic Manager or view a sample deployment architecture for High Availability in Azure public MEC.
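The failover behavior can be illustrated with priority-based endpoint selection, similar in spirit to Traffic Manager's priority routing. This is a simplified sketch, not the actual Traffic Manager algorithm, and the example hostnames are made up:

```python
def pick_endpoint(endpoints: list) -> str:
    """Return the healthy endpoint with the lowest priority value (1 = most preferred)."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints available")
    return min(healthy, key=lambda e: e["priority"])["target"]
```

With the Edge Zone endpoint at priority 1 and the parent-region endpoint at priority 2, traffic stays at the edge while it is healthy and falls back to the region during a disruption.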
Next steps
After deploying your AKS cluster in an Edge Zone, learn how to configure an AKS cluster.