For applications deployed across multiple clusters, admins often want to route incoming traffic to those applications across the clusters.
You can follow this document to set up layer 4 load balancing for such multi-cluster applications.
Important
Azure Kubernetes Fleet Manager preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Azure Kubernetes Fleet Manager previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
If you don't have an Azure subscription, create an Azure free account before you begin.
Read the conceptual overview of this feature, which provides an explanation of the ServiceExport and MultiClusterService objects referenced in this document.
You must have a Fleet resource with a hub cluster and member clusters. If you don't have this resource, follow Quickstart: Create a Fleet resource and join member clusters.
The target Azure Kubernetes Service (AKS) clusters on which the workloads are deployed need to be present on either the same virtual network or on peered virtual networks.
You must gain access to the Kubernetes API of the hub cluster by following the steps in Access Fleet hub cluster Kubernetes API.
Set the following environment variables and obtain the kubeconfigs for the fleet and all member clusters:
export GROUP=<resource-group>
export FLEET=<fleet-name>
export MEMBER_CLUSTER_1=aks-member-1
export MEMBER_CLUSTER_2=aks-member-2
az fleet get-credentials --resource-group ${GROUP} --name ${FLEET} --file fleet
az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_1} --file aks-member-1
az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_2} --file aks-member-2
Use the Bash environment in Azure Cloud Shell. For more information, see Quickstart for Bash in Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Sign in with the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
Note
The steps in this how-to guide refer to a sample application for demonstration purposes only. You can substitute any of your own existing Deployment and Service objects for this workload.
These steps deploy the sample workload from the Fleet cluster to member clusters using Kubernetes configuration propagation. Alternatively, you can choose to deploy these Kubernetes configurations to each member cluster separately, one at a time.
Create a namespace on the fleet cluster:
KUBECONFIG=fleet kubectl create namespace kuard-demo
Output looks similar to the following example:
namespace/kuard-demo created
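If you prefer a declarative workflow, the same namespace can also be created from a manifest; a minimal equivalent:

```yaml
# Declarative equivalent of the kubectl create namespace command above
apiVersion: v1
kind: Namespace
metadata:
  name: kuard-demo
```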
Apply the Deployment, Service, and ServiceExport objects:
KUBECONFIG=fleet kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-export-service.yaml
The ServiceExport specification in the above file allows you to export a service from member clusters to the Fleet resource. Once successfully exported, the service and all its endpoints are synced to the fleet cluster and can then be used to set up multi-cluster load balancing across these endpoints. The output looks similar to the following example:
deployment.apps/kuard created
service/kuard created
serviceexport.networking.fleet.azure.com/kuard created
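For reference, the ServiceExport portion of the applied file is a small object that simply names the Service to export. A minimal sketch (the actual file also contains the kuard Deployment and Service, which are omitted here):

```yaml
# Sketch of a ServiceExport: exporting is opt-in, done by creating a
# ServiceExport with the same name and namespace as the Service to export.
apiVersion: networking.fleet.azure.com/v1alpha1
kind: ServiceExport
metadata:
  name: kuard
  namespace: kuard-demo
```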
Create the following ClusterResourcePlacement in a file called crp-2.yaml. Notice we're selecting clusters in the eastus region:
apiVersion: placement.kubernetes-fleet.io/v1
kind: ClusterResourcePlacement
metadata:
  name: kuard-demo
spec:
  resourceSelectors:
    - group: ""
      version: v1
      kind: Namespace
      name: kuard-demo
  policy:
    affinity:
      clusterAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  fleet.azure.com/location: eastus
Apply the ClusterResourcePlacement:
KUBECONFIG=fleet kubectl apply -f crp-2.yaml
If successful, the output looks similar to the following example:
clusterresourceplacement.placement.kubernetes-fleet.io/kuard-demo created
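As an aside, the placement policy shown above selects clusters by the region label. A hedged alternative sketch, pinning placement to explicitly named member clusters instead (assuming the PickFixed placement type is available in your Fleet API version; cluster names here are the ones used earlier in this guide):

```yaml
# Sketch: place the namespace on specific member clusters by name
# rather than selecting on the fleet.azure.com/location label.
apiVersion: placement.kubernetes-fleet.io/v1
kind: ClusterResourcePlacement
metadata:
  name: kuard-demo
spec:
  resourceSelectors:
    - group: ""
      version: v1
      kind: Namespace
      name: kuard-demo
  policy:
    placementType: PickFixed
    clusterNames:
      - aks-member-1
      - aks-member-2
```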
Check the status of the ClusterResourcePlacement:
KUBECONFIG=fleet kubectl get clusterresourceplacements
If successful, the output looks similar to the following example:
NAME         GEN   SCHEDULED   SCHEDULEDGEN   APPLIED   APPLIEDGEN   AGE
kuard-demo   1     True        1              True      1            20s
Check whether the service is successfully exported for the member clusters in the eastus region:
KUBECONFIG=aks-member-1 kubectl get serviceexport kuard --namespace kuard-demo
Output looks similar to the following example:
NAME    IS-VALID   IS-CONFLICTED   AGE
kuard   True       False           25s
KUBECONFIG=aks-member-2 kubectl get serviceexport kuard --namespace kuard-demo
Output looks similar to the following example:
NAME    IS-VALID   IS-CONFLICTED   AGE
kuard   True       False           55s
You should see that the service is valid for export (the IS-VALID field is True) and has no conflicts with other exports (IS-CONFLICTED is False).
Note
It may take a minute or two for the ServiceExport to be propagated.
Create a MultiClusterService on one member cluster to load balance across the service endpoints in these clusters:
KUBECONFIG=aks-member-1 kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-mcs.yaml
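The referenced kuard-mcs.yaml defines a MultiClusterService; its core shape is roughly the following (a sketch, assuming the serviceImport name must match the exported Service):

```yaml
# Sketch of a MultiClusterService: imports the exported "kuard" Service
# and fronts its endpoints with an Azure load balancer.
apiVersion: networking.fleet.azure.com/v1alpha1
kind: MultiClusterService
metadata:
  name: kuard
  namespace: kuard-demo
spec:
  serviceImport:
    name: kuard
```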
Note
To expose the service via an internal IP instead of a public one, add the following annotation to the MultiClusterService:
apiVersion: networking.fleet.azure.com/v1alpha1
kind: MultiClusterService
metadata:
  name: kuard
  namespace: kuard-demo
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
...
Output looks similar to the following example:
multiclusterservice.networking.fleet.azure.com/kuard created
Verify the MultiClusterService is valid by running the following command:
KUBECONFIG=aks-member-1 kubectl get multiclusterservice kuard --namespace kuard-demo
The output should look similar to the following example:
NAME    SERVICE-IMPORT   EXTERNAL-IP   IS-VALID   AGE
kuard   kuard            <a.b.c.d>     True       40s
The IS-VALID field should be True in the output. Note the external load balancer IP address (EXTERNAL-IP) in the output. It may take a while before the import is fully processed and the IP address becomes available.
Run the following command multiple times using the external load balancer IP address:
curl <a.b.c.d>:8080 | grep addrs
Notice that the IPs of the pods serving the request change, and that these pods are from the member clusters aks-member-1 and aks-member-2 in the eastus region. You can verify the pod IPs by running the following commands on the clusters in the eastus region:
KUBECONFIG=aks-member-1 kubectl get pods -n kuard-demo -o wide
KUBECONFIG=aks-member-2 kubectl get pods -n kuard-demo -o wide