This article describes how to use Azure Kubernetes Fleet Manager ResourcePlacement to deploy namespace-scoped resources across clusters in a fleet.
Important
Azure Kubernetes Fleet Manager preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Azure Kubernetes Fleet Manager previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
Prerequisites
- If you don't have an Azure account, create a free account before you begin.
- Read the conceptual overview of namespace-scoped resource placement to understand the concepts and terminology used in this article.
- You need a Fleet Manager with a hub cluster and member clusters. If you don't have one, see Create an Azure Kubernetes Fleet Manager resource and join member clusters by using the Azure CLI.
- You need access to the Kubernetes API of the hub cluster. If you don't have access, see Access the Kubernetes API for an Azure Kubernetes Fleet Manager hub cluster.
Establish the namespace across member clusters
Before you can use ResourcePlacement to deploy namespace-scoped resources, the target namespace must exist on the member clusters. This example shows how to create a namespace on the hub cluster and propagate it to member clusters using ClusterResourcePlacement.
Note
The following example uses the placement.kubernetes-fleet.io/v1beta1 API version. The selectionScope: NamespaceOnly field is a preview feature available in v1beta1 and isn't available in the stable v1 API.
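Because `selectionScope` is only served by the preview `v1beta1` API, you may want to confirm your hub cluster exposes it before proceeding. A minimal check, assuming your current kubectl context points at the hub cluster (the `kubectl explain` field path below is an assumption about where `selectionScope` sits in the schema):

```shell
# List the API resources in the fleet placement group to confirm the
# hub cluster serves the placement.kubernetes-fleet.io API.
kubectl api-resources --api-group=placement.kubernetes-fleet.io

# Inspect the selector schema; if this field path resolves, the hub
# serves a version that includes selectionScope (path is an assumption).
kubectl explain clusterresourceplacement.spec.resourceSelectors.selectionScope
```

If the second command reports that the field doesn't exist, your hub cluster may be serving an API version without the preview field.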
1. Create a namespace on the hub cluster:

    ```bash
    kubectl create namespace my-app
    ```

2. Create a `ClusterResourcePlacement` object to propagate the namespace to all member clusters. Save the following YAML to a file named `namespace-crp.yaml`:

    ```yaml
    apiVersion: placement.kubernetes-fleet.io/v1beta1
    kind: ClusterResourcePlacement
    metadata:
      name: my-app-namespace
    spec:
      resourceSelectors:
        - group: ""
          kind: Namespace
          name: my-app
          version: v1
          selectionScope: NamespaceOnly
      policy:
        placementType: PickAll
    ```

3. Apply the `ClusterResourcePlacement` to the hub cluster:

    ```bash
    kubectl apply -f namespace-crp.yaml
    ```

4. Verify that the namespace was propagated successfully:

    ```bash
    kubectl get clusterresourceplacement my-app-namespace
    ```

    Your output should look similar to the following example:

    ```output
    NAME               GEN   SCHEDULED   SCHEDULED-GEN   AVAILABLE   AVAILABLE-GEN   AGE
    my-app-namespace   1     True        1               True        1               15s
    ```
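To see exactly which member clusters received the namespace, you can read the per-cluster placement status instead of the summary columns. A short sketch, assuming the `v1beta1` status schema exposes the `.status.placementStatuses[].clusterName` field path:

```shell
# Print the name of each member cluster the namespace placement was
# scheduled to, one per line (field path assumed from the v1beta1 status).
kubectl get clusterresourceplacement my-app-namespace \
  -o jsonpath='{range .status.placementStatuses[*]}{.clusterName}{"\n"}{end}'
```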
Use ResourcePlacement to place namespace-scoped resources
The ResourcePlacement object is created within a namespace on the hub cluster and is used to propagate specific namespace-scoped resources to member clusters. This example demonstrates how to propagate ConfigMaps to specific member clusters using the ResourcePlacement object with a PickFixed placement policy.
For more information, see namespace-scoped resource placement using Azure Kubernetes Fleet Manager ResourcePlacement.
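`PickFixed` targets member clusters by name. If you'd rather let the scheduler choose clusters for you, the placement policy supports other placement types. The following is a hypothetical sketch, assuming `ResourcePlacement` accepts the same `PickN` policy shape (including the `numberOfClusters` field) as `ClusterResourcePlacement` in `v1beta1`:

```yaml
# Hypothetical variant: let the scheduler pick any two member clusters
# instead of naming them explicitly (assumes the PickN policy shape
# carries over from ClusterResourcePlacement v1beta1).
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ResourcePlacement
metadata:
  name: app-configs-pickn
  namespace: my-app
spec:
  resourceSelectors:
    - group: ""
      kind: ConfigMap
      version: v1
      name: app-config
  policy:
    placementType: PickN
    numberOfClusters: 2
```

The examples in this article use `PickFixed` because it makes the target clusters explicit and the verification steps deterministic.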
1. Create ConfigMaps in the namespace on the hub cluster. These ConfigMaps are propagated to the selected member clusters:

    ```bash
    kubectl create configmap app-config \
        --from-literal=environment=production \
        --from-literal=log-level=info \
        -n my-app

    kubectl create configmap feature-flags \
        --from-literal=new-ui=enabled \
        --from-literal=api-v2=disabled \
        -n my-app
    ```

2. Create a `ResourcePlacement` object to propagate the ConfigMaps. Save the following YAML to a file named `app-configs-rp.yaml`:

    ```yaml
    apiVersion: placement.kubernetes-fleet.io/v1beta1
    kind: ResourcePlacement
    metadata:
      name: app-configs
      namespace: my-app
    spec:
      resourceSelectors:
        - group: ""
          kind: ConfigMap
          version: v1
          name: app-config
        - group: ""
          kind: ConfigMap
          version: v1
          name: feature-flags
      policy:
        placementType: PickFixed
        clusterNames:
          - membercluster1
          - membercluster2
    ```

    Note

    Replace `membercluster1` and `membercluster2` with the actual names of your member clusters. You can list the available member clusters by running `kubectl get memberclusters`.

3. Apply the `ResourcePlacement` to the hub cluster:

    ```bash
    kubectl apply -f app-configs-rp.yaml
    ```

4. Check the progress of the resource propagation:

    ```bash
    kubectl get resourceplacement app-configs -n my-app
    ```

    Your output should look similar to the following example:

    ```output
    NAME          GEN   SCHEDULED   SCHEDULED-GEN   AVAILABLE   AVAILABLE-GEN   AGE
    app-configs   1     True        1               True        1               20s
    ```

5. View the details of the placement object:

    ```bash
    kubectl describe resourceplacement app-configs -n my-app
    ```

    Your output should look similar to the following example:

    ```output
    Name:         app-configs
    Namespace:    my-app
    Labels:       <none>
    Annotations:  <none>
    API Version:  placement.kubernetes-fleet.io/v1beta1
    Kind:         ResourcePlacement
    Metadata:
      Creation Timestamp:  2025-11-13T22:08:12Z
      Finalizers:
        kubernetes-fleet.io/crp-cleanup
        kubernetes-fleet.io/scheduler-cleanup
      Generation:        1
      Resource Version:  12345
      UID:               cec941f1-e48a-4045-b5dd-188bfc1a830f
    Spec:
      Policy:
        Cluster Names:
          membercluster1
          membercluster2
        Placement Type:  PickFixed
      Resource Selectors:
        Group:
        Kind:     ConfigMap
        Name:     app-config
        Version:  v1
        Group:
        Kind:     ConfigMap
        Name:     feature-flags
        Version:  v1
      Revision History Limit:  10
      Strategy:
        Type:  RollingUpdate
    Status:
      Conditions:
        Last Transition Time:  2025-11-13T22:08:12Z
        Message:               found all cluster needed as specified by the scheduling policy, found 2 cluster(s)
        Observed Generation:   1
        Reason:                SchedulingPolicyFulfilled
        Status:                True
        Type:                  ResourcePlacementScheduled
        Last Transition Time:  2025-11-13T22:08:12Z
        Message:               All 2 cluster(s) start rolling out the latest resource
        Observed Generation:   1
        Reason:                RolloutStarted
        Status:                True
        Type:                  ResourcePlacementRolloutStarted
        Last Transition Time:  2025-11-13T22:08:13Z
        Message:               No override rules are configured for the selected resources
        Observed Generation:   1
        Reason:                NoOverrideSpecified
        Status:                True
        Type:                  ResourcePlacementOverridden
        Last Transition Time:  2025-11-13T22:08:13Z
        Message:               Works(s) are successfully created or updated in 2 target cluster(s)' namespaces
        Observed Generation:   1
        Reason:                WorkSynchronized
        Status:                True
        Type:                  ResourcePlacementWorkSynchronized
        Last Transition Time:  2025-11-13T22:08:13Z
        Message:               The selected resources are successfully applied to 2 cluster(s)
        Observed Generation:   1
        Reason:                ApplySucceeded
        Status:                True
        Type:                  ResourcePlacementApplied
        Last Transition Time:  2025-11-13T22:08:13Z
        Message:               The selected resources in 2 cluster(s) are available now
        Observed Generation:   1
        Reason:                ResourceAvailable
        Status:                True
        Type:                  ResourcePlacementAvailable
      Observed Resource Index:  0
      Placement Statuses:
        Cluster Name:  membercluster1
        Conditions:
          Last Transition Time:  2025-11-13T22:08:12Z
          Message:               Successfully scheduled resources for placement in "membercluster1": picked by scheduling policy
          Observed Generation:   1
          Reason:                Scheduled
          Status:                True
          Type:                  Scheduled
          Last Transition Time:  2025-11-13T22:08:12Z
          Message:               Detected the new changes on the resources and started the rollout process
          Observed Generation:   1
          Reason:                RolloutStarted
          Status:                True
          Type:                  RolloutStarted
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               No override rules are configured for the selected resources
          Observed Generation:   1
          Reason:                NoOverrideSpecified
          Status:                True
          Type:                  Overridden
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               All of the works are synchronized to the latest
          Observed Generation:   1
          Reason:                AllWorkSynced
          Status:                True
          Type:                  WorkSynchronized
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               All corresponding work objects are applied
          Observed Generation:   1
          Reason:                AllWorkHaveBeenApplied
          Status:                True
          Type:                  Applied
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               All corresponding work objects are available
          Observed Generation:   1
          Reason:                AllWorkAreAvailable
          Status:                True
          Type:                  Available
        Observed Resource Index:  0
        Cluster Name:             membercluster2
        Conditions:
          Last Transition Time:  2025-11-13T22:08:12Z
          Message:               Successfully scheduled resources for placement in "membercluster2": picked by scheduling policy
          Observed Generation:   1
          Reason:                Scheduled
          Status:                True
          Type:                  Scheduled
          Last Transition Time:  2025-11-13T22:08:12Z
          Message:               Detected the new changes on the resources and started the rollout process
          Observed Generation:   1
          Reason:                RolloutStarted
          Status:                True
          Type:                  RolloutStarted
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               No override rules are configured for the selected resources
          Observed Generation:   1
          Reason:                NoOverrideSpecified
          Status:                True
          Type:                  Overridden
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               All of the works are synchronized to the latest
          Observed Generation:   1
          Reason:                AllWorkSynced
          Status:                True
          Type:                  WorkSynchronized
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               All corresponding work objects are applied
          Observed Generation:   1
          Reason:                AllWorkHaveBeenApplied
          Status:                True
          Type:                  Applied
          Last Transition Time:  2025-11-13T22:08:13Z
          Message:               All corresponding work objects are available
          Observed Generation:   1
          Reason:                AllWorkAreAvailable
          Status:                True
          Type:                  Available
        Observed Resource Index:  0
      Selected Resources:
        Kind:       ConfigMap
        Name:       app-config
        Namespace:  my-app
        Version:    v1
        Kind:       ConfigMap
        Name:       feature-flags
        Namespace:  my-app
        Version:    v1
    Events:
      Type    Reason                        Age   From                  Message
      ----    ------                        ----  ----                  -------
      Normal  PlacementRolloutStarted       37s   placement-controller  Started rolling out the latest resources
      Normal  PlacementOverriddenSucceeded  36s   placement-controller  Placement has been successfully overridden
      Normal  PlacementWorkSynchronized     36s   placement-controller  Work(s) have been created or updated successfully for the selected cluster(s)
      Normal  PlacementApplied              36s   placement-controller  Resources have been applied to the selected cluster(s)
      Normal  PlacementAvailable            36s   placement-controller  Resources are available on the selected cluster(s)
      Normal  PlacementRolloutCompleted     36s   placement-controller  Placement has finished the rollout process and reached the desired status
    ```
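The full describe output is long. If you only need to know whether the placement has finished, you can query a single condition directly; a short sketch using the condition type names shown in the output above:

```shell
# Print just the status of the top-level availability condition
# ("True" once the resources are available on all selected clusters).
kubectl get resourceplacement app-configs -n my-app \
  -o jsonpath='{.status.conditions[?(@.type=="ResourcePlacementAvailable")].status}'
```

This pattern also works for the other condition types, such as `ResourcePlacementApplied` or `ResourcePlacementScheduled`.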
Verify resources on member clusters
You can verify that the ConfigMaps were successfully propagated to the member clusters.
1. Get the credentials for one of your member clusters:

    ```azurecli
    az aks get-credentials --resource-group <resource-group> --name membercluster1
    ```

2. Verify that the namespace exists:

    ```bash
    kubectl get namespace my-app
    ```

3. Verify that the ConfigMaps exist in the namespace:

    ```bash
    kubectl get configmap -n my-app
    ```

    Your output should show both ConfigMaps:

    ```output
    NAME            DATA   AGE
    app-config      2      2m
    feature-flags   2      2m
    ```

4. View the contents of a ConfigMap:

    ```bash
    kubectl describe configmap app-config -n my-app
    ```
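To check every target cluster in one pass rather than repeating the steps per cluster, you can loop over the cluster names. This sketch assumes you've run `az aks get-credentials` for both clusters and that your kubeconfig context names match the member cluster names (the Azure CLI default):

```shell
# Verify both ConfigMaps on each member cluster; the command fails
# for any cluster where either ConfigMap is missing.
for cluster in membercluster1 membercluster2; do
  echo "--- ${cluster} ---"
  kubectl --context "${cluster}" get configmap app-config feature-flags -n my-app
done
```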
Clean up resources
If you no longer need the ResourcePlacement object, you can delete it by using the `kubectl delete` command:

```bash
kubectl delete resourceplacement app-configs -n my-app
```

To also remove the namespace `ClusterResourcePlacement`:

```bash
kubectl delete clusterresourceplacement my-app-namespace
```

To remove the namespace and all resources within it from the hub cluster:

```bash
kubectl delete namespace my-app
```
Related content
To learn more about namespace-scoped resource placement, see the following resources:
- Using ResourcePlacement to deploy namespace-scoped resources
- Using ClusterResourcePlacement to deploy cluster-scoped resources
- Understanding resource placement status output
- Use overrides to customize namespace-scoped resources
- Defining a rollout strategy for resource placement
- Cluster resource placement FAQs