This article describes how to use Azure Kubernetes Fleet Manager ResourcePlacement to deploy namespace-scoped resources across the clusters in a fleet.
Important

Azure Kubernetes Fleet Manager preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Azure Kubernetes Fleet Manager previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
Prerequisites
- If you don't have an Azure account, create a free account before you begin.
- Read the conceptual overview of namespace-scoped resource placement to understand the concepts and terminology used in this article.
- You need a Fleet Manager with a hub cluster and member clusters. If you don't have one, see Create an Azure Kubernetes Fleet Manager resource and join member clusters by using the Azure CLI.
- You need Kubernetes API access to the hub cluster. If you don't have access, see Access the Kubernetes API of the Azure Kubernetes Fleet Manager hub cluster.
Create a namespace across member clusters
Before you can deploy namespace-scoped resources with ResourcePlacement, the target namespace must exist on the member clusters. This example shows how to create a namespace on the hub cluster and propagate it to the member clusters by using a ClusterResourcePlacement.
Note

The following example uses the placement.kubernetes-fleet.io/v1beta1 API version. The selectionScope: NamespaceOnly field is a preview feature that's available in v1beta1 and isn't present in the stable v1 API.
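Before applying any manifests, you may want to confirm that your hub cluster actually serves the preview v1beta1 placement API. The sketch below shows one way to script that check; the kubectl call is shown in a comment, and a sample value stands in for live output so the logic itself can run anywhere:

```shell
# Sketch: check whether the hub cluster serves the preview v1beta1 placement API.
# On a real hub cluster you would capture the served versions with:
#   SERVED=$(kubectl api-versions)
# A sample value stands in for live output here.
SERVED="placement.kubernetes-fleet.io/v1
placement.kubernetes-fleet.io/v1beta1"

# selectionScope: NamespaceOnly is only honored by the v1beta1 API.
if echo "$SERVED" | grep -q '^placement.kubernetes-fleet.io/v1beta1$'; then
  RESULT="v1beta1 served"
else
  RESULT="v1beta1 not served"
fi
echo "$RESULT"
```

If the v1beta1 version isn't listed, manifests that rely on selectionScope won't behave as expected on that hub cluster.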
Create the namespace on the hub cluster:

```
kubectl create namespace my-app
```

Create a ClusterResourcePlacement object to propagate the namespace to all member clusters. Save the following YAML to a file named namespace-crp.yaml:

```yaml
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ClusterResourcePlacement
metadata:
  name: my-app-namespace
spec:
  resourceSelectors:
    - group: ""
      kind: Namespace
      name: my-app
      version: v1
      selectionScope: NamespaceOnly
  policy:
    placementType: PickAll
```

Apply the ClusterResourcePlacement to the hub cluster:

```
kubectl apply -f namespace-crp.yaml
```

Verify that the namespace propagated successfully:

```
kubectl get clusterresourceplacement my-app-namespace
```

Your output should look similar to the following example:

```
NAME               GEN   SCHEDULED   SCHEDULED-GEN   AVAILABLE   AVAILABLE-GEN   AGE
my-app-namespace   1     True        1               True        1               15s
```
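Rather than re-running `kubectl get` by hand, you can script the readiness check against the placement's status conditions. The condition type and jsonpath below are assumptions based on the v1beta1 status output shown later in this article; verify them against your cluster. A sample value stands in for live output so the logic runs without a cluster:

```shell
# Sketch: script the "is the namespace placement available?" check.
# On the hub cluster you might read the condition with (the condition type
# "ClusterResourcePlacementAvailable" is an assumption; confirm it with
# kubectl describe):
#   STATUS=$(kubectl get clusterresourceplacement my-app-namespace \
#     -o jsonpath='{.status.conditions[?(@.type=="ClusterResourcePlacementAvailable")].status}')
# A sample value stands in for live output here.
STATUS="True"

if [ "$STATUS" = "True" ]; then
  RESULT="placement available"
else
  RESULT="placement not yet available"
fi
echo "$RESULT"
```

A loop around this check makes a simple gate for CI pipelines that need the namespace in place before deploying workloads.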
Use ResourcePlacement to place namespace-scoped resources
A ResourcePlacement object is created in a namespace on the hub cluster and is used to propagate specific namespace-scoped resources to member clusters. This example shows how to use a ResourcePlacement object with a PickFixed placement policy to propagate ConfigMaps to specific member clusters.

For more information, see Namespace-scoped resource placement using Azure Kubernetes Fleet Manager ResourcePlacement.
Create ConfigMaps in the namespace on the hub cluster. These ConfigMaps are propagated to the selected member clusters:

```
kubectl create configmap app-config \
  --from-literal=environment=production \
  --from-literal=log-level=info \
  -n my-app

kubectl create configmap feature-flags \
  --from-literal=new-ui=enabled \
  --from-literal=api-v2=disabled \
  -n my-app
```

Create a ResourcePlacement object to propagate the ConfigMaps. Save the following YAML to a file named app-configs-rp.yaml:

```yaml
apiVersion: placement.kubernetes-fleet.io/v1beta1
kind: ResourcePlacement
metadata:
  name: app-configs
  namespace: my-app
spec:
  resourceSelectors:
    - group: ""
      kind: ConfigMap
      version: v1
      name: app-config
    - group: ""
      kind: ConfigMap
      version: v1
      name: feature-flags
  policy:
    placementType: PickFixed
    clusterNames:
      - membercluster1
      - membercluster2
```

Note

Replace membercluster1 and membercluster2 with the actual names of your member clusters. You can list the available member clusters with kubectl get memberclusters.

Apply the ResourcePlacement to the hub cluster:

```
kubectl apply -f app-configs-rp.yaml
```

Check the progress of resource propagation:

```
kubectl get resourceplacement app-configs -n my-app
```

Your output should look similar to the following example:

```
NAME          GEN   SCHEDULED   SCHEDULED-GEN   AVAILABLE   AVAILABLE-GEN   AGE
app-configs   1     True        1               True        1               20s
```

View the details of the placement object:

```
kubectl describe resourceplacement app-configs -n my-app
```

Your output should look similar to the following example:

```
Name:         app-configs
Namespace:    my-app
Labels:       <none>
Annotations:  <none>
API Version:  placement.kubernetes-fleet.io/v1beta1
Kind:         ResourcePlacement
Metadata:
  Creation Timestamp:  2025-11-13T22:08:12Z
  Finalizers:
    kubernetes-fleet.io/crp-cleanup
    kubernetes-fleet.io/scheduler-cleanup
  Generation:        1
  Resource Version:  12345
  UID:               cec941f1-e48a-4045-b5dd-188bfc1a830f
Spec:
  Policy:
    Cluster Names:
      membercluster1
      membercluster2
    Placement Type:  PickFixed
  Resource Selectors:
    Group:
    Kind:     ConfigMap
    Name:     app-config
    Version:  v1
    Group:
    Kind:     ConfigMap
    Name:     feature-flags
    Version:  v1
  Revision History Limit:  10
  Strategy:
    Type:  RollingUpdate
Status:
  Conditions:
    Last Transition Time:  2025-11-13T22:08:12Z
    Message:               found all cluster needed as specified by the scheduling policy, found 2 cluster(s)
    Observed Generation:   1
    Reason:                SchedulingPolicyFulfilled
    Status:                True
    Type:                  ResourcePlacementScheduled
    Last Transition Time:  2025-11-13T22:08:12Z
    Message:               All 2 cluster(s) start rolling out the latest resource
    Observed Generation:   1
    Reason:                RolloutStarted
    Status:                True
    Type:                  ResourcePlacementRolloutStarted
    Last Transition Time:  2025-11-13T22:08:13Z
    Message:               No override rules are configured for the selected resources
    Observed Generation:   1
    Reason:                NoOverrideSpecified
    Status:                True
    Type:                  ResourcePlacementOverridden
    Last Transition Time:  2025-11-13T22:08:13Z
    Message:               Works(s) are succcesfully created or updated in 2 target cluster(s)' namespaces
    Observed Generation:   1
    Reason:                WorkSynchronized
    Status:                True
    Type:                  ResourcePlacementWorkSynchronized
    Last Transition Time:  2025-11-13T22:08:13Z
    Message:               The selected resources are successfully applied to 2 cluster(s)
    Observed Generation:   1
    Reason:                ApplySucceeded
    Status:                True
    Type:                  ResourcePlacementApplied
    Last Transition Time:  2025-11-13T22:08:13Z
    Message:               The selected resources in 2 cluster(s) are available now
    Observed Generation:   1
    Reason:                ResourceAvailable
    Status:                True
    Type:                  ResourcePlacementAvailable
  Observed Resource Index:  0
  Placement Statuses:
    Cluster Name:  membercluster1
    Conditions:
      Last Transition Time:  2025-11-13T22:08:12Z
      Message:               Successfully scheduled resources for placement in "membercluster1": picked by scheduling policy
      Observed Generation:   1
      Reason:                Scheduled
      Status:                True
      Type:                  Scheduled
      Last Transition Time:  2025-11-13T22:08:12Z
      Message:               Detected the new changes on the resources and started the rollout process
      Observed Generation:   1
      Reason:                RolloutStarted
      Status:                True
      Type:                  RolloutStarted
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               No override rules are configured for the selected resources
      Observed Generation:   1
      Reason:                NoOverrideSpecified
      Status:                True
      Type:                  Overridden
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               All of the works are synchronized to the latest
      Observed Generation:   1
      Reason:                AllWorkSynced
      Status:                True
      Type:                  WorkSynchronized
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               All corresponding work objects are applied
      Observed Generation:   1
      Reason:                AllWorkHaveBeenApplied
      Status:                True
      Type:                  Applied
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               All corresponding work objects are available
      Observed Generation:   1
      Reason:                AllWorkAreAvailable
      Status:                True
      Type:                  Available
    Observed Resource Index:  0
    Cluster Name:             membercluster2
    Conditions:
      Last Transition Time:  2025-11-13T22:08:12Z
      Message:               Successfully scheduled resources for placement in "membercluster2": picked by scheduling policy
      Observed Generation:   1
      Reason:                Scheduled
      Status:                True
      Type:                  Scheduled
      Last Transition Time:  2025-11-13T22:08:12Z
      Message:               Detected the new changes on the resources and started the rollout process
      Observed Generation:   1
      Reason:                RolloutStarted
      Status:                True
      Type:                  RolloutStarted
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               No override rules are configured for the selected resources
      Observed Generation:   1
      Reason:                NoOverrideSpecified
      Status:                True
      Type:                  Overridden
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               All of the works are synchronized to the latest
      Observed Generation:   1
      Reason:                AllWorkSynced
      Status:                True
      Type:                  WorkSynchronized
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               All corresponding work objects are applied
      Observed Generation:   1
      Reason:                AllWorkHaveBeenApplied
      Status:                True
      Type:                  Applied
      Last Transition Time:  2025-11-13T22:08:13Z
      Message:               All corresponding work objects are available
      Observed Generation:   1
      Reason:                AllWorkAreAvailable
      Status:                True
      Type:                  Available
    Observed Resource Index:  0
  Selected Resources:
    Kind:       ConfigMap
    Name:       app-config
    Namespace:  my-app
    Version:    v1
    Kind:       ConfigMap
    Name:       feature-flags
    Namespace:  my-app
    Version:    v1
Events:
  Type    Reason                        Age   From                  Message
  ----    ------                        ----  ----                  -------
  Normal  PlacementRolloutStarted       37s   placement-controller  Started rolling out the latest resources
  Normal  PlacementOverriddenSucceeded  36s   placement-controller  Placement has been successfully overridden
  Normal  PlacementWorkSynchronized     36s   placement-controller  Work(s) have been created or updated successfully for the selected cluster(s)
  Normal  PlacementApplied              36s   placement-controller  Resources have been applied to the selected cluster(s)
  Normal  PlacementAvailable            36s   placement-controller  Resources are available on the selected cluster(s)
  Normal  PlacementRolloutCompleted     36s   placement-controller  Placement has finished the rollout process and reached the desired status
```
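The per-cluster conditions in the describe output lend themselves to a scripted summary. The jsonpath in the comment below is an assumption based on the v1beta1 status fields shown above (check the field names with `kubectl get -o yaml`); the sample pairs mirror that output so the counting logic can run without a cluster:

```shell
# Sketch: summarize per-cluster availability from a placement's status.
# On the hub cluster you might emit "name status" pairs with (jsonpath is an
# assumption; verify the field names against your cluster):
#   kubectl get resourceplacement app-configs -n my-app -o jsonpath=\
#     '{range .status.placementStatuses[*]}{.clusterName}{" "}{.conditions[?(@.type=="Available")].status}{"\n"}{end}'
# Sample pairs stand in for live output here.
PAIRS="membercluster1 True
membercluster2 True"

READY=0
TOTAL=0
while read -r name status; do
  TOTAL=$((TOTAL + 1))
  [ "$status" = "True" ] && READY=$((READY + 1))
done <<EOF
$PAIRS
EOF
echo "$READY/$TOTAL clusters available"
```

A summary like this is easier to watch than the full describe output when a rollout targets many clusters.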
Verify resources on member clusters
You can verify that the ConfigMaps propagated successfully to the member clusters.
Get the credentials for one of your member clusters:

```
az aks get-credentials --resource-group <resource-group> --name membercluster1
```

Verify that the namespace exists:

```
kubectl get namespace my-app
```

Verify that the ConfigMaps exist in the namespace:

```
kubectl get configmap -n my-app
```

Your output should show both ConfigMaps:

```
NAME            DATA   AGE
app-config      2      2m
feature-flags   2      2m
```

View the contents of a ConfigMap:

```
kubectl describe configmap app-config -n my-app
```
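Beyond checking that the ConfigMaps exist, you can compare their data against the hub cluster's copy. The kubeconfig context names in the comment below are hypothetical, and sample values stand in for live output so the comparison logic can run on its own:

```shell
# Sketch: confirm a propagated ConfigMap matches the hub cluster's copy.
# With per-cluster kubeconfig contexts you might capture each copy as
# (context names "hub" and "membercluster1" are assumptions):
#   HUB=$(kubectl --context hub get configmap app-config -n my-app -o jsonpath='{.data}')
#   MEMBER=$(kubectl --context membercluster1 get configmap app-config -n my-app -o jsonpath='{.data}')
# Sample values stand in for live output here.
HUB='{"environment":"production","log-level":"info"}'
MEMBER='{"environment":"production","log-level":"info"}'

if [ "$HUB" = "$MEMBER" ]; then
  RESULT="configmaps match"
else
  RESULT="configmaps differ"
fi
echo "$RESULT"
```

A mismatch usually means the rollout hasn't reached that cluster yet or the resource was modified locally on the member cluster.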
Clean up resources
If you no longer want to use the ResourcePlacement objects you created, you can delete them by using the kubectl delete command:

```
kubectl delete resourceplacement app-configs -n my-app
```

Also remove the ClusterResourcePlacement for the namespace:

```
kubectl delete clusterresourceplacement my-app-namespace
```

To remove the namespace and all resources in it from the hub cluster:

```
kubectl delete namespace my-app
```
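After cleanup, the delete commands' targets should no longer exist on the hub cluster. One way to script the confirmation is to look for the NotFound error that `kubectl get` returns for a deleted resource; the sample error text below stands in for live output so the check runs without a cluster:

```shell
# Sketch: verify that cleanup removed the namespace from the hub cluster.
# On the hub cluster, getting a deleted resource by name fails with NotFound:
#   ERR=$(kubectl get namespace my-app 2>&1)
# Sample error text stands in for live output here.
ERR='Error from server (NotFound): namespaces "my-app" not found'

case "$ERR" in
  *NotFound*) RESULT="cleanup complete" ;;
  *)          RESULT="resources still present" ;;
esac
echo "$RESULT"
```

The same pattern works for the ResourcePlacement and ClusterResourcePlacement objects deleted above.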
Related content

For more information about namespace-scoped resource placement, see the following resources: