Deploy an Azure Nexus Kubernetes cluster by using an Azure Resource Manager template.
This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure Nexus Kubernetes cluster.
An Azure Resource Manager template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax. You describe your intended deployment without writing the sequence of programming commands to create it.
Prerequisites
If you don't have an Azure account, create a free account before you begin.
Use the Bash environment in Azure Cloud Shell. For more information, see Get started with Azure Cloud Shell.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows or macOS, consider running the Azure CLI in a Docker container. For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Authenticate to Azure using the Azure CLI.
When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use and manage extensions with the Azure CLI.
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
Install the latest version of the required Azure CLI extensions.
This article requires version 2.61.0 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
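As a quick sketch, you can check your CLI version and install the extension that provides the az networkcloud command group used later in this article; the extension name networkcloud is an assumption based on that command group.

# Check the installed Azure CLI version (2.61.0 or later is required)
az version

# Install or upgrade the extension assumed to provide the az networkcloud commands
az extension add --name networkcloud --upgrade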
If you have multiple Azure subscriptions, use the az account command to select the appropriate subscription ID in which the resources should be billed.
For a list of supported VM SKUs, see the VM SKU table in the Reference section.
For a list of supported Kubernetes versions, see Supported Kubernetes versions.
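For example, the active subscription can be selected and confirmed as follows; the subscription ID below is a placeholder.

# Select the subscription that should be billed for the resources
az account set --subscription "00000000-0000-0000-0000-000000000000"

# Confirm which subscription is currently active
az account show --query "{name:name, id:id}" --output table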
Create a resource group by using the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. The following example creates a resource group named myResourceGroup in the eastus location.
az group create --name myResourceGroup --location eastus
The following output example resembles successful creation of the resource group:
{
"id": "/subscriptions/<guid>/resourceGroups/myResourceGroup",
"location": "eastus",
"managedBy": null,
"name": "myResourceGroup",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
To deploy a Bicep file or an ARM template, you need write access to the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a cluster, you need Microsoft.NetworkCloud/kubernetesclusters/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see Azure built-in roles.
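One way to check whether your account holds a suitable role on the target resource group is to list your role assignments. This is only a sketch; the user principal name and subscription ID are placeholders.

# List role assignments for the signed-in user at the resource group scope
az role assignment list \
  --assignee "user@example.com" \
  --scope "/subscriptions/<subscription_id>/resourceGroups/myResourceGroup" \
  --include-inherited \
  --output table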
You need the custom location resource ID of your Azure Operator Nexus cluster.
You need to create various networks according to your specific workload requirements, and appropriate IP addresses must be available for your workloads. To ensure a smooth implementation, it's advisable to consult the relevant support teams for assistance.
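If you know the custom location's name and resource group, its resource ID can be looked up with the CLI. This sketch assumes the customlocation extension is installed and uses the same placeholders as the parameter file later in this article.

# Look up the custom location resource ID for the Operator Nexus instance
az customlocation show \
  --name "<custom-location-name>" \
  --resource-group "<managed_resource_group>" \
  --query id --output tsv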
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see Kubernetes core concepts for Azure Kubernetes Service (AKS).
Review the template
Before you deploy the Kubernetes template, let's review the content to understand its structure.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"kubernetesClusterName": {
"type": "string",
"metadata": {
"description": "The name of Nexus Kubernetes cluster"
}
},
"location": {
"type": "string",
"metadata": {
"description": "The Azure region where the cluster is to be deployed"
},
"defaultValue": "[resourceGroup().location]"
},
"extendedLocation": {
"type": "string",
"metadata": {
"description": "The custom location of the Nexus instance"
},
"defaultValue": ""
},
"tags": {
"type": "object",
"metadata": {
"description": "The metadata tags to be associated with the cluster resource"
},
"defaultValue": {}
},
"adminUsername": {
"type": "string",
"metadata": {
"description": "The username for the administrative account on the cluster"
},
"defaultValue": "azureuser"
},
"adminGroupObjectIds": {
"type": "array",
"metadata": {
"description": "The object IDs of Azure Active Directory (AAD) groups that will have administrative access to the cluster"
},
"defaultValue": []
},
"cniNetworkId": {
"type": "string",
"metadata": {
"description": "The Azure Resource Manager (ARM) id of the network to be used as the Container Networking Interface (CNI) network"
}
},
"cloudServicesNetworkId": {
"type": "string",
"metadata": {
"description": "The ARM id of the network to be used for cloud services network"
}
},
"podCidrs": {
"type": "array",
"metadata": {
"description": "The CIDR blocks used for Nexus Kubernetes PODs in the cluster"
},
"defaultValue": ["10.244.0.0/16"]
},
"serviceCidrs": {
"type": "array",
"metadata": {
"description": "The CIDR blocks used for k8s service in the cluster"
},
"defaultValue": ["10.96.0.0/16"]
},
"dnsServiceIp": {
"type": "string",
"metadata": {
"description": "The IP address of the DNS service in the cluster"
},
"defaultValue": "10.96.0.10"
},
"agentPoolL2Networks": {
"type": "array",
"metadata": {
"description": "The Layer 2 networks associated with the initial agent pool"
},
"defaultValue": []
/*
{
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
}
*/
},
"agentPoolL3Networks": {
"type": "array",
"metadata": {
"description": "The Layer 3 networks associated with the initial agent pool"
},
"defaultValue": []
/*
{
"ipamEnabled": "True/False",
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
}
*/
},
"agentPoolTrunkedNetworks": {
"type": "array",
"metadata": {
"description": "The trunked networks associated with the initial agent pool"
},
"defaultValue": []
/*
{
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
}
*/
},
"l2Networks": {
"type": "array",
"metadata": {
"description": "The Layer 2 networks associated with the cluster"
},
"defaultValue": []
/*
{
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
}
*/
},
"l3Networks": {
"type": "array",
"metadata": {
"description": "The Layer 3 networks associated with the cluster"
},
"defaultValue": []
/*
{
"ipamEnabled": "True/False",
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
}
*/
},
"trunkedNetworks": {
"type": "array",
"metadata": {
"description": "The trunked networks associated with the cluster"
},
"defaultValue": []
/*
{
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN"
}
*/
},
"ipAddressPools": {
"type": "array",
"metadata": {
"description": "The LoadBalancer IP address pools associated with the cluster"
},
"defaultValue": []
/*
{
"addresses": [
"string"
],
"autoAssign": "True/False",
"name": "sting",
"onlyUseHostIps": "True/False"
}
*/
},
"fabricPeeringEnabled": {
"type": "string",
"metadata": {
"description": "The indicator to specify if the load balancer peers with the network fabric."
},
"defaultValue": "True"
},
"bgpAdvertisements": {
"type": "array",
"metadata": {
"description": "The association of IP address pools to the communities and peers, allowing for announcement of IPs."
},
"defaultValue": []
/*
{
"advertiseToFabric": "True/False",
"communities": [
"string"
],
"ipAddressPools": [
"string"
],
"pools": [
"string"
]
}
*/
},
"bgpPeers": {
"type": "array",
"metadata": {
"description": "The list of additional BgpPeer entities that the Kubernetes cluster will peer with. All peering must be explicitly defined."
},
"defaultValue": []
/*
{
"bfdEnabled": "True/False",
"bgpMultiHop": "True/False",
"myAsn": 0-4294967295,
"name": "string",
"password": "string",
"peerAddress": "string",
"peerPort": 179
}
*/
},
"kubernetesVersion": {
"type": "string",
"metadata": {
"description": "The version of Kubernetes to be used in the Nexus Kubernetes cluster"
},
"defaultValue": "v1.27.1"
},
"controlPlaneCount": {
"type": "int",
"metadata": {
"description": "The number of control plane nodes to be deployed in the cluster"
},
"defaultValue": 1
},
"controlPlaneZones": {
"type": "array",
"metadata": {
"description": "The zones/racks used for placement of the control plane nodes"
},
"defaultValue": []
/* array of strings Example: ["1", "2", "3"] */
},
"agentPoolZones": {
"type": "array",
"metadata": {
"description": "The zones/racks used for placement of the agent pool nodes"
},
"defaultValue": []
/* array of strings Example: ["1", "2", "3"] */
},
"controlPlaneVmSkuName": {
"type": "string",
"metadata": {
"description": "The size of the control plane nodes"
},
"defaultValue": "NC_G6_28_v1"
},
"systemPoolNodeCount": {
"type": "int",
"metadata": {
"description": "The number of worker nodes to be deployed in the initial agent pool"
},
"defaultValue": 1
},
"workerVmSkuName": {
"type": "string",
"metadata": {
"description": "The size of the worker nodes"
},
"defaultValue": "NC_P10_56_v1"
},
"initialPoolAgentOptions": {
"type": "object",
"metadata": {
"description": "The configurations for the initial agent pool"
},
"defaultValue": {}
/*
"hugepagesCount": int,
"hugepagesSize": "2M/1G"
*/
},
"sshPublicKeys": {
"type": "array",
"metadata": {
"description": "The cluster wide SSH public key that will be associated with the given user for secure remote login"
},
"defaultValue": []
/*
{
"keyData": "ssh-rsa AAAAA...."
},
{
"keyData": "ssh-rsa BBBBB...."
}
*/
},
"controlPlaneSshKeys": {
"type": "array",
"metadata": {
"description": "The control plane SSH public key that will be associated with the given user for secure remote login"
},
"defaultValue": []
/*
{
"keyData": "ssh-rsa AAAAA...."
},
{
"keyData": "ssh-rsa BBBBB...."
}
*/
},
"agentPoolSshKeys": {
"type": "array",
"metadata": {
"description": "The agent pool SSH public key that will be associated with the given user for secure remote login"
},
"defaultValue": []
/*
{
"keyData": "ssh-rsa AAAAA...."
},
{
"keyData": "ssh-rsa BBBBB...."
}
*/
},
"labels": {
"type": "array",
"metadata": {
"description": "The labels to assign to the nodes in the cluster for identification and organization"
},
"defaultValue": []
/*
{
"key": "string",
"value": "string"
}
*/
},
"taints": {
"type": "array",
"metadata": {
"description": "The taints to apply to the nodes in the cluster to restrict which pods can be scheduled on them"
},
"defaultValue": []
/*
{
"key": "string",
"value": "string:NoSchedule|PreferNoSchedule|NoExecute"
}
*/
}
},
"resources": [
{
"type": "Microsoft.NetworkCloud/kubernetesClusters",
"apiVersion": "2025-02-01",
"name": "[parameters('kubernetesClusterName')]",
"location": "[parameters('location')]",
"tags": "[parameters('tags')]",
"extendedLocation": {
"name": "[parameters('extendedLocation')]",
"type": "CustomLocation"
},
"properties": {
"kubernetesVersion": "[parameters('kubernetesVersion')]",
"managedResourceGroupConfiguration": {
"name": "[concat(uniqueString(resourceGroup().name), '-', parameters('kubernetesClusterName'))]",
"location": "[parameters('location')]"
},
"aadConfiguration": {
"adminGroupObjectIds": "[parameters('adminGroupObjectIds')]"
},
"administratorConfiguration": {
"adminUsername": "[parameters('adminUsername')]",
"sshPublicKeys": "[if(empty(parameters('sshPublicKeys')), createArray(), parameters('sshPublicKeys'))]"
},
"initialAgentPoolConfigurations": [
{
"name": "[concat(parameters('kubernetesClusterName'), '-nodepool-1')]",
"administratorConfiguration": {
"adminUsername": "[parameters('adminUsername')]",
"sshPublicKeys": "[if(empty(parameters('agentPoolSshKeys')), createArray(), parameters('agentPoolSshKeys'))]"
},
"count": "[parameters('systemPoolNodeCount')]",
"vmSkuName": "[parameters('workerVmSkuName')]",
"mode": "System",
"labels": "[if(empty(parameters('labels')), json('null'), parameters('labels'))]",
"taints": "[if(empty(parameters('taints')), json('null'), parameters('taints'))]",
"agentOptions": "[if(empty(parameters('initialPoolAgentOptions')), json('null'), parameters('initialPoolAgentOptions'))]",
"attachedNetworkConfiguration": {
"l2Networks": "[if(empty(parameters('agentPoolL2Networks')), json('null'), parameters('agentPoolL2Networks'))]",
"l3Networks": "[if(empty(parameters('agentPoolL3Networks')), json('null'), parameters('agentPoolL3Networks'))]",
"trunkedNetworks": "[if(empty(parameters('agentPoolTrunkedNetworks')), json('null'), parameters('agentPoolTrunkedNetworks'))]"
},
"availabilityZones": "[if(empty(parameters('agentPoolZones')), json('null'), parameters('agentPoolZones'))]",
"upgradeSettings": {
"maxSurge": "1"
}
}
],
"controlPlaneNodeConfiguration": {
"administratorConfiguration": {
"adminUsername": "[parameters('adminUsername')]",
"sshPublicKeys": "[if(empty(parameters('controlPlaneSshKeys')), createArray(), parameters('controlPlaneSshKeys'))]"
},
"count": "[parameters('controlPlaneCount')]",
"vmSkuName": "[parameters('controlPlaneVmSkuName')]",
"availabilityZones": "[if(empty(parameters('controlPlaneZones')), json('null'), parameters('controlPlaneZones'))]"
},
"networkConfiguration": {
"cniNetworkId": "[parameters('cniNetworkId')]",
"cloudServicesNetworkId": "[parameters('cloudServicesNetworkId')]",
"dnsServiceIp": "[parameters('dnsServiceIp')]",
"podCidrs": "[parameters('podCidrs')]",
"serviceCidrs": "[parameters('serviceCidrs')]",
"attachedNetworkConfiguration": {
"l2Networks": "[if(empty(parameters('l2Networks')), json('null'), parameters('l2Networks'))]",
"l3Networks": "[if(empty(parameters('l3Networks')), json('null'), parameters('l3Networks'))]",
"trunkedNetworks": "[if(empty(parameters('trunkedNetworks')), json('null'), parameters('trunkedNetworks'))]"
},
"bgpServiceLoadBalancerConfiguration": {
"ipAddressPools": "[if(empty(parameters('ipAddressPools')), json('null'), parameters('ipAddressPools'))]",
"fabricPeeringEnabled": "[if(empty(parameters('fabricPeeringEnabled')), json('null'), parameters('fabricPeeringEnabled'))]",
"bgpAdvertisements": "[if(empty(parameters('bgpAdvertisements')), json('null'), parameters('bgpAdvertisements'))]",
"bgpPeers": "[if(empty(parameters('bgpPeers')), json('null'), parameters('bgpPeers'))]"
}
}
}
}
]
}
Once you have reviewed and saved the template file named kubernetes-deploy.json, proceed to the next section to deploy the template.
Deploy the template
- Create a file named kubernetes-deploy-parameters.json and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"kubernetesClusterName":{
"value": "myNexusK8sCluster"
},
"adminGroupObjectIds": {
"value": [
"00000000-0000-0000-0000-000000000000"
]
},
"cniNetworkId": {
"value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
},
"cloudServicesNetworkId": {
"value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
},
"extendedLocation": {
"value": "/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
},
"location": {
"value": "eastus"
},
"sshPublicKeys": {
"value": [
{
"keyData": "ssh-rsa AAAAA...."
},
{
"keyData": "ssh-rsa BBBBB...."
}
]
}
}
}
- Deploy the template.
az deployment group create \
--resource-group myResourceGroup \
--template-file kubernetes-deploy.json \
--parameters @kubernetes-deploy-parameters.json
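If you want to catch template or parameter errors without creating any resources, the same files can also be validated or previewed first.

# Validate the template and parameter files without deploying
az deployment group validate \
  --resource-group myResourceGroup \
  --template-file kubernetes-deploy.json \
  --parameters @kubernetes-deploy-parameters.json

# Preview the changes the deployment would make
az deployment group what-if \
  --resource-group myResourceGroup \
  --template-file kubernetes-deploy.json \
  --parameters @kubernetes-deploy-parameters.json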
If there isn't sufficient capacity to deploy the requested cluster nodes, an error message appears. However, this message doesn't provide any details about the available capacity. It states that the cluster creation can't proceed due to insufficient capacity.
Note
The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but eventually times out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
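Cluster creation can take some time, and the deployment state can be polled with the CLI. This is a sketch that assumes the default deployment name, which the CLI derives from the template file name when --name isn't supplied.

# Check the state of the ARM deployment (named after the template file by default)
az deployment group show \
  --resource-group myResourceGroup \
  --name kubernetes-deploy \
  --query properties.provisioningState --output tsv

# Or block until the deployment reaches a successfully created state
az deployment group wait \
  --resource-group myResourceGroup \
  --name kubernetes-deploy \
  --created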
Review deployed resources
After the deployment finishes, you can view the resources by using the CLI or the Azure portal.
To view the details of the myNexusK8sCluster cluster in the myResourceGroup resource group, run the following Azure CLI command:
az networkcloud kubernetescluster show \
--name myNexusK8sCluster \
--resource-group myResourceGroup
Additionally, to get a list of agent pool names associated with the myNexusK8sCluster cluster in the myResourceGroup resource group, you can use the following Azure CLI command.
az networkcloud kubernetescluster agentpool list \
--kubernetes-cluster-name myNexusK8sCluster \
--resource-group myResourceGroup \
--output table
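To inspect a single agent pool instead of listing them all, a corresponding show command should work. This is a sketch that assumes the initial pool name produced by the template (myNexusK8sCluster-nodepool-1).

# Show the details of one agent pool of the cluster
az networkcloud kubernetescluster agentpool show \
  --kubernetes-cluster-name myNexusK8sCluster \
  --name myNexusK8sCluster-nodepool-1 \
  --resource-group myResourceGroup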
Connect to the cluster
Now that the Nexus Kubernetes cluster has been successfully created and connected to Azure Arc, you can easily connect to it by using the cluster connect feature. Cluster connect allows you to securely access and manage your cluster from anywhere, which makes it convenient for interactive development, debugging, and cluster administration tasks.
For more information about available options, see Connect to the Azure Operator Nexus Kubernetes cluster.
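The steps that follow rely on the connectedk8s Azure CLI extension and kubectl. A minimal sketch of making sure both are available:

# Install or upgrade the Azure CLI extension used for cluster connect
az extension add --name connectedk8s --upgrade

# Verify kubectl is installed locally
kubectl version --client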
Note
When you create a Nexus Kubernetes cluster, Nexus automatically creates a managed resource group dedicated to storing the cluster resources, and the Arc connected cluster resource is established within this group.
To access your cluster, you need to set up the cluster connect kubeconfig. After you sign in to the Azure CLI with the relevant Microsoft Entra entity, you can obtain the kubeconfig necessary to communicate with the cluster from anywhere, even outside the firewall that surrounds it.
Set the CLUSTER_NAME, RESOURCE_GROUP, and SUBSCRIPTION_ID variables.
CLUSTER_NAME="myNexusK8sCluster"
RESOURCE_GROUP="myResourceGroup"
SUBSCRIPTION_ID=<set the correct subscription_id>
Use az to query the managed resource group and store it in MANAGED_RESOURCE_GROUP.
az account set -s $SUBSCRIPTION_ID
MANAGED_RESOURCE_GROUP=$(az networkcloud kubernetescluster show -n $CLUSTER_NAME -g $RESOURCE_GROUP --output tsv --query managedResourceGroupConfiguration.name)
The following command starts a connectedk8s proxy that allows you to connect to the Kubernetes API server of the specified Nexus Kubernetes cluster.
az connectedk8s proxy -n $CLUSTER_NAME -g $MANAGED_RESOURCE_GROUP &
Use kubectl to send requests to the cluster:
kubectl get pods -A
You should now see a response from the cluster listing its pods across all namespaces.
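Through the same proxy session, you can also list the nodes directly, for example:

# List the cluster nodes and confirm they are Ready
kubectl get nodes -o wide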
Note
If you see an error message such as "Failed to post access token to client proxy" or "Failed to connect to MSI", you might need to run az login to reauthenticate with Azure.
Add an agent pool
The cluster created in the previous step has a single node pool. Let's add a second agent pool by using an ARM template. The following example creates an agent pool named myNexusK8sCluster-nodepool-2:
- Review the template.
Before you add the agent pool template, let's review the content to understand its structure.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"kubernetesClusterName": {
"type": "string",
"metadata": {
"description": "The name of Nexus Kubernetes cluster"
}
},
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "The Azure region where the cluster is to be deployed"
}
},
"extendedLocation": {
"type": "string",
"metadata": {
"description": "The custom location of the Nexus instance"
}
},
"adminUsername": {
"type": "string",
"defaultValue": "azureuser",
"metadata": {
"description": "The username for the administrative account on the cluster"
}
},
"agentPoolSshKeys": {
"type": "array",
"metadata": {
"description": "The agent pool SSH public key that will be associated with the given user for secure remote login"
},
"defaultValue": []
/*
{
"keyData": "ssh-rsa AAAAA...."
},
{
"keyData": "ssh-rsa BBBBB...."
}
*/
},
"agentPoolNodeCount": {
"type": "int",
"defaultValue": 1,
"metadata": {
"description": "Number of nodes in the agent pool"
}
},
"agentPoolName": {
"type": "string",
"defaultValue": "nodepool-2",
"metadata": {
"description": "Agent pool name"
}
},
"agentVmSku": {
"type": "string",
"defaultValue": "NC_P10_56_v1",
"metadata": {
"description": "VM size of the agent nodes"
}
},
"agentPoolZones": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The zones/racks used for placement of the agent pool nodes"
}
/* array of strings Example: ["1", "2", "3"] */
},
"agentPoolMode": {
"type": "string",
"defaultValue": "User",
"metadata": {
"description": "Agent pool mode"
}
},
"agentOptions": {
"type": "object",
"defaultValue": {},
"metadata": {
"description": "The configurations for the initial agent pool"
}
/*
"hugepagesCount": int,
"hugepagesSize": "2M/1G"
*/
},
"labels": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The labels to assign to the nodes in the cluster for identification and organization"
}
/*
{
"key": "string",
"value": "string"
}
*/
},
"taints": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The taints to apply to the nodes in the cluster to restrict which pods can be scheduled on them"
}
/*
{
"key": "string",
"value": "string:NoSchedule|PreferNoSchedule|NoExecute"
}
*/
},
"l2Networks": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The Layer 2 networks to connect to the agent pool"
}
/*
{
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
}
*/
},
"l3Networks": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The Layer 3 networks to connect to the agent pool"
}
/*
{
"ipamEnabled": "True/False",
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
}
*/
},
"trunkedNetworks": {
"type": "array",
"defaultValue": [],
"metadata": {
"description": "The trunked networks to connect to the agent pool"
}
/*
{
"networkId": "string",
"pluginType": "SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN"
}
*/
}
},
"resources": [
{
"type": "Microsoft.NetworkCloud/kubernetesClusters/agentpools",
"apiVersion": "2025-02-01",
"name": "[concat(parameters('kubernetesClusterName'), '/', parameters('kubernetesClusterName'), '-', parameters('agentPoolName'))]",
"location": "[parameters('location')]",
"extendedLocation": {
"name": "[parameters('extendedLocation')]",
"type": "CustomLocation"
},
"properties": {
"administratorConfiguration": {
"adminUsername": "[parameters('adminUsername')]",
"sshPublicKeys": "[if(empty(parameters('agentPoolSshKeys')), json('null'), parameters('agentPoolSshKeys'))]"
},
"count": "[parameters('agentPoolNodeCount')]",
"mode": "[parameters('agentPoolMode')]",
"vmSkuName": "[parameters('agentVmSku')]",
"labels": "[if(empty(parameters('labels')), json('null'), parameters('labels'))]",
"taints": "[if(empty(parameters('taints')), json('null'), parameters('taints'))]",
"agentOptions": "[if(empty(parameters('agentOptions')), json('null'), parameters('agentOptions'))]",
"attachedNetworkConfiguration": {
"l2Networks": "[if(empty(parameters('l2Networks')), json('null'), parameters('l2Networks'))]",
"l3Networks": "[if(empty(parameters('l3Networks')), json('null'), parameters('l3Networks'))]",
"trunkedNetworks": "[if(empty(parameters('trunkedNetworks')), json('null'), parameters('trunkedNetworks'))]"
},
"availabilityZones": "[if(empty(parameters('agentPoolZones')), json('null'), parameters('agentPoolZones'))]",
"upgradeSettings": {
"maxSurge": "1"
}
},
"dependsOn": []
}
]
}
Once you have reviewed and saved the template file named kubernetes-add-agentpool.json, proceed to the next section to deploy the template.
- Create a file named kubernetes-nodepool-parameters.json and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"kubernetesClusterName":{
"value": "myNexusK8sCluster"
},
"extendedLocation": {
"value": "/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
}
}
}
- Deploy the template.
az deployment group create \
--resource-group myResourceGroup \
--template-file kubernetes-add-agentpool.json \
--parameters @kubernetes-nodepool-parameters.json
Note
You can add multiple agent pools during the initial creation of the cluster itself by using the initial agent pool configurations. However, if you want to add agent pools after the initial creation, you can use the preceding command to create additional agent pools for the Nexus Kubernetes cluster.
The following output example resembles successful creation of the agent pool.
$ az networkcloud kubernetescluster agentpool list --kubernetes-cluster-name myNexusK8sCluster --resource-group myResourceGroup --output table
This command is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Count Location Mode Name ProvisioningState ResourceGroup VmSkuName
------- ---------- ------ ---------------------------- ------------------- --------------- -----------
1 eastus System myNexusK8sCluster-nodepool-1 Succeeded myResourceGroup NC_P10_56_v1
1 eastus User myNexusK8sCluster-nodepool-2 Succeeded myResourceGroup NC_P10_56_v1
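If you need yet another pool later, the same agent pool template can be reused by overriding parameters on the command line instead of editing the parameter file. The pool name and node count below are hypothetical values for illustration.

# Reuse the template; inline parameters override values from the parameter file
az deployment group create \
  --resource-group myResourceGroup \
  --template-file kubernetes-add-agentpool.json \
  --parameters @kubernetes-nodepool-parameters.json \
  --parameters agentPoolName="nodepool-3" agentPoolNodeCount=2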
Clean up resources
When no longer needed, delete the resource group. The resource group and all the resources it contains are deleted.
Use the az group delete command to remove the resource group, the Kubernetes cluster, and all related resources except the Operator Nexus network resources.
az group delete --name myResourceGroup --yes --no-wait
Next steps
You can now deploy CNFs either directly through cluster connect or through Azure Operator Service Manager.