Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep
In this quickstart, you deploy an Azure Nexus Kubernetes cluster by using Bicep.
Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. It provides concise syntax, reliable type safety, and support for code reuse. Bicep offers the best authoring experience for your infrastructure-as-code solutions in Azure.
If you prefer to run CLI reference commands locally, install the Azure CLI. If you're running on Windows or macOS, consider running Azure CLI in a Docker container. For more information, see How to run the Azure CLI in a Docker container.
If you're using a local installation, sign in to the Azure CLI by using the az login command. To finish the authentication process, follow the steps displayed in your terminal. For other sign-in options, see Sign in with the Azure CLI.
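For example:
Azure CLI
az login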
When you're prompted, install the Azure CLI extension on first use. For more information about extensions, see Use extensions with the Azure CLI.
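The az networkcloud commands used in this quickstart come from the networkcloud CLI extension (the extension name is assumed here; confirm it in your environment). If it isn't installed automatically, you can add it yourself:
Azure CLI
az extension add --name networkcloud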
Run az version to find the version and dependent libraries that are installed. To upgrade to the latest version, run az upgrade.
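For example:
Azure CLI
az version
az upgrade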
Create a resource group using the az group create command. An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. The following example creates a resource group named myResourceGroup in the eastus location.
Azure CLI
az group create --name myResourceGroup --location eastus
The following example output shows successful creation of the resource group:
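(The subscription ID in the id field is shown as a placeholder.)
JSON
{
  "id": "/subscriptions/<subscription_id>/resourceGroups/myResourceGroup",
  "location": "eastus",
  "managedBy": null,
  "name": "myResourceGroup",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}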
To deploy a Bicep file or ARM template, you need write access on the resources you're deploying and access to all operations on the Microsoft.Resources/deployments resource type. For example, to deploy a cluster, you need Microsoft.NetworkCloud/kubernetesclusters/write and Microsoft.Resources/deployments/* permissions. For a list of roles and permissions, see Azure built-in roles.
You need the custom location resource ID of your Azure Operator Nexus cluster.
You need to create various networks according to your specific workload requirements, and it's essential to have the appropriate IP addresses available for your workloads. To ensure a smooth implementation, it's advisable to consult the relevant support teams for assistance.
Before deploying the Kubernetes template, let's review the content to understand its structure.
Bicep
// Azure parameters

@description('The name of Nexus Kubernetes cluster')
param kubernetesClusterName string

@description('The Azure region where the cluster is to be deployed')
param location string = resourceGroup().location

@description('The custom location of the Nexus instance')
param extendedLocation string

@description('The metadata tags to be associated with the cluster resource')
param tags object = {}

@description('The username for the administrative account on the cluster')
param adminUsername string = 'azureuser'

@description('The object IDs of Azure Active Directory (AAD) groups that will have administrative access to the cluster')
param adminGroupObjectIds array = []

// Networking Parameters

@description('The Azure Resource Manager (ARM) id of the network to be used as the Container Networking Interface (CNI) network')
param cniNetworkId string

@description('The ARM id of the network to be used for cloud services network')
param cloudServicesNetworkId string

@description('The CIDR blocks used for Nexus Kubernetes PODs in the cluster')
param podCidrs array = ['10.244.0.0/16']

@description('The CIDR blocks used for k8s service in the cluster')
param serviceCidrs array = ['10.96.0.0/16']

@description('The IP address of the DNS service in the cluster')
param dnsServiceIp string = '10.96.0.10'

@description('The Layer 2 networks associated with the initial agent pool')
param agentPoolL2Networks array = []
// {
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN'
// }

@description('The Layer 3 networks associated with the initial agent pool')
param agentPoolL3Networks array = []
// {
//   ipamEnabled: 'True/False'
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN'
// }

@description('The trunked networks associated with the initial agent pool')
param agentPoolTrunkedNetworks array = []
// {
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN'
// }

@description('The Layer 2 networks associated with the cluster')
param l2Networks array = []
// {
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN'
// }

@description('The Layer 3 networks associated with the cluster')
param l3Networks array = []
// {
//   ipamEnabled: 'True/False'
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN'
// }

@description('The trunked networks associated with the cluster')
param trunkedNetworks array = []
// {
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN'
// }

@description('The LoadBalancer IP address pools associated with the cluster')
param ipAddressPools array = []
// {
//   addresses: [
//     'string'
//   ]
//   autoAssign: 'True/False'
//   name: 'string'
//   onlyUseHostIps: 'True/False'
// }

// Cluster Configuration Parameters

@description('The version of Kubernetes to be used in the Nexus Kubernetes cluster')
param kubernetesVersion string = 'v1.27.1'

@description('The number of control plane nodes to be deployed in the cluster')
param controlPlaneCount int = 1

@description('The zones/racks used for placement of the control plane nodes')
param controlPlaneZones array = []
// "string" Example: ["1", "2", "3"]

@description('The zones/racks used for placement of the agent pool nodes')
param agentPoolZones array = []
// "string" Example: ["1", "2", "3"]

@description('The size of the control plane nodes')
param controlPlaneVmSkuName string = 'NC_G6_28_v1'

@description('The number of worker nodes to be deployed in the initial agent pool')
param systemPoolNodeCount int = 1

@description('The size of the worker nodes')
param workerVmSkuName string = 'NC_P10_56_v1'

@description('The configurations for the initial agent pool')
param initialPoolAgentOptions object = {}
// {
//   "hugepagesCount": integer,
//   "hugepagesSize": "2M/1G"
// }

@description('The cluster wide SSH public key that will be associated with the given user for secure remote login')
param sshPublicKeys array = []
// {
//   keyData: "ssh-rsa AAAAA...."
// },
// {
//   keyData: "ssh-rsa AAAAA...."
// }

@description('The control plane SSH public key that will be associated with the given user for secure remote login')
param controlPlaneSshKeys array = []
// {
//   keyData: "ssh-rsa AAAAA...."
// },
// {
//   keyData: "ssh-rsa AAAAA...."
// }

@description('The agent pool SSH public key that will be associated with the given user for secure remote login')
param agentPoolSshKeys array = []
// {
//   keyData: "ssh-rsa AAAAA...."
// },
// {
//   keyData: "ssh-rsa AAAAA...."
// }

@description('The labels to assign to the nodes in the cluster for identification and organization')
param labels array = []
// {
//   key: 'string'
//   value: 'string'
// }

@description('The taints to apply to the nodes in the cluster to restrict which pods can be scheduled on them')
param taints array = []
// {
//   key: 'string'
//   value: 'string:NoSchedule|PreferNoSchedule|NoExecute'
// }

@description('The association of IP address pools to the communities and peers, allowing for announcement of IPs.')
param bgpAdvertisements array = []

@description('The list of additional BgpPeer entities that the Kubernetes cluster will peer with. All peering must be explicitly defined.')
param bgpPeers array = []

@description('The indicator to specify if the load balancer peers with the network fabric.')
param fabricPeeringEnabled string = 'False'

resource kubernetescluster 'Microsoft.NetworkCloud/kubernetesClusters@2025-02-01' = {
  name: kubernetesClusterName
  location: location
  tags: tags
  extendedLocation: {
    name: extendedLocation
    type: 'CustomLocation'
  }
  properties: {
    kubernetesVersion: kubernetesVersion
    managedResourceGroupConfiguration: {
      name: '${uniqueString(resourceGroup().name)}-${kubernetesClusterName}'
      location: location
    }
    aadConfiguration: {
      adminGroupObjectIds: adminGroupObjectIds
    }
    administratorConfiguration: {
      adminUsername: adminUsername
      sshPublicKeys: empty(sshPublicKeys) ? [] : sshPublicKeys
    }
    initialAgentPoolConfigurations: [
      {
        name: '${kubernetesClusterName}-nodepool-1'
        administratorConfiguration: {
          adminUsername: adminUsername
          sshPublicKeys: empty(agentPoolSshKeys) ? [] : agentPoolSshKeys
        }
        count: systemPoolNodeCount
        vmSkuName: workerVmSkuName
        mode: 'System'
        labels: empty(labels) ? null : labels
        taints: empty(taints) ? null : taints
        agentOptions: empty(initialPoolAgentOptions) ? null : initialPoolAgentOptions
        attachedNetworkConfiguration: {
          l2Networks: empty(agentPoolL2Networks) ? null : agentPoolL2Networks
          l3Networks: empty(agentPoolL3Networks) ? null : agentPoolL3Networks
          trunkedNetworks: empty(agentPoolTrunkedNetworks) ? null : agentPoolTrunkedNetworks
        }
        availabilityZones: empty(agentPoolZones) ? null : agentPoolZones
        upgradeSettings: {
          maxSurge: '1'
        }
      }
    ]
    controlPlaneNodeConfiguration: {
      administratorConfiguration: {
        adminUsername: adminUsername
        sshPublicKeys: empty(controlPlaneSshKeys) ? [] : controlPlaneSshKeys
      }
      count: controlPlaneCount
      vmSkuName: controlPlaneVmSkuName
      availabilityZones: empty(controlPlaneZones) ? null : controlPlaneZones
    }
    networkConfiguration: {
      cniNetworkId: cniNetworkId
      cloudServicesNetworkId: cloudServicesNetworkId
      dnsServiceIp: dnsServiceIp
      podCidrs: podCidrs
      serviceCidrs: serviceCidrs
      attachedNetworkConfiguration: {
        l2Networks: empty(l2Networks) ? null : l2Networks
        l3Networks: empty(l3Networks) ? null : l3Networks
        trunkedNetworks: empty(trunkedNetworks) ? null : trunkedNetworks
      }
      bgpServiceLoadBalancerConfiguration: {
        bgpAdvertisements: empty(bgpAdvertisements) ? null : bgpAdvertisements
        bgpPeers: empty(bgpPeers) ? null : bgpPeers
        fabricPeeringEnabled: fabricPeeringEnabled
        ipAddressPools: empty(ipAddressPools) ? null : ipAddressPools
      }
    }
  }
}
Once you have reviewed and saved the template file named kubernetes-deploy.bicep, proceed to the next section to deploy the template.
Deploy the Bicep file
Create a file named kubernetes-deploy-parameters.json and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
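The following minimal parameter file is a sketch: every <...> value is a placeholder for a resource ID, group object ID, or SSH key from your own environment, and the parameter names match the template reviewed earlier.
JSON
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "kubernetesClusterName": {
      "value": "myNexusK8sCluster"
    },
    "extendedLocation": {
      "value": "<custom-location-resource-id>"
    },
    "cniNetworkId": {
      "value": "<cni-network-resource-id>"
    },
    "cloudServicesNetworkId": {
      "value": "<cloud-services-network-resource-id>"
    },
    "adminGroupObjectIds": {
      "value": ["<admin-group-object-id>"]
    },
    "sshPublicKeys": {
      "value": [
        {
          "keyData": "ssh-rsa AAAAB3...."
        }
      ]
    }
  }
}
Then deploy the template by using the az deployment group create command:
Azure CLI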
az deployment group create \
--resource-group myResourceGroup \
--template-file kubernetes-deploy.bicep \
--parameters @kubernetes-deploy-parameters.json
If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, this message states only that the cluster creation can't proceed because of insufficient capacity; it doesn't provide any details about the capacity that's available.
Note
The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
Review deployed resources
After the deployment finishes, you can view the resources using the CLI or the Azure portal.
To view the details of the myNexusK8sCluster cluster in the myResourceGroup resource group, execute the following Azure CLI command:
Azure CLI
az networkcloud kubernetescluster show \
--name myNexusK8sCluster \
--resource-group myResourceGroup
Additionally, to get a list of agent pool names associated with the myNexusK8sCluster cluster in the myResourceGroup resource group, you can use the following Azure CLI command.
Azure CLI
az networkcloud kubernetescluster agentpool list \
--kubernetes-cluster-name myNexusK8sCluster \
--resource-group myResourceGroup \
--output table
Connect to the cluster
Now that the Nexus Kubernetes cluster has been successfully created and connected to Azure Arc, you can easily connect to it using the cluster connect feature. Cluster connect allows you to securely access and manage your cluster from anywhere, making it convenient for interactive development, debugging, and cluster administration tasks.
When you create a Nexus Kubernetes cluster, Nexus automatically creates a managed resource group dedicated to storing the cluster resources. Within this group, the Arc-connected cluster resource is established.
To access your cluster, you need to set up the cluster connect kubeconfig. After logging into Azure CLI with the relevant Microsoft Entra entity, you can obtain the kubeconfig necessary to communicate with the cluster from anywhere, even outside the firewall that surrounds it.
Set the CLUSTER_NAME, RESOURCE_GROUP, and SUBSCRIPTION_ID variables.
Azure CLI
CLUSTER_NAME="myNexusK8sCluster"
RESOURCE_GROUP="myResourceGroup"
SUBSCRIPTION_ID=<set the correct subscription_id>
Query the managed resource group with az and store the name in MANAGED_RESOURCE_GROUP.
Azure CLI
az account set -s $SUBSCRIPTION_ID
MANAGED_RESOURCE_GROUP=$(az networkcloud kubernetescluster show -n $CLUSTER_NAME -g $RESOURCE_GROUP --output tsv --query managedResourceGroupConfiguration.name)
The following command starts a connectedk8s proxy that allows you to connect to the Kubernetes API server for the specified Nexus Kubernetes cluster.
Azure CLI
az connectedk8s proxy -n $CLUSTER_NAME -g $MANAGED_RESOURCE_GROUP &
Use kubectl to send requests to the cluster:
Azure CLI
kubectl get pods -A
You should now see a response from the cluster containing the list of all pods across all namespaces.
Note
If you see the error message "Failed to post access token to client proxyFailed to connect to MSI", you may need to perform an az login to re-authenticate with Azure.
Add an agent pool
The cluster created in the previous step has a single node pool. Let's add a second agent pool using the Bicep template. The following example creates an agent pool named myNexusK8sCluster-nodepool-2:
Review the template.
Before adding the agent pool template, let's review the content to understand its structure.
Bicep
// Azure Parameters

@description('The name of Nexus Kubernetes cluster')
param kubernetesClusterName string

@description('The Azure region where the cluster is to be deployed')
param location string = resourceGroup().location

@description('The custom location of the Nexus instance')
param extendedLocation string

@description('Tags to be associated with the resource')
param tags object = {}

@description('The username for the administrative account on the cluster')
param adminUsername string = 'azureuser'

@description('The agent pool SSH public key that will be associated with the given user for secure remote login')
param agentPoolSshKeys array = []
// {
//   keyData: "ssh-rsa AAAAA...."
// },
// {
//   keyData: "ssh-rsa AAAAA...."
// }

// Cluster Configuration Parameters

@description('Number of nodes in the agent pool')
param agentPoolNodeCount int = 1

@description('Agent pool name')
param agentPoolName string = 'nodepool-2'

@description('VM size of the agent nodes')
param agentVmSku string = 'NC_P10_56_v1'

@description('The zones/racks used for placement of the agent pool nodes')
param agentPoolZones array = []
// "string" Example: ["1", "2", "3"]

@description('Agent pool mode')
param agentPoolMode string = 'User'

@description('The configurations for the initial agent pool')
param agentOptions object = {}
// {
//   "hugepagesCount": integer,
//   "hugepagesSize": "2M/1G"
// }

@description('The labels to assign to the nodes in the cluster for identification and organization')
param labels array = []
// {
//   key: 'string'
//   value: 'string'
// }

@description('The taints to apply to the nodes in the cluster to restrict which pods can be scheduled on them')
param taints array = []
// {
//   key: 'string'
//   value: 'string:NoSchedule|PreferNoSchedule|NoExecute'
// }

// Networking Parameters

@description('The Layer 2 networks to connect to the agent pool')
param l2Networks array = []
// {
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN'
// }

@description('The Layer 3 networks to connect to the agent pool')
param l3Networks array = []
// {
//   ipamEnabled: 'True/False'
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN'
// }

@description('The trunked networks to connect to the agent pool')
param trunkedNetworks array = []
// {
//   networkId: 'string'
//   pluginType: 'SRIOV|DPDK|OSDevice|MACVLAN|IPVLAN'
// }

resource agentPools 'Microsoft.NetworkCloud/kubernetesClusters/agentPools@2025-02-01' = {
  name: '${kubernetesClusterName}/${kubernetesClusterName}-${agentPoolName}'
  location: location
  tags: tags
  extendedLocation: {
    name: extendedLocation
    type: 'CustomLocation'
  }
  properties: {
    administratorConfiguration: {
      adminUsername: adminUsername
      sshPublicKeys: empty(agentPoolSshKeys) ? null : agentPoolSshKeys
    }
    attachedNetworkConfiguration: {
      l2Networks: empty(l2Networks) ? null : l2Networks
      l3Networks: empty(l3Networks) ? null : l3Networks
      trunkedNetworks: empty(trunkedNetworks) ? null : trunkedNetworks
    }
    count: agentPoolNodeCount
    mode: agentPoolMode
    vmSkuName: agentVmSku
    labels: empty(labels) ? null : labels
    taints: empty(taints) ? null : taints
    agentOptions: empty(agentOptions) ? null : agentOptions
    availabilityZones: empty(agentPoolZones) ? null : agentPoolZones
    upgradeSettings: {
      maxSurge: '1'
    }
  }
}
Once you have reviewed and saved the template file named kubernetes-add-agentpool.bicep, proceed to the next section to deploy the template.
Create a file named kubernetes-nodepool-parameters.json and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
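The following minimal sketch supplies only the two parameters that have no defaults in the template; the custom location ID is a placeholder for the value from your own environment.
JSON
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "kubernetesClusterName": {
      "value": "myNexusK8sCluster"
    },
    "extendedLocation": {
      "value": "<custom-location-resource-id>"
    }
  }
}
Then deploy the agent pool template by using the az deployment group create command:
Azure CLI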
az deployment group create \
--resource-group myResourceGroup \
--template-file kubernetes-add-agentpool.bicep \
--parameters @kubernetes-nodepool-parameters.json
Note
You can add multiple agent pools during the initial creation of your cluster by using the initial agent pool configurations. However, to add agent pools after the initial creation, use the preceding command to create additional agent pools for your Nexus Kubernetes cluster.
The following example output shows successful creation of the agent pool:
Bash
$ az networkcloud kubernetescluster agentpool list --kubernetes-cluster-name myNexusK8sCluster --resource-group myResourceGroup --output table
This command is experimental and under development. Reference and support levels: https://aka.ms/CLI_refstatus
Count Location Mode Name ProvisioningState ResourceGroup VmSkuName
------- ---------- ------ ---------------------------- ------------------- --------------- -----------
1 eastus System myNexusK8sCluster-nodepool-1 Succeeded myResourceGroup NC_P10_56_v1
1 eastus User myNexusK8sCluster-nodepool-2 Succeeded myResourceGroup NC_P10_56_v1
Clean up resources
When no longer needed, delete the resource group. Deleting the resource group also deletes all the resources it contains.
Use the az group delete command to remove the resource group, Kubernetes cluster, and all related resources except the Operator Nexus network resources.
Azure CLI
az group delete --name myResourceGroup --yes --no-wait
Use the Remove-AzResourceGroup cmdlet to remove the resource group, Kubernetes cluster, and all related resources except the Operator Nexus network resources.
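For example:
Azure PowerShell
Remove-AzResourceGroup -Name myResourceGroup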