Deploy an Azure Nexus Kubernetes cluster using Azure PowerShell.
This quickstart guide helps you get started with Nexus Kubernetes clusters. By following the steps in this guide, you can quickly create a customized Nexus Kubernetes cluster that meets your specific needs and requirements. Whether you're new to Nexus networking or an expert, this guide walks you through everything you need to know to customize and create a Nexus Kubernetes cluster.
Before you begin
If you don't have an Azure account, create a free account before you begin.
- Use the PowerShell environment in Azure Cloud Shell. For more information, see Quickstart for PowerShell in Azure Cloud Shell.
If you are running PowerShell locally, install the Az PowerShell module and connect to your Azure account using the Connect-AzAccount cmdlet. For more information about installing the Az PowerShell module, see Install Azure PowerShell.
If you have multiple Azure subscriptions, select the subscription in which the resources should be billed by using the Set-AzContext cmdlet. After you set the subscription, you don't need to pass 'SubscriptionId' with each PowerShell command.
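For example, you can set the active subscription like this (the subscription ID shown is a placeholder):

```powershell
# Set the subscription that should be billed for the resources created
# in this quickstart. Replace the placeholder with your subscription ID.
Set-AzContext -Subscription "00000000-0000-0000-0000-000000000000"
```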
Refer to the VM SKU table in the reference section for the list of supported VM SKUs.
Refer to the supported Kubernetes versions for the list of supported Kubernetes versions.
Create a resource group using the New-AzResourceGroup cmdlet. An Azure resource group is a logical group in which Azure resources are deployed and managed. When you create a resource group, you're prompted to specify a location. This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. The following example creates a resource group named myResourceGroup in the eastus location.
New-AzResourceGroup -Name myResourceGroup -Location eastus
The following example output shows successful creation of the resource group:
ResourceGroupName : myResourceGroup
Location          : eastus
ProvisioningState : Succeeded
Tags              :
ResourceId        : /subscriptions/aaaa0a0a-bb1b-cc2c-dd3d-eeeeee4e4e4e/resourceGroups/myResourceGroup
You need the custom location resource ID of your Azure Operator Nexus cluster. You also need to create various networks according to your specific workload requirements, and it's essential to have the appropriate IP addresses available for your workloads. To ensure a smooth implementation, consult the relevant support teams for assistance.
This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see Kubernetes core concepts for Azure Kubernetes Service (AKS).
Create an Azure Nexus Kubernetes cluster
The following example creates a cluster named myNexusK8sCluster in resource group myResourceGroup in the eastus location.
Before you run the commands, you need to set several variables to define the configuration for your cluster. Here are the variables you need to set, along with some default values you can use for certain variables:
| Variable | Description |
|---|---|
| LOCATION | The Azure region where you want to create your cluster. |
| RESOURCE_GROUP | The name of the Azure resource group where you want to create the cluster. |
| SUBSCRIPTION_ID | The ID of your Azure subscription. |
| CUSTOM_LOCATION | The resource ID of the custom location of the Nexus instance. |
| CSN_ARM_ID | The unique identifier (ARM ID) of the cloud services network you want to use. |
| CNI_ARM_ID | The unique identifier (ARM ID) of the network interface to be used by the container runtime. |
| AAD_ADMIN_GROUP_OBJECT_ID | The object ID of the Microsoft Entra group that should have admin privileges on the cluster. |
| CLUSTER_NAME | The name you want to give to your Nexus Kubernetes cluster. |
| K8S_VERSION | The version of Kubernetes you want to use. |
| ADMIN_USERNAME | The username for the cluster administrator. |
| SSH_PUBLIC_KEY | The SSH public key used for secure communication with the cluster. |
| CONTROL_PLANE_COUNT | The number of control plane nodes for the cluster. |
| CONTROL_PLANE_VM_SIZE | The size of the virtual machine for the control plane nodes. |
| INITIAL_AGENT_POOL_NAME | The name of the initial agent pool. |
| INITIAL_AGENT_POOL_COUNT | The number of nodes in the initial agent pool. |
| INITIAL_AGENT_POOL_VM_SIZE | The size of the virtual machine for the initial agent pool. |
| MODE | The mode of the agent pool: `System`, `User`, or `NotApplicable`. |
| AGENT_POOL_CONFIGURATION | The agent pool configuration for pools that run critical system services and workloads. |
| POD_CIDR | The network range for the Kubernetes pods in the cluster, in CIDR notation. |
| SERVICE_CIDR | The network range for the Kubernetes services in the cluster, in CIDR notation. |
| DNS_SERVICE_IP | The IP address for the Kubernetes DNS service. |
Once you've defined these variables, you can run the Azure PowerShell command to create the cluster. Add the -Debug flag at the end to get more detailed output for troubleshooting.
To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:
# Azure parameters
$RESOURCE_GROUP="myResourceGroup"
$SUBSCRIPTION="<Azure subscription ID>"
$CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
$CUSTOM_LOCATION_TYPE="CustomLocation"
$LOCATION="<ClusterAzureRegion>"
# Network parameters
$CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
$CNI_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
$POD_CIDR="10.244.0.0/16"
$SERVICE_CIDR="10.96.0.0/16"
$DNS_SERVICE_IP="10.96.0.10"
# AgentPoolConfiguration parameters
$INITIAL_AGENT_POOL_COUNT="1"
$MODE="System"
$INITIAL_AGENT_POOL_NAME="agentpool1"
$INITIAL_AGENT_POOL_VM_SIZE="NC_P10_56_v1"
# NAKS Cluster Parameters
$CLUSTER_NAME="myNexusK8sCluster"
$SSH_PUBLIC_KEY = @{
KeyData = "$(cat ~/.ssh/id_rsa.pub)"
}
$K8S_VERSION="1.24.9"
$AAD_ADMIN_GROUP_OBJECT_ID="3d4c8620-ac8c-4bd6-9a92-f2b75923ef9f"
$ADMIN_USERNAME="azureuser"
$CONTROL_PLANE_COUNT="1"
$CONTROL_PLANE_VM_SIZE="NC_G6_28_v1"
$AGENT_POOL_CONFIGURATION = New-AzNetworkCloudInitialAgentPoolConfigurationObject `
-Count $INITIAL_AGENT_POOL_COUNT `
-Mode $MODE `
-Name $INITIAL_AGENT_POOL_NAME `
-VmSkuName $INITIAL_AGENT_POOL_VM_SIZE
Important
It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, CNI_ARM_ID, and AAD_ADMIN_GROUP_OBJECT_ID with your actual values before running these commands.
After defining these variables, you can create the Kubernetes cluster by executing the following Azure PowerShell command:
New-AzNetworkCloudKubernetesCluster -KubernetesClusterName $CLUSTER_NAME `
-ResourceGroupName $RESOURCE_GROUP `
-SubscriptionId $SUBSCRIPTION `
-Location $LOCATION `
-ExtendedLocationName $CUSTOM_LOCATION `
-ExtendedLocationType $CUSTOM_LOCATION_TYPE `
-KubernetesVersion $K8S_VERSION `
-AadConfigurationAdminGroupObjectId $AAD_ADMIN_GROUP_OBJECT_ID `
-AdminUsername $ADMIN_USERNAME `
-SshPublicKey $SSH_PUBLIC_KEY `
-ControlPlaneNodeConfigurationCount $CONTROL_PLANE_COUNT `
-ControlPlaneNodeConfigurationVMSkuName $CONTROL_PLANE_VM_SIZE `
-InitialAgentPoolConfiguration $AGENT_POOL_CONFIGURATION `
-NetworkConfigurationCloudServicesNetworkId $CSN_ARM_ID `
-NetworkConfigurationCniNetworkId $CNI_ARM_ID `
-NetworkConfigurationPodCidr $POD_CIDR `
-NetworkConfigurationDnsServiceIP $DNS_SERVICE_IP `
-NetworkConfigurationServiceCidr $SERVICE_CIDR
If there isn't enough capacity to deploy the requested cluster nodes, an error message appears. However, the message doesn't provide any details about the available capacity; it only states that cluster creation can't proceed due to insufficient capacity.
Note
The capacity calculation takes into account the entire platform cluster, rather than being limited to individual racks. Therefore, if an agent pool is created in a zone (where a rack equals a zone) with insufficient capacity, but another zone has enough capacity, the cluster creation continues but will eventually time out. This approach to capacity checking only makes sense if a specific zone isn't specified during the creation of the cluster or agent pool.
After a few minutes, the command completes and returns information about the cluster. For more advanced options, see Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep.
Review deployed resources
After the deployment finishes, you can view the resources by using PowerShell or the Azure portal.
To view the details of the myNexusK8sCluster cluster in the myResourceGroup resource group, run the following Azure PowerShell command:
Get-AzNetworkCloudKubernetesCluster -KubernetesClusterName myNexusK8sCluster `
-ResourceGroupName myResourceGroup `
-SubscriptionId <mySubscription>
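To check just the cluster's provisioning state instead of the full details, you can select that property from the returned object. This is a sketch; the ProvisioningState property name is assumed here based on the usual shape of Az resource objects:

```powershell
# Retrieve the cluster and print only its provisioning state.
# The ProvisioningState property name is an assumption based on
# the typical output shape of Az resource cmdlets.
$cluster = Get-AzNetworkCloudKubernetesCluster -KubernetesClusterName myNexusK8sCluster `
 -ResourceGroupName myResourceGroup `
 -SubscriptionId <mySubscription>
$cluster.ProvisioningState
```

A value of Succeeded indicates the cluster deployed successfully.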
Additionally, to get a list of agent pool names associated with the myNexusK8sCluster cluster in the myResourceGroup resource group, use the following Azure PowerShell command:
Get-AzNetworkCloudAgentPool -KubernetesClusterName myNexusK8sCluster `
-ResourceGroupName myResourceGroup `
-SubscriptionId <mySubscription>
Connect to the cluster
Now that the Nexus Kubernetes cluster has been successfully created and connected to Azure Arc, you can easily connect to it using the cluster connect feature. Cluster connect allows you to securely access and manage your cluster from anywhere, making it convenient for interactive development, debugging, and cluster administration tasks.
For more detailed information about available options, see Connect to an Azure Operator Nexus Kubernetes cluster.
Note
When you create a Nexus Kubernetes cluster, Nexus automatically creates a managed resource group dedicated to storing the cluster resources. Within this group, the Arc connected cluster resource is established.
To access your cluster, you need to set up the cluster connect kubeconfig. After signing in to Azure PowerShell with the relevant Microsoft Entra entity, you can obtain the kubeconfig necessary to communicate with the cluster from anywhere, even outside the firewall that surrounds it.
Set CLUSTER_NAME, RESOURCE_GROUP, LOCATION and SUBSCRIPTION_ID variables.
$CLUSTER_NAME="myNexusK8sCluster"
$RESOURCE_GROUP="myResourceGroup"
$SUBSCRIPTION="<mySubscription>"
$LOCATION="<ClusterAzureRegion>"
$MANAGED_RESOURCE_GROUP=(Get-AzNetworkCloudKubernetesCluster -KubernetesClusterName $CLUSTER_NAME `
 -SubscriptionId $SUBSCRIPTION `
 -ResourceGroupName $RESOURCE_GROUP `
 | Select-Object -ExpandProperty ManagedResourceGroupConfigurationName)
Run the following command to connect to the cluster.
New-AzConnectedKubernetes -ClusterName $CLUSTER_NAME -ResourceGroupName $MANAGED_RESOURCE_GROUP -Location $LOCATION
Use kubectl to send requests to the cluster:

kubectl get pods -A

You should now see a response from the cluster containing the list of all pods.
Note
If you see the error message "Failed to post access token to client proxyFailed to connect to MSI", you might need to run az login to reauthenticate with Azure.
Add an agent pool
The cluster created in the previous step has a single node pool. Let's add a second agent pool using the New-AzNetworkCloudAgentPool command. The following example creates an agent pool named myNexusK8sCluster-nodepool-2:
You can also use the default values for some of the variables, as shown in the following example:
$RESOURCE_GROUP="myResourceGroup"
$SUBSCRIPTION="<Azure subscription ID>"
$CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
$CUSTOM_LOCATION_TYPE="CustomLocation"
$LOCATION="<ClusterAzureRegion>"
$CLUSTER_NAME="myNexusK8sCluster"
$AGENT_POOL_NAME="myNexusK8sCluster-nodepool-2"
$AGENT_POOL_VM_SIZE="NC_P10_56_v1"
$AGENT_POOL_COUNT="1"
$AGENT_POOL_MODE="User"
After defining these variables, you can add an agent pool by executing the following Azure PowerShell command:
New-AzNetworkCloudAgentPool -KubernetesClusterName $CLUSTER_NAME `
-Name $AGENT_POOL_NAME `
-ResourceGroupName $RESOURCE_GROUP `
-SubscriptionId $SUBSCRIPTION `
-ExtendedLocationName $CUSTOM_LOCATION `
-ExtendedLocationType $CUSTOM_LOCATION_TYPE `
-Location $LOCATION `
-Count $AGENT_POOL_COUNT `
-Mode $AGENT_POOL_MODE `
-VMSkuName $AGENT_POOL_VM_SIZE
After a few minutes, the command completes and returns information about the agent pool. For more advanced options, see Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep.
Note
You can add multiple agent pools during the initial creation of your cluster by using the initial agent pool configurations. However, to add agent pools after the initial creation, use the preceding command to create additional agent pools for your Nexus Kubernetes cluster.
The following example output shows successful creation of the agent pools:
Get-AzNetworkCloudAgentPool -KubernetesClusterName myNexusK8sCluster `
-ResourceGroupName myResourceGroup `
-SubscriptionId <mySubscription>
Location Name SystemDataCreatedAt SystemDataCreatedBy SystemDataCreatedByType SystemDataLastModifiedAt SystemDataLastModifiedBy
-------- ---- ------------------- ------------------- ----------------------- ------------------------ ------------
eastus myNexusK8sCluster-nodepool-1 09/21/2023 18:14:59 <identity> User 07/18/2023 17:46:45 <identity>
eastus myNexusK8sCluster-nodepool-2 09/25/2023 17:44:02 <identity> User 07/18/2023 17:46:45 <identity>
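If you later need to scale an existing agent pool rather than create a new one, the Az.NetworkCloud module also provides an update cmdlet. The following is a sketch; verify that Update-AzNetworkCloudAgentPool and its -Count parameter are available in your installed module version:

```powershell
# Scale the second agent pool to two nodes.
# Update-AzNetworkCloudAgentPool and its -Count parameter are assumed
# to be available in your installed Az.NetworkCloud module version.
Update-AzNetworkCloudAgentPool -KubernetesClusterName myNexusK8sCluster `
 -Name myNexusK8sCluster-nodepool-2 `
 -ResourceGroupName myResourceGroup `
 -SubscriptionId <mySubscription> `
 -Count 2
```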
Clean up resources
When no longer needed, delete the resource group; deleting it removes all the resources it contains. Use the Remove-AzResourceGroup cmdlet to remove the resource group, the Kubernetes cluster, and all related resources except the Operator Nexus network resources.
Remove-AzResourceGroup -Name myResourceGroup
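By default, Remove-AzResourceGroup prompts for confirmation. The following sketch shows how to skip the prompt and then verify that the group is gone, using standard Az cmdlets:

```powershell
# Skip the interactive confirmation prompt when deleting.
Remove-AzResourceGroup -Name myResourceGroup -Force

# Verify the resource group no longer exists; this returns nothing
# once the deletion completes.
Get-AzResourceGroup -Name myResourceGroup -ErrorAction SilentlyContinue
```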
Next steps
You can now deploy CNFs either directly by using cluster connect or by using Azure Operator Service Manager.