Deploy host networking with Network ATC
Applies to: Azure Stack HCI, versions 22H2 and 21H2
This article guides you through the requirements, best practices, and deployment of Network ATC. Network ATC simplifies deployment and network configuration management for Azure Stack HCI clusters, providing an intent-based approach to host network deployment. By specifying one or more intents (management, compute, or storage) for a network adapter, you can automate the deployment of the intended configuration. For more information on Network ATC, including an overview and definitions, see Network ATC overview.
If you have feedback or encounter any issues, review the Requirements and best practices section, check the Network ATC event log, and work with your Microsoft support team.
Requirements and best practices
The following are requirements and best practices for using Network ATC in Azure Stack HCI:
- Supported on Azure Stack HCI, version 22H2.
- All servers in the cluster must be running Azure Stack HCI, version 22H2.
- Must use physical hosts that are Azure Stack HCI certified.
- A maximum of 16 nodes is supported per cluster.
- Adapters in the same Network ATC intent must be symmetric (of the same make, model, speed, and configuration) and available on each cluster node. In version 22H2 and later, Network ATC confirms symmetric properties for adapters on the node, and across the cluster, before deploying an intent. Asymmetric adapters cause intent deployment to fail. For more information on adapter symmetry, see Switch Embedded Teaming (SET).
- Each physical adapter specified in an intent must use the same name on all nodes in the cluster.
- Ensure each network adapter has an "Up" status, as verified by the PowerShell Get-NetAdapter cmdlet; a verification sketch follows this list.
- Each node must have the following Azure Stack HCI features installed:
- Network ATC
- Data Center Bridging (DCB)
- Failover Clustering
- Hyper-V

Here's an example of installing the required features via PowerShell:
Install-WindowsFeature -Name NetworkATC, Data-Center-Bridging, Hyper-V, Failover-Clustering -IncludeManagementTools
- Best practice: Insert each adapter into the same PCI slot(s) in each host. This simplifies the automated naming conventions applied by imaging systems.
- Best practice: Configure the physical network (switches) before deploying Network ATC, including VLANs, MTU, and DCB configuration. See Physical Network Requirements for more information.
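To spot-check the adapter status requirement in the list above, here's a minimal verification sketch (the adapter names are illustrative):
# Show status and link speed for the adapters you plan to include in an intent; names are illustrative.
Get-NetAdapter -Name pNIC01, pNIC02 | Format-Table Name, Status, LinkSpeed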
Important
Deploying Network ATC in virtual machines may be used for test and validation purposes only. VM-based deployment requires an override to the default adapter settings to disable the NetworkDirect property; a sketch of this override follows this note. For more information on submitting an override, see Override default network settings.
Deploying Network ATC in standalone mode may be used for test and validation purposes only.
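As a minimal sketch of the NetworkDirect override for a VM-based test deployment (the intent and adapter names are illustrative):
# Disable NetworkDirect (RDMA), which is typically unavailable on VM virtual NICs; names are illustrative.
$AdapterOverride = New-NetIntentAdapterPropertyOverrides
$AdapterOverride.NetworkDirect = 0
Add-NetIntent -Name MgmtCompute -Management -Compute -AdapterName 'Ethernet', 'Ethernet 2' -AdapterPropertyOverrides $AdapterOverride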
Common Network ATC commands
There are several new PowerShell commands included with Network ATC. Run the following cmdlet to identify them. Ensure PowerShell is run as an administrator.
Get-Command -ModuleName NetworkATC
The Remove-NetIntent cmdlet removes an intent from the local node or cluster. This does not destroy the invoked configuration.
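For example, to remove an intent by name (the intent name is illustrative):
Remove-NetIntent -Name ConvergedIntent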
Example intents
Network ATC modifies how you deploy host networking, not what you deploy. You can implement multiple scenarios, as long as each scenario is supported by Microsoft. Here are some examples of common deployment options and the PowerShell commands needed. These aren't the only combinations available, but they should give you an idea of the possibilities.
For simplicity, we demonstrate only two physical adapters per SET team; however, it's possible to add more. Refer to Plan Host Networking for more information.
Fully converged intent
For this intent, compute, storage, and management networks are deployed and managed across all cluster nodes.
Add-NetIntent -Name ConvergedIntent -Management -Compute -Storage -AdapterName pNIC01, pNIC02
Converged compute and storage intent; separate management intent
Two intents are managed across cluster nodes. Management uses pNIC01 and pNIC02; compute and storage use pNIC03 and pNIC04.
Add-NetIntent -Name Mgmt -Management -AdapterName pNIC01, pNIC02
Add-NetIntent -Name Compute_Storage -Compute -Storage -AdapterName pNIC03, pNIC04
Fully disaggregated intent
For this intent, compute, storage, and management networks are all managed on different adapters across all cluster nodes.
Add-NetIntent -Name Mgmt -Management -AdapterName pNIC01, pNIC02
Add-NetIntent -Name Compute -Compute -AdapterName pNIC03, pNIC04
Add-NetIntent -Name Storage -Storage -AdapterName pNIC05, pNIC06
Storage-only intent
For this intent, only storage is managed. Management and compute adapters aren't managed by Network ATC.
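As a sketch, a storage-only intent might look like this (the adapter names are illustrative):
Add-NetIntent -Name Storage -Storage -AdapterName pNIC05, pNIC06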
Compute and management intent
For this intent, compute and management networks are managed, but not storage.
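As a sketch, a combined management and compute intent might look like this (the intent and adapter names are illustrative):
Add-NetIntent -Name Management_Compute -Management -Compute -AdapterName pNIC01, pNIC02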
Multiple compute (switch) intent
For this intent, multiple compute switches are managed.
Add-NetIntent -Name Compute1 -Compute -AdapterName pNIC03, pNIC04
Add-NetIntent -Name Compute2 -Compute -AdapterName pNIC05, pNIC06
Default Network ATC values
This section lists some of the key default values used by Network ATC.
22H2 default values
This section covers additional default values that Network ATC sets in versions 22H2 and later.
Automatic storage IP addressing
If you choose the -Storage intent type, Network ATC (version 22H2 and later) configures your IP addresses, subnets, and VLANs for you. Network ATC does this in a consistent and uniform manner across all nodes in your cluster.
The default IP address for each adapter on each node in the storage intent is set up as follows:
Adapter | IP Address and Subnet | VLAN |
---|---|---|
pNIC1 | 10.71.1.X | 711 |
pNIC2 | 10.71.2.X | 712 |
pNIC3 | 10.71.3.X | 713 |
The IP addresses and subnets are consistent with the VLANs assigned to the adapters.
To override automatic storage IP addressing, create a storage override and pass the override when creating an intent:
$storageOverride = New-NetIntentStorageOverrides
$storageOverride.EnableAutomaticIPGeneration = $false
Add-NetIntent -Name Storage_Compute -Storage -Compute -AdapterName 'pNIC01', 'pNIC02' -StorageOverrides $storageOverride
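After an intent is deployed or updated, you can confirm that it provisioned successfully; a sketch, assuming the intent name above:
Get-NetIntentStatus -Name Storage_Compute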
Cluster network settings
In version 22H2 and later, Network ATC configures a set of cluster network features by default. The defaults are listed below:
Property | Default |
---|---|
EnableNetworkNaming | $true |
EnableLiveMigrationNetworkSelection | $true |
EnableVirtualMachineMigrationPerformance | $true |
VirtualMachineMigrationPerformanceOption | Always calculated: SMB, TCP, or Compression |
MaximumVirtualMachineMigrations | 1 |
MaximumSMBMigrationBandwidthInGbps | Calculated based on setup |
21H2 default values
Default VLANs
The following default VLANs are used. These VLANs must be available on the physical network for proper operation.
Adapter Intent | Default Value |
---|---|
Management | Configured VLAN for management adapters isn't modified |
Storage Adapter 1 | 711 |
Storage Adapter 2 | 712 |
Storage Adapter 3 | 713 |
Storage Adapter 4 | 714 |
Storage Adapter 5 | 715 |
Storage Adapter 6 | 716 |
Storage Adapter 7 | 717 |
Storage Adapter 8 | 718 |
Future Use | 719 |
Consider the following command:
Add-NetIntent -Name Cluster_ComputeStorage -Storage -AdapterName pNIC01, pNIC02, pNIC03, pNIC04
The physical NICs (or virtual NICs if required) are configured to use VLANs 711, 712, 713, and 714, respectively.
Note
Network ATC allows you to change the VLANs used with the StorageVlans parameter on Add-NetIntent.
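For example, a sketch using illustrative VLAN IDs and adapter names:
Add-NetIntent -Name Storage -Storage -AdapterName pNIC01, pNIC02 -StorageVlans 811, 812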
Default Data Center Bridging (DCB) configuration
Network ATC establishes the following priorities and bandwidth reservations. These settings should also be configured on the physical network.
Policy | Use | Default Priority | Default Bandwidth Reservation |
---|---|---|---|
Cluster | Cluster Heartbeat reservation | 7 | 2% if the adapter(s) are <= 10 Gbps; 1% if the adapter(s) are > 10 Gbps |
SMB_Direct | RDMA Storage Traffic | 3 | 50% |
Default | All other traffic types | 0 | Remainder |
Note
Network ATC allows you to override default settings like default bandwidth reservation. For examples, see Update or override network settings.
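As a minimal sketch of such an override, assuming the New-NetIntentQosPolicyOverrides cmdlet and its BandwidthPercentage_SMB property described in Update or override network settings (the intent name, adapter names, and value are illustrative):
# Raise the SMB_Direct bandwidth reservation above the 50% default; names and values are illustrative.
$QosOverride = New-NetIntentQosPolicyOverrides
$QosOverride.BandwidthPercentage_SMB = 60
Add-NetIntent -Name Storage -Storage -AdapterName pNIC05, pNIC06 -QosPolicyOverrides $QosOverride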
Next steps
- Manage your Network ATC deployment. See Manage Network ATC.
- Learn more about Stretched clusters.