Review single-server storage deployment network reference pattern for Azure Stack HCI

Applies to: Azure Stack HCI, versions 23H2 and 22H2

This article describes the single-server storage network reference pattern that you can use to deploy your Azure Stack HCI solution. The information in this article also helps you determine if this configuration is viable for your deployment planning needs. This article is targeted towards the IT administrators who deploy and manage Azure Stack HCI in their datacenters.

For information about other network patterns, see Azure Stack HCI network deployment patterns.

Introduction

Single-server deployments provide cost and space benefits while helping to modernize your infrastructure and bring Azure hybrid computing to locations that can tolerate the resiliency of a single server. Azure Stack HCI running on a single server behaves similarly to Azure Stack HCI on a multi-node cluster: it brings native Azure Arc integration, the ability to add servers to scale out the cluster, and it includes the same Azure benefits.

It also supports the same workloads, such as Azure Virtual Desktop (AVD) and AKS on Azure Stack HCI, and is supported and billed the same way.

Scenarios

Use the single-server storage pattern in the following scenarios:

  • Facilities that can tolerate a lower level of resiliency. Consider implementing this pattern whenever your location, or the service provided by this pattern, can tolerate a lower level of resiliency without impacting your business.

  • Food, healthcare, finance, retail, government facilities. Some food, healthcare, finance, and retail scenarios can apply this option to minimize their costs without impacting core operations and business transactions.

Although Software Defined Networking (SDN) Layer 3 (L3) services are fully supported on this pattern, routing services such as Border Gateway Protocol (BGP) may need to be configured for the firewall device on the top-of-rack (TOR) switch.

Network security features such as microsegmentation and Quality of Service (QoS) don't require extra configuration for the firewall device, as they're implemented at the virtual network adapter layer. For more information, see Microsegmentation with Azure Stack HCI.

Note

Single servers must use only a single drive type: Non-volatile Memory Express (NVMe) or Solid-State (SSD) drives.

Physical connectivity components

As illustrated in the diagram below, this pattern has the following physical network components:

  • For northbound/southbound traffic, the Azure Stack HCI cluster is implemented using a single TOR L2 or L3 switch.
  • Two teamed network ports to handle the management and compute traffic connected to the switch.
  • Two disconnected RDMA NICs that are only used if you add a second server to your cluster for scale-out. This means no increased costs for cabling or physical switch ports.
  • (Optional) A BMC card can be used to enable remote management of your environment. For security purposes, some solutions might use a headless configuration without the BMC card.

Diagram showing single-server physical connectivity layout.

The following table lists some guidelines for a single-server deployment:

Network | Management & compute | Storage | BMC
Link speed | At least 1 Gbps if RDMA is disabled; 10 Gbps recommended | At least 10 Gbps | Check with hardware manufacturer
Interface type | RJ45, SFP+, or SFP28 | SFP+ or SFP28 | RJ45
Ports and aggregation | Two teamed ports | Optional to allow adding a second server; disconnected ports | One port
RDMA | Optional. Depends on requirements for guest RDMA and NIC support. | N/A | N/A

Network ATC intents

The single-server pattern uses only one Network ATC intent for management and compute traffic. The RDMA network interfaces are optional and disconnected.

Diagram showing Network ATC intents for the single-server switchless pattern.

Management and compute intent

The management and compute intent has the following characteristics:

  • Intent type: Management and compute
  • Intent mode: Cluster mode
  • Teaming: Yes - pNIC01 and pNIC02 are teamed
  • Default management VLAN: Configured VLAN for management adapters is unmodified
  • PA VLAN and vNICs: Network ATC is transparent to PA vNICs and VLANs
  • Compute VLANs and vNICs: Network ATC is transparent to compute VM vNICs and VLANs

Storage intent

The storage intent has the following characteristics:

  • Intent type: None
  • Intent mode: None
  • Teaming: pNIC03 and pNIC04 are disconnected
  • Default VLANs: None
  • Default subnets: None

Follow these steps to create a network intent for this reference pattern:

  1. Run PowerShell as Administrator.

  2. Run the following command:

    Add-NetIntent -Name <management_compute> -Management -Compute -ClusterName <HCI01> -AdapterName <pNIC01, pNIC02>
    

For more information, see Deploy host networking: Compute and management intent.
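A minimal sketch of this command with concrete values might look like the following. The cluster name, intent name, and adapter names are placeholders for illustration only; substitute the names used in your environment.

    # Create a combined management and compute intent for the two teamed ports
    # (HCI01, management_compute, pNIC01, and pNIC02 are example names)
    Add-NetIntent -Name management_compute -Management -Compute -ClusterName HCI01 -AdapterName "pNIC01", "pNIC02"

    # Verify that the intent exists and check its provisioning status
    Get-NetIntent -ClusterName HCI01
    Get-NetIntentStatus -ClusterName HCI01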

Logical network components

As illustrated in the diagram below, this pattern has the following logical network components:

Diagram showing single-server logical connectivity layout.

Storage network VLANs

Optional - this pattern doesn't require a storage network.

OOB network

The Out of Band (OOB) network is dedicated to supporting the "lights-out" server management interface also known as the baseboard management controller (BMC). Each BMC interface connects to a customer-supplied switch. The BMC is used to automate PXE boot scenarios.

The management network requires access to the BMC interface using Intelligent Platform Management Interface (IPMI) User Datagram Protocol (UDP) port 623.
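As an illustration only, if a host firewall sits between the management network and the BMC interfaces, an outbound rule along these lines (the rule name is arbitrary) would permit the required IPMI traffic:

    # Allow outbound IPMI traffic (UDP port 623) from the management host to the BMC network
    New-NetFirewallRule -DisplayName "Allow IPMI to BMC" -Direction Outbound -Protocol UDP -RemotePort 623 -Action Allow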

The OOB network is isolated from compute workloads and is optional for non-solution-based deployments.

Management VLAN

All physical compute hosts require access to the management logical network. For IP address planning, each physical compute host must have at least one IP address assigned from the management logical network.

A DHCP server can automatically assign IP addresses for the management network, or you can manually assign static IP addresses. When DHCP is the preferred IP assignment method, we recommend that you use DHCP reservations without expiration.
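If you choose static assignment, a minimal sketch could look like the following. The interface alias, IP addresses, and DNS server are example values; the management virtual adapter name created by Network ATC depends on your intent name, so confirm it with Get-NetAdapter first.

    # List adapters to find the management virtual adapter name (it varies by intent name)
    Get-NetAdapter

    # Assign a static IP address and DNS server to the management adapter
    # (the interface alias and all addresses below are examples only)
    New-NetIPAddress -InterfaceAlias "vManagement(management_compute)" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias "vManagement(management_compute)" -ServerAddresses "192.168.1.2"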

The management network supports the following VLAN configurations:

  • Native VLAN - you aren't required to supply VLAN IDs. This is required for solution-based installations.

  • Tagged VLAN - you supply VLAN IDs at the time of deployment.

The management network supports all traffic used for management of the cluster, including Remote Desktop, Windows Admin Center, and Active Directory.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Compute VLANs

In some scenarios, you don’t need to use SDN Virtual Networks with Virtual Extensible LAN (VXLAN) encapsulation. Instead, you can use traditional VLANs to isolate your tenant workloads. Those VLANs are configured on the TOR switch's port in trunk mode. When connecting new VMs to these VLANs, the corresponding VLAN tag is defined on the virtual network adapter.
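For example, a minimal sketch of tagging a VM's network adapter with a compute VLAN (the VM name and VLAN ID are placeholders) might be:

    # Put the VM's network adapter in access mode on VLAN 201 (example values)
    Set-VMNetworkAdapterVlan -VMName "VM01" -Access -VlanId 201

    # Confirm the VLAN configuration
    Get-VMNetworkAdapterVlan -VMName "VM01"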

HNV Provider Address (PA) network

The Hyper-V Network Virtualization (HNV) Provider Address (PA) network serves as the underlying physical network for East/West (internal-internal) tenant traffic, North/South (external-internal) tenant traffic, and to exchange BGP peering information with the physical network. This network is only required when there's a need for deploying virtual networks using VXLAN encapsulation for another layer of isolation and for network multitenancy.

For more information, see Plan an SDN infrastructure: Management and HNV Provider.

Network isolation options

The following network isolation options are supported:

VLANs (IEEE 802.1Q)

VLANs allow devices that must be kept separate to share the cabling of a physical network and yet be prevented from directly interacting with one another. This managed sharing yields gains in simplicity, security, traffic management, and economy. For example, a VLAN can be used to separate traffic within a business based on individual users, groups of users, or their roles, or based on traffic characteristics. Many internet hosting services use VLANs to separate private zones from one another, allowing each customer's servers to be grouped in a single network segment no matter where the individual servers are located in the data center. Some precautions are needed to prevent traffic "escaping" from a given VLAN, an exploit known as VLAN hopping.

For more information, see Understand the usage of virtual networks and VLANs.

Default network access policies and microsegmentation

Default network access policies ensure that all virtual machines (VMs) in your Azure Stack HCI cluster are secure by default from external threats. These policies block inbound access to a VM by default, while giving you the option to enable selected inbound ports, thus securing the VMs from external attacks. This enforcement is available through management tools like Windows Admin Center.

Microsegmentation involves creating granular network policies between applications and services. This essentially reduces the security perimeter to a fence around each application or VM. This fence permits only necessary communication between application tiers or other logical boundaries, thus making it exceedingly difficult for cyberthreats to spread laterally from one system to another. Microsegmentation securely isolates networks from each other and reduces the total attack surface of a network security incident.

Default network access policies and microsegmentation are realized as five-tuple stateful (source address prefix, source port, destination address prefix, destination port, and protocol) firewall rules on Azure Stack HCI clusters. Firewall rules are also known as Network Security Groups (NSGs). These policies are enforced at the vSwitch port of each VM. The policies are pushed through the management layer, and the SDN Network Controller distributes them to all applicable hosts. These policies are available for VMs on traditional VLAN networks and on SDN overlay networks.
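The following sketch, based on the Network Controller PowerShell module, shows the general shape of such a policy: an access control list with a single inbound rule that allows TCP 443, which you could then attach to a VM's network interface. The resource IDs, REST endpoint placeholder, and rule values are illustrative assumptions, not required names.

    # Define one inbound allow rule for TCP 443 (all values are examples)
    $ruleProps = New-Object Microsoft.Windows.NetworkController.AclRuleProperties
    $ruleProps.Protocol = "TCP"
    $ruleProps.SourceAddressPrefix = "*"
    $ruleProps.SourcePortRange = "0-65535"
    $ruleProps.DestinationAddressPrefix = "*"
    $ruleProps.DestinationPortRange = "443"
    $ruleProps.Action = "Allow"
    $ruleProps.Type = "Inbound"
    $ruleProps.Priority = "100"

    $rule = New-Object Microsoft.Windows.NetworkController.AclRule
    $rule.ResourceId = "AllowHttpsInbound"
    $rule.Properties = $ruleProps

    # Group the rule into an access control list (NSG) and push it to Network Controller
    $aclProps = New-Object Microsoft.Windows.NetworkController.AccessControlListProperties
    $aclProps.AclRules = @($rule)

    New-NetworkControllerAccessControlList -ConnectionUri "https://<NC REST FQDN>" -ResourceId "WebTierNsg" -Properties $aclProps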

For more information, see What is Datacenter Firewall?

QoS for VM network adapters

You can configure Quality of Service (QoS) for a VM network adapter to limit bandwidth on a virtual interface to prevent a high-traffic VM from contending with other VM network traffic. You can also configure QoS to reserve a specific amount of bandwidth for a VM to ensure that the VM can send traffic regardless of other traffic on the network. This can be applied to VMs attached to traditional VLAN networks as well as VMs attached to SDN overlay networks.
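As a minimal sketch (the VM names and bandwidth values are illustrative), you might cap one VM's bandwidth and reserve a minimum for another:

    # Cap VM01's virtual network adapter at about 1 Gbps (value is in bits per second)
    Set-VMNetworkAdapter -VMName "VM01" -MaximumBandwidth 1GB

    # Guarantee VM02's adapter at least about 100 Mbps regardless of other traffic
    # (requires the virtual switch's bandwidth reservation mode to allow absolute reservations)
    Set-VMNetworkAdapter -VMName "VM02" -MinimumBandwidthAbsolute 100MB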

For more information, see Configure QoS for a VM network adapter.

Virtual networks

Network virtualization provides virtual networks to VMs similar to how server virtualization (hypervisor) provides VMs to the operating system. Network virtualization decouples virtual networks from the physical network infrastructure and removes the constraints of VLAN and hierarchical IP address assignment from VM provisioning. Such flexibility makes it easy for you to move to Infrastructure-as-a-Service (IaaS) clouds, and makes it efficient for hosters and datacenter administrators to manage their infrastructure while maintaining the necessary multi-tenant isolation and security requirements and supporting overlapping VM IP addresses.

For more information, see Hyper-V Network Virtualization.

L3 networking services options

The following L3 networking service options are available:

Virtual network peering

Virtual network peering lets you connect two virtual networks seamlessly. Once peered, for connectivity purposes, the virtual networks appear as one. The benefits of using virtual network peering include:

  • Traffic between VMs in the peered virtual networks gets routed through the backbone infrastructure through private IP addresses only. The communication between the virtual networks doesn't require public Internet or gateways.
  • A low-latency, high-bandwidth connection between resources in different virtual networks.
  • The ability for resources in one virtual network to communicate with resources in a different virtual network.
  • No downtime to resources in either virtual network when creating the peering.

For more information, see Virtual network peering.

SDN software load balancer

Cloud Service Providers (CSPs) and enterprises that deploy Software Defined Networking (SDN) can use Software Load Balancer (SLB) to evenly distribute customer network traffic among virtual network resources. SLB enables multiple servers to host the same workload, providing high availability and scalability. It's also used to provide inbound Network Address Translation (NAT) services for inbound access to VMs, and outbound NAT services for outbound connectivity.

Using SLB, you can scale out your load balancing capabilities using SLB VMs on the same Hyper-V compute servers that you use for your other VM workloads. SLB supports rapid creation and deletion of load balancing endpoints as required for CSP operations. In addition, SLB supports tens of gigabytes per cluster, provides a simple provisioning model, and is easy to scale out and in. SLB uses Border Gateway Protocol to advertise virtual IP addresses to the physical network.

For more information, see What is SLB for SDN?

SDN VPN gateways

SDN Gateway is a software-based Border Gateway Protocol (BGP) capable router designed for CSPs and enterprises that host multi-tenant virtual networks using Hyper-V Network Virtualization (HNV). You can use RAS Gateway to route network traffic between a virtual network and another network, either local or remote.

SDN Gateway can be used to:

  • Create secure site-to-site IPsec connections between SDN virtual networks and external customer networks over the internet.

  • Create Generic Routing Encapsulation (GRE) connections between SDN virtual networks and external networks. The difference between site-to-site connections and GRE connections is that the latter isn't an encrypted connection.

    For more information about GRE connectivity scenarios, see GRE Tunneling in Windows Server.

  • Create Layer 3 (L3) connections between SDN virtual networks and external networks. In this case, the SDN gateway simply acts as a router between your virtual network and the external network.

SDN Gateway requires SDN Network Controller. Network Controller performs the deployment of gateway pools, configures tenant connections on each gateway, and switches network traffic flows to a standby gateway if a gateway fails.

Gateways use Border Gateway Protocol to advertise GRE endpoints and establish point-to-point connections. SDN deployment creates a default gateway pool that supports all connection types. Within this pool, you can specify how many gateways are reserved on standby in case an active gateway fails.

For more information, see What is RAS Gateway for SDN?

Next steps

Learn about two-node patterns - Azure Stack HCI network deployment patterns.