Plan Software Defined Networking Deployment

 

Updated: March 3, 2016

Applies To: Windows Server Technical Preview

The topics in this section provide deployment planning and prerequisite information about the Software Defined Networking technologies that are included in Windows Server® 2016 Technical Preview.


Review the following information to help plan your Software Defined Network (SDN) deployment. After you review this information, see Deploy Software Defined Networks using scripts for deployment information.


Prerequisites

This topic describes the following hardware and software prerequisites:

  • Physical network

You need access to your physical network devices to configure VLANs, routing, BGP, and Quality of Service (QoS).

  • Physical compute hosts

    These are used for Hyper-V and are required to host SDN infrastructure and tenant virtual machines. Specific network hardware is required in these hosts for best performance.

Physical Network Configuration

The infrastructure is depicted in the following diagram:

[Figure: SDN Infrastructure]

The physical network must be configured so that the following networks are available. Subnets and VLAN IDs are examples and can be customized for your environment:

| Network name | Subnet | Mask | VLAN ID on trunk | Gateway | Reservations (examples) |
|---|---|---|---|---|---|
| Management — the subnet that connects the hosts; you can use DHCP or static IP addressing | 10.60.34.0 | 24 | 7 | 10.60.34.1 | 10.60.34.1 Router; 10.60.34.4 Network Controller; 10.60.34.10 Compute1 … 10.60.34.41 Compute32 |
| HNV PA — the subnet for the Provider Addresses | 10.60.33.128 | 25 | 11 | 10.60.33.129 | 10.60.33.129 Router; 10.60.33.132 SLBMUX1 |
| Transit — used by the HNV gateway for peering of the north/south networks | 10.60.35.0 | 24 | 10 | 10.60.35.1 | 10.60.35.1 Router |
| VIP — the subnet for the SLB/MUX VIPs | 10.127.134.128 | 27 | NA | 10.127.134.129 | 10.127.134.130 SLBM VIP |
| GRE VIP — the subnet for VIP addresses for GRE S2S connectivity | 10.127.134.192 | 27 | NA | 10.127.134.193 | 10.127.134.193 Default GW (Router) |
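When adapting the example subnets above, it helps to verify that each gateway address actually falls inside its subnet and that each prefix is aligned to its mask. The following sketch uses Python's standard `ipaddress` module with the example values from the table; substitute your own addressing plan:

```python
# Sanity-check the example SDN subnet plan from the table above.
import ipaddress

# name: (CIDR, gateway) -- values are the examples from the table
networks = {
    "Management": ("10.60.34.0/24", "10.60.34.1"),
    "HNV PA": ("10.60.33.128/25", "10.60.33.129"),
    "Transit": ("10.60.35.0/24", "10.60.35.1"),
    "VIP": ("10.127.134.128/27", "10.127.134.129"),
    "GRE VIP": ("10.127.134.192/27", "10.127.134.193"),
}

for name, (cidr, gateway) in networks.items():
    # strict=True (the default) rejects a prefix not aligned to the mask
    net = ipaddress.ip_network(cidr)
    gw = ipaddress.ip_address(gateway)
    assert gw in net, f"{name}: gateway {gw} is outside {net}"
    print(f"{name}: {net} (mask {net.netmask}), "
          f"{net.num_addresses - 2} usable host addresses")
```

A misaligned subnet (for example `10.127.134.150/27`) raises `ValueError` immediately, which catches planning typos before they reach the configuration file.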

If any of your networks are untagged or in access mode, use VLAN ID 0 for these networks when preparing the SDN configuration file.
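As an illustration only, an untagged network and a tagged network might appear side by side in the configuration data file like this. The key names shown are placeholders, not the actual schema; match them to the comments in your own config.psd1:

```powershell
# Illustrative fragment only - key names vary between script versions;
# check the comments in your actual config.psd1.
@{
    ManagementVLANID = 0    # untagged / access-mode Management network
    PAVLANID         = 11   # tagged HNV PA network from the table above
}
```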

The Management, HNV PA, and GRE VIP subnets must be routable to each other. The VIP subnet will get advertised into the network via BGP and will not have a VLAN assigned or be pre-configured in the router.

Active Directory and DNS must be available and reachable from these subnets; they do not need to run Windows Server 2016 Technical Preview. For more information, see Active Directory Domain Services Overview.

Compute

All Hyper-V hosts must have Windows Server 2016 Technical Preview installed, the Hyper-V role enabled, and a virtual switch created with at least one physical adapter connected to the Management VLAN. Each host must be reachable through a management IP address. Any storage type that is compatible with Hyper-V, shared or local, can be used.

Tip

It is convenient to use the same name for all your virtual switches, but it is not mandatory. See the comment associated with the vSwitchName variable in the config.psd1 file.
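A hedged sketch of creating such a switch on a host follows; the switch name and adapter name here are placeholders, and the switch name should match whatever you set for vSwitchName in config.psd1:

```powershell
# Example only: create an external virtual switch bound to the physical
# adapter on the Management VLAN. "sdnSwitch" and "Ethernet 1" are
# placeholder names - substitute your own.
New-VMSwitch -Name "sdnSwitch" -NetAdapterName "Ethernet 1" -AllowManagementOS $true
```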

Host requirements

| Host | Hardware requirements | Software requirements |
|---|---|---|
| Host-01 | Virtual machine host: 4-core 2.66 GHz CPU, 32 GB of RAM, 300 GB disk space, 1 Gb physical network adapter | OS: Windows Server 2016 Technical Preview; Hyper-V role installed |
| Host-02 | Virtual machine host: 4-core 2.66 GHz CPU, 32 GB of RAM, 300 GB disk space, 1 Gb physical network adapter | OS: Windows Server 2016 Technical Preview; Hyper-V role installed |
| Host-03 | Virtual machine host: 4-core 2.66 GHz CPU, 32 GB of RAM, 300 GB disk space, 1 Gb physical network adapter | OS: Windows Server 2016 Technical Preview; Hyper-V role installed |
| Host-04 | Virtual machine host: 4-core 2.66 GHz CPU, 32 GB of RAM, 300 GB disk space, 1 Gb physical network adapter | OS: Windows Server 2016 Technical Preview; Hyper-V role installed |

Role requirements

| Role | vCPU requirements | Memory requirements | Disk requirements |
|---|---|---|---|
| Network Controller (single node, HA x 3) | 4 vCPUs | 4 GB minimum (8 GB recommended) | 75 GB for the OS drive |
| SLB MUX | 8 vCPUs | 8 GB recommended | N/A |
| RAS Gateway BGP router VM for SLB MUX peering (alternatively, use ToR switch as BGP router) | 2 vCPUs | 2 GB | N/A |
| RAS Gateway multitenant BGP router VM | 1 logical core for each VM (8 vCPUs) | 2 GB | 64 GB VHD size |
| Workload VM | Depends on workload | Depends on workload | Depends on workload |
| Optional infrastructure | 2 vCPUs | 4 GB recommended | N/A |
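When sizing hosts, it can help to tally the infrastructure VM requirements from the table above. The sketch below assumes one minimal HA-style deployment (three Network Controller nodes, one of each other role); the instance counts are illustrative assumptions, not prescribed values:

```python
# Rough capacity tally for SDN infrastructure VMs, using the per-VM
# figures from the role requirements table. Instance counts are
# assumptions for a minimal deployment, not prescribed values.
roles = [
    # (role, instances, vCPUs each, RAM GB each)
    ("Network Controller", 3, 4, 8),
    ("SLB MUX", 1, 8, 8),
    ("RAS Gateway BGP router VM", 1, 2, 2),
    ("RAS Gateway multitenant router VM", 1, 8, 2),
]

total_vcpus = sum(count * vcpu for _, count, vcpu, _ in roles)
total_ram = sum(count * ram for _, count, _, ram in roles)
print(f"{total_vcpus} vCPUs, {total_ram} GB RAM")  # → 30 vCPUs, 36 GB RAM
```

Spread across the four 32 GB hosts listed earlier, this leaves headroom for the host OS and tenant workload VMs, which must be sized separately.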

Network hardware

  • Network Interface Cards (NICs)

To achieve the best performance, specific capabilities are required in the network interface cards that you use in your Hyper-V hosts and storage hosts.

    Remote Direct Memory Access (RDMA) is a kernel bypass technique which makes it possible to transfer large amounts of data quite rapidly. Because the transfer is performed by the DMA engine on the network adapter, the CPU is not used for the memory movement, which frees the CPU to perform other work.

    Switch Embedded Teaming (SET) is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software Defined Networking (SDN) stack in Windows Server 2016 Technical Preview. SET integrates some NIC Teaming functionality into the Hyper-V Virtual Switch.

    For more information, see Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET).

  • Switches

    If you deploy RDMA, your switches must be configured with Data Center Bridging (DCB) and Priority Flow Control (PFC).

    For more information, see Remote Direct Memory Access (RDMA) and Switch Embedded Teaming (SET).
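The host side of a DCB/PFC configuration can be sketched with the built-in NetQos cmdlets. The switch side is vendor specific and is not shown; the adapter name is a placeholder, and priority 3 with port 445 is a common convention for SMB Direct traffic, not a requirement:

```powershell
# Host-side sketch of DCB/PFC for RDMA (SMB Direct) traffic.
# Switch-side DCB configuration is vendor specific and not shown.
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB Direct traffic (TCP port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control for that priority only
Enable-NetQosFlowControl -Priority 3

# Apply DCB settings on the RDMA-capable adapter ("Ethernet 1" is an example)
Enable-NetAdapterQos -Name "Ethernet 1"
```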

Routing infrastructure

A BGP peer that is external to your SDN infrastructure is required for SDN components (SLB MUXes and gateways) that dynamically advertise routes into virtual networks.

BGP peering is typically configured in a managed switch or router as part of the network infrastructure. This BGP router peer in the network infrastructure must be configured to have its own ASN and allow peering from an ASN that is assigned to the SDN components. In addition, the BGP router peer must be configured to allow peering from the entire subnet that your SLB MUXes and gateways are connected to on the front-end, or peering by individual IPs.

You must obtain the following information from your physical router, or from the network administrator in control of that router:

  • Router ASN

  • Router Peer IP

  • ASN for use by SDN components

You or your network administrator must configure the BGP router peer to accept connections from the IP or subnet that your gateway and MUXes are using.
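The three values above end up in your deployment configuration. Purely as an illustration, they might be recorded like this; the key names are placeholders (match them to your actual config.psd1), and the ASNs shown are arbitrary private-range examples:

```powershell
# Illustrative fragment only - key names and values are examples.
@{
    RouterASN = 64807         # ASN owned by the physical BGP router
    RouterIP  = "10.60.35.1"  # BGP peer IP (Transit network router, from the table)
    SDNASN    = 64628         # ASN assigned to the SLB MUXes and gateways
}
```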

For more information, see Border Gateway Protocol (BGP).

See Also

Installation and preparation requirements for deploying Network Controller
Software Defined Networking (SDN)