What's New in Hyper-V Network Virtualization
Applies To: Windows Server 2012 R2, System Center 2012 R2
This topic describes the new or changed features and functionality for Hyper-V Network Virtualization (HNV) in Windows Server 2012 R2.
HNV provides a virtual network abstraction on top of a physical network, much as a hypervisor abstracts physical hardware for the operating systems running in its virtual machines. To the virtual machines attached to them, these virtual networks operate the same as a physical network. HNV provides this abstraction through an overlay network on top of the physical network for each VM network. For an overview of HNV, see Hyper-V Network Virtualization Overview.
What’s new in Hyper-V Network Virtualization in Windows Server 2012 R2
The following HNV features have been added or updated for Windows Server 2012 R2.
Feature/functionality | Windows Server 2012 | Windows Server 2012 R2
---|---|---
Inbox HNV Gateway | | X
HNV Architecture | X | X
HNV interoperability with Hyper-V Virtual Switch Extensions | | X
HNV VM Network Diagnostics | X | X
Dynamic IP Address Learning | | X
HNV + Windows NIC Teaming | X | X
NVGRE Encapsulated Task Offload | X | X
Inbox HNV Gateway
The inbox HNV gateway is a multi-tenant gateway that performs Site-to-Site (VPN), NAT, and Forwarding functions.
What value does this change add?
It is now easier to set up a gateway that connects multiple tenant VPN connections in a hybrid cloud scenario, supports multiple tenants connecting to the Internet, and forwards network traffic from a datacenter network to virtual networks in a private cloud scenario.
What works differently?
You can use System Center 2012 R2 Virtual Machine Manager to fully manage the HNV gateway.
Supports guest clustering for high availability
Includes BGP for dynamic route updates
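On Windows Server 2012 R2 the gateway can also be configured directly with Windows PowerShell. The following is a minimal sketch, assuming an elevated session on a server with the Remote Access role available; the routing-domain name, BGP identifiers, AS numbers, and peer addresses are all placeholder values, and a VMM-managed deployment performs these steps for you.

```powershell
# Install Remote Access in multitenant mode (placeholder values throughout).
Install-RemoteAccess -MultiTenancy

# Enable site-to-site VPN and routing for one tenant's routing domain.
Enable-RemoteAccessRoutingDomain -Name "Tenant1" -Type All

# Configure a BGP router for the tenant and add a peer for dynamic route updates.
Add-BgpRouter -RoutingDomain "Tenant1" -BgpIdentifier 10.1.1.1 -LocalASN 64512
Add-BgpPeer -RoutingDomain "Tenant1" -Name "Tenant1Peer" -LocalIPAddress 10.1.1.1 `
    -PeerIPAddress 203.0.113.10 -PeerASN 64515
```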
HNV Architecture
The HNV filter moved from being an NDIS lightweight filter (LWF) to being part of the Hyper-V virtual switch.
What value does this change add?
Forwarding switch extensions can co-exist with HNV, allowing multiple network virtualization solutions (one provided by HNV and another provided by the forwarding switch extension) to co-exist on the same host running Hyper-V.
What works differently?
Improved interoperability with switch extensions
The HNV NDIS LWF no longer has to be bound to network adapters. After you attach a network adapter to the virtual switch, you can enable HNV simply by assigning a Virtual Subnet ID to a particular virtual network adapter. For those who use Virtual Machine Manager to manage VM networks this change is transparent, but for anyone using Windows PowerShell it saves an often-missed step.
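For example, with the Hyper-V module for Windows PowerShell, enabling HNV for a virtual machine becomes a single step; this is a sketch, and the VM name and subnet ID are placeholder values.

```powershell
# Assign a Virtual Subnet ID to the VM's virtual network adapter.
# No NDIS LWF binding on the physical adapter is required in Windows Server 2012 R2.
Set-VMNetworkAdapter -VMName "Tenant1-VM01" -VirtualSubnetId 5001
```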
HNV interoperability with Hyper-V Virtual Switch Extensions
Switch extensions work in both the Customer Address (CA) and Provider Address (PA) space.
What value does this change add?
Third-party switch extensions can now work with HNV traffic, because the extensions have visibility into both the provider address (PA) space, and the customer address (CA) space.
What works differently?
The HNV module was moved to inside the virtual switch so that extensions can see both the provider (PA) and virtual (CA) IP address spaces. This allows forwarding and other types of extensions to make decisions with knowledge of both address spaces.
Hybrid forwarding is implemented. Hybrid forwarding directs packets to different forwarding agents, based upon the packet type. In the Windows Server 2012 R2 implementation, an NVGRE packet is forwarded by the HNV module. A packet that is not NVGRE is forwarded normally by the forwarding extension.
HNV VM Network Diagnostics
Some new diagnostic tools have been included.
What value does this change add?
This enhances your ability to diagnose HNV networks.
What works differently?
Enhanced ping.exe (ping –p) to allow pinging to and from provider addresses
Two new Windows PowerShell cmdlets (Test-VMNetworkAdapter and Select-NetVirtualizationNextHop) that enable diagnostics of HNV policy and the Customer Address space.
Added the ability for Message Analyzer to decode NVGRE packets
For more details, check out the New Networking Diagnostics with PowerShell in Windows Server R2 blog post.
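As a sketch of how these tools fit together, you might verify PA-space reachability first and then test a CA-space path. All addresses, names, and MAC values below are placeholders, and the exact parameter sets may vary; check Get-Help for each cmdlet on your system.

```powershell
# Ping another host's provider address from this host's PA space (new -p switch).
ping -p 192.168.100.12

# Determine which provider address a customer-address packet would be sent to.
Select-NetVirtualizationNextHop -SourceCustomerAddress 10.0.0.5 `
    -DestinationCustomerAddress 10.0.0.6 -SourceVirtualSubnetID 5001

# Inject a test packet from a VM's network adapter to validate HNV policy.
Test-VMNetworkAdapter -VMName "Tenant1-VM01" -Sender -SenderIPAddress 10.0.0.5 `
    -ReceiverIPAddress 10.0.0.6 -SequenceNumber 100 `
    -NextHopMacAddress "00-11-22-33-44-55"
```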
Dynamic IP Address Learning
HNV learns the IP addresses of virtual machines on the virtual network, whether the addresses are assigned manually or set via DHCP.
What value does this change add?
Enables high availability scenarios for both virtual machines on a VM network and the HNV gateway.
Allows you to run DHCP, DNS, and Active Directory in your VM networks.
What works differently?
For broadcast or multicast packets in a VM network, a PA multicast IP address is used if configured. If a PA multicast address is not available an intelligent PA unicast replication is used.
Packets are unicasted only to PA addresses that are configured for the particular virtual subnet the packet is on. In addition, only one unicast packet is sent per host no matter how many relevant virtual machines are on the host.
Once a host learns a new IP address it will then notify Virtual Machine Manager. At this point, the learned IP address becomes part of the centralized policy that Virtual Machine Manager pushes out. This allows for both rapid dissemination of HNV routing policy and limits the network overhead for disseminating this HNV routing policy.
Includes support for Duplicate Address Detection (DAD), Network Unreachability Detection (NUD), and Address Resolution Protocol (ARP) packets in the CA address space for both IPv4 and IPv6. The HNV filter also provides a reliable ARP proxy for any known routing policies, which again reduces the amount of control traffic that goes out on the physical network.
Fully supported in Windows PowerShell.
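You can observe the effect of address learning in the HNV lookup records on a host. This is a sketch assuming the network virtualization (NetWNV) cmdlets on a Windows Server 2012 R2 host; whether learned records are distinguishable from static policy records in the output depends on the record properties exposed on your system.

```powershell
# List the CA-to-PA lookup records the host currently knows about.
# Records created by dynamic IP address learning appear alongside the
# static records pushed down as centralized policy.
Get-NetVirtualizationLookupRecord |
    Format-Table CustomerAddress, ProviderAddress, VirtualSubnetID, MACAddress
```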
HNV + Windows NIC Teaming
Integrates HNV and Windows NIC Teaming to allow multiple network adapters to be placed into a team for the purposes of bandwidth aggregation, and/or traffic failover to maintain connectivity in the event of a network component failure.
What value does this change add?
Integrating HNV with Windows NIC Teaming increases HNV network throughput and reliability.
What works differently?
Spreading of virtualized traffic across a NIC team is enabled in both directions: traffic leaving a host and traffic arriving at a host can both use all of the network adapters in the team.
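A team suitable for use under the Hyper-V virtual switch can be created with the built-in NIC Teaming cmdlets. This is a minimal sketch with placeholder team and adapter names:

```powershell
# Create a switch-independent team; the Dynamic load-balancing mode in
# Windows Server 2012 R2 spreads both inbound and outbound traffic.
New-NetLbfoTeam -Name "HNVTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind the Hyper-V virtual switch to the team interface.
New-VMSwitch -Name "HNVSwitch" -NetAdapterName "HNVTeam" -AllowManagementOS $true
```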
NVGRE Encapsulated Task Offload
In NDIS 6.30 and later (Windows Server 2012 and later), Network Virtualization using Generic Routing Encapsulation (NVGRE) task offload makes it possible to use Generic Routing Encapsulation (GRE)-encapsulated packets with:
Large Send Offload (LSO)
Receive Side Scaling (RSS)
Virtual Machine Queue (VMQ)
What value does this change add?
To increase performance, NVGRE processing can be offloaded to a network adapter that has the appropriate task offload capabilities.
What works differently?
Two partners have announced that their next generation network adapters will support NVGRE Encapsulated Task Offload. You can read the press releases from Mellanox and Emulex for more details.
Microsoft is continuing to work with additional network vendors to enable NVGRE Task Offload. More announcements will be made in the future.
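To check whether an adapter reports NVGRE task offload support, you can query its encapsulated packet task offload settings; a sketch using the NetAdapter cmdlets, with a placeholder adapter name:

```powershell
# Show NVGRE encapsulated packet task offload capability and state per adapter.
Get-NetAdapterEncapsulatedPacketTaskOffload

# Enable the offload on a specific adapter that supports it.
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "NIC1"
```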