Hyper-V Network Virtualization technical details

 

Applies To: Windows Server 2012 R2

Server virtualization enables multiple server instances to run concurrently on a single physical host; yet server instances are isolated from each other. Each virtual machine essentially operates as if it is the only server running on the physical computer. Network virtualization provides a similar capability, in which multiple virtual network infrastructures run on the same physical network (potentially with overlapping IP addresses), and each virtual network infrastructure operates as if it is the only virtual network running on the shared network infrastructure. Figure 1 shows this relationship.

Figure 1: Server virtualization versus network virtualization

Hyper-V Network Virtualization Concepts

In Hyper-V Network Virtualization (HNV), a customer is defined as the “owner” of a group of virtual machines that are deployed in a datacenter. A customer can be a corporation or enterprise in a multitenant public datacenter, or a division or business unit within a private datacenter. Each customer can have one or more VM networks in the datacenter, and each VM network consists of one or more virtual subnets.

VM network

  • Each VM network consists of one or more virtual subnets. A VM network forms an isolation boundary where the virtual machines within a VM network can communicate with each other. As a result, virtual subnets in the same VM network must not use overlapping IP address prefixes.

  • Each VM network has a Routing Domain which identifies the VM network. The Routing Domain ID (RDID), which identifies the VM network, is assigned by datacenter administrators or datacenter management software, such as System Center 2012 R2 Virtual Machine Manager (VMM). The RDID is a Windows GUID — for example, “{11111111-2222-3333-4444-000000000000}”.

Virtual subnets

  • A virtual subnet implements the Layer 3 IP subnet semantics for the virtual machines in the same virtual subnet. The virtual subnet is a broadcast domain (similar to a VLAN). Virtual machines in the same virtual subnet must use the same IP prefix.

  • Each virtual subnet belongs to a single VM network (RDID), and it is assigned a unique Virtual Subnet ID (VSID). The VSID must be unique within the datacenter and must be in the range 4096 to 2^24-2 (16,777,214). A sketch of assigning a VSID to a virtual network adapter follows this list.
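
Datacenter management software normally assigns VSIDs, but the assignment can also be made directly on a Hyper-V host with PowerShell. The following is a minimal sketch; the VM name "ContosoWeb1" and the VSID value are illustrative placeholders.

  # Attach the virtual network adapter of VM "ContosoWeb1" to virtual subnet 5001.
  Set-VMNetworkAdapter -VMName "ContosoWeb1" -VirtualSubnetId 5001

  # Confirm the assignment.
  Get-VMNetworkAdapter -VMName "ContosoWeb1" | Select-Object VMName, VirtualSubnetId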

A key advantage of the VM network and routing domain is that they allow customers to bring their network topologies to the cloud. Figure 2 shows an example where Contoso Corp has two separate networks, the R&D Net and the Sales Net. Because these networks have different routing domain IDs, they cannot interact with each other. That is, Contoso R&D Net is isolated from Contoso Sales Net even though both are owned by Contoso Corp. Contoso R&D Net contains three virtual subnets. Note that both the RDID and the VSID are unique within a datacenter.

Figure 2: Customer networks and virtual subnets

In Figure 2, the virtual machines with VSID 5001 can have their packets routed or forwarded by HNV to virtual machines with VSID 5002 or VSID 5003. Before delivering the packet to the Hyper-V switch, HNV will update the VSID of the incoming packet to the VSID of the destination virtual machine. This will only happen if both VSIDs are in the same RDID. If the VSID that is associated with the packet does not match the VSID of the destination virtual machine, the packet will be dropped. Therefore, virtual network adapters with RDID1 cannot send packets to virtual network adapters with RDID2.

Note

In the packet flow description above, the term “virtual machine” actually means the “virtual network adapter” on the virtual machine. The common case is that a virtual machine only has a single virtual network adapter. In this case, the words virtual machine and virtual network adapter can conceptually mean the same thing. Because a virtual machine can have multiple virtual network adapters, and these virtual network adapters can have different VirtualSubnetIDs (VSIDs) or RoutingDomainIDs (RDIDs), HNV specifically focuses on the packets sent and received between virtual network adapters.

Each virtual subnet defines a Layer 3 IP subnet and a Layer 2 (L2) broadcast domain boundary similar to a VLAN. When a virtual machine broadcasts a packet, the broadcast is limited to the virtual machines that are attached to switch ports with the same VSID. Each VSID can be associated with a multicast address in the PA space, and all broadcast traffic for a VSID is sent on this multicast address.

Note

HNV does NOT depend on broadcast or multicast. For broadcast or multicast packets in a VM network, a PA multicast IP address is used if one is configured. However, many datacenter operators do not enable multicast in their environments. As a result, when a PA multicast address is not available, intelligent PA unicast replication is used. This means that packets are sent by unicast only to the PA addresses that are configured for the particular virtual subnet the packet is on. In addition, only one unicast packet per host is sent, no matter how many relevant virtual machines are on that host.

In addition to being a broadcast domain, the VSID provides isolation. A virtual network adapter in HNV is connected to a Hyper-V switch port that has a VSID ACL. If a packet arrives on this Hyper-V switch port with a different VSID, the packet is dropped. Packets are delivered on a Hyper-V switch port only if the VSID of the packet matches the VSID of the switch port. This is the reason, in the example of Figure 2 above, that packets flowing from VSID 5001 to 5003 must have the VSID in the packet modified before delivery to the destination virtual machine.

If the Hyper-V switch port does not have a VSID ACL, the virtual network adapter that is attached to that switch port is not part of a HNV virtual subnet. Packets sent from a virtual network adapter that does not have a VSID ACL will pass unmodified through the Hyper-V switch.

When a virtual machine sends a packet, the VSID of the Hyper-V switch port is associated with the packet in the out-of-band (OOB) data. On the receiving side, HNV performs a policy lookup on the decapsulated packet and adds the matching VSID to the OOB data before the packet is passed to the Hyper-V switch.

Note

Hyper-V Switch Extensions can operate in both the Provider Address (PA) space and the Customer Address (CA) space. This means the VSID is available to the switch extensions. This allows the switch extension to become multitenant aware. For example, a firewall switch extension can differentiate CA IP address 10.1.1.5 with OOB containing VSID 5001 from the same CA IP address with VSID 6001.

Routing in Hyper-V Network Virtualization

As in physical networks, routing is an important part of HNV. There are two key aspects to understand: how packets are routed between virtual subnets and how packets are routed outside a virtual network.

Routing Between Virtual Subnets

In a physical network, a subnet is the Layer 2 (L2) domain where computers (virtual and physical) can communicate directly with each other without being routed. In Windows, if you statically configure a network adapter you can set a “default gateway,” which is the IP address to which all traffic leaving the subnet is sent so that it can be routed appropriately; this is typically the router for your physical network. HNV uses a built-in router that is part of every host to form a distributed router for a virtual network. This means that every host, specifically the Hyper-V virtual switch, acts as the default gateway for all traffic between virtual subnets that are part of the same VM network. In Windows Server 2012 and Windows Server 2012 R2, the address used as the default gateway is the lowest usable address in the subnet (for example, the “.1” address for a /24 subnet prefix). This address is reserved in each virtual subnet for the default gateway and cannot be used by virtual machines in the virtual subnet.
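
Management software such as VMM normally programs these routes, but they can also be set per host with the NetWNV cmdlets. The following is a minimal sketch of publishing two virtual subnets of one routing domain to the distributed router; the RDID, VSIDs, and CA prefixes are illustrative placeholders.

  # Routes for two virtual subnets in the same VM network (routing domain).
  # NextHop 0.0.0.0 marks the prefix as directly reachable inside the VM network.
  $rdid = "{11111111-2222-3333-4444-000000000000}"
  New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5001 `
      -DestinationPrefix "10.1.1.0/24" -NextHop "0.0.0.0" -Metric 255
  New-NetVirtualizationCustomerRoute -RoutingDomainID $rdid -VirtualSubnetID 5002 `
      -DestinationPrefix "10.1.2.0/24" -NextHop "0.0.0.0" -Metric 255
  # The ".1" address of each prefix (10.1.1.1 and 10.1.2.1) is reserved for the
  # distributed default gateway and must not be assigned to a virtual machine.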

Because HNV acts as a distributed router, all traffic inside a VM network is routed efficiently: each host can route traffic directly to the destination host without an intermediary. This is particularly true when two virtual machines in the same VM network but different virtual subnets are on the same physical host; as you will see later in this section, the packet never has to leave the physical host.

Routing Outside a Virtual Network

Most customer deployments require communication from the HNV environment to resources that are not part of the HNV environment. Network virtualization gateways are required to allow communication between the two environments. Scenarios that require an HNV gateway include private cloud and hybrid cloud; in short, HNV gateways are needed for VPNs and routing.

Gateways can come in different physical form factors. They can be built on Windows Server 2012 R2, incorporated into a Top of Rack (TOR) switch or a load balancer, put into other existing network appliances, or delivered as a new stand-alone network appliance.

Private Cloud (Routing)

Large enterprises may be hesitant, or unable for compliance reasons, to move some of their services and data to a public cloud hoster. However, enterprises still want to obtain the benefits of the cloud provided by HNV by consolidating their datacenter resources into a private cloud. In a private cloud deployment, overlapping IP addresses may not be needed, because corporations typically have sufficient non-routable internal address space (for example, 10.x.x.x or 192.168.x.x). Consider the example shown in Figure 3.

Figure 3: Private Cloud Deployment

Notice in this example that the customer addresses in the virtual subnets are 157.x addresses, and the IP addresses in the non-virtualized part of the network (Corp Net) are also 157.x addresses. In this case, the PA addresses for the virtual subnets in the datacenter are 10.x IP addresses. This deployment allows the enterprise to take advantage of HNV’s flexibility in both virtual machine placement and cross-subnet live migration in the datacenter fabric, which increases datacenter efficiency and thereby reduces both operational expenses (OpEx) and capital expenses (CapEx). In this scenario, the HNV gateway provides routing between the virtualized 157.x subnets and the non-virtualized Corp Net.

Hybrid Cloud (Site to site VPN)

A key advantage of HNV is that it can seamlessly extend an on-premises datacenter to a Windows Server 2012-based cloud datacenter. This is called a hybrid cloud model, as shown in Figure 4.

Figure 4: Hybrid Cloud Deployment

In this scenario an internal subnet, such as the subnet containing web servers, is moved from the Enterprise Network into a cloud hoster’s datacenter. Taking advantage of the Bring Your Own IP Address capability offered by the hoster, the enterprise does not need to change the network configuration of the Web Server virtual machine or of any other network endpoint that references that Web Server. The hoster provides a secure link via an HNV gateway appliance, and the enterprise administrators need only configure their on-premises VPN with the appropriate IP address. The Web Server virtual machine is unaware that it has been moved to the cloud: it remains domain-joined with Active Directory (AD), uses the enterprise’s DNS server, and continues to interact with other servers in the enterprise, such as a SQL Server.

The HNV gateway can support multiple site-to-site (S2S) VPN tunnels, as shown in Figure 5. Note that VMM is not pictured in the diagram but is required for HNV deployments.
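
As a rough illustration, a site-to-site tunnel on a Windows Server 2012 R2 gateway can be configured with the RemoteAccess cmdlets. This is a minimal sketch only; the interface name, destination address, shared secret, and routed enterprise subnet are hypothetical placeholders, and a production multitenant gateway requires additional per-tenant configuration, typically driven by VMM.

  # Install the S2S VPN capability on the gateway server (one-time setup).
  Install-RemoteAccess -VpnType VpnS2S

  # Define an IKEv2 S2S interface to the enterprise's on-premises VPN device.
  Add-VpnS2SInterface -Name "ContosoS2S" -Destination "131.107.0.2" -Protocol IKEv2 `
      -AuthenticationMethod PSKOnly -SharedSecret "PlaceholderKey" `
      -IPv4Subnet @("10.1.1.0/24:100")   # route the enterprise subnet (metric 100) over the tunnel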

Figure 5: HNV Gateway

Packet Encapsulation

Each virtual network adapter in HNV is associated with two IP addresses:

  • Customer Address (CA)   The IP address that is assigned by the customer, based on their intranet infrastructure. This address enables the customer to exchange network traffic with the virtual machine as if it had not been moved to a public or private cloud. The CA is visible to the virtual machine and reachable by the customer.

  • Provider Address (PA)   The IP address that is assigned by the hoster or the datacenter administrators based on their physical network infrastructure. The PA appears in the packets on the network that are exchanged with the server running Hyper-V that is hosting the virtual machine. The PA is visible on the physical network, but not to the virtual machine.

The CAs maintain the customer's network topology, which is virtualized and decoupled from the actual underlying physical network topology and addresses, as implemented by the PAs. The following diagram shows the conceptual relationship between virtual machine CAs and network infrastructure PAs as a result of network virtualization.

Figure 6: Conceptual diagram of network virtualization over physical infrastructure

In the diagram, customer virtual machines send data packets in the CA space, which traverse the physical network infrastructure through their own virtual networks, or “tunnels”. In the example above, the tunnels can be thought of as “envelopes” around the Contoso and Fabrikam data packets with green shipping labels (PA addresses) to be delivered from the source host on the left to the destination host on the right. The keys are how the hosts determine the “shipping addresses” (PAs) corresponding to the Contoso and Fabrikam CAs, how the “envelope” is put around the packets, and how the destination hosts can unwrap the packets and deliver them to the Contoso and Fabrikam destination virtual machines correctly.

This simple analogy highlights the key aspects of network virtualization:

  • Each virtual machine CA is mapped to a physical host PA. There can be multiple CAs associated with the same PA.

  • Virtual machines send data packets in the CA space; these packets are put into an “envelope” with a PA source and destination pair based on the mapping.

  • The CA-PA mappings must allow the hosts to differentiate packets for different customer virtual machines.

As a result, the mechanism to virtualize the network is to virtualize the network addresses used by the virtual machines. The next section describes the actual mechanism of address virtualization.

Network virtualization through address virtualization

HNV supports Network Virtualization using Generic Routing Encapsulation (NVGRE) as the mechanism to virtualize IP addresses:

Generic Routing Encapsulation   This network virtualization mechanism uses Generic Routing Encapsulation (GRE) as part of the tunnel header. In NVGRE, the virtual machine’s packet is encapsulated inside another packet. The header of this new packet has the appropriate source and destination PA IP addresses, in addition to the Virtual Subnet ID, which is stored in the Key field of the GRE header, as shown in Figure 7.

Figure 7: Network virtualization - NVGRE encapsulation

The Virtual Subnet ID allows hosts to identify the customer virtual machine for any given packet, even though the PAs and the CAs on the packets may overlap. This allows all virtual machines on the same host to share a single PA, as shown in Figure 7.

Sharing the PA has a big impact on network scalability. The number of IP and MAC addresses that need to be learned by the network infrastructure can be substantially reduced. For instance, if every end host has an average of 30 virtual machines, the number of IP and MAC addresses that need to be learned by the networking infrastructure is reduced by a factor of 30. The embedded Virtual Subnet IDs in the packets also enable easy correlation of packets to the actual customers.

With Windows Server 2012 and later, HNV fully supports NVGRE out of the box; it does NOT require upgrading or purchasing new network hardware such as NICs (Network Adapters), switches, or routers. This is because the NVGRE packet on the wire is a regular IP packet in the PA space, which is compatible with today’s network infrastructure.

Windows Server 2012 made working with standards a high priority. Along with key industry partners (Arista, Broadcom, Dell, Emulex, Hewlett Packard, and Intel), Microsoft published a draft RFC that describes the use of Generic Routing Encapsulation (GRE), an existing IETF standard, as an encapsulation protocol for network virtualization. For more information, see the following Internet draft: Network Virtualization using Generic Routing Encapsulation. As NVGRE-aware hardware becomes commercially available, the benefits of NVGRE will become even greater.
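
On Windows Server 2012 R2 you can check whether a physical NIC already advertises NVGRE (encapsulated packet) task offload; HNV works without it, but capable hardware reduces the CPU cost of encapsulation. A minimal sketch, with the adapter name as a placeholder:

  # Show NVGRE task-offload capability and state for a physical adapter.
  Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 1"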

Multitenant deployment example

The following diagram shows an example deployment of two customers moving into a cloud datacenter, with the CA-PA relationship defined by the HNV policies.

Figure 8: Multi-tenant deployment example

Consider the example in Figure 8. Prior to moving to the hosting provider's shared IaaS service:

  • Contoso Corp ran a SQL Server (named SQL) at the IP address 10.1.1.11 and a web server (named Web) at the IP address 10.1.1.12, which uses its SQL Server for database transactions.

  • Fabrikam Corp ran a SQL Server, also named SQL and assigned the IP address 10.1.1.11, and a web server, also named Web and also at the IP address 10.1.1.12, that uses its SQL Server for database transactions.

Contoso Corp and Fabrikam Corp move their respective SQL Servers and web servers to the same hosting provider's shared IaaS service where, coincidentally, they run the SQL virtual machines on Hyper-V Host 1 and the Web (IIS7) virtual machines on Hyper-V Host 2. All virtual machines maintain their original intranet IP addresses (their CAs).

Both companies are assigned the following Virtual Subnet IDs (VSIDs) and PAs by their hosting provider when the virtual machines are provisioned:

  • PAs of Contoso Corp's virtual machines: VSID is 5001, SQL is 192.168.1.10, Web is 192.168.2.20

  • PAs of Fabrikam Corp's virtual machines: VSID is 6001, SQL is 192.168.1.10, Web is 192.168.2.20

The hosting provider creates policy settings, consisting of a customer virtual subnet for Fabrikam Corp that maps the CAs of the Fabrikam Corp virtual machines to their assigned PAs and VSID, and a separate customer virtual subnet for Contoso Corp that maps the CAs of the Contoso Corp virtual machines to their assigned PAs and VSID. The provider applies these policy settings to Hyper-V Host 1 and Hyper-V Host 2.
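
In a deployment these policy settings are pushed by management software such as VMM, but conceptually they correspond to NetWNV records like the following minimal sketch for Hyper-V Host 2, using the CAs, PAs, and VSIDs given above. The interface index and MAC addresses are illustrative placeholders; each host also needs the matching customer routes, and Host 1 would register its own provider address instead.

  # Provider address that Hyper-V Host 2 uses on the physical network.
  New-NetVirtualizationProviderAddress -InterfaceIndex 3 -ProviderAddress "192.168.2.20" -PrefixLength 24

  # Contoso Corp (VSID 5001): CA-to-PA mappings for both virtual machines.
  New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.11" -ProviderAddress "192.168.1.10" `
      -VirtualSubnetID 5001 -MACAddress "101010101101" -Rule "TranslationMethodEncap"
  New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.12" -ProviderAddress "192.168.2.20" `
      -VirtualSubnetID 5001 -MACAddress "101010101102" -Rule "TranslationMethodEncap"

  # Fabrikam Corp (VSID 6001): identical CAs, different VSID, so the tenants stay isolated.
  New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.11" -ProviderAddress "192.168.1.10" `
      -VirtualSubnetID 6001 -MACAddress "202020202201" -Rule "TranslationMethodEncap"
  New-NetVirtualizationLookupRecord -CustomerAddress "10.1.1.12" -ProviderAddress "192.168.2.20" `
      -VirtualSubnetID 6001 -MACAddress "202020202202" -Rule "TranslationMethodEncap"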

When the Contoso Corp Web virtual machine on Hyper-V Host 2 queries its SQL Server at 10.1.1.11, the following happens:

Hyper-V Host 2, based on its policy settings, encapsulates the packet, whose original addresses are:

  • Source: 10.1.1.12 (the CA of Contoso Corp Web)

  • Destination: 10.1.1.11 (the CA of Contoso Corp SQL)

The encapsulated packet contains:

  • GRE header with VSID: 5001

  • Outer source: 192.168.2.20 (the PA for Contoso Corp Web)

  • Outer destination: 192.168.1.10 (the PA for Contoso Corp SQL)

When the packet is received at Hyper-V Host 1, the host, based on its policy settings, decapsulates the NVGRE packet, which contains:

  • Outer source: 192.168.2.20 (the PA for Contoso Corp Web)

  • Outer destination: 192.168.1.10 (the PA for Contoso Corp SQL)

  • GRE header with VSID: 5001

The decapsulated packet (the original packet sent from the Contoso Corp Web virtual machine) is delivered to the Contoso Corp SQL virtual machine:

  • Source: 10.1.1.12 (the CA of Contoso Corp Web)

  • Destination: 10.1.1.11 (the CA of Contoso Corp SQL)

When the Contoso Corp SQL virtual machine on Hyper-V Host 1 responds to the query, the following happens:

Hyper-V Host 1, based on its policy settings, encapsulates the packet, whose original addresses are:

  • Source: 10.1.1.11 (the CA of Contoso Corp SQL)

  • Destination: 10.1.1.12 (the CA of Contoso Corp Web)

The packet is encapsulated with:

  • GRE header with VSID: 5001

  • Outer source: 192.168.1.10 (the PA for Contoso Corp SQL)

  • Outer destination: 192.168.2.20 (the PA for Contoso Corp Web)

When the packet is received at Hyper-V Host 2, the host, based on its policy settings, decapsulates the packet, which contains:

  • Outer source: 192.168.1.10 (the PA for Contoso Corp SQL)

  • Outer destination: 192.168.2.20 (the PA for Contoso Corp Web)

  • GRE header with VSID: 5001

The decapsulated packet is delivered to the Contoso Corp Web virtual machine with:

  • Source: 10.1.1.11 (the CA of Contoso Corp SQL)

  • Destination: 10.1.1.12 (the CA of Contoso Corp Web)

A similar process for traffic between the Fabrikam Corp Web and SQL virtual machines uses the HNV policy settings for Fabrikam Corp. As a result, with HNV, Fabrikam Corp and Contoso Corp virtual machines interact as if they were on their original intranets, but they can never interact with each other, even though they use the same IP addresses.

The separate addresses (CAs and PAs), the policy settings of the Hyper-V hosts, and the address translation between CA and PA for inbound and outbound virtual machine traffic isolate these sets of servers. Furthermore, the virtualization mappings and transformations decouple the virtual network architecture from the physical network infrastructure. Although Contoso SQL and Web and Fabrikam SQL and Web reside in their own 10.1.1.0/24 CA IP subnets, their physical deployment spans two hosts in different PA subnets, 192.168.1.0/24 and 192.168.2.0/24, respectively. The implication is that cross-subnet virtual machine provisioning and live migration become possible with HNV.

Note

For more information about packet flow, download the Hyper-V Network Virtualization Packet Flow PowerPoint presentation at https://www.microsoft.com/download/details.aspx?id=34782.

Hyper-V Network Virtualization architecture

In Windows Server 2012, HNV policy enforcement and IP virtualization were performed by a Network Driver Interface Specification (NDIS) Lightweight Filter (LWF) called Windows Network Virtualization (WNV). The WNV filter was located below the Hyper-V switch, as shown in Figure 9. A side effect of this architecture was that switch extensions could only see traffic in the CA address space, not the PA space.

In Windows Server 2012 R2, HNV is now a part of the virtual switch, enabling extensions to gain visibility into both the CA and PA space addresses.

Figure 9: HNV Architecture

Each virtual machine network adapter is configured with an IPv4 address, an IPv6 address, or both. These are the CAs that the virtual machines use to communicate with each other, and they are carried in the IP packets from the virtual machines. HNV virtualizes the CAs to PAs based on the network virtualization policies.

A virtual machine sends a packet with source address CA1, which is virtualized based on HNV policy in the Hyper-V switch. A special network virtualization access control list based on VSID isolates the virtual machine from other virtual machines that are not part of the same virtual subnet or part of the same routing domain.
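
One practical consequence of this architectural change: on Windows Server 2012, the WNV filter must be explicitly bound to the physical network adapter that carries PA traffic, while on Windows Server 2012 R2 no binding step is needed because HNV is part of the virtual switch. A minimal sketch, with the adapter name as a placeholder:

  # Windows Server 2012 only: bind the WNV filter to the PA-facing physical NIC.
  Enable-NetAdapterBinding -Name "Ethernet 1" -ComponentID "ms_netwnv"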

Hyper-V Network Virtualization Policy Management

The Windows platform provides public APIs that datacenter management software can use to manage HNV; Virtual Machine Manager is one such datacenter management product. The management software contains all of the HNV policies. Because the management software must be aware of virtual machines, provisions virtual machines and complete customer virtual networks in the datacenter, and must be multitenant aware, managing HNV policy is a natural extension of policy-based networking.
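
The policy that the management software distributes can be inspected on any host with the NetWNV cmdlets, which is a quick way to verify what a host has actually been given:

  # List the CA-to-PA mappings and customer routes currently applied on this host.
  Get-NetVirtualizationLookupRecord | Format-Table CustomerAddress, ProviderAddress, VirtualSubnetID
  Get-NetVirtualizationCustomerRoute | Format-Table RoutingDomainID, VirtualSubnetID, DestinationPrefix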

Summary

Cloud-based datacenters can provide many benefits, such as improved scalability and better resource utilization. Realizing these benefits requires a technology that fundamentally addresses the issues of multitenant scalability in a dynamic environment. HNV was designed to address these issues and to improve the operational efficiency of the datacenter by decoupling the virtual network topology from the physical network topology. Building on an existing standard, HNV runs in today’s datacenters, and as NVGRE-aware hardware becomes available its benefits will continue to increase. With HNV, customers can now consolidate their datacenters into a private cloud or seamlessly extend their datacenters to a hoster’s environment with a hybrid cloud.

See also

To learn more about HNV, see the following resources:

  • Architecture: Hyper-V Network Virtualization Gateway Architectural Guide

  • Solution Guidance: Deploy highly scalable tenant network infrastructure for hosting providers

  • Hotfixes: Recommended hotfixes, updates and known solutions for Windows Server 2012 and Windows Server 2012 R2 Hyper-V Network Virtualization (HNV) Environments

  • RFC: NVGRE Draft RFC

  • Related Technologies: Hyper-V Virtual Switch Overview