Building Your Cloud Infrastructure: Converged Data Center with File Server Storage

 

Applies To: Windows Server 2012

This document contains the instructions to create a private or public cloud configuration that uses:

  • Two subnets - one for cloud infrastructure traffic (live migration, cluster, storage, and management) and one for tenant traffic.

  • NIC Teaming for network bandwidth aggregation and failover on the infrastructure and tenant subnets.

  • A dedicated file server storage cluster that hosts virtual machine VHDX and configuration files.

  • A dedicated Hyper-V compute cluster that runs the virtual machine workloads.

The design pattern discussed in this document is one of three design patterns we suggest for building the core cloud network, compute, and storage infrastructure. For information about the other two cloud infrastructure design patterns, see the companion documents in this series.

Design Considerations and Requirements for the Converged Data Center with File Server Storage Pattern

The Converged Data Center with File Server Storage cloud infrastructure design pattern focuses on the following key requirements in the areas of networking, compute and storage:

Networking

  • You require that cloud infrastructure traffic be physically separated from cloud tenant traffic. The requirement is met by creating separate NIC teams for infrastructure and tenant traffic and connecting them to different subnets/segments.

  • You require that infrastructure traffic (live migration, cluster, storage, and management) receive guaranteed levels of bandwidth. The requirement is met by using Windows QoS policies in the parent Hyper-V partition.

  • You require that tenant traffic from different tenants receive guaranteed levels of bandwidth. The requirement is met by using Hyper-V virtual switch QoS policies.

  • You require the highest networking performance possible to support hosting virtual machines and VHD/VHDX files on a file server using the SMB 3.0 protocol. The requirement is met by installing and enabling Remote Direct Memory Access (RDMA)-capable network adapters on both the Hyper-V cluster and the file server cluster.
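
If you are not sure whether your adapters support RDMA, you can check from Windows PowerShell once the operating system is installed. A quick sketch follows; the adapter name HosterNet1 is an example used later in this document:

# List all adapters with their RDMA capability and current state
Get-NetAdapterRdma

# Enable RDMA on a capable adapter (example name)
Enable-NetAdapterRdma -Name "HosterNet1"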

Storage

  • You require the ability to scale and manage storage separately from the compute infrastructure. This requirement can be met by creating a dedicated storage cluster that hosts the VHDX and virtual machine configuration files. The Hyper-V compute cluster connects to file shares on the storage cluster by using SMB 3.0. The file server cluster is configured to use the new Windows Server 2012 Scale-Out File Server feature.

  • You require a low-cost storage option. This requirement is met by using Serial Attached SCSI (SAS) disks in shared JBOD enclosures managed through Storage Spaces. Alternatively, each member of the file server failover cluster can connect to iSCSI, Fibre Channel, or Fibre Channel over Ethernet (FCoE) storage. Only the SAS scenario is described in this document.

  • You require a resilient storage solution. This requirement is met by using two file servers configured as a failover cluster with well-connected (shared JBOD) storage, so that every member of the file server failover cluster is directly connected to the disks. Storage Spaces is configured as a mirrored space to guard against data loss in the case of disk failures. In addition, each member of the file server failover cluster can access the shared storage by using Windows Server 2012 Failover Clustering and Cluster Shared Volumes version 2 (CSV v2) volumes to store virtual machine files and metadata.

Compute

  • You require the ability to scale your compute infrastructure separately from your storage infrastructure. This requirement is met by creating a dedicated Hyper-V compute cluster that connects to remote file server storage for virtual machines and virtual machine configuration files. Local disks on the compute nodes are used only for the boot partition, not for virtual machine storage.

  • You require that virtual machines will be continuously available and resilient to hardware failures. This requirement can be met by using Windows Server 2012 Failover Clustering together with the Hyper-V Server Role.

  • You require the highest number of virtual machines possible per host server (that is, increased density). This requirement is met by using processor offload technologies, such as Remote Direct Memory Access (RDMA), Receive Side Scaling (RSS), Receive Segment Coalescing (RSC), and Data Center Bridging (DCB).

Overview

The Windows Server® 2012 cloud infrastructure described in this document is a high-performing, highly available Hyper-V cluster that hosts virtual machines, can be managed to create private or public clouds, is connected to a converged 10 GbE network, and uses dedicated file servers as the storage nodes. This document explains how to configure the basic building blocks for such a cloud. It does not cover System Center or other management software aspects of deployment; the focus is on configuring the core Windows Server computers that are used to build the cloud infrastructure.

For background information on creating clouds using Windows Server 2012, see Building Infrastructure as a Service Clouds using Windows Server "8".

This cloud configuration consists of the following:

  • Multiple computers in a dedicated Hyper-V compute cluster.

    A Hyper-V cluster is created using the Windows Server 2012 Failover Cluster feature. The Windows Server 2012 Failover Clustering feature set is tightly integrated with the Hyper-V server role and enables a high level of availability from a compute and networking perspective. In addition, Windows Server 2012 Failover Clustering enhances virtual machine mobility which is critical in a cloud environment. The Hyper-V cluster is a dedicated compute cluster and does not host storage for virtual machines and virtual machine configuration files.

  • Multiple computers in a dedicated Scale-out File Server storage cluster.

    A File Server cluster is created using the Windows Server 2012 Failover Cluster feature. Windows Server 2012 includes a new file server capability known as "Scale-out File Server for applications" that enables you to store virtual machine and virtual machine configuration files in a file share and make these files continuously available. When you separate the file server cluster from the compute cluster you are able to scale compute and storage resources independently.

  • A converged networking infrastructure that supports physical segmentation of infrastructure and tenant traffic.

    Each computer in the Hyper-V failover cluster must have at least two network adapters so that one adapter can host the cloud infrastructure traffic and one adapter can support tenant traffic. If resiliency against NIC failures is required, then you can add additional network adapters on each of the networks, and team them using Windows Server 2012 Load Balancing and Failover (LBFO) NIC Teaming. The NICs can be 10 GbE or 1 GbE network adapters. These NICs will be used for live migration, cluster, storage, management (together referred to as "infrastructure" traffic) and tenant traffic.

  • The appropriate networking hardware (e.g. Ethernet switches, cables, etc.) to connect all of the computers in the Hyper-V cluster to each other and to a larger network from which the hosted virtual machines are available.

Figure 1 provides a high-level view of the scenario layout. Key elements of the configuration include:

  • A File Server cluster that hosts the virtual hard disks and virtual machine configuration files.

  • The File Server cluster is connected only to the infrastructure network.

  • The File Server cluster is connected to block storage either through HBAs or over a 10 GbE network (as in the case of iSCSI or Fibre Channel over Ethernet [FCoE]).

  • A Hyper-V compute cluster that hosts the virtual machine workloads.

  • The Hyper-V compute cluster is connected to the datacenter network and the tenant network using teamed network adapters.

  • Cloud infrastructure traffic flows to and from the host-based 10 GbE network adapter team.

  • Tenant network traffic to and from the virtual machines flows through the Hyper-V virtual switch, which is bound to the Tenant Network NIC team.

Note

Although this configuration uses SAS storage on the file server cluster, you can easily choose to use other types of storage, such as iSCSI or Fibre Channel-based SAN storage. You can find more information about storage configuration for a non-SAS scenario in the document Building Your Cloud Infrastructure: Non-Converged Enterprise Configuration, which describes how to configure the SAN storage.

Note

At least one Active Directory Domain Services (AD DS) domain controller is needed for centralized security and management of the cluster member computers (not shown). It must be reachable by all of the cluster member computers, including the members of the shared storage cluster. DNS services are also required and are not depicted.

This configuration highlights the following technologies and features of Windows Server 2012:

  • Load Balancing and Failover (LBFO): LBFO logically combines multiple network adapters to provide bandwidth aggregation and traffic failover to prevent connectivity loss in the event of a network component failure. Load Balancing and Failover is also known as NIC Teaming in Windows Server 2012.

  • Windows Server Quality of Service (QoS): Windows Server 2012 includes improved QoS features that let you manage bandwidth and provide predictable network performance for traffic to and from the host operating system.

  • Data Center Bridging (DCB): DCB provides hardware-based bandwidth allocation to a specific type of traffic and enhances Ethernet transport reliability with the use of priority-based flow control. Hardware-based bandwidth allocation is essential if traffic bypasses the operating system and is offloaded to a converged network adapter, which might support Internet Small Computer System Interface (iSCSI), Remote Direct Memory Access (RDMA) over Converged Ethernet, or Fibre Channel over Ethernet (FCoE).

  • Storage Spaces: Storage Spaces makes it possible for you to create cost-effective disk pools that present themselves as a single mass storage location on which virtual disks or volumes can be created and formatted.

The following sections describe how to set up this cloud configuration using UI-based tools and Windows PowerShell.

After the cloud is built, you can validate the configuration by doing the following:

  • Install and configure virtual machines

  • Migrate running virtual machines between servers in the Hyper-V cluster (live migration)

Install and configure the Converged Data Center with File Server Storage cloud infrastructure

This section covers, step by step, how to configure the cloud infrastructure compute and storage scale units described in this document.

Creating this cloud infrastructure configuration consists of the following steps:

  • Step 1: Initial node configuration

  • Step 2: Initial network configuration

  • Step 3: Initial storage configuration

  • Step 4: File Server failover cluster setup

  • Step 5: Hyper-V failover cluster setup

  • Step 6: Configure share and Hyper-V settings using a script

  • Step 7: Cloud validation

The following list summarizes the steps that this document describes:

Step 1: Initial Node Configuration for Compute and Storage Clusters (Target: All Nodes)

  • 1.1-Enable BIOS settings required for Hyper-V on the nodes in the Hyper-V cluster

  • 1.2-Perform a clean operating system installation on all nodes in the Hyper-V and File Server clusters

  • 1.3-Perform post installation tasks on all nodes in the Hyper-V and File Server clusters:

    • Set Windows PowerShell execution policy

    • Enable Windows PowerShell remoting

    • Enable Remote Desktop Protocol and Firewall rules

    • Join the domain

  • 1.4-Install roles and features using default settings, rebooting as needed on the Hyper-V failover cluster:

    • Hyper-V (plus management tools)

    • Data Center Bridging (DCB)

    • Failover clustering (plus management tools)

  • 1.5-Install roles and features using default settings, rebooting as needed on the File Server failover cluster:

    • Failover clustering (plus management tools)

    • File Server Role

    • File Sharing and storage management tools

Step 2: Initial Network Configuration (Target: All Nodes)

  • 2.1-Disable unused and disconnected interfaces and rename active connections

  • 2.2-Create the infrastructure network NIC team and the tenant NIC team on each member of the Hyper-V Cluster and assign IP addressing information.

  • 2.3-Create the infrastructure NIC team on each member of the file server cluster and assign IP addressing information

  • 2.4-Configure QoS settings for infrastructure traffic

Step 3: Initial Storage Configuration (Target: Single Node)

  • 3.1-Present all shared storage to relevant nodes

  • 3.2-For multipath scenarios, install and configure multipath I/O (MPIO) as necessary

  • 3.3-All shared disks: Wipe, bring online and initialize

Step 4: File Server Failover Cluster Setup (Target: Single Node)

  • 4.1-Run through the Cluster Validation Wizard

  • 4.2-Address any indicated warnings and/or errors

  • 4.3-Complete the Create Cluster Wizard (setting name and IP but do not add eligible storage)

  • 4.4-Create the cluster storage pool

  • 4.5-Create the quorum virtual disk

  • 4.6-Create the virtual machine storage virtual disk

  • 4.7-Add the virtual machine storage virtual disk to Cluster Shared Volumes

  • 4.8-Add folders to the cluster shared volume

  • 4.9-Configure Quorum Settings

  • 4.10-Add the Scale-Out File Server for Applications Role

Step 5: Hyper-V Failover Cluster Setup (Target: Single Node)

  • 5.1-Run through the Cluster Validation Wizard

  • 5.2-Address any indicated warnings and/or errors

  • 5.3-Complete the Create Cluster Wizard

  • 5.4-Verify cluster quorum configuration and modify as necessary

  • 5.5-Configure cluster networks

Step 6: Configure Share and Hyper-V Settings Using a Script (Target: Single Node)

  • 6.1-Create Shares and Configure Hyper-V Settings using a Script

  • 6.2-Configure Kerberos Constrained Delegation

Step 7: Cloud Validation (Target: Single Node)

  • 7.1-Create the TenantNetSwitch

  • 7.2-Create a New Virtual machine

  • 7.3-Test network connectivity from the virtual machine

  • 7.4-Perform a Live Migration

  • 7.5-Perform a quick migration

Step 1: Initial node configuration

In step 1, you will perform the following steps on all nodes of the Hyper-V and File Server clusters:

  • 1.1 Enable BIOS settings required for Hyper-V on the nodes in the Hyper-V cluster.

  • 1.2 Perform a clean operating system installation on all nodes in the Hyper-V and File Server clusters.

  • 1.3 Perform post-installation tasks on all nodes in the Hyper-V and File Server clusters.

  • 1.4 Install roles and features using the default settings on the Hyper-V failover cluster.

  • 1.5 Install roles and features using the default settings on the file server failover cluster.

1.1 Enable BIOS settings required for Hyper-V on the Nodes in the Hyper-V Cluster

You will need to enable virtualization support in the BIOS of each cluster member prior to installing the Hyper-V server role. The procedure for enabling processor virtualization support will vary with your processors' make and model and the system BIOS. Please refer to your hardware documentation for the appropriate procedures. In addition, confirm that all systems have the latest BIOS updates installed.

1.2 Perform a clean operating system installation on all nodes in the Hyper-V and File Server Clusters

Install Windows Server 2012 using the Full Installation option.

1.3 Perform post-installation tasks on all nodes in the Hyper-V and File Server Clusters

There are several tasks you need to complete on each node of the compute and file server clusters after the operating system installation is complete. These include:

  • Join each node to the domain

  • Enable remote access to each node via the Remote Desktop Protocol.

  • Set the Windows PowerShell execution policy.

  • Enable Windows PowerShell remoting.

Perform the following steps to join each node to the domain:

  1. Press the Windows Key on the keyboard and then press R. Type Control Panel and then click OK.

  2. In the Control Panel window, click System and Security, and then click System.

  3. In the System window under Computer name, domain, and workgroup settings, click Change settings.

  4. In the System Properties dialog box, click Change.

  5. Under Member of, click Domain, type the name of the domain, and then click OK.

Run the following Windows PowerShell commands on each node of the compute and file server clusters to enable remote access using the Remote Desktop Protocol (RDP), set the PowerShell execution policy, and enable PowerShell remoting:

# Allow Remote Desktop connections and update the corresponding firewall exception
(Get-WmiObject Win32_TerminalServiceSetting -Namespace root\cimv2\terminalservices).SetAllowTsConnections(1,1)
# Allow scripts to run on this node
Set-ExecutionPolicy Unrestricted -Force
# Enable Windows PowerShell remoting (WinRM)
Enable-PSRemoting -Force

1.4 Install roles and features using the default settings on the Hyper-V Failover Cluster

The following roles and features will be installed on each node of the Hyper-V compute cluster:

  • Hyper-V and Hyper-V management Tools

  • Data Center Bridging (DCB)

  • Failover cluster and failover cluster management tools

Perform the following steps on each node in the cluster to install the required roles and features:

  1. In Server Manager, click Dashboard in the console tree.

  2. In Welcome to Server Manager, click 2 Add roles and features, and then click Next.

  3. On the Before You Begin page of the Add Roles and Features Wizard, click Next.

  4. On the Installation Type page, click Next.

  5. On the Server Selection page, click Next.

  6. On the Server Roles page, select Hyper-V from the Roles list. In the Add Roles and Features Wizard dialog box, click Add Features. Click Next.

  7. On the Features page, select Data Center Bridging and Failover Clustering from the Features list. In the Add Roles and Features Wizard dialog box, click Add Features. Click Next.

    Note

    If you plan to use Multipath I/O for your storage solution, select the Multipath I/O feature while performing step 7.

  8. On the Hyper-V page, click Next.

  9. On the Virtual Switches page, click Next.

  10. On the Migration page, click Next.

  11. On the Default Stores page, click Next.

  12. On the Confirmation page, put a checkmark in the Restart the destination server automatically if required checkbox and then in the Add Roles and Features dialog box click Yes, then click Install.

  13. On the Installation progress page, click Close after the installation has succeeded.

  14. Restart the computer. This process might require restarting the computer twice. If so, the installer will trigger the multiple restarts automatically.

After you restart the server, open Server Manager and confirm that the installation completed successfully. Click Close on the Installation Progress page.
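
If you prefer Windows PowerShell to Server Manager, a single command can install the same roles and features with default settings. This is a sketch; run it in an elevated PowerShell window on each Hyper-V node, and add Multipath-IO to the list if you plan to use MPIO:

# Install Hyper-V, DCB, and Failover Clustering plus their management tools,
# restarting automatically if the installation requires it
Install-WindowsFeature Hyper-V, Data-Center-Bridging, Failover-Clustering -IncludeManagementTools -Restart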

1.5 Install roles and features using the default settings on the File Server Failover Cluster

The following roles and features will be installed on each node of the file server failover cluster:

  • Failover cluster and failover cluster management tools

  • Data Center Bridging (DCB)

  • Storage management tools

Perform the following steps on each node in the file server failover cluster to install the required roles and features:

  1. In Server Manager, click Dashboard in the console tree.

  2. In Welcome to Server Manager, click 2 Add roles and features, and then click Next.

  3. On the Before You Begin page of the Add Roles and Features Wizard, click Next.

  4. On the Installation Type page, click Next.

  5. On the Server Selection page, click Next.

  6. On the Server Roles page, expand the File and Storage Services node, then expand the File and iSCSI Services node and select File Server. Click Next.

  7. On the Features page, select Data Center Bridging and Failover Clustering from the Features list. In the Add Roles and Features Wizard dialog box, click Add Features. Expand Remote Server Administration Tools and then expand Role Administration Tools. Expand File Services Tools. Select Share and Storage Management Tool. Click Next.

    Note

    If you plan to use Multipath I/O for your storage solution, select the Multipath I/O feature while performing step 7.

  8. On the Confirmation page, put a checkmark in the Restart the destination server automatically if required checkbox and then in the Add Roles and Features dialog box click Yes, then click Install.

  9. On the Installation progress page, click Close after the installation has succeeded.

  10. Restart the computer. This process might require restarting the computer twice. If so, the installer will trigger the multiple restarts automatically.

After you restart the server, open Server Manager and confirm that the installation completed successfully. Click Close on the Installation Progress page.
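
The equivalent Windows PowerShell sketch for the file server nodes installs the File Server role service and tools instead of Hyper-V (again, add Multipath-IO if needed):

# Install the File Server role service, DCB, Failover Clustering, and the file services tools
Install-WindowsFeature FS-FileServer, Data-Center-Bridging, Failover-Clustering, RSAT-File-Services -IncludeManagementTools -Restart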

Step 2: Initial network configuration

You need to configure the network on each node in both the compute and file server clusters. The networking configuration on the compute cluster supports the scenario where infrastructure and tenant traffic are physically separated: infrastructure traffic (live migration, cluster, storage, and management) uses a NIC team in the host partition, and tenant traffic moves through the Hyper-V virtual switch bound to the tenant NIC team. The network configuration on the file server cluster connects it to the infrastructure network. You will perform the following procedures to complete the initial networking configuration for the compute and file server clusters:

  • 2.1 Disable unused and disconnected interfaces and rename active connections.

  • 2.2 Create the infrastructure network NIC team and the tenant network NIC team on each member of the Hyper-V cluster.

  • 2.3 Create the infrastructure network NIC team on each member of the file server cluster.

  • 2.4 Configure QoS settings for infrastructure traffic

2.1 Disable unused and disconnected interfaces and rename active connections

You can simplify the configuration and avoid errors when running the wizards and PowerShell commands by disabling all network interfaces that are either unused or disconnected. You can disable these network interfaces in the Network Connections window.

For the remaining network adapters for all servers in the compute and storage failover clusters, do the following:

  1. Connect them to the appropriate network switch ports.

  2. To help you more easily recognize the active network adapters, rename them with names that indicate their use or their connection to the intranet or Internet (for example, HosterNet1 and HosterNet2 for the infrastructure network NICs and TenantNet1 and TenantNet2 for the tenant network NICs). You can do this in the Network Connections window.
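
Both tasks can also be scripted. In the following sketch the original adapter names (Ethernet, Ethernet 2, and so on) and the new names are examples; review the output of Get-NetAdapter before disabling anything:

# Review the adapters and their status first
Get-NetAdapter

# Disable every adapter that is currently disconnected
Get-NetAdapter | Where-Object Status -eq 'Disconnected' | Disable-NetAdapter -Confirm:$false

# Rename the active adapters to reflect their role
Rename-NetAdapter -Name "Ethernet"   -NewName "HosterNet1"
Rename-NetAdapter -Name "Ethernet 2" -NewName "HosterNet2"
Rename-NetAdapter -Name "Ethernet 3" -NewName "TenantNet1"
Rename-NetAdapter -Name "Ethernet 4" -NewName "TenantNet2"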

2.2 Create the infrastructure and the tenant networks NIC teams on each member of the Hyper-V cluster

Load Balancing and Failover (LBFO) enables bandwidth aggregation and network adapter failover to prevent connectivity loss in the event of a network card or port failure. This feature is commonly referred to as "NIC Teaming". In this scenario you will create two teams on each Hyper-V cluster node: one connected to the HosterNet subnet (the cloud infrastructure network) and one connected to the TenantNet subnet (the tenant network).

To configure the network adapter teams by using Server Manager, do the following on each computer in the Hyper-V compute cluster:

Note

Some steps in the following procedure will temporarily interrupt network connectivity. We recommend that all servers be accessible over a keyboard, video, and mouse (KVM) switch so that you can check on the status of these machines if network connectivity is unavailable for more than five minutes.

  1. From Server Manager, click Local Server in the console tree.

  2. In Properties, click Disabled, which you'll find next to Network adapter teaming.

  3. In the NIC Teaming window, click the name of the server computer in Servers.

  4. In Teams, click Tasks, and then click New Team.

  5. In the New Team window, in the Team Name text box, enter the name of the network adapter team for the infrastructure traffic subnet (example: HosterNetTeam).

  6. In the Member adapters list select the two network adapters connected to the converged traffic subnet (in this example, HosterNet1 and HosterNet2), and then click OK. Note that there may be a delay of several minutes before connectivity is restored after making this change. To ensure that you see the latest state of the configuration, right click your server name in the Servers section in the NIC Teaming window and click Refresh Now. There may be a delay before the connection displays as Active. You may need to refresh several times before seeing the status change.

  7. Repeat the procedure to create a NIC Team for the tenant network NICs and give it an informative name, such as TenantNetTeam.

  8. Close the NIC Teaming window.

  9. Assign static IP addresses to your NIC teams.

Configure static IPv4 addressing information for the new network adapter teams connected to the infrastructure and tenant traffic subnets (example: HosterNetTeam and TenantNetTeam). The IP addresses are the ones that you will use when connecting to the host system for management purposes. You can configure the IP addressing information in the Properties of each team in the Network Connections window. You will see new adapters whose names are the team names you assigned in steps 5 and 7. You will lose connectivity for a few moments after assigning the new IP addressing information.

Note

You might need to manually refresh the display of the NIC Teaming window to show the new team and there may be a delay in connectivity as the network adapter team is created. If you are managing this server remotely, you might temporarily lose connectivity to the server.
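
If you prefer to script the teaming and addressing on each Hyper-V node, the following sketch uses the example team names from this procedure and example IP addressing from the HosterNet subnet used later in this document; substitute your own values:

# Create the infrastructure and tenant teams (switch-independent teaming is the default)
New-NetLbfoTeam -Name "HosterNetTeam" -TeamMembers "HosterNet1","HosterNet2" -Confirm:$false
New-NetLbfoTeam -Name "TenantNetTeam" -TeamMembers "TenantNet1","TenantNet2" -Confirm:$false

# Assign static IPv4 addressing to the infrastructure team interface (example values)
New-NetIPAddress -InterfaceAlias "HosterNetTeam" -IPAddress 10.7.124.21 -PrefixLength 24 -DefaultGateway 10.7.124.1
Set-DnsClientServerAddress -InterfaceAlias "HosterNetTeam" -ServerAddresses 10.7.124.10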

2.3 Create the infrastructure network NIC team on each member of the File Server cluster

You are now ready to team the network adapters on the servers in the file server failover cluster. The file server failover cluster is connected only to the infrastructure network, so you will create a single team on each server in the failover cluster.

To configure the network adapter teams by using Server Manager, do the following on each computer in the file server failover cluster:

Note

Several steps in the following procedure will temporarily interrupt network connectivity. We recommend that all servers be accessible over a keyboard, video, and mouse (KVM) switch so that you can check on the status of these machines if network connectivity is unavailable for more than five minutes.

  1. From Server Manager, click Local Server in the console tree.

  2. In Properties, click Disabled, which you'll find next to Network adapter teaming.

  3. In the NIC Teaming window, click the name of the server computer in Servers.

  4. In Teams, click Tasks, and then click New Team.

  5. In the New Team window, in the Team Name text box, enter the name of the network adapter team for the converged traffic subnet (example: HosterNetTeam).

  6. In the Member adapters list select the two network adapters connected to the converged traffic subnet (in this example, HosterNet1 and HosterNet2), and then click OK. Note that there may be a delay of several minutes before connectivity is restored after making this change. To ensure that you see the latest state of the configuration, right click your server name in the Servers section in the NIC Teaming window and click Refresh Now. There may be a delay before the connection displays as Active. You may need to refresh several times before seeing the status change.

  7. Close the NIC Teaming window.

Configure a static IPv4 addressing configuration for the new network adapter team connected to the infrastructure traffic subnet (example: HosterNetTeam). This IP address is the one that you will use when connecting to the host system for management purposes. You can do this in the Properties of the team in the Network Connections window. You will see a new adapter whose name is the team name you assigned in step 5. You will lose connectivity for a few moments after assigning the new IP addressing information.

Note

You might need to manually refresh the display of the NIC Teaming window to show the new team and there may be a delay in connectivity as the network adapter team is created. If you are managing this server remotely, you might temporarily lose connectivity to the server.

2.4 Configure QoS settings for infrastructure traffic

If your 10 GbE network adapters and switches support Data Center Bridging (DCB) and you want to offload QoS enforcement to hardware, you can take advantage of the Windows Server 2012 support for DCB.

Whether or not you use DCB, you must create QoS policies to classify and tag each network traffic type. Use the New-NetQosPolicy Windows PowerShell cmdlet to create a QoS policy for each type of traffic on each computer in the cluster. Here are some example commands (make sure to open all PowerShell windows as administrator):

New-NetQosPolicy -Name "Live Migration policy" -LiveMigration -MinBandwidthWeightAction 20
New-NetQosPolicy -Name "SMB policy" -SMB -MinBandwidthWeightAction 50
New-NetQosPolicy -Name "Cluster policy" -IPProtocolMatchCondition UDP -IPDstPortMatchCondition 3343 -MinBandwidthWeightAction 20
New-NetQosPolicy -Name "Management policy" -DestinationAddress 10.7.124.0/24 -MinBandwidthWeightAction 10

These commands use the MinBandwidthWeightAction parameter, which specifies a minimum bandwidth as a relative weighting of the total. The -LiveMigration and -SMB filters are built into Windows Server 2012. They match packets sent to TCP port 6600 (live migration) and TCP port 445 (the SMB protocol used for file storage), respectively. The cluster service traffic uses UDP port 3343. The example for management traffic is the address range 10.7.124.0/24, which corresponds to the Hoster subnet in this example.

As a result, live migration, SMB, cluster, and management traffic will have roughly 20 percent, 50 percent, 20 percent, and 10 percent of the total bandwidth, respectively. To display the resulting traffic classes, run the Get-NetQosTrafficClass Windows PowerShell command.

If you are not using DCB, Windows will enforce performance isolation through these QoS policies.

If your network adapters and switch support DCB, enable DCB on the network adapters attached to the converged network subnet using Windows PowerShell. The following are examples:

Enable-NetAdapterQos "HosterNet1"
Enable-NetAdapterQos "HosterNet2"

Note that HosterNet1 and HosterNet2 are the names assigned to the individual network adapters and not the NIC Team names.

To verify the settings on a network adapter, use the Get-NetAdapterQos cmdlet. The following is example output after running the Windows PowerShell command (note that if QoS is not supported on the adapter you will see an error in the output):

Network Adapter Name    : HosterNet1
QOS Enabled             : True
MACsec Bypass Supported : False
Pre-IEEE DCBX Supported : True
IEEE DCBX Supported     : False
Traffic Classes (TCs)   : 8
ETS-Capable TCs         : 8
PFC-Enabled TCs         : 8
Operational TC Mappings : TC TSA Bandwidth Priorities
                          -- --- --------- ----------
                          0  ETS 20%       0-3,5,7
                          1  ETS 50%       4
                          2  ETS 30%       6
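
The operational TC mappings in the sample output come from DCB traffic classes defined on the adapter and switch. The following sketch shows how traffic classes like those shown above could be created; the priority values and percentages are examples and must match your switch configuration:

# Use the locally configured DCB settings rather than accepting them from the switch
Set-NetQosDcbxSetting -Willing $false

# Define ETS traffic classes similar to the sample output (priorities 4 and 6 are examples)
New-NetQosTrafficClass -Name "SMB class" -Priority 4 -Algorithm ETS -BandwidthPercentage 50
New-NetQosTrafficClass -Name "Live migration class" -Priority 6 -Algorithm ETS -BandwidthPercentage 30

# Enable priority-based flow control for the SMB priority
Enable-NetQosFlowControl -Priority 4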

For an example of a Windows PowerShell script that configures LBFO and performance isolation settings, see ConfigureNetworking.ps1 which can be downloaded at: https://gallery.technet.microsoft.com/scriptcenter/Windows-Server-2012-Cloud-e0b7753a

Step 3: Initial storage configuration

With the initial cluster node configuration complete, you are ready to perform initial storage configuration tasks on all nodes of the cluster. Initial storage configuration tasks include:

  • 3.1 Present all shared storage to relevant nodes.

  • 3.2 Install and configure MPIO as necessary for multipath scenarios.

  • 3.3 Wipe, bring online, and initialize all shared disks.

3.1 Present all shared storage to relevant nodes

In a SAS scenario, connect the SAS adapters to each storage device. Each cluster node should have two SAS adapters if highly available access to storage is required.

3.2 Install and configure MPIO as necessary for multipath scenarios

If you have multiple data paths to storage (for example, two SAS cards), make sure to install the Multipath I/O (MPIO) feature on each node. This step might require you to restart the system. For more information about MPIO, see What's New in Microsoft Multipath I/O.

3.3 Wipe, bring online, and initialize all shared disks

To prevent issues with the storage configuration procedures that are detailed later in this document, confirm that the disks in your storage solution have not been previously provisioned. The disks should have no partitions or volumes. Bring the disks online and initialize them so that each disk has a master boot record (MBR) or GUID partition table (GPT). You can use the Disk Management console or Windows PowerShell to accomplish this task. This task must be completed on each node in the cluster.

Note

If you have previously configured these disks with Windows Server 2012 Storage Spaces pools, you will need to delete these storage pools prior to proceeding with the storage configuration described in this document. Please refer to the TechNet Wiki Article "How to Delete Storage Pools and Virtual Disks Using PowerShell".
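
If you prefer Windows PowerShell to the Disk Management console, the following sketch illustrates the task for a single disk. The disk number is an example, and Clear-Disk destroys all data on the disk it targets, so review the output of Get-Disk carefully first:

# List the disks and note the numbers of the shared disks (never the boot disk)
Get-Disk

# Bring one shared disk online, wipe it, and initialize it (example: disk 3)
Set-Disk -Number 3 -IsOffline $false
Clear-Disk -Number 3 -RemoveData -Confirm:$false
Initialize-Disk -Number 3 -PartitionStyle GPT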

Step 4: File server failover cluster setup

You are now ready to complete the failover cluster settings for the file server cluster. Failover cluster setup includes the following steps:

  • 4.1 Run through the Cluster Validation Wizard.

  • 4.2 Address any indicated warnings and/or errors.

  • 4.3 Complete the Create Failover Cluster Wizard.

  • 4.4 Create the clustered storage pool.

  • 4.5 Create the quorum virtual disk.

  • 4.6 Create the virtual machine storage virtual disk.

  • 4.7 Add the virtual machine storage virtual disk to Cluster Shared Volumes.

  • 4.8 Add folders to the cluster shared volume.

  • 4.9 Configure Quorum Settings.

  • 4.10 Add the Scale-Out File Server for Applications role.

4.1 Run through the Cluster Validation Wizard

The Cluster Validation Wizard will query multiple components in the intended cluster hosts and confirm that the hardware and software are ready to support failover clustering. On one of the nodes in the file server cluster, perform the following steps to run the Cluster Validation Wizard:

  1. In the Server Manager, click Tools, and then click Failover Cluster Manager.

  2. In the Failover Cluster Manager console, in the Management section, click Validate Configuration.

  3. On the Before You Begin page of the Validate a Configuration Wizard, click Next.

  4. On the Select Servers or a Cluster page, type the name of the local server, and then click Add. After the name appears in the Selected servers list, type the name of another file server cluster member computer, and then click Add. Repeat this step for all computers in the file server cluster. When all of the servers of the file server cluster appear in the Selected servers list, click Next.

  5. On the Testing Options page, click Next.

  6. On the Confirmation page, click Next. The time to complete the validation process will vary with the number of nodes in the cluster and can take some time to complete.

  7. On the Summary page, the summary text will indicate that the configuration is suitable for clustering. Confirm that there is a checkmark in the Create the cluster now using the validated nodes... checkbox.

4.2 Address any indicated warnings and/or errors

Click the Reports button to see the results of the Cluster Validation. Address any issues that have led to cluster validation failure. After correcting the problems, run the Cluster Validation Wizard again. After the cluster passes validation, then proceed to the next step. Note that you may see errors regarding disk storage. You may see this if you haven't yet initialized the disks. Click Finish.

4.3 Complete the Create Failover Cluster Wizard

After passing cluster validation, you are ready to complete the cluster configuration.

Perform the following steps to complete the cluster configuration:

  1. On the Before You Begin page of the Create Cluster Wizard, click Next.

  2. On the Access Point for Administering the Cluster page, enter a valid NetBIOS name for the cluster, select the network that you want the cluster on (in this example, the management network), type a static IP address for the cluster, and then click Next. Clear the checkboxes for all other networks that appear here.

  3. On the Confirmation page, clear the Add all eligible storage to the cluster checkbox and then click Next.

  4. On the Creating New Cluster page you will see a progress bar as the cluster is created.

  5. On the Summary page, click Finish.

  6. In the console tree of the Failover Cluster Manager snap-in, open the Networks node under the cluster name.

  7. Right-click the cluster network that corresponds to the management network adapter network ID (subnet), and then click Properties. On the General tab, confirm that Allow cluster communications on this network is not selected and that Allow clients to connect through this network is enabled. In the Name text box, enter a friendly name for this network (for example, ManagementNet), and then click OK.

  8. Right-click the cluster network that corresponds to the Cluster network adapter network ID (subnet) and then click Properties. On the General tab, confirm that Allow cluster communications on this network is selected and that Allow clients to connect through this network is not enabled. In the Name text box, enter a friendly name for this network (for example, ClusterNet), and then click OK.

  9. Right-click the cluster network that corresponds to the live migration network adapter network ID (subnet) and then click Properties. On the General tab, confirm that Allow cluster communications on this network is selected and that Allow clients to connect through this network is not enabled. In the Name text box, enter a friendly name for this network (for example, LiveMigrationNet), and then click OK.

4.4 Create a cluster storage pool

Perform the following steps on one of the members of the cluster to create the storage pool:

  1. In the left pane of the Failover Cluster Manager, expand the server name and then expand the Storage node. Click Storage Pools.

  2. In the Actions pane, click New Storage Pool.

  3. On the Before You Begin page, click Next.

  4. On the Storage Pool Name page, enter a name for the storage pool in the Name text box. Enter an optional description for the storage pool in the Description text box. In the Select the group of available disks (also known as a primordial pool) that you want to use list, select the name you assigned to the cluster (this is the NetBIOS name you assigned to the cluster when you created the cluster). Click Next.

  5. On the Physical Drives page, select the drives that you want to participate in the storage pool. Then click Next.

  6. On the Confirmation page, confirm the settings and click Create.

  7. On the Results page, you should receive the message You have successfully completed the New Storage Pool Wizard. Remove the checkmark from the Create a virtual disk when the wizard closes checkbox. Then click Close.
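
The same pool can be created with Windows PowerShell. In this sketch the pool name CloudPool is an example, and the clustered Storage Spaces subsystem is located by a wildcard match on its friendly name:

# Find the clustered storage subsystem and the disks that are eligible for pooling
$subsystem = Get-StorageSubSystem | Where-Object FriendlyName -like "Clustered*"
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool from all poolable disks
New-StoragePool -FriendlyName "CloudPool" -StorageSubSystemUniqueId $subsystem.UniqueId -PhysicalDisks $disks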

4.5 Create the quorum virtual disk

Now that you have created the storage pool, you can create virtual disks within that storage pool. A virtual disk is sometimes called a logical unit number (LUN); it represents a collection of one or more physical disks from the previously created storage pool. The layout of data across the physical disks can increase the reliability and performance of the virtual disk.

You will need to create at least two virtual disks:

  • A virtual disk that can be used as a quorum witness disk. This disk can be configured as a 1 GB virtual disk.

  • A virtual disk that will be assigned to a cluster shared volume.

Perform the following steps to create the quorum virtual disk:

  1. In the Failover Cluster Manager console, expand the Storage node in the left pane of the console. Click Pools, right-click the storage pool you created earlier, and then click New Virtual Disk.

  2. In the New Virtual Disk Wizard on the Before You Begin page, click Next.

  3. On the Storage Pool page, select your server name in the Server section and then select the storage pool you created earlier in the Storage pool section. Click Next.

  4. On the Virtual Disk Name page, enter a name for the virtual disk in the Name text box. You can also enter an optional description in the Description text box. Click Next.

  5. On the Storage Layout page, in the Layout section, select Mirror. Click Next.

  6. On the Resiliency Settings page, select Two-way mirror and click Next.

  7. On the Size page, in the Virtual disk size text box, enter a size for the new virtual disk, which in this example is 1. Use the drop-down box to select GB. You can also put a checkmark in the Create the largest virtual disk possible, up to the specified size checkbox, which lets the wizard cap the virtual disk at the largest size the pool's disks can support, regardless of the number you enter in the Virtual disk size text box; this is neither required nor desired when creating a witness disk. Click Next.

  8. On the Confirmation page, review your settings and click Create.

  9. On the Results page, put a checkmark in the Create a volume when this wizard closes checkbox. Click Close.

  10. On the Before You Begin page of the New Volume Wizard, click Next.

  11. On the Server and Disk page, select the name of the cluster from the Server list. In the Disk section, select the virtual disk you just created. You can identify this disk by looking in the Virtual Disk column, where you will see the name of the virtual disk you created. Click Next.

  12. On the Size page, accept the default volume size, and click Next.

  13. On the Drive Letter or Folder page, select Drive letter, choose a drive letter, and then click Next.

  14. On the File System Settings page, from the File system drop down list, select NTFS. Use the default setting in the Allocation unit size list. Click Next.

  15. On the Confirmation page, click Create.

  16. On the Results page, click Close.

4.6 Create the virtual machine storage virtual disk

Perform the following steps to create the virtual machine storage virtual disk:

  1. In the Failover Cluster Manager console, expand the Storage node in the left pane of the console. Click Pools, right-click the storage pool you created earlier, and then click New Virtual Disk.

  2. In the New Virtual Disk Wizard on the Before You Begin page, click Next.

  3. On the Storage Pool page, select your server name in the Server section and then select the storage pool you created earlier in the Storage pool section. Click Next.

  4. On the Virtual Disk Name page, enter a name for the virtual disk in the Name text box. You can also enter an optional description in the Description text box. Click Next.

  5. On the Storage Layout page, in the Layout section, select Mirror. Click Next.

  6. On the Resiliency Settings page, select Two-way mirror and click Next.

  7. On the Size page, in the Virtual disk size text box, enter a size for the new virtual disk. Use the drop down box to select MB, GB or TB. Also, you can put a checkmark in the Create the largest virtual disk possible, up to the specified size checkbox. When this option is selected it allows the wizard to calculate the largest size virtual disk you can create given the disks you have assigned to the pool, regardless of the number you put in the Virtual disk size text box. Click Next.

  8. On the Confirmation page, review your settings and click Create.

  9. On the Results page, put a checkmark in the Create a volume when this wizard closes checkbox. Click Close.

  10. On the Before You Begin page of the New Volume Wizard, click Next.

  11. On the Server and Disk page, select the name of the cluster from the Server list. In the Disk section, select the virtual disk you just created. You can identify this disk by looking in the Virtual Disk column, where you will see the name of the virtual disk you created. Click Next.

  12. On the Size page, accept the default volume size, and click Next.

  13. On the Drive Letter or Folder page, select Don't assign to a drive letter or folder, and then click Next.

  14. On the File System Settings page, from the File system drop down list, select NTFS. Use the default setting in the Allocation unit size list. Note that ReFS is not supported in a Cluster Shared Volume configuration. Click Next.

  15. On the Confirmation page, click Create.

  16. On the Results page, click Close.
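
Both virtual disks can also be created with Windows PowerShell; the wizard steps above remain the full procedure, including volume creation. This sketch assumes the example pool name CloudPool and example virtual disk names:

# 1 GB two-way mirrored quorum witness disk
New-VirtualDisk -StoragePoolFriendlyName "CloudPool" -FriendlyName "Quorum" -ResiliencySettingName Mirror -Size 1GB

# Two-way mirrored virtual machine storage disk using the remaining pool capacity
New-VirtualDisk -StoragePoolFriendlyName "CloudPool" -FriendlyName "VMStorage" -ResiliencySettingName Mirror -UseMaximumSize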

4.7 Add the virtual machine storage virtual disk to Cluster Shared Volumes

The virtual disk you created for virtual machine storage is now ready to be added to a Cluster Shared Volume. Perform the following steps to add the virtual disk to a Cluster Shared Volume.

  1. In the Failover Cluster Manager, in the left pane of the console, expand the Storage node and click Disks. In the middle pane of the console, in the Disks section, right click the virtual disk you created in the previous step and then click Add to Cluster Shared Volumes.

  2. Proceed to the next step.

4.8 Add folders to the cluster shared volume

Now you need to create the folders on the virtual disk located on the Cluster Shared Volume to store the virtual machine files and the virtual machine data files.

Perform the following steps to create a file share to store the running VMs of the Hyper-V cluster:

  1. Open Windows Explorer, navigate to the C: drive, double-click ClusterStorage, and then double-click Volume1.

  2. Create two folders in Volume1. One folder will contain the virtual hard disk (.vhdx) files for the virtual machines (for example, VHDdisks) and one folder will contain the virtual machine configuration files (for example, VHDsettings).

4.9 Configure Quorum Settings

Perform the following steps to configure quorum settings for the cluster:

  1. In the left pane of the Failover Cluster Manager console, right click on the name of the cluster and click More Actions and click Configure Cluster Quorum Settings.

  2. On the Before You Begin page, click Next.

  3. On the Quorum Configuration Option page, select Use typical settings (recommended) and click Next.

  4. On the Confirmation page, click Next.

4.10 Add the Scale-Out File Server for Applications Role

The file server failover cluster will provide continuous availability of the virtual machine files to the nodes in the compute cluster. Windows Server 2012 includes the Scale-Out File Server for Applications feature that enables the level of continuous availability required for hosting virtual machine files.

Perform the following steps to configure the file server failover cluster as a Scale-Out File Server for Applications:

  1. In the Failover Cluster Management console, right click on Roles and click Configure Role.

  2. On the Before You Begin page, click Next.

  3. On the Select Role page, select File Server and click Next.

  4. On the File Server Type page, select Scale-Out File Server for application data and click Next.

  5. On the Client Access Point page, enter a NetBIOS name for the client access point and then click Next.

  6. On the Confirmation page, click Next.

  7. On the Summary page, click Finish.
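
The equivalent Windows PowerShell is a single cmdlet, where the name is the client access point (SOFS is an example):

# Configure the file server cluster as a Scale-Out File Server
Add-ClusterScaleOutFileServerRole -Name "SOFS"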

Step 5: Hyper-V Failover Cluster Setup

To set up the Hyper-V compute failover cluster you will need to:

  • 5.1 Run through the cluster validation wizard

  • 5.2 Address any indicated warnings and/or errors

  • 5.3 Complete the Create Cluster Wizard

  • 5.4 Verify cluster quorum configuration and modify as necessary

  • 5.5 Configure Cluster Networks

5.1 Run through the Cluster Validation Wizard

The Cluster Validation Wizard will query multiple components in the intended cluster hosts and confirm that the hardware and software are ready to support failover clustering. Perform the following steps on one of the members of the Hyper-V failover cluster to run the Cluster Validation Wizard:

  1. In the Server Manager, click Tools, and then click Failover Cluster Manager.

  2. In the Failover Cluster Manager console, in the Management section, click Validate Configuration.

  3. On the Before You Begin page of the Validate a Configuration Wizard, click Next.

  4. On the Select Servers or a Cluster page, type the name of the local server, and then click Add. After the name appears in the Selected servers list, type the name of another Hyper-V cluster member computer, and then click Add. Repeat this step for all computers in the Hyper-V cluster. When all of the servers of the Hyper-V cluster appear in the Selected servers list, click Next.

  5. On the Testing Options page, click Next.

  6. On the Confirmation page, click Next. The time to complete the validation process will vary with the number of nodes in the cluster and can take some time to complete.

On the Summary page, the summary text will indicate that the configuration is suitable for clustering. Confirm that there is a checkmark in the Create the cluster now using the validated nodes... checkbox.

5.2 Address any indicated warnings and/or errors

Click the Reports button to see the results of the Cluster Validation. Address any issues that have led to cluster validation failure. After correcting the problems, run the Cluster Validation Wizard again. After the cluster passes validation, then proceed to the next step. Note that you may see errors regarding disk storage. You may see this if you haven't yet initialized the disks. Click Finish.

5.3 Complete the Create Cluster Wizard

After passing cluster validation, you are ready to complete the cluster configuration.

Perform the following steps to complete the cluster configuration:

  1. On the Before You Begin page of the Create Cluster Wizard, click Next.

  2. On the Access Point for Administering the Cluster page, enter a valid NetBIOS name for the cluster, select the network that you want the cluster on (in this scenario, the infrastructure network), type a static IP address for the cluster, and then click Next.

  3. On the Confirmation page, clear the Add all eligible storage to the cluster checkbox and then click Next.

  4. On the Creating New Cluster page you will see a progress bar as the cluster is created.

  5. On the Summary page, click Finish.

5.4 Verify cluster quorum configuration and modify as necessary

In most situations, use the quorum configuration that the cluster software identifies as appropriate for your cluster. Change the quorum configuration only if you have determined that the change is appropriate for your cluster. For more information about quorum configuration see Understanding Quorum Configurations in a Failover Cluster.

In most cases the Hyper-V compute cluster will not have shared storage. Because of this, you may want to set up a file share witness on the file server cluster or another location.

Perform the following steps to configure the quorum model to use the file share witness:

  1. In the Failover Cluster Manager, in the left pane of the console, right click the cluster name, point to More Actions and click Configure Cluster Quorum Settings.

  2. On the Before You Begin page, click Next.

  3. On the Select Quorum Configuration Option page, select Add or change the quorum witness. Click Next.

  4. On the Select Quorum Witness page, select Configure a file share witness (recommended for special configurations) and click Next.

  5. On the Configure File Share Witness page, enter the path to the file share witness and click Next.

  6. On the Confirmation page, click Next.

  7. On the Configure Cluster Quorum Settings page, click Next.

  8. On the Summary page, click Finish.
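
You can also set the file share witness from Windows PowerShell. The share path below is an example; the share must already exist, and the cluster computer account needs permission to write to it:

# Configure a node and file share majority quorum that uses a witness share
Set-ClusterQuorum -NodeAndFileShareMajority "\\SOFS\Witness"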

5.5 Configure Cluster Networks

Perform the following steps to complete the compute cluster by configuring the cluster network settings:

  1. In the console tree of the Failover Cluster Manager snap-in, open the Networks node under the cluster name.

  2. Right-click the cluster network that corresponds to the TenantNet network adapter network ID (subnet), and then click Properties. On the General tab, confirm that Do not allow cluster network communication on this network is selected. In the Name text box, enter a friendly name for this network (for example, TenantNet), and then click OK.

  3. Right-click the cluster network that corresponds to the infrastructure network adapter network ID (subnet) and then click Properties. On the General tab, confirm that Allow cluster network communication on this network is selected and that Allow clients to connect through this network is enabled. In the Name text box, enter a friendly name for this network (for example, HosterNet), and then click OK.

Step 6: Configure Share and Hyper-V settings using a Script

To finalize the Hyper-V configuration, you will need to:

  • 6.1 Create Shares and Configure Hyper-V Settings using a Script

  • 6.2 Configure Kerberos Constrained Delegation

6.1 Create Shares and Configure Hyper-V Settings using a Script

You will need shares available that will store the virtual machine disk files and the virtual machine configuration files. There are a number of ways that this can be done. In the example used in this document, you will use a script that will:

  1. Create shares that will contain the virtual machine disk and configuration files on the CSV in the file server cluster

  2. Configure the correct share and NTFS permissions on those folders

  3. Configure the servers in the Hyper-V compute cluster to use the correct share locations as default virtual machine disk and configuration file location.

The following sketch illustrates these actions. The cluster access point, computer account, domain, share, and folder names (SOFS, HV1$, HV2$, CONTOSO, VMS, VHDdisks) are examples that you must adapt to your environment. Run the share-creation portion on a file server cluster node and the Set-VMHost portion on each Hyper-V node.
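
# --- On a file server cluster node: create the share on the CSV ---
New-SmbShare -Name "VMS" -Path "C:\ClusterStorage\Volume1\VHDdisks" -FullAccess "CONTOSO\HV1$","CONTOSO\HV2$","CONTOSO\Domain Admins"

# Mirror the share permissions onto the NTFS ACL of the underlying folder
Set-SmbPathAcl -ShareName "VMS"

# Repeat New-SmbShare and Set-SmbPathAcl for the VHDsettings folder if the
# configuration files live in a separate share

# --- On each Hyper-V node: use the share as the default store locations ---
Set-VMHost -VirtualHardDiskPath "\\SOFS\VMS" -VirtualMachinePath "\\SOFS\VMS"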

6.2 Configure Kerberos Constrained Delegation

To fully manage the storage on the file server failover cluster from a machine in the Hyper-V cluster, you will need to configure Kerberos Constrained Delegation. For details on why this is required, see the article Using Constrained Delegation to remotely manage a server running Hyper-V that uses CIFS/SMB file shares.

Perform the following steps to configure the members of the compute cluster to be trusted for Kerberos Constrained Delegation:

  1. On a domain controller responsible for your compute and file server cluster environment, open Control Panel and then open Administrative Tools. Open Active Directory Users and Computers.

  2. In the Active Directory Users and Computers console, expand the domain name and then click on Computers.

  3. In the right pane of the console, right click on the name of one of the computers in the Hyper-V compute failover cluster and then click Properties.

  4. In the Properties dialog box for the member of the Hyper-V compute failover cluster member, click on the Delegation tab.

  5. On the Delegation tab, select Trust this computer for delegation to specified services only, and then select Use Kerberos only. Click Add.

  6. In the Add Services dialog box, in the Service Type column, find the cifs entries. Click on the entry that corresponds to the name of the file server failover cluster and click OK.

  7. Click OK in the server's Properties dialog box.
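
If you have many compute nodes, the same delegation entries can be scripted with the Active Directory module for Windows PowerShell. This sketch assumes the hypothetical names HV1 (a compute node) and FSCLUSTER (the file server cluster access point) in the contoso.com domain:

# Run where the Active Directory module is available
Import-Module ActiveDirectory

# Allow HV1 to delegate to the CIFS service on the file server cluster (Kerberos only)
Set-ADComputer -Identity "HV1" -Add @{ 'msDS-AllowedToDelegateTo' = @('cifs/FSCLUSTER', 'cifs/FSCLUSTER.contoso.com') }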

Step 7: Cloud validation

To verify the configuration of your cloud environment, perform the following operations.

  • 7.1 Create the TenantNetSwitch.

  • 7.2 Create a new virtual machine.

  • 7.3 Test network connectivity from the virtual machine.

  • 7.4 Perform a live migration.

  • 7.5 Perform a quick migration

7.1 Create the TenantNetSwitch

Before you create a virtual machine, you will need to create a virtual switch that is connected to the TenantNet so that the virtual machine can connect to the network. Perform the following steps to create the TenantNet virtual switch:

  1. Open the Hyper-V Manager console. In the Hyper-V Manager console, in the Actions pane, click Virtual Switch Manager.

  2. In the right pane of the Virtual Switch Manager dialog box, select External and then click Create Virtual Switch.

  3. In the Virtual Switch Properties section of the dialog box, enter a name for the virtual switch in the Name text box (in this example it will be TenantNetSwitch). In the Connection type section, select the External network option. Then select the NIC team representing the TenantNet from the drop down box. Click OK.
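
The same switch can be created with Windows PowerShell. In this sketch TenantNetTeam is the example name of the tenant NIC team interface, and -AllowManagementOS $false keeps the host operating system off the tenant network:

# Create an external virtual switch bound to the tenant NIC team
New-VMSwitch -Name "TenantNetSwitch" -NetAdapterName "TenantNetTeam" -AllowManagementOS $false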

7.2 Create a new virtual machine

To create a new virtual machine in the cluster environment, perform the following steps.

  1. Open Failover Cluster Manager, click Roles under the cluster name, click Virtual Machines under the Actions pane, and then click New Virtual Machine.

  2. On the New Virtual Machine page, select the cluster node where you want to create the virtual machine, and then click OK.

  3. On the Before you Begin page of the New Virtual Machine Wizard, click Next.

  4. On the Specify Name and Location page, enter a friendly name for this virtual machine and then click Next.

  5. On the Assign Memory page, enter the amount of memory that will be used for this virtual machine (minimum for this lab is 1024 MB RAM) and then click Next.

  6. On the Configure Networking page, click Next.

  7. On the Connect Virtual Hard Disk page, leave the default options selected and click Next.

  8. On the Installation Options page, select Install an operating system from a boot CD/DVD-ROM and then select the location where the CD/DVD is located. If you are installing the new operating system based on an ISO file, make sure to select the option Image file (.iso) and browse for the file location. If you prefer to PXE boot, that option will be described in later steps. After you select the appropriate option for your scenario, click Next.

  9. On the Completing the New Virtual Machine Wizard page, review the options, and then click Finish.

  10. The virtual machine creation process starts. After it is finished, you will see the Summary page, where you can access the report created by the wizard. If the virtual machine was created successfully, click Finish.

  11. If you want to PXE boot the virtual machine, you will need to create a Legacy Network Adapter. Right click the new virtual machine and click Settings.

  12. In the Settings dialog box, select the Legacy Network Adapter option and click Add.

  13. In the Legacy Network Adapter settings, connect the adapter to the virtual switch (such as TenantNetSwitch), enable virtual LAN identification, and assign the appropriate VLAN identifier.

Note that if the virtual machine continues to use the legacy network adapter it will not be able to leverage many of the features available in the Hyper-V virtual switch. You may want to replace the legacy network adapter after the operating system is installed.

At this point your virtual machine is created and you should use the Failover Cluster Manager to start the virtual machine and perform the operating system installation according to the operating system that you choose. For the purpose of this validation, the guest operating system can be any Windows Server version.

7.3 Test network connectivity from the virtual machine

Once you finish installing the operating system in the virtual machine, log on and verify that the virtual machine was able to obtain an IP address from the enterprise network. Assuming there is a DHCP server on this network, the virtual machine should obtain an address automatically. To perform a basic network connectivity test, use the following approach:

  • Use the ping command against a reachable IP address in the same subnet.

  • Ping the same destination again, this time using its fully qualified domain name. The goal here is to test basic name resolution.

Note

If you installed Windows 8 Developer Preview in this virtual machine, you need to open Windows Firewall with Advanced Security and create a new rule to allow Internet Control Message Protocol (ICMP) before performing the previous tests. This may be true for other hosts you want to ping; confirm that the host-based firewall on the target allows ICMP Echo Requests.

After you confirm that this basic test is working properly, leave a command prompt window open and enter the command ping <Destination_IP_Address_or_FQDN> -t. The goal here is to have a continuous test while you perform the live migration to the second node.

Note

If you prefer to work with PowerShell, you can use the Test-Connection cmdlet instead of the ping command. This cmdlet provides a number of connectivity-testing options that exceed what is available with the simple ping command.
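
For example, the following sketch (the destination address is a placeholder) approximates a one-off ping and a continuous ping:

# Send four echo requests to one destination
Test-Connection -ComputerName "10.7.124.1" -Count 4

# Rough equivalent of ping -t for watching a migration; stop with Ctrl+C
while ($true) { Test-Connection -ComputerName "10.7.124.1" -Count 1; Start-Sleep -Seconds 1 }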

7.4 Perform a live migration

To perform a live migration of this virtual machine from the current cluster node to the other node in the cluster, perform the following steps.

  1. In the Failover Cluster Manager, click Roles under the cluster name. On the Roles pane, right click the virtual machine that you created, click Move, click Live Migration, and then click Select Node.

  2. On the Move Virtual Machine page, select the node that you want to move the virtual machine to and click OK.

You will see in the Status column when the live migration starts; it can take some time for the Information column to reflect the current state of the migration. While the migration is taking place, you can go back to the virtual machine that has ping running and watch for any packet loss.

7.5 Perform a quick migration

To perform the quick migration of this virtual machine from the current node to the other one, perform the following steps.

  1. On the Failover Cluster Manager, click Roles under the cluster name. In the Roles pane, right-click the virtual machine that you created, click Move, click Quick Migration and then click Select Node.

  2. On the Move Virtual Machine window, select the node that you want to move the virtual machine to, and then click OK.

You will notice in the Status column that the quick migration starts faster than the live migration did. While the migration is taking place, you can go back to the virtual machine that has ping running and watch for packet loss; unlike a live migration, a quick migration briefly pauses the virtual machine, so expect a few dropped packets.
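
Both migration types can also be started from Windows PowerShell on a cluster node; the virtual machine role name and node names below are examples:

# Live migration of the clustered virtual machine to a specific node
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV2" -MigrationType Live

# Quick migration back to the original node
Move-ClusterVirtualMachineRole -Name "VM01" -Node "HV1" -MigrationType Quick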