Hyper-V scalability in Windows Server 2012 and Windows Server 2012 R2
Applies To: Hyper-V Server 2012, Windows Server 2012 R2, Windows Server 2012, Hyper-V Server 2012 R2
Hyper-V in Windows Server® 2012 and Windows Server® 2012 R2 supports significantly larger configurations of virtual and physical components than previous releases of Hyper-V. This increased capacity enables you to run Hyper-V on large physical computers and to virtualize high-performance, scale-up workloads. This topic lists the supported maximum configurations for the various components. As you plan your deployment of Hyper-V, consider the maximums that apply to each virtual machine as well as those that apply to the physical computer that runs the Hyper-V role.
Note
For information about System Center Virtual Machine Manager (VMM), see Virtual Machine Manager. VMM is a separately sold Microsoft product for managing a virtualized data center.
Virtual machines
The following table lists the maximums that apply to each virtual machine.
Component | Maximum | Notes |
---|---|---|
Virtual processors | 64 | The number of virtual processors supported by a guest operating system might be lower. For more information, see the Hyper-V overview. |
Memory | 1 TB | Review the requirements for the specific operating system to determine the minimum and recommended amounts. |
Virtual hard disk capacity | 64 TB supported by the VHDX format introduced in Windows Server 2012 and Windows® 8; 2040 GB supported by the VHD format. | Each virtual hard disk is stored on physical media as either a .vhdx or a .vhd file, depending on the format used by the virtual hard disk. |
Virtual IDE disks | 4 | The startup disk (sometimes referred to as the boot disk) must be attached to one of the IDE devices. The startup disk can be either a virtual hard disk or a physical disk attached directly to a virtual machine. |
Virtual SCSI controllers | 4 | Use of virtual SCSI devices requires integration services to be installed in the guest operating system. For a list of the guest operating systems for which integration services are available, see the Hyper-V overview. |
Virtual SCSI disks | 256 | Each SCSI controller supports up to 64 disks, which means that each virtual machine can be configured with as many as 256 virtual SCSI disks. (4 controllers x 64 disks per controller) |
Virtual Fibre Channel adapters | 4 | As a best practice, we recommend that you connect each virtual Fibre Channel adapter to a different virtual SAN. |
Size of physical disks attached directly to a virtual machine | Varies | Maximum size is determined by the guest operating system. |
Snapshots | 50 | The actual number may be lower, depending on the available storage. Each snapshot is stored as an .avhd or .avhdx file (matching the format of the associated virtual hard disk) that consumes physical storage. |
Virtual network adapters | 12 | - 8 can be the “network adapter” type. This type provides better performance and requires a virtual machine driver that is included in the integration services packages. - 4 can be the “legacy network adapter” type. This type emulates a specific physical network adapter and supports the Pre-Boot Execution Environment (PXE) to perform network-based installation of an operating system. |
Virtual floppy devices | 1 virtual floppy drive | None. |
Serial (COM) ports | 2 | None. |
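The per-virtual-machine limits above combine with simple arithmetic; for example, 4 SCSI controllers × 64 disks per controller yields the 256-disk maximum. As an illustrative sketch only (the dictionary keys and the `check_vm` helper are hypothetical names, not a Hyper-V API), a planning script can validate a proposed configuration against these maximums:

```python
# Per-VM maximums taken from the table above
# (Hyper-V in Windows Server 2012 / 2012 R2).
VM_LIMITS = {
    "virtual_processors": 64,
    "virtual_ide_disks": 4,
    "virtual_scsi_controllers": 4,
    "disks_per_scsi_controller": 64,
    "virtual_fibre_channel_adapters": 4,
    "snapshots": 50,
    "network_adapters": 8,         # synthetic "network adapter" type
    "legacy_network_adapters": 4,  # emulated "legacy network adapter" type
}

# Derived limit: 4 controllers x 64 disks = 256 virtual SCSI disks per VM.
MAX_SCSI_DISKS = (VM_LIMITS["virtual_scsi_controllers"]
                  * VM_LIMITS["disks_per_scsi_controller"])

def check_vm(vm):
    """Return a list of limit violations for a proposed VM configuration."""
    problems = []
    if vm.get("virtual_processors", 0) > VM_LIMITS["virtual_processors"]:
        problems.append("too many virtual processors")
    if vm.get("scsi_disks", 0) > MAX_SCSI_DISKS:
        problems.append("too many virtual SCSI disks")
    if vm.get("network_adapters", 0) > VM_LIMITS["network_adapters"]:
        problems.append("too many synthetic network adapters")
    return problems

# A 48-vCPU VM with 200 SCSI disks stays within every checked maximum.
print(check_vm({"virtual_processors": 48,
                "scsi_disks": 200,
                "network_adapters": 6}))  # []
```

Remember that a guest operating system may support fewer virtual processors than the 64 Hyper-V allows, so a check like this covers only the Hyper-V side of the planning.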
Server running Hyper-V
The following table lists the requirements and maximums that apply to the server running Hyper-V.
Component | Maximum | Notes |
---|---|---|
Logical processors | 320 | Both of the following must be available and enabled in the BIOS: - Hardware-assisted virtualization - Hardware-enforced Data Execution Prevention (DEP) |
Virtual processors per logical processor | No ratio imposed by Hyper-V. | None. |
Running virtual machines per server | 1024 | None. |
Virtual processors per server | 2048 | None. |
Memory | 4 TB | None. |
Storage | Limited by what is supported by the management operating system. No limits imposed by Hyper-V. | Note: Microsoft supports network-attached storage (NAS) for Hyper-V in Windows Server 2012 and Windows Server 2012 R2 when using SMB 3.0. NFS-based storage is not supported. |
Virtual storage area networks (SANs) | No limits imposed by Hyper-V | None. |
Physical network adapters | No limits imposed by Hyper-V. | None. |
Network adapter teams (NIC Teaming) | No limits imposed by Hyper-V. | For more information about NIC Teaming, see NIC Teaming. |
Virtual switches | Varies; no limits imposed by Hyper-V. | The practical limit depends on the available computing resources. |
Virtual network switch ports per server | Varies; no limits imposed by Hyper-V. | The practical limit depends on the available computing resources. |
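The per-server ceilings interact: for many VM sizes, a host exhausts its 2,048 virtual processors or 4 TB of memory before reaching the 1,024 running-VM maximum. The following hypothetical helper (the function name and parameters are illustrative, not part of any Hyper-V tooling) shows which limit binds first for identically sized virtual machines:

```python
# Per-server maximums taken from the table above.
HOST_LIMITS = {
    "running_vms": 1024,
    "virtual_processors": 2048,
    "memory_tb": 4,
}

def max_identical_vms(vcpus_per_vm, mem_gb_per_vm):
    """Hypothetical planning helper: how many identically sized VMs fit
    on one Hyper-V host before any per-server maximum is reached."""
    by_vcpu = HOST_LIMITS["virtual_processors"] // vcpus_per_vm
    by_mem = (HOST_LIMITS["memory_tb"] * 1024) // mem_gb_per_vm
    return min(HOST_LIMITS["running_vms"], by_vcpu, by_mem)

# With 4 vCPUs and 8 GB per VM, the 2,048 virtual-processor ceiling
# caps the host at 512 VMs, well under the 1,024 running-VM maximum.
print(max_identical_vms(4, 8))  # 512
```

Because Hyper-V imposes no virtual-processor-to-logical-processor ratio, oversubscription is a workload decision; this sketch checks only the hard maximums in the table.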
Failover Clusters and Hyper-V
The following table lists the maximums that apply to highly available servers running Hyper-V. It is important to do capacity planning to ensure that there will be enough hardware resources to run all the virtual machines in a clustered environment. For more information about requirements for failover clusters, see Failover Clustering Hardware Requirements and Storage Options.
Component | Maximum | Notes |
---|---|---|
Nodes per cluster | 64 | Consider the number of nodes you want to reserve for failover, as well as for maintenance tasks such as applying updates. We recommend planning for enough resources so that 1 node can be reserved for failover; that node remains idle until a failover occurs and is sometimes referred to as a passive node. You can reserve additional nodes if you want. There is no recommended ratio of reserved nodes to active nodes; the only requirement is that the total number of nodes in a cluster cannot exceed the maximum of 64. |
Running virtual machines per cluster and per node | 8,000 per cluster; 1,024 per node | Several factors can affect the real number of virtual machines that can be run at the same time on one node, such as: - Amount of physical memory being used by each virtual machine. - Networking and storage bandwidth. - Number of disk spindles, which affects disk I/O performance. |
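The reserve-node guidance above reduces to simple capacity arithmetic. The sketch below is a hypothetical planning helper, assuming the 1,024 running-VM per-server maximum listed earlier in this topic as the default per-node ceiling; the function name and parameters are illustrative, not part of any failover clustering tool:

```python
# Cluster maximums taken from the table above.
MAX_NODES_PER_CLUSTER = 64
MAX_VMS_PER_CLUSTER = 8000

def planned_vms_per_active_node(total_nodes, reserved_nodes=1,
                                vms_per_node=1024):
    """Hypothetical capacity-planning helper: how many running VMs each
    active node can carry if the reserved (passive) nodes stay idle,
    capped by the per-node and per-cluster maximums."""
    assert total_nodes <= MAX_NODES_PER_CLUSTER
    active = total_nodes - reserved_nodes
    # Spread the cluster-wide ceiling across the active nodes only.
    return min(vms_per_node, MAX_VMS_PER_CLUSTER // active)

# A 9-node cluster with 1 passive node: 8 active nodes share the
# 8,000-VM cluster ceiling, i.e. at most 1,000 VMs per active node.
print(planned_vms_per_active_node(9))  # 1000
```

In practice, memory, bandwidth, and disk I/O (the factors listed in the table) usually constrain a node well before these hard maximums do, so treat this as an upper bound, not a sizing target.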