Hyper-V 2025: NUMA Spanning forces cross-node memory even for small VMs, and no oversubscription possible with spanning off

ReubenTishkoff 0 Reputation points
2025-06-29T14:16:25.6933333+00:00

Hi all,

I’ve been testing Hyper-V on Windows Server 2025 with a dual-socket AMD EPYC 9175F server (16 cores per socket / 32 total).

Each NUMA node has its own directly attached NVMe storage and RAM. I’m running multiple VMs, each with 16 vCPUs and 50 GB RAM.

I’m hitting two important limitations:

  1. With NUMA Spanning enabled, even the first VM gets its vCPUs and memory split across the two NUMA nodes — despite both nodes having sufficient resources to host the entire VM. This affects locality:
  • RAM ends up partially on each node
  • CPU cores are mixed across sockets
  • Performance suffers (important for SQL Server, etc.)
  2. With NUMA Spanning disabled, I expected to preserve locality — and I do — but now I can’t oversubscribe CPU cores per node at all. Even with idle VMs and low host load:
  • Hyper-V refuses to launch a fifth VM with 16 vCPUs
  • I have 16 physical cores (32 logical processors with SMT) per socket, and Hyper-V won’t assign more than 32 vCPUs per NUMA node

This did not happen on Windows Server 2022. There I could oversubscribe cores (e.g., 10 VMs with 16 vCPUs each per node) and the scheduler managed CPU time without strict NUMA enforcement. ❓ Is this a change in Hyper-V 2025?

  • Has NUMA enforcement become stricter?
  • Is per-node oversubscription explicitly blocked when spanning is off?
  • Is there a hidden setting (PowerShell, registry, undocumented) to recover the 2022 behavior?
  • Any help or clarification from Microsoft or anyone who’s dug into this would be really appreciated.

Thanks in advance.

Windows for business | Windows Server | Storage high availability | Virtualization and Hyper-V
0 comments No comments

1 answer

Sort by: Most helpful
  1. Smith Pham 1,790 Reputation points Independent Advisor
    2025-07-02T14:48:31.56+00:00

    Dear Team,

    • Is this a change in Hyper-V 2025? Yes. The behavior you are experiencing is a noticeable change from Windows Server 2022. Multiple users with similar hardware (dual-socket AMD EPYC) have reported the same issues.
    • Has NUMA enforcement become stricter? Yes. With NUMA spanning disabled, Hyper-V in Windows Server 2025 now appears to strictly enforce that a virtual machine's vCPUs cannot exceed the number of logical processors within a single physical NUMA node. This prevents the vCPU oversubscription per-node that was possible in previous versions.
    • Is per-node oversubscription explicitly blocked when spanning is off? Yes, this appears to be the case. When NUMA spanning is turned off to enforce memory and CPU locality, Hyper-V 2025 will not allow you to start a VM if its vCPU count, when added to the vCPUs of running VMs on that node, exceeds the logical core count of the node.
    • Is there a hidden setting to recover the 2022 behavior? There is no officially documented registry key, PowerShell command, or other setting to revert to the more relaxed NUMA scheduling behavior of Windows Server 2022.
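
    For clarity, the spanning toggle itself is unchanged from 2022 and is a host-wide setting; these are the standard Hyper-V module cmdlets, and restarting the Hyper-V Virtual Machine Management service is typically required for the change to take effect:

    ```powershell
    # Check the current host-wide NUMA spanning setting
    Get-VMHost | Select-Object NumaSpanningEnabled

    # Disable spanning to enforce per-node locality (the behavior under discussion)
    Set-VMHost -NumaSpanningEnabled $false

    # Restart the management service so the change takes effect
    Restart-Service vmms
    ```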

    Potential Workaround: CPU Groups

    While there isn't a simple toggle to restore the old behavior, you may be able to achieve your goal of both NUMA locality and CPU oversubscription by using a feature called CPU Groups. This feature allows you to partition the host's logical processors and assign VMs to specific groups, effectively pinning them to a NUMA node.

    This approach requires manual configuration but should provide the control you're looking for.

    How to Configure CPU Groups (Conceptual Steps)

    The exact PowerShell cmdlets for this in the final release of Windows Server 2025 have not yet been fully documented, but the process involves these steps:

    Identify Your NUMA Node's Processors: First, determine which logical processors belong to each NUMA node.

    PowerShell

    Get-VMHostNumaNode
    

    This command will list your NUMA nodes and the logical processor IDs associated with each one. Note these IDs for the next steps.

    Create a CPU Group for Each NUMA Node: You will need to create a CPU group for each NUMA node to which you want to isolate VMs. Historically, this has been done with the cpugroups.exe command-line utility, as direct PowerShell cmdlets were not available for group creation. You may need to find and download this tool.

    For a host with two NUMA nodes, you would create two groups:

    DOS

    cpugroups.exe CreateGroup /GroupId:<new GUID> /GroupAffinity:<comma-separated logical processor IDs for the node>
    

    Assign Processors to Your CPU Groups: After creating the groups, you need to populate them with the logical processors from your NUMA nodes.

    Assign a VM to a CPU Group: Finally, assign your virtual machine to the desired CPU group. This is the crucial step for enforcing locality. While the definitive cmdlet is not yet widely documented, it is expected to be a parameter on the Set-VMProcessor cmdlet, potentially -CpuGroupId or -ResourcePoolName.

    Example (Hypothetical):

    PowerShell

    # Get the virtual machine you want to configure
    $vm = Get-VM -Name "<VMName>"

    # Hypothetical parameter name -- not confirmed in released builds
    Set-VMProcessor -VMName $vm.Name -CpuGroupId "<GroupGuid>"
    

    By disabling NUMA spanning and assigning VMs to specific CPU groups that align with your physical NUMA nodes, you should be able to enforce locality for both CPU and memory. Within this manually defined boundary, the hypervisor is expected to allow for the oversubscription of vCPUs, as it did in Windows Server 2022.
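
    Putting the steps together, here is a minimal end-to-end sketch run from an elevated PowerShell prompt. The GUIDs, VM name, and affinity list are placeholders you supply yourself, and the verb names follow the tool's historically documented usage — confirm them against `cpugroups.exe /?` on your build:

    ```powershell
    # 1. Inspect the host topology: logical processor IDs per NUMA node
    cpugroups.exe GetCpuTopology

    # 2. Create one CPU group per NUMA node. Generate the GUIDs yourself
    #    (e.g. with New-Guid); the affinity list is the LP IDs reported
    #    for that node in step 1.
    cpugroups.exe CreateGroup /GroupId:<GUID-for-node-0> /GroupAffinity:<LP IDs of node 0>
    cpugroups.exe CreateGroup /GroupId:<GUID-for-node-1> /GroupAffinity:<LP IDs of node 1>

    # 3. Bind each VM to the group matching the node you want it on
    cpugroups.exe SetVmGroup /VmName:"<VM name>" /GroupId:<GUID-for-node-0>
    ```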

    Has NUMA enforcement become stricter? Yes. With NUMA spanning disabled, Hyper-V in Windows Server 2025 now appears to strictly enforce that a virtual machine's vCPUs cannot exceed the number of logical processors within a single physical NUMA node. This prevents the vCPU oversubscription per-node that was possible in previous versions.

    Is per-node oversubscription explicitly blocked when spanning is off? Yes, this appears to be the case. When NUMA spanning is turned off to enforce memory and CPU locality, Hyper-V 2025 will not allow you to start a VM if its vCPU count, when added to the vCPUs of running VMs on that node, exceeds the logical core count of the node.

    Is there a hidden setting to recover the 2022 behavior? There is no officially documented registry key, PowerShell command, or other setting to revert to the more relaxed NUMA scheduling behavior of Windows Server 2022.


    Potential Workaround: CPU Groups

    While there isn't a simple toggle to restore the old behavior, you may be able to achieve your goal of both NUMA locality and CPU oversubscription by using a feature called CPU Groups. This feature allows you to partition the host's logical processors and assign VMs to specific groups, effectively pinning them to a NUMA node.

    This approach requires manual configuration but should provide the control you're looking for.

    How to Configure CPU Groups (Conceptual Steps)

    The exact PowerShell cmdlets for this in the final release of Windows Server 2025 are still being fully documented, but the process involves these steps:

    Identify Your NUMA Node's Processors: First, determine which logical processors belong to each NUMA node.

    PowerShell

    Get-VMHostNumaNode
    

    This command will list your NUMA nodes and the logical processor IDs associated with each one. Note these IDs for the next steps.

    Create a CPU Group for Each NUMA Node: You will need to create a CPU group for each NUMA node to which you want to isolate VMs. Historically, this has been done with the cpugroups.exe command-line utility, as direct PowerShell cmdlets were not available for group creation. You may need to find and download this tool.

    For a host with two NUMA nodes, you would create two groups:

    DOS

    cpugroups.exe creategroup /groupid:
    

    Assign Processors to Your CPU Groups: After creating the groups, you need to populate them with the logical processors from your NUMA nodes.

    Assign a VM to a CPU Group: Finally, assign your virtual machine to the desired CPU group. This is the crucial step for enforcing locality. While the definitive cmdlet is not yet widely documented, it is expected to be a parameter on the Set-VMProcessor cmdlet, potentially -CpuGroupId or -ResourcePoolName.

    Example (Hypothetical):

    PowerShell

    # Get the virtual machine you want to configure
    

    By disabling NUMA spanning and assigning VMs to specific CPU groups that align with your physical NUMA nodes, you should be able to enforce locality for both CPU and memory. Within this manually defined boundary, the hypervisor is expected to allow for the oversubscription of vCPUs, as it did in Windows Server 2022.

    0 comments No comments
