Multiple GPU Assignments to a Hyper-V VM with DDA

Anonymous
2023-11-30T11:20:47+00:00

I recently configured Discrete Device Assignment (DDA) on my Windows Server with Hyper-V and successfully assigned a GPU to a virtual machine using the steps outlined in the following references:

  1. https://docs.nvidia.com/grid/5.0/grid-vgpu-user-guide/index.html#using-gpu-pass-through-windows-server-hyper-v
  2. https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/deploy/deploying-graphics-devices-using-dda
  3. https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

My Setup:

  • Windows Server with Hyper-V
  • Multiple GPUs available (Example: NVIDIA RTX A400)

What I've Done:
Successfully assigned one GPU to a VM using DDA

  • Obtain the location path of the GPU that I want to assign to a VM (see the sketch after these steps for one way to find it): "PCIROOT(36)#PCI(0000)#PCI(0000)"
  • Dismount the device: Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
  • Assign the device to the VM: Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev

Power on the VM, and the guest OS (Debian) is able to use the GPU.
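
For reference, here is one way to pull the location path with PowerShell instead of copying it from Device Manager. This is only a sketch: it assumes the GPU is enumerated under the "Display" PnP class and that the first entry of DEVPKEY_Device_LocationPaths is the PCIROOT path.

  # Sketch: list present display-class devices and their PCIROOT location paths.
  # Adjust the -Class filter if your card is enumerated differently (e.g. as a 3D controller).
  Get-PnpDevice -Class Display -PresentOnly | ForEach-Object {
      [pscustomobject]@{
          Name         = $_.FriendlyName
          LocationPath = (Get-PnpDeviceProperty -InstanceId $_.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
      }
  }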

Now, I want to add multiple GPUs to a single VM using Hyper-V DDA.

I tried the following:

  • Obtain the location paths of GPU1 & GPU2 that I want to assign to the VM:
    • GPU1 device location path: PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)
    • GPU2 device location path: PCIROOT(36)#PCI(0000)#PCI(0000)
  • Dismount the devices:
    Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -Force
    Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
  • Assign the devices to the VM:
    Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev
    Add-VMAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -VMName Debian12_Dev

Power on the VM, but the guest OS (Debian) identifies only one GPU.
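
In case it helps with diagnosis, the devices that the host currently reports as assigned to the VM can be listed with the standard Hyper-V cmdlet (VM name as above):

  # Sketch: show which devices the host has assigned to the VM via DDA.
  Get-VMAssignableDevice -VMName Debian12_Dev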

Question:
Has anyone tried adding multiple GPUs to a single VM using Hyper-V DDA? If so, what steps did you follow, and did you encounter any challenges?

I'm seeking to optimize GPU resources for specific workloads within a single VM and would appreciate any insights, experiences, or tips from the community.

Thanks in advance!



Accepted answer
  1. Anonymous
    2023-12-21T10:43:29+00:00

    Hi Xu Gu San,

    Thanks for the response!

    The aforementioned issues were successfully resolved by configuring MMIO space, as outlined in the official Microsoft document: [Microsoft Official Document]

    GPUs in particular require additional MMIO space so that the VM can access that device's memory. Each VM starts with 128 MB of low MMIO space and 512 MB of high MMIO space by default, but certain devices, or multiple devices together, may require more than these defaults.

    Subsequently, I reconfigured the VM following the instructions in the Microsoft Official Document: [VM Preparation for Graphics Devices]

    Solution:

    • Enabled Write-Combining on the CPU using the cmdlet: Set-VM -GuestControlledCacheTypes $true -VMName VMName
    • Configured the 32-bit MMIO space with the cmdlet: Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName
    • Configured greater than 32-bit MMIO space with the cmdlet: Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName

    Dismount the GPUs

    To dismount the GPU devices from the host:

    • Located the device's location path
    • Copied the device's location path
    • Disabled the GPU in Device Manager

    Dismounted the GPU devices from the host partition using the cmdlet:

    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath1
    
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath2
    

    Assigned the GPU devices to the VM using the cmdlet:

    Add-VMAssignableDevice -LocationPath $locationPath1 -VMName VMName
    
    Add-VMAssignableDevice -LocationPath $locationPath2 -VMName VMName
    

    The configuration of the VM for DDA has been successfully completed.
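
    For anyone repeating this, a consolidated sketch of the full sequence follows. It simply chains the cmdlets shown above; the VM name and the two location path variables are placeholders to replace with your own values.

    # Sketch: end-to-end DDA setup for two GPUs on one VM (placeholder values).
    $vmName        = "VMName"
    $locationPath1 = "PCIROOT(...)#PCI(...)"   # GPU 1 location path
    $locationPath2 = "PCIROOT(...)#PCI(...)"   # GPU 2 location path

    # Enable write-combining and enlarge the MMIO apertures before assignment.
    Set-VM -GuestControlledCacheTypes $true -VMName $vmName
    Set-VM -LowMemoryMappedIoSpace 3Gb -VMName $vmName
    Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName $vmName

    # Dismount both GPUs from the host partition.
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath1
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath2

    # Assign both GPUs to the VM.
    Add-VMAssignableDevice -LocationPath $locationPath1 -VMName $vmName
    Add-VMAssignableDevice -LocationPath $locationPath2 -VMName $vmName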

    Both GPUs are now recognized in my Linux Hyper-V VM:

    root@DEB-HYPERV-6fabb3a422fb6e499b57dd2e11a7aa59:~# lspci
    0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
    0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
    0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
    0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
    0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
    0000:00:0a.0 Ethernet controller: Digital Equipment Corporation DECchip 21140 [FasterNet] (rev 20)
    076a:00:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
    e95f:00:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1)
    root@DEB-HYPERV-6fabb3a422fb6e499b57dd2e11a7aa59:~#
    
    1 person found this answer helpful.

2 additional answers

  1. Anonymous
    2023-12-01T03:14:57+00:00

    Hi Sam,

    Hope you’re doing well.

    From my perspective, please follow these steps to add multiple GPUs to a single VM using Hyper-V DDA:

    (1) First, make sure that Hyper-V is installed and enabled on the host machine, and that the DDA feature is available on your system.

    (2) Next, confirm that your hardware supports DDA: both the GPU and the motherboard must support PCIe Access Control Services (ACS). Also make sure that your GPUs have driver support for the Hyper-V environment.

    (3) Install GPU drivers: install the necessary GPU drivers on both the host and the VM. The VM should have drivers that are compatible with the GPU model assigned to it.

    (4) Configure the VM:

    First, shut down the VM.

    Then use PowerShell to add the GPU(s) to the VM. This involves configuring the VM settings for DDA and assigning the selected GPU(s); a short sketch follows below.

    Finally, restart the VM, confirm that it recognizes and uses the assigned GPU(s), and test GPU performance within the VM to make sure everything works as expected.
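
    In PowerShell, step (4) boils down to the same cmdlets already used in the question; a minimal sketch with placeholder names:

    # Sketch of step (4): shut down, assign via DDA, start again (placeholder values).
    Stop-VM -Name "VMName"
    Dismount-VMHostAssignableDevice -Force -LocationPath "PCIROOT(...)#PCI(...)"
    Add-VMAssignableDevice -LocationPath "PCIROOT(...)#PCI(...)" -VMName "VMName"
    Start-VM -Name "VMName"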

    To optimize GPU resources for specific workloads within a single VM, consider the following:

    (1) Assign the specific GPU(s) needed for the workload to the VM.

    (2) Depending on the workload, you might need to adjust GPU scheduling policies. Some workloads benefit from time-slicing, while others may require exclusive access.

    (3) Adjust VM configuration settings, such as the number of vCPUs and allocated memory, to optimize overall performance (see the sketch below).

    (4) Configure your applications to make the best use of GPU resources. Some applications allow you to specify which GPU to use.
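
    Regarding (3), the vCPU count and memory allocation can also be adjusted from PowerShell; a minimal sketch with placeholder values, using the standard Set-VM parameters:

    # Sketch: adjust a stopped VM's vCPU count and startup memory (placeholder values).
    Set-VM -Name "VMName" -ProcessorCount 8 -MemoryStartupBytes 32GB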

    Important: the steps and considerations may vary based on the specific versions of Hyper-V and GPU drivers you are using.

    If you have any questions or concerns, please let me know without hesitation. In the meantime, I hope my reply will be helpful to you!

    Best Regards,
    Xu Gu

    1 person found this answer helpful.
  2. Anonymous
    2023-12-22T01:48:44+00:00

    Hi Sam,

    Thanks for your update and for sharing the good news.

    If you have any questions, please let me know.

    Best Regards.

    1 person found this answer helpful.