VDI and Discrete Device Assignment for GPUs?

Leith Tussing 26 Reputation points

We're working on deploying our first VDI environment, and we have Hyper-V host servers with a couple of large compute GPUs in them. We want to make those GPUs available via VDI so that the users who need them can use a high-tier compute VDI system, without having to dedicate specific always-on VMs and deal with the hassle of time-slicing usage between users.

I've gone through the whole crazy Discrete Device Assignment PowerShell process for GPUs in a normal Windows 10 VM, but what I'm wondering is whether there's a way to do this with VDI systems. I know VDI creates VMs separate from the master image, but how could I dedicate the GPUs in the Hyper-V host to those specific individual VMs (not the master), along with all of the in-VM configuration like the GPU drivers? I know DDA is only 1:1, and these systems have 2 GPUs, so I want to have 2 VDI systems using them.
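For context, the DDA sequence I went through on a normal VM looks roughly like this (the VM name, device filter, and MMIO sizes below are placeholders, not my exact values):

```powershell
# Run elevated on the Hyper-V host. Names and sizes are placeholders.

# Find the GPU and grab its PCI location path
$gpu = Get-PnpDevice -FriendlyName "NVIDIA*" | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths `
    -InstanceId $gpu.InstanceId).Data[0]

# Disable the device on the host, then dismount it from the host partition
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force

# Prepare the VM: DDA requires TurnOff as the automatic stop action,
# and a large GPU needs extra MMIO space reserved
Set-VM -VMName "ComputeVDI-01" -AutomaticStopAction TurnOff
Set-VM -VMName "ComputeVDI-01" -GuestControlledCacheTypes $true `
    -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33GB

# Hand the device to the VM
Add-VMAssignableDevice -LocationPath $locationPath -VMName "ComputeVDI-01"
```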


2 answers

  1. Limitless Technology 39,206 Reputation points

    Hi @Leith Tussing

    You can try GPU passthrough so that you can dedicate the GPU in Hyper-V, but make sure you have met all of the prerequisites listed below:

    • Verify that your GPU device is supported by your server vendor.
    • Verify that your GPU can be used in passthrough mode.
    • Verify whether your GPU device maps memory regions with a total size of 16 GB or more.
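    On the third point, if the card does map more than 16 GB, you can reserve additional MMIO space on the VM before assigning the device. A rough sketch (the VM name and sizes are examples; size the high MMIO space to cover all of the card's memory regions with some headroom):

    ```powershell
    # Reserve 32-bit and 64-bit MMIO space for a large GPU before DDA.
    # VM name and sizes are examples, not values specific to any card.
    Set-VM -VMName "VDI-GPU-01" -LowMemoryMappedIoSpace 3GB -HighMemoryMappedIoSpace 33GB
    ```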

    Hope this resolves your query!


    --If the reply is helpful, please Upvote and Accept it as an answer--


  2. Leith Tussing 26 Reputation points

    We're using NVIDIA Tesla M10 GPUs from Dell, in Dell chassis, and from the NVIDIA documentation they're supported for DDA on Windows Server 2019. They are 32 GB cards though; are there other issues we need to work around with memory above 16 GB?

    So I guess I would do something like this to get it to work?

    1. Make the master VM image for them and bind one of the video cards to it so I can then install the NVIDIA driver in Windows to use it
    2. Do the sysprep and shutdown
    3. Unbind the GPU from the master VM
    4. Go through the VDI process of making it into a pool of 2
    5. Then when that's done, bind the two GPUs, one to each VDI VM
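    If that's the right order, steps 3 and 5 would be roughly these commands on the host (the VM names and location-path variables are placeholders for whatever the VDI broker actually creates):

    ```powershell
    # Step 3: remove the GPU from the master before cloning, so the
    # master image isn't tied to a host device
    Remove-VMAssignableDevice -LocationPath $gpuLocationPath -VMName "GPU-Master"
    # Optionally give it back to the host in the meantime:
    # Mount-VMHostAssignableDevice -LocationPath $gpuLocationPath

    # Step 5: once the two pool VMs exist, assign one physical GPU to each
    Set-VM -VMName "GPUPool-1" -AutomaticStopAction TurnOff
    Add-VMAssignableDevice -LocationPath $gpuLocationPath1 -VMName "GPUPool-1"

    Set-VM -VMName "GPUPool-2" -AutomaticStopAction TurnOff
    Add-VMAssignableDevice -LocationPath $gpuLocationPath2 -VMName "GPUPool-2"
    ```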