Hyper-V Easy-GPU-PV - How does GPU partitioning work?

Thomas Hawkins 10 Reputation points
2023-04-24T20:28:28.65+00:00

Confused about GPU allocation in Hyper-V

Hi everyone, I'm running Windows 11 with a 4090 GPU and have been using the Easy-GPU-PV scripts to create a virtual machine with GPU resource allocation set to 50%. From what I understand, this means that the host still has 100% use of the GPU, but the guest only has access to 50% of it.

My interpretation is that 100% GPU usage in the guest would show up as 50% in Task Manager on the host. If the guest is running at 20% load, that would equate to 10% on the host. If the guest is powered off, the host can use 100% of the GPU again.

However, I've come across conflicting information suggesting that once you partition the GPU, you lose 50% of it at all times, even while the guest is off. I'm hoping the Hyper-V community, who are more experienced in this than me, can shed some light on this.

I'd also like to know whether there's a way to check if my GPU is currently at 50% or 100% on the host, if it's the case that 50% of the GPU is fully reserved (if that's how it works).
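In case it's useful to anyone answering: would querying the VM's GPU partition adapter from an elevated PowerShell prompt on the host tell me anything here? Something like this (the VM name is just my example):

```powershell
# Show the partition values configured for the VM (VM name is just an example)
Get-VMGpuPartitionAdapter -VMName "GPU-PV-VM" | Format-List *
```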

Thank you in advance for any insights you can provide. I've received mixed answers on Discord groups and Reddit posts, and my head is starting to hurt from trying to figure this out! I have done some benchmarks before and after setting up Hyper-V. These are the first three Heaven benchmark results I got before running the script:

Test 1

And these are the results after running the script and then turning the VM off in Hyper-V, so it's off and shouldn't affect GPU performance:

Test 2

It's hard to figure out whether the FPS drop is circumstantial.
I'm basically trying to figure out:

  1. Does the GPU lose performance to Easy-GPU-PV when the VM is "off"?
  2. How does the partitioning work?
Hyper-V

2 answers

  1. Limitless Technology 44,381 Reputation points
    2023-04-25T14:03:30.92+00:00

    Hello there,

    Hyper-V provides two different options for assigning GPUs to virtual machines. One option is to use RemoteFX. The other option is to use Discrete Device Assignment (DDA).

    GPU-PV allows you to partition your system's dedicated or integrated GPU and assign it to several Hyper-V VMs. It's the same technology that is used in WSL2 and Windows Sandbox. Easy-GPU-PV aims to make this easier by automating the steps required to get a GPU-PV VM up and running: https://github.com/jamesstringerparsec/Easy-GPU-PV

    Hope this resolves your query!

    --If the reply is helpful, please Upvote and Accept it as an answer--
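    If you want to see roughly what the script automates on the host side, the core steps look something like the sketch below. The VM name and the resource values are placeholders rather than the script's own (Easy-GPU-PV also copies the host GPU driver files into the guest, which isn't shown here), so treat this as an illustration and refer to the repository for the real steps:

    ```powershell
    # Sketch of the host-side GPU-PV steps that Easy-GPU-PV automates.
    # VM name and resource values are placeholders, not taken from the script.
    $vmName = "GPU-PV-VM"
    $share  = 500000000   # roughly 50% of 1,000,000,000 resource units (assumption)

    # Attach a GPU partition adapter to the VM
    Add-VMGpuPartitionAdapter -VMName $vmName

    # Give the VM a share of the partitionable GPU resources
    Set-VMGpuPartitionAdapter -VMName $vmName `
        -MinPartitionVRAM $share -MaxPartitionVRAM $share -OptimalPartitionVRAM $share `
        -MinPartitionEncode $share -MaxPartitionEncode $share -OptimalPartitionEncode $share `
        -MinPartitionDecode $share -MaxPartitionDecode $share -OptimalPartitionDecode $share `
        -MinPartitionCompute $share -MaxPartitionCompute $share -OptimalPartitionCompute $share

    # Cache and MMIO space settings the guest needs for the virtual GPU
    Set-VM -VMName $vmName -GuestControlledCacheTypes $true `
        -LowMemoryMappedIoSpace 1GB -HighMemoryMappedIoSpace 32GB
    ```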


  2. Nicklas Thomsen 0 Reputation points
    2024-07-24T12:41:54.4933333+00:00

    Hello Thomas,

    It's an interesting question, and not something that James Stringer writes a lot about in his README.

    While I am a subject matter expert on Hyper-V, this is merely my speculation.

    James mentions he is using GPU technologies from WSL2 and Windows Sandbox.

    Both technologies use Hyper-V to abstract the hardware layer and give you a virtual machine.

    But I do not think James uses the real GPU partitioning technology for Hyper-V, which has only just been released for Windows Server 2025 (Hyper-V server), as it has strict hardware requirements for the GPU:

    https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/gpu-partitioning?pivots=windows-server

    However, I think James uses the WDDM GPU virtualization technology, which is part of Windows Sandbox (which again is based on the overall Hyper-V virtualization technology):

    https://learn.microsoft.com/en-us/windows/security/application-security/application-isolation/windows-sandbox/windows-sandbox-architecture#wddm-gpu-virtualization

    That would explain why you are able to use consumer GPUs for your virtual machines and aren't limited to the strict GPU requirements.
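    (As a side note, if you want to see what your host actually exposes for GPU partitioning, I believe you can query it from an elevated PowerShell prompt on the host. I haven't verified what a consumer GPU reports here, so take it as a pointer rather than a definitive test.)

    ```powershell
    # List the GPUs the Hyper-V host exposes as partitionable (run elevated on the host)
    Get-VMHostPartitionableGpu | Format-List *
    ```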

    This works fine for a Windows Sandbox environment, where security isn't a concern, since you are only allowed to run one Windows Sandbox instance by default.
    From the notes in the Windows Sandbox documentation:

    Note: Enabling virtualized GPU can potentially increase the attack surface of the sandbox.

    This is OK when running one VM on your own machine sharing the same GPU kernel, but unacceptable when hosting multiple VMs for a number of customers.

    It would seem to me that James has figured out a way to use the WDDM GPU virtualization solution with a normal Hyper-V VM, and not only with the Windows Sandbox VM.

    That would explain why the VM you create is so dependent on the host GPU drivers: it uses the WDDM GPU virtualization method (it shares the GPU kernel with the host) rather than true GPU partitioning as seen on Windows Server 2025, where the VM gets a true GPU partition and does not share the GPU kernel with the host.

    So yes, unlike Windows Sandbox, because of James's implementation, it is likely that GPU resources are still allocated even after the VM is powered off (as would be the case with a real GPU partitioning solution).
    In Windows Sandbox, GPU resources are released once you exit the VM (it's destroyed); however, since this is a normal VM, I presume it does not release the WDDM graphics resources.

    Razzmatazzz's interactive script also supports my theory, since there is a feature to disable GPU acceleration for your VMs.
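    If you want to test the performance question yourself, one way would be to benchmark the host, detach the partition adapter from the powered-off VM, and benchmark again. I presume that is roughly what the disable feature does; the VM name below is just a placeholder:

    ```powershell
    # Detach the GPU partition adapter from the powered-off VM (VM name is a placeholder)
    Remove-VMGpuPartitionAdapter -VMName "GPU-PV-VM"
    # ...run the Heaven benchmark on the host here...
    # Re-attach the adapter afterwards (the partition values set by the script
    # would then need to be re-applied with Set-VMGpuPartitionAdapter)
    Add-VMGpuPartitionAdapter -VMName "GPU-PV-VM"
    ```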

    Hope this answers your question

    Regards

    Nicklas

