Hyper-V VMMS service won't start

Torbjorn Steinsland 1 Reputation point
2022-09-02T13:48:31.077+00:00

We have a Hyper-V failover cluster node where the VMMS service won't start. This is a new cluster, installed this summer with Server 2019 Datacenter on HPE DL380 Gen10 hardware using a Fibre Channel SAN. Firmware and software are up to date.
What happened was that we drained the node and paused it in the cluster, then restarted it. After the restart we discovered that the VMMS service could not start.

Trying to start the service manually gives this error:
Windows could not start the Hyper-V Virtual Machine Management service on Local Computer.
Error 0x8007000e: Not enough memory resources are available to complete this operation.

Rebooting the server does not fix the issue. The host has 768 GB of RAM, so it is extremely unlikely that it is genuinely out of memory.

We have tried running sfc /scannow and DISM with /Cleanup-Image /RestoreHealth, but this didn't help.
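For anyone hitting the same error, these are the standard forms of the repair commands mentioned above, run from an elevated prompt (they didn't help in our case, but they're a safe first step):

```powershell
# System file and component-store repair (elevated prompt)
sfc /scannow
DISM /Online /Cleanup-Image /ScanHealth
DISM /Online /Cleanup-Image /RestoreHealth
```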

Has anyone encountered this issue and knows how this could happen? Can we resolve this without reinstalling the node?


4 answers

  1. Joel Campbell 21 Reputation points
    2022-11-09T13:01:30.267+00:00

    Hi,

    We had this exact issue in a Windows Server 2019 (Core) S2D cluster. We logged it with Microsoft, and after a week they were able to find the cause and a solution, meaning in our case we didn't have to remove and re-add the Hyper-V features.

    Process Monitor revealed that when the Virtual Machine Management Service tried to start, it was seemingly getting stuck reading the contents of the data.vmcx file located in C:\ProgramData\Microsoft\Windows\Hyper-V on the local node. The support engineer was then able to verify, using a "VML" trace, that there was indeed an issue with the data.vmcx file.
    (Process Monitor screenshot attached.)

    So to fix it we found we had to do the following:

    • Ensure the node is paused with no roles
    • Rename the data.vmcx file in C:\ProgramData\Microsoft\Windows\Hyper-V\ to dataold.vmcx (on the problem node)
    • Reboot the node
    • Check the Virtual Machine Management Service has started.
    • Test migrate a VM to the node
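    The steps above could be sketched in PowerShell roughly like this. This is a sketch of our manual procedure, not an exact transcript; run it on the problem node and adapt paths as needed:

    ```powershell
    # Pause and drain the node first, if not already done
    Suspend-ClusterNode -Name $env:COMPUTERNAME -Drain

    # Back up, then rename the store file VMMS gets stuck reading
    $store = 'C:\ProgramData\Microsoft\Windows\Hyper-V\data.vmcx'
    Copy-Item $store "$store.bak"
    Rename-Item $store 'dataold.vmcx'

    Restart-Computer
    # After the reboot, confirm VMMS is running before resuming the node:
    Get-Service vmms | Select-Object Name, Status, StartType
    ```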

    Additionally, we then found that 2 VMs wouldn't migrate to the node (despite others working) because the per-VM vmcx configuration files were located in C:\ProgramData\Microsoft\Windows\Hyper-V\Planned Virtual Machines on the node. Stopping the Virtual Machine Management Service, moving the vmcx files out of this location (to the desktop in my case), and then starting the service again fixed this problem too; all VMs migrated successfully after that.
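    That second fix (clearing the stale per-VM config files) amounts to something like the following sketch; the desktop is just a holding area, so adjust the destination to taste:

    ```powershell
    Stop-Service vmms
    $planned = 'C:\ProgramData\Microsoft\Windows\Hyper-V\Planned Virtual Machines'
    # Move the stale per-VM config files out of the way rather than deleting them
    Move-Item -Path (Join-Path $planned '*.vmcx') -Destination "$env:USERPROFILE\Desktop"
    Start-Service vmms
    ```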

    We don't have a root cause for why this happened; at the time of writing the node has been stable. Hopefully this is of use to someone with the same issue. As always, take the relevant backups and precautions just in case your issue isn't fixed by the above.

    4 people found this answer helpful.

  2. Michael Taylor 55,051 Reputation points
    2022-09-02T16:06:33.073+00:00

    I have seen this issue when the Hyper-V services aren't started early enough in the boot process. Specifically, in my experience, the Host Compute Service and VMMS must both start early; if they start too late, the service fails to get the contiguous block of memory it needs to host VMs. The only workaround is to reboot, because once that memory is unavailable it is highly unlikely you'll get it back.

    I don't know exactly how much memory it needs, but it would need at least enough to start the VMs that are set to auto-start.
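    If you suspect a start-ordering problem, you can at least check and pin the startup type of both services. A hedged sketch; I'm not aware of a supported way to force a specific start order beyond making both Automatic:

    ```powershell
    # Inspect how the Hyper-V services are configured to start
    # (vmms = Virtual Machine Management, vmcompute = Host Compute Service)
    Get-Service vmms, vmcompute | Select-Object Name, Status, StartType

    # Ensure both start automatically at boot
    Set-Service vmms -StartupType Automatic
    sc.exe config vmcompute start= auto
    ```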

    1 person found this answer helpful.

  3. Kev Hammond (MysonPages) 1 Reputation point
    2022-09-08T11:24:14.53+00:00

    Hi all.

    Just to let you know, to get this working I had to remove the Hyper-V role and then add it back (after a fair few reboots).
    Be prepared to access your server via iLO or iDRAC, as removing and reinstalling the role will change the server's networking when it uninstalls and reinstalls the virtual network adaptors.
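    In PowerShell terms, removing and re-adding the role looks roughly like this; run it from the out-of-band console (iLO/iDRAC) since networking will drop, and expect several reboots:

    ```powershell
    # Remove the Hyper-V role, rebooting as required
    Uninstall-WindowsFeature -Name Hyper-V -Restart

    # Once the reboots have completed, add it back
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
    ```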

    I'd tried everything beforehand (uninstalling updates, rolling back, uninstalling AV, DISM, SFC, etc.); this is the only thing that worked. My next step would have been an in-place upgrade, but luckily this solved the issue.

    I then had to rebuild my virtual switches, but first untick the Hyper-V Extensible Switch binding on the network adaptors/teams you used previously; otherwise you'll get an error when trying to enable your new virtual switch.
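    A sketch of that cleanup, assuming the old team NIC is called 'TeamNIC' and the new switch 'vSwitch-LAN' (both placeholder names). The ComponentID for the Hyper-V Extensible Virtual Switch binding is vms_pp on the systems I've checked, but verify with Get-NetAdapterBinding first:

    ```powershell
    # Find the leftover Hyper-V Extensible Virtual Switch binding
    Get-NetAdapterBinding -Name 'TeamNIC' -ComponentID vms_pp

    # Clear the old binding from the physical adapter/team
    Disable-NetAdapterBinding -Name 'TeamNIC' -ComponentID vms_pp

    # Then recreate the virtual switch on that adapter
    New-VMSwitch -Name 'vSwitch-LAN' -NetAdapterName 'TeamNIC' -AllowManagementOS $true
    ```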

    Hope this helps someone else, as this has been very painful indeed.


  4. Limitless Technology 39,686 Reputation points
    2022-09-09T08:20:21.7+00:00

    Hello there,

    Error 0x8007000E generally points to memory pressure on the host, but since you have 768 GB of RAM, a genuine shortage of physical memory is unlikely to be the issue in your case.

    You may experience this issue because too much memory has been allocated to VMs, which can leave the host without enough memory to start the Hyper-V Virtual Machine Management service.

    You can adjust how much RAM a virtual machine can use. You should not allocate too little, because that results in a very slow virtual machine, but allocating too much can produce the 0x8007000E out-of-memory error and often has no positive effect on performance.
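    For example, a VM's dynamic memory limits can be capped so the host keeps headroom; the VM name and sizes below are placeholders:

    ```powershell
    # Cap this VM's dynamic memory so the host retains headroom
    # ('SQL01' and the byte values are example placeholders)
    Set-VMMemory -VMName 'SQL01' -DynamicMemoryEnabled $true `
        -MinimumBytes 4GB -StartupBytes 8GB -MaximumBytes 64GB
    ```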

    Here is a link with some additional troubleshooting steps you can try, to see if they help overcome your issue: https://learn.microsoft.com/en-us/troubleshoot/system-center/vmm/manage-host-fails-error-2911

    Please also have a look at the following link for VMM system requirements: https://learn.microsoft.com/en-us/system-center/vmm/system-requirements?view=sc-vmm-2022

    -----------------------------------------------------------------------------------------------------------------------------------------

    If the reply is helpful, please Upvote and Accept it as an answer.

