Hyper-V "Ghost" Entries After RAID Recovery

JordanF 6 Reputation points

OS: Windows Server 2019 - Hyper-V Role


Apologies if this has been asked elsewhere; I could not find a winning search term.
Any assistance is appreciated.

Issue Description
Non-existent VMs are listed in Hyper-V Manager (and by Get-VM in PowerShell).
I cannot identify where the queried metadata is stored or how to remove it.
The ghosts share names with production VMs, so I do not know a safe way to run Remove-VM with the proper reference.

We had to recover the Hyper-V VMs from backup after a RAID failure caused the loss of the data partition (the primary OS volume remained intact).
After replacing the array, Hyper-V no longer listed the lost VMs in Hyper-V Manager (just an empty list).
I recovered all VMs from backup, at which point it became clear that some metadata remained: to facilitate the recovery from backup, we had to apply new UUIDs.

After that reboot, and through a number of reboots since (monthly patching), only the newly recovered VMs were listed . . . until now.
Oddly, "ghosts" of the VMs from the lost RAID are now present, and I cannot find a way to deregister them or remove the entries via Hyper-V Manager.

Additional Information
Get-VM results: the "ghost" VMs appear with a Saved-Critical status.

In Hyper-V Manager, the ghost entries lack any context menu or management options.

Attempting to retrieve the UUID of the ghost VMs fails.
Using Aaron Parker's script, we can see the duplicates; the legacy entries lack UUIDs.
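
For anyone triaging something similar, a minimal sketch (assuming an elevated PowerShell session on the Hyper-V host) that lists every registered VM alongside its Id, making entries with a blank Id easy to spot:

```powershell
# List each registered VM with its GUID and state; in this case the
# ghost entries showed a Saved-Critical status and no retrievable Id.
Get-VM |
    Select-Object Name, Id, State, Status |
    Sort-Object Name |
    Format-Table -AutoSize
```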


4 answers

Sort by: Most helpful
  1. Limitless Technology 39,501 Reputation points

    Hi JordanF-0632,

    Check the following folder and subfolders for ghost links and files of deleted virtual machines:


    Any remnants here may prevent you from recreating a VM with the same ID as before.
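
    As a hedged pointer: on a host that has never relocated its paths with Set-VMHost, the configuration store defaults to C:\ProgramData\Microsoft\Windows\Hyper-V, and leftover configuration/state files can be enumerated like this (the path only applies to default installs):

    ```powershell
    # Enumerate leftover VM configuration/state files in the DEFAULT store;
    # adjust the path if Set-VMHost was used to relocate it.
    Get-ChildItem 'C:\ProgramData\Microsoft\Windows\Hyper-V' -Recurse `
        -Include *.vmcx, *.vmrs -ErrorAction SilentlyContinue |
        Select-Object FullName, LastWriteTime
    ```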

    --If the reply is helpful, please Upvote and Accept as answer--


  2. Vito Procino 6 Reputation points

    You need to:
    (Optionally) export the VM
    Stop the Hyper-V service
    Remove the older XML (not the VHDX)
    Start the Hyper-V service
    Create a new VM and attach the older VHDX

    If your Hyper-V host is part of a failover cluster, you need to move the Saved-Critical VM to the node that hosted it before the crash. Then the Saved-Critical VM can be used without a problem.

    You can also try: Remove-VMSavedState -VMName <vmname>
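
    A sketch of those steps in PowerShell, with placeholders for the site-specific parts (vmms is the Hyper-V Virtual Machine Management service; the configuration path and VM name here are placeholders, not values from this thread):

    ```powershell
    Stop-Service vmms                # stop Hyper-V Virtual Machine Management
    # Remove the stale configuration for the ghost VM -- keep the .vhdx files!
    # Remove-Item '<config path>\Virtual Machines\<vm guid>.vmcx'
    Start-Service vmms
    Remove-VMSavedState -VMName '<vm name>'   # clear a lingering saved state
    ```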


  3. JordanF 6 Reputation points

    Thank you, @Vito Procino.

    To follow up in reverse order: there is no clustering here, so that hopefully simplifies things.

    I'm a bit lost on this step:

    Remove older xml (not vhdx)

    If you are referring to the individual VMCX files (virtual machine configuration files) that define the settings for each virtual machine, those were lost with the original data drive.
    Hyper-V was configured to store both the virtual hard disks and the virtual machine configuration files on that data partition.

    During initial configuration I used:

    Set-VMHost -VirtualMachinePath  [path]  
    Set-VMHost -VirtualHardDiskPath [path]  

    Is there a separate XML file (or files) that tracks mounted/enumerated virtual machines?
    Do you know where it would be located, or how I can identify the path?
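
    (For reference, the paths configured with Set-VMHost can be read back from the host at any time:)

    ```powershell
    # Show where this host currently stores VM configurations and disks.
    Get-VMHost | Select-Object VirtualMachinePath, VirtualHardDiskPath
    ```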

    Thanks again for your help,


  4. JordanF 6 Reputation points


    First off, I just wanted to thank everyone who submitted suggestions. I was able to resolve the issue.

    Because I had changed the default storage locations using the following, the VM metadata was not present in the usual ProgramData folder:

     Set-VMHost -VirtualMachinePath  [path]
     Set-VMHost -VirtualHardDiskPath [path]

    It turned out the solution was easier than I thought.
    I renamed the viable VMs so that Get-VM and Remove-VM could target each entry unambiguously (the error-state VMs could not be renamed).

    Steps performed:

    1. Renamed the valid member of each VM pair by appending a "v2" suffix
    2. Ran Get-VM [vm name] | fl * for each VM in the pair
    3. Verified that the pair did not share common IDs or directories
    4. Verified that backups were in place (always an important step)
    5. Ran Remove-VM -Name [vm name] -Force for the invalid pair member

    Rebooted to verify that the remaining VMs auto-start without issue.
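
    For anyone following along, those steps roughly translate to the sketch below, where 'AppServer' stands in for a real VM name from one duplicated pair:

    ```powershell
    # Rename the healthy copy; as noted above, the error-state ghost
    # cannot be renamed, so only the valid VM picks up the suffix.
    Rename-VM -Name 'AppServer' -NewName 'AppServer-v2'

    # Compare the pair: confirm they share no Id or on-disk path.
    Get-VM 'AppServer*' | Format-List Name, Id, Path, State

    # With backups verified, only the ghost still matches the old name.
    Remove-VM -Name 'AppServer' -Force
    ```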

    I hope this helps someone in a similar boat.
