New 2019 VMM, moving from 2016 VMM

DaveK 96 Reputation points
2020-08-18T04:44:06.7+00:00

Hello. After a failed in-place SCVMM 2016 to 2019 upgrade, where the VMM server would not uninstall, we ended up with a very unstable 2016 SCVMM. I now have a new, separate 2019 SCVMM set up, patched, and fully working, but empty. After researching the "Reassociate this host with this VMM environment" check box, my questions are:

1. The new SCVMM is 2019 and the servers still have the 2016 VMM agent installed. Does the reassociate check box also upgrade the agent, and is a reboot required for all nodes in the cluster?
2. The hosts are already clustered, fully in production, and populated. Are there any known issues with bringing the whole cluster into a new VMM using the reassociate check box?
3. To remove the cluster from the 2016 SCVMM, is the correct choice the "Remove" selection when right-clicking on the cluster? Removing a node removes it from the cluster, and I'm sure we don't want to uncluster.

The current configuration is 2016 VMM with two multi-node clusters of Windows Server 2016 hosts and lots of VMs. Thanks in advance.
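For reference, here is roughly how I would expect the cluster add to look in the VMM PowerShell module if the check box maps to a cmdlet parameter. This is only a sketch based on my assumptions: the server, cluster, host group, and Run As account names are placeholders, and I'm assuming -Reassociate is the scripted equivalent of the console check box. Please correct me if that's wrong.

    # Run from the new VMM 2019 server (VMM console / PowerShell module installed).
    Import-Module VirtualMachineManager
    Get-SCVMMServer -ComputerName "newvmm2019.contoso.com" | Out-Null

    # Placeholders - adjust to your environment.
    $runAs     = Get-SCRunAsAccount -Name "HyperVAdmins"
    $hostGroup = Get-SCVMHostGroup  -Name "All Hosts"

    # Add the existing Hyper-V cluster; -Reassociate is assumed to behave like
    # the "Reassociate this host with this VMM environment" check box.
    Add-SCVMHostCluster -Name "hvcluster01.contoso.com" `
                        -VMHostGroup $hostGroup `
                        -Credential $runAs `
                        -Reassociate $true

    # Watch the resulting jobs rather than relying only on the console view.
    Get-SCJob | Where-Object { $_.Status -eq "Running" } | Select-Object Name, Progress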

System Center Virtual Machine Manager

Accepted answer
  1. DaveK 96 Reputation points
    2020-08-21T07:28:22.837+00:00

    OK, to add for everyone: a reboot of the SCVMM server displayed the (Pending) status on the host in the VMs and Services tab. I right-clicked and chose Add Host to complete the process. The earlier issue was that (Pending) did not show until the SCVMM server was rebooted. This repeated on a second cluster added to the new 2019 SCVMM, which confirmed it, so we're good based on our actions.

    For others doing this (2016 SCVMM with multi-node clusters, going to a new 2019 SCVMM): we set up a new SCVMM, removed the Hyper-V cluster from the retiring VMM, then added the cluster and hosts to the new 2019 SCVMM. It's a systematic process: add the cluster, which adds the nodes. Watch the VMs and Services tab for hosts showing (Pending) and wait for them all to clear. Reboot the SCVMM server after the add completes if any (Pending) statuses remain. In one case, nodes appeared to clear the (Pending) status but really hadn't; rebooting the SCVMM server and rechecking the VMs and Services tab showed the hosts still Pending. Right-click any of them and choose Add Host to Cluster; it's not the typical cluster-node operation but VMM's wording for adding the host to the cluster. In our case, once that was done and the jobs were good, we added the storage volumes to the cluster in SCVMM and let the jobs complete. We did see a permissions issue in the job results, but the storage for the cluster showed a green check and a good status.
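    If it helps anyone automating the "watch for (Pending) and finish the add" step above, here's a rough PowerShell sketch. The state property/value names, host group, and Run As account are my assumptions; confirm them against your own host objects before relying on this.

        # After Add-SCVMHostCluster completes, look for hosts that did not finish association.
        $pending = Get-SCVMHost | Where-Object { "$($_.OverallState)" -match "Pending" }   # property name assumed

        foreach ($h in $pending) {
            # Rough equivalent of right-click > Add Host (reassociate) in the console - parameters assumed.
            Add-SCVMHost -ComputerName $h.Name `
                         -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") `
                         -Credential (Get-SCRunAsAccount -Name "HyperVAdmins") `
                         -Reassociate $true
        }

        # Refresh each node and re-check before adding storage volumes.
        Get-SCVMHost | ForEach-Object { Read-SCVMHost -VMHost $_ } | Select-Object Name, OverallState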


2 additional answers

  1. AndyLiu-MSFT 576 Reputation points
    2020-08-19T02:48:14.14+00:00
    1. To my knowledge, the "Reassociate this host with this VMM environment" option only works within the same VMM version.
    2. Before adding the cluster to a new VMM environment, it's recommended to create the same configuration there first, such as networking, storage, and host groups.
    3. To remove the cluster from VMM, right-click the Hyper-V cluster in the VMM fabric and choose Remove (see the sketch below).

    Plus, since you need to perform the migration in production, I would recommend testing in a lab beforehand.
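    To put points 2 and 3 into a scriptable form, a minimal sketch is below. The names are placeholders and the parameters are assumptions based on the documented VMM cmdlets; validate in a quiet window first, since removing the cluster takes it and its hosts out of the old VMM's management.

        # On the new VMM 2019 server: pre-create a matching host group (point 2).
        New-SCVMHostGroup -Name "Production HV" -ParentHostGroup (Get-SCVMHostGroup -Name "All Hosts")

        # On the old VMM 2016 server: remove the Hyper-V cluster from management (point 3).
        $cluster = Get-SCVMHostCluster -Name "hvcluster01.contoso.com"
        Remove-SCVMHostCluster -VMHostCluster $cluster -Credential (Get-SCRunAsAccount -Name "HyperVAdmins")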


  2. DaveK 96 Reputation points
    2020-08-19T06:43:13.033+00:00

    Thanks Andy. Unfortunately there is no lab. It's also a very bare-bones config, basically just the logical network created by adding the cluster (inherited VMM). So, pending any response, I did find a TechNet article on removing the cluster from the 2016 VMM and then adding it to the 2019 SCVMM. Everything was close to good: the cluster came in, and the six nodes were seen at the time the cluster was added to VMM.

    When the clustered nodes were added there was an anomaly. In the Fabric tab only five nodes are visible. In VMs and Services all six nodes are visible, but the one missing from the Fabric tab doesn't show any VMs and all of its properties selections are greyed out. A cluster refresh returns an error that not all nodes can be communicated with. I also get a warning: "Cluster node #2 was skipped during cluster refresh because it's in a pending state. Add the pending cluster node under VMM management." (I'm not sure where to go with that statement.) If I go back and add the node, I get the multi-reason error and it does not display the server name to add the host. I have confirmed WinRM; the firewalls are on, but the rules for the VMM agent are enabled. The VMM agent version is the same on all nodes. The account used for the add is a local admin on all nodes in the cluster and is also a domain admin account. That specific node was the cluster owner and had the second IP for the cluster.

    So is there a way for SCVMM to see that cluster member in the Fabric? Where is the "Add the pending cluster node under VMM management" step done? VMM is a brand new 2019 SCVMM, and the hosts are all Windows Server 2016. I do have an N+1 config if I need to remove the node. It's a flat network, so there are no clouds with separate VLANs; the single-VLAN configuration could be done at the FCM level and refreshed in VMM if possible. What options would you see in this situation?
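    For completeness, the per-node checks I'm running against the stuck node look roughly like this. The service name and the host property names are my guesses, and "node2.contoso.com" stands in for the pending node.

        # From the VMM 2019 server, against the node stuck in (Pending).
        $node = "node2.contoso.com"

        # 1. Confirm WinRM responds from the VMM server's point of view.
        Test-WSMan -ComputerName $node

        # 2. Confirm the VMM agent service is present and running (service name assumed: SCVMMAgent).
        Get-Service -ComputerName $node -Name SCVMMAgent

        # 3. Force a host refresh in VMM and look at its reported state (property names may differ).
        $h = Get-SCVMHost -ComputerName $node
        Read-SCVMHost -VMHost $h
        $h | Select-Object Name, OverallState, HostCluster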
