Jobs hang in SCVMM 2019 UR2

For a few weeks now we have had a problem where certain jobs take far longer in VMM than they actually do on the Hyper-V host. The jobs in question include increasing disk sizes, increasing VM memory sizes, and creating checkpoints. These jobs normally took between a few seconds and a few minutes, but now they all take between 20 and 50 minutes! The real task itself, if I check the Event Logs on the host, finishes within the normal time; VMM just doesn't detect this and keeps the job running far too long. This means that, for our customers, the job takes far too long and they are unable to perform any other actions on the VM in the meantime.

Does anyone have any clue what can cause this problem?
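For reference, the delay is easy to see from the VMM side. A minimal sketch using the VMM PowerShell module (assuming the VMM console is installed; "vmm01" is a placeholder for the VMM server name) that lists the last day's jobs with their wall-clock duration, which can then be compared with the timestamps in the host's event logs:

    # Connect to VMM and list the last day's completed jobs with durations.
    # "vmm01" is a placeholder; requires the VMM console's PowerShell module.
    Import-Module VirtualMachineManager
    Get-SCVMMServer -ComputerName "vmm01" | Out-Null
    Get-SCJob |
        Where-Object { $_.StartTime -gt (Get-Date).AddDays(-1) -and $_.EndTime } |
        Sort-Object StartTime -Descending |
        Select-Object Name, Status, StartTime, EndTime,
            @{ Name = 'Minutes'; Expression = { [math]::Round(($_.EndTime - $_.StartTime).TotalMinutes, 1) } }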
2 answers
JvdB 1 Reputation point
2021-02-05T08:53:08.4+00:00

Thanks for your reply. The OS version of our VMM server, as well as all our Hyper-V hosts (except some stand-alone hosts), is Server 2019. The VMM server is fully updated, and we regularly update our Hyper-V hosts. We are already seeing this issue across multiple update levels. As far as I know there were no big changes before this started happening.
JvdB 1 Reputation point
2021-02-08T09:05:48.227+00:00

That's not entirely true. We discovered that on every Hyper-V host the VMM agent (vmmagent) crashes every few minutes.
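Something along these lines should surface the crashes on a host (a sketch: Event ID 1000 from the "Application Error" provider is the generic Windows application-crash event; matching on "vmmagent" is an assumption about how the agent process appears in the message text):

    # Run on a Hyper-V host: recent application-crash events mentioning the VMM agent.
    Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Application Error'; Id = 1000 } -MaxEvents 200 |
        Where-Object { $_.Message -match 'vmmagent' } |
        Select-Object TimeCreated, @{ Name = 'Summary'; Expression = { ($_.Message -split "`r?`n")[0] } }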
This keeps happening, and we think it could be related to our issue. Reinstalling the agent doesn't fix it: it happens on a cluster we installed more than a year ago as well as on a cluster we installed only a couple of weeks ago. I hope this sounds familiar to someone.