Yep - that's possible, although there are a few considerations:
- Migration feasibility:
You cannot add Windows Server 2025 nodes to an existing Windows Server 2016 cluster; mixed-mode clustering is not supported between these versions. Therefore:
- You must build a new cluster using Windows Server 2025.
- Then, you can migrate VMs from the 2016 cluster to the new one.
Supported Migration Methods:
- Export/Import VMs.
- Shared Nothing Live Migration (if both clusters are in the same domain and the networks are compatible).
- Storage Migration (if using SMB or Cluster Shared Volumes).
Microsoft generally recommends Shared Nothing Live Migration for minimal downtime if both clusters coexist temporarily.
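A shared-nothing live migration can be driven with the Hyper-V PowerShell module. A minimal sketch (hostnames and paths below are placeholders, not from your environment):

```powershell
# Enable live migration on both the source (2016) and destination (2025) hosts:
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType CredSSP

# From the source host (CredSSP requires an interactive/local session),
# move the VM together with its storage to the new cluster node:
Move-VM -Name "AppVM01" `
        -DestinationHost "HV2025-N1" `
        -IncludeStorage `
        -DestinationStoragePath "C:\ClusterStorage\Volume1\AppVM01"
```

With Kerberos and constrained delegation configured, the same `Move-VM` call can also be run remotely.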
- Documentation & guidance:
Key resources:
- Microsoft Docs – Failover Clustering
- Hyper-V Live Migration Overview
- Windows Server In-Place Upgrade Paths
Additional Tips:
- Always validate the Failover Cluster configuration using `Test-Cluster`.
- Ensure identical VM generations, virtual switch names, and integration services.
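For example (node names below are placeholders):

```powershell
# Run full validation against the new cluster's nodes before
# (or after) creating the cluster:
Test-Cluster -Node "HV2025-N1","HV2025-N2"

# Review the generated HTML report and resolve every warning
# and error before migrating production VMs.
```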
- Active Directory domain requirement:
You can use workgroup-based clusters, but keep in mind the following restrictions:
Workgroup Cluster Support

| Feature | Supported in Workgroup Cluster? | Notes |
|---|---|---|
| Failover Clustering | Yes | Since Windows Server 2016 |
| Hyper-V Role | Yes | Full support |
| Cluster Shared Volumes (CSV) | Yes | Supported with SMB or iSCSI |
| Live Migration | Limited | Credential Security Support Provider (CredSSP) only; no Kerberos |
| Cluster-Aware Updating (CAU) | No | Not supported without AD |
| SMB Authentication | Limited | No Kerberos; use certificates or NTLM |
| Storage Spaces Direct (S2D) | No | Requires domain membership |
| Cluster Name Object in AD | No | Uses local authentication or DNS-based access |
| Management Tools (SCVMM, etc.) | Limited | Often assume domain membership |
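A workgroup cluster is created with a DNS-based administrative access point instead of an AD computer object. A minimal sketch (cluster name, node names, and IP are placeholders; each node also needs a local administrator account with identical username and password):

```powershell
# Create an AD-less cluster with a DNS administrative access point:
New-Cluster -Name "WGCLUSTER" `
            -Node "NODE1","NODE2" `
            -AdministrativeAccessPoint DNS `
            -StaticAddress 192.168.1.50
```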
- Recommended network configuration (6 NICs: 2x1G + 4x10G):
You can split NICs across roles for performance and isolation. Here's a recommended configuration:
Gigabit NICs (1G) – Management and Backup roles

| NIC | Role | Notes |
|---|---|---|
| NIC1 | Management OS | Domain join, RDP, DNS, etc. |
| NIC2 | Backup / Out-of-band | Optional; can isolate backup traffic |
10G NICs – High-throughput roles

| NIC | Role | Notes |
|---|---|---|
| NIC3 | iSCSI/Storage | Dedicated to iSCSI (use MPIO). No teaming for iSCSI. |
| NIC4 | iSCSI/Storage | Same as above. One NIC per path for redundancy. |
| NIC5 | Live Migration | Enable SMB Direct (RDMA) if supported. Use compression or TCP if not. |
| NIC6 | Cluster + CSV + Heartbeat | Carries cluster communication and CSV traffic. Prioritize via cluster network metrics. |
💡 Note: You can adjust roles depending on your storage configuration (e.g., SMB vs iSCSI), but keep Live Migration and storage traffic on separate NICs. Avoid teaming unless for specific roles like Management or CSV/Live Migration when RDMA is unavailable.
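The role split above can be enforced through cluster network roles and metrics. A sketch, assuming the cluster network names shown below (rename to match your environment):

```powershell
# Exclude the iSCSI networks from cluster communication (Role 0 = None):
(Get-ClusterNetwork "iSCSI1").Role = 0
(Get-ClusterNetwork "iSCSI2").Role = 0

# Restrict NIC6's network to cluster-only traffic (Role 1 = Cluster only):
(Get-ClusterNetwork "ClusterCSV").Role = 1

# Lower metric = preferred network for CSV/cluster traffic:
(Get-ClusterNetwork "ClusterCSV").Metric = 900

# Prefer NIC5's subnet for live migration (placeholder subnet):
Add-VMMigrationNetwork "10.0.50.0/24"
```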
You might want to also consider:
- Enable jumbo frames (MTU 9000) on 10G NICs if your switch infrastructure supports it.
- Keep DNS, Time Sync, and domain connectivity stable.
- Ensure cluster validation is 100% successful before bringing VMs in.
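Jumbo frames can be enabled per adapter; the adapter names and the exact advanced-property value vary by driver, so verify with `Get-NetAdapterAdvancedProperty` first:

```powershell
# Enable jumbo frames on a 10G adapter (adapter name is a placeholder):
Set-NetAdapterAdvancedProperty -Name "NIC5" `
    -RegistryKeyword "*JumboPacket" -RegistryValue 9014

# Verify end-to-end (including switch ports) with a no-fragment ping;
# 8972 bytes of payload + headers fits a 9000-byte MTU:
ping 10.0.50.2 -f -l 8972
```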
If the above response helps answer your question, remember to "Accept Answer" so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.
hth
Marcin