Hyper-V: How many network cards do I need?
[Edit 19/04/2011 - Include more info on Live Migration NIC]
[Edit 19/06/2011 - More info on NICs for cluster communications]
This is a topic that comes up all the time, especially when dealing with Hyper-V clusters. The short answer to the question is - it depends.
For this post I will talk about networking requirements for a SAME SITE CLUSTER. Multisite clusters, or stretch clusters, have different networking requirements. I will cover this topic in a different post.
Network requirements for a same-site cluster
There are some great TechNet articles and blog posts on this topic. However, I believe most of these posts focus on the logical networking and don't take the physical hardware and risk profile into account. What do I mean? I've seen people get really hooked on "avoid the single point of failure" when designing. The problem I find is that the design stops at the logical level and doesn't take the physical hardware into account. Having fault-tolerant NICs for the parent partition isn't much good if both ports are on the same multiport card or backplane. Sometimes you want the cluster to fail over its workload; keeping the workload on a node at all costs can just be a case of over-engineering. Enterprise Architecture 101 - keep it simple!
First things first, let's just look at the logical minimum requirements for network cards in a Hyper-V cluster.
| Usage | Description | Traffic Requirements | Recommended Connection Type |
| --- | --- | --- | --- |
| Parent Partition | Used for the management of the Hyper-V host. Also used by System Center Virtual Machine Manager. | Typically low bandwidth. Can increase when deploying VMs from SCVMM. | Public |
| Storage | iSCSI network connection to the SAN. | High bandwidth and low latency required. | Refer to your storage vendor. Normally private. |
| VM Network | Used to provide network access for your VMs. | Can vary depending on the workload. | Public |
| Cluster Heartbeat | Used for cluster communication to determine the status of other cluster nodes. | Low bandwidth and low latency required. | Private |
| Cluster Shared Volume (CSV) | Used in scenarios where redirected I/O is required. | Idle until redirected I/O kicks in, at which point high bandwidth and low latency are required. | Private |
| Live Migration | Used to transfer running VMs from one cluster node to another. | Idle until a Live Migration occurs, at which point high bandwidth and low latency are required. | Private |
Comments:
- When I say public for the recommended connection type, I mean that it's OK to have other traffic on the same subnet/network; in other words, network congestion there should not impair performance or cause a failover.
- When I say private for the recommended connection type, I mean that ideally you should have a dedicated subnet/network for that type of cluster communication. Congestion could cause performance issues or even trigger a failover.
- The cluster heartbeat/communication can be configured to use any NIC presented to the OS, so in theory you don't need to dedicate a NIC to cluster communications anymore; you could let it use pretty much any interface except the iSCSI ones. Old habits have not died away for me, so I still prefer a dedicated NIC. The more NICs you have in the server the better ... it's more flexible.
The table above just deals with the logical requirements for a Hyper-V cluster. I'll deal with single points of failure, combining usage, and teaming shortly. So, based on the above, that's 6 different logical networks. Your solution may require one or more NICs per logical network. There are options for combining logical networks as well as options for teaming. But before we get into that ... do you have the network hardware required to warrant NIC teaming or multipath connections? When you have the answer to this question you can start to define your networking design in earnest.
I have a couple of observations/comments I'd like to make:
- What is the point of having multipath iSCSI connections if both are patched into a single switch or if your SAN only has ports on a single card?
- What's the point in NIC teaming (fail on fault) if your physical machine has a single point of failure in a multiport network card? The same also holds true for blade enclosures. If there is going to be a hardware failure, it's very unlikely to be limited to a single port; the whole card/unit is going to fail.
The whole reason behind clustering your Hyper-V solution is to take serious component failure into account. With this in mind, don't over-engineer things. Don't get too hung up on single points of failure in the logical world. Just make sure you have enough capacity in your cluster for your workloads (the N+1 rule applies).
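To illustrate the N+1 rule, here is a minimal Python sketch; the node and VM sizes are made-up numbers, and memory is used as a stand-in for capacity. The idea is simply that the cluster must still fit every workload after losing its single largest node:

```python
def can_survive_node_failure(node_capacity_gb, vm_memory_gb):
    """N+1 check: the cluster must still fit every VM after losing
    its single largest node (worst-case hardware failure)."""
    total_capacity = sum(node_capacity_gb)
    largest_node = max(node_capacity_gb)
    workload = sum(vm_memory_gb)
    # Capacity left after the worst-case single-node failure.
    return workload <= total_capacity - largest_node

# Four nodes with 64 GB RAM each; thirty 4 GB VMs (120 GB total).
# Losing one node leaves 192 GB, so the workload still fits.
print(can_survive_node_failure([64, 64, 64, 64], [4] * 30))  # True
```

In practice you would also budget for CPU and storage headroom, but the same reserve-one-node logic applies.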
This is how I would setup network cards in a typical Hyper-V cluster.
| Usage | Number of Network Cards | Comments |
| --- | --- | --- |
| Parent Partition | 1 network card | |
| Storage | 2 network cards - not teamed | |
| VM Network | 1+ network cards depending on the workload; teaming is optional. Normally at least 2 cards. | |
| Cluster Heartbeat | 1 network card | |
| Cluster Shared Volume (CSV) | 1 network card | |
| Live Migration | 1 network card | |
Notes on the above:
- The settings I've outlined for the NICs assigned to VM networks will be overwritten as soon as you select them for use in a Hyper-V virtual network. All protocol bindings will be replaced with the Hyper-V virtual network switch protocol. It's still best practice to configure the NICs regardless.
- Once the NICs are configured, check DNS to make sure there is only ONE entry for the hostname in both the forward and reverse DNS zones.
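The DNS check above can be sketched in Python. This is a hedged illustration only: the hostnames and zone contents are made up, and the forward/reverse zones are modelled as plain dictionaries rather than queried from a real DNS server:

```python
from collections import Counter

def multi_homed_hosts(forward_zone, reverse_zone):
    """Return hostnames registered more than once in either zone.
    forward_zone maps hostname -> list of A-record IPs;
    reverse_zone maps IP -> hostname (PTR record)."""
    # Hosts with more than one A record in the forward zone.
    bad = {host for host, ips in forward_zone.items() if len(ips) > 1}
    # Hosts that own more than one PTR record in the reverse zone.
    ptr_counts = Counter(reverse_zone.values())
    bad |= {host for host, n in ptr_counts.items() if n > 1}
    return sorted(bad)

# hyperv01 accidentally registered both its management NIC and its CSV NIC.
forward = {"hyperv01": ["10.0.0.11", "10.0.1.11"], "hyperv02": ["10.0.0.12"]}
reverse = {"10.0.0.11": "hyperv01", "10.0.1.11": "hyperv01",
           "10.0.0.12": "hyperv02"}
print(multi_homed_hosts(forward, reverse))  # ['hyperv01']
```

The usual fix is to untick "Register this connection's addresses in DNS" on every NIC except the parent partition's management NIC.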
So what's the answer?
Based on the above table, and assuming 2 network cards for your VMs' external network, I recommend a minimum of EIGHT (8) logical network connections/NICs for a PRODUCTION Hyper-V cluster. Yes, you could double up on some of the NICs, like combining the heartbeat with the CSV NIC, but I feel this is the best balance.
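The arithmetic behind that count is just a tally of the table above, with the assumed two cards for the VM network:

```python
# Recommended NIC counts per logical network, per the table above.
nic_plan = {
    "Parent Partition": 1,
    "Storage (iSCSI, not teamed)": 2,
    "VM Network (external)": 2,  # assuming two cards, as stated above
    "Cluster Heartbeat": 1,
    "Cluster Shared Volume (CSV)": 1,
    "Live Migration": 1,
}
print(sum(nic_plan.values()))  # 8
```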
I referenced a few TechNet articles when putting this together. Here they are:
Hyper-V: Live Migration Network Configuration Guide
Requirements for Using Cluster Shared Volumes in a Failover Cluster in Windows Server 2008 R2
Hyper-V: Using Live Migration with Cluster Shared Volumes in Windows Server 2008 R2
Designating a Preferred Network for Cluster Shared Volumes Communication