Storage options for Windows Server 2008 Hyper-V
Windows Server 2008’s Hyper-V has been in public beta for a while now and lots of people have been experimenting with it. One aspect that I am focusing on is storage for those virtualized environments and more specifically the options related to SAN storage.
Virtualization terminology
Before we start, I wanted to define some terms commonly used in virtualization. We refer to the physical computer running the Hyper-V software as the parent partition or host, as opposed to the child partition or guest, which is the term used for a virtual machine. You can say, for instance, that the host must support hardware-assisted virtualization or that you can now run a 64-bit OS in the guest.
The other term used with Hyper-V is Integration Components. This is the additional software you run on the guest to better support Hyper-V. Windows Server 2008 already ships with Hyper-V Integration Components, but older operating systems will need to install them separately. In Virtual Server or Virtual PC, these were called “additions”.
Exposing storage to the host
A Hyper-V host is a server running Windows Server 2008, and it supports the many different storage options of that OS. This includes directly attached storage (SATA, SAS) and SAN storage (FC, iSCSI). Once you expose disks to the host, you can expose them to the guest in many different ways.
VHD or passthrough disk on the host
As with Virtual Server and Virtual PC, you can create a VHD file on one of the host’s volumes and expose that as a virtual hard disk to the guest. This VHD functions simply as a set of blocks, stored as a regular file using the host OS file system (typically NTFS). There are a few different types of VHD, like fixed size or dynamically expanding. This hasn’t changed from previous versions. The maximum size of a VHD continues to be 2040 GB (8 GB short of 2 TB).
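To make the limit concrete, here is a minimal Python sketch (a hypothetical helper of my own, not a Hyper-V API) that checks whether a requested disk size fits within a single VHD:

```python
# A minimal sketch (my own helper, not a Hyper-V API): check whether a
# requested virtual disk size fits within the 2040 GB VHD format limit.
VHD_MAX_BYTES = 2040 * 1024**3      # 2040 GB, i.e. 8 GB short of 2 TB

def fits_in_vhd(requested_gb: int) -> bool:
    """True if a disk of this size (in GB) can be stored in a single VHD."""
    return requested_gb * 1024**3 <= VHD_MAX_BYTES

print(fits_in_vhd(500))   # True  - fine as a VHD
print(fits_in_vhd(4096))  # False - consider a passthrough disk (next section)
```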
You can now expose a host disk to the guest without even putting a volume on it, using a passthrough disk. Hyper-V will let you “bypass” the host’s file system and access a disk directly. This raw disk, which is not limited to 2040 GB in size, can be a physical hard disk on the host or a logical unit on a SAN. To make sure the host and the guest are not trying to use the disk at the same time, Hyper-V requires the disk to be in the offline state on the host. If the disk being exposed to the guest is a LUN on a SAN from the host’s perspective, this is referred to as LUN passthrough. With passthrough disks you will lose some nice VHD-related features, like VHD snapshots, dynamically expanding VHDs and differencing VHDs.
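If you want to script the offline step, one option is to drive the built-in diskpart tool. Here is a minimal Python sketch, assuming you already know the disk number from diskpart’s “list disk” output and run it elevated on the host:

```python
# A minimal sketch: take a host disk offline so it can be used as a
# passthrough disk. Assumes you already know the disk number (for example
# from diskpart's "list disk") and run this elevated on the Hyper-V host.
import subprocess
import tempfile

def set_disk_offline(disk_number: int) -> None:
    script = f"select disk {disk_number}\noffline disk\n"
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    # diskpart /s executes the commands listed in the script file
    subprocess.run(["diskpart", "/s", script_path], check=True)

set_disk_offline(2)   # example: host disk 2 is now offline and usable as passthrough
```

The reverse operation (“online disk”) is what you would run if you later need the host itself to see the disk again.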
IDE or SCSI on the guest
When you configure the guest’s virtual machine settings, you need to choose how to show the host disk (be it VHD file or passthrough disk) to the guest. The guest can see that disk either as a virtual ATA device on a virtual IDE controller or as a virtual SCSI disk device on a virtual SCSI controller. Note that you do not have to expose the device to the guest in the same way it is exposed to the host. For instance, a VHD file on a physical IDE disk on the host can be exposed as a virtual SCSI disk on the guest. A physical SAS disk on the host can be exposed as a virtual IDE disk on the guest.
The main decision criterion here should be the capabilities you are looking for on the guest. You can only have up to 4 virtual IDE disks on the guest (2 controllers with 2 disks each), but they are the only type of disk that the virtualized BIOS will boot from. You can have up to 256 virtual SCSI disks on the guest (4 controllers with 64 disks each), but you cannot boot from them and you will need an OS with Integration Components. Virtual IDE disks will perform at the same level as virtual SCSI disks after you load the Integration Components in the guest OS, since they leverage the same optimizations.
You must use SCSI if you need to expose more than 4 virtual disks to your guest. You must use IDE if your guest needs to boot from that virtual disk or if there are no Integration Components for the guest OS. You can also use both IDE and SCSI with the same guest.
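The controller math above is easy to encode. Here is a minimal Python sketch (my own validation helper, not a Hyper-V API) that checks a planned guest disk layout against those limits and the boot rule:

```python
# A minimal sketch of the controller math above (my own helper, not a
# Hyper-V API): validate a planned guest disk layout against the virtual
# IDE and SCSI limits and the "boot from IDE only" rule.
IDE_MAX = 2 * 2     # 2 virtual IDE controllers x 2 disks each = 4
SCSI_MAX = 4 * 64   # 4 virtual SCSI controllers x 64 disks each = 256

def validate_layout(ide_disks, scsi_disks, boot_on_ide):
    problems = []
    if ide_disks > IDE_MAX:
        problems.append(f"too many IDE disks ({ide_disks} > {IDE_MAX})")
    if scsi_disks > SCSI_MAX:
        problems.append(f"too many SCSI disks ({scsi_disks} > {SCSI_MAX})")
    if not boot_on_ide:
        problems.append("the virtualized BIOS only boots from a virtual IDE disk")
    return problems

print(validate_layout(ide_disks=1, scsi_disks=10, boot_on_ide=True))    # []
print(validate_layout(ide_disks=5, scsi_disks=300, boot_on_ide=False))  # three problems
```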
iSCSI directly to guests
One additional option is to expose disks directly to the guest OS (without ever exposing them to the host) by using iSCSI. All you need to do is load an iSCSI initiator in the guest OS (Windows Server 2008 already includes one) and configure your target correctly. Hyper-V’s virtual BIOS does not support booting from iSCSI directly, so you will still need at least one disk available to the guest as an IDE disk so you can boot from it. However, all your other disks can be iSCSI LUNs.
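If you want to script the in-guest configuration, the Microsoft iSCSI initiator includes a command-line tool called iscsicli. Here is a minimal Python sketch that drives it; the portal address and target IQN are placeholders, not real values:

```python
# A minimal sketch: configure the in-guest Microsoft iSCSI initiator from a
# script by driving the iscsicli command-line tool. The portal address and
# target IQN are placeholders you would replace with your own SAN's values.
import subprocess

PORTAL = "192.168.1.50"                               # example iSCSI target portal
TARGET = "iqn.1991-05.com.microsoft:storage-target1"  # example target IQN

def run(args):
    print(">", " ".join(args))
    subprocess.run(args, check=True)

run(["iscsicli", "AddTargetPortal", PORTAL, "3260"])  # register the target portal
run(["iscsicli", "ListTargets"])                      # discover available targets
run(["iscsicli", "QLoginTarget", TARGET])             # log in to the data LUN's target
```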
There are also third-party solutions that will allow a Hyper-V guest to boot from an iSCSI LUN exposed directly to the guest. You can check out a product from EmBoot called WinBoot/i, which does exactly that, at https://www.emboot.com.
Moving disks between hosts
Another common usage scenario in virtualization is moving a virtual machine from one host to another. You will typically shut down the guest (or pause it), move the storage resources and then bring the VM up in the new host (or resume it).
The “move the storage” part is easier to imagine if you are using VHD files for guest disks. You simply copy the files from host to host. If you’re using physical disks (let’s say, SAS drives that are passthrough disks exposed as IDE disks to the guest), you can physically move the disk to another host. If this is a LUN on a SAN, you would need to reconfigure the SAN to mask the LUN to the old host and unmask it to the new host. You might want to use a technology called NPIV to use “virtual” WWNs for a set of LUNs, so you can move them between hosts without the need to reconfigure the SAN itself. This would be the equivalent of using multiple iSCSI targets for the same Hyper-V host and reconfiguring the targets to show up on the other host. If you use iSCSI directly exposed to the guest, those iSCSI data LUNs will just move with the guest, assuming the guest continues to have a network path to the iSCSI target and that you used one of the other methods to move the VM configuration and boot disk.
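For the VHD case, the copy step can be as simple as a file copy once the guest is shut down. Here is a minimal Python sketch; the paths and host name are just examples:

```python
# A minimal sketch of the "copy the files from host to host" step for a
# VHD-based guest. The paths and host name are examples, and the guest is
# assumed to be shut down before the copy starts.
import shutil
from pathlib import Path

SOURCE_DIR = Path(r"D:\VMs\guest1")            # VHD folder on the old host
DEST_DIR = Path(r"\\newhost\D$\VMs\guest1")    # admin share on the new host

DEST_DIR.mkdir(parents=True, exist_ok=True)
for vhd in SOURCE_DIR.glob("*.vhd"):
    print(f"copying {vhd.name} ...")
    shutil.copy2(vhd, DEST_DIR / vhd.name)     # copy2 preserves timestamps
```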
Windows Server 2008 is also a lot smarter about using LUNs on a SAN, so you might consider exposing LUNs to multiple Hyper-V hosts and onlining them as required, as long as you don't access them simultaneously from multiple hosts.
Keep in mind that, although I am talking about doing this manually, you will typically automate the process. Windows Server Failover Clustering and System Center Virtual Machine Manager (VMM) can make some of those things happen automatically. In some scenarios, the whole move can happen in just seconds (assuming you are pausing/resuming the VM and the disks are on a SAN). However, there is no option today to have a robot physically move disks from one host to another :-).
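If you do want to script pieces of this yourself rather than rely on Failover Clustering or VMM, Hyper-V exposes a WMI provider in the root\virtualization namespace. Here is a minimal Python sketch (assuming the third-party wmi package is installed; any WMI client would work) that simply lists the guests on a host and their state:

```python
# A minimal sketch (assumes the third-party Python "wmi" package is
# installed): list the virtual machines on a Hyper-V host through the
# WMI provider in the root\virtualization namespace.
import wmi

conn = wmi.WMI(namespace=r"root\virtualization")
for system in conn.Msvm_ComputerSystem():
    # The host itself is also returned as an Msvm_ComputerSystem; skip it.
    if system.Caption == "Virtual Machine":
        # EnabledState is a numeric code (for example, 2 = running, 3 = off)
        print(system.ElementName, system.EnabledState)
```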
A few tables
Since there are lots of different choices and options, I put together a few tables describing the scenarios. They will help you verify the many options you have and what features are available in each scenario.
Table 1
| | VHD on host volume | Passthrough disk on host | Directly to guest |
|---|---|---|---|
| DAS (SAS, SATA) | X | X | |
| FC SAN | X | X | |
| iSCSI SAN | X | X | X |
Table 2
| | DAS or SAN on host, VHD or passthrough disk on host, exposed to guest as IDE | DAS or SAN on host, VHD or passthrough disk on host, exposed to guest as SCSI | Not exposed to host, exposed to guest as iSCSI LUN |
|---|---|---|---|
| Guest boot from disk | Yes | No | No |
| Additional sw on guest | Integration Components (optional) | Integration Components | iSCSI initiator |
| Guest sees disk as | Virtual HD ATA Device | Msft Virtual Disk SCSI Disk Device | MSFT Virtual HD SCSI Disk Device |
| Guest max disks | 2 x 2 = 4 disks | 4 x 64 = 256 disks | Not limited by Hyper-V |
| Guest hot add disk | No | No | Yes |
| Guest hw snap on SAN | No | No | Yes |
Table 3
| Scenario | 1. IDE VHD Local | 2. SCSI VHD Local | 3. IDE Passthrough Local | 4. SCSI Passthrough Local | 5. IDE VHD Remote | 6. SCSI VHD Remote | 7. IDE Passthrough Remote | 8. SCSI Passthrough Remote | 9. Guest iSCSI |
|---|---|---|---|---|---|---|---|---|---|
| Storage type | DAS | DAS | DAS | DAS | SAN, FC/iSCSI | SAN, FC/iSCSI | SAN, FC/iSCSI | SAN, FC/iSCSI | SAN, iSCSI |
| Exposed to host as | VHD on NTFS | VHD on NTFS | Passthrough disk | Passthrough disk | VHD on NTFS | VHD on NTFS | Passthrough disk | Passthrough disk | Not exposed |
| Exposed to guest as | IDE | SCSI | IDE | SCSI | IDE | SCSI | IDE | SCSI | iSCSI LUN |
| Guest driver is “synthetic” | No (a) | Yes | No (a) | Yes | No (a) | Yes | No (a) | Yes | No (b) |
| Guest boot from disk | Yes | No | Yes | No | Yes | No | Yes | No | No (i) |
| Guest max disks | 4 | 256 | 4 | 256 | 4 | 256 | 4 | 256 | (j) |
| Guest max disk size | ~2 TB (c) | ~2 TB (c) | Limit imposed by guest (d) | Limit imposed by guest (d) | ~2 TB (c) | ~2 TB (c) | Limit imposed by guest (d) (e) | Limit imposed by guest (d) (e) | (d) (e) |
| Hyper-V VHD snapshots | Yes | Yes | No | No | Yes | Yes | No | No | No |
| Dynamically expanding VHD | Yes | Yes | No | No | Yes | Yes | No | No | No |
| Differencing VHD | Yes | Yes | No | No | Yes | Yes | No | No | No |
| Guest hot add disk | No | No | No | No | No | No | No | No | Yes |
| SCSI-3 PR for guests on two hosts (WSFC) | No | No | No | No | No | No | No | No | Yes |
| Guest hardware snapshot on SAN | N/A | N/A | N/A | N/A | No | No | No | No | Yes |
| P2V migration without moving SAN data | N/A | N/A | N/A | N/A | No | No | Yes (f) | Yes (f) | Yes (g) |
| VM migration without moving SAN data | N/A | N/A | N/A | N/A | Yes (h) | Yes (h) | Yes (f) | Yes (f) | Yes (g) |
(a) Works as legacy IDE but will perform better if Integration Components are present.
(b) Works as legacy network but will perform better if Integration Components are present.
(c) Hyper-V maximum VHD size is 2040 GB (8 GB short of 2 TB).
(d) Not limited by Hyper-V. NTFS maximum volume size is 256 TB.
(e) Microsoft iSCSI Software Target maximum VHD size is 16 TB.
(f) Requires SAN reconfiguration or NPIV support, unless using a failover cluster.
(g) For data volumes only (cannot be used for boot/system disks).
(h) Requires SAN reconfiguration or NPIV support, unless using a failover cluster. All VHDs on the same LUN must be moved together.
(i) Requires third-party product like WinBoot/i from EmBoot.
(j) Not limited by Hyper-V.
References
https://blogs.msdn.com/tvoellm/archive/2008/01/02/hyper-v-scsi-vs-ide-do-you-really-need-an-ide-and-scsi-drive-for-best-performance.aspx
https://blogs.technet.com/jhoward/archive/2007/10/04/boot-from-scsi-in-virtual-server-vs-boot-from-ide-in-windows-server-virtualization.aspx
Screenshots
Screenshot of settings for scenario 2 in table 3 (VHD exposed as SCSI):
Screenshot of settings for scenario 7 in table 3 (iSCSI LUN passthrough exposed as IDE, which your guest can boot from):
Updated on 03/30/2008 to reflect the change to 256 (4x64) virtual SCSI disks with the release of the Hyper-V RC.
Updated on 03/06/2008 with additional details on iSCSI boot on guest. Check details at https://blogs.technet.com/josebda/archive/2008/03/06/more-on-storage-options-for-windows-server-2008-s-hyper-v.aspx.
Updated on 04/27/2008 to include titles for scenarios on Table 3 as suggested by Jeff Woolsey.
Updated on 05/09/2008 to include information about VHD snapshots, dynamically expanding VHDs and differencing VHDs.
Comments
- Anonymous
January 01, 2003
Actually, you can boot a Hyper-V virtual machine off of iSCSI by assigning the iSCSI LUN to the parent partition and then directly attaching it to the virtual machine. Cheers, Ben
- Anonymous
January 01, 2003
Hyper-V does indeed support iSCSI boot, both for the hypervisor-on-2008 and for its guest VMs. We've tested both successfully. For our tests on guest VMs, we first added a legacy NIC (DEC 21140). We then installed our winBoot/i software as per our normal procedure, copied the VM's contents up to an iSCSI LUN, set the BIOS to boot from network and it worked. Booting a Hyper-V-on-2008 was a bit trickier - the Hyper-V installation adds bindings to an existing NIC that can get in the way of iSCSI boot. As per our past experience with this on Virtual Server 2005 R2 (see http://65.93.237.220/forum/forums/thread-view.asp?tid=203&posts=1), the trick is to use a 2nd NIC bound to Hyper-V and then set the boot NIC unbound from Hyper-V. We then installed our winBoot/i v2.5 beta client on this, SystemCopy up to iSCSI SAN, and we then booted successfully from iSCSI with VMs still working under Hyper-V. Screenshots and documentation details will be added to our website support forums within a few days. Steve Marfisi, emBoot Inc.
- Anonymous
January 01, 2003
Is this statement correct? "You can only have up to 4 virtual IDE disks on the guest (2 controllers with 2 disks each), but they are the only types of disk that the virtualized BIOS will boot from." I currently have a pile of Virtual Server VMs that all have virtual SCSI adapters and virtual SCSI drives attached and they all boot fine.