Virtualization
Automating Virtual Machine Host Deployment
Fergus Strachan
At a Glance:
- Hyper-V deployment and automation
- Resolve Sysprep generalization issues
- Provision virtual resources
- Configure unattended deployment of guest OS and server technologies
Code download available at: StrachanHyperV2009_02.exe (1,026 KB)
Contents
Deployment Architecture
WDS-Based Hyper-V Deployment
WMI-Based Hyper-V Configuration
Virtual Lab Deployment
Server Application Deployment
Conclusion
You are probably familiar with many of the good reasons to set up a Hyper-V environment, but one that may be of particular interest is how Hyper-V can facilitate product evaluations and training in test labs and learning environments without sacrificing 64-bit compatibility. Hyper-V runs even on entry-level 64-bit hardware, provided the CPU supports hardware-assisted virtualization (Intel VT or AMD-V) and the BIOS is recent enough to enable it. This makes it a blast to deploy full-scale test environments based on fully supported software versions, such as the 64-bit version of Microsoft Exchange Server 2007. And once you have set up the deployment, it is easy to redeploy anytime you need to start from scratch, whether to look at a new product or start a new class session.
Even a modest test environment for a customer, with two domain controllers (DCs), a computer running SQL Server, two SharePoint front-end servers, an Exchange 2007 Mailbox server, a Hub Transport server, and a Client Access server, takes a lot of effort to deploy. Now suppose you have a much larger environment, with perhaps 600 virtual machines (VMs). Can you imagine reinstalling these VMs every week or whenever a new lab environment is needed? It is imperative to automate such deployments, and this is where Hyper-V can make a difference.
Hyper-V is Windows technology, and you can combine it with Windows Management Instrumentation (WMI), Windows PowerShell, Windows Deployment Services (WDS), Windows Automated Installation Kit (AIK), and Windows Preinstallation Environment (Windows PE) 2.0 to enable full-scale deployments in the blink of an eye, or at least without requiring much attention. You might find it interesting to look at the installation screens and progress bars while the systems deploy and configure themselves, but you don't have to do this if you have more important things on your list.
In this article, I show you how to deploy Hyper-V servers, VMs, guest operating systems, and server applications without any administrator interaction by using WDS, custom install images, unattend.xml files, and WMI scripts. The idea is to preconfigure a WDS environment once and then install the test systems whenever the need arises, such as when reinstalling a teaching environment, troubleshooting complicated problems in different configurations, and developing and testing custom solutions.
The only required interaction during the deployment is to press the F12 key to launch the Preboot eXecution Environment (PXE), and you can eliminate even this step if you use the Startrom.n12 instead of the default Startrom.com boot file in your WDS configuration, as explained in the TechNet article "Fully Automated Installation Design Background."
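On a Windows Server 2008 WDS server, you make this change with WDSUTIL. The following command is a sketch for x64 clients; verify the boot program path against your own WDS installation before relying on it:

wdsutil /set-server /bootprogram:boot\x64\startrom.n12 /architecture:x64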
Once you have configured the Hyper-V VMs to start automatically, the remaining tasks are in the hands of WDS, AIK, and WMI. You will find the configuration files and scripts in the companion material, available from the February 2009 Code Downloads section of the TechNet Magazine Web site at technet.microsoft.com/magazine/cc135868. The actual install images are not included because they are too large, but you should be able to adapt the companion files to your own lab environment.
Deployment Architecture
At the core of my lab deployment infrastructure is a WDS server running Active Directory Domain Services (AD DS), Domain Name System (DNS), Dynamic Host Configuration Protocol (DHCP), and, of course, WDS. For administrative convenience, I also installed AIK and the Hyper-V tools for remote administration on this server. That's all it takes to get ready for efficient Hyper-V deployment. You don't need more, though you can introduce additional WDS servers if high availability through redundancy is important to you. The remaining physical computers are Hyper-V servers, deployed through WDS and hosting the VMs that form the actual test environment, as illustrated in Figure 1.
Figure 1 A full-scale lab environment based on Hyper-V and virtual machines
For instructions on deploying the WDS server, check out the companion worksheet "Deploying Windows Deployment Services" at the same download site mentioned earlier. As you'll see, the installation is straightforward. It's the deployment and configuration of the Hyper-V hosts that's challenging, but more about this in a moment.
WDS-Based Hyper-V Deployment
One of the advantages of using WDS for Hyper-V deployments is that WDS simplifies updating the Windows Server 2008 installation media, which is necessary because the original media includes only a pre-release version of Hyper-V. The release version is available as a separate update on the Microsoft Download Center.
In a nutshell, here are the steps you take:
- Deploy Windows Server 2008 on a reference computer.
- Update the installation with the latest Hyper-V files and install Hyper-V.
- Use Sysprep.exe to generalize the installation.
- Capture the generalized install image and upload it to the WDS server.
- Automate the default Hyper-V deployment for all hosts in the physical infrastructure.
I prefer to use Windows Server 2008 Server Core for Hyper-V because my Hyper-V servers are dedicated to hosting VMs, and Server Core offers a small operating system footprint as well as security, reliability, and manageability advantages. And for capturing install images, I use WDS, of course. Check out the companion worksheet "Deploying Default Hyper-V Hosts by Using Windows Deployment Services" to see how easy it is to create, upload, and use an updated Hyper-V install image. It couldn't be simpler.
So far, so good; the default Hyper-V deployment based on WDS is easy, but the unattended configuration is not without challenges. The issue is that you need to run Sysprep.exe to generalize the reference installation prior to capturing and uploading the install image, yet Sysprep.exe removes vital configuration information from the generalized Hyper-V image.
Among other things, Sysprep.exe generalizes the Boot Configuration Data (BCD) and removes the hypervisor launch directive from the BCD store. BCD is supposed to be firmware independent, but that's not the case with Hyper-V. The hypervisor depends on the virtualization capabilities of the underlying hardware and BIOS, so the hypervisor launch directive must go in order to generalize the install image. It is possible to modify the BCD store offline after the Sysprep generalization, but this isn't a solution.
If you mount the install image by using ImageX.exe (a tool included in the AIK), you can re-enter the launch directive by using BCDEdit.exe; however, Windows Setup removes this directive again during the generalization phase of the actual installation routine. You are back at the beginning.
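For reference, the offline modification would look something like the following; the image path, image index, and mount directory are placeholders, and as just noted, the directive does not survive the generalization phase of Setup:

imagex /mountrw D:\Images\HyperV.wim 1 C:\Mount
bcdedit /store C:\Mount\Boot\BCD /set {default} hypervisorlaunchtype auto
imagex /unmount /commit C:\Mount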
It's a tricky situation because you need the launch directive to start the hypervisor, and without a running hypervisor, the Hyper-V server is not functional. Figure 2 shows the error message displayed when you try to start a VM on a Hyper-V server that was deployed by using a custom install image without adjusting the boot configuration.
Figure 2 Unable to start virtual machines because the hypervisor is not running
One way to re-enter the hypervisor launch directive is to add it manually after the server installation by running the command
bcdedit /set hypervisorlaunchtype auto
and restarting the Hyper-V server, but this manual step would be a serious roadblock to a fully unattended test lab deployment. Fortunately, the AIK includes Windows System Image Manager, which you can use to create an unattend.xml file for the install image that WDS applies during the installation in addition to its own WDSClientUnattend.xml file. In this unattend.xml file, you can specify that Setup should automatically log on to Windows with administrative credentials supplied by the WDS client and then run a script that inserts the hypervisor launch directive back into the BCD store and then restarts the server.
Figure 3 illustrates the general approach, and the companion material includes the full version of the unattend.xml file as well as a complete hypervconfig.vbs script. You can include the hypervconfig.vbs script directly in the install image so that it is available during the installation. You simply need to mount the image with ImageX.exe, as outlined in the companion worksheet "Customizing Hyper-V Deployments."
Figure 3 Reconfiguring and starting the hypervisor
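In script form, the core of this logic is tiny. The following sketch is illustrative rather than an excerpt from the companion hypervconfig.vbs script; it simply shells out to BCDEdit and then reboots:

Set shell = CreateObject("WScript.Shell")
' Re-insert the hypervisor launch directive that Sysprep removed.
shell.Run "bcdedit /set hypervisorlaunchtype auto", 0, True
' Restart so that the boot loader launches the hypervisor.
shell.Run "shutdown /r /t 0", 0, True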
WMI-Based Hyper-V Configuration
Re-enabling the hypervisor isn't too difficult, but if you analyze my hypervconfig.vbs script, you will find that it contains somewhat more than these five simple lines of code. In addition to the launch directive, you must configure the entire Hyper-V environment, and this is the challenging part of the deployment story.
You cannot simply create VMs on the reference system before image capture, include them in the install image, and then expect everything to work after fixing the hypervisor launch directive. The server will have the VMs, of course, but the hardware dependencies are missing.
The image generalization disconnects the Ethernet ports of your VMs from the physical network interface cards (NICs) and detaches pass-through drives from the underlying hard disks and CD/DVD devices. You could skip the generalization, but it's not a good idea to include preinstalled VMs in an install image: preinstalled VMs inflate the image tremendously, evaluation licenses of deployed test servers eventually expire, and Active Directory domains don't like to be taken offline for extended periods of time either. If you restore a lab environment by using backups of VMs installed several months ago, you have a fair chance of experiencing Active Directory authentication and replication issues. It's better to start from scratch each time.
So let's provision VMs and the associated resources, such as NICs, hard disks, and DVD drives, in the Hyper-V environment before tackling the actual deployment of the test lab. As you might have guessed, provisioning these virtual resources is the main task of the hypervconfig.vbs script.
The approach is fairly straightforward. The script determines the name of the local Hyper-V server and then configures a host-specific set of VMs. Each VM receives two virtual DVD drives mapped to a server-specific .iso file and a general installation .iso file. The server-specific .iso file corresponds to the boot DVD. It includes all the necessary scripts and configuration files for an unattended installation of a particular lab server.
The general installation file provides the actual installation media. Sharing the general .iso file among all VMs on a server helps to keep the size of the Hyper-V install image under control. You could place the .iso files on a network server, but one way or another you must copy the files to the Hyper-V server for installation, so I decided to include them directly in the install image. This way, the .iso files are available locally whenever the need arises, which can be useful, for example, when installing additional components or reinstalling a particular VM without tearing down the entire lab environment.
I'll discuss the server-specific installation DVDs in a moment. First, I'll focus on the configuration of the Hyper-V infrastructure by means of a WMI-based script. As illustrated in Figure 4, there are a variety of virtual resources that you must provision, including a virtual switch with internal and external switch ports, the VMs themselves with their virtual Ethernet cards, virtual IDE drives connected to virtual hard disk (VHD) files, and the virtual DVD drives connected to the .iso files for the installation of guest operating systems and server applications.
Figure 4 Provisioning virtual resources for the lab environment
You also need to adjust the auto-start configuration of the VMs and change the boot order to use the virtual disk drive first, followed by the virtual DVD drive that is connected to the server-specific .iso file. In this configuration, the VMs boot from the installation DVD until the OS is installed on the virtual hard disk. I'm sure you are familiar with this sequence because it is standard for any modern personal computer.
The hypervconfig.vbs script configures the VMs to start automatically when the physical computer starts, so the VMs come online after the reboot that re-enables the hypervisor launch type, and that's how the lab installation commences. The VMs eventually boot into the Setup routines of their guest operating systems. This is the key to a fully unattended lab deployment.
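Both settings live on the Msvm_VirtualSystemGlobalSettingData object associated with each VM. The following VBScript sketch, which is not taken from hypervconfig.vbs, shows the idea for a VM named DC01; the property values follow the v1 Virtualization WMI Provider documentation, and error handling is omitted for brevity:

Set wmi = GetObject("winmgmts:\\.\root\virtualization")
Set vm = wmi.ExecQuery("SELECT * FROM Msvm_ComputerSystem WHERE ElementName = 'DC01'").ItemIndex(0)
Set gsd = vm.Associators_("Msvm_ElementSettingData", "Msvm_VirtualSystemGlobalSettingData").ItemIndex(0)

gsd.AutomaticStartupAction = 2     ' 2 = always start this VM when the host starts
gsd.BootOrder = Array(2, 1, 3, 0)  ' 2 = IDE hard drive, 1 = CD/DVD, 3 = PXE, 0 = floppy

Set svc = wmi.ExecQuery("SELECT * FROM Msvm_VirtualSystemManagementService").ItemIndex(0)
Set inp = svc.Methods_("ModifyVirtualSystem").InParameters.SpawnInstance_()
inp.ComputerSystem = vm.Path_.Path
inp.SystemSettingData = gsd.GetText_(1)  ' pass the modified settings as embedded-instance XML
Set outp = svc.ExecMethod_("ModifyVirtualSystem", inp)
WScript.Echo "ModifyVirtualSystem returned " & outp.ReturnValue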
For the most part, the VM configuration follows the same principles you take into consideration when configuring physical computers with multiple drives connected to multiple IDE controllers. The virtual switch, however, requires further explanation because it is the key to enabling communication between VMs on the same Hyper-V server and between VMs on separate servers over the computer network. Essentially, you can compare a virtual switch to its physical counterpart. You create it by calling the CreateSwitch method, but a switch without ports is not very useful.
In order to connect the switch to a physical network, you must create a switch port by calling the CreateSwitchPort method and associate this port with an available Ethernet network card on the server. You can connect a physical network card to only one virtual switch, but you can have multiple switches connected to each other directly or through VMs running router software. For the purposes of this article, however, a basic LAN environment without network routers is sufficient, so I configured a single virtual switch on each Hyper-V server connected to the first available physical Ethernet card.
You also must connect the VMs to the virtual switch. Again, you must create a separate switch port for each VM by calling CreateSwitchPort. You can then associate each switch port with a VM's virtual network adapter. Don't forget to connect the parent partition to the virtual switch as well if you want to provide external network connectivity. You can accomplish this task conveniently by calling the SetupSwitch method, which expects an external and internal switch port, a reference to an available physical Ethernet card, and a unique device name and a display name as parameters.
By calling the SetupSwitch method, you convert the virtual switch from a private switch to an external switch, as demonstrated in the hypervconfig.vbs script in the companion material. The script contains all the details to set up external network connectivity for VMs. For more information, read the Virtualization WMI Provider documentation on MSDN. Substantial parts of my hypervconfig.vbs script are based on the samples available under "Using the Virtualization WMI Provider."
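To make the sequence concrete, here is a minimal VBScript sketch of these three calls against the root\virtualization namespace. It is an illustration rather than an excerpt from hypervconfig.vbs: the switch and port names are placeholders, and production code should check the ReturnValue of every call (0 indicates success; 4096 indicates that a job was started):

Set wmi = GetObject("winmgmts:\\.\root\virtualization")
Set svc = wmi.ExecQuery("SELECT * FROM Msvm_VirtualSwitchManagementService").ItemIndex(0)

' Create the virtual switch itself.
Set inp = svc.Methods_("CreateSwitch").InParameters.SpawnInstance_()
inp.Name = "LabSwitch"                   ' unique device name (placeholder)
inp.FriendlyName = "Lab Virtual Switch"  ' display name (placeholder)
inp.NumLearnableAddresses = 1024
Set outp = svc.ExecMethod_("CreateSwitch", inp)
Set vswitch = wmi.Get(outp.CreatedVirtualSwitch)

' Create the external and internal switch ports.
Set inp = svc.Methods_("CreateSwitchPort").InParameters.SpawnInstance_()
inp.VirtualSwitch = vswitch.Path_.Path
inp.Name = "ExternalPort" : inp.FriendlyName = "External Port"
Set outp = svc.ExecMethod_("CreateSwitchPort", inp)
extPort = outp.CreatedSwitchPort

Set inp = svc.Methods_("CreateSwitchPort").InParameters.SpawnInstance_()
inp.VirtualSwitch = vswitch.Path_.Path
inp.Name = "InternalPort" : inp.FriendlyName = "Internal Port"
Set outp = svc.ExecMethod_("CreateSwitchPort", inp)
intPort = outp.CreatedSwitchPort

' Find the first physical NIC that is not yet bound to a virtual switch.
Set nic = wmi.ExecQuery("SELECT * FROM Msvm_ExternalEthernetPort WHERE IsBound = FALSE").ItemIndex(0)

' Convert the private switch into an external switch.
Set inp = svc.Methods_("SetupSwitch").InParameters.SpawnInstance_()
inp.ExternalSwitchPort = extPort
inp.InternalSwitchPort = intPort
inp.ExternalEthernetPort = nic.Path_.Path
inp.InternalEthernetPortName = "InternalEthernetPort"          ' unique device name (placeholder)
inp.InternalEthernetPortFriendlyName = "Parent Partition NIC"  ' display name (placeholder)
Set outp = svc.ExecMethod_("SetupSwitch", inp)
WScript.Echo "SetupSwitch returned " & outp.ReturnValue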
Virtual Lab Deployment
Now that the Hyper-V deployment is accomplished and the VMs boot automatically after each system restart, I can turn my attention to the actual deployment of the lab environment. For training centers, it probably suffices to deploy the virtual network infrastructure and guest operating systems and let the trainees deploy the remaining server applications later on. But for development, test, and evaluation purposes it is better to automate the entire deployment of the lab environment.
The overall approach is similar to the Hyper-V deployment: following the unattended installation of the OS, the Administrator account logs on automatically and runs any additional setup commands. This time, however, you must also orchestrate the deployment sequence.
All VMs boot into their setup routines at practically the same time, yet some servers depend on others, so it isn't possible to perform all installations concurrently. For example, you must install AD DS before you can add any other servers to the domain, Exchange Server 2007 also requires AD DS, SharePoint server farms require SQL Server, and so forth. The only VM in this scenario that can run Windows Setup immediately is DC01.Litware.com. All other VMs must wait until the DC is up and running.
There are several ways to implement an installation sequence. You can configure a boot delay for VMs, but this technique is notoriously unreliable. Are you willing to bet that the Active Directory installation always completes within 15 minutes? And how long does it take to install the first Exchange Server after that?
Another possibility is a WMI-based script to switch on VMs when the installation prerequisites allow it. This is a better alternative, yet it requires you to coordinate centralized script execution with distributed VM deployment. It's less complicated to customize each individual setup routine and let the VMs check any given installation prerequisites themselves before initiating their Windows Setup routines, as shown in Figure 5.
Figure 5 Implementing a deployment sequence based on installation prerequisites
Windows PE enables us to implement these custom setup routines. It is a minimal Win32 operating system with limited services, but it supports Windows Script Host (WScript), WMI, and Microsoft Data Access Components (MDAC). You need only create a customized Windows PE image, add the required Windows feature packages, include a custom script, and then edit the Startnet.cmd file located in %SYSTEMROOT%\System32 of the Windows PE image to run the custom script.
The companion worksheet "Creating Custom Boot Images for Server Deployments" outlines how to create a customized Windows PE image for each server in the test lab environment. Figure 6 shows you how to use this technique to orchestrate the deployment of a second DC.
Figure 6 Orchestrated deployment of a second domain controller in a test lab
The Startnet.cmd file includes a netsh command to assign a static IP address to the network interface of the VM and then calls the StartSetup script. The netsh command is not strictly necessary in a DHCP-enabled environment, yet it helps to highlight network-related errors. For example, if you provision a standard network card (Microsoft Synthetic Ethernet Port) for your VM in your Hyper-V configuration script instead of a legacy network card (Microsoft Emulated Ethernet Port), the netsh command will inform you that Windows PE can't recognize the NIC.
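A Startnet.cmd along these lines would do the job. The interface name, addresses, and script location below are placeholders for illustration; the companion worksheet shows the actual files:

wpeinit
netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.102 255.255.255.0 192.168.1.1
cscript //B X:\StartSetup.vbs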
The StartSetup script, however, does not report this issue when attempting to access network resources, because its On Error Resume Next statement lets the script survive runtime errors silently. If DC01 is unavailable for any reason, the connection attempts fail and the script simply continues to loop. The loop ends only if a connection attempt succeeds and DC01 is a global catalog server, which implies that AD DS has been installed.
When the loop ends, the script calls the actual Setup command, specifying an unattend.xml file with server-specific configuration settings. The diagram in Figure 6 illustrates how to wait for a global catalog server to come online, but the same principle is also applicable in other scenarios, such as when checking the availability of file shares or SQL Server databases. Simply attempt to access the resource and exit the loop if the attempt succeeds.
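The following VBScript sketch shows the shape of such a wait-and-install loop. It is not the companion StartSetup script: where that script verifies that DC01 is a global catalog server, this simplified stand-in merely waits for the SYSVOL share, which appears once AD DS is installed, and the D: drive letter and file names are placeholders for the server-specific installation DVD:

On Error Resume Next  ' tolerate failures while DC01 is still unavailable
Set fso = CreateObject("Scripting.FileSystemObject")
Set shell = CreateObject("WScript.Shell")

ready = False
Do Until ready
    Err.Clear
    ' SYSVOL becomes reachable once AD DS is up on the first DC.
    If fso.FolderExists("\\dc01.litware.com\SYSVOL") Then ready = True
    If Not ready Then WScript.Sleep 30000  ' wait 30 seconds, then retry
Loop

' Prerequisite satisfied; launch the unattended installation.
shell.Run "D:\setup.exe /unattend:D:\unattend.xml", 1, True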
Hyper-V Resources
Windows Server 2008 Hyper-V Web site
Windows Virtualization Team Blog
Windows Deployment Services Guide
Windows Automated Installation Kit
Unattended Windows Setup Reference
Server Application Deployment
The only remaining task is to configure the unattend.xml file to add the server to the domain, configure TCP/IP settings, enable the Remote Desktop Protocol (RDP), and configure <FirstLogonCommands> for the installation of any desired server applications. Most Microsoft server applications support unattended deployments.
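In the unattend.xml file, these commands belong to the Microsoft-Windows-Shell-Setup component of the oobeSystem pass. The fragment below is illustrative only; the command line is a placeholder for whatever setup script your server-specific DVD carries, and the wcm prefix assumes the standard namespace declaration at the root of the answer file:

<FirstLogonCommands>
  <SynchronousCommand wcm:action="add">
    <Order>1</Order>
    <Description>Install server applications</Description>
    <CommandLine>cscript //B D:\Scripts\InstallApps.vbs</CommandLine>
  </SynchronousCommand>
</FirstLogonCommands>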
For AD DS, you must provide an answer file, as explained in the Microsoft Knowledge Base article "How to Use Unattended Mode to Install and Remove Active Directory Domain Services on Windows Server 2008-Based Domain Controllers." For Exchange Server 2007, you should use command-line parameters instead (see "How to Install Exchange 2007 in Unattended Mode" in the online help). For SQL Server 2008, you should follow the online help instructions outlined in the article "How to: Install SQL Server 2008 from Command Prompt." And for Windows SharePoint Services 3.0, pay attention to "Config.xml reference for Windows SharePoint Services."
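For example, a minimal dcpromo answer file for the first DC of a new forest might look like the following sketch, which you pass to Dcpromo.exe with dcpromo /unattend:<file>; the domain names follow the article's scenario, and the password is a placeholder:

[DCInstall]
ReplicaOrNewDomain=Domain
NewDomain=Forest
NewDomainDNSName=litware.com
DomainNetbiosName=LITWARE
InstallDNS=Yes
SafeModeAdminPassword=Pass@word1
RebootOnCompletion=Yes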
The requirements vary in complexity, but you can deploy these systems without any administrator interaction. The final task is to press the F12 key to engage the WDS-based deployment system.
Conclusion
Hyper-V is exciting technology. It is fully 64-bit compatible, so you don't need to deploy 32-bit software versions for evaluation or training purposes anymore if a 64-bit version is available. It is Windows technology, so you can take full advantage of WDS, AIK, and Windows PE for deployment. It supports WMI and Windows PowerShell through a Virtualization WMI Provider, which you can use to manage all aspects of the virtualized environment, including the provisioning of resources and VMs during the deployment process. It uses a hypervisor instead of a hosted Virtual Machine Monitor (VMM) to provide high performance and better scalability, and it is included in Windows Server 2008 at no extra cost.
Hyper-V-based environments are relatively uncomplicated to deploy. It only takes a few mouse clicks to get started with the first VMs, and in conjunction with Windows deployment technologies it is fun to automate even the most complex scenarios.
The only downside I can see is the online documentation of the Virtualization WMI Provider, which is still in its infancy, so the sample code does not cover all relevant tasks. The results, however, are well worth the effort. It is fun to see an IT environment deploy itself, even if it includes far fewer than 600 VMs.
Fergus Strachan is a London-based independent consultant specializing in Microsoft server infrastructure design and implementation for corporate customers in the UK. Fergus has written technical articles about Microsoft server technology and authored Integrating ISA Server 2006 with Microsoft Exchange 2007. He is also co-author of the Microsoft Exchange Server 2003 Resource Kit.