Deployment Roadmap for Windows HPC Server 2008 R2
Updated: March 2011
Applies To: Windows HPC Server 2008 R2
This topic provides a high-level roadmap for deploying Windows® HPC Server 2008 R2 in several common configurations, with links to detailed deployment guidance. Windows HPC Server 2008 R2 supports a wide range of HPC workloads and network environments. To meet your specific computing needs and environment, you can adapt or combine these common configurations.
For each cluster configuration, the following information is provided:
Features of the configuration
Example deployment options
Key deployment steps
Additional considerations
In this topic:
General prerequisites
Small HPC cluster: A small yet fully functional on-premises HPC cluster that is especially useful for pre-production and proof-of-concept deployments
Basic on-premises HPC cluster: A medium-size on-premises cluster that can run a variety of HPC jobs
SOA-enabled on-premises HPC cluster: A medium-size on-premises cluster that can run the full range of HPC jobs, including large service-oriented architecture (SOA) jobs
High availability cluster: An on-premises cluster that can scale to more than 1000 nodes and that enhances the availability of the head node and Windows Communication Foundation (WCF) broker node services for unscheduled and scheduled outages
Workstation cluster: An on-premises HPC cluster that is made up of Windows® 7 workstations that are not dedicated cluster nodes
Windows Azure cloud cluster: An HPC cluster that is made up of an on-premises head node and Windows Azure nodes in the cloud that can be added or removed as needed to change the capacity of the cluster
General prerequisites
For information about preparing and planning for an HPC cluster, see Prepare for Your Deployment in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=201563).
For each configuration shown in this topic, you will need the following, at a minimum:
A computer for the head node of the cluster and, in most cases, one or more computers for cluster nodes. The computers must meet the System Requirements for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=212269).
Optionally, one or more client computers on the enterprise network on which you can deploy the client utilities for Windows HPC Server 2008 R2 (HPC PowerShell, HPC Cluster Manager, and HPC Job Manager). These components allow remote management of the HPC cluster or job submission from client computers (see the example after this list). You can also run these utilities on the head node, where they are installed when you install HPC Pack 2008 R2.
Network switches, network adapters, and connections for the cluster nodes. If your HPC applications require an application network with high bandwidth and low latency, you may require specialized hardware. For example, to run certain message passing interface (MPI) jobs, you may want to consider using an InfiniBand network for your application network.
An existing Active Directory domain that the nodes of the HPC cluster will join. Generally, the domain controller for the domain is a separate computer on the enterprise network, but for a small HPC cluster in a test environment you can optionally install the Active Directory Domain Services role on the head node.
One or more user accounts in the Active Directory domain with sufficient permissions to deploy the head node and to add nodes to the HPC cluster.
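After the head node is deployed, the client utilities mentioned in this list can be used to confirm that the cluster is reachable from a client computer. The following is a minimal HPC PowerShell sketch; HEADNODE is a placeholder for your head node's computer name, and on the head node itself you can typically omit the -Scheduler parameter:

```powershell
# Load the HPC PowerShell snap-in (installed with the client utilities and with HPC Pack 2008 R2)
Add-PSSnapin Microsoft.HPC

# Summarize node, core, and job counts for the cluster
Get-HpcClusterOverview -Scheduler HEADNODE

# List the nodes that the cluster currently knows about, with their state and health
Get-HpcNode -Scheduler HEADNODE
```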
Small HPC cluster
Figure 1 A small Windows HPC Server cluster
Features of the sample configuration
Supports a small (around 5 nodes) on-premises cluster that can run and test a variety of HPC jobs, including parametric sweep, message passing interface (MPI), and task flow jobs.
Useful for pre-production and proof-of-concept deployments.
Can use the compute node role that is installed and enabled on the head node for additional computing power.
Can run small service-oriented architecture (SOA) jobs because of the Windows Communication Foundation (WCF) broker node role that is installed and enabled on the head node.
Can use but does not require a connection to the enterprise network infrastructure.
Adds preconfigured compute nodes to a private network.
Example deployment options
Item | Description
---|---
HPC Pack 2008 R2 edition | Enterprise edition
HPC databases | Installed with SQL Server 2008 Express edition on the head node (default)
Network adapters |
Network configuration |
Key deployment steps
Note: If you are new to Windows HPC and want the simplest path for setting up a small cluster, see DIY supercomputing: How to build a small Windows HPC cluster (https://go.microsoft.com/fwlink/?LinkId=214585).
Step | Reference
---|---
Deploy the head node | Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)
Configure the head node | Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)
Pre-configure the compute node computers | See the section “Pre-configure the compute nodes” in DIY supercomputing: How to build a small Windows HPC cluster (https://go.microsoft.com/fwlink/?LinkId=214585)
Add compute nodes, using the compute node template | Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkId=214588)
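After the compute nodes are added and brought online, a quick way to exercise the cluster is to submit a trivial job from HPC PowerShell. The following is a minimal sketch for a proof-of-concept check; the job name is illustrative:

```powershell
Add-PSSnapin Microsoft.HPC

# Create a job with a single task that reports the name of the node it ran on
$job = New-HpcJob -Name "Smoke test"
Add-HpcTask -Job $job -CommandLine "hostname"

# Submitting may prompt for the password of your cluster user account
Submit-HpcJob -Job $job

# Check the job's progress and final state
Get-HpcJob -Id $job.Id
```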
Additional considerations
These configuration and deployment options may not scale well for production deployments. For example, if you need to deploy a larger number of nodes, see Basic on-premises HPC cluster later in this topic.
Optionally, you can run Active Directory Domain Services on the head node instead of on a separate domain controller. However, this can adversely affect cluster performance.
If your computers each have additional network adapters, you can configure other network topologies, such as those with a dedicated application network.
If you want to use the head node as a compute node or a WCF broker node, ensure that you bring the head node online in HPC Cluster Manager.
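You can bring the head node online either in HPC Cluster Manager (in Node Management, use the Bring Online action) or from HPC PowerShell. A minimal sketch, where HEADNODE is a placeholder for the head node's computer name:

```powershell
Add-PSSnapin Microsoft.HPC

# Bring the head node online so that its compute node and WCF broker node roles can accept work
Set-HpcNodeState -Name HEADNODE -State Online
```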
Basic on-premises HPC cluster
Figure 2 A basic Windows HPC Server cluster
Features of the sample configuration
Supports a medium-size (up to 256 nodes) on-premises cluster that can run a variety of HPC jobs, including parametric sweep, message passing interface (MPI), and task flow jobs.
Can be used to run small service-oriented architecture (SOA) jobs because of the WCF broker node role that is installed and enabled on the head node. However, large SOA jobs may need additional broker nodes to be deployed in the HPC cluster. For more information, see SOA-enabled on-premises HPC cluster later in this topic.
Supports a cluster with more than 256 nodes, but if you do this, consider deploying the HPC databases on one or more servers running Microsoft SQL Server. This requires additional configuration steps.
Deploys nodes to the cluster from bare metal by using the Windows HPC Server 2008 R2 features to automatically install an operating system and the HPC Pack 2008 R2 components on the nodes, name them, and join them to the domain.
Supports additional node deployment options that are available in Windows HPC Server 2008 R2, depending on the network topology selected.
Example deployment options
Item | Description
---|---
HPC Pack 2008 R2 edition | Enterprise edition
HPC databases | Installed with SQL Server 2008 Express edition on the head node (default)
Network adapters |
Network configuration |
Key deployment steps
Step | Reference
---|---
Deploy the head node | Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)
Configure the head node | Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)
(Optional) Add drivers to the operating system image | Add Drivers for Operating System Images (https://go.microsoft.com/fwlink/?LinkId=214592)
Add compute nodes, using the compute node template | Deploy Nodes from Bare Metal (https://go.microsoft.com/fwlink/?LinkId=214594)
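HPC Cluster Manager shows per-node progress while bare-metal deployment runs, and the same information can be pulled from HPC PowerShell. A rough sketch, assuming the default ComputeNodes node group and the default behavior that newly provisioned nodes finish in the Offline state:

```powershell
Add-PSSnapin Microsoft.HPC

# Show the state and health of the nodes in the default ComputeNodes group
Get-HpcNode -GroupName ComputeNodes | Format-Table NetBiosName, NodeState, NodeHealth

# Nodes finish bare-metal provisioning in the Offline state; bring them online to accept jobs
Get-HpcNode -GroupName ComputeNodes -State Offline | Set-HpcNodeState -State Online
```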
Additional considerations
This configuration also supports the following node deployment methods:
Item | Reference
---|---
Add nodes on which Windows Server 2008 R2 and HPC Pack 2008 R2 are already installed | Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkID=214588)
Use an XML file that specifies attributes of the nodes that are added to the HPC cluster |
Deploy nodes from bare metal using an iSCSI connection to a network-attached storage array | Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=194674)
If you want to use the head node as a compute node or a WCF broker node, ensure that you bring the head node online in HPC Cluster Manager.
If you are deploying an InfiniBand application network with NetworkDirect support, you can deploy the InfiniBand device drivers at the same time that you deploy the nodes in your cluster. For more information, see Deploying InfiniBand Device Drivers with NetworkDirect Support in Windows HPC Server 2008 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=137227).
SOA-enabled on-premises HPC cluster
Figure 3 A Windows HPC Server cluster configured for SOA applications
Features of the sample configuration
Supports a medium-size (up to 256 nodes) on-premises cluster that can run a variety of parallel computing jobs, including parametric sweep, message passing interface (MPI), and task flow jobs, as well as service-oriented architecture (SOA) and Microsoft Excel calculation offloading jobs.
Supports a cluster with more than 256 nodes, but if you do this, consider deploying the HPC databases on one or more servers running Microsoft SQL Server. This requires additional configuration steps.
Supports communication of cluster nodes with SOA clients that are on the enterprise network.
Deploys additional broker nodes to the cluster to handle SOA jobs. Because a broker node role is installed by default on the head node, additional broker nodes may be necessary only for large SOA workloads.
Adds compute nodes to the cluster from bare metal by using the Windows HPC Server 2008 R2 features to automatically install an operating system and the HPC Pack 2008 R2 components on the nodes, name them, and join them to the domain.
Supports additional node deployment options that are available in Windows HPC Server 2008 R2, depending on the network topology selected.
Example deployment options
Item | Description
---|---
HPC Pack 2008 R2 edition | Enterprise edition
HPC databases | Installed with SQL Server 2008 Express edition on the head node (default)
Network adapters |
Network configuration |
Key deployment steps
Step | Reference
---|---
Deploy the head node | Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)
Configure the head node | Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)
(Optional) Add drivers to the operating system image | Add Drivers for Operating System Images (https://go.microsoft.com/fwlink/?LinkId=214592)
Deploy nodes to the cluster |
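After the nodes are deployed, you can confirm from HPC PowerShell which nodes carry the WCF broker node role before submitting SOA jobs. A small sketch, assuming the default WCFBrokerNodes node group name:

```powershell
Add-PSSnapin Microsoft.HPC

# List the WCF broker nodes and verify that they are online and healthy;
# WCFBrokerNodes is the default node group for the broker node role
Get-HpcNode -GroupName WCFBrokerNodes | Format-Table NetBiosName, NodeState, NodeHealth
```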
Additional considerations
This configuration also supports the following node deployment methods:
Item | Reference |
---|---|
Add nodes on which Windows Server 2008 R2 and HPC Pack 2008 R2 are already installed |
Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkID=214588) |
Use an XML file that specifies attributes of the nodes that are added to the HPC cluster |
|
Deploy nodes from bare metal using an iSCSI connection to a network-attached storage array |
Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=194674) |
If you want to use the head node as a compute node or a WCF broker node, ensure that you bring the head node online in HPC Cluster Manager.
If you are deploying an InfiniBand application network with NetworkDirect support, you can deploy the InfiniBand device drivers at the same time that you deploy the nodes in your cluster. For more information, see Deploying InfiniBand Device Drivers with NetworkDirect Support in Windows HPC Server 2008 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=137227).
High availability cluster
Figure 4 A Windows HPC Server cluster configured for high availability of the head node and WCF broker nodes
For detailed, step-by-step procedures for this configuration, see Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=198300).
Features of the sample configuration
Creates a cluster that can scale to more than 1000 nodes and that enhances the availability of the head node and WCF broker node services for unscheduled and scheduled outages.
Deploys the head node in the context of a preconfigured two-node failover cluster.
Deploys one or more WCF broker nodes, each in the context of a two-node failover cluster.
Is well suited to run a variety of parallel computing jobs, including parametric sweep, message passing interface (MPI), and task flow jobs, as well as SOA and Microsoft Excel calculation offloading jobs.
Deploys nodes to the cluster from bare metal by using the Windows HPC Server 2008 R2 features to automatically install an operating system and the HPC Pack 2008 R2 components, name them, and join them to the domain.
Supports additional node deployment options that are available in Windows HPC Server 2008 R2, depending on the network topology selected.
Example deployment options
Item | Description
---|---
HPC Pack 2008 R2 edition | Enterprise edition
HPC databases | Installed on one or more remote instances of SQL Server 2008 SP1 or later that are preconfigured for Windows HPC Server 2008 R2
Network adapters |
Network configuration |
Key deployment steps
Step | Reference
---|---
Install a supported operating system on the servers for the head nodes, the WCF broker nodes, and the remote servers for each SQL Server instance (if you will be installing SQL Server in a failover cluster) | Install Windows Server 2008 R2 on Multiple Servers, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201566)
Set up shared storage for the servers in each failover cluster |
Configure failover clustering and file services on the servers for the head node |
Install HPC Pack 2008 R2 on the first server that will run head node services | Install HPC Pack 2008 R2 on a Server that Will Run Head Node Services, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201570)
Configure the head node on the first server | Configure the Head Node on the First Server, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201571)
Install HPC Pack 2008 R2 and configure the head node on the second server that will run head node services | Install and Configure HPC Pack 2008 R2 on the Other Server that Will Run Head Node Services, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201572)
Install HPC Pack 2008 R2 on the WCF broker nodes, and then add the broker nodes to the HPC cluster | Create WCF Broker Nodes Running Windows HPC Server 2008 R2, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=201574)
Set up shared storage for the WCF broker nodes | Set Up Shared Storage for WCF Broker Nodes, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkID=214602)
Create failover clusters using the broker nodes | Create Failover Clusters Using WCF Broker Nodes, in Configuring Windows HPC Server 2008 R2 for High Availability with SOA Applications (https://go.microsoft.com/fwlink/?LinkId=214604)
Add an operating system image that will be deployed to the nodes | Add an Operating System Image (https://go.microsoft.com/fwlink/?LinkId=214590)
(Optional) Add drivers to the operating system image | Add Drivers for Operating System Images (https://go.microsoft.com/fwlink/?LinkId=214592)
Create a compute node template, selecting the option to add nodes with an operating system image | Create a Node Template (https://go.microsoft.com/fwlink/?LinkId=214589)
Add compute nodes, using the compute node template | Deploy Nodes from Bare Metal (https://go.microsoft.com/fwlink/?LinkId=214594)
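The failover clusters for the head node and broker node servers are normally created with Failover Cluster Manager as described in the guide above, but the Windows Server 2008 R2 failover clustering cmdlets can perform the same work. A rough sketch, where HEAD1, HEAD2, the cluster name, and the static IP address are placeholders for your environment:

```powershell
# Load the failover clustering cmdlets (requires the Failover Clustering feature)
Import-Module FailoverClusters

# Validate the two candidate head node servers and their shared storage
Test-Cluster -Node HEAD1, HEAD2

# Create the two-node failover cluster that will host the clustered head node services
New-Cluster -Name HPCHEADCL -Node HEAD1, HEAD2 -StaticAddress 10.0.0.50
```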
Additional considerations
This configuration also supports the following node deployment methods:
Item | Reference
---|---
Add nodes on which Windows Server 2008 R2 and HPC Pack 2008 R2 are already installed | Add Preconfigured Nodes (https://go.microsoft.com/fwlink/?LinkID=214588)
Use an XML file that specifies attributes of the nodes that are added to the HPC cluster |
Deploy nodes from bare metal using an iSCSI connection to a network-attached storage array | Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=194674)
If you are deploying an InfiniBand application network with NetworkDirect support, you can deploy the InfiniBand device drivers at the same time that you deploy the nodes in your cluster. For more information, see Deploying InfiniBand Device Drivers with NetworkDirect Support in Windows HPC Server 2008 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=137227).
Workstation cluster
Figure 5 A Windows HPC Server cluster of workstations
Features of the sample configuration
Creates an HPC cluster that is made up of domain-joined Windows® 7 workstations (running Windows 7 Enterprise, Windows 7 Professional, or Windows 7 Ultimate). The workstation nodes do not need to be dedicated cluster computers, and can be used for other tasks.
Makes Windows 7 workstations available to the HPC cluster to run jobs according to a time-based or activity-based availability policy, or manually. For example, you can configure the cluster to use workstations only on nights and weekends, or when keyboard or mouse activity has not been detected for a certain time.
Can run a variety of HPC jobs, but is ideal for short-running jobs that can be interrupted and that do not require internode communication.
Can be adapted to include dedicated on-premises nodes in addition to workstation nodes. For information about deploying on-premises nodes, see Basic on-premises HPC cluster earlier in this topic.
Can support a large number of nodes, but if you do this, consider deploying the HPC databases on one or more servers running Microsoft SQL Server. This requires additional configuration steps.
Example deployment options
Item | Description
---|---
HPC Pack 2008 R2 edition | Enterprise edition
HPC databases | Installed with SQL Server 2008 Express edition on the head node (default)
Network adapters |
Network configuration |
Key deployment steps
Step | Reference
---|---
Deploy the head node | Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)
Configure the head node | Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)
Install HPC Pack 2008 R2 Enterprise edition on each Windows 7 computer that you will use as a workstation node, selecting the option to join an existing HPC cluster by creating a new workstation node | Install HPC Pack 2008 R2 on the Workstation Computers, in the Adding Workstation Nodes in Windows HPC Server 2008 R2 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214606)
Assign the node template to add the workstation nodes to the cluster | Assign a Workstation Node Template, in the Adding Workstation Nodes in Windows HPC Server 2008 R2 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214607)
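Template assignment is done in HPC Cluster Manager as described in the step-by-step guide, but it can also be scripted from HPC PowerShell. The following sketch assumes the workstation computers are named WKS01 and WKS02 and that the workstation node template is called "Default WorkstationNode Template"; substitute the names used in your cluster:

```powershell
Add-PSSnapin Microsoft.HPC

# Look up the workstation node template by name
$template = Get-HpcNodeTemplate -Name "Default WorkstationNode Template"

# Assign the template to the new workstation nodes
Get-HpcNode -Name WKS01, WKS02 | Assign-HpcNodeTemplate -Template $template

# The workstations then become available according to the template's availability policy;
# with a manual policy, bring them online explicitly
Get-HpcNode -Name WKS01, WKS02 | Set-HpcNodeState -State Online
```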
Additional considerations
Using workstation nodes to run Windows HPC jobs has important security and administrative considerations. For more information, see Requirements for Adding Workstation Nodes (https://go.microsoft.com/fwlink/?LinkID=202684).
The enterprise network infrastructure and speed can strongly impact the performance of workstation nodes.
If you plan to deploy compute nodes in your cluster in addition to workstation nodes, you can use any topology supported by Windows HPC Server 2008 R2. However, if your jobs require communication between compute nodes and workstation nodes, that communication must be allowed by the topology. For more information, see Cluster Network Topologies for Workstation Nodes (https://go.microsoft.com/fwlink/?LinkId=214609).
Windows Azure cloud cluster
Figure 6 A Windows HPC Server cluster using Windows Azure nodes
Important: To deploy Windows Azure worker nodes, you must be running Windows HPC Server 2008 R2 Service Pack 1 or later. For more information and release notes for the service pack, see Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 1 (https://go.microsoft.com/fwlink/?LinkID=202812).
Features of the sample configuration
Creates an HPC cluster that uses minimal on-premises infrastructure, and that adds or removes Windows Azure nodes in the cloud as needed to change the capacity of the cluster.
Makes Windows Azure computational resources available according to a time-based availability policy, or manually. You pay for Windows Azure nodes only when they are made available.
Can run a variety of parallel jobs, but is ideal for small, service-oriented architecture (SOA) jobs that do not process large amounts of data. You can run SOA jobs by using the broker node role that is installed and enabled by default on the head node. For larger SOA jobs, additional on-premises broker nodes may need to be deployed in the HPC cluster. For more information, see SOA-enabled on-premises HPC cluster earlier in this topic.
Important: If you want to run an ISV application in the cloud, check with the vendor of your ISV application for the availability of the application in Windows Azure.
Requires a Windows Azure subscription in which a hosted service and a storage account are preconfigured.
Can be adapted to include dedicated on-premises compute nodes in addition to Windows Azure worker nodes. For information about deploying compute nodes, see Basic on-premises HPC cluster earlier in this topic.
Can support a large number of nodes, depending on your Windows Azure subscription and the on-premises configuration, but if you do this, consider deploying the HPC databases on one or more servers running Microsoft SQL Server. This requires additional configuration steps.
Example deployment options
Item | Description
---|---
HPC Pack 2008 R2 edition | Express edition
HPC databases | Installed with SQL Server 2008 Express edition on the head node (default)
Network adapters |
Network configuration |
Key deployment steps
Step | Reference
---|---
Deploy the head node | Deploy the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=214560)
Configure the head node | Configure the Head Node, in the Design and Deployment Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkID=198319)
Prepare to deploy Windows Azure worker nodes |
Create a Windows Azure worker node template to define the availability policy of the nodes | Step 4: Create a Windows Azure Worker Node Template, in the Deploying Windows Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkID=200496)
Add the Windows Azure worker nodes to the cluster | Add Windows Azure Worker Nodes to the HPC Cluster, in the Deploying Windows Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214615)
Start the Windows Azure worker nodes, which provisions the worker role nodes in Windows Azure | Start the Windows Azure Worker Nodes, in the Deploying Windows Azure Worker Nodes in Windows HPC Server 2008 R2 SP1 Step-by-Step Guide (https://go.microsoft.com/fwlink/?LinkId=214616)
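Once the worker nodes have been started and provisioned in Windows Azure, you can verify from HPC PowerShell that they reached the Online state before submitting jobs. A sketch that assumes the node names produced by your worker node template begin with AZURECN (adjust the pattern to match the naming series you chose):

```powershell
Add-PSSnapin Microsoft.HPC

# List the Windows Azure worker nodes and confirm that they are online and healthy
Get-HpcNode | Where-Object { $_.NetBiosName -like "AZURECN-*" } |
    Format-Table NetBiosName, NodeState, NodeHealth
```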
Additional considerations
Before using Windows Azure worker nodes, consider your organization’s policies and other limitations for storing or processing sensitive data in the cloud.
The performance of Windows Azure worker nodes may be less than that of dedicated on-premises compute nodes.
Windows Azure worker nodes cannot access on-premises nodes or file shares directly.