Step 1: Prepare for Your Deployment

Updated: August 2011

Applies To: Windows HPC Server 2008 R2

The first step in deploying your HPC cluster is to make important decisions, such as how you will add nodes to your cluster and which network topology you will use. The following checklist describes the steps involved in preparing for your deployment.

Checklist: Prepare for your deployment

1.1. Review initial considerations and system requirements

Review the list of initial considerations and system requirements to ensure that you have all the necessary hardware and software to deploy an HPC cluster.

1.2. Decide on the database configuration

Determine the database edition, installation location, and configuration options that are appropriate for your cluster.

1.3. Decide what type of nodes you want to add to your cluster and how many

Decide if you want to add compute nodes, broker nodes, workstation nodes, or Windows Azure nodes to your cluster. Also, decide how many nodes to add.

1.4. Decide how to add compute nodes to your cluster

Decide if you will be adding nodes to your cluster from bare metal, from preconfigured nodes, or from an XML file. Decide also if you want to deploy nodes over iSCSI.

1.5. Choose the Active Directory domain for your cluster

Choose the Active Directory® domain to which you will join the head node and compute nodes of your HPC cluster.

1.6. Choose a domain account for adding nodes

Choose an existing domain account with enough privileges to add nodes to the cluster.

1.7. Choose a network topology for your cluster

Choose how the nodes in your cluster will be connected, and how the cluster will be connected to your enterprise network.

1.8. Prepare for multicast

If you will be deploying nodes from bare metal and want to multicast the operating system image that you will be using during deployment, configure your network switches appropriately.

1.9. Prepare for the integration of scripted power control tools

If you want to use your own power control tools to start, shut down, and reboot compute nodes remotely, obtain and test all the necessary components of your power control tools.

1.1. Review initial considerations and system requirements

The following sections list some initial considerations that you need to review, as well as hardware and software requirements for Windows HPC Server 2008 R2.

Note
If you plan to run LINQ to HPC jobs on your cluster, there are additional hardware and software guidelines. For more information, see Deploying a Windows HPC Server Cluster to Run Jobs Using the LINQ to HPC Components (Preview).

Initial considerations

Review the following initial considerations before you deploy your HPC cluster.

Compatibility with previous versions

  • If you currently have a Windows HPC Server 2008 cluster, you can upgrade your cluster to Windows HPC Server 2008 R2. For detailed upgrade information and step-by-step procedures, see the Upgrade Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/p/?LinkID=197497).

  • The upgrade of a Windows Compute Cluster Server 2003 head node to a Windows HPC Server 2008 R2 head node is not supported.

  • Windows HPC Server 2008 R2 provides application programming interface (API)-level compatibility for applications that are integrated with Windows HPC Server 2008 or with Windows Compute Cluster Server 2003. These applications might, however, require changes to run on Windows Server® 2008 R2. If you encounter problems running your application on Windows Server 2008 R2, you should consult your software vendor.

  • For additional information about the compatibility of Windows HPC Server 2008 R2 with Windows HPC Server 2008 and with Windows Compute Cluster Server 2003, see the product information at the Windows HPC Server website (https://go.microsoft.com/fwlink/p/?LinkID=85562).

Server roles added during installation

The installation of HPC Pack 2008 R2 adds the following server roles to the head node:

  • Dynamic Host Configuration Protocol (DHCP) Server, to provide IP addresses and related information for the cluster nodes.

  • Windows Deployment Services, to deploy nodes remotely.

  • File Services, to manage shared folders.

    Note
    HPC Pack 2008 R2 with Service Pack 2 or later also adds the File Server Resource Manager service of the File Services role to configure folder quotas.
  • Network Policy and Access Services, which enables Routing and Remote Access so that network address translation (NAT) services can be provided to the cluster nodes.
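
If you want to confirm after installation that these roles were added, you can query the installed roles on the head node from Windows PowerShell. The following is a minimal sketch that assumes the ServerManager module included with Windows Server 2008 R2; it only lists matching installed roles and does not change any configuration.

# Sketch: list the installed roles that HPC Pack 2008 R2 adds to the head node.
# Run in an elevated Windows PowerShell session on Windows Server 2008 R2.
Import-Module ServerManager

Get-WindowsFeature |
    Where-Object { $_.Installed -and $_.DisplayName -match 'DHCP|Windows Deployment Services|File Services|Network Policy' } |
    Select-Object Name, DisplayName |
    Format-Table -AutoSize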

Hardware requirements

Hardware requirements for Windows HPC Server 2008 R2 are very similar to those for Windows Server 2008 R2.

Note
For more information about installing Windows Server 2008 R2, including system requirements, see Installing Windows Server 2008 R2 (https://go.microsoft.com/fwlink/p/?LinkID=194693).

Processor (x64-based):

  • Minimum: 1.4 GHz

  • Recommended: 2 GHz or faster

RAM:

  • Minimum: 512 MB

  • Recommended: 2 GB or more

Available disk space:

  • Minimum: 50 GB

  • Recommended: 80 GB or more

Drive:

  • DVD-ROM drive, if you will be using DVD media to install HPC Pack 2008 R2.

Network adapters:

  • The number of network adapters that you install on the nodes in your cluster depends on the network topology that you choose for your cluster. For more information about the different HPC cluster network topologies, see Appendix 1: HPC Cluster Networking.

  • You can deploy your HPC cluster with only one network adapter on each node, but you will be limited to a single network topology (all nodes only on an enterprise network). An additional network adapter on the head node gives you a second possible topology to choose from (compute nodes isolated on a private network).

  • You should also evaluate the possibility of installing a low-latency and high-throughput application network for your HPC cluster. This network will require installing specialized network adapters on the nodes.
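
If you want to check a candidate computer against the minimum values listed above, the following Windows PowerShell sketch uses standard WMI classes to report processor speed, installed memory, and free disk space. The C: drive letter is an assumption; adjust it to the disk you plan to use.

# Sketch: report hardware values to compare against the minimums listed above.
# The C: drive letter is an assumption; change it to match your system disk.
$cpuMHz = (Get-WmiObject Win32_Processor | Select-Object -First 1).MaxClockSpeed
$ramGB  = [math]::Round((Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory / 1GB, 1)
$diskGB = [math]::Round((Get-WmiObject Win32_LogicalDisk -Filter "DeviceID='C:'").FreeSpace / 1GB, 1)

"Processor: {0} MHz (minimum 1400 MHz)" -f $cpuMHz
"RAM:       {0} GB (minimum 0.5 GB, 2 GB or more recommended)" -f $ramGB
"Free disk: {0} GB (minimum 50 GB, 80 GB or more recommended)" -f $diskGB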

Software requirements

The following list outlines the software requirements for the nodes in a Windows HPC Server 2008 R2 cluster:

  • The head node computer must be running Windows Server 2008 R2 HPC Edition, or another edition of Windows Server 2008 R2.

  • The operating system on the head node must be installed in one of the languages that HPC Pack 2008 R2 supports on the head node: English, Japanese, or Simplified Chinese.

  • A compute node can be running Windows Server 2008 R2 HPC Edition, another edition of Windows Server 2008 R2, or a 64-bit edition of Windows Server® 2008.

  • A broker node can only be running Windows Server 2008 R2 HPC Edition, or another edition of Windows Server 2008 R2.

  • A workstation node can be running Windows® 7 Enterprise, Windows 7 Professional, or Windows 7 Ultimate (joining a domain is required).

  • HPC Pack 2008 R2 must be installed on the head node and on every node that you add to the cluster.

To enable users to submit jobs to your HPC cluster, you can install the utilities included with HPC Pack 2008 R2 on client computers. Those client computers must be running any of the following operating systems:

  • Windows 7 Enterprise, Windows 7 Professional, or Windows 7 Ultimate

  • Windows Vista® Enterprise, Windows Vista Business, Windows Vista Home, or Windows Vista Ultimate with Service Pack 2 or later (32-bit or 64-bit editions)

  • Windows XP Professional with Service Pack 2 or later (x64-based), or Windows XP Professional with Service Pack 3 or later (x86-based)

  • Windows Server 2008 with Service Pack 2 or later (32-bit or 64-bit editions)

  • Windows Server 2003 R2 (x86- or x64-based)

  • Windows Server 2003 with Service Pack 2 or later (x86- or x64-based)

1.2. Decide on the database configuration

Windows HPC Server 2008 R2 uses four different Microsoft® SQL Server® databases to store management, job scheduling, reporting, and diagnostics data. By default, if no other edition of SQL Server is detected, the head node installation program installs the Express edition of SQL Server 2008 SP1 (or the Express edition of SQL Server 2008 R2) and creates the four databases on the head node. Depending on the size, expected job throughput, and other requirements of your cluster, you can install a different edition of SQL Server 2008 SP1 or later, or install the databases on remote servers. The advantage of installing the databases on remote servers is that it saves resources on the head node, helping ensure that it can efficiently manage the cluster.

For detailed database configuration options and tuning guidelines, see Database Capacity Planning and Tuning for Microsoft HPC Pack.

Important
You should consider installing the HPC databases on one or more remote servers if your cluster will have more than 256 nodes or a high rate of job throughput.

To install the HPC databases on a remote server, that server must be running SQL Server® 2008 SP1 or later. Also, you need to create the databases and configure them for remote access before you start the deployment process for your HPC cluster.

For detailed information and step-by-step procedures for installing the HPC databases on remote servers, see the Deploying an HPC Cluster with Remote Databases Step-by-Step Guide (https://go.microsoft.com/fwlink/p/?LinkID=186534).
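
Before you start the deployment, you may want to confirm that the head node can reach the remote SQL Server instance and that the instance is running a supported edition. The following sketch assumes that the SQL Server command-line tool (sqlcmd) is installed on the computer where you run it; the server and instance names are placeholders for your own values.

# Sketch: verify connectivity to a remote SQL Server instance and report its version and edition.
# The server\instance name below is a placeholder; replace it with your own.
$remoteInstance = "SQLSERVER01\HPCDB"
sqlcmd -S $remoteInstance -E -Q "SELECT @@VERSION"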

1.3. Decide what type of nodes you want to add to your cluster and how many

You can add the following types of nodes to your cluster:

  • Compute nodes. Compute nodes are used for running jobs. This type of node cannot become a different type of node (that is, change roles) without being redeployed.

  • Broker nodes. Windows Communication Foundation (WCF) broker nodes are used for routing WCF calls from service-oriented architecture (SOA) clients to the SOA services running on nodes in your cluster. This type of node can change roles to become a compute node without being redeployed.

  • Workstation nodes. Workstation nodes can also run jobs. This type of node can only be created on a computer that is running Windows 7 Enterprise, Windows 7 Professional, or Windows 7 Ultimate. This type of node cannot change roles.

  • Windows Azure nodes. If you have a Windows Azure™ subscription, you can add Windows Azure nodes on demand to increase your cluster capacity when you need it. Like compute nodes and workstation nodes, Windows Azure nodes can run jobs.

    Note
    You can add Windows Azure nodes only in Windows HPC Server 2008 R2 with Service Pack 1 or later. For more information, see Deploying Windows Azure Nodes.

When HPC Pack 2008 R2 is installed, different features are installed depending on the type of node that is being created. These features determine the role that the node will perform in the cluster. In some cases, a node can change roles because it already has the features needed to perform a different role. The ability to change roles is an important consideration when you decide which types of nodes to add to your cluster.

Another important decision is the number of nodes that you want to add. For example, if you plan to run SOA jobs, the number of nodes can markedly affect cluster performance. If you plan to run LINQ to HPC jobs, you must have a minimum of three nodes, including the head node. If you are adding broker nodes, you also need to decide how many compute nodes you will add for each broker node that is available on the cluster. The ratio of broker nodes to compute nodes can also affect cluster performance.

Finally, if you want to configure the head node or a broker node in a failover cluster, you will need at least one additional computer for each failover cluster that you configure, which might reduce the number of nodes that you can add to your cluster. For more information about running an HPC cluster with failover clustering, see the Configuring Failover Clustering Step-by-Step Guide (https://go.microsoft.com/fwlink/p/?LinkId=194691) and Configuring Windows HPC Server for High Availability with SOA Applications (https://go.microsoft.com/fwlink/p/?LinkId=194786).

1.4. Decide how to add compute nodes to your cluster

You can add compute nodes to your cluster in any of the following ways:

  • Deploy nodes from bare metal. The operating system and all the necessary HPC cluster features are automatically installed on each node as it is added to the cluster. No manual installation of the operating system or other software is required. Bare metal deployment is only possible for compute nodes and broker nodes.

  • Add preconfigured compute nodes. The compute nodes are already running one of the supported operating systems, and HPC Pack 2008 R2 is manually installed on each node.

  • Import a node XML file. A node XML file contains a list of all the nodes that will be added to the cluster. This XML file can be used to add preconfigured nodes or to deploy nodes from bare metal. For more information about node XML files, see Appendix 2: Creating a Node XML File.
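
As a rough illustration only, the following Windows PowerShell sketch writes a small node list to a file for later import. The element and attribute names shown are placeholders, not the authoritative schema (which is described in Appendix 2: Creating a Node XML File), and the file path is an example.

# Sketch: write an illustrative node list to an XML file for later import.
# The element and attribute names are placeholders only; use the schema described
# in Appendix 2: Creating a Node XML File for a real deployment.
$nodeXml = @"
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical example: two nodes identified by name and MAC address -->
<Nodes>
  <Node Name="COMPUTE-001" MacAddress="00-15-5D-00-00-01" />
  <Node Name="COMPUTE-002" MacAddress="00-15-5D-00-00-02" />
</Nodes>
"@
$nodeXml | Out-File -FilePath "C:\HpcSetup\MyNodes.xml" -Encoding UTF8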

The following is a list of details to take into consideration when choosing how to add nodes to your HPC cluster:

  • When deploying nodes from bare metal, Windows HPC Server 2008 R2 automatically generates computer names for your nodes. During the configuration process, you will be required to specify the naming convention to use when automatically generating computer names for the new nodes.

  • Nodes are assigned their computer name in the order that they are deployed.

  • If you want to add nodes from bare metal and assign computer names in a different way, you can use a node XML file. For more information about node XML files, see Appendix 2: Creating a Node XML File.

  • If you want to add preconfigured nodes to your cluster, you will need to install one of the supported operating systems on each node (if not already installed), as well as HPC Pack 2008 R2.

Deploying nodes over iSCSI

You can centralize the storage of your HPC cluster by using a network-attached storage array. A network-attached storage array is a computer, storage system, or appliance that provides storage resources over a network connection.

By using a storage array, the nodes in your cluster will not require a local hard disk drive to serve as a system disk. Instead, the nodes use the storage resources on the storage array to boot the operating system over the network, using an iSCSI connection.

Nodes that are deployed over iSCSI are deployed from bare metal.

To deploy nodes over iSCSI, you will need the following:

  • One or more network-attached storage arrays

  • A network connection between the nodes in your cluster and the storage arrays

  • An iSCSI provider for the storage arrays, installed on the head node

Note
For detailed information about iSCSI deployment and step-by-step procedures for deploying iSCSI boot nodes, see the Deploying iSCSI Boot Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/p/?LinkId=194674).

1.5. Choose the Active Directory domain for your cluster

The nodes in your HPC cluster must be members of an Active Directory domain. Before deploying your cluster, you must choose the Active Directory domain that you will use for your HPC cluster.

If you do not have an Active Directory domain to which you can join your cluster, or if you prefer not to join an existing domain, you can install the Active Directory Domain Services role on a computer that is running Windows Server 2008 R2 and then configure a domain controller on that computer. For more information about installing the Active Directory Domain Services role on a computer that is running Windows Server 2008 R2, see the AD DS Installation and Removal Step-by-Step Guide (https://go.microsoft.com/fwlink/p/?LinkID=119580).

Important
Because of potential administrative difficulties and to ensure cluster performance, we do not recommend using the head node as a domain controller unless isolation is required (for example, for test purposes) or no other option exists.
Caution
If you choose to install and configure an Active Directory domain controller on the head node, consult with your network administrator about the correct way to isolate the new Active Directory domain from the enterprise network, or how to join the new domain to an existing Active Directory forest.
Note
If you are installing HPC Pack 2008 R2 with Service Pack 2 or later, one consideration is the location of the runtime data share, a file share that is configured during installation. You can choose to use local storage on the head node, or you can configure an existing file share on a file server in the Active Directory domain. If you choose to configure an existing file share, you must prepare the share before the installation of HPC Pack 2008 R2 on the head node as described in Configure the Runtime Data Share.
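
Before deployment, you can quickly confirm which domain the head node computer is joined to and, if you plan to use an existing file share for the runtime data share, that the share is reachable. In the following sketch the UNC path is a hypothetical example; substitute your own file server and share name.

# Sketch: report the Active Directory domain of this computer and test access to a
# planned runtime data share. The UNC path is a placeholder for your own share.
(Get-WmiObject Win32_ComputerSystem).Domain

$runtimeShare = "\\FILESERVER01\HpcRuntimeData"
Test-Path $runtimeShare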

1.6. Choose a domain account for adding nodes

During the configuration process of your HPC cluster, you must provide credentials for a domain user account that will be used for adding nodes and for system configuration. Choose an existing account or create a new account before starting your cluster deployment.

The following is a list of details to take into consideration when choosing the user account:

  • The user account that you choose must be a domain account with enough privileges to create Active Directory computer accounts for the nodes.

  • If the policies of your organization restrict you from using a domain account that can add new computers to the domain, you will need to ask your domain administrator to pre-create the computer objects for you in Active Directory Domain Services before you deploy your nodes. For more information, see Deploy Nodes with Pre-created Computer Objects in Active Directory (https://go.microsoft.com/fwlink/p/?LinkId=194363).

  • If part of your deployment requires access to resources on the enterprise network, the user account must have the necessary permissions to access those resources—for example, installation files that are available on a network server.

  • If you want to restart nodes remotely by using HPC Cluster Manager, the account must be a member of the local Administrators group on the head node. This requirement is only necessary if you do not have scripted power control tools that you can use to remotely restart the compute nodes.
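
A quick way to confirm the last point is to list the members of the local Administrators group on the head node and look for the account that you plan to use. The account name in the following sketch is a placeholder.

# Sketch: check whether a planned domain account appears in the local Administrators
# group on the head node. CONTOSO\HpcAdmin is a placeholder account name.
$account = "CONTOSO\HpcAdmin"
net localgroup Administrators | Select-String -SimpleMatch $account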

1.7. Choose a network topology for your cluster

Windows HPC Server 2008 R2 supports five cluster topologies. These topologies are distinguished by how the nodes in the cluster are connected to each other and to the enterprise network. The five supported cluster topologies are:

  • Topology 1: Compute nodes isolated on a private network

  • Topology 2: All nodes on enterprise and private networks

  • Topology 3: Compute nodes isolated on private and application networks

  • Topology 4: All nodes on enterprise, private, and application networks

  • Topology 5: All nodes on an enterprise network

For more information about each network topology and each HPC cluster network, see Appendix 1: HPC Cluster Networking.

When you are choosing a network topology, you must take into consideration your existing network infrastructure and the type of nodes that you will be adding to your cluster:

  • Decide which network in the topology that you have chosen will serve as the enterprise network, which will serve as the private network, and which will serve as the application network.

  • The network adapter on the head node that is connected to the enterprise network must not be in automatic private IP addressing configuration (that is, the IP address for that adapter must not start with 169.254). That adapter must have a valid IP address, assigned dynamically or manually (static). A quick check for automatic addresses is sketched after this list.

  • If you choose a topology that includes a private network, and you are planning to add nodes to your cluster from bare metal:

    • Ensure that there are no Pre-Boot Execution Environment (PXE) servers on the private network.

    • If you want to use an existing DHCP server for your private network, ensure that it is configured to recognize the head node as the PXE server in the network.

  • If you want to enable the DHCP server on your head node for the private network or the application network, and there are other DHCP servers connected to those networks, you must disable those other DHCP servers.

  • If you have an existing Domain Name System (DNS) server connected to the same network as the nodes in your cluster, no action is necessary, but the nodes will be automatically deregistered from that DNS server.

  • Contact your system administrator to determine if Internet Protocol security (IPsec) is enforced on your domain through Group Policy. If IPsec is enforced on your domain through Group Policy, you may experience issues during deployment. A workaround is to make your head node an IPsec boundary server so that the other nodes in your cluster can communicate with the head node during PXE boot.

  • If you want to add workstation nodes to your cluster, topology 5 (all nodes on an enterprise network) is the recommended topology, but other topologies are supported. If you want to add workstation nodes on other topologies, see Adding Workstation Nodes Step-by-Step Guide (https://go.microsoft.com/fwlink/p/?LinkID=194376).

  • If you want to add broker nodes to your cluster, they must be connected to the network where the clients that are starting SOA sessions are connected (usually the enterprise network) and to the network where the nodes that are running the SOA services are connected (if different from the network where the clients are connected).

  • If you want to add nodes that can run LINQ to HPC jobs, you should choose a network topology that includes the enterprise network.
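
As noted in the list above, the network adapter on the head node that is connected to the enterprise network must not be using an automatic private IP address. The following sketch uses a standard WMI class to flag any adapter on the local computer that has a 169.254.x.x address.

# Sketch: list network adapters that are using an automatic private IP address (169.254.x.x).
# Any adapter reported here is not suitable for the enterprise network connection.
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    Where-Object { $_.IPAddress -match '^169\.254\.' } |
    Select-Object Description, IPAddress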

1.8. Prepare for multicast

If you will be deploying nodes from bare metal and want to multicast the operating system image that you will be using during deployment, we recommend that you prepare for multicast by:

  • Enabling Internet Group Management Protocol (IGMP) snooping on your network switches, if this feature is available. This will help to reduce multicast traffic.

  • Disabling Spanning Tree Protocol (STP) on your network switches, if this feature is enabled.

Note
For more information about these settings, contact your network administrator or your networking hardware vendor.

1.9. Prepare for the integration of scripted power control tools

The cluster administration console (HPC Cluster Manager) includes actions to start, shut down, and reboot compute nodes remotely. These actions are linked to a script file (CcpPower.cmd) that performs the power control operations by using operating system commands. You can replace the default commands in that script file with your own power control scripts, such as Intelligent Platform Management Interface (IPMI) scripts that are provided by your cluster solution vendor.

In preparation for this optional integration, you must obtain all the necessary scripts, dynamic-link library (DLL) files, and other components of your power control tools. After you have obtained all the necessary components, test them independently and ensure that they work as intended on the computers that you will be deploying as nodes in your cluster.

For information about modifying CcpPower.cmd to integrate your own scripted power control tools, see Appendix 5: Scripted Power Control Tools.
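
As a rough illustration of the kind of helper that a modified CcpPower.cmd could call, the following sketch wraps a power control utility for IPMI-capable hardware. The utility name (ipmitool), its arguments, and the placeholder credentials are assumptions; substitute the scripts and tools provided by your cluster solution vendor, and see Appendix 5: Scripted Power Control Tools for how to wire your script into CcpPower.cmd.

# Sketch: a helper script that a modified CcpPower.cmd could call to control node power.
# ipmitool is shown only as an example of a vendor power control tool; replace the tool,
# its arguments, and the placeholder credentials with what your vendor provides.
param(
    [string]$NodeBmcAddress,                 # management controller (BMC) address of the node
    [ValidateSet("on","off","cycle")]
    [string]$Action                          # power operation to perform
)

$bmcUser = "admin"       # placeholder credential
$bmcPass = "password"    # placeholder credential

& ipmitool -H $NodeBmcAddress -U $bmcUser -P $bmcPass chassis power $Action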