Release Notes for Windows HPC Server 2008 R2

Updated: September 2010

Applies To: Windows HPC Server 2008 R2

These release notes address late-breaking issues and information about the final release of Windows® HPC Server 2008 R2.

In these release notes:

  • Uninstallation order for HPC Pack 2008 R2

  • Upgrading from HPC Pack 2008 R2 Express edition to HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition

  • Recommended updates for Windows HPC Server 2008 R2 clusters

  • Installing HPC Pack 2008 R2 on the head node fails if the Windows Firewall service is stopped

  • Event forwarding configuration feature has been discontinued

  • Some parameters and subcommands for command-line tools are being deprecated

  • Sending a large number of requests to a SOA session will trigger broker throttling and EndRequest() will fail

  • Job submission from the command prompt must be done in an elevated Command Prompt window

  • You see a 'There was no endpoint listening' error when running diagnostic tests

  • Rebooting a node that is running Windows Server 2008 from HPC Cluster Manager can fail

  • Upgrading from Windows HPC Server 2008 can cause Windows Deployment Services on the head node to answer PXE requests on the enterprise network

  • Upgrading from Windows HPC Server 2008 may fail in some circumstances

  • Upgrading from Windows HPC Server 2008 may fail on clusters where a node XML file was used

  • The diagnostic test Available Software Updates for Node Report fails when nodes cannot connect to Microsoft Update or a local WSUS server

  • On large clusters, MPI jobs that include a large number of nodes can fail, including MPI Ping-Pong diagnostic tests

Uninstallation order for HPC Pack 2008 R2

If you need to uninstall HPC Pack 2008 R2, uninstall the different features in the following order:

  1. Microsoft HPC Pack 2008 R2 Services for Excel 2010

  2. Microsoft HPC Pack 2008 R2 Server Components

  3. Microsoft HPC Pack 2008 R2 Client Components

  4. Microsoft HPC Pack 2008 R2 MS-MPI Redistributable Pack

Important
Not all features are installed on all computers. For example, Microsoft HPC Pack 2008 R2 Services for Excel 2010 is only installed on the head node when the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition is installed.
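
If you script the uninstallation, each feature can also be removed from an elevated Command Prompt window with the wmic command-line tool. This is only a sketch; the product names must match exactly what appears in Programs and Features, and the features should be removed in the order listed above. For example, for the first feature:

wmic product where name="Microsoft HPC Pack 2008 R2 Services for Excel 2010" call uninstall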

When HPC Pack 2008 R2 is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2008 R2, you can remove the following programs if they will no longer be used:

  • Microsoft Report Viewer Redistributable 2008

  • Microsoft SQL Server 2008 (64-bit)

    Note
    This program also includes: Microsoft SQL Server 2008 Browser, Microsoft SQL Server 2008 Setup Support Files, and Microsoft SQL Server VSS Writer.
  • Microsoft SQL Server 2008 Native Client

Additionally, the following server roles and features might have been added when HPC Pack 2008 R2 was installed, and can be removed if they will no longer be used:

  • Dynamic Host Configuration Protocol (DHCP) Server server role

  • File Services server role

  • Network Policy and Access Services server role

  • Windows Deployment Services server role

  • Microsoft .NET Framework feature

  • Message Queuing feature
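
If you prefer the command line, these server roles and features can be removed with the ServerManager cmdlets in Windows PowerShell. The following is a minimal sketch; the feature names shown are the standard Server Manager identifiers as we understand them, so run Get-WindowsFeature first to confirm which of them are actually installed on your head node:

Import-Module ServerManager
Get-WindowsFeature DHCP, File-Services, NPAS, WDS, NET-Framework, MSMQ
Remove-WindowsFeature DHCP, File-Services, NPAS, WDS, NET-Framework, MSMQ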


Upgrading from HPC Pack 2008 R2 Express edition to HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition

When upgrading your cluster from the HPC Pack 2008 R2 Express edition to the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition, you must upgrade the head node first. After the head node has been upgraded, compute nodes can be redeployed and will automatically have the features of the upgraded edition installed.

If your HPC cluster includes Windows Communication Foundation (WCF) broker nodes that are configured in a failover cluster and you do not want to redeploy them, you need to disable the Cluster Service on the broker nodes before performing the upgrade on those nodes.

Note
To upgrade from the HPC Pack 2008 R2 Express edition to the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition, run setup.exe to start the installation wizard, or run the following command at a command prompt: setup.exe -unattend -upgradesku
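
To disable the Cluster service on a broker node before the upgrade, as described above, you can run the following commands in an elevated Windows PowerShell window on each broker node. ClusSvc is the service name used by Failover Clustering; re-enable and start the service after the upgrade completes:

Set-Service -Name ClusSvc -StartupType Disabled
Stop-Service -Name ClusSvc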


Recommended updates for Windows HPC Server 2008 R2 clusters

The following table lists the updates that are recommended for your Windows HPC Server 2008 R2 cluster, and the nodes where they should be installed:

Update      Install on nodes

KB981314    Head node, compute nodes, WCF broker nodes, and workstation nodes

KB981347    Head node

KB981002    WCF broker nodes that are configured in a failover cluster

Note
More information about these updates is available on the download page of each update.
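
As an illustration only, after downloading an update you can push it to a group of nodes with the clusrun command-line tool. The share path, file name, and node group below are placeholders:

clusrun /nodegroup:ComputeNodes wusa.exe \\<head_node>\Updates\KB981314.msu /quiet /norestart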


Installing HPC Pack 2008 R2 on the head node fails if the Windows Firewall service is stopped

If the Windows Firewall service is stopped or disabled on the head node computer, installation of HPC Pack 2008 R2 does not complete successfully because the firewall rules that Windows HPC Server 2008 R2 requires are not configured.

If this problem occurs during installation, entries such as the following may appear in the hpcMsi-DateTime.txt log file (located in the %Temp%\HPCSetupLogs folder):

CAQuietExec:  Firewall rule 'HPC Host (TCP-In)' addition failed: 0x800706d9
CAQuietExec:  Warning: failed to add rule HPC Host (TCP-In), continue
CAQuietExec:  Firewall rule 'HPC Host for controller (TCP-In)' addition failed: 0x800706d9
CAQuietExec:  Warning: failed to add rule HPC Host for controller (TCP-In), continue
CAQuietExec:  Firewall rule 'HPC Host x32 (TCP-In)' addition failed: 0x800706d9

Workaround

Ensure that the Windows Firewall service is started on the head node computer, and then try to install HPC Pack 2008 R2 again.

You can start the Windows Firewall service by using the Services MMC snap-in, or by running the following command in an elevated Command Prompt window:

net start MpsSvc

To verify the configuration of Windows Firewall on the head node computer, use the Windows Firewall with Advanced Security MMC snap-in.
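
You can also check the service state and the HPC firewall rules from the command line. In the following sketch, the findstr filter simply narrows the output to rules whose names contain HPC:

sc query MpsSvc
netsh advfirewall firewall show rule name=all | findstr /i "HPC"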


Event forwarding configuration feature has been discontinued

In HPC Pack 2008 R2, the cluster feature to configure automatic event forwarding from the nodes to the head node has been discontinued. The -ForwardEvents parameter of the Set-HpcClusterProperty cmdlet in HPC PowerShell has also been removed.

Workaround

You can have node events forwarded to the head node by several standard methods, depending on the size of your cluster. For example, for a cluster with fewer than 256 nodes, you can create a collector-initiated event subscription in which the head node polls the cluster for new events. For a larger cluster, it is recommended that you create a source-initiated event subscription in which the compute nodes send events to the head node. For more information, see View Node Events.
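
As an outline only, a subscription is typically prepared by enabling Windows Remote Management on the nodes that will forward events and the Windows Event Collector service on the head node, and then loading a subscription definition that you author yourself (subscription.xml below is a hypothetical file):

rem On each node that forwards events:
winrm quickconfig

rem On the head node:
wecutil qc
wecutil cs subscription.xml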


Some parameters and subcommands for command-line tools are being deprecated

The following parameters for command-line tools are being deprecated and will be removed after this release:

  • The /numprocessors parameter in the job new command-line tool

  • The /numprocessors parameter in the job add command-line tool

  • The /askednodes parameter in the job new command-line tool

The following subcommands for command-line tools are being deprecated and will be removed after this release:

  • The pause subcommand for the node command-line tool

  • The resume subcommand for the node command-line tool

  • The approve subcommand for the node command-line tool

Workaround

Substitute the deprecated parameters and subcommands with the following:

Parameter or subcommand                      Substitute

/numprocessors                               /numcores

/askednodes                                  /requestednodes

node pause, node resume, and node approve    Use HPC PowerShell or HPC Cluster Manager
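
For example, a submission that previously used /numprocessors and /askednodes can be rewritten as follows, and a node can be taken offline with HPC PowerShell instead of node pause. The application and node names are placeholders:

job submit /numcores:8 /requestednodes:NODE01,NODE02 myapp.exe
Set-HpcNodeState -Name NODE01 -State offline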


Sending a large number of requests to a SOA session will trigger broker throttling and EndRequest() will fail

Each SOA service has its own configuration settings for throttling in the service configuration file. A call to EndRequest() will fail if the processing time of all requests is too long. This can commonly happen to non-durable sessions when the number of messages is larger than the default threshold. For more information about throttling, see Overview of SOA Programming Model and Runtime System for Windows HPC Server 2008 (https://go.microsoft.com/fwlink/?LinkID=121221).

Workaround

There are two possible workarounds:

  • Increase the threshold in the service configuration file, ensuring that it is larger than the total number of requests.

  • Specify a longer timeout period when calling EndRequest(timeout), ensuring that the timeout period is longer than the total message processing time.


Job submission from the command prompt must be done in an elevated Command Prompt window

If you want to submit a job to the cluster by running the job command-line tool, you must run the tool in an elevated Command Prompt window. If you do not, you see the following error:

Could not connect to the scheduler. The user may not be authorized to connect to the scheduler or the scheduler service might not be running.

Workaround

To open an elevated Command Prompt window, click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If you are prompted for an administrator password or confirmation, type the password or provide confirmation.
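
If you are already working in a non-elevated session, the following PowerShell one-liner opens an elevated Command Prompt window and triggers the same elevation prompt:

powershell -Command "Start-Process cmd.exe -Verb RunAs"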


You see a 'There was no endpoint listening' error when running diagnostic tests

When you run certain diagnostic tests on your cluster, you might see a There was no endpoint listening error in the test results, and the test will be marked as Failure.

This error can happen when you run all the Excel diagnostic tests at the same time on a node, when you run the Excel and SOA tests at the same time on a node, or when you run multiple instances of the Excel or the SOA tests at the same time on a node.

Workaround

Run each of the Excel and SOA tests separately, or run them on different nodes, and do not run more than one instance of each of these tests at the same time on any node. Also, if you see the There was no endpoint listening error in the results for a test, run the test again.


Rebooting a node that is running Windows Server 2008 from HPC Cluster Manager can fail

In HPC Cluster Manager, when you select to reboot a node that is running the Windows Server® 2008 operating system, the node might fail to reboot. This can occur because the action in HPC Cluster Manager to reboot nodes remotely is based on the wmic command-line tool, and by default Windows Firewall in Windows Server 2008 blocks remote Windows Management Instrumentation (WMI) requests.

Workaround

On the nodes that you want to reboot remotely, you need to enable the Windows Management Instrumentation (WMI) rule group in Windows Firewall. The following command enables that rule group:

netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes

Note
You can add the command to the node template that you use to deploy nodes that are running Windows Server® 2008. Add the command as a new Run OS command task in the node template, under Deployment.
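
To confirm that a node now accepts remote WMI requests, you can query it from the head node before retrying the reboot action. A quick check, where NODE01 is a placeholder for the node name:

wmic /node:"NODE01" os get Caption,LastBootUpTime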


Upgrading from Windows HPC Server 2008 can cause Windows Deployment Services on the head node to answer PXE requests on the enterprise network

If your Windows® HPC Server 2008 cluster is configured to deploy compute nodes from bare metal, and you upgrade the operating system on your head node to Windows Server® 2008 R2 as part of the upgrade to Windows HPC Server 2008 R2, Windows Deployment Services on the head node can start answering PXE requests on the enterprise network.

Workaround

Upgrade your head node to HPC Pack 2008 R2, as explained in the Upgrade Guide for Windows HPC Server 2008 R2 (https://go.microsoft.com/fwlink/?LinkId=197497).
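
As a stop-gap until the upgrade is complete, one option is to stop the Windows Deployment Services service on the head node so that it no longer answers PXE requests (WDSServer is the service name). Note that compute node deployment is unavailable while the service is stopped:

net stop WDSServer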


Upgrading from Windows HPC Server 2008 may fail in some circumstances

Upgrading from Windows HPC Server 2008 to Windows HPC Server 2008 R2 may fail with the following error when copying files: Access to the path \\<head_node>\REMINST\Drivers is denied, where <head_node> is the name of the head node that you are upgrading. For a standalone head node, this is the computer name, and for a head node in a failover cluster, this is the name of the clustered instance of the head node.

This error happens when the following two conditions are true:

  • Some of the custom user files are configured as read-only.

  • The last step of importing data during the upgrade process fails or is canceled, and the upgrade is run again.

Workaround

If you encounter this issue, perform the following steps:

  1. Delete the following two folders: \\<head_node>\REMINST\Drivers and \\<head_node>\REMINST\Config, where <head_node> is the name of the head node that you are upgrading. For a standalone head node, this is the computer name, and for a head node in a failover cluster, this is the name of the clustered instance of the head node.

  2. Delete the following file on the head node: %CCP_HOME%Bin\CcpPower.cmd. For example: C:\Program Files\Microsoft HPC Pack\Bin\CcpPower.cmd.

  3. Restart the upgrade process, or continue with it if the upgrade wizard is still open.
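
Steps 1 and 2 can be performed in an elevated Command Prompt window on the head node. In the following sketch, the attrib command clears the read-only attribute that causes the copy failure:

attrib -r \\<head_node>\REMINST\Drivers\*.* /s
rmdir /s /q \\<head_node>\REMINST\Drivers
rmdir /s /q \\<head_node>\REMINST\Config
del "%CCP_HOME%Bin\CcpPower.cmd"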


Upgrading from Windows HPC Server 2008 may fail on clusters where a node XML file was used

In some circumstances, when upgrading a Windows HPC Server 2008 cluster where nodes were imported from a node XML file, the upgraded Windows HPC Server 2008 R2 cluster will have erroneous information for the nodes and might not be able to communicate with the nodes.

Workaround

After upgrading the head node of your HPC cluster, in HPC Cluster Manager, delete all the nodes and then add the nodes again by importing the node XML file that was used for the Windows HPC Server 2008 cluster. If needed, apply a node template to the nodes that you imported.
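
In HPC PowerShell, the sequence looks roughly like the following sketch. The node group and XML path are placeholders, nodes must typically be offline before they can be removed, and you should verify the cmdlet names on your installation with Get-Command:

Get-HpcNode -GroupName ComputeNodes | Remove-HpcNode
Import-HpcNodeXml -Path C:\Deploy\nodes.xml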


The diagnostic test Available Software Updates for Node Report fails when nodes cannot connect to Microsoft Update or a local WSUS server

If you run the diagnostic test Available Software Updates for Node Report and the nodes in your HPC cluster cannot connect to Microsoft Update over an Internet connection or to a Windows Server Update Services (WSUS) server on your enterprise network, the test will fail and an exception error message will be displayed in the test results.

Workaround

Before running the diagnostic test Available Software Updates for Node Report, make sure that the nodes in your cluster can connect to Microsoft Update or to a WSUS server on your enterprise network.
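
One quick way to verify connectivity from every node before rerunning the test is to run a ping through the clusrun command-line tool, which runs a command on all nodes by default. The WSUS server name below is a placeholder:

clusrun ping -n 1 wsus.contoso.com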


On large clusters, MPI jobs that include a large number of nodes can fail, including MPI Ping-Pong diagnostic tests

On large clusters, Message Passing Interface (MPI) jobs that include a large number of nodes or all the nodes in the cluster can fail to run because of a data capacity overflow of the CCP_NODES environment variable that is used by Windows HPC Server 2008 R2. This includes the MPI Ping-Pong diagnostic tests in HPC Pack 2008 R2.

The CCP_NODES environment variable is used to store the computer name and the number of processors for each node in an MPI job. When an MPI job includes a large number of nodes or all the nodes in a large cluster, the amount of information that needs to be included in the CCP_NODES environment variable can exceed the data capacity of an environment variable, causing the job output to display the following error:

The HPC Server Scheduler could not fit all the node names for this job in the CCP_NODES environment variable, preventing this job from running on all assigned nodes. Use mpiexec's "-machinefile" option to specify all the nodes to run this job.

Workaround

There are three possible workarounds:

  • Use short node names for large clusters on which you want to run MPI jobs that include a large number of nodes.

  • Do not run MPI jobs on a large number of nodes or on all the nodes in your cluster at the same time, especially the MPI Ping-Pong diagnostic tests. Run these jobs on groups of nodes, so that you cover the cluster in sections.

  • Create a file that includes a list of the nodes where you want to run an MPI job (known as a machine file), store that file in a shared folder that is accessible from all the nodes in your cluster, and then use the /machinefile parameter for mpiexec to include the file when running MPI jobs. When you include a machine file, it is used instead of the CCP_NODES environment variable. A sketch of this approach appears at the end of this section.

    Note
    • For more information about how to use a machine file, see the command reference for mpiexec.

    • You can also use a script to dynamically create the machine file and then start your MPI job. For example, the following HPC PowerShell expression can be used to retrieve the list of nodes that are assigned to a task in a job: (Get-HpcTask -JobId $Job -TaskId $task).RequiredNodes
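
For example, a script could write the machine file to a shared folder and then launch mpiexec with it. The following HPC PowerShell sketch assumes that RequiredNodes returns a comma-separated string; the share path, variable names, and application name are placeholders:

$nodes = (Get-HpcTask -JobId $jobId -TaskId $taskId).RequiredNodes -split ','
$nodes | Out-File "\\<head_node>\MpiShare\machines.txt" -Encoding ascii
mpiexec -machinefile \\<head_node>\MpiShare\machines.txt MyMpiApp.exe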
