Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 2

Updated: June 2011

Applies To: Windows HPC Server 2008 R2

These release notes address late-breaking issues and information about Service Pack 2 (SP2) for Microsoft® HPC Pack 2008 R2. You can install HPC Pack 2008 R2 SP2 to upgrade an existing Windows® HPC Server 2008 R2 with SP1 cluster.

Important
  • You cannot install HPC Pack 2008 R2 SP2 to upgrade the release to manufacturing (RTM) version of HPC Pack 2008 R2. You must install HPC Pack 2008 R2 SP1 first. For information about installing HPC Pack 2008 R2 SP1, see Release Notes for Microsoft HPC Pack 2008 R2 Service Pack 1.

  • The installer for HPC Pack 2008 R2 SP2 does not support uninstallation back to HPC Pack 2008 R2 SP1 or to the RTM version of HPC Pack 2008 R2. To uninstall HPC Pack 2008 R2 SP2, you must completely uninstall HPC Pack 2008 R2.

  • Upgrade from the Beta release of HPC Pack 2008 R2 SP2 is not supported. If you previously installed HPC Pack 2008 R2 SP2 Beta, you must completely uninstall it from the head node and from any client computers that are running it. Then, reinstall the RTM version of HPC Pack 2008 R2, HPC Pack 2008 R2 SP1, and then HPC Pack 2008 R2 SP2. (You can also follow the instructions in Creating a 'Service Pack integrated' installation point or, if you are a volume licensing customer, obtain integrated setup media for HPC Pack 2008 R2 with SP2.) Depending on your cluster and your node deployment methods, you might need to perform the same steps on compute nodes, broker nodes, and workstation nodes. However, if you deployed nodes from bare metal using an operating system image, you can wait to redeploy or reimage the nodes until after the head node is upgraded to HPC Pack 2008 R2 SP2.

  • If you created a virtual hard disk (VHD) for Windows Azure Virtual Machines by using HPC Pack 2008 R2 SP2 Beta, you must recreate the VHD by using the release version of HPC Pack 2008 R2 SP2 before you can deploy Windows Azure virtual machine (VM) nodes.

In this topic:

  • Before you install HPC Pack 2008 R2 SP2

  • Compatibility of Windows HPC Server 2008 R2 SP2 with previous versions of Windows HPC Server 2008 R2

  • Install Microsoft HPC Pack 2008 R2 SP2 on the head node

  • Install HPC Pack 2008 R2 SP2 on a high availability head node

  • Install HPC Pack 2008 R2 SP2 on compute nodes, WCF broker nodes, and workstation nodes

  • Install the Microsoft HPC Pack web components

  • Runtime data share created during service pack installation

  • Install the HPC soft card key storage provider

  • Uninstall HPC Pack 2008 R2 with SP2

  • Known issues

Before you install HPC Pack 2008 R2 SP2

Be aware of the following items and recommendations before you install HPC Pack 2008 R2 SP2 on the head node:

  • When you install the service pack, new indexes and new parameters for some procedures are added to the HPC databases. To prevent potential data loss, create backups of the following databases before installing the service pack:

    • Cluster management database

    • Job scheduling database

    • Reporting database

    • Diagnostics database

    You can use a backup method of your choice to back up the HPC databases; a scripted sketch appears after this list. For more information and an overview of the backup and restore process, see Backing Up and Restoring a Windows HPC Server Cluster.

  • If the installation fails, you must completely uninstall HPC Pack 2008 R2, restore the HPC Pack 2008 R2 databases from a backup, and then reinstall HPC Pack 2008 R2.

  • When you install the service pack, several settings related to HPC services are reset to their default values, including the following:

    • Firewall rules

    • Event log settings for all HPC services

    • Service properties such as dependencies and startup options

    • If the head node is configured for high availability in a failover cluster, the related resources that are configured in the failover cluster

    Other details can be found in the following log file after you install the service pack: %windir%\temp\HPCSetupLogs\hpcpatch-DateTime.txt

  • Close all open windows for applications related to HPC Pack 2008 R2, such as HPC Cluster Manager, before you install the service pack.

  • Ensure that all diagnostic tests have finished, or are canceled, before you install the service pack.

  • Do not apply the service pack during critical times or while a long-running job is still running. When you upgrade a head node or another node in a cluster, you may be prompted to restart the computer to complete the installation.

  • Before you install the service pack, ensure that you stop Windows Azure nodes that you previously deployed in Windows HPC Server 2008 R2 SP1 so that they are in the Not-Deployed state (a scripted sketch appears after this list). If you do not do this, you may be unable to use or delete the nodes after the service pack is installed, but charges will continue to accrue in Windows Azure. To use the existing Windows Azure nodes in Windows HPC Server 2008 R2 SP2, you must restart them after the service pack is installed.

  • When you install the service pack on the head node, the files that are used to deploy a compute node or a Windows Communication Foundation (WCF) broker node from bare metal are also updated. Later, if you install a new compute node or WCF broker node from bare metal or reimage an existing node, the service pack is automatically applied to that node.
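
The database backups and the Windows Azure node shutdown described above can be scripted. The following is a minimal sketch, assuming the default database names (HPCManagement, HPCScheduler, HPCReporting, and HPCDiagnostics) and the default local SQL Server Express instance (.\COMPUTECLUSTER) created by HPC Pack setup; the backup folder is hypothetical, and the Location property and Stop-HpcAzureNode cmdlet are assumed from the Windows Azure node support introduced in SP1. Adjust the instance and database names if your HPC databases are on a remote server running SQL Server.

    # Back up the four HPC databases (default names and instance assumed).
    $instance  = '.\COMPUTECLUSTER'
    $backupDir = 'C:\HpcDbBackup'    # hypothetical backup folder
    New-Item -Path $backupDir -ItemType Directory -Force | Out-Null
    foreach ($db in 'HPCManagement','HPCScheduler','HPCReporting','HPCDiagnostics') {
        sqlcmd -E -S $instance -Q "BACKUP DATABASE [$db] TO DISK = N'$backupDir\$db.bak' WITH INIT"
    }

    # Return previously deployed Windows Azure nodes to the Not-Deployed state.
    # Verify the property and cmdlet names on your cluster with
    # Get-HpcNode | Get-Member and Get-Command before running this part.
    $azureNodes = Get-HpcNode | Where-Object { $_.Location -eq 'Azure' }
    $azureNodes | ForEach-Object { Set-HpcNodeState -Name $_.NetBiosName -State offline }
    Stop-HpcAzureNode -Name ($azureNodes | ForEach-Object { $_.NetBiosName })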


Compatibility of Windows HPC Server 2008 R2 SP2 with previous versions of Windows HPC Server 2008 R2

The following list summarizes the supported compatibility of the release of Windows HPC Server 2008 R2 SP2 with Windows HPC Server 2008 R2 SP1 and the release to manufacturing (RTM) version of Windows HPC Server 2008 R2:

  • HPC Cluster Manager or HPC PowerShell in Windows HPC Server 2008 R2 SP2 can be used to manage a Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2 head node. However, functionality that has been added in Windows HPC Server 2008 R2 SP2 cannot be used in a Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2 cluster. Doing so usually results in an error. For information about features added in Windows HPC Server 2008 R2 SP2, see What's New in Windows HPC Server 2008 R2 Service Pack 2.

  • HPC Cluster Manager in Windows HPC Server 2008 R2 or Windows HPC Server 2008 R2 SP1 cannot be used to manage a Windows HPC Server 2008 R2 SP2 head node.

  • The job scheduler in Windows HPC Server 2008 R2 SP2 can be used to manage jobs in a Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2 cluster.

  • Service Oriented Architecture (SOA)-based client applications that are written to run SOA jobs on a Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2 cluster can also be used to run SOA jobs on a Windows HPC Server 2008 R2 SP2 cluster.

  • A Windows HPC Server 2008 R2 SP2 cluster can accept SOA jobs from a SOA client that is compatible with Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2. However, the SOA client cannot be used to set features that are new to Windows HPC Server 2008 R2 SP2. Doing so usually results in an error.

  • The job scheduler in Windows HPC Server 2008 R2 or in Windows HPC Server 2008 R2 SP1 can be used to manage jobs in a Windows HPC Server 2008 R2 SP2 cluster. However, the jobs cannot use job scheduling features that are new in SP2. For information about features added in Windows HPC Server 2008 R2 SP2, see What's New in Windows HPC Server 2008 R2 Service Pack 2.

  • In a cluster where the head node is upgraded to HPC Pack 2008 R2 SP2, the compute nodes and Windows Communication Foundation (WCF) broker nodes must also be upgraded to HPC Pack 2008 R2 SP2 to work properly.

  • In a cluster where the head node is upgraded to HPC Pack 2008 R2 SP2, HPC Pack 2008 R2 or HPC Pack 2008 R2 SP1 workstation nodes do not have to be upgraded to HPC Pack 2008 R2 SP2. However, the user activity detection settings for workstation nodes that can be configured in Windows HPC Server 2008 R2 SP2 are ignored, without warning, on workstation nodes that have not been upgraded.

    Note
    A head node where HPC Pack 2008 R2 or HPC Pack 2008 R2 SP1 is installed cannot connect to workstation nodes where HPC Pack 2008 R2 SP2 is installed.


Install Microsoft HPC Pack 2008 R2 SP2 on the head node

To install Microsoft HPC Pack 2008 R2 SP2 on the head node computer

  1. Obtain the installation program for HPC Pack 2008 R2 SP2 (HPC2008R2_SP2-x64.exe) by downloading and extracting the HPC2008R2SP2-Update-x64.zip file from the Microsoft Download Center. Save the service pack installation program to installation media or to a network location.

  2. Run HPC2008R2_SP2-x64.exe as an administrator from the location where you saved the service pack.

  3. Read the informational message that appears. If you are ready to apply the service pack, click OK.

  4. Continue to follow the steps in the installation wizard.

Note
  • After you install the service pack, the computer restarts.

  • You can confirm that HPC Pack 2008 R2 SP2 is installed on the head node. To view the version number in HPC Cluster Manager, on the Help menu, click About. If HPC Pack 2008 R2 SP2 is installed, the server version number and the client version number shown are similar to 3.2.xxxx.x. (A PowerShell check is sketched below.)
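
You can also check the version from HPC PowerShell. The following is a minimal sketch; it assumes that the HpcNode objects returned by Get-HpcNode expose a Version property, which you can confirm with Get-HpcNode | Get-Member.

    # In a plain PowerShell window, load the HPC snap-in first:
    # Add-PSSnapin Microsoft.HPC
    # List the reported HPC Pack version for each node; after SP2 the
    # head node version should be similar to 3.2.xxxx.x.
    Get-HpcNode | Format-Table NetBiosName, NodeState, Version -AutoSize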

To help ensure that bare metal deployment of nodes functions properly after the service pack is installed, it is recommended that you run the UpdateHpcWinPE.cmd script on the head node. This script upgrades the Windows PE image (boot.wim).

To run the UpdateHpcWinPE.cmd script

  1. Open an elevated Command Prompt window on the head node. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  2. Type the following command, and then press Enter:

    UpdateHpcWinPE.cmd
    

    The script performs the update of boot.wim.


Install HPC Pack 2008 R2 SP2 on a high availability head node

If you have set up a head node for high availability in the context of a failover cluster, use the following procedure to apply the service pack.

To install HPC Pack 2008 R2 SP2 on a high availability head node

  1. Obtain the installation program for HPC Pack 2008 R2 SP2 (HPC2008R2_SP2-x64.exe) by downloading and extracting the HPC2008R2SP2-Update-x64.zip file from the Microsoft Download Center. Save the service pack installation program to installation media or to a network location.

  2. Take the following high availability HPC services offline by using Failover Cluster Manager (a PowerShell alternative is sketched after this procedure): hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, and hpcsession.

  3. Install the service pack on the active head node by running HPC2008R2_SP2-x64.exe.

    After you install the service pack on the active head node, in most cases, the active head node restarts and fails over to the second head node.

    Note
    Because the second head node is not upgraded, Failover Cluster Manager might report a failed status for the resource group and the HPC services.
  4. To install the service pack on the second head node, do the following:

    1. Take the following high availability HPC services offline using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, and hpcsession.

    2. Verify that the second head node is the active head node. If it is not, make the second head node the active head node using Failover Cluster Manager.

    3. Install the service pack on the second head node.

      After you install the service pack on the second head node, the high availability HPC services are brought online automatically.

Important
During the installation of the service pack on each head node configured for high availability, leave the SQL Server resources online.
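
If you prefer a command line over Failover Cluster Manager for taking the high availability HPC services offline, the following is a minimal sketch that uses the FailoverClusters PowerShell module included in Windows Server 2008 R2. It assumes that the cluster resource names match the service names listed above; verify them with Get-ClusterResource before running it.

    # Take the high availability HPC resources offline; leave the SQL
    # Server resources online, as noted above.
    Import-Module FailoverClusters
    'hpcscheduler','hpcsdm','hpcdiagnostics','hpcreporting','hpcsession' |
        ForEach-Object { Stop-ClusterResource -Name $_ }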

To help ensure that bare metal deployment of nodes functions properly after the service pack is installed, it is recommended that you run the UpdateHpcWinPE.cmd script on each head node configured for high availability. This script upgrades the Windows PE image (boot.wim).

To run the UpdateHpcWinPE.cmd script

  1. Open an elevated Command Prompt window on the head node. Click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Continue.

  2. Type the following command, and then press Enter:

    UpdateHpcWinPE.cmd
    

    The script performs the update of boot.wim.

  3. Repeat the previous steps on the other head node in the failover cluster.


Install HPC Pack 2008 R2 SP2 on compute nodes, WCF broker nodes, and workstation nodes

To work properly with a head node that is upgraded with HPC Pack 2008 R2 SP2, existing compute nodes and WCF broker nodes must also be upgraded. You can optionally upgrade your existing workstation nodes. Depending on the type of node, you can use one of the following methods to install HPC Pack 2008 R2 SP2:

  • Reimage an existing compute node or broker node that was deployed using an operating system image. For more information, see Reimage Compute Nodes.

    Note
    After the head node is upgraded with HPC Pack 2008 R2 SP2, if you install a new node from bare metal or reimage an existing node, HPC Pack 2008 R2 with SP2 is automatically installed on that node.
  • Install SP2 on existing nodes that are running HPC Pack 2008 R2 with SP1, either manually or using a clusrun command.

To use clusrun to install HPC Pack 2008 R2 SP2 on existing nodes

  1. Copy the appropriate version of SP2 (HPC2008R2_SP2-x64.exe or HPC2008R2_SP2-x86.exe) to a shared folder such as \\headnodename\SP2.

  2. In HPC Cluster Manager, view nodes by node group to identify a group of nodes that you want to update (for example, ComputeNodes).

  3. Take the nodes in the node group offline.

  4. Open an elevated command prompt window and type the clusrun command appropriate for the operating system version of the nodes, for example (a complete scripted sketch follows this procedure):

    clusrun /nodegroup:ComputeNodes \\headnodename\SP2\HPC2008R2_SP2-x64.exe -unattend -SystemReboot
    
    Note
    The -SystemReboot parameter is required and causes the updated nodes to restart after the service pack is installed.

    After the service pack is installed and the nodes in the group restart, bring the nodes online.
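
The offline, install, and online steps above can also be run end to end from HPC PowerShell. The following is a minimal sketch, assuming the ComputeNodes group and the share name from step 1; run it on the head node after the nodes are idle.

    # Take the node group offline, install SP2 with clusrun, and bring the
    # nodes back online after they have restarted.
    foreach ($n in Get-HpcNode -GroupName ComputeNodes) {
        Set-HpcNodeState -Name $n.NetBiosName -State offline
    }
    clusrun /nodegroup:ComputeNodes \\headnodename\SP2\HPC2008R2_SP2-x64.exe -unattend -SystemReboot
    # Wait for the nodes to restart and report in, then:
    foreach ($n in Get-HpcNode -GroupName ComputeNodes) {
        Set-HpcNodeState -Name $n.NetBiosName -State online
    }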

To run the HPC Pack 2008 R2 SP2 installation program manually on individual nodes that are currently running HPC Pack 2008 R2 with SP1, you can copy HPC2008R2_SP2-x64.exe or HPC2008R2_SP2-x86.exe to a shared folder on the head node. Then access the existing nodes by Remote Desktop Connection to install the service pack from the shared folder.

Important
If you have WCF broker nodes that are configured for high availability in a failover cluster, you should install HPC Pack 2008 R2 SP2 on the high availability broker nodes as follows:
  1. Install HPC Pack 2008 R2 SP2 on the passive broker node

  2. Fail over the active broker node to the passive broker node

  3. Install HPC Pack 2008 R2 SP2 on the new passive broker node (which has not yet been upgraded)

Note
For more information about how to perform maintenance on WCF broker nodes that are configured in a failover cluster, see Performing Maintenance on WCF Broker Nodes in a Failover Cluster with Windows HPC Server 2008 R2.


Install the Microsoft HPC Pack web components

A separate installation program (HpcWebComponents.msi) and a configuration script (Set-HPCWebComponents.ps1) are used to install and configure the HPC Pack 2008 R2 web components.

Note
The HPC Pack 2008 R2 web components can only be installed on the head node of a Windows HPC Server 2008 R2 with SP2 or later cluster.

For additional information and step-by-step procedures, see Install the Microsoft HPC Pack Web Components.


Runtime data share created during service pack installation

During the installation of the service pack on the head node, the Runtime$ file share is automatically configured using the local path C:\HPCRuntimeData\ for use by the HPC runtimes, including the SOA common data runtime. This file share is automatically configured for read and write access by HPC users, and for full control by HPC administrators. A default storage quota of 25 GB is configured.
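
To confirm the share and the local folder behind it on the head node, the following is a minimal sketch using WMI, which is available on Windows Server 2008 R2:

    # Show the Runtime$ share and its local path.
    Get-WmiObject -Class Win32_Share -Filter 'Name="Runtime$"' |
        Format-List Name, Path, Description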

You can configure a shared folder on a separate file server to serve as the runtime data share for a Windows HPC Server 2008 R2 SP2 or later cluster. For more information and procedures, see Configure the Runtime Data Share.


Install the HPC soft card key storage provider

To install the HPC soft card key storage provider (KSP), you must use the installation program HpcKsp_x64.msi or HpcKsp_x86.msi. You can download and extract the file from HPC2008R2SP2-Update-x64.zip or HPC2008R2SP2-Update-x86.zip, available at the Microsoft Download Center, or locate the file on the full installation media for HPC Pack 2008 R2 with SP2 or later. To enable soft card authentication when submitting jobs to the Windows HPC Server 2008 R2 SP2 cluster, you must install the KSP on the following computers:

  • The head node of your cluster

  • The compute nodes and workstation nodes of your cluster

To install the KSP, run the version of the installation program that is appropriate for the operating system on each computer: HpcKsp_x64.msi or HpcKsp_x86.msi.

Important
You can install the HPC soft card KSP only on an edition of Windows® 7 or Windows Server® 2008 R2.


Uninstall HPC Pack 2008 R2 with SP2

If you need to uninstall HPC Pack 2008 R2 with SP2, uninstall the different features in the following order:

  1. Microsoft HPC Pack 2008 R2 Services for Excel 2010

  2. Microsoft HPC Pack 2008 R2 Server Components

  3. Microsoft HPC Pack 2008 R2 Client Components

  4. Microsoft HPC Pack 2008 R2 MS-MPI Redistributable Pack

Important
Not all features are installed on all computers. For example, Microsoft HPC Pack 2008 R2 Services for Excel 2010 is only installed on the head node when the HPC Pack 2008 R2 Enterprise and HPC Pack 2008 R2 for Workstation edition is installed.

When HPC Pack 2008 R2 is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2008 R2, you can remove the following programs if they will no longer be used:

  • Microsoft Report Viewer Redistributable 2008

  • Microsoft SQL Server 2008 (64-bit)

    Note
    This program also includes: Microsoft SQL Server 2008 Browser, Microsoft SQL Server 2008 Setup Support Files, and Microsoft SQL Server VSS Writer.
  • Microsoft SQL Server 2008 Native Client

Additionally, the following server roles and features might have been added when HPC Pack 2008 R2 was installed, and can be removed if they will no longer be used:

  • Dynamic Host Configuration Protocol (DHCP) Server server role

  • File Services server role

  • Network Policy and Access Services server role

  • Windows Deployment Services server role

  • Microsoft .NET Framework feature

  • Message Queuing feature


Known issues

The following are known issues in HPC Pack 2008 R2 SP2. For additional information about specific features, see What's New in Windows HPC Server 2008 R2 Service Pack 2.

HPC Basic Profile Web Service is deprecated

The HPC Basic Profile Web Service is deprecated in Windows HPC Server 2008 R2 with SP2. Use the Web Service Interface instead. For information about the Web Service Interface, see Working with the Web Service Interface.

Cannot add or remove HPC users or HPC administrators when connecting to a Windows HPC Server 2008 R2 SP1 or RTM cluster

Because of a known issue, you cannot use HPC Cluster Manager or HPC PowerShell in Windows HPC Server 2008 R2 SP2 to add or remove users or administrators from a Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2 RTM cluster. If you do this, you will see an error message similar to Error 7000: GlobalGroup. You must perform this operation directly on the Windows HPC Server 2008 R2 SP1 or Windows HPC Server 2008 R2 head node, or connect to that head node by using a client that is running a matching version of HPC Cluster Manager or HPC PowerShell.
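
For example, you can add or remove members directly on the older head node by using the HPC PowerShell installed there. The account name below is a placeholder:

    # Run in HPC PowerShell on the SP1 or RTM head node itself.
    Add-HpcMember -Name 'CONTOSO\jane' -Role User
    Remove-HpcMember -Name 'CONTOSO\jane' -Role User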

Possible loss of cluster users when using high availability head nodes

Under certain conditions in a cluster configured for high availability of the head node in a failover cluster, if one of the head nodes is not available (or the HPC Management Service is not running on it), existing users are lost. The following two scenarios can occur:

  • During the migration of a failover cluster running Windows HPC Server 2008 to Windows HPC Server 2008 R2, the newly configured cluster will not have all of the cluster users and administrators.

  • In an existing failover cluster running Windows HPC Server 2008 R2, when a cluster user is added while the HPC Management Service is not running on one of the head nodes, the newly added user will be removed when the HPC Management Service starts.

Workaround

If this problem occurs during the migration of a failover cluster, perform the following procedure:

To add cluster users during the migration of a failover cluster

  1. In Step 5: Install HPC Pack 2008 R2 on the new failover cluster and import configuration data, after you run import.cmd to import the configuration data to the new failover cluster (step 9 in the procedure in that topic), wait 3 minutes.

  2. Start HPC PowerShell. Click Start, point to All Programs, click Microsoft HPC Pack 2008 R2, right-click HPC PowerShell, and then click Run as administrator. If the User Account Control dialog box appears, confirm that the action it displays is what you want, and then click Yes.

  3. Type the following command:

    Import-Clixml -Path "<exportFolder>\Backup\Configurations\Cluster_Users.xml" | % { Add-HpcMember -Name $_.Name -Role $_.Role.Value | out-null }
    

    where

    <exportFolder> is the network path to the shared export folder.

  4. Continue with step 10 and step 11 in the procedure in Step 5: Install HPC Pack 2008 R2 on the new failover cluster and import configuration data.

  5. After you perform the required tasks in the Deployment To-do list, wait 5 minutes.

  6. If the newly configured cluster still does not have all of the cluster users and administrators, run the HPC PowerShell command in step 3 above again.

  7. Continue with the steps to migrate a failover cluster to Windows HPC Server 2008 R2. For more information, see Migrating a Failover Cluster Running Windows HPC Server 2008 to Windows HPC Server 2008 R2 Step-by-Step Guide.

If the problem occurs while the HPC Management Service is stopped on a head node, after you restart the service, check the HPC administrators and HPC users. If necessary, manually add them.

Runtime data share permissions may be set incorrectly on the second head node in a cluster configured for high availability of the head node

In a Windows HPC Server 2008 R2 SP2 cluster configured for high availability of the head node in the context of a failover cluster, the runtime data share that is configured during installation may be accessible only when the first configured head node is active. When a failover occurs, applications that submit jobs requiring the runtime data share will get an Access denied exception. This problem occurs if the runtime data share was configured to use a local folder on the first shared disk in the clustered file server within the failover cluster.

Workaround

If you encounter this problem, configure the necessary permissions on the share so that applications can submit jobs that require the runtime data share on either head node. Perform the following general steps (an icacls sketch for step 4 follows this procedure):

  1. In Failover Cluster Manager, fail over to the head node in the failover cluster on which the problem occurs.

  2. Click Start, point to Administrative Tools, and then click Share and Storage Management.

  3. Right-click Runtime$, click Properties, and then click Permissions.

  4. Set the following NTFS permissions for the HPCUsers group on the local computer:

    • Read

    • Create Files/Write Data

    • Create Folders/Append Data

  5. Set the following share permissions for the HPCUsers group:

    • Change

    • Read

  6. Set the following share permissions for the HPCAdminMirror group:

    • Change

    • Read
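
For step 4, the NTFS permissions can also be applied with icacls. The following is a minimal sketch; the path is the local folder behind Runtime$ (C:\HPCRuntimeData is the default mentioned earlier, so adjust it for your cluster), and the mapping of the listed permissions to icacls rights is an approximation. The share permissions in steps 5 and 6 are still set in Share and Storage Management.

    # Grant HPCUsers: Read (R), Create Files/Write Data (WD), and
    # Create Folders/Append Data (AD), inherited by files and subfolders.
    icacls C:\HPCRuntimeData /grant 'HPCUsers:(OI)(CI)(R,WD,AD)'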

A runtime data share may be missing the SOA folder on a cluster configured for high availability of the head node

Under some conditions in a Windows HPC Server 2008 R2 SP2 cluster configured for high availability of the head node in the context of a failover cluster, if the runtime data share is configured on a file server (including on one of the high availability head nodes) instead of on the first shared disk of the failover cluster, the SOA folder and its subfolders might not be created successfully after setup. Without this path, applications that use the SOA common data feature will fail. This problem is caused by a misconfiguration of the permissions on the runtime data share.

To avoid this problem, or to correct it after it occurs, ensure that you perform the following configuration steps in addition to the steps to configure a runtime data share for a cluster that contains a single head node:

  1. Configure the following NTFS permissions on the file share:

    • Computer account of each head node computer in the Windows HPC Server 2008 R2 cluster: Full control

    • Computer account of the clustered instance of the head node: Full control

  2. If you are installing HPC Pack 2008 R2 with SP2, in the installation wizard specify the file share as the runtime data share, and then complete the wizard.

  3. If you are configuring a new runtime data share in an existing cluster, do the following:

    1. Run the cluscfg command on the head node to modify the HPC_RUNTIMESHARE environment variable. For example, to update the configuration of a cluster with a clustered head node instance named MyHeadNode to use the new runtime data share \\MyFileServer\NewRuntimeShare, type the following command at an elevated command prompt:

      cluscfg setenvs /scheduler:MyHeadNode HPC_RUNTIMESHARE=\\MyFileServer\NewRuntimeShare
      
    2. On the active head node in the cluster, stop and restart the hpcsession service. At an elevated command prompt, type the following commands:

      net stop hpcsession
      net start hpcsession
      

      Alternatively, type the following command:

      sc control hpcsession 128
      
  4. Configure the following NTFS permissions on the SOA subfolder of the runtime data share:

    • Computer account of each head node computer in the Windows HPC Server 2008 R2 cluster: Full control

    • Computer account of the clustered instance of the head node: Full control

  5. On the active head node in the cluster, stop and restart the hpcsession service. At an elevated command prompt, type the following commands:

    net stop hpcsession
    net start hpcsession
    

    Alternatively, type the following command:

    sc control hpcsession 128
    

For more information, see Configure the Runtime Data Share.

A Windows Server service pack must be applied first to the head node

When applying a Windows Server 2008 R2 operating system service pack to the nodes in your cluster, the operating system service pack must be applied first to the head node (or head nodes). Deploying the operating system service pack first to compute nodes or broker nodes may cause the nodes to enter the Error state.

Workaround

If you applied a Windows Server 2008 R2 operating system service pack to the compute nodes and broker nodes before installing it on the head node, you can correct the problem by installing the service pack on the head node and restarting the head node computer.

Add one VHD at a time to the image store

While a VHD is being added (imported) to the image store, it is possible to start to add another VHD. If you do this, the status and error messages that you see may refer to the incorrect VHD image. To avoid this problem, you should add only one VHD at a time to the image store.

Only nodes on which the number of cores is an exact multiple of the number of sockets can be brought online

In Windows HPC Server 2008 R2 SP2, the total core count on a node must be an exact multiple of the count of processor sockets; otherwise, the node cannot be brought online. You might encounter this issue if you modified the logical core or socket count on a node. For example, to be brought online, a computer with 4 processor sockets must have 4, 8, 12, 16, or some other multiple of 4 cores. This restriction is enforced neither in the RTM version of Windows HPC Server 2008 R2 nor in Windows HPC Server 2008 R2 SP1. The restriction was added to validate a new configuration option in Windows HPC Server 2008 R2 SP2 that allows you to oversubscribe or undersubscribe a node by specifying the number of cores or sockets that the job scheduler can use for cluster jobs.
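
To find nodes affected by this restriction, the following is a minimal sketch from HPC PowerShell; the ProcessorCores and Sockets property names are assumptions, so inspect Get-HpcNode | Get-Member on your cluster first:

    # List nodes whose core count is not an exact multiple of the socket count.
    Get-HpcNode | Where-Object { ($_.ProcessorCores % $_.Sockets) -ne 0 } |
        Format-Table NetBiosName, ProcessorCores, Sockets -AutoSize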

Setting subscribedCores to a large number or attempting to run a large number of Excel processes may cause tasks to fail

When setting the subscribedCores node property to a large number such as 200 and submitting a job with a large number of tasks, you may see tasks that fail with the error code -1073741502. You may also encounter task failures when attempting to run more than 16 Excel processes on a single node. This occurs when the job scheduler is no longer able to create new sessions on the computer.

Workaround

You can modify the SharedSection data in the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\SubSystems\Windows registry value to increase the size of the desktop heap that is allocated for a noninteractive desktop window station on the compute nodes where oversubscription is used. For example, increasing the heap size allocated for a noninteractive window station to the size allocated for an interactive window station may prevent the problem. To increase the size of the desktop heap, see the instructions in the Microsoft Knowledge Base (https://support.microsoft.com/kb/184802).
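
Before editing the registry, you can inspect the current SharedSection values. In the string returned below, the second SharedSection number is the desktop heap size (in KB) for interactive window stations, and the third is the size for noninteractive window stations, which is the one that applies here:

    # Show the Windows value that contains SharedSection=xxxx,yyyy,zzzz.
    $key = 'HKLM:\System\CurrentControlSet\Control\Session Manager\SubSystems'
    (Get-ItemProperty -Path $key).Windows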

A failure in the loading of a SOA service might not cause a task to fail when a node preparation task is used

In some cases, when you submit a service-oriented architecture (SOA) job that uses a node preparation task, and there is a failure in the loading of a SOA service, the service loading failure is not detected. Thus, the task will keep running without being canceled and retried, but no requests will be sent to that task. This can cause a SOA job to run less efficiently on a cluster, but no critical error is reported.

“400 Bad Request” error when using the REST interface

When submitting XML job descriptions to the cluster using the REST interface, you may receive a 400 Bad Request error message. This error may indicate that your XML has a missing or malformed XML namespace property.

To avoid this problem, update your XML template, or modify the code that you use to generate the XML request, to include the necessary properties. For more information, see the REST API reference.
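
As an illustration only, the following sketch posts an XML body with an explicit default namespace to the REST interface by using System.Net.WebClient. The URL, the property names, and the namespace URI are assumptions included to show the shape of a request; take the exact values from the REST API reference.

    $url  = 'https://headnodename/WindowsHpc/mycluster/Jobs'   # hypothetical URL
    $body = '<ArrayOfProperty xmlns="http://schemas.microsoft.com/HPCS2008R2/common">' +
            '<Property><Name>MinCores</Name><Value>1</Value></Property>' +
            '</ArrayOfProperty>'
    $wc = New-Object System.Net.WebClient
    $wc.UseDefaultCredentials = $true
    $wc.Headers['Content-Type'] = 'application/xml; charset=utf-8'
    $wc.UploadString($url, 'POST', $body)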

hpcpack does not support all valid filename characters

When using the hpcpack utility to create a package, creation may fail if you have file or folder names that include the # character. For example, this character may appear in file or folder names when you use C# samples.

To avoid this problem, before using hpcpack, rename your files and folders so that their names do not include the # character.
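
A minimal sketch to do the rename in PowerShell, assuming a hypothetical package folder named .\MyPackage (the deepest paths are renamed first so that parent paths stay valid):

    # Replace the # character with _ in all file and folder names.
    Get-ChildItem -Path .\MyPackage -Recurse |
        Where-Object { $_.Name -match '#' } |
        Sort-Object { $_.FullName.Length } -Descending |
        ForEach-Object { Rename-Item -Path $_.FullName -NewName ($_.Name -replace '#', '_') }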

Configuring a SQL Server alias incorrectly causes installation of HPC Pack to fail

Installation of HPC Pack 2008 R2 on the head node, with HPC databases stored on remote servers running SQL Server, can fail when using SQL Server aliasing if both the 32-bit and the 64-bit aliases have not been configured.

Workaround

Configure both 32-bit and 64-bit SQL Server aliases on the head node computer, and then try to install HPC Pack 2008 R2 again. Alternatively, install HPC Pack 2008 R2 on the head node, with HPC databases stored on remote servers, without configuring a SQL Server alias.
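
As a sketch of the first workaround, matching 64-bit and 32-bit aliases can be created by writing the SQL Server client ConnectTo registry keys on the head node; the alias and server names below are examples. The cliconfg.exe tool can be used instead (%windir%\System32\cliconfg.exe configures the 64-bit alias, and %windir%\SysWOW64\cliconfg.exe configures the 32-bit alias).

    # Create the same TCP alias for both the 64-bit and 32-bit SQL clients.
    $aliasName = 'HpcDbAlias'                             # example alias name
    $target    = 'DBMSSOCN,sqlserver01.contoso.com,1433'  # TCP, server, port
    foreach ($key in 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo',
                     'HKLM:\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo') {
        if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
        Set-ItemProperty -Path $key -Name $aliasName -Value $target
    }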