Release Notes for HPC Pack 2012 R2 Update 3
Applies To: Microsoft HPC Pack 2012 R2
These release notes address late-breaking issues and information for the high-performance computing (HPC) cluster administrator about Microsoft® HPC Pack 2012 R2 Update 3. Use this update to upgrade an existing HPC cluster, either on-premises or in Azure infrastructure services (IaaS), that is currently running HPC Pack 2012 R2 Update 2, or use the complete installation package to set up a new on-premises HPC cluster. You can also use HPC Pack 2012 R2 Update 3 VM images in the Azure Marketplace to set up a complete HPC cluster in Azure IaaS.
When performing an upgrade installation, you should upgrade all head nodes, compute nodes, Windows Communication Foundation (WCF) broker nodes, workstation nodes, unmanaged server nodes, and computers that are running the HPC Pack client utilities. For important information about new features in HPC Pack 2012 R2 Update 3, see What's New in HPC Pack 2012 R2 Update 3.
Go to the Microsoft Download Center to download the upgrade and installation packages for HPC Pack 2012 R2 Update 3.
Note
To get started with a new cluster installation, see the Getting Started Guide for Microsoft HPC Pack 2012 R2 and HPC Pack 2012. If you are migrating from an HPC Pack 2008 R2 cluster, see Migrate a Cluster to HPC Pack 2012 R2 or HPC Pack 2012. For deployment in Azure IaaS, see HPC Pack cluster options in Azure.
In this topic:
Before you upgrade to HPC Pack 2012 R2 Update 3
Upgrade the head node to Microsoft HPC Pack 2012 R2 Update 3
Upgrade compute nodes, WCF broker nodes, workstation nodes, and unmanaged server nodes to HPC Pack 2012 R2 Update 3
Redeploy existing Azure nodes
Upgrade client computers to HPC Pack 2012 R2 Update 3
Uninstall HPC Pack 2012 R2 Update 3
Known issues
HPC web portal may display job-related pages slowly
Linux on-premises nodemanager may fail to start if port 40002 has already been used on Linux nodes
Linux compute nodes may fail to work properly if -clusname is not set properly during compute node setup
Heat map view of Linux nodes may experience data loss if configured with short display interval
Limitations when downgrading from Update 3
Issue with node release task on Azure Batch pool
Before you upgrade to HPC Pack 2012 R2 Update 3
Perform the following actions before you upgrade to HPC Pack 2012 R2 Update 3:
Take all compute nodes, workstation nodes, and unmanaged server nodes offline and wait for all current jobs to drain.
If you have a node template in which an Automatic availability policy is configured, set the availability policy to Manual.
Stop all existing Azure “burst” nodes so that they are in the Not-Deployed state. If you do not stop them, you may be unable to use or delete the nodes from HPC Cluster Manager after the upgrade, but charges for their use will continue to accrue in Azure. You must redeploy (provision) the Azure nodes after you upgrade the head node.
Note
Under certain conditions, the upgrade installation program might prompt you to stop Azure nodes before it upgrades the head node, even if you have already stopped all Azure nodes (or do not have any Azure nodes deployed). In this case, you can safely continue the installation.
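The stop operation can also be performed from HPC PowerShell on the head node. This is a sketch, assuming the default AzureNodes node group and the standard Stop-HpcAzureNode cmdlet; verify in HPC Cluster Manager afterward that the nodes reach the Not-Deployed state.

```powershell
# Sketch: stop all Azure "burst" nodes before upgrading the head node.
# Assumes the default AzureNodes node group; adjust the group name if needed.
Add-PSSnapin Microsoft.HPC          # HPC cmdlets are delivered as a snap-in
Get-HpcNode -GroupName AzureNodes | Stop-HpcAzureNode
```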
Ensure that all diagnostic tests have finished or are canceled.
Close any HPC Cluster Manager and HPC Job Manager applications that are connected to the cluster head node.
After all active operations on the cluster have stopped, back up all HPC databases by using a backup method of your choice.
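One backup option is sqlcmd against the head node's SQL Server instance. This is a sketch, assuming a default single-head-node installation that stores the HPC databases (HPCManagement, HPCScheduler, HPCReporting, HPCDiagnostics, HPCMonitoring) in the local COMPUTECLUSTER instance; adjust the server name, database names, and backup path for your environment, particularly if the databases are on a remote server.

```powershell
# Sketch: back up the HPC databases with sqlcmd (default local instance assumed).
$databases = 'HPCManagement','HPCScheduler','HPCReporting','HPCDiagnostics','HPCMonitoring'
foreach ($db in $databases) {
    sqlcmd -S ".\COMPUTECLUSTER" -E -Q "BACKUP DATABASE [$db] TO DISK = N'C:\HpcBackup\$db.bak' WITH INIT"
}
```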
Additional considerations for installing the upgrade
When you upgrade, several settings that are related to HPC services are reset to their default values, including the following:
Firewall rules
Event log settings for all HPC services
Service configuration properties such as dependencies and startup options
Service configuration files for the HPC services (for example, HpcSession.exe.config)
MSMQ message storage size
If the head node or WCF broker nodes are configured for high availability in a failover cluster, the HPC Pack related resources that are configured in the failover cluster
After you upgrade, you may need to re-create settings that you have customized for your cluster or restore them from backup files.
Note
You can find more installation details in the following log file after you upgrade the head node: %windir%\temp\HPCSetupLogs\hpcpatch-DateTime.txt
When you upgrade the head node, the files that the head node uses to deploy a compute node or a WCF broker node from bare metal are also updated. Later, if you install a new compute node or WCF broker node from bare metal or if you reimage an existing node, the upgrade is automatically applied to that node.
Upgrade the head node to Microsoft HPC Pack 2012 R2 Update 3
To upgrade the head node to Microsoft HPC Pack 2012 R2 Update 3
Download the x64 version of the HPC Pack 2012 R2 Update 3 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.
Run the installation program as an administrator from the location where you saved it.
Read the informational message that appears. If you are ready to upgrade, click OK.
Note
- After the installation completes, if you are prompted, restart the computer.
- You can confirm that the head node is upgraded to HPC Pack 2012 R2 Update 3. To view the version number in HPC Cluster Manager, on the Help menu, click About. The server version number and the client version number that appear will be 4.5.5079.0.
If you have set up a head node for high availability in the context of a failover cluster, use the following procedure to apply the update.
To upgrade a high-availability head node to HPC Pack 2012 R2 Update 3
Download the x64 version of the HPC Pack 2012 R2 Update 3 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.
Take the following high-availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, hpcsession, and hpcsoadiagmon.
Upgrade the active head node by running the installation program as an administrator from the location where you saved it.
After you upgrade the active head node, in most cases, the active head node restarts and fails over to the second head node.
Note
Because the additional head nodes are not yet upgraded, Failover Cluster Manager might report a failed status for the resource group and the HPC services.
Use the following procedure to upgrade any additional head node:
Take the following high-availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, hpcsession, and hpcsoadiagmon.
Verify that the head node is the active head node. If it is not, use Failover Cluster Manager to make the head node the active head node.
Upgrade the head node.
If you have additional head nodes in the cluster, fail over the current active head node to a passive head node. After failover occurs, upgrade the new active head node according to the preceding steps.
Important
While you are upgrading each head node that is configured for high availability, leave the Microsoft SQL Server resources online.
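As an alternative to the Failover Cluster Manager UI, the high-availability HPC services listed in the procedure can be taken offline from PowerShell. A sketch using the FailoverClusters module, assuming the resource names match those listed above:

```powershell
# Sketch: take the high-availability HPC resources offline before the upgrade.
Import-Module FailoverClusters
$resources = 'hpcscheduler','hpcsdm','hpcdiagnostics','hpcreporting','hpcsession','hpcsoadiagmon'
$resources | ForEach-Object { Stop-ClusterResource -Name $_ }
# Leave the Microsoft SQL Server resources online, as noted above.
```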
Upgrade compute nodes, WCF broker nodes, workstation nodes, and unmanaged server nodes to HPC Pack 2012 R2 Update 3
To work with a head node that is upgraded to HPC Pack 2012 R2 Update 3, you must also upgrade existing compute nodes and WCF broker nodes. You can optionally upgrade your existing workstation nodes and unmanaged server nodes. Depending on the type of node, you can use one of the following methods to upgrade to HPC Pack 2012 R2 Update 3:
Upgrade existing nodes that are running HPC Pack 2012 R2 Update 2, either manually or by using a clusrun command.
Note
If you do not have administrative permissions on workstation nodes and unmanaged server nodes in the cluster, the clusrun command might not be able to upgrade those nodes. In these cases, an administrator of the workstations and unmanaged servers should perform the upgrade.
Reimage an existing compute node or broker node that was deployed by using an operating system image. For more information, see Reimage Compute Nodes.
Note
- Ensure that you edit the node template to add a step to copy the MS-MPI installation program to each node. Starting in HPC Pack 2012 R2, MPISetup.exe is installed in a separate folder in the REMINST share on the head node, and it is not automatically installed when you deploy a node by using a template created in an earlier version of HPC Pack.
- After you upgrade the head node to HPC Pack 2012 R2 Update 3, if you install a new node from bare metal or if you reimage an existing node, HPC Pack 2012 R2 Update 3 is automatically installed on that node.
To use clusrun to upgrade existing nodes to HPC Pack 2012 R2 Update 3
Download the appropriate version of the HPC Pack 2012 R2 Update 3 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a shared folder such as \\headnodename\install.
In HPC Cluster Manager, view nodes by node group to identify a group of nodes that you want to upgrade; for example, ComputeNodes.
Take the nodes in the node group offline.
Open an elevated command prompt and type the appropriate clusrun command to install the upgrade. The following command is an example to install the x64 version.
clusrun /nodegroup:ComputeNodes \\headnodename\install\HPC2012R2_Update3_x64.exe -unattend -SystemReboot
Note
The SystemReboot parameter is required. It causes the nodes to restart after the upgrade completes.
After the upgrade and the nodes in the group restart, bring the nodes online.
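The offline and online steps in this procedure can also be scripted with HPC PowerShell. A sketch, assuming the same ComputeNodes node group as the clusrun example:

```powershell
# Sketch: take the nodes offline, then bring them back online after the upgrade.
Add-PSSnapin Microsoft.HPC
Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State Offline
# ... run the clusrun upgrade command and wait for the nodes to restart ...
Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State Online
```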
To upgrade individual nodes manually to HPC Pack 2012 R2 Update 3, you can copy the installation program to a shared folder on the head node. Then, access the existing nodes by making a remote connection to run the upgrade installation program from the shared folder.
Important
If you have WCF broker nodes that are configured for high availability in a failover cluster, upgrade the high-availability broker nodes as follows:
- Upgrade the active broker node to HPC Pack 2012 R2 Update 3.
- Fail over the active broker node role to the passive broker node.
- Upgrade the broker node that is now active (that is, the node that is not yet upgraded).
Redeploy existing Azure nodes
If you previously added Azure “burst” nodes to your HPC Pack 2012 R2 Update 2 cluster, you must start (provision) those nodes again to install the updated HPC Pack components. If you previously changed the availability policy of the nodes from Automatic to Manual, you can reconfigure an Automatic availability policy in the Azure node template so that the nodes come online and offline at scheduled intervals. For more information, see Steps to Deploy Azure Nodes with Microsoft HPC Pack.
Upgrade client computers to HPC Pack 2012 R2 Update 3
To update computers on which the HPC Pack 2012 R2 Update 2 client utilities are installed, ensure that any HPC client applications, including HPC Cluster Manager and HPC Job Manager, are stopped. Then, upgrade the computers to HPC Pack 2012 R2 Update 3.
To upgrade client computers to HPC Pack 2012 R2 Update 3
Download the appropriate version of the HPC Pack 2012 R2 Update 3 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.
Run the installation program as an administrator from the location where you saved it.
Read the informational message that appears. If you are ready to upgrade, click OK.
Uninstall HPC Pack 2012 R2 Update 3
Subject to some limitations (see Known issues), you can uninstall Update 3 on the head node to revert to HPC Pack 2012 R2 Update 2, or you can completely uninstall HPC Pack 2012 R2. To uninstall HPC Pack 2012 R2 Update 3, uninstall the updates in the following order:
HPC Pack 2012 R2 Services for Excel 2010
HPC Pack 2012 R2 Server Components
HPC Pack 2012 R2 Client Components
To completely uninstall HPC Pack 2012 R2, uninstall the features in the following order:
HPC Pack 2012 R2 Web Components (if they are installed)
HPC Pack 2012 R2 Key Storage Provider (if it is installed)
HPC Pack 2012 R2 Services for Excel 2010
HPC Pack 2012 R2 Server Components
HPC Pack 2012 R2 Client Components
Important
Not all features are installed on all computers. For example, HPC Pack 2012 R2 Server Components is not installed when you choose to install only the client components.
When HPC Pack is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2012 R2 Update 3, you can remove the following programs if they will no longer be used:
Microsoft Report Viewer Redistributable 2010 SP1
Microsoft SQL Server 2014 Express
Note
This program also includes Microsoft SQL Server Setup Support Files.
Microsoft SQL Server 2012 Native Client
Additionally, the following server roles and features might have been added when HPC Pack was installed, and they can be removed if they will no longer be used:
Dynamic Host Configuration Protocol (DHCP) Server server role
File Services server role
File Server Resource Manager role service
Routing and Remote Access Service server role
Windows Deployment Services server role
Microsoft .NET Framework feature
Message Queuing feature
Known issues
HPC web portal may display job-related pages slowly
If the HPC web portal is enabled for a cluster where no SOA job has ever been submitted, the portal may take a few seconds to show job-related pages, including All Jobs, My Jobs, SOA Jobs, and My SOA Jobs, as well as the page that opens automatically after job submission.
To avoid this initial lag, run the EchoClient tool (new in Update 3) to submit a simple SOA job: EchoClient.exe -n 1. HPC Pack configures several special properties in the back end when the first SOA job is submitted to the cluster, and after that the web portal loads these pages faster.
Linux on-premises nodemanager may fail to start if port 40002 has already been used on Linux nodes
The Linux on-premises nodemanager may fail to start if port 40002 is already used by another process on the Linux nodes. If the Linux compute nodes stay in the error state, check the log file nodemanager.txt in the /opt/hpcnodemanager folder and look for entries similar to Address already in use. You can also run netstat -ano | grep 40002 to determine whether the port is occupied, and release it if possible.
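The port check can be wrapped in a small script on the Linux node. A sketch, assuming netstat (net-tools) is available; run it as root if you also want the owning process ID (netstat -p):

```shell
#!/bin/sh
# Sketch: report whether TCP port 40002 (the nodemanager port) appears in use.
PORT=40002
# grep -c prints 0 and exits nonzero when there are no matches; 2>/dev/null
# also covers systems where netstat is not installed.
MATCHES=$(netstat -an 2>/dev/null | grep -c ":$PORT")
if [ "${MATCHES:-0}" -gt 0 ]; then
    echo "port $PORT appears to be in use"
else
    echo "port $PORT appears to be free"
fi
```

On systems where lsof is installed, lsof -i :40002 shows the owning process directly.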
Linux compute nodes may fail to work properly if -clusname is not set properly during compute node setup
Linux compute nodes use the information specified in -clusname during setup to communicate with the head node. If you use the self-signed certificate that HPC Pack generates during head node setup, ensure that the name specified in -clusname is the FQDN of the head node. If you set your own certificate by using Set-HpcLinuxCertificate on the head node, ensure that the name specified in -clusname is the subject name of the certificate or is contained in its subject alternative name.
Heat map view of Linux nodes may experience data loss if configured with short display interval
The heat map of Linux compute nodes may experience some data loss if the heat map view is configured with a short display interval, such as 1 second. This is caused by the current UDP-based implementation of the metric reporting function on the Linux compute nodes.
To work around this issue, configure the heat map view with a longer display interval that is suitable for your network environment.
Limitations when downgrading from Update 3
The upgrade installation package for HPC Pack 2012 R2 Update 3 does not support uninstallation back to HPC Pack 2012 R2 Update 2 on the head node under the following conditions:
You have used the new “burst” to Azure Batch feature.
Any node in your HPC Pack cluster is equipped with NVIDIA GPUs.
Additionally, if HPC Pack 2012 R2 Update 2 on your computer was upgraded from its previous version (that is, HPC Pack 2012 R2 Update 1), you cannot uninstall the update for HPC Pack Server Components from Control Panel. Instead, run the following command as an administrator to uninstall it:
msiexec /package {166692AA-06DD-43E6-97F3-7D0B58220094} /uninstall {C39998C6-4B25-4030-B0D8-5042314EC924} /passive REBOOT=ReallySuppress REINSTALL=ALL /l*v c:\Update3Downgrade.log
Issue with node release task on Azure Batch pool
You cannot submit a job that contains a node release task but no node preparation task to an Azure Batch pool. This behavior is by design in the Batch service, and in this case the node release task is not triggered on the Batch pool.
To work around this issue, add a "dummy" node preparation task to the job, such as running a simple command like hostname.
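A sketch of this workaround using the standard HPC PowerShell cmdlets; the job name and command lines here are illustrative, and the NodePrep/NodeRelease values for Add-HpcTask -Type are an assumption to verify against your HPC Pack version:

```powershell
# Sketch: include a trivial NodePrep task so the NodeRelease task is triggered.
Add-PSSnapin Microsoft.HPC
$job = New-HpcJob -Name "BatchPoolJob"
Add-HpcTask -Job $job -Type NodePrep -CommandLine "hostname"        # dummy preparation task
Add-HpcTask -Job $job -CommandLine "myapp.exe"                      # hypothetical workload
Add-HpcTask -Job $job -Type NodeRelease -CommandLine "cleanup.cmd"  # hypothetical release task
Submit-HpcJob -Job $job
```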