Release Notes for Microsoft HPC Pack 2012 R2 Update 2
Applies To: Microsoft HPC Pack 2012, Microsoft HPC Pack 2012 R2
These release notes address late-breaking issues and information for the high-performance computing (HPC) cluster administrator about Microsoft® HPC Pack 2012 R2 Update 2. You can use the HPC Pack 2012 R2 Update 2 VM images in the Azure Marketplace to set up a complete HPC cluster in Microsoft Azure infrastructure services (IaaS), or you can do a full installation to build a new on-premises cluster. You can also use this update to upgrade an existing HPC cluster (either in Azure IaaS or on-premises) that is currently running HPC Pack 2012 R2 Update 1.
When performing an upgrade installation, you should upgrade all head nodes, compute nodes, Windows Communication Foundation (WCF) broker nodes, workstation nodes, unmanaged server nodes, and computers that are running the HPC Pack client utilities. For important information about new features in HPC Pack 2012 R2 Update 2, see What's New in Microsoft HPC Pack 2012 R2 Update 2.
Go to the Microsoft Download Center to download the upgrade and installation packages for HPC Pack 2012 R2 Update 2.
Note
To get started with a new cluster installation, see the Getting Started Guide for Microsoft HPC Pack 2012 R2 and HPC Pack 2012. If you are migrating from an HPC Pack 2008 R2 cluster, see Migrate a Cluster to HPC Pack 2012 R2 or HPC Pack 2012.
In this topic:
Before you upgrade to HPC Pack 2012 R2 Update 2
Upgrade the head node to Microsoft HPC Pack 2012 R2 Update 2
Upgrade compute nodes, WCF broker nodes, workstation nodes, and unmanaged server nodes to HPC Pack 2012 R2 Update 2
Redeploy existing Azure nodes
Upgrade client computers to HPC Pack 2012 R2 Update 2
Uninstall HPC Pack 2012 R2 Update 2
Known issues
Problem when stopping some Azure nodes if the Azure node template specifies a reserved IP address
HPC Pack IaaS deployment script fails to set up mutual trust for root on Ubuntu Server 14.10
Updating some Linux RDMA nodes may cause errors after reboot
Azure File storage is not supported for CentOS 6.6
Copying large numbers of files through an Azure File share simultaneously from many nodes can cause copy failures
Compatibility issue when using a previous version of a SOA client
Before you upgrade to HPC Pack 2012 R2 Update 2
Important
The upgrade installation package for HPC Pack 2012 R2 Update 2 does not support uninstallation back to HPC Pack 2012 R2 Update 1. After you upgrade, if you want to downgrade to HPC Pack 2012 R2 Update 1, you must completely uninstall the HPC Pack 2012 R2 Update 2 features from the head node computer and the other computers in your cluster. If you want, you can reinstall HPC Pack 2012 R2 Update 2 and restore the data in the HPC databases.
Perform the following actions before you upgrade to HPC Pack 2012 R2 Update 2:
Take all compute nodes, workstation nodes, and unmanaged server nodes offline and wait for all current jobs to drain.
If you have a node template in which an Automatic availability policy is configured, set the availability policy to Manual.
Stop all existing Azure “burst” nodes so that they are in the Not-Deployed state. If you do not stop them, you may be unable to use or delete the nodes from HPC Cluster Manager after the upgrade, but charges for their use will continue to accrue in Azure. You must redeploy (provision) the Azure nodes after you upgrade the head node.
Note
Under certain conditions, the upgrade installation program might prompt you to stop Azure nodes before it upgrades the head node, even if you have already stopped all Azure nodes (or do not have any Azure nodes deployed). In this case, you can safely continue the installation.
Ensure that all diagnostic tests have finished or are canceled.
Close any HPC Cluster Manager and HPC Job Manager applications that are connected to the cluster head node.
After all active operations on the cluster have stopped, back up all HPC databases by using a backup method of your choice.
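The steps above can be sketched from an elevated command prompt on the head node. The node names and group names below are examples, and the HPC PowerShell cmdlet syntax should be verified against your HPC Pack version:

```shell
:: Take example compute nodes offline and let running jobs drain.
node offline CN-001 CN-002

:: Sketch: offline a whole node group, then stop Azure "burst" nodes, from HPC PowerShell.
powershell -Command "Add-PSSnapin Microsoft.HPC; Get-HpcNode -GroupName ComputeNodes | Set-HpcNodeState -State offline"
powershell -Command "Add-PSSnapin Microsoft.HPC; Get-HpcNode -GroupName AzureNodes | Stop-HpcAzureNode"
```

Confirm in HPC Cluster Manager that the Azure nodes reach the Not-Deployed state before you continue.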
Additional considerations for installing the upgrade
When you upgrade, several settings that are related to HPC services are reset to their default values, including the following:
Firewall rules
Event log settings for all HPC services
Service configuration properties such as dependencies and startup options
Service configuration files for the HPC services (for example, HpcSession.exe.config)
MSMQ message storage size
If the head node or WCF broker nodes are configured for high availability in a failover cluster, the HPC Pack related resources that are configured in the failover cluster
After you upgrade, you may need to re-create settings that you have customized for your cluster or restore them from backup files.
Note
You can find more installation details in the following log file after you upgrade the head node: %windir%\temp\HPCSetupLogs\hpcpatch-DateTime.txt
When you upgrade the head node, the files that the head node uses to deploy a compute node or a WCF broker node from bare metal are also updated. Later, if you install a new compute node or WCF broker node from bare metal or if you reimage an existing node, the upgrade is automatically applied to that node.
Upgrade the head node to Microsoft HPC Pack 2012 R2 Update 2
To upgrade the head node to Microsoft HPC Pack 2012 R2 Update 2
Download the x64 version of the HPC Pack 2012 R2 Update 2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.
Run the installation program as an administrator from the location where you saved it.
Read the informational message that appears. If you are ready to upgrade, click OK.
Note
After the installation completes, if you are prompted, restart the computer.
To confirm that the head node is upgraded to HPC Pack 2012 R2 Update 2, in HPC Cluster Manager, on the Help menu, click About. The server version number and the client version number will both be 4.4.4864.0.
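As a command-line alternative, the installed version can also be checked in the registry. The key path and value name below are assumptions; verify them on your system:

```shell
:: Assumed registry location for the HPC Pack version; confirm the value name locally.
reg query "HKLM\SOFTWARE\Microsoft\HPC" /v Version
```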
If you have set up a head node for high availability in the context of a failover cluster, use the following procedure to apply the update.
To upgrade a high-availability head node to HPC Pack 2012 R2 Update 2
Download the x64 version of the HPC Pack 2012 R2 Update 2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.
Take the following high-availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, hpcsession, and hpcsoadiagmon.
Upgrade the active head node by running the installation program as an administrator from the location where you saved it.
After you upgrade the active head node, in most cases, the active head node restarts and fails over to the second head node.
Note
Because the other head nodes are not yet upgraded, Failover Cluster Manager might report a failed status for the resource group and the HPC services.
Use the following procedure to upgrade any additional head node:
Take the following high-availability HPC services offline by using Failover Cluster Manager: hpcscheduler, hpcsdm, hpcdiagnostics, hpcreporting, hpcsession, and hpcsoadiagmon.
Verify that the head node is the active head node. If it is not, use Failover Cluster Manager to make the head node the active head node.
Upgrade the head node.
If you have additional head nodes in the cluster, move the current active head node to passive. After failover occurs, upgrade the current active head node according to the preceding steps.
Important
While you are upgrading each head node that is configured for high availability, leave the Microsoft SQL Server resources online.
Note
High-availability head node and broker node configurations are currently not supported in the Azure IaaS environment because Windows Server Failover Clustering, on which they depend, is not yet fully supported there.
Upgrade compute nodes, WCF broker nodes, workstation nodes, and unmanaged server nodes to HPC Pack 2012 R2 Update 2
To work with a head node that is upgraded to HPC Pack 2012 R2 Update 2, you must also upgrade existing compute nodes and WCF broker nodes. You can optionally upgrade your existing workstation nodes and unmanaged server nodes. Depending on the type of node, you can use one of the following methods to upgrade to HPC Pack 2012 R2 Update 2:
Upgrade existing nodes that are running HPC Pack 2012 R2 Update 1, either manually or by using a clusrun command.
Note
If you do not have administrative permissions on workstation nodes and unmanaged server nodes in the cluster, the clusrun command might not be able to upgrade those nodes. In that case, the administrators of the workstations and unmanaged servers should perform the upgrade.
Reimage an existing compute node or broker node that was deployed by using an operating system image. For more information, see Reimage Compute Nodes.
Note
Ensure that you edit the node template to add a step to copy the MS-MPI installation program to each node. Starting in HPC Pack 2012 R2, MPISetup.exe is installed in a separate folder in the REMINST share on the head node, and it is not automatically installed when you deploy a node by using a template that was created in an earlier version of HPC Pack.
After you upgrade the head node to HPC Pack 2012 R2 Update 2, if you install a new node from bare metal or if you reimage an existing node, HPC Pack 2012 R2 Update 2 is automatically installed on that node.
To use clusrun to upgrade existing nodes to HPC Pack 2012 R2 Update 2
Download the appropriate version of the HPC Pack 2012 R2 Update 2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a shared folder such as \\headnodename\install.
In HPC Cluster Manager, view nodes by node group to identify a group of nodes that you want to upgrade; for example, ComputeNodes.
Take the nodes in the node group offline.
Open an elevated command prompt and type the appropriate clusrun command to install the upgrade. The following command is an example to install the x64 version.
clusrun /nodegroup:ComputeNodes \\headnodename\install\HPC2012R2_Update2_x64.exe -unattend -SystemReboot
Note
The SystemReboot parameter is required. It causes the nodes to restart after the upgrade completes.
After the upgrade and the nodes in the group restart, bring the nodes online.
To upgrade individual nodes manually to HPC Pack 2012 R2 Update 2, you can copy the installation program to a shared folder on the head node. Then, access the existing nodes by making a remote connection to run the upgrade installation program from the shared folder.
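For example, with the installer saved to a shared folder on the head node (the share and file names below match the clusrun example and are placeholders), run the same unattended command in a remote session on each node:

```shell
:: Run from an elevated command prompt on the node being upgraded.
\\headnodename\install\HPC2012R2_Update2_x64.exe -unattend -SystemReboot
```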
Important
If you have WCF broker nodes that are configured for high availability in a failover cluster, upgrade the high-availability broker nodes as follows:
Upgrade the active broker node to HPC Pack 2012 R2 Update 2.
Fail over so that the passive broker node becomes the active broker node.
Upgrade the new active broker node (the node that has not yet been updated).
Redeploy existing Azure nodes
If you previously added Azure “burst” nodes to your HPC Pack 2012 R2 Update 1 cluster, you must start (provision) those nodes again to install the updated HPC Pack components. If you previously changed the availability policy of the nodes from Automatic to Manual, you can reconfigure an Automatic availability policy in the Azure node template so that the nodes come online and offline at scheduled intervals. For more information, see Steps to Deploy Azure Nodes with Microsoft HPC Pack.
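A sketch of redeploying the nodes from the head node follows; AzureNodes is an example group name, and the cmdlet syntax should be verified for your HPC Pack version:

```shell
:: Start (provision) the stopped Azure burst nodes again.
powershell -Command "Add-PSSnapin Microsoft.HPC; Get-HpcNode -GroupName AzureNodes | Start-HpcAzureNode"
:: Bring the provisioned nodes online so that they can accept jobs.
powershell -Command "Add-PSSnapin Microsoft.HPC; Get-HpcNode -GroupName AzureNodes | Set-HpcNodeState -State online"
```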
Upgrade client computers to HPC Pack 2012 R2 Update 2
To update computers on which the HPC Pack 2012 R2 Update 1 client utilities are installed, ensure that any HPC client applications, including HPC Cluster Manager and HPC Job Manager, are stopped. Then, upgrade the computers to HPC Pack 2012 R2 Update 2.
To upgrade client computers to HPC Pack 2012 R2 Update 2
Download the appropriate version of the HPC Pack 2012 R2 Update 2 upgrade installation program from the Microsoft Download Center. Save the installation program to installation media or to a network location.
Run the installation program as an administrator from the location where you saved it.
Read the informational message that appears. If you are ready to upgrade, click OK.
Uninstall HPC Pack 2012 R2 Update 2
To completely uninstall HPC Pack 2012 R2 Update 2, uninstall the features in the following order:
HPC Pack 2012 R2 Web Components (if they are installed)
HPC Pack 2012 R2 Key Storage Provider (if it is installed)
HPC Pack 2012 R2 Services for Excel 2010
HPC Pack 2012 R2 Server Components
HPC Pack 2012 R2 Client Components
Important
Not all features are installed on all computers. For example, HPC Pack 2012 R2 Server Components is not installed when you choose to install only the client components.
When HPC Pack is installed on the head node, other programs are installed with it. After uninstalling HPC Pack 2012 R2 Update 2, you can remove the following programs if they will no longer be used:
Microsoft Report Viewer Redistributable 2010 SP1
Microsoft SQL Server 2014 Express
Note
This program also includes Microsoft SQL Server Setup Support Files.
Microsoft SQL Server 2012 Native Client
Additionally, the following server roles and features might have been added when HPC Pack was installed, and they can be removed if they will no longer be used:
Dynamic Host Configuration Protocol (DHCP) Server server role
File Services server role
File Server Resource Manager role service
Routing and Remote Access Service server role
Windows Deployment Services server role
Microsoft .NET Framework feature
Message Queuing feature
Known issues
Problem when stopping some Azure nodes if the Azure node template specifies a reserved IP address
Due to a known issue in Microsoft Azure, when you try to stop some, but not all, of the running Azure nodes that belong to an Azure node template that specifies a reserved IP address, the operation may fail, and you may see an error message in the provisioning log similar to Windows Azure deployment failure (BadRequest): The Service Configuration Model was not expected to be null.
Note
The error does not occur if you stop all the running Azure nodes that belong to the node template, or if the node template's reserved IP address setting is blank.
If you see this problem, the related Azure nodes will probably be in the Error state. In this case, you can bring the nodes Online or Offline to restore them. Alternatively, you can stop all the Azure nodes.
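For example, from a command prompt on the head node (the node names below are placeholders):

```shell
:: Cycle the affected nodes through Online (or Offline) to clear the Error state.
node online AZURE-0001 AZURE-0002
```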
HPC Pack IaaS deployment script fails to set up mutual trust for root on Ubuntu Server 14.10
Azure PowerShell has limited support for setting up an SSH key pair to authenticate access to a Linux operating system. Because of this, the HPC Pack IaaS deployment script currently does not support setting up mutual trust for root on Ubuntu Server 14.10.
To work around this issue, use an SMB share or NFS server instead, and set up the mutual trust by using clusrun. See Getting Started with HPC Pack and Linux in Azure for details.
Updating some Linux RDMA nodes may cause errors after reboot
Running yum update or yum upgrade on a size A8 or A9 Linux node that was created from a current RDMA CentOS 6.5 or CentOS 7.0 image, or running zypper update on a Linux node that was created from a current RDMA SUSE Linux Enterprise Server 12 image, can trigger a network interface update that changes the RDMA network adapter's IP address from 172.16.*.* to 192.168.*.*. This incorrect network setting can cause an affected Linux node to enter an error state after reboot.
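After running an update on an affected node, you can check whether the RDMA adapter still has an address in the expected 172.16.*.* range before you reboot. This is a sketch; eth1 as the RDMA interface name is an assumption that varies by image:

```shell
# Prints OK if the given IPv4 address is in the expected RDMA range, BAD otherwise.
check_rdma_addr() {
  case "$1" in
    172.16.*) echo "OK" ;;
    *)        echo "BAD" ;;
  esac
}

# Usage on a node (eth1 is an assumed RDMA interface name):
#   check_rdma_addr "$(ip -4 -o addr show eth1 | awk '{print $4}' | cut -d/ -f1)"
```

If the check reports BAD, correct the network configuration before rebooting the node.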
Azure File storage is not supported for CentOS 6.6
Currently, Azure File storage (preview) is not supported for CentOS 6.6 virtual machines. You can use an SMB share or NFS server instead. Refer to “Getting started with Linux on HPC Pack” for details.
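For example, an SMB share exported from a Windows node can be mounted on a CentOS 6.6 node with cifs; the server, share, mount point, and credential values below are placeholders:

```shell
# Placeholder names: //fileserver/share, /mnt/share, and hpcuser are assumptions.
sudo yum install -y cifs-utils
sudo mkdir -p /mnt/share
sudo mount -t cifs //fileserver/share /mnt/share \
  -o username=hpcuser,password='<password>',dir_mode=0775,file_mode=0775
```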
Copying large numbers of files through an Azure File share simultaneously from many nodes can cause copy failures
Before you attempt to copy large numbers of files through an Azure File share simultaneously from many nodes, review the Azure File (preview) storage service limits. Operations that exceed the service limits may cause file copy failures on some nodes.
Compatibility issue when using a previous version of a SOA client
If you run a SOA client application against an HPC Pack 2012 R2 Update 2 cluster, we strongly recommend that you upgrade the HPC Pack client utilities on the computers where the SOA client application runs.