Step 5: Add Linux nodes to the cluster

After configuring the cluster, you can optionally add HPC Linux compute nodes. The following are high-level steps to prepare and add Linux compute nodes.

5.1: Install a Linux operating system on the computers

To deploy HPC Linux compute nodes, you must install a supported Linux distribution on the computers. For more information, see System Requirements for Microsoft HPC Pack 2019.

5.2: Fetch Linux compute node installation binaries 

After deploying the head node, find the on-premises Linux node installation binaries in HPC Pack's install share.

Obtain the path of HPC Pack's install share by running the following PowerShell commands:

Add-PSSnapin microsoft.hpc 

Get-HpcClusterRegistry -PropertyName InstallShare

The path is returned in the command output.

The files hpcnodeagent.tar.gz and setup.py are the binaries required to install on-premises Linux compute nodes.

5.3: Set up a file share to share the installation binaries with the Linux compute nodes

You have several choices for moving data between the head node and the Linux compute nodes, including an SMB share or an NFS share. The following steps set up an SMB share on the head node to share the binaries with the Linux compute nodes.

You can also copy the binaries to a centralized share. Just make sure the installation files are accessible from the Linux compute nodes and executable by the users who need to log in to them.

Tip

You can skip the following instructions if you know how to deploy the binaries to a file share that is accessible from the Linux compute nodes.

To set up an SMB share on the head node

  1. Create a folder on the head node and share it with Everyone at the Read/Write permission level. For example, share C:\SmbShare on the head node as \\<HeadNodeName>\SmbShare. Example: \\LN15-UB14-HN1\SmbShare.

  2. Mount the SMB share on each Linux node. For example, use the following commands to mount the share on the path /smbshare:

    mkdir -p /smbshare
    
    mount -t cifs //LN15-UB14-HN1/SmbShare /smbshare -o vers=2.1,domain=<domainname>,username=<username>,password='<password>',dir_mode=0777,file_mode=0777  
    
    

    Note

    You must use cifs-utils to mount the SMB share from the Linux compute nodes. On CentOS and Red Hat distributions, install the package by running yum install cifs-utils.

  3. Copy the binaries hpcnodeagent.tar.gz and setup.py into \\LN15-UB14-HN1\SmbShare on the head node, and verify that the files are visible at the path /smbshare from the Linux compute nodes, as shown below.
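
    For instance, a quick listing from a Linux node can confirm that the copy succeeded; this assumes the mount point /smbshare used in step 2:

    ls -l /smbshare/hpcnodeagent.tar.gz /smbshare/setup.py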

5.4: Install the certificate used to secure the communication between HPC nodes 

If the certificate for the head node is self-signed, install the head node's public certificate on each Linux computer as a trusted root certification authority. The public certificate, named HpcHnPublicCert.cer, is in the Certificates folder of the installation share.
The procedure differs across Linux distributions. Check your distribution's administration guide for details.

For example, on Ubuntu 14.04 LTS, use openssl to convert the head node's public certificate (which contains no private key) from CER format to CRT format, save it to the directory /usr/local/share/ca-certificates, and run the following command:

/usr/sbin/update-ca-certificates
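
A minimal sketch of these steps follows; it assumes the HpcHnPublicCert.cer file is DER-encoded and has already been copied to the Linux node (for example, through the SMB share set up earlier), so adjust the paths and the -inform option if your file differs:

# Convert the DER-encoded .cer file to PEM (.crt) format; the source path is an assumption
openssl x509 -inform der -in /smbshare/HpcHnPublicCert.cer -out hpchnpubliccert.crt

# Place the certificate in the trusted CA directory and refresh the certificate store
sudo cp hpchnpubliccert.crt /usr/local/share/ca-certificates/
sudo /usr/sbin/update-ca-certificates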

5.5: Install the Linux compute node agent on the computers

Install the Linux compute node agent by running the Python script setup.py. Ensure that Python is installed on the Linux nodes, and install it if it is not.

For the detailed usage of setup.py, type python setup.py --help.
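
For instance, from the directory where the binaries were copied (here assumed to be the mounted share from step 5.3), you can confirm that Python is available and view the installer options:

cd /smbshare
python --version
python setup.py --help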

For example, to add a Linux node to the cluster, type a command similar to the following command in a Bash shell on each Linux node.

python setup.py -install -connectionstring:'<connection string of the cluster>' -certfile:'<path to PFX certificate>'  
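
As an illustration only, with the example head node name used earlier, the command might look like the following; the connection string is typically the head node name, and both values here are placeholders:

python setup.py -install -connectionstring:'LN15-UB14-HN1' -certfile:'/opt/hpc/nodecert.pfx'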

5.6: Verify the configuration

After you successfully install the Linux nodes, open HPC Cluster Manager to check the status of the HPC Pack cluster.

  • In Resource Management, list Linux nodes by clicking By Node Template -> LinuxNode Template.

  • View a heat map of the Linux nodes by switching to the Heat Map view in Resource Management.

  • Submit jobs to the Linux nodes by using the actions in Job Management.

To submit a test parametric sweep job to Linux nodes

  1. After selecting the Linux nodes in Resource Management, pivot to Job Management, and click New Parametric Sweep Job.

  2. In the New Parametric Sweep Job dialog box, specify a simple command line, such as hostname. Accept default values for the remaining settings, and then click Submit.

  3. After the job finishes, double-click the item to view the output of each task. In this example, each Linux node returns its hostname.