Types of Diagnostic Tests

Diagnostic tests in Windows HPC Server 2008 R2 fall into two main categories:

  • Tests that run commands

  • Tests that run jobs

Windows HPC Server 2008 R2 also provides two types of diagnostic tests that generate their reports automatically. These reports have the same overall appearance as the reports for the built-in diagnostic tests that Windows HPC Server 2008 R2 provides. These test types are the following:

  • Consistency tests

  • Comparison tests

Consistency and comparison tests are special cases of tests that run commands. You cannot designate a test that runs a job as a consistency or comparison test.

Tests that run commands

Most diagnostic tests perform the main work of the test by running a command. The command in this type of test can consist of almost any command or application that you can run at a command prompt, including the following items:

  • Built-in command-line commands

  • Batch and command scripts

  • Windows PowerShell cmdlets and scripts run with the powershell.exe command

  • VBScript or other types of scripts run with the cscript command

  • Console applications or other executable applications

The main requirement for the command in this type of test is that it can run without prompting the user for additional input. If the command in a diagnostic test prompts the user for input, the prompt is never displayed, and the test stops responding.

The diagnostic service runs the command in this type of test with the HPC clusrun command. If you can run a command with clusrun, you can use it in a diagnostic test.

Because you must specify the command that performs the main portion of this type of test, the RunStep stage is required. The PreStep and PostStep stages are optional.
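
For example, the XML definition for a test that runs a command might look like the following sketch. The RunStep, PreStep, and PostStep stage names come from this guide; the surrounding structure, including the DiagnosticTest element and the Command attribute, is a placeholder here, so consult the diagnostic test XML schema in this guide for the exact element and attribute names.

    <!-- Sketch only: element and attribute names other than the
         RunStep/PreStep/PostStep stage names are illustrative placeholders. -->
    <DiagnosticTest Name="ReportWindowsVersion">
      <!-- The RunStep stage is required for a test that runs a command.
           The diagnostic service runs this command on each node by using
           clusrun, so any clusrun-compatible command works here. -->
      <RunStep Command="ver" />
      <!-- The PreStep and PostStep stages are optional and omitted here. -->
    </DiagnosticTest>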

Tests that run jobs

Some diagnostic tests can only perform the main portion of their work if they run as an HPC job. Such tests include tests that perform the following types of operations:

  • Run Message Passing Interface (MPI) applications.

  • Run parametric sweep tasks.

For this type of test, you cannot specify a RunStep stage. Instead, the diagnostic service automatically creates a job in advance for the RunStep stage, and you must implement a PreStep stage that adds the tasks that you want to run to that job. The diagnostic service stores the job identifier for the RunStep job in the DIAG_RUNSTEP_JOBID environment variable, and you can use that job identifier to configure the job and to add tasks to it.

If the XML file that defines a diagnostic test does not include a RunStep stage, the diagnostic service automatically treats the test as a test that runs a job. If the PreStep stage of such a test does not add at least one task to the job, an error occurs.
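
For example, the PreStep command can use the HPC job add command to add a task to the automatically created job, as in the following sketch. The DIAG_RUNSTEP_JOBID environment variable is the one described above; the element structure, the mpiexec command line, and the mpiping.exe application are illustrative assumptions.

    <!-- Sketch only: no RunStep stage is specified, so the diagnostic
         service treats this test as a test that runs a job. -->
    <DiagnosticTest Name="MpiPingPong">
      <!-- The PreStep adds an MPI task to the job that the diagnostic
           service created for the RunStep stage. %DIAG_RUNSTEP_JOBID%
           expands to that job's identifier; mpiping.exe is hypothetical. -->
      <PreStep Command="job add %DIAG_RUNSTEP_JOBID% mpiexec mpiping.exe" />
    </DiagnosticTest>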

Consistency Tests

A consistency test is a diagnostic test that checks whether the result of the command that the test runs is the same on all nodes. If the output of the test is identical for all of the nodes in the test, the overall result for the consistency test and the results for all of the nodes are Success. If the output is not identical for all of the nodes, the overall result and the results for all of the nodes are Failure.

You designate a test as a consistency test by including the AutoGenerateResult element with the TestType attribute set to Consistency in the XML file that defines the test.

When you designate a custom diagnostic test as a Consistency test, you do not need to implement a PostStep stage that compares all of the results from the different nodes. The diagnostic service handles this comparison for you. The diagnostic service also automatically creates a report that summarizes whether the results were the same for all nodes, and lists the results for all of the nodes.
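
The following sketch shows how such a test might be defined. The AutoGenerateResult element and the TestType attribute are the ones described above; the rest of the structure and the sample command are illustrative.

    <!-- Sketch only: a consistency test that checks whether all nodes
         report the same BIOS version. -->
    <DiagnosticTest Name="BiosVersionConsistency">
      <RunStep Command="wmic bios get smbiosbiosversion" />
      <!-- TestType="Consistency" tells the diagnostic service to compare
           the output across nodes and to generate the report automatically,
           so no PostStep stage is needed. -->
      <AutoGenerateResult TestType="Consistency" />
    </DiagnosticTest>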

Comparison Tests

A comparison test is a diagnostic test that checks whether the result of the command that the test runs matches an expected value on each of the nodes specified for the test. The result for a node is Success if the output of the test on that node matches the specified expected value, and Failure if it does not. The overall result for the test is Success only if the result for all of the nodes in the test is Success; if the result for any node is Failure, the overall result is Failure.

You designate a test as a comparison test by including the AutoGenerateResult element with the TestType attribute set to Comparison in the XML file that defines the test. You also need to define a global parameter for the test that contains the value that the test should produce, and designate that parameter as the one that contains the expected value by specifying the name of the parameter in the ExpectedValue attribute of the AutoGenerateResult element.

You can use the global parameter that specifies the expected value for the test as a parameter for the PreStep and RunStep commands, but in many circumstances you may not want to include this parameter in those commands. To specify that a global parameter should be used only for the expected value of the comparison test, and not in the PreStep and RunStep commands, set the UseOnlyInStep attribute for the parameter to PostStep. To use the parameter in more than one stage, set the UseOnlyInStep attribute to a comma-separated list of the stages in which you want to use the parameter.
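
Putting these pieces together, a comparison test definition might look like the following sketch. The AutoGenerateResult, TestType, ExpectedValue, Parameter, DefaultValue, and UseOnlyInStep names are the ones described in this section; the remaining structure is an illustrative assumption.

    <!-- Sketch only: a comparison test that checks the operating system
         version that the ver command reports on each node. -->
    <DiagnosticTest Name="CheckOsVersion">
      <RunStep Command="ver" />
      <!-- ExpectedValue names the global parameter that holds the value
           that each node's output must match. -->
      <AutoGenerateResult TestType="Comparison" ExpectedValue="ExpectedOutput" />
      <Parameters>
        <!-- UseOnlyInStep="PostStep" keeps this parameter out of the
             PreStep and RunStep command lines. -->
        <Parameter Name="ExpectedOutput"
                   DefaultValue="Microsoft Windows [Version 6.1.7600]"
                   UseOnlyInStep="PostStep" />
      </Parameters>
    </DiagnosticTest>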

Windows HPC Server 2008 R2 does not restrict the Comparison test type to matching an exact expected value; it also supports matching a pattern. You can specify that the test should match a pattern by using a .NET Framework regular expression as the value of the parameter that contains the expected value. You can specify this regular expression in the DefaultValue attribute of the Parameter element for the parameter that specifies the expected value, or the user can specify it as the value for that parameter when running the test. For information about using .NET Framework regular expressions, see .NET Framework Regular Expressions (https://go.microsoft.com/fwlink/p/?linkid=165378).
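
For example, to accept any 6.1 build of the operating system in the preceding sketch, the expected value could be a regular expression instead of a literal string. The pattern below is illustrative:

    <!-- A .NET Framework regular expression as the expected value.
         This pattern matches any "Version 6.1.xxxx" output from ver. -->
    <Parameter Name="ExpectedOutput"
               DefaultValue="Microsoft Windows \[Version 6\.1\.\d+\]"
               UseOnlyInStep="PostStep" />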

When you designate a custom diagnostic test as a Comparison test, you do not need to implement a PostStep stage that compares the results from the different nodes to the expected value or pattern. The diagnostic service handles this comparison for you. The diagnostic service also automatically creates a report that summarizes whether the results matched the expected value for all of the nodes, and lists the results for all of the nodes.

Additional references

Steps for Creating a Custom Diagnostic Test that Runs an Existing Command

Steps for Creating a Custom Diagnostic Test that Runs a Job

Steps for Creating a Custom Diagnostic Test that Checks that a Result is Consistent throughout an HPC Cluster

Steps for Creating a Custom Diagnostic Test that Checks that a Result Matches an Expected Value throughout an HPC Cluster

Diagnostic Test Stages

Diagnostics Extensibility in Windows HPC Server 2008 R2 Step-by-Step Guide