Microsoft MPI v6 release

We are happy to announce the release of the newest version of Microsoft MPI (MS-MPI). MS-MPI v6 is the successor to the MS-MPI v5 redistributable package (released in November 2014). You can download a copy of MS-MPI v6 from the Microsoft Download Center.

MS-MPI v6 includes the following features, improvements and fixes:

Non-blocking collective operations

This release adds our initial support for non-blocking collective operations, offering MPI_Ibcast, MPI_Ireduce, MPI_Igather, and MPI_Ibarrier. For more details on these APIs, please refer to the MPI 3.0 standard documentation.
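
As a rough illustration (not from the original post), the following minimal C sketch starts a broadcast with MPI_Ibcast, overlaps it with local work, and completes it with MPI_Wait before using the received value:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int rank, data = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        data = 42; /* value broadcast from the root */

    /* Start the broadcast without blocking on it. */
    MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

    /* ... independent local work can overlap with the broadcast here ... */

    /* Complete the collective before using the received value. */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    printf("rank %d received %d\n", rank, data);

    MPI_Finalize();
    return 0;
}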

Multi-job affinity support

Multi-job affinity support is designed to allow multiple affinitized MPI jobs to coexist on a single machine without overlapping the cores they run on. The MPI runtime now detects jobs already pinned to cores and launches subsequent jobs on cores that are not currently in use.

The feature is exposed as a new mpiexec option, -affinity_auto (or -aa), and is designed to work both under job schedulers such as Microsoft HPC Pack and in standalone SDK mode.

As an example, to run two 8-core jobs on a single 16-core machine, you could launch each job with the following command line (the second job is automatically placed on the cores left free by the first):

mpiexec -cores 8 -affinity_auto -affinity_layout sequential myapp.exe

or, more succinctly:

mpiexec -c 8 -aa -al seq myapp.exe

Support for multi-threaded applications

This version of MS-MPI supports multi-threaded applications by enabling the use of MPI_THREAD_MULTIPLE when calling MPI_Init_thread. This is designed to allow hybrid applications that use OpenMP or other threading models to more easily leverage the MPI runtime.

The minimum supported server for this feature is Windows Server 2012. The minimum supported client for this feature is Windows 8.
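
As a minimal sketch (again, illustrative rather than from the original post), a hybrid application would request MPI_THREAD_MULTIPLE at startup and check the level the runtime actually granted before letting multiple threads make MPI calls:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int provided;

    /* Request full thread support instead of calling MPI_Init. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    /* The runtime reports the level it actually granted; verify it
       before issuing MPI calls from multiple threads concurrently. */
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... OpenMP or other threads may now call MPI concurrently ... */

    MPI_Finalize();
    return 0;
}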

New features from the MPI 3.0 standard

This release also adds a number of features from the MPI 3.0 standard. For more detail on these, please see the MPI standard documentation.

  • Support for MPI_Mprobe, MPI_Mrecv, MPI_Improbe, and MPI_Imrecv (see the sketch after this list)
  • Support for MPI_Count, to allow large counts to be properly represented in MPI_Status structures
  • Support for MPI_Type_create_hindexed_block
  • Support for MPI_Dist_graph_create, MPI_Dist_graph_create_adjacent, MPI_Dist_graph_neighbors, and MPI_Dist_graph_neighbors_count
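
As a rough sketch of the matched-probe APIs (the helper name receive_unknown_size is hypothetical), the following fragment receives a message of unknown size: MPI_Mprobe matches and dequeues the message, so the paired MPI_Mrecv cannot race with receives issued by other threads:

#include <mpi.h>
#include <stdlib.h>

void receive_unknown_size(int source, int tag)
{
    MPI_Message msg;
    MPI_Status status;
    int count;

    /* Match and dequeue the incoming message without receiving it yet. */
    MPI_Mprobe(source, tag, MPI_COMM_WORLD, &msg, &status);
    MPI_Get_count(&status, MPI_INT, &count);

    /* Allocate exactly enough space, then receive that same message. */
    int* buf = (int*)malloc(count * sizeof(int));
    MPI_Mrecv(buf, count, MPI_INT, &msg, &status);

    /* ... use buf ... */
    free(buf);
}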

Note: The SDK components for MS-MPI (headers and libraries) ship separately from the redistributable package binary files.

To learn more about MS-MPI, see Microsoft MPI on MSDN. For detailed questions or future feature requests, please email us at askmpi@microsoft.com.

You can also find useful information, and ask your own questions, in the Windows HPC MPI Forum.