MPI_Allreduce function

Combines values from all processes and distributes the result back to all processes.

Syntax

int MPIAPI MPI_Allreduce(
  _In_opt_  const void         *sendbuf,
  _Out_opt_       void         *recvbuf,
  _In_            int          count,
  _In_            MPI_Datatype datatype,
  _In_            MPI_Op       op,
  _In_            MPI_Comm     comm
);

Parameters

  • sendbuf [in, optional]
    The pointer to the data to be sent to all processes in the group. The number and data type of the elements in the buffer are specified in the count and datatype parameters.

    If the comm parameter references an intracommunicator, you can use the in-place option by specifying MPI_IN_PLACE in all processes. In this case, each process takes its input data from the receive buffer, and the output data replaces it there. (See the example after this list.)

  • recvbuf [out, optional]
    The pointer to a buffer to receive the result of the reduction operation. Because MPI_Allreduce returns the result to every member of the group, this parameter is significant at all processes.

  • count [in]
    The number of elements to send from this process.

  • datatype [in]
    The MPI_Datatype of each element in the buffer. This parameter must be compatible with the operation as specified in the op parameter.

  • op [in]
    The MPI_Op handle indicating the global reduction operation to perform. The handle can indicate a built-in or application-defined operation. For a list of predefined operations, see MPI_Op.

  • comm [in]
    The MPI_Comm communicator handle.
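
The following sketch shows a typical call pattern on an intracommunicator: each process contributes one integer and every process receives the sum. The program structure, variable names, and use of MPI_COMM_WORLD are illustrative assumptions, not part of this reference.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int rank, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every process contributes its rank; every process receives the total. */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("Rank %d: sum of ranks = %d\n", rank, sum);

    /* In-place variant on an intracommunicator: the input is read from,
       and the result written back to, the receive buffer. */
    int value = rank;
    MPI_Allreduce(MPI_IN_PLACE, &value, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}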

Return value

Returns MPI_SUCCESS on success. Otherwise, the return value is an error code.

In Fortran, the return value is stored in the IERROR parameter.

Fortran

    MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
        <type> SENDBUF(*), RECVBUF(*)
        INTEGER COUNT, DATATYPE, OP, COMM, IERROR

Remarks

If comm is an intercommunicator, the result of the reduction of the data provided by processes in group A is stored at each process in group B, and vice versa. Both groups should provide count and datatype arguments that specify the same type signature.
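
The sketch below illustrates this intercommunicator behavior under stated assumptions: MPI_COMM_WORLD is split into two groups, an intercommunicator is built with MPI_Intercomm_create, and the reduction is performed over it. The choice of split, leader ranks, and tag value are assumptions made for the example.

/* Requires at least two processes so that both groups are non-empty. */
int world_rank, world_size, color, remote_leader;
MPI_Comm local_comm, intercomm;

MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
MPI_Comm_size(MPI_COMM_WORLD, &world_size);

/* Group A: lower half of the ranks; group B: upper half. */
color = (world_rank < world_size / 2) ? 0 : 1;
MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

/* Leader of the remote group, identified by its rank in MPI_COMM_WORLD. */
remote_leader = (color == 0) ? world_size / 2 : 0;
MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader, 0, &intercomm);

/* Each process receives the sum contributed by the other group. */
int contribution = world_rank, remote_sum = 0;
MPI_Allreduce(&contribution, &remote_sum, 1, MPI_INT, MPI_SUM, intercomm);

MPI_Comm_free(&intercomm);
MPI_Comm_free(&local_comm);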

All completion calls (e.g., MPI_Wait) are supported for non-blocking reduction operations.
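
As a brief sketch of that remark, the MPI-3 non-blocking counterpart of this function is MPI_Iallreduce, which returns a request that is later completed with a call such as MPI_Wait. This assumes a version of MS-MPI that implements the MPI-3 non-blocking collectives; the placeholder value below is illustrative only.

MPI_Request request;
int local_value = 42;   /* placeholder per-process value */
int global_max = 0;

/* Start the reduction without blocking. */
MPI_Iallreduce(&local_value, &global_max, 1, MPI_INT, MPI_MAX,
               MPI_COMM_WORLD, &request);

/* Independent computation could overlap with the reduction here. */

/* Complete the operation with one of the usual completion calls. */
MPI_Wait(&request, MPI_STATUS_IGNORE);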

Requirements

Product

HPC Pack 2012 MS-MPI Redistributable Package, HPC Pack 2008 R2 MS-MPI Redistributable Package, HPC Pack 2008 MS-MPI Redistributable Package or HPC Pack 2008 Client Utilities

Header

Mpi.h; Mpif.h

Library

Msmpi.lib

DLL

Msmpi.dll

See also

MPI Collective Functions

MPI_Reduce

MPI_Datatype

MPI_Op