MPI_Iscatter function

Scatters data from one member across all members of a group in a non-blocking way. This function performs the inverse of the operation that is performed by the MPI_Igather function.

Syntax

int MPIAPI MPI_Iscatter(
  _In_opt_  const void   *sendbuf,
  _In_      int          sendcount,
  _In_      MPI_Datatype sendtype,
  _Out_opt_ void         *recvbuf,
  _In_      int          recvcount,
  _In_      MPI_Datatype recvtype,
  _In_      int          root,
  _In_      MPI_Comm     comm,
  _Out_     MPI_Request  *request
);

Parameters

  • sendbuf [in, optional]
    The pointer to a buffer that contains the data to be sent to the root process.

    This parameter is ignored for all non-root processes.

    If the comm parameter references an intracommunicator, you can specify an in-place option by passing MPI_IN_PLACE as the recvbuf parameter of the root process. In that case, the recvcount and recvtype parameters of the root are ignored. The scattered vector is still considered to contain n segments, where n is the group size; the segment that corresponds to the root process is not moved. A sketch of this in-place usage follows the parameter list.

  • sendcount [in]
    The number of elements in the send buffer. If sendcount is zero, the data part of the message is empty.

    This parameter is ignored for all non-root processes.

  • sendtype [in]
    The data type of each element in the buffer.

    This parameter is ignored for all non-root processes.

  • recvbuf [out, optional]
    The pointer to a buffer that receives the data on each process. The number and data type of the elements in the buffer are specified in the recvcount and recvtype parameters.

  • recvcount [in]
    The number of elements in the receive buffer. If the count is zero, the data part of the message is empty.

  • recvtype [in]
    The MPI data type of the elements in the receive buffer.

  • root [in]
    The rank of the sending process within the specified communicator.

  • comm [in]
    The MPI_Comm communicator handle.

  • request [out]
    The MPI_Request handle representing the communication operation.
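
The following sketch illustrates the in-place option described for the sendbuf parameter. It is a minimal example, not part of the API reference: the intracommunicator MPI_COMM_WORLD, the root at rank 0, and the segment size of 4 ints per process are illustrative assumptions.

    /* Hedged sketch: in-place MPI_Iscatter at the root (rank 0).
     * The segment size of 4 ints per process is an arbitrary example value. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        int recvbuf[4];
        int *sendbuf = NULL;
        MPI_Request request;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* The root supplies the full send vector and "receives" in place:
             * its own segment already resides in sendbuf and is not moved,
             * so the recvcount and recvtype arguments are ignored at the root. */
            sendbuf = malloc(sizeof(int) * 4 * size);
            for (int i = 0; i < 4 * size; i++) {
                sendbuf[i] = i;
            }
            MPI_Iscatter(sendbuf, 4, MPI_INT, MPI_IN_PLACE, 0, MPI_INT,
                         0, MPI_COMM_WORLD, &request);
        } else {
            /* The send arguments are ignored at non-root ranks;
             * placeholder values are passed here. */
            MPI_Iscatter(NULL, 0, MPI_INT, recvbuf, 4, MPI_INT,
                         0, MPI_COMM_WORLD, &request);
        }

        /* Complete the operation before using recvbuf or freeing sendbuf. */
        MPI_Wait(&request, MPI_STATUS_IGNORE);

        free(sendbuf);  /* NULL on non-root ranks */
        MPI_Finalize();
        return 0;
    }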

Return value

Returns MPI_SUCCESS on success. Otherwise, the return value is an error code.

In Fortran, the return value is stored in the IERROR parameter.

Fortran

    MPI_ISCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, REQUEST, IERROR)
        <type> SENDBUF(*), RECVBUF(*)
        INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, REQUEST, IERROR

Remarks

A non-blocking call initiates a collective scatter operation that must be completed in a separate completion call. Once initiated, the operation can progress independently of any computation or other communication at the participating processes. In this manner, non-blocking collective operations can mitigate the possible synchronizing effects of collective operations by running them in the background.

All completion calls (for example, MPI_Wait) are supported for non-blocking collective operations.
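
A minimal sketch of the typical usage pattern follows: the root initiates the scatter, every process overlaps the pending operation with independent work, and a completion call finishes it. The segment size of 100 doubles and the placeholder do_independent_work function are illustrative assumptions, not part of the API.

    /* Hedged sketch: overlapping MPI_Iscatter with local computation.
     * The segment size and do_independent_work() are assumptions for
     * illustration only. */
    #include <mpi.h>
    #include <stdlib.h>

    static void do_independent_work(void)
    {
        /* Placeholder for computation that does not touch the buffers. */
    }

    int main(int argc, char *argv[])
    {
        int rank, size;
        const int segment = 100;      /* elements per process (example) */
        double *sendbuf = NULL;
        double recvbuf[100];
        MPI_Request request;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* Only the root needs the full vector to be scattered. */
            sendbuf = malloc(sizeof(double) * segment * size);
            for (int i = 0; i < segment * size; i++) {
                sendbuf[i] = (double)i;
            }
        }

        /* Initiate the scatter; the call returns immediately. */
        MPI_Iscatter(sendbuf, segment, MPI_DOUBLE,
                     recvbuf, segment, MPI_DOUBLE,
                     0, MPI_COMM_WORLD, &request);

        /* Overlap: work that does not depend on sendbuf or recvbuf. */
        do_independent_work();

        /* Complete the operation before using recvbuf or freeing sendbuf. */
        MPI_Wait(&request, MPI_STATUS_IGNORE);

        free(sendbuf);                /* NULL on non-root ranks */
        MPI_Finalize();
        return 0;
    }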

Requirements

Product: Microsoft MPI v7

Header: Mpi.h; Mpif.h

Library: Msmpi.lib

DLL: Msmpi.dll

See also

MPI Collective Functions

MPI_Datatype

MPI_Scatter

MPI_Test

MPI_Testall

MPI_Testany

MPI_Testsome

MPI_Wait

MPI_Waitall

MPI_Waitany

MPI_Waitsome