
MPI_Comm_split(3OpenMPI)                                    MPI_Comm_split(3OpenMPI)

NAME

MPI_Comm_split - Creates new communicators based on colors and keys.

SYNTAX

C Syntax

#include <mpi.h>

int MPI_Comm_split(MPI_Comm comm, int color, int key, MPI_Comm *newcomm)

Fortran Syntax

INCLUDE 'mpif.h'

MPI_COMM_SPLIT(COMM, COLOR, KEY, NEWCOMM, IERROR)

INTEGER COMM, COLOR, KEY, NEWCOMM, IERROR

C++ Syntax

#include <mpi.h>

MPI::Intercomm MPI::Intercomm::Split(int color, int key) const

MPI::Intracomm MPI::Intracomm::Split(int color, int key) const

INPUT PARAMETERS

comm      Communicator (handle).

color     Control of subset assignment (nonnegative integer).

key       Control of rank assignment (integer).

OUTPUT PARAMETERS

newcomm   New communicator (handle).

IERROR    Fortran only: Error status (integer).

DESCRIPTION

This function partitions the group associated with comm into disjoint subgroups, one for each value of color. Each subgroup contains all processes of the same color. Within each subgroup, the processes are ranked in the order defined by the value of the argument key, with ties broken according to their rank in the old group. A new communicator is created for each subgroup and returned in newcomm. A process may supply the color value MPI_UNDEFINED, in which case newcomm returns MPI_COMM_NULL. This is a collective call, but each process is permitted to provide different values for color and key.
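
A minimal C sketch of a typical call (an assumed example, with illustrative variable names): MPI_COMM_WORLD is split by the parity of the world rank, and key is set to the world rank so each subgroup keeps the old ordering.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        int color = world_rank % 2;      /* 0 = even ranks, 1 = odd ranks */
        int key   = world_rank;          /* preserve the old rank order   */

        MPI_Comm newcomm;
        MPI_Comm_split(MPI_COMM_WORLD, color, key, &newcomm);

        int new_rank, new_size;
        MPI_Comm_rank(newcomm, &new_rank);
        MPI_Comm_size(newcomm, &new_size);
        printf("world rank %d: color %d, new rank %d of %d\n",
               world_rank, color, new_rank, new_size);

        MPI_Comm_free(&newcomm);
        MPI_Finalize();
        return 0;
    }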

When you call MPI_Comm_split on an inter-communicator, the processes on the left with the same color as those on the right combine to create a new inter-communicator. The key argument describes the relative rank of processes on each side of the inter-communicator. The function returns MPI_COMM_NULL for those colors that are specified on only one side of the inter-communicator, or for those that specify MPI_UNDEFINED as the color.

A call to MPI_Comm_create(comm, group, newcomm) is equivalent to a call to MPI_Comm_split(comm, color, key, newcomm), where all members of group provide color = 0 and key = rank in group, and all processes that are not members of group provide color = MPI_UNDEFINED. The function MPI_Comm_split allows more general partitioning of a group into one or more subgroups with optional reordering. The value of color must be nonnegative or MPI_UNDEFINED.
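
The stated equivalence can be sketched in C as follows; the helper name comm_create_via_split is hypothetical, and the sketch relies on MPI_Group_rank reporting MPI_UNDEFINED for processes that are not members of group.

    /* Hypothetical helper: emulate MPI_Comm_create(comm, group, newcomm)
     * using MPI_Comm_split, per the equivalence described above.        */
    int comm_create_via_split(MPI_Comm comm, MPI_Group group, MPI_Comm *newcomm)
    {
        int grank;
        MPI_Group_rank(group, &grank);   /* MPI_UNDEFINED if not a member */

        int color = (grank == MPI_UNDEFINED) ? MPI_UNDEFINED : 0;
        int key   = (grank == MPI_UNDEFINED) ? 0 : grank;  /* rank in group */

        /* Collective over comm; non-members get MPI_COMM_NULL in *newcomm. */
        return MPI_Comm_split(comm, color, key, newcomm);
    }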

NOTES

This is an extremely powerful mechanism for dividing a single communicating group of processes into k subgroups, with k chosen implicitly by the user (by the number of colors asserted over all the processes). Each resulting communicator will be nonoverlapping. Such a division could be useful for defining a hierarchy of computations, such as for multigrid or linear algebra.

Multiple calls to MPI_Comm_split can be used to overcome the requirement that any call have no overlap of the resulting communicators (each process is of only one color per call). In this way, multiple overlapping communication structures can be created. Creative use of the color and key in such splitting operations is encouraged.
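
For instance, two successive splits of an assumed row-major process grid of width ncols give every process both a row communicator and a column communicator; the variable names below are illustrative.

    int world_rank;
    const int ncols = 4;                 /* assumed grid width            */
    MPI_Comm row_comm, col_comm;

    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* First split: one communicator per grid row (color = row index).   */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank / ncols, world_rank, &row_comm);

    /* Second split over the same parent: one communicator per column.
     * Each process now belongs to two overlapping communicators.        */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % ncols, world_rank, &col_comm);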

Note that, for a fixed color, the keys need not be unique. It is MPI_Comm_split's responsibility to sort processes in ascending order according to this key, and to break ties in a consistent way. If all the keys are specified in the same way, then all the processes in a given color will have the same relative rank order as they did in their parent group. (In general, they will have different ranks.)

Essentially, making the key value zero for all processes of a given color means that one needn't really pay attention to the rank-order of the processes in the new communicator.

ERRORS

Almost all MPI routines return an error value; C routines as the value of the function and Fortran routines in the last argument. C++ functions do not return errors. If the default error handler is set to MPI::ERRORS_THROW_EXCEPTIONS, then on error the C++ exception mechanism will be used to throw an MPI::Exception object.

Before the error value is returned, the current MPI error handler is called. By default, this error handler aborts the MPI job, except for I/O function errors. The error handler may be changed with MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.
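
A short C sketch of checking the return value, assuming MPI has already been initialized and the error handler on the communicator has been switched to MPI_ERRORS_RETURN so that errors are reported instead of aborting the job; color and key here are placeholder values.

    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    int color = 0, key = 0;              /* placeholder values            */
    MPI_Comm newcomm;
    int rc = MPI_Comm_split(MPI_COMM_WORLD, color, key, &newcomm);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Comm_split failed: %s\n", msg);  /* needs <stdio.h> */
    }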

SEE ALSO

MPI_Comm_create  MPI_Intercomm_create  MPI_Comm_dup  MPI_Comm_free

Open MPI 1.2                        September 2006          MPI_Comm_split(3OpenMPI)



