MPI_Sendrecv
The pattern of sending and receiving we have just seen is so common that the MPI standard provides a built-in function to handle it, MPI_Sendrecv. This function is guaranteed not to deadlock for an exchange between source and dest. The send and receive buffers must be distinct, but for a simple exchange the sendcount and recvcount, and the sendtype and recvtype, are usually the same. Tags must also match appropriately.
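To see why this matters, here is a minimal mpi4py sketch contrasting a hand-ordered exchange with the equivalent single call; the even/odd pairing and the variable names are purely illustrative, not part of the standard.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
# Assume an even number of processes, paired even/odd (illustrative only)
neighbor = rank + 1 if rank % 2 == 0 else rank - 1

sendbuf = np.full(1, rank, dtype=np.intc)
recvbuf = np.zeros(1, dtype=np.intc)

# Hand-ordered exchange: each side must stagger its Send and Recv to avoid deadlock
if rank % 2 == 0:
    comm.Send([sendbuf, MPI.INT], dest=neighbor, tag=0)
    comm.Recv([recvbuf, MPI.INT], source=neighbor, tag=0)
else:
    comm.Recv([recvbuf, MPI.INT], source=neighbor, tag=0)
    comm.Send([sendbuf, MPI.INT], dest=neighbor, tag=0)

# Equivalent single call: the MPI library manages the ordering for the exchange
comm.Sendrecv([sendbuf, MPI.INT], neighbor, 0, [recvbuf, MPI.INT], neighbor, 0)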
Sendrecv
The syntax for MPI_Sendrecv is
C++
int MPI_Sendrecv(&sendbuf, sendcount, sendtype, dest, sendtag,
                 &recvbuf, recvcount, recvtype, source, recvtag, comm, &status)
Fortran
call MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag,
                  recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierr)
Python
comm.Sendrecv([sendbuf,sendtype], dest, sendtag=0, recvbuf=None, source=ANY_SOURCE, recvtag=ANY_TAG, status=None)
Python programmers should note that the above syntax is taken from the mpi4py documentation; the values shown for the keyword arguments are the defaults. As usual there is a lower-case form, sendrecv, for pickled objects that does not use the list form for the buffers.
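For instance, a generic Python object could be exchanged with the lower-case form; the dictionary and the neighbor variable here are purely illustrative:
data = {"rank": comm.Get_rank()}
received = comm.sendrecv(data, dest=neighbor, sendtag=0, source=neighbor, recvtag=0)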
Examples
We will rewrite the previous examples using MPI_Sendrecv. Each even-ranked process exchanges its rank with its neighbor to the right (rank+1), and each odd-ranked process exchanges with its neighbor to the left (rank-1), so every process receives its partner's rank.
C++
#include <iostream>
#include <mpi.h>
using namespace std;
int main(int argc, char **argv) {
    int rank, nprocs, message, neighbor;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (nprocs < 2) {
        cout<<"This program works only for at least two processes\n";
        MPI_Finalize();
        return 1;
    }
    else if (nprocs%2 != 0) {
        cout<<"This program works only for an even number of processes\n";
        MPI_Finalize();
        return 2;
    }

    // Pair even ranks with the next rank and odd ranks with the previous rank
    if (rank%2==0) {
        neighbor = rank+1;
    }
    else {
        neighbor = rank-1;
    }

    // Exchange ranks with the neighbor in a single call
    MPI_Sendrecv(&rank, 1, MPI_INT, neighbor, 0,
                 &message, 1, MPI_INT, neighbor, 0,
                 MPI_COMM_WORLD, &status);

    cout<<rank<<" "<<message<<endl;

    MPI_Finalize();
    return 0;
}
Fortran
program exchange
   use mpi
   implicit none

   integer :: rank, nprocs, neighbor, ierr
   integer :: status(MPI_STATUS_SIZE)
   integer :: message

   call MPI_Init(ierr)
   call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
   call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

   if (nprocs < 2) then
      write(6,*) "This program works only for at least two processes"
      call MPI_Finalize(ierr)
      stop
   else if ( mod(nprocs,2) /= 0 ) then
      write(6,*) "This program works only for an even number of processes"
      call MPI_Finalize(ierr)
      stop
   end if

   ! Pair even ranks with the next rank and odd ranks with the previous rank
   if ( mod(rank,2)==0 ) then
      neighbor = rank+1
   else
      neighbor = rank-1
   end if

   ! Exchange ranks with the neighbor in a single call
   call MPI_Sendrecv(rank, 1, MPI_INTEGER, neighbor, 0, message, 1, MPI_INTEGER, &
                     neighbor, 0, MPI_COMM_WORLD, status, ierr)

   write(*,*) rank, message

   call MPI_Finalize(ierr)

end program
Python
import sys
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
nprocs = comm.Get_size()
rank = comm.Get_rank()

if nprocs < 2:
    print("This program works only for at least two processes.")
    sys.exit()
elif nprocs%2 != 0:
    print("This program works only for an even number of processes.")
    sys.exit()

# Use a C-compatible integer type so the buffers match MPI.INT
message = np.zeros(1, dtype=np.intc)
rank_val = rank*np.ones(1, dtype=np.intc)

# Pair even ranks with the next rank and odd ranks with the previous rank
if rank%2 == 0:
    neighbor = rank+1
else:
    neighbor = rank-1

# Exchange ranks with the neighbor in a single call
comm.Sendrecv([rank_val, MPI.INT], neighbor, 0, [message, MPI.INT], neighbor, 0, MPI.Status())

print(rank, message)
Exercise
Download the example code in your preferred language and try it.
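The exact commands depend on your MPI installation, but typically the C++ version is compiled with mpicxx, the Fortran version with mpif90, and any version is launched with something like mpiexec -n 4 (for the Python version, mpiexec -n 4 python followed by the script name). With four processes each pair exchanges ranks, so the output should contain the lines 0 1, 1 0, 2 3, and 3 2 in some interleaved order.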