Building and Running MPI Programs
We are now ready to write our first MPI program: a "Hello World" example. Download the appropriate code below for your choice of language.
C++
Each MPI program must include the mpi.h header file. If the MPI distribution was installed correctly, the mpicc or mpicxx wrapper (or equivalent) will know the appropriate path for the header and will also link to the correct library.
C++
#include <iostream>
#include "mpi.h"

using namespace std;

int main(int argc, char *argv[]) {
    int rank, npes;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &npes);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if ( rank == 0 ) {
        cout << "Running on " << npes << " Processes\n";
    }
    cout << "Greetings from rank " << rank << "\n";

    MPI_Finalize();
}
Fortran
All new Fortran programs should use the mpi module provided by the MPI software. If the MPI distribution was installed correctly, the mpif90 or equivalent wrapper will find the module and link to the correct library.
Any recent MPI will also provide an mpi_f08 module. Its use is recommended, but we will wait until later to introduce it. This newer module takes better advantage of modern Fortran features such as derived types. In addition, the ubiquitous "ierror" parameter at the end of most argument lists becomes an optional argument in the mpi_f08 subroutine definitions. The compiler used must support at least the Fortran 2008 standard.
Fortran
program hello
   use mpi
   implicit none

   integer :: myrank, nprocs
   integer :: err

   call MPI_INIT(err)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myrank, err)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, err)

   if ( myrank == 0 ) then
      print *, 'Running on ', nprocs, ' Processes'
   endif
   print *, 'Greetings from process ', myrank

   call MPI_FINALIZE(err)

end program
Python
The mpi4py package consists of several modules. Many codes will need only to import the MPI module.
Python
from mpi4py import MPI
import sys

myrank = MPI.COMM_WORLD.Get_rank()
nprocs = MPI.COMM_WORLD.Get_size()

if myrank == 0:
    sys.stdout.write("Running on %d processes\n" % (nprocs))

sys.stdout.write("Greetings from process %d\n" % (myrank))
Build It
If using an HPC system, log in to the appropriate frontend, such as login.hpc.virginia.edu. If the system uses a software module system, run
module load gcc openmpi
For Python, also add
module load <python distribution>
This will also load the correct MPI libraries. You must have already installed mpi4py. Activate the conda environment if appropriate.
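If mpi4py is not yet installed, one common sequence is sketched below; the anaconda module name and the pip-based install are assumptions that depend on your site and Python setup.
# Load a compiler, an MPI implementation, and a Python distribution (module names are site-specific).
module load gcc openmpi anaconda
# With the desired conda environment active, build mpi4py against the loaded MPI.
pip install mpi4py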
Use mpiexec with the -np option only on the frontends, and only for short tests!
Compiling C
mpicc -o mpihello mpi1.c
Compiling C++
mpicxx -o mpihello mpi1.cxx
Compiling Fortran
mpif90 -o mpihello mpi1.f90
Execute It
C/C++/Fortran
mpiexec -np 4 ./mpihello
Python
mpiexec -np 4 python mpi1.py
Submit It
For HPC users, write a Slurm script to run your program. Request 1 node and 10 cores on the standard partition; the process manager will know how many cores were requested from Slurm. Within the script, launch the program with srun (a sample script is sketched after the commands below).
srun ./mpihello
or
srun python mpi1.py
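A minimal script along these lines might look like the following sketch. The partition name and core count come from the exercise; the time limit, job name, and account placeholder are assumptions to adjust for your site, and the module names should match whatever you used to build the code.
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=10
#SBATCH --partition=standard
#SBATCH --time=00:10:00              # assumed time limit; adjust as needed
#SBATCH --job-name=mpihello          # assumed job name
##SBATCH --account=<your_allocation> # uncomment and fill in if your site requires it

# Load the same modules used to build the program (site-specific names).
module load gcc openmpi

srun ./mpihello
For the Python version, replace the last line with srun python mpi1.py and load your Python distribution as well.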
Using the Intel Compilers and MPI
The Intel compilers, MPI, and math libraries (MKL) are widely used for high-performance applications, including MPI codes, especially on Intel-architecture systems. The appropriate MPI wrapper compilers are
# C
mpiicc -o mpihello mpi1.c
# C++
mpiicpc -o mpihello mpi1.cxx
# Fortran
mpiifort -o mpihello mpi1.f90
Do not use mpicc, mpicxx, or mpif90 with the Intel compilers. Those wrappers are also provided by Intel, but they invoke the GCC compiler suite and can result in conflicts.