Distributed parallel programs can be developed using this library, mpi4py (not in Python's standard library; an MPI implementation must be installed). It follows the MPI standard to some degree, but simplifies the API in many places, making development easier at the price of performance. It is not advisable when interprocess communication is fine-grained, given the (much) larger overhead compared with C/C++ or Fortran implementations.
all_to_all.py
: simple example of an all-to-all communication.

halo.py
: halo exchange example; illustrates a 2D Cartesian grid communicator and Sendrecv.

pi.py
: implementation for computing pi as suggested in the slides.

reduce.py
: example of numpy array reduction.

ring.py
: implementation of a "token" sent around a ring.

run_ring.sh
: Bash script illustrating how to run the ring program.

ring.pbs
: PBS script to run the ring program as a job.

round_about.py
: another ring-type implementation.

exchange.py
: even ranks send while odd ranks receive, and vice versa.

mpi_count.py
: count amino acids in a long sequence, distributing the work over processes.

large_dna.txt
: example data file to use with mpi_count.py.

mpifitness.py
: application to time various MPI communications.

pi_mpipool.py
: illustration of using mpi4py.futures.MPIPoolExecutor to compute the value of pi using a quadrature method.

run_pi_mpipool.sh
: Bash script to run pi_mpipool.py.

file_trafficker.py
: file write/read test application that can run serially, multi-threaded, multi-process, or with MPI.

mpi_io.py
: timing of MPI-IO operations.

translate_bin.py
: translate binary to ASCII data.
The MPIPoolExecutor applications can be run using the command below for MPICH2:
$ mpiexec -n 1 -usize 3 ./file_trafficker.py --mode mpi ...
For Intel MPI/Open MPI:
$ mpiexec -n 3 python -m mpi4py.futures ./file_trafficker.py --mode mpi ...
Either command runs the application with 2 worker processes; the remaining process acts as the master.