
Getting started

MPI Implementation
Configure and Build
Using Boost.MPI

Getting started with Boost.MPI requires a working MPI implementation, a recent version of Boost, and some configuration information.

MPI Implementation

To get started with Boost.MPI, you will first need a working MPI implementation. There are many conforming MPI implementations available, and Boost.MPI should work with any of them, although it has only been tested extensively with a few, such as Open MPI, MPICH2, and LAM/MPI.

You can test your implementation using the following simple program, which passes a message from one processor to another. Each processor prints a message to standard output.

#include <mpi.h>
#include <iostream>

int main(int argc, char* argv[])
{
  MPI_Init(&argc, &argv);

  // Determine which process we are within MPI_COMM_WORLD.
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    // Process 0 sends the integer 17 to process 1 with tag 0.
    int value = 17;
    int result = MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    if (result == MPI_SUCCESS)
      std::cout << "Rank 0 OK!" << std::endl;
  } else if (rank == 1) {
    // Process 1 receives the value and checks that it arrived intact.
    int value;
    int result = MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                          MPI_STATUS_IGNORE);
    if (result == MPI_SUCCESS && value == 17)
      std::cout << "Rank 1 OK!" << std::endl;
  }
  MPI_Finalize();
  return 0;
}

You should compile and run this program on two processors. To do this, consult the documentation for your MPI implementation. With LAM/MPI, for instance, you compile with the mpiCC or mpic++ compiler wrapper, boot the LAM/MPI daemon, and run your program via mpirun. For instance, if your program is called mpi-test.cpp, use the following commands:

mpiCC -o mpi-test mpi-test.cpp
lamboot
mpirun -np 2 ./mpi-test
lamhalt
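
With Open MPI, by contrast, there is no daemon to boot; a sketch of the equivalent session (wrapper names can vary slightly between installations) would be:

mpic++ -o mpi-test mpi-test.cpp
mpirun -np 2 ./mpi-test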

When you run this program, you will see both Rank 0 OK! and Rank 1 OK! printed to the screen. However, they may be printed in any order and may even overlap each other. The following output is perfectly legitimate for this MPI program:

Rank Rank 1 OK!
0 OK!

If your output looks something like the above, your MPI implementation appears to be working with a C++ compiler and we're ready to move on.

Configure and Build

Build Environment

Like the rest of Boost, Boost.MPI uses version 2 of the Boost.Build system for configuring and building the library binary.

Please refer to the general Boost installation instructions for Unix variants (including Linux and macOS) or Windows. The simplified build instructions should apply on most platforms, with a few MPI-specific modifications described below.

Bootstrap

As described in the Boost installation instructions, go to the root of your Boost source distribution and run the bootstrap script (./bootstrap.sh for Unix variants or bootstrap.bat for Windows). This will generate a project-config.jam file in the root directory. Open it in your favourite text editor and add the following line:

using mpi ;
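
After this edit, the relevant part of project-config.jam might look roughly like the following sketch; the toolset line is whatever bootstrap detected on your machine, so treat this purely as an illustration:

# Toolset detected by the bootstrap script (yours will likely differ).
using gcc ;

# Added by hand: let Boost.Build auto-configure Boost.MPI.
using mpi ;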

Alternatively, you can explicitly provide the list of Boost libraries you want to build. Please refer to the --help option of the bootstrap script.
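
For instance, a bootstrap invocation along the following lines restricts the build to Boost.MPI and the libraries it works with (the python entry is only needed for the Python bindings); confirm the exact option syntax with ./bootstrap.sh --help:

$ ./bootstrap.sh --with-libraries=mpi,serialization,python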

Setting up your MPI Implementation

First, you need to scan the include/boost/mpi/config.hpp file and check whether any settings need to be modified for your MPI implementation or preferences.

In particular, note the BOOST_MPI_HOMOGENEOUS macro, which you will need to comment out if you plan to run on a heterogeneous set of machines. See the optimization notes below.
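
For reference, the relevant part of the header looks roughly like this sketch (the comment is paraphrased; check your copy of the file for the exact wording):

// include/boost/mpi/config.hpp (excerpt, paraphrased)
// Leave this defined when every node uses the same binary representation;
// comment it out to run on a heterogeneous set of machines.
#define BOOST_MPI_HOMOGENEOUS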

Most MPI implementations require specific compilation and link options. To hide these options from the user, most implementations provide compiler wrappers which silently pass those options to the compiler.

Depending on your MPI implementation, some work might be needed to tell Boost which specific MPI options to use. This is done through the using mpi ; directive in the project-config.jam file.

The general form is the following (do not forget to leave spaces around : and before ;):

using mpi
   : [<MPI compiler wrapper>]
   : [<compilation and link options>]
   : [<mpi runner>] ;

  • If you're lucky

For those who use MPICH2, Open MPI, or one of their derivatives, configuration can be almost automatic. In fact, if your mpicxx command is in your path, you just need to use:

using mpi ;

The directive will find the wrapper and deduce the options to use.

  • If your wrapper is not in your path

...or if it does not have a usual wrapper name, you will need to tell the build system where to find it:

using mpi : /opt/mpi/bullxmpi/1.2.8.3/bin/mpicc ;

  • If your wrapper is really eccentric

...or does not exist at all (it happens), you need to provide the compilation and link options to the build environment using jam directives. For example, the following could be used for a specific Intel MPI installation:

using mpi : mpiicc :
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib
      <library-path>/softs/intel/impi/5.0.1.035/intel64/lib/release_mt
      <include>/softs/intel/impi/5.0.1.035/intel64/include
      <find-shared-library>mpifort
      <find-shared-library>mpi_mt
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt ;

To do that, you need to determine the libraries and include directories associated with your environment. You can refer to your specific MPI environment's documentation. Most of the time, though, your wrapper has an option that provides this information; it usually starts with --show:

$ mpiicc -show
icc -I/softs/intel//impi/5.0.3.048/intel64/include -L/softs/intel//impi/5.0.3.048/intel64/lib/release_mt -L/softs/intel//impi/5.0.3.048/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /softs/intel//impi/5.0.3.048/intel64/lib/release_mt -Xlinker -rpath -Xlinker /softs/intel//impi/5.0.3.048/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/5.0/intel64/lib/release_mt -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/5.0/intel64/lib -lmpifort -lmpi -lmpigi -ldl -lrt -lpthread
$

Open MPI-based wrappers provide the same information through their --showme options:

$ mpicc --showme
icc -I/opt/mpi/bullxmpi/1.2.8.3/include -pthread -L/opt/mpi/bullxmpi/1.2.8.3/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$ mpicc --showme:compile
-I/opt/mpi/bullxmpi/1.2.8.3/include -pthread
$ mpicc --showme:link
-pthread -L/opt/mpi/bullxmpi/1.2.8.3/lib -lmpi -ldl -lm -lnuma -Wl,--export-dynamic -lrt -lnsl -lutil -lm -ldl
$

To see the results of MPI auto-detection, pass --debug-configuration on the bjam command line.
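
For instance, a command along these lines shows what was detected (the grep filter is just a convenience and can be dropped):

$ ./b2 --debug-configuration 2>&1 | grep -i mpi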

  • If you want to run the regression tests

...Which is a good thing.

The (optional) third argument configures Boost.MPI for running the regression tests. It specifies the executable used to launch jobs (the default is mpirun), followed by any arguments needed to tell it how many processes to start (default: -np). With the default parameters, for instance, the test harness will execute, e.g.,

mpirun -np 4 all_gather_test

Some implementations provide an alternative launcher that can be more convenient. For example, Intel MPI provides mpiexec.hydra:

$ mpiexec.hydra -np 4 all_gather_test

which does not require any daemon to be running (as opposed to their mpirun command). Such a launcher needs to be specified explicitly, though:

using mpi : mpiicc :
      .....
 : mpiexec.hydra -n  ;

Build and Install

To build the whole Boost distribution:

$ cd <boost distribution>
$ ./b2 install
Tip: if you have a multi-CPU machine (say, 24 cores), you can parallelize the build:

$ cd <boost distribution>
$ ./b2 -j24 install

Installation of Boost.MPI can be performed in the build step by specifying install on the command line and (optionally) providing an installation location, e.g.,

$ ./b2 install

This command will install libraries into a default system location. To change the path where libraries will be installed, add the option --prefix=PATH.
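
For example, to install under a hypothetical /opt/boost prefix:

$ ./b2 install --prefix=/opt/boost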

Then, you can run the regression tests with:

$ cd <boost distribution>/libs/mpi/test
$ ....../b2

Using Boost.MPI

To build applications based on Boost.MPI, compile and link them as you normally would for MPI programs, but remember to link against the boost_mpi and boost_serialization libraries, e.g.,

mpic++ -I/path/to/boost/mpi my_application.cpp -Llibdir \
  -lboost_mpi-gcc-mt-1_35 -lboost_serialization-gcc-mt-1_35
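
For comparison, here is a sketch of the earlier point-to-point test written against the Boost.MPI C++ interface; a link command like the one above (with the library names adjusted to your Boost version and toolset) builds it:

#include <boost/mpi.hpp>
#include <iostream>
namespace mpi = boost::mpi;

int main(int argc, char* argv[])
{
  mpi::environment env(argc, argv);  // initializes MPI, finalizes it on destruction
  mpi::communicator world;           // wraps MPI_COMM_WORLD

  if (world.rank() == 0) {
    world.send(1, 0, 17);            // send the value 17 to rank 1 with tag 0
    std::cout << "Rank 0 OK!" << std::endl;
  } else if (world.rank() == 1) {
    int value;
    world.recv(0, 0, value);         // receive from rank 0 with tag 0
    if (value == 17)
      std::cout << "Rank 1 OK!" << std::endl;
  }
  return 0;
}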

If you plan to use the Python bindings for Boost.MPI in conjunction with the C++ Boost.MPI, you will also need to link against the boost_mpi_python library, e.g., by adding -lboost_mpi_python-gcc-mt-1_35 to your link command. This step will only be necessary if you intend to register C++ types or use the skeleton/content mechanism from within Python.

