Boost C++ Libraries

"...one of the most highly regarded and expertly designed C++ library projects in the world." (Herb Sutter and Andrei Alexandrescu, C++ Coding Standards)


Chapter 19. Boost.MPI

Douglas Gregor

Matthias Troyer

Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt )

Table of Contents

Introduction
Getting started
MPI Implementation
Configure and Build
Installing and Using Boost.MPI
Testing Boost.MPI
Tutorial
Point-to-Point communication
Collective operations
Managing communicators
Separating structure from content
Performance optimizations
Mapping from C MPI to Boost.MPI
Reference
Header <boost/mpi.hpp>
Header <boost/mpi/allocator.hpp>
Header <boost/mpi/collectives.hpp>
Header <boost/mpi/collectives_fwd.hpp>
Header <boost/mpi/communicator.hpp>
Header <boost/mpi/config.hpp>
Header <boost/mpi/datatype.hpp>
Header <boost/mpi/datatype_fwd.hpp>
Header <boost/mpi/environment.hpp>
Header <boost/mpi/exception.hpp>
Header <boost/mpi/graph_communicator.hpp>
Header <boost/mpi/group.hpp>
Header <boost/mpi/intercommunicator.hpp>
Header <boost/mpi/nonblocking.hpp>
Header <boost/mpi/operations.hpp>
Header <boost/mpi/packed_iarchive.hpp>
Header <boost/mpi/packed_oarchive.hpp>
Header <boost/mpi/python.hpp>
Header <boost/mpi/request.hpp>
Header <boost/mpi/skeleton_and_content.hpp>
Header <boost/mpi/skeleton_and_content_fwd.hpp>
Header <boost/mpi/status.hpp>
Header <boost/mpi/timer.hpp>
Python Bindings
Quickstart
Transmitting User-Defined Data
Collectives
Skeleton/Content Mechanism
C++/Python MPI Compatibility
Reference
Design Philosophy
Performance Evaluation
Revision History
Acknowledgments

Introduction

Boost.MPI is a library for message passing in high-performance parallel applications. A Boost.MPI program is one or more processes that can communicate either via sending and receiving individual messages (point-to-point communication) or by coordinating as a group (collective communication). Unlike communication in threaded environments or using a shared-memory library, Boost.MPI processes can be spread across many different machines, possibly with different operating systems and underlying architectures.

Boost.MPI is not a completely new parallel programming library. Rather, it is a C++-friendly interface to the standard Message Passing Interface (MPI), the most popular library interface for high-performance, distributed computing. MPI defines a library interface, available from C, Fortran, and C++, for which there are many MPI implementations. Although there exist C++ bindings for MPI, they offer little functionality over the C bindings. The Boost.MPI library provides an alternative C++ interface to MPI that better supports modern C++ development styles, including complete support for user-defined data types and C++ Standard Library types, arbitrary function objects for collective algorithms, and the use of modern C++ library techniques to maintain maximal efficiency.

At present, Boost.MPI supports the majority of functionality in MPI 1.1. The thin abstractions in Boost.MPI allow one to easily combine it with calls to the underlying C MPI library. Boost.MPI currently supports:

  • Communicators: Boost.MPI supports the creation, destruction, cloning, and splitting of MPI communicators, along with manipulation of process groups.
  • Point-to-point communication: Boost.MPI supports point-to-point communication of primitive and user-defined data types with send and receive operations, with blocking and non-blocking interfaces.
  • Collective communication: Boost.MPI supports collective operations such as reduce and gather with both built-in and user-defined data types and function objects.
  • MPI Datatypes: Boost.MPI can build MPI data types for user-defined types using the Boost.Serialization library.
  • Separating structure from content: Boost.MPI can transfer the shape (or "skeleton") of complex data structures (lists, maps, etc.) and then separately transfer their content. This facility optimizes for cases where the data within a large, static data structure needs to be transmitted many times.

Boost.MPI can be accessed either through its native C++ bindings, or through its alternative, Python interface.

Last revised: June 25, 2013 at 22:26:52 GMT
