

Running Boost Regression Tests


That's it! You don't even need a CVS client installed.


Running tests

To start a regression run, simply invoke the script, providing it with two arguments: your runner ID [1] [2] and the toolsets to test with [3].

For example:

python --runner=Metacomm --toolsets=gcc,vc7

If you are interested in seeing all available options, run the script with the --help option. See also the Advanced use section below.

Note: If you are behind a firewall/proxy server, everything should still "just work". In the rare cases when it doesn't, you can explicitly specify the proxy server parameters through the --proxy option, e.g.:

python ... --proxy=


The regression run procedure will:

The report merger process, running continuously on the MetaCommunications site, will merge all submitted test runs and publish them as the Boost-wide reports.

Advanced use

Providing detailed information about your environment

Once you have your regression results displayed in the Boost-wide reports, you may consider providing a bit more information about yourself and your test environment. This additional information will be presented in the reports on a page associated with your runner ID.

By default, the page's content is just a single line coming from the comment.html file in your directory, specifying the tested platform. You can put online a more detailed description of your environment, such as your hardware configuration, compiler builds, and test schedule, by simply altering the file's content. Also, please consider providing your name and email address for cases where Boost developers have questions specific to your particular set of results.
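For instance, a comment.html along these lines would do (the platform, schedule, and contact details below are made-up placeholders, not taken from any real runner):

```html
<!-- comment.html: shown on the reports page associated with your runner ID -->
Win2000 (SP4), Intel C++ 8.0 (placeholder configuration)<br>
Full runs nightly; incremental runs every 4 hours.<br>
Contact: your.name@example.com
```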

Incremental runs

You can run the script in incremental mode [4] by simply passing it the identically named command-line flag:

python ... --incremental

Dealing with misbehaved tests/compilers

Depending on the environment/C++ runtime support library the test is compiled with, a test failure or termination may cause a dialog window to appear, requiring human intervention to proceed. Moreover, the test (or even the compiler itself) can fall into an infinite loop, or simply run for too long. To allow the script to take care of these obstacles, add the --monitored flag to its invocation:

python ... --monitored

That's it. Knowing your intentions, the script will be able to automatically deal with the listed issues [5].

Getting sources from CVS

If you already have a CVS client installed and configured, you might prefer to get the sources directly from the Boost CVS repository. To communicate this to the script, you just need to pass it your SourceForge user ID using the --user option; for instance:

python ... --user=agurtovoy

You can also specify the user as anonymous, requesting anonymous CVS access. Note, though, that the files obtained this way tend to lag behind the actual CVS state by several hours, sometimes up to twelve. By contrast, the tarball the script downloads by default is at most one hour behind.

Integration with a custom driver script

Even if you've already been using a custom driver script, and for some reason you don't want to turn the entire test cycle over to the regression script, getting your regression results into the Boost-wide reports is still easy!

In fact, it's just a matter of modifying your script to perform two straightforward operations:

  1. Timestamp file creation needs to be done before the CVS update/checkout. The file's location doesn't matter (nor does its content), as long as you know how to access it later. Making your script do something as simple as echo >timestamp would work just fine.

  2. Collecting and uploading logs can be done any time after process_jam_log's run, and is as simple as an invocation of the local copy of the $BOOST_ROOT/tools/regression/xsl_reports/runner/ script that was just obtained from CVS along with the rest of the sources. You'd need to provide it with the following three arguments:

    --locate-root   directory to scan for "test_log.xml" files
    --runner        runner ID (e.g. "Metacomm")
    --timestamp     path to a file whose modification time will be used
                    as the timestamp of the run ("timestamp" by default)

    For example, assuming that the run's resulting binaries are in the $BOOST_ROOT/bin directory (the default Boost.Build setup), the invocation might look like this:

    python $BOOST_ROOT/tools/regression/xsl_reports/runner/
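What these two operations amount to can be sketched in Python (the function names here are illustrative only, not part of the actual Boost scripts):

```python
import os
import time


def make_timestamp(path="timestamp"):
    # Step 1: create/refresh the timestamp file *before* the CVS
    # update/checkout; its modification time marks the start of the run.
    with open(path, "w") as f:
        f.write("\n")


def collect_test_logs(locate_root):
    # Step 2 (the scanning part of --locate-root): find every
    # "test_log.xml" file under the run's binaries directory.
    logs = []
    for dirpath, _, filenames in os.walk(locate_root):
        for name in filenames:
            if name == "test_log.xml":
                logs.append(os.path.join(dirpath, name))
    return logs


def run_timestamp(path="timestamp"):
    # The file's modification time is used as the timestamp of the run.
    return os.path.getmtime(path)
```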

Patching Boost sources

You might encounter an occasional need to make local modifications to the Boost codebase before running the tests, without disturbing the automatic nature of the regression process. To implement this, do the following:

  1. Codify applying the desired modifications to the sources located in the ./boost subdirectory in a single executable script named patch_boost (patch_boost.bat on Windows).
  2. Place the script in the directory.

The driver will check for the existence of the patch_boost script, and, if found, execute it after obtaining the Boost sources.
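A minimal sketch of that check, in Python (the function names are made up for illustration, not taken from the actual driver):

```python
import os
import subprocess
import sys


def find_patch_script(directory):
    # The driver looks for "patch_boost" ("patch_boost.bat" on Windows)
    # in the given directory.
    name = "patch_boost.bat" if sys.platform == "win32" else "patch_boost"
    candidate = os.path.join(directory, name)
    return candidate if os.path.exists(candidate) else None


def maybe_apply_patch(directory):
    # If the script exists, execute it after the Boost sources
    # have been obtained; otherwise do nothing.
    script = find_patch_script(directory)
    if script is None:
        return False
    subprocess.check_call([script], cwd=directory)
    return True
```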


Please send all comments/suggestions regarding this document and the testing procedure itself to the Boost Testing list.


[1]If you are running regressions alternately with different sets of compilers (e.g. for Intel in the morning and for GCC at the end of the day), you need to provide a different runner ID for each of these runs, e.g. your_name-intel and your_name-gcc.
[2]The limitations of the reports' format/medium impose a direct dependency between the number of compilers you are testing with and the amount of space available for your runner id. If you are running regressions for a single compiler, please make sure to choose a short enough id that does not significantly disturb the reports' layout.
[3]If --toolsets option is not provided, the script will try to use the platform's default toolset (gcc for most Unix-based systems).

[4]By default, the script runs in what is known as full mode: on each invocation, all the files left in place by the previous run -- including the binaries for the successfully built tests and libraries -- are deleted, and everything is rebuilt from scratch. By contrast, in incremental mode the existing binaries are left intact, and only the tests and libraries whose source files have changed since the previous run are re-built and re-tested.

The main advantage of incremental runs is a significantly shorter turnaround time, but unfortunately they don't always produce reliable results. Some types of changes to the codebase (changes to the bjam testing subsystem in particular) often require switching to full mode for one cycle in order to produce trustworthy reports.

As a general guideline, if you can afford it, testing in full mode is preferable.
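The staleness check at the heart of incremental mode can be illustrated as follows (a deliberate simplification -- the real dependency analysis is performed by bjam):

```python
import os


def needs_rebuild(source, binary):
    # Incremental mode in a nutshell: rebuild only when the binary is
    # missing or the source is newer than the existing binary.
    if not os.path.exists(binary):
        return True
    return os.path.getmtime(source) > os.path.getmtime(binary)
```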

[5]Note that at the moment this functionality is available only if you are running on a Windows platform. Contributions are welcome!