Boost C++ Libraries

"...one of the most highly regarded and expertly designed C++ library projects in the world."
    - Herb Sutter and Andrei Alexandrescu, C++ Coding Standards


Generating Library Status Tables

Purpose

Any time one considers using a library as large and complex as the Boost libraries, one needs a way of validating that the library functions in one's environment. This should be done when the library is installed, and any time questions arise regarding its applicability or usage.

The procedures described here permit a user to run any combination of tests on any or all libraries and generate a set of convenient tables which show which libraries pass which tests under what conditions.

Preliminaries

Generating these tables requires two utility programs: process_jam_log and library_status. These can be built by changing to the directory tools/regression/build and invoking bjam. If all goes well, the utility programs will be found in the directory dist/bin. From there they should be moved to a directory in the current path.
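After building and copying the utilities into place, a quick sanity check along the following lines (a sketch; where you install the binaries is up to you) confirms that both are reachable:

```shell
# Sanity check: are the two utilities reachable on the PATH?
status=""
for tool in process_jam_log library_status; do
    if command -v "$tool" >/dev/null 2>&1; then
        status="$status $tool=found"
    else
        # If missing, copy the binary from tools/regression/dist/bin
        # into a directory on your PATH.
        status="$status $tool=missing"
    fi
done
echo "checked:$status"
```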

Running Tests for One Library

  1. Start from your command line environment.
  2. Set the current directory to ../libs/<library name>/test.
  3. Invoke ../../../tools/regression/library_test with no arguments.
  4. This will display a short help message describing how to set the command line arguments for the compilers and variants you want to appear in the final table.
  5. Setting these arguments requires rudimentary knowledge of bjam usage. If you've arrived at this page, you've hopefully gained the required knowledge during the installation and library build process.
  6. Rerun the above command with the arguments set accordingly.
  7. When the command terminates, there should be a file named "library_status.html" in the current directory.
  8. Display this file with any web browser.
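The steps above can be condensed into a short session. The sketch below is a dry run that only echoes each command rather than executing it; the library name, toolset, and variant values are examples, and $BOOST_ROOT is assumed to point at your Boost source tree:

```shell
# Dry run of the per-library procedure: print each command instead of running it.
log=""
run() { log="$log + $*"; echo "+ $*"; }

run cd "$BOOST_ROOT/libs/regex/test"
# An invocation with no arguments prints the help/usage message:
run ../../../tools/regression/library_test
# Rerun with the desired toolset(s) and variant(s):
run ../../../tools/regression/library_test --toolset=msvc-7.1 variant=debug,release
# Finally, open library_status.html from the test directory in a web browser.
```

Removing the `run` wrapper turns the dry run into the real sequence.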
There should appear a table similar to the following for the regex library.
Test Name                 msvc-7.1
                          debug                              release
                          link-static      threading-multi   link-static      threading-multi
                          threading-multi                    threading-multi
------------------------  ---------------  ----------------  ---------------  ---------------
bad_expression_test       Missing          Warn              Missing          Warn
captures                  Missing          Fail              Missing          Fail
captures_test             Missing          Warn              Missing          Warn
concept_check             Missing          Pass              Missing          Pass
icu_concept_check         Missing          Pass              Missing          Pass
object_cache_test         Missing          Warn              Missing          Warn
posix_api_check           Missing          Warn              Missing          Warn
posix_api_check_cpp       Missing          Pass              Missing          Pass
recursion_test            Missing          Warn              Missing          Warn
regex_config_info         Missing          Pass              Missing          Pass
regex_dll_config_info     Missing          Pass              Missing          Pass
regex_regress             Pass*            Missing           Pass*            Missing
regex_regress_dll         Missing          Pass*             Missing          Pass*
regex_regress_threaded    Missing          Pass              Missing          Pass
static_mutex_test         Missing          Pass              Missing          Pass
test_collate_info         Missing          Warn              Missing          Warn
unicode_iterator_test     Missing          Warn              Missing          Warn
wide_posix_api_check_c    Missing          Warn              Missing          Warn
wide_posix_api_check_cpp  Missing          Warn              Missing          Warn

This table was generated by invoking the following command line:

../../../tools/regression/library_test --toolset=msvc-7.1 variant=debug,release

from within the .../libs/regex/test directory.

This table shows the regex test results for both debug and release builds of the library. It also shows that one of the tests is run specifically with the static-linking/multi-threading versions of the runtime libraries. The cells marked "Missing" correspond to tests that were not run, usually because the corresponding Jamfile.v2 excludes the test for the given combination of compiler and build attributes. In this example, all tests were run with the same compiler; if additional compilers were used, they would appear as additional columns in the table.

The table above is just an illustration, so the links don't actually point to anything. In the table you generated, the links will display a page describing any errors, warnings, or other available information about the tests. If a test passes, there is usually no additional information and hence no link.

The tables are cumulative. That is, if you run one set of tests now and tests with different attributes later, the table will contain all the results to date. The test results are stored in ../bin.v2/libs/test/<library>/.... To reinitialize the test results to empty, delete the corresponding files in this directory.
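For example, reinitializing the results for one library means deleting its cached result files. The sketch below only prints the command (a dry run, with a hypothetical library name); remove the echo to actually delete:

```shell
# Reinitialize cumulative results for one library (dry run: prints the command).
LIBRARY=regex                            # hypothetical example; substitute yours
CMD="rm -r ../bin.v2/libs/test/$LIBRARY" # path as described in the text above
echo "$CMD"                              # drop the echo to really delete
```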

The procedure above assumes that the tables are generated within the directory ../libs/<library>/test. This is the most common case, since this directory contains the Jamfile.v2 as well as the source code used by official Boost testers. However, this is just a convention. The table can be generated for other directories within the library. One possibility would be to generate the table for all the examples in ../libs/<library>/example. Or one might have a special directory of performance tests which take a long time to run and hence are not suitable for running by official Boost testers. Just remember that the library status table is generated in the directory from which the library_test command is invoked.

Running Tests for All Libraries

For those with *nix or cygwin command line shells, there is a shell script that can be run from the boost root directory:

tools/regression/library_test_all

The command line arguments are the same as for running the tests for one library. This script creates the html files in all the test directories, as well as an html page in the status directory named library_status_summary.html, which can be used to browse through the results for all tests in all libraries.
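From the Boost root directory the invocation might look like the following sketch (a dry run that just echoes the command; the toolset and variant values are examples, since the script accepts the same arguments as library_test):

```shell
# Dry run: print the all-libraries invocation rather than executing it.
CMD="tools/regression/library_test_all --toolset=gcc variant=debug,release"
echo "$CMD"   # drop the echo and quoting to actually run it from the boost root
```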


Copyright 2007 Robert Ramey. Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)

Revised $Date: 2007-11-23 12:03:14 -0500 (Fri, 23 Nov 2007) $