The procedures described here permit a user to run any combination of tests on any or all libraries and generate a set of convenient tables which show which libraries pass which tests under what conditions.
library_status. These can be built by moving to the directory
tools/regression/build and invoking bjam. If all goes well, these utility programs will be found in the directory
dist/bin. From there they should be moved to a place in the current path.
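The build sequence just described can be sketched as a dry-run shell session. The relative paths (tools/regression/build, dist/bin) come from the text; the boost root placeholder and the install destination are assumptions:

```shell
# Dry-run sketch of the build steps above. /path/to/boost is a placeholder,
# and the commands are echoed rather than executed, since bjam may not be
# installed on this machine.
BOOST_ROOT=/path/to/boost
echo "cd $BOOST_ROOT/tools/regression/build && bjam"        # build the utilities
echo "cp $BOOST_ROOT/dist/bin/* /a/directory/on/your/PATH/" # install them
```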
This table was generated by invoking the library_test command
from within the .../libs/regex/test directory.
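The exact command line is not preserved in this excerpt. As a hedged sketch, an invocation of the library_test command (named later in this section) might look like the following; the toolset argument is purely an assumption:

```shell
# Hypothetical invocation sketch: library_test is the command named in this
# section, but the arguments shown here are assumptions, not the original
# command line. Echoed as a dry run so the sketch works without the tool.
TEST_DIR=libs/regex/test
echo "cd $TEST_DIR && library_test --toolset=gcc"
```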
This table shows the regex test results for both debug and release
versions of the library. It also shows that one of the tests is
run specifically against the static-linking/multi-threading versions of the
runtime libraries. The cells marked "Missing" correspond to tests that were
not run for one reason or another. This is usually because the
Jamfile.v2 excludes this test for the given
combination of compiler and build attributes. In this example, all tests
were run with the same compiler. If additional compilers were used, they
would appear as more columns in the table.
The table above is just an illustration, so the links don't actually point to anything. In the table you generate, the links will lead to a page describing any errors, warnings, or other available information about the tests. If a test passes, there is usually no additional information and hence no link.
The tables are cumulative. That is, if you run one set of tests now and
another set with different attributes later, the table will contain all the
results to date. The test results are stored in
../bin.v2/libs/test/<library>/.... To reinitialize the
test results to empty, delete the corresponding files in this
directory.
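Reinitializing the results can be sketched as follows, assuming the storage path given above and using "regex" as an example library name; the echo is a dry run, so substitute the real rm -r to actually delete:

```shell
# Dry-run sketch: build the results path described above and show the
# delete command; replace echo with rm -r to actually reinitialize.
LIBRARY=regex                               # example library name
RESULTS_DIR="../bin.v2/libs/test/$LIBRARY"  # path pattern from the text
echo "rm -r $RESULTS_DIR"
```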
The procedure above assumes that the tables are generated within
../libs/<library>/test. This is the most
common case, since this directory contains the Jamfile.v2 as
well as the source code that is used by official boost testers. However,
this is just a convention. The table can be generated for other directories
within the library. One possibility would be to generate the table for all
the examples in
../libs/<library>/example. Or one might
have a special directory of performance tests which take a long time to run
and hence are not suitable for running by official boost testers. Just
remember that the library status table is generated in the directory from which
the library_test command is invoked.
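Since the table lands in whatever directory the command is run from, generating one for a library's examples is just a matter of invoking library_test from there. A dry-run sketch, where the paths are assumed placeholders:

```shell
# Dry-run sketch: run library_test from an example directory so that the
# resulting table covers the examples; the path here is an assumption.
RUN_DIR=libs/regex/example
echo "cd $RUN_DIR && library_test"   # table is written into $RUN_DIR
```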
The command line arguments are the same as for running the test for one
library. This script creates all the html files in all the test directories
as well as an html page in the
status directory named
library_status_summary.html. This can be used to browse
through all the test results for all tests in all libraries.
Copyright 2011 Bryce Lelbach.
Copyright 2007-2011 Robert Ramey.
Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)