This is not possible, since a Jamfile does not have a "current" value of any
feature, be it toolset, build variant, or anything else. In a single invocation of
bjam, any given main target can be built with several property sets.
For example, a user can request two build variants on the command line. Or one library
is built as shared when used from one application, and as static when used from another.
Obviously, a Jamfile is read only once, so in general there's no single value of a feature
you can access in a Jamfile.
A feature has a specific value only when building a target, and there are two ways you can use that value:
The most likely case is that you're trying to compile the same file twice, with almost the same, but differing, properties. For example:
    exe a : a.cpp : <include>/usr/local/include ;
    exe b : a.cpp ;
The above snippet requires two different compilations of 'a.cpp', which differ only in the 'include' property. Since the 'include' property is free, Boost.Build can't generate the two object files into different directories. On the other hand, it's dangerous to compile the file only once — maybe you really do want to compile it with different includes.
To solve this issue, you need to decide whether the file should be compiled once or twice.
To compile the file only once, make sure that the properties are the same:
    exe a : a.cpp : <include>/usr/local/include ;
    exe b : a.cpp : <include>/usr/local/include ;
If changing the properties is not desirable, for example if the 'a' and 'b' targets have other sources which need specific properties, separate 'a.cpp' into its own target:
    obj a_obj : a.cpp : <include>/usr/local/include ;
    exe a : a_obj ;
To compile the file twice, you can make the object file local to the main target:
    exe a : [ obj a_obj : a.cpp ] : <include>/usr/local/include ;
    exe b : [ obj a_obj : a.cpp ] ;
A good question is why Boost.Build can't use one of the above approaches automatically. The problem is that such magic would require additional implementation complexity and would only help in half of the cases, while in the other half we'd be silently doing the wrong thing. It's simpler and safer to ask the user to clarify their intention in such cases.
Many users would like to use environment variables in Jamfiles, for example, to control the location of external libraries. In many cases it is better to declare those external libraries in the site-config.jam file, as documented in the recipes section. However, if the users already have the environment variables set up, it's not convenient to ask them to set up site-config.jam files as well, and using environment variables might be reasonable.
In Boost.Build V2, each Jamfile is a separate namespace, and the variables defined in the environment are imported into the global namespace. Therefore, to access an environment variable from a Jamfile, you need code like the following:
    import os ;
    local SOME_LIBRARY_PATH = [ os.environ SOME_LIBRARY_PATH ] ;
    exe a : a.cpp : <include>$(SOME_LIBRARY_PATH) ;
For internal reasons, Boost.Build sorts all the properties alphabetically. This means that if you write:
    exe a : a.cpp : <include>b <include>a ;
then the command line will first mention the "a" include directory, and then "b", even though they are specified in the opposite order. In most cases, the user doesn't care. But sometimes the order of includes, or other properties, is important. For example, if one uses both the C++ Boost library and the "boost-sandbox" (libraries in development), then the include path for boost-sandbox must come first, because some of its headers may override the ones in C++ Boost. For such cases, a special syntax is provided:
    exe a : a.cpp : <include>a&&b ;
The && symbols separate values of a property, and specify that the order of
the values should be preserved. You are advised to use this feature only when
the order of properties really matters, and not as a convenient shortcut.
Using it everywhere might negatively affect performance.
On Unix-like operating systems, the order in which static libraries are specified when invoking the linker is important, because by default the linker makes one pass through the list of libraries. Passing the libraries in the incorrect order will lead to a link error. Further, this behaviour is often used to make one library override symbols from another. So, sometimes it's necessary to force a specific order of libraries.
Boost.Build tries to automatically compute the right order. The primary rule is that if library a "uses" library b, then library a will appear on the command line before library b. Library a is considered to use b if b is present either in the sources of a or in its requirements. To explicitly specify the use relationship one can use the <use> feature. For example, both of the following lines will cause a to appear before b on the command line:
    lib a : a.cpp b ;
    lib a : a.cpp : <use>b ;
The same approach works for searched libraries, too:
    lib z ;
    lib png : : <use>z ;
    exe viewer : viewer png z ;
The SHELL builtin can be used to capture the output of an external program. For example:
    local gtk_includes = [ SHELL "gtk-config --cflags" ] ;
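The captured value can then be passed on to a target, for example as raw compiler flags. A minimal sketch, assuming gtk-config is available on the system (the grok target and its source file are hypothetical):

    # capture compiler flags from an external tool (assumes gtk-config exists)
    local gtk_includes = [ SHELL "gtk-config --cflags" ] ;
    # pass them to the compiler as raw flags (hypothetical target)
    exe grok : grok.cpp : <cflags>$(gtk_includes) ;

Note that SHELL returns the program's raw output, including any trailing newline, so some trimming may be needed depending on the toolset.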
You might want to use the location of the project root in your Jamfiles. To do this, declare a path constant in your project-root.jam:
    path-constant TOP : . ;
After that, the
TOP variable can be used in every Jamfile.
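For instance, a minimal sketch (the target name and the include directory are hypothetical):

    # refer to a directory relative to the project root from any Jamfile
    exe app : app.cpp : <include>$(TOP)/include ;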
If one file must be compiled with special options, you need to
explicitly declare an obj target for that file and then use
that target in your exe declaration:

    exe a : a.cpp b ;
    obj b : b.cpp : <optimization>off ;
Of course, you can use other properties, for example to pass specific compiler options:
    exe a : a.cpp b ;
    obj b : b.cpp : <cflags>-g ;
You can also use conditional properties for finer control:
    exe a : a.cpp b ;
    obj b : b.cpp : <variant>release:<optimization>off ;
(This entry is specific to Unix systems.) Before answering the questions, let's recall a few points about shared libraries. Shared libraries can be used by several applications, or other libraries, without physically including the library in the application, which can greatly decrease the total size of applications. It's also possible to upgrade a shared library when the application is already installed. Finally, shared linking can be faster.
However, the shared library must be found when the application is
started. The dynamic linker will search in a system-defined list of
paths, load the library, and resolve the symbols. This means that you
should either change the system-defined list, given by the
LD_LIBRARY_PATH environment variable, or install the
libraries to a system location. This can be inconvenient when
developing, since the libraries are not yet ready to be installed, and
cluttering system paths is undesirable. Luckily, on Unix there's another way.
An executable can include a list of additional library paths, which
will be searched before the system paths. This is excellent for development,
because the build system knows the paths to all libraries and can include
them in the executables. That's done when the
hardcode-dll-paths feature has the value
true, which is the
default. When the executables should be installed, the story is different.
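As a sketch, the behaviour can also be switched off for an individual target (the devtool target is hypothetical, and this assumes the feature accepts the value false):

    # build this executable without hardcoded library paths
    exe devtool : devtool.cpp : <hardcode-dll-paths>false ;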
Obviously, an installed executable should not hardcode paths into your
development tree. (The
stage rule explicitly disables the
hardcode-dll-paths feature for that reason.) However, you
can use the
dll-path feature to add explicit paths
manually. For example:
    stage installed : application : <dll-path>/usr/lib/snake <location>/usr/bin ;
will allow the application to find libraries placed in the /usr/lib/snake directory.
If you install libraries to a nonstandard location and add an explicit path, you get more control over which libraries will be used. A library of the same name in a system location will not be inadvertently used. If you install libraries to a system location and do not add any paths, the system administrator will have more control: each library can be individually upgraded, and all applications will use the new library.
Which approach is best depends on your situation. If the libraries are relatively standalone and can be used by third-party applications, they should be installed in a system location. If you have lots of libraries which can be used only by your application, it makes sense to install them to a nonstandard directory and add an explicit path, as the example above shows. Note also that guidelines differ between systems: the Debian guidelines prohibit any additional search paths, while the Solaris guidelines suggest that they should always be used.
It is often desirable to declare standard libraries available on a given system. Putting the target declarations in a Jamfile is not really good, since the locations of the libraries can vary. The solution is to put the following in site-config.jam:
    import project ;
    project.initialize $(__name__) ;
    project site-config ;
    lib zlib : : <name>z ;
The second line allows this module to act as a project. The third line gives an id to this project (it really has no location and cannot be used otherwise). The fourth line just declares a target. Now, one can write:
    exe hello : hello.cpp /site-config//zlib ;
in any Jamfile.
In modern C++, libraries often consist of just header files, without any source files to compile. To use such libraries, you need to add the proper includes and, possibly, defines to your project. But with a large number of external libraries it becomes problematic to remember which libraries are header-only and which are "real" ones. However, with Boost.Build a header-only library can be declared as a Boost.Build target, and all dependents can use such a library without remembering whether it's header-only or not.
Header-only libraries are declared using the alias rule, specifying only usage requirements, for example:
    alias mylib
        : # no sources
        : # no build requirements
        : # no default build
        : <include>whatever ;
The includes specified in the usage requirements of
mylib are automatically added to the build properties of all dependents.
The dependents need not care whether
mylib is header-only or not, and it's possible
to later make
mylib into a regular compiled library.
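For example, the alias could later be replaced by a compiled library while keeping the same usage requirements, so dependents are unaffected (a sketch; the source file mylib.cpp is hypothetical):

    # same target name, same usage requirements, now with compiled sources;
    # the empty fields are requirements and default-build
    lib mylib : mylib.cpp : : : <include>whatever ;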
If you already have proper usage requirements declared for the project where the
header-only library is defined, you don't need to duplicate them for the target:

    project my
        : usage-requirements <include>whatever ;
    alias mylib ;