Sink backends

Text stream backend
Text file backend
Text multi-file backend
Syslog backend
Windows debugger output backend
Windows event log backends

Text stream backend

#include <boost/log/sinks/text_ostream_backend.hpp>

The text output stream sink backend is the most generic backend provided by the library out of the box. The backend is implemented in the basic_text_ostream_backend class template (the text_ostream_backend and wtext_ostream_backend convenience typedefs are provided for narrow and wide character support). It supports formatting log records into strings and writing them to one or several streams. Each attached stream receives the same formatted output, so if you need to format log records differently for different streams, you will need to create several sinks - each with its own formatter.

The backend also provides a feature that may come in useful when debugging your application. With the auto_flush method one can tell the sink to automatically flush the buffers of all attached streams after each log record is written. This will, of course, degrade logging performance, but in case of an application crash there is a good chance that the last log records will not be lost.

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a backend and attach a couple of streams to it
    boost::shared_ptr< sinks::text_ostream_backend > backend =
        boost::make_shared< sinks::text_ostream_backend >();
    backend->add_stream(
        boost::shared_ptr< std::ostream >(&std::clog, boost::null_deleter()));
    backend->add_stream(
        boost::shared_ptr< std::ostream >(new std::ofstream("sample.log")));

    // Enable auto-flushing after each log record written
    backend->auto_flush(true);

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_ostream_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));
    core->add_sink(sink);
}
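
For instance, if the console and the file require different formats, one could register two independent sinks, each with its own formatter. A minimal sketch, assuming the usual logging, sinks and expr namespace aliases used throughout these examples:

void init_two_formats()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    typedef sinks::synchronous_sink< sinks::text_ostream_backend > sink_t;

    // Console sink: message text only
    boost::shared_ptr< sinks::text_ostream_backend > console_backend =
        boost::make_shared< sinks::text_ostream_backend >();
    console_backend->add_stream(
        boost::shared_ptr< std::ostream >(&std::clog, boost::null_deleter()));
    boost::shared_ptr< sink_t > console_sink(new sink_t(console_backend));
    console_sink->set_formatter(expr::stream << expr::smessage);

    // File sink: time-stamped messages
    boost::shared_ptr< sinks::text_ostream_backend > file_backend =
        boost::make_shared< sinks::text_ostream_backend >();
    file_backend->add_stream(
        boost::shared_ptr< std::ostream >(new std::ofstream("sample.log")));
    boost::shared_ptr< sink_t > file_sink(new sink_t(file_backend));
    file_sink->set_formatter(
        expr::stream
            << expr::attr< boost::posix_time::ptime >("TimeStamp")
            << " " << expr::smessage);

    core->add_sink(console_sink);
    core->add_sink(file_sink);
}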

Text file backend

#include <boost/log/sinks/text_file_backend.hpp>

Although it is possible to write logs into files with the text stream backend, the library also offers a special sink backend with an extended set of features suitable for file-based logging. These features include:

  • Log file rotation based on file size and/or time
  • Flexible log file naming
  • Placing the rotated files into a special location in the file system
  • Deleting the oldest files in order to free more space on the file system

The backend is called text_file_backend.

[Warning] Warning

This sink uses Boost.Filesystem internally, which may cause problems on process termination. See the library's Rationale and FAQ section for more details.

File rotation

File rotation is implemented by the sink backend itself. The file name pattern and rotation thresholds can be specified when the text_file_backend backend is constructed.

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file_%5N.log",                                          1
            keywords::rotation_size = 5 * 1024 * 1024,                                     2
            keywords::time_based_rotation = sinks::file::rotation_at_time_point(12, 0, 0)  3
        );

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_file_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));

    core->add_sink(sink);
}

  1. file name pattern
  2. rotate the file upon reaching 5 MiB size...
  3. ...or every day, at noon, whichever comes first

[Note] Note

The file size at rotation can be imprecise. The implementation counts the number of characters written to the file, but the underlying API can introduce additional auxiliary data, which would increase the log file's actual size on disk. For instance, it is well known that Windows and DOS operating systems treat new-line characters specially: each new-line character is written as the two-byte sequence 0x0D 0x0A instead of a single 0x0A. Other platform-specific character translations are also known.

Time-based rotation is not limited to time points only. The following options are available out of the box:

  1. Time point rotations: rotation_at_time_point class. This kind of rotation takes place whenever the specified time point is reached. The following variants are available:
    • Every day rotation, at the specified time. This is what was presented in the code snippet above:
      sinks::file::rotation_at_time_point(12, 0, 0)
      
    • Rotation on the specified day of every week, at the specified time. For instance, this will make file rotation happen every Tuesday, at midnight:
      sinks::file::rotation_at_time_point(date_time::Tuesday, 0, 0, 0)
      
      in case of midnight, the time can be omitted:
      sinks::file::rotation_at_time_point(date_time::Tuesday)
      
    • Rotation on the specified day of each month, at the specified time. For example, this is how to rotate files on the 1st of every month:
      sinks::file::rotation_at_time_point(gregorian::greg_day(1), 0, 0, 0)
      
      like with weekdays, midnight is implied:
      sinks::file::rotation_at_time_point(gregorian::greg_day(1))
      
  2. Time interval rotations: rotation_at_time_interval class. With this predicate the rotation is not bound to any time points and happens as soon as the specified time interval since the previous rotation elapses. This is how to make rotations every hour:
    sinks::file::rotation_at_time_interval(posix_time::hours(1))
    

If none of the above applies, one can specify his own predicate for time-based rotation. The predicate should take no arguments and return bool (the true value indicates that the rotation should take place). The predicate will be called for every log record being written to the file.

bool is_it_time_to_rotate();

void init_logging()
{
    // ...

    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file_%5N.log",
            keywords::time_based_rotation = &is_it_time_to_rotate
        );

    // ...
}
[Note] Note

The log file rotation takes place on an attempt to write a new log record to the file. Thus the time-based rotation is not a strict threshold, either. The rotation will take place as soon as the library detects that the rotation should have happened.

The file name pattern may contain a number of wildcards, like the one you can see in the example above. Supported placeholders are:

  • Current date and time components. The placeholders conform to the ones specified by Boost.DateTime library.
  • File counter (%N) with an optional width specification in the printf-like format. The file counter will always be decimal, zero-filled to the specified width.
  • A percent sign (%%).

A few quick examples:

  Template                         Expands to
  file_%N.log                      file_1.log, file_2.log...
  file_%3N.log                     file_001.log, file_002.log...
  file_%Y%m%d.log                  file_20080705.log, file_20080706.log...
  file_%Y-%m-%d_%H-%M-%S.%N.log    file_2008-07-05_13-44-23.1.log, file_2008-07-06_16-00-10.2.log...

[Important] Important

Although all Boost.DateTime format specifiers will work, there are restrictions on some of them, if you intend to scan for old log files. This functionality is discussed in the next section.

The sink backend allows hooking into the file rotation process in order to perform pre- and post-rotation actions. This can be useful to maintain log file validity by writing headers and footers. For example, this is how we could modify the init_logging function in order to write logs into XML files:

// Complete file sink type
typedef sinks::synchronous_sink< sinks::text_file_backend > file_sink;

void write_header(sinks::text_file_backend::stream_type& file)
{
    file << "<?xml version=\"1.0\"?>\n<log>\n";
}

void write_footer(sinks::text_file_backend::stream_type& file)
{
    file << "</log>\n";
}

void init_logging()
{
    // Create a text file sink
    boost::shared_ptr< file_sink > sink(new file_sink(
        keywords::file_name = "%Y%m%d_%H%M%S_%5N.xml",  1
        keywords::rotation_size = 16384                 2
    ));

    sink->set_formatter
    (
        expr::format("\t<record id=\"%1%\" timestamp=\"%2%\">%3%</record>")
            % expr::attr< unsigned int >("RecordID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::xml_decor[ expr::stream << expr::smessage ]            3
    );

    // Set header and footer writing functors
    sink->locked_backend()->set_open_handler(&write_header);
    sink->locked_backend()->set_close_handler(&write_footer);

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}

  1. the resulting file name pattern
  2. rotation size, in characters
  3. the log message has to be decorated, if it contains special characters

See the complete code.

Finally, the sink backend also supports the auto-flush feature, like the text stream backend does.
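
It can be enabled in the same way; a minimal sketch, reusing the backend variable from the file rotation example above:

// Flush the file after each written log record
// (degrades performance, but helps if the application may crash)
backend->auto_flush(true);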

Managing rotated files

After being closed, the rotated files can be collected. In order to do so, one has to set up a file collector by specifying the target directory where the rotated files will be stored and, optionally, size thresholds. For example, we can modify the init_logging function to place rotated files into a distinct directory and limit the total size of the files. Let's assume the following function is called by init_logging with the constructed sink:

void init_file_collecting(boost::shared_ptr< file_sink > sink)
{
    sink->locked_backend()->set_file_collector(sinks::file::make_collector(
        keywords::target = "logs",                      1
        keywords::max_size = 16 * 1024 * 1024,          2
        keywords::min_free_space = 100 * 1024 * 1024    3
    ));
}

  1. the target directory
  2. maximum total size of the stored files, in bytes
  3. minimum free space on the drive, in bytes

The max_size and min_free_space parameters are optional; the corresponding threshold will not be taken into account if the parameter is not specified.

One can create multiple file sink backends that collect files into the same target directory. In this case the strictest thresholds are combined for that target directory. The files in this directory will be erased without regard to which sink backend wrote them, i.e. in strict chronological order.

[Warning] Warning

The collector does not resolve log file name clashes between different sink backends, so if a clash occurs the behavior is, in general, undefined. Depending on the circumstances, the files may overwrite each other or the operation may fail entirely.
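
For illustration, here is a hedged sketch of two file sinks collecting into the same "logs" directory. The file name patterns are arbitrary but deliberately distinct so that names cannot clash, and, per the rule above, the stricter of the two max_size thresholds effectively applies to the directory:

void init_two_collected_sinks()
{
    typedef sinks::synchronous_sink< sinks::text_file_backend > sink_t;

    // The first sink writes general logs
    boost::shared_ptr< sinks::text_file_backend > backend1 =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "general_%5N.log",
            keywords::rotation_size = 5 * 1024 * 1024);
    backend1->set_file_collector(sinks::file::make_collector(
        keywords::target = "logs",
        keywords::max_size = 16 * 1024 * 1024));

    // The second sink writes audit logs into the same target directory
    boost::shared_ptr< sinks::text_file_backend > backend2 =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "audit_%5N.log",
            keywords::rotation_size = 5 * 1024 * 1024);
    backend2->set_file_collector(sinks::file::make_collector(
        keywords::target = "logs",
        keywords::max_size = 32 * 1024 * 1024));

    logging::core::get()->add_sink(boost::make_shared< sink_t >(backend1));
    logging::core::get()->add_sink(boost::make_shared< sink_t >(backend2));
}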

The file collector provides another useful feature. Suppose you ran your application 5 times and you have 5 log files in the "logs" directory. The file sink backend and file collector provide a scan_for_files method that searches the target directory for these files and takes them into account, so when it comes to deleting files, these older files are not forgotten. What's more, if the file name pattern in the backend involves a file counter, scanning for older files allows updating the counter to the most recent value. Here is the final version of our init_logging function:

void init_logging()
{
    // Create a text file sink
    boost::shared_ptr< file_sink > sink(new file_sink(
        keywords::file_name = "%Y%m%d_%H%M%S_%5N.xml",
        keywords::rotation_size = 16384
    ));

    // Set up where the rotated files will be stored
    init_file_collecting(sink);

    // Upon restart, scan the directory for files matching the file_name pattern
    sink->locked_backend()->scan_for_files();

    sink->set_formatter
    (
        expr::format("\t<record id=\"%1%\" timestamp=\"%2%\">%3%</record>")
            % expr::attr< unsigned int >("RecordID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::xml_decor[ expr::stream << expr::smessage ]
    );

    // Set header and footer writing functors
    namespace bll = boost::lambda;

    sink->locked_backend()->set_open_handler
    (
        bll::_1 << "<?xml version=\"1.0\"?>\n<log>\n"
    );
    sink->locked_backend()->set_close_handler
    (
        bll::_1 << "</log>\n"
    );

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}

There are two methods of file scanning: the scan that matches file names against the file name pattern (the default) and the scan that assumes that all files in the target directory are log files. The former imposes certain restrictions on the placeholders that can be used within the file name pattern: only the file counter placeholder and the following Boost.DateTime placeholders are supported: %y, %Y, %m, %d, %H, %M, %S, %f. The latter scanning method, in its turn, has its own drawback: it does not allow updating the file counter in the backend. It is also considered more dangerous, as it may result in unintended file deletion, so be cautious. The all-files scanning method can be enabled by passing it as an additional parameter to the scan_for_files call:

// Look for all files in the target directory
backend->scan_for_files(sinks::file::scan_all);

Text multi-file backend

#include <boost/log/sinks/text_multifile_backend.hpp>

While the text stream and file backends are aimed at storing all log records into a single file or stream, this backend serves a different purpose. Assume we have a banking request processing application and we want the logs related to every single request to be placed into a separate file. If we can associate some attribute with the request identity, then the text_multifile_backend backend is the way to go.

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    boost::shared_ptr< sinks::text_multifile_backend > backend =
        boost::make_shared< sinks::text_multifile_backend >();

    // Set up the file naming pattern
    backend->set_file_name_composer
    (
        sinks::file::as_file_name_composer(expr::stream << "logs/" << expr::attr< std::string >("RequestID") << ".log")
    );

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    typedef sinks::synchronous_sink< sinks::text_multifile_backend > sink_t;
    boost::shared_ptr< sink_t > sink(new sink_t(backend));

    // Set the formatter
    sink->set_formatter
    (
        expr::stream
            << "[RequestID: " << expr::attr< std::string >("RequestID")
            << "] " << expr::smessage
    );

    core->add_sink(sink);
}

You can see that we used a regular formatter to specify the file naming pattern. Now every log record with a distinct value of the "RequestID" attribute will be stored in a separate file, no matter how many different requests are being processed by the application concurrently. You can also find the multiple_files example in the library distribution, which shows a similar technique for separating logs generated by different threads of the application.

If using formatters is not appropriate for some reason, you can provide your own file name composer. The composer is simply a function object that accepts a log record as a single argument and returns a value of the text_multifile_backend::path_type type.
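
For illustration, here is a minimal sketch of a hand-written composer that mirrors the formatter-based one above. The fallback file name and the use of logging::extract to read the "RequestID" attribute value are choices of this sketch, not requirements of the backend interface:

struct request_id_composer
{
    typedef sinks::text_multifile_backend::path_type result_type;

    result_type operator() (logging::record_view const& rec) const
    {
        // Compose "logs/<RequestID>.log"; fall back to a common file
        // if the attribute is missing from the record
        logging::value_ref< std::string > id =
            logging::extract< std::string >("RequestID", rec);
        if (id)
            return result_type("logs") / (id.get() + ".log");
        else
            return result_type("logs/unknown.log");
    }
};

// ...
backend->set_file_name_composer(request_id_composer());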

[Note] Note

The multi-file backend has no knowledge of whether a particular file is going to be used again or not. That is, if a log record has been written into file A, the library cannot tell whether more records that belong in file A will follow. This makes it impossible to implement file rotation and removal of unused files to free space on the file system. The user will have to implement such functionality himself.
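
For instance, a hedged sketch of a user-side cleanup pass, based on plain Boost.Filesystem; the "logs" directory name and the one-hour age threshold are arbitrary choices of this sketch:

#include <boost/filesystem.hpp>
#include <ctime>

// Remove per-request log files that have not been modified for an hour
void cleanup_request_logs()
{
    namespace fs = boost::filesystem;
    std::time_t now = std::time(0);

    for (fs::directory_iterator it("logs"), end; it != end; ++it)
    {
        if (fs::is_regular_file(it->status()) &&
            now - fs::last_write_time(it->path()) > 3600)
        {
            fs::remove(it->path());
        }
    }
}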

Syslog backend

#include <boost/log/sinks/syslog_backend.hpp>

The syslog backend, as its name suggests, provides support for the syslog API that is available on virtually any UNIX-like platform. On Windows there exists at least one public implementation of the syslog client API. However, in order to provide maximum flexibility and better portability, the library offers built-in support for the syslog protocol described in RFC 3164. Thus on Windows only the built-in implementation is supported, while on UNIX-like systems both the built-in and the system API based implementations are supported.

The backend is implemented in the syslog_backend class. The backend supports formatting log records and therefore requires thread synchronization in the frontend. It also supports severity level translation from application-specific values to the syslog-defined values. This is achieved with an additional function object, the level mapper, which receives a set of attribute values for each log record and returns the appropriate syslog level value. This value is used by the backend to construct the final priority value of the syslog record. The other component of the syslog priority value, the facility, is constant for each backend object and can be specified in the backend constructor arguments.

Level mappers can be written by library users to translate application log levels to syslog levels in whatever way fits best. However, the library provides two mappers that should cover the obvious cases. The direct_severity_mapping class template provides a way to directly map values of some integral attribute to syslog levels, without any value conversion. The custom_severity_mapping class template adds some flexibility and allows mapping arbitrary values of some attribute to syslog levels.

Anyway, one example is better than a thousand words.

// Complete sink type
typedef sinks::synchronous_sink< sinks::syslog_backend > sink_t;

void init_native_syslog()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a backend
    boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend(
        keywords::facility = sinks::syslog::user,               1
        keywords::use_impl = sinks::syslog::native              2
    ));

    // Set the straightforward level translator for the "Severity" attribute of type int
    backend->set_severity_mapper(sinks::syslog::direct_severity_mapping< int >("Severity"));

    // Wrap it into the frontend and register in the core.
    // The backend requires synchronization in the frontend.
    core->add_sink(boost::make_shared< sink_t >(backend));
}

void init_builtin_syslog()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create a new backend
    boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend(
        keywords::facility = sinks::syslog::local0,             3
        keywords::use_impl = sinks::syslog::udp_socket_based    4
    ));

    // Setup the target address and port to send syslog messages to
    backend->set_target_address("192.164.1.10", 514);

    // Create and fill in another level translator for "MyLevel" attribute of type string
    sinks::syslog::custom_severity_mapping< std::string > mapping("MyLevel");
    mapping["debug"] = sinks::syslog::debug;
    mapping["normal"] = sinks::syslog::info;
    mapping["warning"] = sinks::syslog::warning;
    mapping["failure"] = sinks::syslog::critical;
    backend->set_severity_mapper(mapping);

    // Wrap it into the frontend and register in the core.
    core->add_sink(boost::make_shared< sink_t >(backend));
}

  1. the logging facility
  2. the native syslog API should be used
  3. the logging facility
  4. the built-in socket-based implementation should be used

Please note that all syslog constants, as well as the level mappers, are declared within the nested namespace syslog. The library will not accept (and does not declare in the backend interface) native syslog constants, which are actually macros.

Also note that the backend defaults to the built-in implementation and the user logging facility if the corresponding constructor parameters are not specified.

[Tip] Tip

The set_target_address method will also accept DNS names, which it will resolve to the actual IP address. This feature, however, is not available in single-threaded builds.
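
For illustration, a hedged sketch that relies on these defaults; the host name below is a placeholder:

// Default-constructed backend: built-in (UDP socket-based) implementation,
// "user" logging facility
boost::shared_ptr< sinks::syslog_backend > backend(new sinks::syslog_backend());

// The target can be given as a DNS name; the port argument defaults
// to the standard syslog port when omitted
backend->set_target_address("syslog.example.com");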

Windows debugger output backend

#include <boost/log/sinks/debug_output_backend.hpp>

The Windows API has an interesting feature: a process being run under a debugger is able to emit messages that will be intercepted and displayed in the debugger window. For example, if an application is run under the Visual Studio IDE, it is able to write debug messages to the IDE window. The basic_debug_output_backend backend provides a simple way of emitting such messages. Additionally, in order to optimize application performance, a special filter is available that checks whether the application is being run under a debugger. Like many other sink backends, this backend also supports setting a formatter in order to compose the message text.

The usage is quite simple and straightforward:

// Complete sink type
typedef sinks::synchronous_sink< sinks::debug_output_backend > sink_t;

void init_logging()
{
    boost::shared_ptr< logging::core > core = logging::core::get();

    // Create the sink. The backend requires synchronization in the frontend.
    boost::shared_ptr< sink_t > sink(new sink_t());

    // Set the special filter to the frontend
    // in order to skip the sink when no debugger is available
    sink->set_filter(expr::is_debugger_present());

    core->add_sink(sink);
}

Note that the sink backend is templated on the character type. This type determines which Windows API version is used to emit the messages. The debug_output_backend and wdebug_output_backend convenience typedefs are provided.

Windows event log backends

#include <boost/log/sinks/event_log_backend.hpp>

The Windows operating system provides a special API for publishing events related to application execution. A wide range of applications, including Windows components, use this facility to provide the user with all the essential information about computer health in a single place, the event log. There can be more than one event log; however, typically all user-space applications use the common Application log. Records from different applications, or from different parts of an application, can be selected from the log by the record source name. Event logs can be read with a standard utility, the Event Viewer, that comes with Windows.

Although it looks very tempting, the API is quite complicated and intrusive, which makes it difficult to support. The application is required to provide a dynamic library with special resources that describe all events the application supports. This library must be registered in the Windows registry, which pins its location in the file system. The Event Viewer uses this registration to find the resources and to compose and display messages. The positive side of this approach is that, since event resources can describe events differently for different languages, the application can support event internationalization in a quite transparent manner: the application simply provides event identifiers and non-localizable event parameters to the API, and the API does the rest of the work.

In order to support both the simplistic "it just works" approach and more elaborate event composition, including internationalization support, the library provides two sink backends that work with the event log API.

Simple event log backend

The basic_simple_event_log_backend backend is intended to encapsulate as much of the event log API as possible, leaving the interface and usage model very similar to those of other sink backends. It contains all the resources needed for the Event Viewer to function properly, and it registers the Boost.Log library in the Windows registry as the container of these resources.

[Important] Important

The library must be built as a dynamic library in order to use this backend flawlessly. Otherwise event description resources are not linked into the executable, and the Event Viewer is not able to display events properly.

The only thing the user has to do to add Windows event log support to an application is to provide the event source and log names (which are optional and can be automatically suggested by the library) and to set up an appropriate filter, formatter and event severity mapping.

// Complete sink type
typedef sinks::synchronous_sink< sinks::simple_event_log_backend > sink_t;

// Define application-specific severity levels
enum severity_level
{
    normal,
    warning,
    error
};

void init_logging()
{
    // Create an event log sink
    boost::shared_ptr< sink_t > sink(new sink_t());

    sink->set_formatter
    (
        expr::format("%1%: [%2%] - %3%")
            % expr::attr< unsigned int >("LineID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::smessage
    );

    // We'll have to map our custom levels to the event log event types
    sinks::event_log::custom_event_type_mapping< severity_level > mapping("Severity");
    mapping[normal] = sinks::event_log::info;
    mapping[warning] = sinks::event_log::warning;
    mapping[error] = sinks::event_log::error;

    sink->locked_backend()->set_event_type_mapper(mapping);

    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}

Having done that, all log records passed to the sink will be formatted the same way they are in the other sinks. The formatted message will be displayed in the Event Viewer as the event description.
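
If the automatically suggested names do not fit, the log and event source names can be passed to the backend constructor explicitly. A minimal sketch; the name strings are placeholders:

// Create the backend with explicit log and event source names
boost::shared_ptr< sinks::simple_event_log_backend > backend(
    new sinks::simple_event_log_backend(
        keywords::log_name = "Application",      // the event log to write to
        keywords::log_source = "My Application"  // the event source name
    ));

// Wrap it into the frontend type defined above
boost::shared_ptr< sink_t > sink(new sink_t(backend));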

Advanced event log backend

The basic_event_log_backend allows more detailed control over the logging API, but requires considerably more scaffolding during initialization and usage.

First, the user has to build his own library with the event resources (the process is described on MSDN). As a part of this process one has to create a message file that describes all events. For the sake of the example, let's assume the following contents were used as the message file:

; /* --------------------------------------------------------
; HEADER SECTION
; */
SeverityNames=(Debug=0x0:MY_SEVERITY_DEBUG
            Info=0x1:MY_SEVERITY_INFO
            Warning=0x2:MY_SEVERITY_WARNING
            Error=0x3:MY_SEVERITY_ERROR
            )

; /* --------------------------------------------------------
; MESSAGE DEFINITION SECTION
; */

MessageIdTypedef=WORD

MessageId=0x1
SymbolicName=MY_CATEGORY_1
Language=English
Category 1
.

MessageId=0x2
SymbolicName=MY_CATEGORY_2
Language=English
Category 2
.

MessageId=0x3
SymbolicName=MY_CATEGORY_3
Language=English
Category 3
.

MessageIdTypedef=DWORD

MessageId=0x100
Severity=Warning
Facility=Application
SymbolicName=LOW_DISK_SPACE_MSG
Language=English
The drive %1 has low free disk space. At least %2 Mb of free space is recommended.
.

MessageId=0x101
Severity=Error
Facility=Application
SymbolicName=DEVICE_INACCESSIBLE_MSG
Language=English
The drive %1 is not accessible.
.

MessageId=0x102
Severity=Info
Facility=Application
SymbolicName=SUCCEEDED_MSG
Language=English
Operation finished successfully in %1 seconds.
.

After compiling the resource library, the path to this library must be provided to the sink backend constructor, along with the other parameters used with the simple backend. The path may contain placeholders that will be expanded with the values of the corresponding environment variables.

// Create an event log sink
boost::shared_ptr< sinks::event_log_backend > backend(
    new sinks::event_log_backend((
        keywords::message_file = "%SystemDir%\\event_log_messages.dll",
        keywords::log_name = "My Application",
        keywords::log_source = "My Source"
    ))
);

Like the simple backend, basic_event_log_backend will register itself in the Windows registry, which will enable the Event Viewer to display the emitted events.

Next, the user will have to provide the mapping between the application's logging attributes and the event identifiers. These identifiers are provided in the message compiler output as a result of compiling the message file. One can use basic_event_composer and one of the event ID mappings, as in the following example:

// Create an event composer. It is initialized with the event identifier mapping.
sinks::event_log::event_composer composer(
    sinks::event_log::direct_event_id_mapping< int >("EventID"));

// For each event described in the message file, set up the insertion string formatters
composer[LOW_DISK_SPACE_MSG]
    // the first placeholder in the message
    // will be replaced with contents of the "Drive" attribute
    % expr::attr< std::string >("Drive")
    // the second placeholder in the message
    // will be replaced with contents of the "Size" attribute
    % expr::attr< boost::uintmax_t >("Size");

composer[DEVICE_INACCESSIBLE_MSG]
    % expr::attr< std::string >("Drive");

composer[SUCCEEDED_MSG]
    % expr::attr< unsigned int >("Duration");

// Then put the composer to the backend
backend->set_event_composer(composer);

As you can see, one can use regular formatters to specify which attributes will be inserted in place of the placeholders in the final event message. Aside from that, one can specify mappings of attribute values to event types and categories. Suppose our application has the following severity levels:

// Define application-specific severity levels
enum severity_level
{
    normal,
    warning,
    error
};

Then these levels can be mapped onto the values in the message description file:

// We'll have to map our custom levels to the event log event types
sinks::event_log::custom_event_type_mapping< severity_level > type_mapping("Severity");
type_mapping[normal] = sinks::event_log::make_event_type(MY_SEVERITY_INFO);
type_mapping[warning] = sinks::event_log::make_event_type(MY_SEVERITY_WARNING);
type_mapping[error] = sinks::event_log::make_event_type(MY_SEVERITY_ERROR);

backend->set_event_type_mapper(type_mapping);

// Same for event categories.
// Usually event categories can be restored by the event identifier.
sinks::event_log::custom_event_category_mapping< int > cat_mapping("EventID");
cat_mapping[LOW_DISK_SPACE_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_1);
cat_mapping[DEVICE_INACCESSIBLE_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_2);
cat_mapping[SUCCEEDED_MSG] = sinks::event_log::make_event_category(MY_CATEGORY_3);

backend->set_event_category_mapper(cat_mapping);

[Tip] Tip

As of Windows NT 6 (Vista, Server 2008) it is not necessary to specify event type mappings. This information is available in the message definition resources and need not be duplicated in the API call.

Now that initialization is done, the sink can be registered into the core.

// Create the frontend for the sink
boost::shared_ptr< sinks::synchronous_sink< sinks::event_log_backend > > sink(
    new sinks::synchronous_sink< sinks::event_log_backend >(backend));

// Set up filter to pass only records that have the necessary attribute
sink->set_filter(expr::has_attr< int >("EventID"));

logging::core::get()->add_sink(sink);

In order to emit events it is convenient to create a set of functions that will accept all needed parameters for the corresponding events and announce that the event has occurred.

BOOST_LOG_INLINE_GLOBAL_LOGGER_DEFAULT(event_logger, src::severity_logger_mt< severity_level >)

// The function raises an event of the disk space depletion
void announce_low_disk_space(std::string const& drive, boost::uintmax_t size)
{
    BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)LOW_DISK_SPACE_MSG);
    BOOST_LOG_SCOPED_THREAD_TAG("Drive", drive);
    BOOST_LOG_SCOPED_THREAD_TAG("Size", size);
    // Since this record may get accepted by other sinks,
    // this message is not completely useless
    BOOST_LOG_SEV(event_logger::get(), warning) << "Low disk " << drive
        << " space, " << size << " Mb is recommended";
}

// The function raises an event of inaccessible disk drive
void announce_device_inaccessible(std::string const& drive)
{
    BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)DEVICE_INACCESSIBLE_MSG);
    BOOST_LOG_SCOPED_THREAD_TAG("Drive", drive);
    BOOST_LOG_SEV(event_logger::get(), error) << "Cannot access drive " << drive;
}

// The structure is an activity guard that will emit an event upon the activity completion
struct activity_guard
{
    activity_guard()
    {
        // Add a stop watch attribute to measure the activity duration
        m_it = event_logger::get().add_attribute("Duration", attrs::timer()).first;
    }
    ~activity_guard()
    {
        BOOST_LOG_SCOPED_THREAD_TAG("EventID", (int)SUCCEEDED_MSG);
        BOOST_LOG_SEV(event_logger::get(), normal) << "Activity ended";
        event_logger::get().remove_attribute(m_it);
    }

private:
    logging::attribute_set::iterator m_it;
};

Now you are able to call these helper functions to emit events. The complete code from this section is available in the event_log example in the library distribution.
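
For illustration, a brief usage sketch of these helpers; init_logging is assumed to bundle the initialization steps shown earlier in this section, and the drive letters and sizes are arbitrary:

int main()
{
    init_logging(); // the initialization described in this section

    announce_low_disk_space("C:", 100);
    announce_device_inaccessible("D:");

    {
        // SUCCEEDED_MSG is emitted when the guard is destroyed
        activity_guard guard;
        // ... do some lengthy work ...
    }

    return 0;
}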

