Managed Memory Segments

Making Interprocess Data Communication Easy
Managed Shared Memory
Managed Mapped File
Managed Memory Segment Features
Managed Memory Segment Advanced Features
Managed Heap Memory And Managed External Buffer

As we have seen, Boost.Interprocess offers some basic classes to create shared memory objects and file mappings and map those mappable classes to the process' address space.

However, managing those memory segments is not easy for non-trivial tasks. A mapped region is a fixed-length memory buffer, and dynamically creating and destroying objects of any type requires a lot of work, since it would require programming a memory management algorithm to allocate portions of that segment. Often, we also want to associate names with objects created in shared memory, so that all processes can find an object by its name.

Boost.Interprocess offers 4 managed memory segment classes:

  • To manage a shared memory mapped region (basic_managed_shared_memory class).
  • To manage a memory mapped file (basic_managed_mapped_file class).
  • To manage a heap allocated (operator new) memory buffer (basic_managed_heap_memory class).
  • To manage a user provided fixed size buffer (basic_managed_external_buffer class).

The first two classes manage memory segments that can be shared between processes. The third is useful to create complex databases to be sent through other mechanisms, like message queues, to other processes. The fourth class can manage any fixed size memory buffer. The first two classes will be explained in the next two sections; basic_managed_heap_memory and basic_managed_external_buffer will be explained later.

The most important services of a managed memory segment are:

  • Dynamic allocation of portions of the memory segment.
  • Construction of C++ objects in the memory segment. These objects can be anonymous or we can associate a name with them.
  • Searching capabilities for named objects.
  • Customization of many features: memory allocation algorithm, index types or character types.
  • Atomic constructions and destructions so that if the segment is shared between two processes it's impossible to create two objects associated with the same name, simplifying synchronization.

All Boost.Interprocess managed memory segment classes are templatized classes that can be customized by the user:

template
      <
         class CharType, 
         class MemoryAlgorithm, 
         template<class IndexConfig> class IndexType
      >
class basic_managed_shared_memory / basic_managed_mapped_file /
      basic_managed_heap_memory   / basic_managed_external_buffer;

These classes can be customized with the following template parameters:

  • CharType is the type of the character that will be used to identify the created named objects (for example, char or wchar_t)
  • MemoryAlgorithm is the memory algorithm used to allocate portions of the segment (for example, rbtree_best_fit). The internal typedefs of the memory algorithm also define:
    • The synchronization type (MemoryAlgorithm::mutex_family) to be used in all allocation operations. This allows the use of user-defined mutexes or avoiding internal locking (maybe code will be externally synchronized by the user).
    • The pointer type (MemoryAlgorithm::void_pointer) to be used by the memory allocation algorithm or additional helper structures (like a map to maintain object/name associations). All STL compatible allocators and containers to be used with this managed memory segment will use this pointer type. The pointer type determines whether the managed memory segment can be mapped between several processes. For example, if void_pointer is offset_ptr<void> we will be able to map the managed segment at different base addresses in each process. If void_pointer is void*, only fixed address mapping can be used.
    • See Writing a new memory allocation algorithm for more details about memory algorithms.
  • IndexType is the type of index that will be used to store the name-object association (for example, a map, a hash-map, or an ordered vector).

This way, we can use char or wchar_t strings to identify created C++ objects in the memory segment, we can plug new shared memory allocation algorithms, and use the index type that is best suited to our needs.

As seen, basic_managed_shared_memory offers a great variety of customization. But for the average user, a common, default shared memory named object creation is needed. Because of this, Boost.Interprocess defines the most common managed shared memory specializations:

//!Defines a managed shared memory with c-strings as keys for named objects,
//!the default memory algorithm (with process-shared mutexes, 
//!and offset_ptr as internal pointers) as memory allocation algorithm
//!and the default index type as the index.
//!This class allows the shared memory to be mapped at different base 
//!addresses in different processes
typedef 
   basic_managed_shared_memory<char
                              ,/*Default memory algorithm defining offset_ptr<void> as void_pointer*/
                              ,/*Default index type*/>
   managed_shared_memory;

//!Defines a managed shared memory with wide strings as keys for named objects,
//!the default memory algorithm (with process-shared mutexes, 
//!and offset_ptr as internal pointers) as memory allocation algorithm
//!and the default index type as the index.
//!This class allows the shared memory to be mapped at different base 
//!addresses in different processes
typedef 
   basic_managed_shared_memory<wchar_t
                              ,/*Default memory algorithm defining offset_ptr<void> as void_pointer*/
                              ,/*Default index type*/>
   wmanaged_shared_memory;

managed_shared_memory allocates objects in shared memory associated with a c-string and wmanaged_shared_memory allocates objects in shared memory associated with a wchar_t null terminated string. Both define the pointer type as offset_ptr<void> so they can be used to map the shared memory at different base addresses in different processes.

If the user wants to map the shared memory in the same address in all processes and want to use raw pointers internally instead of offset pointers, Boost.Interprocess defines the following types:

//!Defines a managed shared memory with c-strings as keys for named objects,
//!the default memory algorithm (with process-shared mutexes, 
//!and raw pointers as internal pointers) as memory allocation algorithm
//!and the default index type as the index.
//!This class requires the shared memory to be mapped at the same base 
//!address in all processes
typedef basic_managed_shared_memory
   <char
   ,/*Default memory algorithm defining void * as void_pointer*/
   ,/*Default index type*/>
fixed_managed_shared_memory;

//!Defines a managed shared memory with wide strings as keys for named objects,
//!the default memory algorithm (with process-shared mutexes, 
//!and raw pointers as internal pointers) as memory allocation algorithm
//!and the default index type as the index.
//!This class requires the shared memory to be mapped at the same base 
//!address in all processes
typedef basic_managed_shared_memory
   <wchar_t
   ,/*Default memory algorithm defining void * as void_pointer*/
   ,/*Default index type*/>
wfixed_managed_shared_memory;

Managed shared memory is an advanced class that combines a shared memory object and a mapped region that covers the whole shared memory object. That means that when we create a new managed shared memory:

  • A new shared memory object is created.
  • The whole shared memory object is mapped in the process' address space.
  • Some helper objects are constructed (name-object index, internal synchronization objects, internal variables...) in the mapped region to implement managed memory segment features.

When we open a managed shared memory:

  • A shared memory object is opened.
  • The whole shared memory object is mapped in the process' address space.

To use a managed shared memory, you must include the following header:

#include <boost/interprocess/managed_shared_memory.hpp>

//1.  Creates a new shared memory object
//    called "MySharedMemory".
//2.  Maps the whole object to this
//    process' address space.
//3.  Constructs some objects in shared memory
//    to implement managed features.
//!!  If anything fails, throws interprocess_exception
//
managed_shared_memory segment
   (create_only
   ,"MySharedMemory" //Shared memory object name
   ,65536);          //Shared memory object size in bytes

//1.  Opens a shared memory object
//    called "MySharedMemory".
//2.  Maps the whole object to this
//    process' address space.
//3.  Obtains pointers to constructed internal objects
//    to implement managed features.
//!!  If anything fails, throws interprocess_exception
//
managed_shared_memory segment
   (open_only
   ,"MySharedMemory"); //Shared memory object name

//1.  If the segment was previously created,
//    equivalent to "open_only" (size is ignored).
//2.  Otherwise, equivalent to "create_only".
//!!  If anything fails, throws interprocess_exception
//
managed_shared_memory segment
   (open_or_create
   ,"MySharedMemory" //Shared memory object name
   ,65536);          //Shared memory object size in bytes

When a managed_shared_memory object is destroyed, the shared memory object is automatically unmapped and all the resources are freed. To remove the shared memory object from the system you must use the shared_memory_object::remove function. Shared memory object removal might fail if any process still has the shared memory object mapped.

The user can also map the managed shared memory at a fixed address. This option is essential when using fixed_managed_shared_memory. To do this, just add the mapping address as an extra parameter:

fixed_managed_shared_memory segment
   (open_only
   ,"MyFixedAddressSharedMemory" //Shared memory object name
   ,(void*)0x30000000);          //Mapping address

Windows users might also want to use native windows shared memory instead of the portable shared_memory_object based managed memory. This is achieved through the basic_managed_windows_shared_memory class. To use it just include:

#include <boost/interprocess/managed_windows_shared_memory.hpp>

This class has the same interface as basic_managed_shared_memory but uses native windows shared memory. Note that this managed class has the same lifetime issues as the windows shared memory: when the last process attached to the windows shared memory is detached from the memory (or ends/crashes) the memory is destroyed. So there is no persistence support for windows shared memory.
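
A minimal usage sketch (assuming the same constructor semantics as managed_shared_memory; the segment name and size here are illustrative):

#include <boost/interprocess/managed_windows_shared_memory.hpp>

using namespace boost::interprocess;

//Create a native windows shared memory segment. It will be destroyed
//when the last attached process detaches from it (or ends/crashes).
managed_windows_shared_memory segment(create_only, "MyWindowsShm", 65536);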

For more information about managed shared memory capabilities, see basic_managed_shared_memory class reference.

As seen, basic_managed_mapped_file offers a great variety of customization. But for the average user, a common, default managed mapped file with named object creation is needed. Because of this, Boost.Interprocess defines the most common managed mapped file specializations:

//Named object creation managed memory segment
//All objects are constructed in the memory-mapped file
//   Names are c-strings, 
//   Default memory management algorithm (rbtree_best_fit with process-shared mutexes)
//   Name-object mappings are stored in the default index type (flat_map)
typedef basic_managed_mapped_file < 
   char, 
   rbtree_best_fit<mutex_family, offset_ptr<void> >,
   flat_map_index
   >  managed_mapped_file;

//Named object creation managed memory segment
//All objects are constructed in the memory-mapped file
//   Names are wide-strings, 
//   Default memory management algorithm (rbtree_best_fit with process-shared mutexes)
//   Name-object mappings are stored in the default index type (flat_map)
typedef basic_managed_mapped_file< 
   wchar_t, 
   rbtree_best_fit<mutex_family, offset_ptr<void> >,
   flat_map_index
   >  wmanaged_mapped_file;

managed_mapped_file allocates objects in a memory-mapped file associated with a c-string and wmanaged_mapped_file allocates objects in a memory-mapped file associated with a wchar_t null terminated string. Both define the pointer type as offset_ptr<void> so they can be used to map the file at different base addresses in different processes.

Managed mapped file is an advanced class that combines a file and a mapped region that covers the whole file. That means that when we create a new managed mapped file:

  • A new file is created.
  • The whole file is mapped in the process' address space.
  • Some helper objects are constructed (name-object index, internal synchronization objects, internal variables...) in the mapped region to implement managed memory segment features.

When we open a managed mapped file:

  • A file is opened.
  • The whole file is mapped in the process' address space.

To use a managed mapped file, you must include the following header:

#include <boost/interprocess/managed_mapped_file.hpp>

//1.  Creates a new file
//    called "MyMappedFile".
//2.  Maps the whole file to this
//    process' address space.
//3.  Constructs some objects in the memory mapped
//    file to implement managed features.
//!!  If anything fails, throws interprocess_exception
//
managed_mapped_file mfile
   (create_only
   ,"MyMappedFile" //Mapped file name
   ,65536);        //Mapped file size

//1.  Opens a file
//    called "MyMappedFile".
//2.  Maps the whole file to this
//    process' address space.
//3.  Obtains pointers to constructed internal objects
//    to implement managed features.
//!!  If anything fails, throws interprocess_exception
//
managed_mapped_file mfile
   (open_only
   ,"MyMappedFile"); //Mapped file name

//1.  If the file was previously created,
//    equivalent to "open_only" (size is ignored).
//2.  Otherwise, equivalent to "create_only".
//
//!!  If anything fails, throws interprocess_exception
//
managed_mapped_file mfile
   (open_or_create
   ,"MyMappedFile" //Mapped file name
   ,65536);        //Mapped file size

When a managed_mapped_file object is destroyed, the file is automatically unmapped and all the resources are freed. To remove the file from the filesystem you can use the standard C std::remove or Boost.Filesystem's remove() functions. File removal might fail if any process still has the file mapped in memory or has the file open.

For more information about managed mapped file capabilities, see basic_managed_mapped_file class reference.

The following features are common to all managed memory segment classes, but we will use managed shared memory in our examples. We can do the same with memory mapped files or other managed memory segment classes.

If a basic raw-byte allocation is needed from a managed memory segment (for example, a managed shared memory) to implement top-level interprocess communications, this class offers allocate and deallocate functions. The allocation function comes in throwing and non-throwing versions. The throwing version throws boost::interprocess::bad_alloc (which derives from std::bad_alloc) if there is no more memory, and the non-throwing version returns a null pointer.

#include <boost/interprocess/managed_shared_memory.hpp>

int main()
{
   using namespace boost::interprocess;

   //Managed memory segment that allocates portions of a shared memory
   //segment with the default management algorithm
   shared_memory_object::remove("MyManagedShm");
   try{
      managed_shared_memory managed_shm(create_only, "MyManagedShm", 65536);

      //Allocate 100 bytes of memory from segment, throwing version
      void *ptr = managed_shm.allocate(100);

      //Deallocate it
      managed_shm.deallocate(ptr);

      //Non throwing version
      ptr = managed_shm.allocate(100, std::nothrow);

      //Deallocate it
      managed_shm.deallocate(ptr);
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}

The class also offers conversions between absolute addresses that belong to a managed memory segment and a handle that can be passed using any interprocess mechanism. That handle can be transformed again to an absolute address using a managed memory segment that also contains that object. Handles can be used as keys between processes to identify allocated portions of a managed memory segment or objects constructed in the managed segment.

//Process A obtains the handle (offset) of the address
managed_shared_memory::handle_t handle = 
   segment.get_handle_from_address(processA_address);

//Process A sends this handle using any mechanism to process B

//Process B obtains the handle and transforms it again into an address
managed_shared_memory::handle_t handle = ...
void * processB_address = segment.get_address_from_handle(handle);
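
Although the snippet above is schematic, the round trip can be exercised within a single process. A minimal sketch (the segment name and allocation size are illustrative; the handle typedef is assumed to be handle_t):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>

int main()
{
   using namespace boost::interprocess;
   shared_memory_object::remove("MyManagedShm");
   try{
      managed_shared_memory segment(create_only, "MyManagedShm", 65536);

      //Allocate a buffer and convert its address into a handle
      void *address = segment.allocate(100);
      managed_shared_memory::handle_t handle =
         segment.get_handle_from_address(address);

      //The handle could now be sent to another process mapping the same
      //segment; here we just convert it back and check the round trip
      assert(address == segment.get_address_from_handle(handle));

      segment.deallocate(address);
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}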

When constructing objects in a managed memory segment (managed shared memory, managed mapped files...) associated with a name, the user has a family of object construction functions to "construct" or to "construct if not found". Boost.Interprocess can construct a single object or an array of objects. The array can be constructed with the same parameters for all objects, or we can define each parameter from a list of iterators:

//!Allocates and constructs an object of type MyType (throwing version)
MyType *ptr = managed_memory_segment.construct<MyType>("Name") (par1, par2...);

//!Allocates and constructs an array of objects of type MyType (throwing version) 
//!Each object receives the same parameters (par1, par2, ...)
MyType *ptr = managed_memory_segment.construct<MyType>("Name")[count](par1, par2...);

//!Tries to find a previously created object. If not present, allocates 
//!and constructs an object of type MyType (throwing version)
MyType *ptr = managed_memory_segment.find_or_construct<MyType>("Name") (par1, par2...);

//!Tries to find a previously created object. If not present, allocates and 
//!constructs an array of objects of type MyType (throwing version). Each object 
//!receives the same parameters (par1, par2, ...)
MyType *ptr = managed_memory_segment.find_or_construct<MyType>("Name")[count](par1, par2...);

//!Allocates and constructs an array of objects of type MyType (throwing version) 
//!Each object receives parameters returned with the expression (*it1++, *it2++,... )
MyType *ptr = managed_memory_segment.construct_it<MyType>("Name")[count](it1, it2...);

//!Tries to find a previously created object. If not present, allocates and constructs 
//!an array of objects of type MyType (throwing version).  Each object receives  
//!parameters returned with the expression (*it1++, *it2++,... )
MyType *ptr = managed_memory_segment.find_or_construct_it<MyType>("Name")[count](it1, it2...);

//!Tries to find a previously created object. Returns a pointer to the object and the 
//!count (if it is not an array, returns 1). If not present, the returned pointer is 0
std::pair<MyType *,std::size_t> ret = managed_memory_segment.find<MyType>("Name");

//!Destroys the created object, returns false if not present
bool destroyed = managed_memory_segment.destroy<MyType>("Name");

//!Destroys the created object via pointer
managed_memory_segment.destroy_ptr(ptr);

All these functions have a non-throwing version, invoked with an additional parameter std::nothrow. For example, for simple object construction:

//!Allocates and constructs an object of type MyType (no throwing version)
MyType *ptr = managed_memory_segment.construct<MyType>("Name", std::nothrow) (par1, par2...);
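
As a concrete, self-contained illustration of these construction families, here is a minimal sketch (the object name "MyInts" and the sizes are illustrative):

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>

int main()
{
   using namespace boost::interprocess;
   shared_memory_object::remove("MyManagedShm");
   try{
      managed_shared_memory segment(create_only, "MyManagedShm", 65536);

      //Construct an array of 10 ints named "MyInts", all initialized to 0
      int *ints = segment.construct<int>("MyInts")[10](0);

      //Any process mapping the segment can look the array up by name
      std::pair<int*, std::size_t> ret = segment.find<int>("MyInts");
      assert(ret.first == ints && ret.second == 10);

      //Destroy the array by name; subsequent finds return a null pointer
      segment.destroy<int>("MyInts");
      assert(segment.find<int>("MyInts").first == 0);
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}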

Sometimes, the user doesn't want to create class objects associated with a name. For this purpose, Boost.Interprocess can create anonymous objects in a managed memory segment. All named object construction functions are available to construct anonymous objects. To allocate an anonymous object, the user must use the boost::interprocess::anonymous_instance name instead of a normal name:

MyType *ptr = managed_memory_segment.construct<MyType>(anonymous_instance) (par1, par2...);

//Other construct variants can also be used (including non-throwing ones)
...

//We can only destroy the anonymous object via pointer
managed_memory_segment.destroy_ptr(ptr);

Find functions are meaningless here, since anonymous objects have no name: we can only destroy the anonymous object via pointer.

Sometimes, the user wants to emulate a singleton in a managed memory segment. Obviously, as the managed memory segment is constructed at run-time, the user must construct and destroy this object explicitly. But how can the user be sure that the object is the only object of its type in the managed memory segment? This can be emulated using a named object, checking if it is present before trying to create one, but all processes must agree on the object's name, which can also conflict with other existing names.

To solve this, Boost.Interprocess offers "unique object" creation in a managed memory segment. Only one instance of a class can be created in a managed memory segment using this "unique object" service (you can create more named objects of this class, though), making it easier to emulate singleton-like objects across processes, for example, to design pooled, shared memory allocators. The object can be searched for using the type of the class as a key.

// Construct
MyType *ptr = managed_memory_segment.construct<MyType>(unique_instance) (par1, par2...);

// Find it
std::pair<MyType *,std::size_t> ret = managed_memory_segment.find<MyType>(unique_instance);

// Destroy it
managed_memory_segment.destroy<MyType>(unique_instance);

// Other construct and find variants can also be used (including non-throwing ones)
//...

// We can also destroy the unique object via pointer
MyType *ptr = managed_memory_segment.construct<MyType>(unique_instance) (par1, par2...);
managed_memory_segment.destroy_ptr(ptr);

The find function obtains a pointer to the only object of type T that can be created using this "unique instance" mechanism.

One of the features of named/unique allocations/searches/destructions is that they are atomic. Named allocations use the recursive synchronization scheme defined by the internal mutex_family typedef of the memory allocation algorithm template parameter (MemoryAlgorithm). That is, the mutex type used to synchronize named/unique allocations is defined by the MemoryAlgorithm::mutex_family::recursive_mutex_type type. For shared memory and memory-mapped file based managed segments this recursive mutex is defined as boost::interprocess::interprocess_recursive_mutex.

If two processes call:

MyType *ptr = managed_shared_memory.find_or_construct<MyType>("Name")[count](par1, par2...);

at the same time, only one process will create the object and the other will obtain a pointer to the created object.

Raw allocation using allocate() can also be called safely while named/anonymous/unique allocations are executing, just as, when programming a multithreaded application, inserting an object into a mutex-protected map does not block other threads from calling new[] while the map thread is searching for the place where it has to insert the new object. Synchronization only happens once the map finds the correct place and has to allocate raw memory to construct the new value.

This means that if we are creating or searching for a lot of named objects, we only block creation/search operations from other processes, but we don't block another process that is, for example, inserting elements into a shared memory vector.

As seen, managed memory segments, when creating named objects, store the name/object association in an index. The index is a map with the name of the object as a key and a pointer to the object as the mapped type. The default specializations, managed_shared_memory and wmanaged_shared_memory, use flat_map_index as the index type.

Each index has its own characteristics, like search-time, insertion time, deletion time, memory use, and memory allocation patterns. Boost.Interprocess offers 3 index types right now:

  • boost::interprocess::flat_map_index flat_map_index: Based on boost::interprocess::flat_map, an ordered vector similar to the Loki library's AssocVector class, it offers great search times and minimal memory use. But the vector must be reallocated when it is full, so all data must be copied to the new buffer. Ideal when insertions happen mainly at initialization time and at run time we just need searches.
  • boost::interprocess::map_index map_index: Based on boost::interprocess::map, a managed memory ready version of std::map. Since it's a node based container, it has no reallocations; the tree just has to be rebalanced sometimes. Offers balanced insertion/deletion/search times with more overhead per node compared to boost::interprocess::flat_map_index. Ideal when searches/insertions/deletions come in random order.
  • boost::interprocess::null_index null_index: This index is for users who use a managed memory segment only for raw memory buffer allocations and make no use of named/unique allocations. This class is just empty and saves some space and compilation time. If you try to use named object creation with a managed memory segment using this index, you will get a compilation error.

As an example, if we want to define a new managed shared memory class using boost::interprocess::map as the index type, we just have to specify boost::interprocess::map_index as a template parameter:

//This managed memory segment can allocate objects with:
// -> a wchar_t string as key
// -> boost::interprocess::rbtree_best_fit with process-shared mutexes 
//       as memory allocation algorithm.
// -> boost::interprocess::map<...> as the index to store name/object mappings
//
typedef boost::interprocess::basic_managed_shared_memory
         <  wchar_t
         ,  boost::interprocess::rbtree_best_fit<boost::interprocess::mutex_family, offset_ptr<void> >
         ,  boost::interprocess::map_index
         >  my_managed_shared_memory;

Boost.Interprocess plans to offer an unordered_map based index as soon as this container is included in Boost. If these indexes are not enough for you, you can define your own index type. To know how to do this, go to Building custom indexes section.

All Boost.Interprocess managed memory segment classes construct in their respective memory segments (shared memory, memory mapped files, heap memory...) some structures to implement the memory management algorithm, named allocations, synchronization objects... All these objects are encapsulated in a single object called the segment manager. A managed memory mapped file and a managed shared memory use the same segment manager to implement all managed memory segment features, because a segment manager is a class that manages a fixed size memory buffer. Since both shared memory and memory mapped files are accessed through a mapped region, and a mapped region is a fixed size memory buffer, a single segment manager class can manage several managed memory segment types.

Some Boost.Interprocess classes require a pointer to the segment manager in their constructors, and the segment manager can be obtained from any managed memory segment using the get_segment_manager member function:

managed_shared_memory::segment_manager *seg_manager =
   managed_shm.get_segment_manager();
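
For example, the STL-compatible allocator of Boost.Interprocess is constructed from a segment manager pointer. A minimal sketch (assuming "managed_shm" is an open managed_shared_memory, as above):

#include <boost/interprocess/allocators/allocator.hpp>

//STL compatible allocator of ints that allocates from the managed segment
typedef boost::interprocess::allocator
   <int, boost::interprocess::managed_shared_memory::segment_manager>
   shmem_allocator;

//The allocator is constructed from the segment manager
shmem_allocator alloc_instance(managed_shm.get_segment_manager());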

Once an object is constructed using the construct<> function family, the programmer can obtain information about the object using a pointer to the object. The programmer can obtain the following information:

  • Name of the object: If it's a named instance, the name used in the construction function is returned, otherwise 0 is returned.
  • Length of the object: Returns the number of elements of the object (1 if it's a single value, >=1 if it's an array).
  • The type of construction: Whether the object was constructed using a named, unique or anonymous construction.

Here is an example showing this functionality:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>
#include <cstring>

class my_class
{
   //...
};

int main()
{
   using namespace boost::interprocess;
   typedef managed_shared_memory msm;
   shared_memory_object::remove("MyManagedShm");

   try{
      msm managed_shm(create_only, "MyManagedShm", 10000*sizeof(std::size_t));

      //Construct objects
      my_class *named_object  = managed_shm.construct<my_class>("Object name")[1]();
      my_class *unique_object = managed_shm.construct<my_class>(unique_instance)[2]();
      my_class *anon_object   = managed_shm.construct<my_class>(anonymous_instance)[3]();

      //Now test "get_instance_name" function.
      assert(0 == std::strcmp(msm::get_instance_name(named_object), "Object name"));
      assert(0 == msm::get_instance_name(unique_object));
      assert(0 == msm::get_instance_name(anon_object));

      //Now test "get_instance_type" function.
      assert(named_type     == msm::get_instance_type(named_object));
      assert(unique_type    == msm::get_instance_type(unique_object));
      assert(anonymous_type == msm::get_instance_type(anon_object));

      //Now test "get_instance_length" function.
      assert(1 == msm::get_instance_length(named_object));
      assert(2 == msm::get_instance_length(unique_object));
      assert(3 == msm::get_instance_length(anon_object));

      managed_shm.destroy_ptr(named_object);
      managed_shm.destroy_ptr(unique_object);
      managed_shm.destroy_ptr(anon_object);
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}

These functions are available to obtain information about the managed memory segments:

Obtain the size of the memory segment:

managed_shm.get_size();

Obtain the number of free bytes of the segment:

managed_shm.get_free_memory();

Clear to zero the free memory:

managed_shm.zero_free_memory();

Find out whether all memory has been deallocated (returns true if so, false otherwise):

managed_shm.all_memory_deallocated();

Test internal structures of the managed segment. Returns true if no errors are detected:

managed_shm.check_sanity();

Obtain the number of named and unique objects allocated in the segment:

managed_shm.get_num_named_objects();
managed_shm.get_num_unique_objects();
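
A minimal sketch combining these queries (the asserted values assume a freshly created 65536 byte segment with no objects constructed yet):

managed_shared_memory managed_shm(create_only, "MyManagedShm", 65536);

//Total size of the segment
assert(managed_shm.get_size() == 65536);
//Internal bookkeeping uses some bytes, so less than the total is free
assert(managed_shm.get_free_memory() < managed_shm.get_size());
//No user allocations have been made yet
assert(managed_shm.all_memory_deallocated());
//Internal structures should be consistent
assert(managed_shm.check_sanity());
//No named or unique objects have been constructed yet
assert(managed_shm.get_num_named_objects()  == 0);
assert(managed_shm.get_num_unique_objects() == 0);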

Once a managed segment is created, it can't be grown. The limitation is not easily solvable: every process attached to the managed segment would need to be stopped and notified of the new size, and they would need to remap the managed segment and continue working. This is nearly impossible to achieve with a user-level library without the help of the operating system kernel.

On the other hand, Boost.Interprocess offers off-line segment growing. What does this mean? That the segment can be grown if no process has mapped the managed segment. If the application can find a moment when no process is attached, it can grow or shrink-to-fit the managed segment.

Here is an example showing how to grow and shrink-to-fit a managed_shared_memory:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/managed_mapped_file.hpp>
#include <cassert>

class MyClass
{
   //...
};

int main()
{
   using namespace boost::interprocess;
   try{
      {  //Remove old shared memory if present
         shared_memory_object::remove("MyManagedShm");
         //Create a managed shared memory
         managed_shared_memory shm(create_only, "MyManagedShm", 1000);
         //Check size
         assert(shm.get_size() == 1000);
         //Construct a named object
         MyClass *myclass = shm.construct<MyClass>("MyClass")();
         //The managed segment is unmapped here
      }
      {
         //Now that the segment is not mapped grow it adding extra 500 bytes
         managed_shared_memory::grow("MyManagedShm", 500);
         //Map it again
         managed_shared_memory shm(open_only, "MyManagedShm");
         //Check size
         assert(shm.get_size() == 1500);
         //Check "MyClass" is still there
         MyClass *myclass = shm.find<MyClass>("MyClass").first;
         assert(myclass != 0);
         //The managed segment is unmapped here
      }
      {
         //Now minimize the size of the segment
         managed_shared_memory::shrink_to_fit("MyManagedShm");
         //Map it again
         managed_shared_memory shm(open_only, "MyManagedShm");
         //Check size
         assert(shm.get_size() < 1000);
         //Check "MyClass" is still there
         MyClass *myclass = shm.find<MyClass>("MyClass").first;
         assert(myclass != 0);
         //The managed segment is unmapped here
      }
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   //Remove the managed segment
   shared_memory_object::remove("MyManagedShm");
   return 0;
}

managed_mapped_file also offers similar grow and shrink_to_fit functions for the managed file. Please remember that no process should be modifying the file/shared memory while the growing/shrinking process is performed. Otherwise, the managed segment will be corrupted.
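
The equivalent calls for a managed mapped file might look like this (a sketch assuming "MyMappedFile" exists and is not mapped by any process):

//Grow the file by 500 extra bytes, then shrink it to the minimum size
managed_mapped_file::grow("MyMappedFile", 500);
managed_mapped_file::shrink_to_fit("MyMappedFile");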

As mentioned, the managed segment stores the information about named and unique objects in two indexes. Depending on the type of those indexes, the index must reallocate some auxiliary structures when new named or unique allocations are made. For some indexes, if the user knows how many named or unique objects are going to be created, it's possible to preallocate some structures to obtain much better performance (if the index is an ordered vector, it can preallocate memory to avoid reallocations; if the index is a hash structure, it can preallocate the bucket array...).

The following functions reserve memory to make the subsequent allocation of named or unique objects more efficient. These functions are only useful for pseudo-intrusive or non-node indexes (like flat_map_index, iunordered_set_index). They have no effect with the default index (iset_index) or other indexes (map_index):

managed_shm.reserve_named_objects(1000);
managed_shm.reserve_unique_objects(1000);

Managed memory segments also offer the possibility to iterate through constructed named and unique objects for debugging purposes. Caution: this iteration is not thread-safe so the user should make sure that no other thread is manipulating named or unique indexes (creating, erasing, reserving...) in the segment. Other operations not involving indexes can be concurrently executed (raw memory allocation/deallocations, for example).

The following functions return constant iterators to the range of named and unique objects stored in the managed segment. Depending on the index type, iterators might be invalidated after a named or unique creation/erasure/reserve operation:

typedef managed_shared_memory::const_named_iterator const_named_it;
const_named_it named_beg = managed_shm.named_begin();
const_named_it named_end = managed_shm.named_end();

typedef managed_shared_memory::const_unique_iterator const_unique_it;
const_unique_it unique_beg = managed_shm.unique_begin();
const_unique_it unique_end = managed_shm.unique_end();

for(; named_beg != named_end; ++named_beg){
   //A pointer to the name of the named object
   const managed_shared_memory::char_type *name = named_beg->name();
   //The length of the name
   std::size_t name_len = named_beg->name_length();
   //A constant void pointer to the named object
   const void *value = named_beg->value();
}

for(; unique_beg != unique_end; ++unique_beg){
   //The typeid(T).name() of the unique object
   const char *typeid_name = unique_beg->name();
   //The length of the name
   std::size_t name_len = unique_beg->name_length();
   //A constant void pointer to the unique object
   const void *value = unique_beg->value();
}

Sometimes it's interesting to be able to allocate aligned fragments of memory because of hardware or software restrictions. At other times, aligned memory is a feature that can be used to improve several memory algorithms.

This allocation is similar to the previously shown raw memory allocation, but it takes an additional parameter specifying the alignment. There is one restriction: the alignment must be a power of two.

If a user wants to allocate many aligned blocks (for example, aligned to 128 bytes), the size that minimizes the memory waste is a value that is nearly a multiple of that alignment (for example, 2*128 - some bytes). The reason is that every memory allocation usually needs some additional metadata in the first bytes of the allocated buffer. If the user knows the value of "some bytes", and if the first bytes of a free block of memory are used to fulfill the aligned allocation, the rest of the block is left aligned too and ready for the next aligned allocation. Note that requesting a size that is an exact multiple of the alignment is not optimal, because it leaves the next block of memory unaligned due to the needed metadata.

Once the programmer knows the size of the payload of every memory allocation, he can request a size that will be optimal for allocating aligned chunks of memory, maximizing both the size of the request and the possibilities of future aligned allocations. This information is stored in the PayloadPerAllocation constant of managed memory segments.

Here is a small example showing how aligned allocation is used:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>

int main()
{
   using namespace boost::interprocess;

   //Managed memory segment that allocates portions of a shared memory
   //segment with the default management algorithm
   shared_memory_object::remove("MyManagedShm");

   try{
      managed_shared_memory managed_shm(create_only, "MyManagedShm", 65536);

      const std::size_t Alignment = 128;

      //Allocate 100 bytes aligned to Alignment from segment, throwing version
      void *ptr = managed_shm.allocate_aligned(100, Alignment);

      //Check alignment
      assert(((char*)ptr-(char*)0) % Alignment == 0);

      //Deallocate it
      managed_shm.deallocate(ptr);

      //Non throwing version
      ptr = managed_shm.allocate_aligned(100, Alignment, std::nothrow);

      //Check alignment
      assert(((char*)ptr-(char*)0) % Alignment == 0);

      //Deallocate it
      managed_shm.deallocate(ptr);

      //If we want to efficiently allocate aligned blocks of memory
      //use managed_shared_memory::PayloadPerAllocation value
      assert(Alignment > managed_shared_memory::PayloadPerAllocation);

      //This allocation will maximize the size of the aligned memory
      //and will increase the possibility of finding more aligned memory
      ptr = managed_shm.allocate_aligned
         (3*Alignment - managed_shared_memory::PayloadPerAllocation, Alignment);

      //Check alignment
      assert(((char*)ptr-(char*)0) % Alignment == 0);

      //Deallocate it
      managed_shm.deallocate(ptr);
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}

If an application needs to allocate a lot of memory buffers but deallocate them independently, it is normally forced to call allocate() in a loop. Managed memory segments offer an alternative function to pack several allocations in a single call, obtaining memory buffers that:

  • are packed contiguously in memory (which improves locality)
  • can be independently deallocated.

This allocation method is much faster than calling allocate() in a loop. The downside is that the segment must provide a contiguous chunk of memory big enough to hold all the allocations. Managed memory segments offer this functionality through the allocate_many() functions. There are 2 types of allocate_many functions:

  • Allocation of N buffers of memory with the same size.
  • Allocation of N buffers of memory, each one of a different size.

//!Allocates n_elements buffers of elem_size bytes each.
multiallocation_iterator allocate_many(std::size_t elem_size, std::size_t n_elements);

//!Allocates n_elements buffers, each one of elem_sizes[i] bytes.
multiallocation_iterator allocate_many(const std::size_t *elem_sizes, std::size_t n_elements);

//!Allocates n_elements buffers of elem_size bytes each. Non-throwing version.
multiallocation_iterator allocate_many(std::size_t elem_size, std::size_t n_elements, std::nothrow_t nothrow);

//!Allocates n_elements buffers, each one of elem_sizes[i] bytes. Non-throwing version.
multiallocation_iterator allocate_many(const std::size_t *elem_sizes, std::size_t n_elements, std::nothrow_t nothrow);

All functions return a multiallocation iterator that can be used to obtain pointers to memory the user can overwrite. A multiallocation_iterator:

  • Becomes invalidated if the memory it points to is deallocated or if the next iterators (those previously reachable with operator++) become invalid.
  • When returned from allocate_many, it can be checked in a boolean expression to know if the allocation has been successful.
  • A default constructed multiallocation iterator indicates both an invalid iterator and the "end" iterator.
  • Dereferencing the iterator (operator*()) returns a char & referencing the first byte the user can overwrite in the memory buffer.
  • The iterator category depends on the memory allocation algorithm, but it's at least a forward iterator.

Here's a small example showing all this functionality:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>//assert
#include <cstring>//std::memset
#include <new>    //std::nothrow
#include <vector> //std::vector

int main()
{
   using namespace boost::interprocess;
   typedef managed_shared_memory::multiallocation_iterator multiallocation_iterator;

   //Try to erase any previous managed segment with the same name
   shared_memory_object::remove("MyManagedShm");

   try{
      managed_shared_memory managed_shm(create_only, "MyManagedShm", 65536);

      //Allocate 16 elements of 100 bytes in a single call. Non-throwing version.
      multiallocation_iterator beg_it = managed_shm.allocate_many(100, 16, std::nothrow);

      //To check for an error, we can use a boolean expression
      //or compare it with a default constructed iterator
      assert(!beg_it == (beg_it == multiallocation_iterator()));
      
      //Check if the memory allocation was successful
      if(!beg_it)  return 1;

      //Allocated buffers
      std::vector<char*> allocated_buffers;

      //Initialize our data
      for( multiallocation_iterator it = beg_it, end_it; it != end_it; ){
         allocated_buffers.push_back(&*it);
         //The iterator must be incremented before overwriting memory
         //because otherwise, the iterator is invalidated.
         std::memset(&*it++, 0, 100);
      }

      //Now deallocate
      while(!allocated_buffers.empty()){
         managed_shm.deallocate(allocated_buffers.back());
         allocated_buffers.pop_back();
      }

      //Allocate 10 buffers of different sizes in a single call. Throwing version
      std::size_t sizes[10];
      for(std::size_t i = 0; i < 10; ++i)
         sizes[i] = i*3;

      beg_it  = managed_shm.allocate_many(sizes, 10);

      //Iterate each allocated buffer and deallocate
      //The "end" condition can be also checked with operator!
      for(multiallocation_iterator it = beg_it; it;){
         //The iterator must be incremented before overwriting memory
         //because otherwise, the iterator is invalidated.
         managed_shm.deallocate(&*it++);
      }
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}

Allocating N buffers of the same size improves the performance of pools and node containers (for example, STL-like lists): when inserting a range of forward iterators into an STL-like list, the insertion function can detect the number of needed elements and allocate them in a single call. The nodes can still be deallocated independently.

Allocating N buffers of different sizes can be used to speed up allocation in cases where several objects must always be allocated at the same time but deallocated at different times. For example, a class might perform several initial allocations (some header data for a network packet, for example) in its constructor, but also allocations of buffers that might be reallocated in the future (the data to be sent through the network). Instead of allocating all the data independently, the constructor might use allocate_many() to speed up the initialization, while still being able to deallocate and expand the memory of the variable-size element.

In general, allocate_many is useful with large values of N. Overuse of allocate_many can increase the effective memory usage, because it can't reuse existing non-contiguous memory fragments that might be available for some of the elements.

When programming some data structures such as vectors, memory reallocation becomes an important tool to improve performance. Managed memory segments offer an advanced reallocation function that offers:

  • Forward expansion: An allocated buffer can be expanded so that the end of the buffer is moved further. New data can be written between the old end and the new end.
  • Backwards expansion: An allocated buffer can be expanded so that the beginning of the buffer is moved backwards. New data can be written between the new beginning and the old beginning.
  • Shrinking: An allocated buffer can be shrunk so that the end of the buffer is moved backwards. The memory between the new end and the old end can be reused for future allocations.

The expansion can be combined with the allocation of a new buffer if the expansion fails, obtaining a function with "expand; if that fails, allocate a new buffer" semantics.

Apart from these features, the function always returns the real size of the allocated buffer, because many times, due to alignment issues, the allocated buffer is a bit bigger than the requested size. Thus, the programmer can maximize memory use with allocation_command.

Here's the declaration of the function:

enum allocation_type
{
   //Bitwise OR (|) combinable values
   allocate_new        = ...,
   expand_fwd          = ...,
   expand_bwd          = ...,
   shrink_in_place     = ...,
   nothrow_allocation  = ...
};


template<class T>
std::pair<T *, bool>
   allocation_command( allocation_type command
                     , std::size_t limit_size
                     , std::size_t preferred_size
                     , std::size_t &received_size
                     , T *reuse_ptr = 0);

Preconditions for the function:

  • If the parameter command contains the value shrink_in_place it can't contain any of these values: expand_fwd, expand_bwd.
  • If the parameter command contains expand_fwd or expand_bwd, the parameter reuse_ptr must be non-null and returned by a previous allocation function.
  • If the parameter command contains the value shrink_in_place, the parameter limit_size must be equal or greater than the parameter preferred_size.
  • If the parameter command contains any of these values: expand_fwd or expand_bwd, the parameter limit_size must be equal or less than the parameter preferred_size.

These are the effects of this function:

  • If the parameter command contains the value shrink_in_place, the function will try to reduce the size of the memory block referenced by pointer reuse_ptr to the value preferred_size moving only the end of the block. If it's not possible, it will try to reduce the size of the memory block as much as possible as long as this results in size(p) <= limit_size. Success is reported only if this results in preferred_size <= size(p) and size(p) <= limit_size.
  • If the parameter command only contains the value expand_fwd (with optional additional nothrow_allocation), the allocator will try to increase the size of the memory block referenced by pointer reuse_ptr, moving only the end of the block, to the value preferred_size. If it's not possible, it will try to increase the size of the memory block as much as possible as long as this results in size(p) >= limit_size. Success is reported only if this results in limit_size <= size(p).
  • If the parameter command only contains the value expand_bwd (with optional additional nothrow_allocation), the allocator will try to increase the size of the memory block referenced by pointer reuse_ptr only moving the start of the block to a returned new position new_ptr. If it's not possible, it will try to move the start of the block as much as possible as long as this results in size(new_ptr) >= limit_size. Success is reported only if this results in limit_size <= size(new_ptr).
  • If the parameter command only contains the value allocate_new (with optional additional nothrow_allocation), the allocator will try to allocate memory for preferred_size objects. If it's not possible, it will try to allocate memory for at least limit_size objects.
  • If the parameter command only contains a combination of expand_fwd and allocate_new (with optional additional nothrow_allocation), the allocator will first try forward expansion. If this fails, it will try a new allocation.
  • If the parameter command only contains a combination of expand_bwd and allocate_new (with optional additional nothrow_allocation), the allocator will try first to obtain preferred_size objects using both methods if necessary. If this fails, it will try to obtain limit_size objects using both methods if necessary.
  • If the parameter command only contains a combination of expand_fwd and expand_bwd (with optional additional nothrow_allocation), the allocator will try first forward expansion. If this fails it will try to obtain preferred_size objects using backwards expansion or a combination of forward and backwards expansion. If this fails, it will try to obtain limit_size objects using both methods if necessary.
  • If the parameter command only contains a combination of allocate_new, expand_fwd and expand_bwd (with optional additional nothrow_allocation), the allocator will first try forward expansion. If this fails it will try to obtain preferred_size objects using new allocation, backwards expansion or a combination of forward and backwards expansion. If this fails, it will try to obtain limit_size objects using the same methods.
  • The allocator always writes the size of the expanded/allocated/shrunk memory block in received_size. On failure the allocator writes in received_size a possibly successful limit_size parameter for a new call.

Throws an exception if two conditions are met:

  • The allocator is unable to allocate/expand/shrink the memory or there is an error in preconditions
  • The parameter command does not contain nothrow_allocation.

This function returns:

  • The address of the allocated memory or the new address of the expanded memory as the first member of the pair. If the parameter command contains nothrow_allocation the first member will be 0 if the allocation/expansion fails or there is an error in preconditions.
  • The second member of the pair will be false if the memory has been allocated, true if the memory has been expanded. If the first member is 0, the second member has an undefined value.

Notes:

  • If the user chooses char as template argument the returned buffer will be suitably aligned to hold any type.
  • If the user chooses char as template argument and a backwards expansion is performed, then, although properly aligned, the returned buffer might not be suitable, because the distance between the new beginning and the old beginning might not be a multiple of the size of the type the user wants to construct, since due to internal restrictions the expansion can be slightly bigger than requested. When performing backwards expansion, if you have already constructed objects in the old buffer, make sure to specify the type correctly.

Here is a small example that shows the use of allocation_command:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>

int main()
{
   using namespace boost::interprocess;

   //Managed memory segment that allocates portions of a shared memory
   //segment with the default management algorithm
   shared_memory_object::remove("MyManagedShm");

   try{
      managed_shared_memory managed_shm(create_only, "MyManagedShm", 10000*sizeof(std::size_t));

      //Allocate space for at least 100 std::size_t values, 1000 if possible
      std::size_t received_size, min_size = 100, preferred_size = 1000;
      std::size_t *ptr = managed_shm.allocation_command<std::size_t>
         (allocate_new, min_size, preferred_size, received_size).first;

      //Received size must be at least min_size
      assert(received_size >= min_size);

      //Get free memory
      std::size_t free_memory_after_allocation = managed_shm.get_free_memory();

      //Now write the data
      for(std::size_t i = 0; i < received_size; ++i) ptr[i] = i;

      //Now try to triple the buffer. We won't accept an expansion
      //smaller than double the original buffer.
      //This "should" be successful since no other class is allocating
      //memory from the segment
      std::size_t expanded_size;
      std::pair<std::size_t *, bool> ret = managed_shm.allocation_command
         (expand_fwd, received_size*2, received_size*3, expanded_size, ptr);

      //Check invariants
      assert(ret.second == true);
      assert(ret.first == ptr);
      assert(expanded_size >= received_size*2);

      //Get free memory and compare
      std::size_t free_memory_after_expansion = managed_shm.get_free_memory();
      assert(free_memory_after_expansion < free_memory_after_allocation);

      //Write new values
      for(std::size_t i = received_size; i < expanded_size; ++i)  ptr[i] = i;

      //Try to shrink the buffer to approximately min_size; the new size
      //should be smaller than min_size*2.
      //This "should" succeed since no other code is allocating
      //memory from the segment
      std::size_t shrunk_size;
      ret = managed_shm.allocation_command
         (shrink_in_place, min_size*2, min_size, shrunk_size, ptr);

      //Check invariants
      assert(ret.second == true);
      assert(ret.first == ptr);
      assert(shrunk_size <= min_size*2);
      assert(shrunk_size >= min_size);

      //Get free memory and compare
      std::size_t free_memory_after_shrinking = managed_shm.get_free_memory();
      assert(free_memory_after_shrinking > free_memory_after_expansion);

      //Deallocate the buffer
      managed_shm.deallocate(ptr);
   }
   catch(...){
      shared_memory_object::remove("MyManagedShm");
      throw;
   }
   shared_memory_object::remove("MyManagedShm");
   return 0;
}

allocation_command is a very powerful function that can lead to important performance gains. It's especially useful when programming vector-like data structures, where the programmer can minimize both the number of allocation requests and the memory waste.
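
As an illustration, here is a minimal sketch, not part of the library, of how a vector-like container could grow its storage with allocation_command. The helper function grow_buffer and its names are hypothetical; the sketch assumes the command flags can be combined with operator | as described above, and that old_count is at least 1:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <cstddef>
#include <utility>

using namespace boost::interprocess;

//Hypothetical helper: try to expand the buffer pointed by old_ptr in
//place and, only if that is impossible, obtain a new buffer. Thanks to
//nothrow_allocation the call returns 0 on failure instead of throwing.
int *grow_buffer(managed_shared_memory &segment, int *old_ptr,
                 std::size_t old_count, std::size_t &new_count)
{
   std::pair<int *, bool> ret = segment.allocation_command<int>
      ( allocate_new | expand_fwd | nothrow_allocation
      , old_count + 1   //minimum acceptable number of elements
      , old_count * 2   //preferred number of elements
      , new_count
      , old_ptr);
   //If ret.second is true the old buffer was expanded in place and no
   //element copying is needed; if false, ret.first points to a new buffer
   //and the caller must move the elements and deallocate the old buffer.
   return ret.first;
}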

Boost.Interprocess offers managed shared memory between processes using managed_shared_memory or managed_mapped_file. Two processes just map the same mappable resource and read from and write to that object.

Many times, we don't want to use that shared memory approach: we prefer to send serialized data through the network, a local socket, or a message queue. Serialization can be done with Boost.Serialization or a similar library. However, if two processes share the same ABI (application binary interface), we could use the object and container construction capabilities of managed_shared_memory or managed_heap_memory to build all the information in a single buffer that will be sent, for example, through message queues. The receiver would just copy the data to a local buffer and could read or modify it directly without deserializing the data. This approach can be much more efficient than a complex serialization mechanism.

Applications for Boost.Interprocess services using non-shared memory buffers:

  • Create and use STL compatible containers and allocators in systems where dynamic memory is not recommended.
  • Build complex, easily serializable databases in a single buffer:
    • To share data between threads
    • To save and load information from/to files.
  • Duplicate information (containers, allocators, etc...) just copying the contents of one buffer to another one.
  • Send complex information and objects/databases using serial/inter-process/network communications.

To help with this management, Boost.Interprocess provides two useful classes, basic_managed_heap_memory and basic_managed_external_buffer:

Sometimes, the user wants to create simple objects, STL compatible containers, STL compatible strings and more, all in a single buffer. This buffer could be a big static buffer, a memory-mapped auxiliary device or any other user buffer.

This allows easy serialization: we'll just need to copy the buffer to duplicate all the objects created in the original buffer, including complex objects like maps and lists. Boost.Interprocess offers managed memory segment classes to handle user provided buffers that allow the same functionality as the shared memory classes:

//Named object creation managed memory segment
//All objects are constructed in a user provided buffer
template <
            class CharType, 
            class MemoryAlgorithm, 
            template<class IndexConfig> class IndexType
         >
class basic_managed_external_buffer;

//Named object creation managed memory segment
//All objects are constructed in a user provided buffer
//   Names are c-strings, 
//   Default memory management algorithm
//    (rbtree_best_fit with no mutexes and relative pointers)
//   Name-object mappings are stored in the default index type (flat_map)
typedef basic_managed_external_buffer < 
   char, 
   rbtree_best_fit<null_mutex_family, offset_ptr<void> >,
   flat_map_index
   >  managed_external_buffer;

//Named object creation managed memory segment
//All objects are constructed in a user provided buffer
//   Names are wide-strings, 
//   Default memory management algorithm
//    (rbtree_best_fit with no mutexes and relative pointers)
//   Name-object mappings are stored in the default index type (flat_map)
typedef basic_managed_external_buffer< 
   wchar_t, 
   rbtree_best_fit<null_mutex_family, offset_ptr<void> >,
   flat_map_index
   >  wmanaged_external_buffer;

To use a managed external buffer, you must include the following header:

#include <boost/interprocess/managed_external_buffer.hpp>

Let's see an example of the use of managed_external_buffer:

#include <boost/interprocess/managed_external_buffer.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/list.hpp>
#include <cstring>

int main()
{
   using namespace boost::interprocess;

   //Create the static memory that will store all objects
   const int memsize = 65536;
   static char static_buffer [memsize];

   //This managed memory will construct objects associated with
   //a wide string in the static buffer
   wmanaged_external_buffer objects_in_static_memory
      (create_only, static_buffer, memsize);

   //We optimize resources to create 100 named objects in the static buffer
   objects_in_static_memory.reserve_named_objects(100);

   //Alias an integer allocator type
   //This allocator will allocate memory inside the static buffer
   typedef allocator<int, wmanaged_external_buffer::segment_manager>
      allocator_t;

   //Alias an STL compatible list to be constructed in the static buffer
   typedef list<int, allocator_t>    MyBufferList;

   //The list must be initialized with the allocator
   //All objects created with objects_in_static_memory will
   //be stored in the static_buffer!
   MyBufferList *list = objects_in_static_memory.construct<MyBufferList>(L"MyList")
                           (objects_in_static_memory.get_segment_manager());

   //Since the allocation algorithm from wmanaged_external_buffer uses relative
   //pointers and all the pointers constructed in the static memory point
   //to objects in the same segment, we can create another static buffer
   //from the first one and duplicate all the data.
   static char static_buffer2 [memsize];
   std::memcpy(static_buffer2, static_buffer, memsize);
   
   //Now open the duplicated managed memory passing the memory as argument
   wmanaged_external_buffer objects_in_static_memory2 
      (open_only, static_buffer2, memsize);

   //Check that "MyList" has been duplicated in the second buffer
   if(!objects_in_static_memory2.find<MyBufferList>(L"MyList").first)
      return 1;

   //Destroy the lists from the static buffers
   objects_in_static_memory.destroy<MyBufferList>(L"MyList");
   objects_in_static_memory2.destroy<MyBufferList>(L"MyList");
   return 0;
}

Boost.Interprocess STL compatible allocators can also be used to place STL compatible containers in the user segment.
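
For instance, here is a minimal sketch, with illustrative names such as "MyMap", that places an STL compatible map in a user provided buffer:

#include <boost/interprocess/managed_external_buffer.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <functional>
#include <utility>

int main()
{
   using namespace boost::interprocess;

   //User provided buffer where all the objects will be constructed
   static char buffer [65536];
   managed_external_buffer segment(create_only, buffer, 65536);

   //Allocator for the value type of the map, taking memory from the buffer
   typedef std::pair<const int, float> ValueType;
   typedef allocator<ValueType, managed_external_buffer::segment_manager>
      MapAllocator;
   typedef map<int, float, std::less<int>, MapAllocator> MyMap;

   //Construct a named map inside the user provided buffer and use it
   MyMap *mymap = segment.construct<MyMap>("MyMap")
      (std::less<int>(), segment.get_segment_manager());
   (*mymap)[0] = 1.0f;

   //Destroy the map before the buffer goes away
   segment.destroy_ptr(mymap);
   return 0;
}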

basic_managed_external_buffer can also be useful to build small databases for embedded systems, limiting the memory used to a predefined chunk instead of letting the database fragment the heap.

The use of heap memory (new/delete) to obtain a buffer where the user wants to store all of its data is very common, so Boost.Interprocess provides some specialized classes that work exclusively with heap memory.

These are the classes:

//Named object creation managed memory segment
//All objects are constructed in a single buffer allocated via new[]
template <
            class CharType, 
            class MemoryAlgorithm, 
            template<class IndexConfig> class IndexType
         >
class basic_managed_heap_memory;

//Named object creation managed memory segment
//All objects are constructed in a single buffer allocated via new[]
//   Names are c-strings, 
//   Default memory management algorithm
//    (rbtree_best_fit with no mutexes and relative pointers)
//   Name-object mappings are stored in the default index type (flat_map)
typedef basic_managed_heap_memory < 
   char, 
   rbtree_best_fit<null_mutex_family>,
   flat_map_index
   >  managed_heap_memory;

//Named object creation managed memory segment
//All objects are constructed in a single buffer allocated via new[]
//   Names are wide-strings, 
//   Default memory management algorithm
//    (rbtree_best_fit with no mutexes and relative pointers)
//   Name-object mappings are stored in the default index type (flat_map)
typedef basic_managed_heap_memory< 
   wchar_t, 
   rbtree_best_fit<null_mutex_family>,
   flat_map_index
   >  wmanaged_heap_memory;

To use a managed heap memory, you must include the following header:

#include <boost/interprocess/managed_heap_memory.hpp>

The use is exactly the same as boost::interprocess::basic_managed_external_buffer, except that memory is created by the managed memory segment itself using dynamic (new/delete) memory.

basic_managed_heap_memory also offers a grow(std::size_t extra_bytes) function that tries to resize the internal heap memory so that there is room for more objects. Be careful, though: if the memory is reallocated, the old buffer will be copied into the new one, so all objects will be binary-copied to the new buffer. To be able to use this function, all pointers constructed in the heap buffer that point to objects in the heap buffer must be relative pointers (for example, offset_ptr). Otherwise, the result is undefined. Here is an example:

#include <boost/interprocess/containers/list.hpp>
#include <boost/interprocess/managed_heap_memory.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <cstddef>
#include <cassert>

using namespace boost::interprocess;
typedef list<int, allocator<int, managed_heap_memory::segment_manager> > 
   MyList;

int main ()
{
   //We will create a buffer of 1000 bytes to store a list
   managed_heap_memory heap_memory(1000);

   MyList * mylist = heap_memory.construct<MyList>("MyList")
                        (heap_memory.get_segment_manager());

   //Obtain handle, that identifies the list in the buffer
   managed_heap_memory::handle_t list_handle = heap_memory.get_handle_from_address(mylist);

   //Fill list until there is no more memory in the buffer
   try{
      while(1) {
         mylist->insert(mylist->begin(), 0);
      }
   }
   catch(const bad_alloc &){
      //memory is full
   }
   //Let's obtain the size of the list
   std::size_t old_size = mylist->size();

   //To make the list bigger, let's grow the heap buffer
   //by another 1000 bytes.
   heap_memory.grow(1000);

   //If memory has been reallocated, the old pointer is invalid, so
   //use previously obtained handle to find the new pointer.
   mylist = static_cast<MyList *>
               (heap_memory.get_address_from_handle(list_handle));
   
   //Fill list until there is no more memory in the buffer
   try{
      while(1) {
         mylist->insert(mylist->begin(), 0);
      }
   }
   catch(const bad_alloc &){
      //memory is full
   }

   //Let's obtain the new size of the list      
   std::size_t new_size = mylist->size();

   assert(new_size > old_size);

   //Destroy list
   heap_memory.destroy_ptr(mylist);

   return 0;
}

All managed memory segments have similar capabilities (memory allocation inside the memory segment, named object construction...), but there are some remarkable differences between managed_shared_memory and managed_mapped_file on one hand, and managed_heap_memory and managed_external_buffer on the other.

  • Default specializations of managed shared memory and mapped file use process-shared mutexes. Heap memory and external buffer have no internal synchronization by default. The reason is that the first two are meant to be shared between processes (although memory mapped files could also be used just to obtain a persistent object database for a single process), whereas the last two are meant to be used inside one process to construct a serialized named object database that can be sent through interprocess communications (like message queues or localhost network connections). If synchronization is needed, it can be enabled through the MemoryAlgorithm template parameter, as shown in the sketch after this list.
  • The first two create a system-global object (a shared memory object or a file) shared by several processes, whereas the last two are objects that don't create system-wide resources.

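For example, here is a minimal sketch of such a customization; the typedef name thread_safe_managed_heap_memory is illustrative and not part of the library:

#include <boost/interprocess/managed_heap_memory.hpp>
#include <boost/interprocess/mem_algo/rbtree_best_fit.hpp>
#include <boost/interprocess/sync/mutex_family.hpp>
#include <boost/interprocess/indexes/flat_map_index.hpp>

using namespace boost::interprocess;

//Same as managed_heap_memory, but the memory algorithm locks a mutex on
//every allocation, so several threads of the same process can allocate
//from the segment concurrently
typedef basic_managed_heap_memory<
   char,
   rbtree_best_fit<mutex_family>,
   flat_map_index
>  thread_safe_managed_heap_memory;
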
To see the utility of the managed heap memory and managed external buffer classes, the following example shows how a message queue can be used to serialize a whole database constructed in a memory buffer using Boost.Interprocess, send the database through a message queue, and duplicate it in another buffer:

//This test creates an in-memory database using Interprocess machinery and 
//serializes it through a message queue. Then it rebuilds the database in 
//another buffer and checks it against the original database
bool test_serialize_db()
{
   //Typedef data to create an Interprocess map
   typedef std::pair<const std::size_t, std::size_t> MyPair;
   typedef std::less<std::size_t>   MyLess;
   typedef node_allocator<MyPair, managed_external_buffer::segment_manager>
      node_allocator_t;
   typedef map<std::size_t, 
               std::size_t, 
               std::less<std::size_t>, 
               node_allocator_t>
               MyMap;

   //Some constants
   const std::size_t BufferSize  = 65536;
   const std::size_t MaxMsgSize  = 100;

   //Allocate a memory buffer to hold the destination database using vector<char>
   std::vector<char> buffer_destiny(BufferSize, 0);

   message_queue::remove(test::get_process_id_name());
   {
      //Create the message-queues
      message_queue mq1(create_only, test::get_process_id_name(), 1, MaxMsgSize);

      //Open previously created message-queue simulating other process
      message_queue mq2(open_only, test::get_process_id_name());

      //A managed heap memory to create the origin database
      managed_heap_memory db_origin(buffer_destiny.size());

      //Construct the map in the first buffer
      MyMap *map1 = db_origin.construct<MyMap>("MyMap")
                                       (MyLess(), 
                                       db_origin.get_segment_manager());
      if(!map1)
         return false;

      //Fill map1 until it is full
      try{
         std::size_t i = 0;
         while(1){
            (*map1)[i] = i;
            ++i;
         }
      }
      catch(boost::interprocess::bad_alloc &){}

      //Control data for sending through the message queue
      std::size_t sent = 0;
      std::size_t recvd = 0;
      std::size_t total_recvd = 0;
      unsigned int priority;

      //Send whole first buffer through the mq1, read it 
      //through mq2 to the second buffer
      while(1){
         //Send a fragment of buffer1 through mq1
         std::size_t bytes_to_send = MaxMsgSize < (db_origin.get_size() - sent) ? 
                                       MaxMsgSize : (db_origin.get_size() - sent);
         mq1.send( &static_cast<char*>(db_origin.get_address())[sent]
               , bytes_to_send
               , 0);
         sent += bytes_to_send;
         //Receive the fragment through mq2 into buffer_destiny
         mq2.receive( &buffer_destiny[total_recvd]
                  , BufferSize - total_recvd
                  , recvd
                  , priority);
         total_recvd += recvd;

         //Check if we have received all the buffer
         if(total_recvd == BufferSize){
            break;
         }
      }
      
      //The buffer will contain a copy of the original database 
      //so let's interpret the buffer with managed_external_buffer
      managed_external_buffer db_destiny(open_only, &buffer_destiny[0], BufferSize);

      //Let's find the map
      std::pair<MyMap *, std::size_t> ret = db_destiny.find<MyMap>("MyMap");
      MyMap *map2 = ret.first;

      //Check if we have found it
      if(!map2){
         return false;
      }

      //Check if it is a single variable (not an array)
      if(ret.second != 1){
         return false;
      }

      //Now let's compare size
      if(map1->size() != map2->size()){
         return false;
      }

      //Now let's compare all db values
      for(std::size_t i = 0, num_elements = map1->size(); i < num_elements; ++i){
         if((*map1)[i] != (*map2)[i]){
            return false;
         }
      }
      
      //Destroy the maps in both databases
      db_origin.destroy_ptr(map1);
      db_destiny.destroy_ptr(map2);
   }
   message_queue::remove(test::get_process_id_name());
   return true;
}

