
Customizing Boost.Interprocess

Writing a new shared memory allocation algorithm
Building custom STL compatible allocators for Boost.Interprocess
Building custom indexes

If the default algorithm does not satisfy user requirements, it's easy to provide different algorithms, such as bitmapping or more advanced segregated lists. The class implementing the algorithm must be compatible with shared memory, so it can't have any virtual functions or virtual inheritance, nor any direct or indirect base class with virtual functions or virtual inheritance.

This is the interface to be implemented:

class my_algorithm
{
   public:

   //!The mutex type to be used by the rest of Interprocess framework
   typedef implementation_defined   mutex_family;

   //!The pointer type to be used by the rest of Interprocess framework
   typedef implementation_defined   void_pointer;

   //!Constructor. "size" is the total size of the managed memory segment,
   //!"extra_hdr_bytes" indicates the extra bytes after the sizeof(my_algorithm)
   //!that the allocator should not use at all.
   my_algorithm (std::size_t size, std::size_t extra_hdr_bytes);

   //!Obtains the minimum size needed by the algorithm
   static std::size_t get_min_size (std::size_t extra_hdr_bytes);

   //!Allocates bytes, returns 0 if there is no more memory
   void* allocate (std::size_t nbytes);

   //!Deallocates previously allocated bytes
   void  deallocate (void *adr);

   //!Returns the size of the memory segment
   std::size_t get_size()  const;

   //!Increases the managed memory by extra_size bytes
   void grow(std::size_t extra_size);
   /*...*/
};

Let's see the public typedefs to define:

typedef /* . . . */ void_pointer;
typedef /* . . . */ mutex_family;

The void_pointer typedef specifies the pointer type to be used in the Boost.Interprocess framework that uses the algorithm. For example, if we define

typedef void * void_pointer;

all Boost.Interprocess framework classes using this algorithm will use raw pointers as members. But if we define:

typedef offset_ptr<void> void_pointer;

then all Boost.Interprocess framework classes will use relative pointers.

The mutex_family is a structure containing typedefs for the different interprocess mutex types to be used by the Boost.Interprocess framework. For example, this definition:

struct mutex_family
{
   typedef boost::interprocess::interprocess_mutex             mutex_type;
   typedef boost::interprocess::interprocess_recursive_mutex   recursive_mutex_type;
};

defines all mutex types using boost::interprocess mutex types. The user can specify the desired mutex family:

typedef mutex_family mutex_family;
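
For example, a user could plug in a family that performs no locking at all when the segment is known to be accessed from a single thread. The sketch below builds such a family from boost::interprocess::null_mutex; the name my_null_mutex_family is just illustrative:

#include <boost/interprocess/sync/null_mutex.hpp>

//A mutex family that does no real locking. Plugging it into an
//algorithm removes all synchronization overhead, which is only safe
//if a single thread/process ever touches the segment.
struct my_null_mutex_family
{
   typedef boost::interprocess::null_mutex   mutex_type;
   typedef boost::interprocess::null_mutex   recursive_mutex_type;
};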

The new algorithm (let's call it my_algorithm) must implement all the functions that the boost::interprocess::rbtree_best_fit class offers:

  • my_algorithm's constructor must take 2 arguments:
    • size indicates the total size of the managed memory segment, and the my_algorithm object will always be constructed at offset 0 of the memory segment.
    • The extra_hdr_bytes parameter indicates the number of bytes after the offset sizeof(my_algorithm) that my_algorithm can't use at all. These extra bytes will be used to store additional data that should not be overwritten. So, my_algorithm will be placed at address XXX of the memory segment, and will manage the (XXX + sizeof(my_algorithm) + extra_hdr_bytes, XXX + size) range of the segment.
  • The get_min_size() function should return the minimum space the algorithm needs to be valid with the passed extra_hdr_bytes parameter. This function will be used to check if the memory segment is big enough to place the algorithm there.
  • The allocate() function must return 0 if there is no more available memory. The memory returned by my_algorithm must be aligned to the most restrictive memory alignment of the system. This function should be executed with the synchronization capabilities offered by mutex_family::mutex_type. That means that if we define typedef mutex_family mutex_family;, then this function should offer the same synchronization as if it were surrounded by an interprocess_mutex lock/unlock. Normally, this is implemented using a member of type mutex_family::mutex_type, but it could also be done using atomic instructions or lock-free algorithms.
  • The deallocate() function must make the returned buffer available for new allocations. This function should offer the same synchronization as allocate().
  • The get_size() function will return the size parameter passed to the constructor, so my_algorithm should store the size internally.
  • The grow() function will expand the memory managed by my_algorithm by extra_size bytes, so the get_size() function should return the updated size, and the new managed memory range will be (if the address where the algorithm is constructed is XXX): (XXX + sizeof(my_algorithm) + extra_hdr_bytes, XXX + old_size + extra_size). This function should offer the same synchronization as allocate().
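
To make these requirements concrete, here is a deliberately naive sketch of such an algorithm: a "bump" allocator that hands out memory sequentially and never reuses it (its deallocate() is a no-op). The fixed alignment constant and the bookkeeping are assumptions made for brevity; a real algorithm would track and coalesce free blocks:

#include <boost/interprocess/offset_ptr.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/interprocess_recursive_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <cstddef>

class my_algorithm
{
   public:
   //Process-shared mutexes, as in the mutex_family shown earlier
   struct mutex_family
   {
      typedef boost::interprocess::interprocess_mutex             mutex_type;
      typedef boost::interprocess::interprocess_recursive_mutex   recursive_mutex_type;
   };
   //Relative pointers, so the segment can be mapped at different addresses
   typedef boost::interprocess::offset_ptr<void>   void_pointer;

   my_algorithm(std::size_t size, std::size_t extra_hdr_bytes)
      :  m_size(size)
      ,  m_next_free(priv_min_size(extra_hdr_bytes))  //offset of the first usable byte
   {}

   static std::size_t get_min_size(std::size_t extra_hdr_bytes)
   {  return priv_min_size(extra_hdr_bytes);  }

   void* allocate(std::size_t nbytes)
   {
      boost::interprocess::scoped_lock<mutex_family::mutex_type> lock(m_mutex);
      std::size_t offset = priv_align_up(m_next_free);
      if(offset + nbytes > m_size)
         return 0;                              //no more memory
      m_next_free = offset + nbytes;
      return reinterpret_cast<char*>(this) + offset;
   }

   void deallocate(void*)
   {  /*memory is never reused in this toy algorithm*/  }

   std::size_t get_size() const
   {  return m_size;  }

   void grow(std::size_t extra_size)
   {
      boost::interprocess::scoped_lock<mutex_family::mutex_type> lock(m_mutex);
      m_size += extra_size;
   }

   private:
   //Assumed to be the most restrictive alignment of the platform
   static const std::size_t Alignment = 16;

   static std::size_t priv_align_up(std::size_t n)
   {  return (n + Alignment - 1) & ~(Alignment - 1);  }

   static std::size_t priv_min_size(std::size_t extra_hdr_bytes)
   {  return priv_align_up(sizeof(my_algorithm) + extra_hdr_bytes);  }

   mutex_family::mutex_type   m_mutex;       //protects allocate()/grow()
   std::size_t                m_size;        //total size of the managed segment
   std::size_t                m_next_free;   //offset of the next free byte
};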

That's it. Now we can create new managed shared memory that uses our new algorithm:

//Managed memory segment to allocate named (c-string) objects
//using a user-defined memory allocation algorithm
typedef basic_managed_shared_memory
                         <char
                         ,my_algorithm
                         ,flat_map_index>
   my_managed_shared_memory;
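
Assuming my_algorithm is complete, the new typedef is used just like managed_shared_memory; the segment name below is merely illustrative:

//Create a 64 KiB managed segment driven by my_algorithm
my_managed_shared_memory segment(create_only, "MyCustomAlgoSegment", 65536);

//Named construction works as usual, now served by my_algorithm
segment.construct<int>("My_Int")(42);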

If the provided STL-like allocators don't satisfy user needs, the user can implement another STL compatible allocator using the raw memory allocation and named object construction functions. This way, the user can implement more suitable allocation schemes on top of the basic shared memory allocation scheme, just as more complex allocators are built on top of new/delete.

When using a managed memory segment, the get_segment_manager() function returns a pointer to the segment manager. With this pointer, the raw memory allocation and named object construction functions can be called directly:

//Create the managed shared memory and initialize resources
managed_shared_memory segment
   (create_only
   ,"/MySharedMemory"   //segment name
   ,65536);             //segment size in bytes

//Obtain the segment manager
managed_shared_memory::segment_manager *segment_mngr
   = segment.get_segment_manager();

//With the segment manager, now we have access to all allocation functions
segment_mngr->deallocate(segment_mngr->allocate(32));
segment_mngr->construct<int>("My_Int")[32](0);
segment_mngr->destroy<int>("My_Int");

//Initialize the custom, managed memory segment compatible
//allocator with the segment manager.
//
//MySTLAllocator uses segment_mngr->xxx functions to
//implement its allocation scheme
MySTLAllocator<int> stl_alloc(segment_mngr);

//Alias a new vector type that uses the custom STL compatible allocator
typedef std::vector<int, MySTLAllocator<int> > MyVect;

//Construct the vector in shared memory with the allocator as constructor parameter
segment.construct<MyVect>("MyVect_instance")(stl_alloc);

The user can create new STL compatible allocators that use the segment manager to access all memory management/object construction functions. All Boost.Interprocess' STL compatible allocators are based on this approach. Remember that, to be compatible with managed memory segments, allocators should define their pointer typedef in the same pointer family as the segment_manager::void_pointer typedef. This means that if segment_manager::void_pointer is offset_ptr<void>, MySTLAllocator<int> should define pointer as offset_ptr<int>. The reason for this is that allocators are members of containers, and if we want to put the container in a managed memory segment, the allocator must be ready for that.
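
As an illustration, a minimal sketch of the hypothetical MySTLAllocator used above could look like the following. It forwards allocation to the segment manager and defines its pointer typedefs in the offset_ptr family, as required for containers placed in the managed segment. A production-quality allocator would add the remaining allocator members (construct, destroy, max_size...); Boost.Interprocess' own allocator<> is a complete reference:

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/offset_ptr.hpp>
#include <cstddef>

template <class T>
class MySTLAllocator
{
   public:
   typedef boost::interprocess::managed_shared_memory::segment_manager
                                                       segment_manager_t;
   typedef T                                           value_type;
   typedef boost::interprocess::offset_ptr<T>          pointer;
   typedef boost::interprocess::offset_ptr<const T>    const_pointer;
   typedef std::size_t                                 size_type;
   typedef std::ptrdiff_t                              difference_type;

   template <class U> struct rebind
   {  typedef MySTLAllocator<U> other;  };

   MySTLAllocator(segment_manager_t *mngr)
      :  mp_mngr(mngr)
   {}

   template <class U>
   MySTLAllocator(const MySTLAllocator<U> &other)
      :  mp_mngr(other.get_segment_manager())
   {}

   pointer allocate(size_type n)
   {  return pointer(static_cast<T*>(mp_mngr->allocate(n*sizeof(T))));  }

   void deallocate(pointer p, size_type)
   {  mp_mngr->deallocate(p.get());  }

   segment_manager_t *get_segment_manager() const
   {  return mp_mngr.get();  }

   private:
   //Stored as offset_ptr so that the allocator itself (and the container
   //holding it) can live inside the managed memory segment
   boost::interprocess::offset_ptr<segment_manager_t>  mp_mngr;
};

template <class T, class U>
bool operator==(const MySTLAllocator<T> &a, const MySTLAllocator<U> &b)
{  return a.get_segment_manager() == b.get_segment_manager();  }

template <class T, class U>
bool operator!=(const MySTLAllocator<T> &a, const MySTLAllocator<U> &b)
{  return !(a == b);  }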

The managed memory segment uses a name/object index to speed up object searching and creation. Default specializations of managed memory segments (managed_shared_memory, for example) use boost::interprocess::flat_map as the index.

However, the index type can be chosen via a template parameter, so the user can define their own index type if needed. To construct a new index type, the user must create a class following these guidelines:

  • The interface of the index must follow the common public interface of std::map and std::tr1::unordered_map including public typedefs. The value_type typedef can be of type:
std::pair<key_type, mapped_type>

or

std::pair<const key_type, mapped_type>

so that ordered arrays or deques can be used as index types. Some known classes following this basic interface are boost::unordered_map, boost::interprocess::flat_map and boost::interprocess::map.

  • The class must be a class template taking only a traits struct of this type:
struct index_traits
{
   typedef /*...*/   key_type;
   typedef /*...*/   mapped_type;
   typedef /*...*/   segment_manager;
};
template <class IndexTraits>
class my_index_type;

The key_type typedef of the passed index_traits will be a specialization of the following class:

//!The key of the named allocation information index. Stores a pointer to
//!a null-terminated string and the length of the string to speed up sorting
template<...>
struct index_key
{
   typedef /*...*/                              char_type;
   typedef /*...*/                              const_char_ptr_t;

   //Pointer to the object's name (null terminated)
   const_char_ptr_t                             mp_str;

   //Length of the name buffer (null NOT included)
   std::size_t                                  m_len;

   //!Constructor of the key
   index_key (const char_type *name, std::size_t length);

   //!Less than function for index ordering
   bool operator < (const index_key & right) const;

   //!Equal to function for index ordering
   bool operator == (const index_key & right) const;
};

The mapped_type is not directly modified by the customized index but it is needed to define the index type. The segment_manager will be the type of the segment manager that will manage the index. segment_manager will define interesting internal types like void_pointer or mutex_family.

  • The constructor of the customized index type must take a pointer to segment_manager as constructor argument:
constructor(segment_manager *segment_mngr);
  • The index must provide a memory reservation function that optimizes the index if the user knows the number of elements to be inserted in the index:
void reserve(std::size_t n);

For example, the index type flat_map_index based on boost::interprocess::flat_map is defined as:

namespace boost { namespace interprocess {

#ifndef BOOST_INTERPROCESS_DOXYGEN_INVOKED

//!Helper class to define typedefs from IndexTraits
template <class MapConfig>
struct flat_map_index_aux
{
   typedef typename MapConfig::key_type            key_type;
   typedef typename MapConfig::mapped_type         mapped_type;
   typedef typename MapConfig::
      segment_manager_base                   segment_manager_base;
   typedef std::less<key_type>                     key_less;
   typedef std::pair<key_type, mapped_type>        value_type;
   typedef allocator<value_type
                    ,segment_manager_base>   allocator_type;
   typedef flat_map<key_type,  mapped_type,
                    key_less, allocator_type>      index_t;
};

#endif   //#ifndef BOOST_INTERPROCESS_DOXYGEN_INVOKED

//!Index type based in flat_map. Just derives from flat_map and
//!defines the interface needed by managed memory segments.
template <class MapConfig>
class flat_map_index
   //Derive class from flat_map specialization
   : public flat_map_index_aux<MapConfig>::index_t
{
   #if !defined(BOOST_INTERPROCESS_DOXYGEN_INVOKED)
   typedef flat_map_index_aux<MapConfig>  index_aux;
   typedef typename index_aux::index_t    base_type;
   typedef typename index_aux::
      segment_manager_base          segment_manager_base;
   #endif   //#ifndef BOOST_INTERPROCESS_DOXYGEN_INVOKED

   public:
   //!Constructor. Takes a pointer to the segment manager. Can throw
   flat_map_index(segment_manager_base *segment_mngr)
      : base_type(typename index_aux::key_less(),
                  typename index_aux::allocator_type(segment_mngr))
   {}

   //!This reserves memory to optimize the insertion of n elements in the index
   void reserve(typename segment_manager_base::size_type n)
   {  base_type::reserve(n);  }

   //!This frees all unnecessary memory
   void shrink_to_fit()
   {  base_type::shrink_to_fit();   }
};

}}   //namespace boost { namespace interprocess

If the user is defining a node container based index (a container whose iterators are not invalidated when inserting or erasing other elements), Boost.Interprocess can optimize named object destruction when destroying via pointer. Boost.Interprocess can store an iterator next to the object and, instead of using the name of the object to erase the index entry, it uses the iterator, which is a faster operation. So if you are creating a new node container based index (for example, a tree), you should define a specialization of boost::interprocess::is_node_index<...> defined in <boost/interprocess/detail/utilities.hpp>:

//!Trait classes to detect if an index is a node
//!index. This allows more efficient operations
//!when deallocating named objects.
template<class MapConfig>
struct is_node_index
   <my_index<MapConfig> >
{
   static const bool value = true;
};

Boost.Interprocess also defines other index types:

  • boost::interprocess::map_index uses boost::interprocess::map as the index type.
  • boost::interprocess::null_index is a dummy index type for users who only need anonymous allocations and want to save some space and class instantiations.

Defining a new managed memory segment that uses the new index is easy. For example, a new managed shared memory that uses the new index:

//!Defines a managed shared memory with c-strings as
//!keys, the red-black tree best fit algorithm (with process-shared mutexes
//!and offset_ptr pointers) as raw shared memory management algorithm
//!and a custom index
typedef
   basic_managed_shared_memory <
                              char,
                              rbtree_best_fit<mutex_family>,
                              my_index_type
                             >
   my_managed_shared_memory;
