This version of Boost is under active development (develop branch). The current version is 1.90.0.
If a basic raw-byte allocation is needed from a managed memory segment (for
example, a managed shared memory) to implement top-level interprocess communications,
this class offers allocate and deallocate functions. The allocation function comes
in throwing and non-throwing versions. The throwing version throws boost::interprocess::bad_alloc
(which derives from std::bad_alloc) if there is no more memory, whereas
the non-throwing version returns a null pointer.
#include <boost/interprocess/managed_shared_memory.hpp>

int main()
{
   using namespace boost::interprocess;

   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Managed memory segment that allocates portions of a shared memory
   //segment with the default management algorithm
   managed_shared_memory managed_shm(create_only, "MyName", 65536);

   //Allocate 100 bytes of memory from segment, throwing version
   void *ptr = managed_shm.allocate(100);

   //Deallocate it
   managed_shm.deallocate(ptr);

   //Non throwing version
   ptr = managed_shm.allocate(100, std::nothrow);

   //Deallocate it
   managed_shm.deallocate(ptr);
   return 0;
}
All Boost.Interprocess managed memory segment classes construct, in their respective memory segments (shared memory, memory mapped files, heap memory...), some structures to implement the memory management algorithm: named allocations, synchronization objects... All these objects are encapsulated in a single object called the segment manager. A managed memory mapped file and a managed shared memory use the same segment manager to implement all managed memory segment features, because a segment manager is a class that manages a fixed size memory buffer. Since both shared memory and memory mapped files are accessed through a mapped region, and a mapped region is a fixed size memory buffer, a single segment manager class can manage several managed memory segment types.
Some Boost.Interprocess classes require
a pointer to the segment manager in their constructors. The segment manager
can be obtained from any managed memory segment using the get_segment_manager
member function:
managed_shared_memory::segment_manager *seg_manager = managed_shm.get_segment_manager();
The class also offers conversions between absolute addresses that belong to a managed memory segment and a handle that can be passed using any interprocess mechanism. That handle can be transformed again to an absolute address using a managed memory segment that also contains that object. Handles can be used as keys between processes to identify allocated portions of a managed memory segment or objects constructed in the managed segment.
//Process A obtains the offset of the address
managed_shared_memory::handle_t handle =
   segment.get_handle_from_address(processA_address);

//Process A sends this handle using any mechanism to process B

//Process B obtains the handle and transforms it again to an address
managed_shared_memory::handle_t handle = ...
void * processB_address = segment.get_address_from_handle(handle);
Sometimes the programmer must execute some code with the guarantee that no other process or thread will create or destroy any named, unique or anonymous object while the functor is executing. For example, a user might want to create several named objects and initialize them, but those objects should become available to the rest of the processes all at once.
To achieve this, the programmer can use the atomic_func() function offered by managed classes:
//This function object will create several named objects
create_several_objects_func func(/**/);

//While executing the function, no other process will be
//able to create or destroy objects
managed_memory.atomic_func(func);
Note that atomic_func does
not prevent other processes from allocating raw memory or executing member
functions for already constructed objects (e.g.: another process might be
pushing elements into a vector placed in the segment). The atomic function
only blocks named, unique and anonymous creation, search and destruction
(concurrent calls to construct<>, find<>, find_or_construct<>, destroy<>...) from other processes.
These functions are available to obtain information about the managed memory segments:
Obtain the size of the memory segment:
managed_shm.get_size();
Obtain the number of free bytes of the segment:
managed_shm.get_free_memory();
Clear to zero the free memory:
managed_shm.zero_free_memory();
Returns true if all the memory has been deallocated, false otherwise:
managed_shm.all_memory_deallocated();
Test internal structures of the managed segment. Returns true if no errors are detected:
managed_shm.check_sanity();
Obtain the number of named and unique objects allocated in the segment:
managed_shm.get_num_named_objects();
managed_shm.get_num_unique_objects();
As seen, managed memory segments, when creating named objects, store the name/object association in an index. The index is a map with the name of the object as a key and a pointer to the object as the mapped type. The default specializations, managed_shared_memory and wmanaged_shared_memory, use flat_map_index as the index type.
Each index has its own characteristics: search time, insertion time, deletion time, memory use, and memory allocation patterns. Boost.Interprocess currently offers several index types.
As an example, if we want to define a new managed shared memory class using boost::interprocess::map as the index type, we just need to specify boost::interprocess::map_index as a template parameter:
//This managed memory segment can allocate objects with:
// -> a wchar_t string as key
// -> boost::interprocess::rbtree_best_fit with process-shared mutexes
//    as memory allocation algorithm.
// -> boost::interprocess::map<...> as the index to store name/object mappings
//
typedef boost::interprocess::basic_managed_shared_memory
   < wchar_t
   , boost::interprocess::rbtree_best_fit<boost::interprocess::mutex_family, offset_ptr<void> >
   , boost::interprocess::map_index
   > my_managed_shared_memory;
If these indexes are not enough for you, you can define your own index type. To learn how to do this, see the Building custom indexes section.
Once a managed segment is created, it can't be grown while processes have it mapped. This limitation is not easy to overcome: every process attached to the managed segment would need to be stopped and notified of the new size, and each would then need to remap the managed segment and continue working. This is nearly impossible to achieve with a user-level library without help from the operating system kernel.
On the other hand, Boost.Interprocess offers off-line segment growing: the segment can be grown (or shrunk) if no process has the managed segment mapped. If the application can find a moment when no process is attached, it can grow or shrink-to-fit the managed segment.
Here we have an example showing how to grow and shrink to fit managed_shared_memory:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/managed_mapped_file.hpp>
#include <cassert>

class MyClass
{
   //...
};

int main()
{
   using namespace boost::interprocess;

   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   {
      //Create a managed shared memory
      managed_shared_memory shm(create_only, "MyName", 1000);

      //Check size
      assert(shm.get_size() == 1000);

      //Construct a named object
      MyClass *myclass = shm.construct<MyClass>("MyClass")();

      //The managed segment is unmapped here
   }
   {
      //Now that the segment is not mapped grow it adding extra 500 bytes
      managed_shared_memory::grow("MyName", 500);

      //Map it again
      managed_shared_memory shm(open_only, "MyName");

      //Check size
      assert(shm.get_size() == 1500);

      //Check "MyClass" is still there
      MyClass *myclass = shm.find<MyClass>("MyClass").first;
      assert(myclass != 0);

      //The managed segment is unmapped here
   }
   {
      //Now minimize the size of the segment
      managed_shared_memory::shrink_to_fit("MyName");

      //Map it again
      managed_shared_memory shm(open_only, "MyName");

      //Check size
      assert(shm.get_size() < 1000);

      //Check "MyClass" is still there
      MyClass *myclass = shm.find<MyClass>("MyClass").first;
      assert(myclass != 0);

      //The managed segment is unmapped here
   }
   return 0;
}
managed_mapped_file
also offers similar grow and shrink_to_fit functions for the managed file.
Please remember that no process should be modifying
the file/shared memory while the grow/shrink operation is performed;
otherwise, the managed segment will be corrupted.
As mentioned, the managed segment stores the information about named and unique objects in two indexes. Depending on the type of those indexes, the index must reallocate some auxiliary structures when new named or unique allocations are made. For some indexes, if the user knows how many named or unique objects are going to be created it's possible to preallocate some structures to obtain much better performance. (If the index is an ordered vector it can preallocate memory to avoid reallocations. If the index is a hash structure it can preallocate the bucket array).
The following functions reserve memory to make the subsequent allocation
of named or unique objects more efficient. These functions are only useful
for pseudo-intrusive or non-node indexes (like flat_map_index,
iunordered_set_index). They
have no effect with intrusive-set or node-based indexes (iset_index,
map_index):
managed_shm.reserve_named_objects(1000);
managed_shm.reserve_unique_objects(1000);
Managed memory segments also offer the possibility to iterate through constructed named and unique objects for debugging purposes. Caution: this iteration is not thread-safe so the user should make sure that no other thread is manipulating named or unique indexes (creating, erasing, reserving...) in the segment. Other operations not involving indexes can be concurrently executed (raw memory allocation/deallocations, for example).
The following functions return constant iterators to the range of named and unique objects stored in the managed segment. Depending on the index type, iterators might be invalidated after a named or unique creation/erasure/reserve operation:
typedef managed_shared_memory::const_named_iterator const_named_it;
const_named_it named_beg = managed_shm.named_begin();
const_named_it named_end = managed_shm.named_end();

typedef managed_shared_memory::const_unique_iterator const_unique_it;
const_unique_it unique_beg = managed_shm.unique_begin();
const_unique_it unique_end = managed_shm.unique_end();

for(; named_beg != named_end; ++named_beg){
   //A pointer to the name of the named object
   const managed_shared_memory::char_type *name = named_beg->name();
   //The length of the name
   std::size_t name_len = named_beg->name_length();
   //A constant void pointer to the named object
   const void *value = named_beg->value();
}

for(; unique_beg != unique_end; ++unique_beg){
   //The typeid(T).name() of the unique object
   const char *typeid_name = unique_beg->name();
   //The length of the name
   std::size_t name_len = unique_beg->name_length();
   //A constant void pointer to the unique object
   const void *value = unique_beg->value();
}
Sometimes it's interesting to be able to allocate aligned fragments of memory because of some hardware or software restrictions. Sometimes, having aligned memory is a feature that can be used to improve several memory algorithms.
This allocation is similar to the previously shown raw memory allocation, but it takes an additional parameter specifying the alignment. There is one restriction: the alignment must be a power of two.
If a user wants to allocate many aligned blocks (for example, aligned to 128 bytes), the size that minimizes memory waste is a value slightly below a multiple of that alignment (for example, 2*128 - some bytes). The reason is that every memory allocation usually needs some additional metadata in the first bytes of the allocated buffer. If the user knows the value of "some bytes" and the first bytes of a free block of memory are used to fulfill the aligned allocation, the rest of the block can be left aligned and ready for the next aligned allocation. Note that requesting a size that is a multiple of the alignment is not optimal, because the metadata needed by the next allocation leaves the next block of memory unaligned.
Once the programmer knows the payload size of every memory allocation, a request size can be chosen that maximizes both the usable size of the request and the chances of future aligned allocations. This payload size is stored in the PayloadPerAllocation constant of managed memory segments.
Here is a small example showing how aligned allocation is used:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>

int main()
{
   using namespace boost::interprocess;

   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Managed memory segment that allocates portions of a shared memory
   //segment with the default management algorithm
   managed_shared_memory managed_shm(create_only, "MyName", 65536);

   const std::size_t Alignment = 128;

   //Allocate 100 bytes aligned to Alignment from segment, throwing version
   void *ptr = managed_shm.allocate_aligned(100, Alignment);

   //Check alignment
   assert(std::size_t(static_cast<char*>(ptr)-static_cast<char*>(0)) % Alignment == 0);

   //Deallocate it
   managed_shm.deallocate(ptr);

   //Non throwing version
   ptr = managed_shm.allocate_aligned(100, Alignment, std::nothrow);

   //Check alignment
   assert(std::size_t(static_cast<char*>(ptr)-static_cast<char*>(0)) % Alignment == 0);

   //Deallocate it
   managed_shm.deallocate(ptr);

   //If we want to efficiently allocate aligned blocks of memory
   //use managed_shared_memory::PayloadPerAllocation value
   assert(Alignment > managed_shared_memory::PayloadPerAllocation);

   //This allocation will maximize the size of the aligned memory
   //and will increase the possibility of finding more aligned memory
   ptr = managed_shm.allocate_aligned
      (3u*Alignment - managed_shared_memory::PayloadPerAllocation, Alignment);

   //Check alignment
   assert(std::size_t(static_cast<char*>(ptr)-static_cast<char*>(0)) % Alignment == 0);

   //Deallocate it
   managed_shm.deallocate(ptr);
   return 0;
}
Caution: this feature is experimental; its interface and ABI are unstable.
If an application needs to allocate many memory buffers but must
deallocate them independently, the application is normally forced to
call allocate() in a loop.
Managed memory segments offer an alternative function that packs several
allocations into a single call, returning memory buffers that can still be
deallocated independently.
This allocation method is much faster than calling allocate() in a loop. The downside is that the segment
must provide a contiguous memory segment big enough to hold all the allocations.
Managed memory segments offer this functionality through the allocate_many() functions. There are two families of allocate_many functions:
//!Allocates n_elements of elem_bytes bytes.
//!Throws bad_alloc on failure. chain.size() is not increased on failure.
void allocate_many(size_type elem_bytes, size_type n_elements,
                   multiallocation_chain &chain);

//!Allocates n_elements, each one of element_lengths[i]*sizeof_element bytes.
//!Throws bad_alloc on failure. chain.size() is not increased on failure.
void allocate_many(const size_type *element_lengths, size_type n_elements,
                   size_type sizeof_element, multiallocation_chain &chain);

//!Allocates n_elements of elem_bytes bytes.
//!Non-throwing version. chain.size() is not increased on failure.
void allocate_many(std::nothrow_t, size_type elem_bytes, size_type n_elements,
                   multiallocation_chain &chain);

//!Allocates n_elements, each one of
//!element_lengths[i]*sizeof_element bytes.
//!Non-throwing version. chain.size() is not increased on failure.
void allocate_many(std::nothrow_t, const size_type *elem_sizes,
                   size_type n_elements, size_type sizeof_element,
                   multiallocation_chain &chain);

//!Deallocates all elements contained in chain.
//!Never throws.
void deallocate_many(multiallocation_chain &chain);
Here is a small example showing all this functionality:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/move/utility_core.hpp> //boost::move
#include <cassert> //assert
#include <cstring> //std::memset
#include <new>     //std::nothrow
#include <vector>  //std::vector

int main()
{
   using namespace boost::interprocess;
   typedef managed_shared_memory::multiallocation_chain multiallocation_chain;

   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   managed_shared_memory managed_shm(create_only, "MyName", 65536);

   //Allocate 16 elements of 100 bytes in a single call. Non-throwing version.
   multiallocation_chain chain;
   managed_shm.allocate_many(std::nothrow, 100, 16, chain);

   //Check if the memory allocation was successful
   if(chain.empty()) return 1;

   //Allocated buffers
   std::vector<void*> allocated_buffers;

   //Initialize our data: pop each buffer from the chain
   //before using it, storing it for later deallocation
   while(!chain.empty()){
      void *buf = chain.pop_front();
      allocated_buffers.push_back(buf);
      std::memset(buf, 0, 100);
   }

   //Now deallocate
   while(!allocated_buffers.empty()){
      managed_shm.deallocate(allocated_buffers.back());
      allocated_buffers.pop_back();
   }

   //Allocate 10 buffers of different sizes in a single call. Throwing version
   managed_shared_memory::size_type sizes[10];
   for(std::size_t i = 0; i < 10; ++i)
      sizes[i] = i*3;

   managed_shm.allocate_many(sizes, 10, 1, chain);
   managed_shm.deallocate_many(chain);
   return 0;
}
Allocating N buffers of the same size improves the performance of pools and node containers (for example, STL-like lists): when inserting a range of forward iterators into an STL-like list, the insertion function can detect the number of needed elements and allocate them in a single call. The nodes can still be deallocated one by one.
Allocating N buffers of different sizes can be used to speed up allocation
in cases where several objects must always be allocated at the same time
but deallocated at different times. For example, a class might perform several
initial allocations (some header data for a network packet, for example)
in its constructor but also allocations of buffers that might be reallocated
in the future (the data to be sent through the network). Instead of allocating
all the data independently, the constructor might use allocate_many() to speed up the initialization, while
still being able to deallocate and expand the memory of the variable-size element.
In general, allocate_many
is useful with large values of N. Overuse of allocate_many
can increase the effective memory usage, because it can't reuse existing
non-contiguous memory fragments that might be available for some of the elements.
When programming some data structures such as vectors, memory reallocation becomes an important tool to improve performance. Managed memory segments offer an advanced reallocation function supporting forward expansion, backwards expansion and in-place shrinking of an existing buffer.
The expansion can be combined with the allocation of a new buffer if the expansion fails obtaining a function with "expand, if fails allocate a new buffer" semantics.
Apart from these features, the function always returns the real size of the
allocated buffer, because many times, due to alignment issues, the allocated
buffer is a bit bigger than the requested size. Thus, the programmer can maximize
memory usage using allocation_command.
Here is the declaration of the function:
enum boost::interprocess::allocation_type
{
   //Bitwise OR (|) combinable values
   boost::interprocess::allocate_new        = ...,
   boost::interprocess::expand_fwd          = ...,
   boost::interprocess::expand_bwd          = ...,
   boost::interprocess::shrink_in_place     = ...,
   boost::interprocess::nothrow_allocation  = ...
};

template<class T>
std::pair<T *, bool> allocation_command( boost::interprocess::allocation_type command
                                       , std::size_t limit_size
                                       , size_type &prefer_in_recvd_out_size
                                       , T *&reuse_ptr);
Preconditions for the function:

* If the parameter command contains the value boost::interprocess::shrink_in_place, it can't contain any of these values: boost::interprocess::expand_fwd, boost::interprocess::expand_bwd.
* If the parameter command contains boost::interprocess::expand_fwd or boost::interprocess::expand_bwd, the parameter reuse_ptr must be non-null and returned by a previous allocation function.
* If the parameter command contains the value boost::interprocess::shrink_in_place, the parameter limit_size must be equal or greater than the parameter preferred_size.
* If the parameter command contains any of these values: boost::interprocess::expand_fwd or boost::interprocess::expand_bwd, the parameter limit_size must be equal or less than the parameter preferred_size.
The effects of this function are:

* If the parameter command contains the value boost::interprocess::shrink_in_place, the function will try to reduce the size of the memory block referenced by pointer reuse_ptr to the value preferred_size moving only the end of the block. If it's not possible, it will try to reduce the size of the memory block as much as possible as long as this results in size(p) <= limit_size. Success is reported only if this results in preferred_size <= size(p) and size(p) <= limit_size.
* If the parameter command only contains the value boost::interprocess::expand_fwd (with optional additional boost::interprocess::nothrow_allocation), the allocator will try to increase the size of the memory block referenced by pointer reuse_ptr moving only the end of the block to the value preferred_size. If it's not possible, it will try to increase the size of the memory block as much as possible as long as this results in size(p) >= limit_size. Success is reported only if this results in limit_size <= size(p).
* If the parameter command only contains the value boost::interprocess::expand_bwd (with optional additional boost::interprocess::nothrow_allocation), the allocator will try to increase the size of the memory block referenced by pointer reuse_ptr only moving the start of the block to a returned new position new_ptr. If it's not possible, it will try to move the start of the block as much as possible as long as this results in size(new_ptr) >= limit_size. Success is reported only if this results in limit_size <= size(new_ptr).
* If the parameter command only contains the value boost::interprocess::allocate_new (with optional additional boost::interprocess::nothrow_allocation), the allocator will try to allocate memory for preferred_size objects. If it's not possible, it will try to allocate memory for at least limit_size objects.
* If the parameter command only contains a combination of boost::interprocess::expand_fwd and boost::interprocess::allocate_new (with optional additional boost::interprocess::nothrow_allocation), the allocator will try the forward expansion first. If this fails, it will try a new allocation.
* If the parameter command only contains a combination of boost::interprocess::expand_bwd and boost::interprocess::allocate_new (with optional additional boost::interprocess::nothrow_allocation), the allocator will try first to obtain preferred_size objects using both methods if necessary. If this fails, it will try to obtain limit_size objects using both methods if necessary.
* If the parameter command only contains a combination of boost::interprocess::expand_fwd and boost::interprocess::expand_bwd (with optional additional boost::interprocess::nothrow_allocation), the allocator will try forward expansion first. If this fails, it will try to obtain preferred_size objects using backwards expansion or a combination of forward and backwards expansion. If this fails, it will try to obtain limit_size objects using both methods if necessary.
* If the parameter command only contains a combination of boost::interprocess::allocate_new, boost::interprocess::expand_fwd and boost::interprocess::expand_bwd (with optional additional boost::interprocess::nothrow_allocation), the allocator will try forward expansion first. If this fails, it will try to obtain preferred_size objects using new allocation, backwards expansion or a combination of forward and backwards expansion. If this fails, it will try to obtain limit_size objects using the same methods.
* The allocator always writes the size of the allocated, expanded or shrunk memory block in received_size. On failure, the allocator writes in received_size a possibly successful limit_size parameter for a new call.
This function throws an exception if the following two conditions are met:

* The allocator is unable to allocate, expand or shrink the memory, or there is an error in preconditions.
* The parameter command does not contain boost::interprocess::nothrow_allocation.
This function returns:

* The address of the allocated memory, or the new address of the expanded memory, as the first member of the pair. If the parameter command contains boost::interprocess::nothrow_allocation, the first member will be 0 if the allocation/expansion fails or there is an error in preconditions.
* A boolean second member that reports whether the buffer was expanded (true) or newly allocated (false).
Notes:

* If the user chooses char as template argument, the returned buffer will be suitably aligned to hold any type.
* If the user chooses char as template argument and a backwards expansion is performed, although properly aligned, the returned buffer might not be suitable, because the distance between the new beginning and the old beginning might not be a multiple of the size of the type the user wants to construct, since due to internal restrictions the expansion can be slightly bigger than the requested bytes. When performing backwards expansion, if you have already constructed objects in the old buffer, make sure to specify the type correctly.
Here is a small example that shows the use of allocation_command:
#include <boost/interprocess/managed_shared_memory.hpp>
#include <cassert>

int main()
{
   using namespace boost::interprocess;

   //Remove shared memory on construction and destruction
   struct shm_remove
   {
      shm_remove() { shared_memory_object::remove("MyName"); }
      ~shm_remove(){ shared_memory_object::remove("MyName"); }
   } remover;

   //Managed memory segment that allocates portions of a shared memory
   //segment with the default management algorithm
   managed_shared_memory managed_shm(create_only, "MyName", 10000*sizeof(std::size_t));

   //Allocate at least 100 bytes, 1000 bytes if possible
   managed_shared_memory::size_type min_size = 100;
   managed_shared_memory::size_type first_received_size = 1000;
   std::size_t *hint = 0;
   std::size_t *ptr = managed_shm.allocation_command<std::size_t>
      (boost::interprocess::allocate_new, min_size, first_received_size, hint);

   //Received size must be bigger than min_size
   assert(first_received_size >= min_size);

   //Get free memory
   managed_shared_memory::size_type free_memory_after_allocation =
      managed_shm.get_free_memory();

   //Now write the data
   for(std::size_t i = 0; i < first_received_size; ++i) ptr[i] = i;

   //Now try to triple the size of the buffer. We won't accept an
   //expansion smaller than twice the original buffer.
   //This "should" be successful since no other class is allocating
   //memory from the segment
   min_size = first_received_size*2;
   managed_shared_memory::size_type expanded_size = first_received_size*3;
   std::size_t * ret = managed_shm.allocation_command
      (boost::interprocess::expand_fwd, min_size, expanded_size, ptr);

   //Check invariants
   assert(ptr != 0);
   assert(ret == ptr);
   assert(expanded_size >= first_received_size*2);

   //Get free memory and compare
   managed_shared_memory::size_type free_memory_after_expansion =
      managed_shm.get_free_memory();
   assert(free_memory_after_expansion < free_memory_after_allocation);

   //Write new values
   for(std::size_t i = first_received_size; i < expanded_size; ++i) ptr[i] = i;

   //Try to shrink approximately to min_size, but the new size
   //should be smaller than min_size*2.
   //This "should" be successful since no other class is allocating
   //memory from the segment
   managed_shared_memory::size_type shrunk_size = min_size;
   ret = managed_shm.allocation_command
      (boost::interprocess::shrink_in_place, min_size*2, shrunk_size, ptr);

   //Check invariants
   assert(ptr != 0);
   assert(ret == ptr);
   assert(shrunk_size <= min_size*2);
   assert(shrunk_size >= min_size);

   //Get free memory and compare
   managed_shared_memory::size_type free_memory_after_shrinking =
      managed_shm.get_free_memory();
   assert(free_memory_after_shrinking > free_memory_after_expansion);

   //Deallocate the buffer
   managed_shm.deallocate(ptr);
   return 0;
}
allocation_command is a very
powerful function that can lead to important performance gains. It's especially
useful when programming vector-like data structures, where the programmer
can minimize both the number of allocation requests and the memory waste.
The programmer can open a managed shared memory or mapped file using the
open_copy_on_write option.
This option is similar to open_only
but every change performed on this managed segment is kept private to the
process and those changes are not translated to the underlying device (shared
memory or file).
This copy-on-write approach can reduce memory consumption: when several processes
open the same file or shared memory with open_copy_on_write, the operating
system initially makes them share the same underlying physical memory
pages. No actual copying of the data occurs at the time of opening.
Opening a managed shared memory or managed mapped file with open_read_only maps the underlying device
in memory with read-only attributes. This
means that any attempt to write to that memory (including locking any mutex)
might result in a page-fault error (and thus, program termination) from
the OS.
Due to this, managed shared memory or managed mapped file operations are quite limited in this mode.
The find<> member function avoids using
internal locks and can be used to look for named and unique objects.
Here is an example that shows the use of these two modes:
#include <boost/interprocess/managed_mapped_file.hpp>
#include <fstream> //std::fstream
#include <iterator>//std::distance

int main()
{
   using namespace boost::interprocess;

   //Define file names
   const char *ManagedFile  = "MyManagedFile";
   const char *ManagedFile2 = "MyManagedFile2";

   //Try to erase any previous managed segment with the same name
   file_mapping::remove(ManagedFile);
   file_mapping::remove(ManagedFile2);
   remove_file_on_destroy destroyer1(ManagedFile);
   remove_file_on_destroy destroyer2(ManagedFile2);

   {
      //Create a named integer in a managed mapped file
      managed_mapped_file managed_file(create_only, ManagedFile, 65536);
      managed_file.construct<int>("MyInt")(0);

      //Now create a copy on write version
      managed_mapped_file managed_file_cow(open_copy_on_write, ManagedFile);

      //Erase the int and create a new one
      if(!managed_file_cow.destroy<int>("MyInt"))
         throw int(0);
      managed_file_cow.construct<int>("MyInt2")();

      //Check changes
      if(managed_file_cow.find<int>("MyInt").first ||
         !managed_file_cow.find<int>("MyInt2").first)
         throw int(0);

      //Check the original is intact
      if(!managed_file.find<int>("MyInt").first ||
         managed_file.find<int>("MyInt2").first)
         throw int(0);

      {
         //Dump the modified copy on write segment to a file
         std::fstream file(ManagedFile2, std::ios_base::out | std::ios_base::binary);
         if(!file)
            throw int(0);
         file.write(static_cast<const char *>(managed_file_cow.get_address()),
                    (std::streamsize)managed_file_cow.get_size());
      }

      //Now open the modified file and test changes
      managed_mapped_file managed_file_cow2(open_only, ManagedFile2);
      if(managed_file_cow2.find<int>("MyInt").first ||
         !managed_file_cow2.find<int>("MyInt2").first)
         throw int(0);
   }
   {
      //Now create a read-only version
      managed_mapped_file managed_file_ro(open_read_only, ManagedFile);

      //Check the original is intact
      if(!managed_file_ro.find<int>("MyInt").first ||
         managed_file_ro.find<int>("MyInt2").first)
         throw int(0);

      //Check the number of named objects using the iterators
      if(std::distance(managed_file_ro.named_begin(),  managed_file_ro.named_end())  != 1 ||
         std::distance(managed_file_ro.unique_begin(), managed_file_ro.unique_end()) != 0)
         throw int(0);
   }
   return 0;
}