Boost C++ Libraries



Synchronization

Tutorial
Mutex Concepts
Lock Options
Lock Guard
With Lock Guard
Lock Concepts
Lock Types
Other Lock Types - EXTENSION
Lock functions
Lock Factories - EXTENSION
Mutex Types
Condition Variables
One-time Initialization
Barriers -- EXTENSION
Latches -- EXPERIMENTAL
Executors and Schedulers -- EXPERIMENTAL
Futures

Handling mutexes in C++ is an excellent tutorial. You just need to replace std and ting with boost.

Mutex, Lock, Condition Variable Rationale adds rationale for the design decisions made for mutexes, locks and condition variables.

In addition to the C++11 standard locks, Boost.Thread provides other locks and some utilities that help the user to make their code thread-safe.

[Note] Note

This tutorial is an adaptation of the Concurrency chapter of Object-Oriented Programming in the BETA Programming Language and of Andrei Alexandrescu's paper "Multithreading and the C++ Type System" to the Boost library.

Consider, for example, modeling a bank account class that supports simultaneous deposits and withdrawals from multiple locations (arguably the "Hello, World" of multithreaded programming).

From here on, a component is a model of the Callable concept.

In C++11 (Boost), concurrent execution of a component is obtained by means of std::thread (boost::thread):

boost::thread thread1(S);

where S is a model of Callable. The meaning of this expression is that execution of S() will take place concurrently with the current thread of execution executing the expression.

The following example includes a bank account of a person (Joe) and two components, one corresponding to a bank agent depositing money in Joe's account, and one representing Joe. Joe will only be withdrawing money from the account:

class BankAccount;

BankAccount JoesAccount;

void bankAgent()
{
    for (int i =10; i>0; --i) {
        //...
        JoesAccount.Deposit(500);
        //...
    }
}

void Joe() {
    for (int i =10; i>0; --i) {
        //...
        int myPocket = JoesAccount.Withdraw(100);
        std::cout << myPocket << std::endl;
        //...
    }
}

int main() {
    //...
    boost::thread thread1(bankAgent); // start concurrent execution of bankAgent
    boost::thread thread2(Joe); // start concurrent execution of Joe
    thread1.join();
    thread2.join();
    return 0;
}

From time to time, the bankAgent will deposit $500 in JoesAccount. Joe will similarly withdraw $100 from his account. These two components are executed concurrently.

The above example works well as long as the components bankAgent and Joe don't access JoesAccount at the same time. There is, however, no guarantee that this will not happen. We may use a mutex to guarantee exclusive access to each bank account.

class BankAccount {
    boost::mutex mtx_;
    int balance_;
public:
    void Deposit(int amount) {
        mtx_.lock();
        balance_ += amount;
        mtx_.unlock();
    }
    void Withdraw(int amount) {
        mtx_.lock();
        balance_ -= amount;
        mtx_.unlock();
    }
    int GetBalance() {
        mtx_.lock();
        int b = balance_;
        mtx_.unlock();
        return b;
    }
};

The Deposit and Withdraw operations can no longer access balance_ simultaneously.

A mutex is a simple and basic mechanism for obtaining synchronization. In the above example it is relatively easy to be convinced that the synchronization works correctly (in the absence of exception). In a system with several concurrent objects and several shared objects, it may be difficult to describe synchronization by means of mutexes. Programs that make heavy use of mutexes may be difficult to read and write. Instead, we shall introduce a number of generic classes for handling more complicated forms of synchronization and communication.

With the RAII idiom we can simplify this a lot by using scoped locks. In the code below, guard's constructor locks the passed-in object mtx_, and guard's destructor unlocks mtx_.

class BankAccount {
    boost::mutex mtx_; // explicit mutex declaration 
    int balance_;
public:
    void Deposit(int amount) {
        boost::lock_guard<boost::mutex> guard(mtx_);
        balance_ += amount;
    }
    void Withdraw(int amount) {
        boost::lock_guard<boost::mutex> guard(mtx_);
        balance_ -= amount;
    }
    int GetBalance() {
        boost::lock_guard<boost::mutex> guard(mtx_);
        return balance_;
    }
};

The object-level locking idiom doesn't cover the entire richness of a threading model. For example, the model above is quite deadlock-prone when you try to coordinate multi-object transactions. Nonetheless, object-level locking is useful in many cases, and in combination with other mechanisms can provide a satisfactory solution to many threaded access problems in object-oriented programs.

The BankAccount class above uses internal locking. Basically, a class that uses internal locking guarantees that any concurrent calls to its public member functions don't corrupt an instance of that class. This is typically ensured by having each public member function acquire a lock on the object upon entry. This way, for any given object of that class, there can be only one member function call active at any moment, so the operations are nicely serialized.

This approach is reasonably easy to implement and has an attractive simplicity. Unfortunately, "simple" might sometimes morph into "simplistic."

Internal locking is insufficient for many real-world synchronization tasks. Imagine that you want to implement an ATM withdrawal transaction with the BankAccount class. The requirements are simple. The ATM transaction consists of two withdrawals-one for the actual money and one for the $2 commission. The two withdrawals must appear in strict sequence; that is, no other transaction can exist between them.

The obvious implementation is erratic:

void ATMWithdrawal(BankAccount& acct, int sum) {
    acct.Withdraw(sum);
    // preemption possible
    acct.Withdraw(2);
}

The problem is that between the two calls above, another thread can perform another operation on the account, thus breaking the second design requirement.

In an attempt to solve this problem, let's lock the account from the outside during the two operations:

void ATMWithdrawal(BankAccount& acct, int sum) {
    boost::lock_guard<boost::mutex> guard(acct.mtx_);
    acct.Withdraw(sum);
    acct.Withdraw(2);
}

Notice that the code above doesn't compile: the mtx_ field is private. We have two possibilities:

  • make mtx_ public, which seems odd
  • make BankAccount lockable by adding lock/unlock functions

We can add these functions explicitly:

class BankAccount {
    boost::mutex mtx_;
    int balance_;
public:
    void Deposit(int amount) {
        boost::lock_guard<boost::mutex> guard(mtx_);
        balance_ += amount;
    }
    void Withdraw(int amount) {
        boost::lock_guard<boost::mutex> guard(mtx_);
        balance_ -= amount;
    }
    void lock() {
        mtx_.lock();
    }
    void unlock() {
        mtx_.unlock();
    }
};

or inherit from a class that adds these lockable functions.

The basic_lockable_adapter class helps to define the BankAccount class as

class BankAccount
: public basic_lockable_adapter<mutex>
{
    int balance_;
public:
    void Deposit(int amount) {
        boost::lock_guard<BankAccount> guard(*this);
        balance_ += amount;
    }
    void Withdraw(int amount) {
        boost::lock_guard<BankAccount> guard(*this);
        balance_ -= amount;
    }
    int GetBalance() {
        boost::lock_guard<BankAccount> guard(*this);
        return balance_;
    }
};

and the code that didn't compile becomes

void ATMWithdrawal(BankAccount& acct, int sum) {
    boost::lock_guard<BankAccount> guard(acct);
    acct.Withdraw(sum);
    acct.Withdraw(2);
}

Notice that now acct is being locked by Withdraw after it has already been locked by guard. When running such code, one of two things happens.

  • Your mutex implementation might support the so-called recursive mutex semantics. This means that the same thread can lock the same mutex several times successfully. In this case, the implementation works but has a performance overhead due to unnecessary locking. (The locking/unlocking sequence in the two Withdraw calls is not needed but performed anyway, and that costs time.)
  • Your mutex implementation might not support recursive locking, which means that as soon as you try to acquire it the second time, it blocks, so the ATMWithdrawal function enters the dreaded deadlock.

As boost::mutex is not recursive, we need to use its recursive version boost::recursive_mutex.

class BankAccount
: public basic_lockable_adapter<recursive_mutex>
{

    // ...
};

The caller-ensured locking approach is more flexible and the most efficient, but very dangerous. In an implementation using caller-ensured locking, BankAccount still holds a mutex, but its member functions don't manipulate it at all. Deposit and Withdraw are not thread-safe anymore. Instead, the client code is responsible for locking BankAccount properly.

class BankAccount
    : public basic_lockable_adapter<boost::mutex> {
    int balance_;
public:
    void Deposit(int amount) {
        balance_ += amount;
    }
    void Withdraw(int amount) {
        balance_ -= amount;
    }
};

Obviously, the caller-ensured locking approach has a safety problem. BankAccount's implementation code is finite, and easy to reach and maintain, but there's an unbounded amount of client code that manipulates BankAccount objects. In designing applications, it's important to differentiate between requirements imposed on bounded code and unbounded code. If your class makes undue requirements on unbounded code, that's usually a sign that encapsulation is out the window.

To conclude, if in designing a multi-threaded class you settle on internal locking, you expose yourself to inefficiency or deadlocks. On the other hand, if you rely on caller-provided locking, you make your class error-prone and difficult to use. Finally, external locking completely avoids the issue by leaving it all to the client code.

[Note] Note

This tutorial is an adaptation of the paper by Andrei Alexandrescu "Multithreading and the C++ Type System" to the Boost library.

So what to do? Ideally, the BankAccount class should do the following:

  • Support both locking models (internal and external).
  • Be efficient; that is, use no unnecessary locking.
  • Be safe; that is, BankAccount objects cannot be manipulated without appropriate locking.

Let's make a worthwhile observation: whenever you lock a BankAccount, you do so by using a lock_guard<BankAccount> object. Turning this statement around, wherever there's a lock_guard<BankAccount>, there's also a locked BankAccount somewhere. Thus, you can think of, and use, a lock_guard<BankAccount> object as a permit. Owning a lock_guard<BankAccount> gives you rights to do certain things. The lock_guard<BankAccount> object should not be copied or aliased (it's not a transmissible permit).

  1. As long as a permit is still alive, the BankAccount object stays locked.
  2. When the lock_guard<BankAccount> is destroyed, the BankAccount's mutex is released.

The net effect is that at any point in your code, having access to a lock_guard<BankAccount> object guarantees that a BankAccount is locked. (You don't know exactly which BankAccount is locked, however-an issue that we'll address soon.)

For now, let's make a couple of enhancements to the lock_guard class template defined in Boost.Thread. We'll call the enhanced version strict_lock. Essentially, a strict_lock's role is only to live on the stack as an automatic variable. strict_lock must adhere to a non-copy and non-alias policy. strict_lock disables copying by deleting the copy constructor and the assignment operator.

template <typename Lockable>
class strict_lock  {
public:
    typedef Lockable lockable_type;


    explicit strict_lock(lockable_type& obj) : obj_(obj) {
        obj.lock(); // locks on construction
    }
    strict_lock() = delete;
    strict_lock(strict_lock const&) = delete;
    strict_lock& operator=(strict_lock const&) = delete;

    ~strict_lock() { obj_.unlock(); } //  unlocks on destruction 

    bool owns_lock(lockable_type const* l) const noexcept // strict-locker-specific function
    {
      return l == &obj_;
    }
private:
    lockable_type& obj_;
};

Silence can sometimes be louder than words: what you are forbidden to do with a strict_lock is as important as what you can do. Let's see what you can and what you cannot do with a strict_lock instantiation:

  • You can create a strict_lock<T> only starting from a valid T object. Notice that there is no other way you can create a strict_lock<T>.
BankAccount myAccount("John Doe", "123-45-6789");
strict_lock<BankAccount> myLock(myAccount); // ok
  • You cannot copy strict_locks to one another. In particular, you cannot pass strict_locks by value to functions or have them returned by functions:
extern strict_lock<BankAccount> Foo(); // compile-time error
extern void Bar(strict_lock<BankAccount>); // compile-time error
  • However, you still can pass strict_locks by reference to and from functions:
// ok, Foo returns a reference to strict_lock<BankAccount>
extern strict_lock<BankAccount>& Foo();
// ok, Bar takes a reference to strict_lock<BankAccount>
extern void Bar(strict_lock<BankAccount>&);

All these rules were put in place with one purpose: enforcing that owning a strict_lock<T> is a reasonably strong guarantee that

  1. you locked a T object, and
  2. that object will be unlocked at a later point.

Now that we have such a strict strict_lock, how do we harness its power in defining a safe, flexible interface for BankAccount? The idea is as follows:

  • Each of BankAccount's interface functions (in our case, Deposit and Withdraw) comes in two overloaded variants.
  • One version keeps the same signature as before, and the other takes an additional argument of type strict_lock<BankAccount>. The first version is internally locked; the second one requires external locking. External locking is enforced at compile time by requiring client code to create a strict_lock<BankAccount> object.
  • BankAccount avoids code bloat by having the internally locked functions forward to the externally locked functions, which do the actual job.

A little code is worth 1,000 words, as a (slightly hacked) saying goes, so here's the new BankAccount class:

class BankAccount
: public basic_lockable_adapter<boost::mutex>
{
    int balance_;
public:
    void Deposit(int amount, strict_lock<BankAccount>&) {
        // Externally locked
        balance_ += amount;
    }
    void Deposit(int amount) {
        strict_lock<BankAccount> guard(*this); // Internally locked
        Deposit(amount, guard);
    }
    void Withdraw(int amount, strict_lock<BankAccount>&) {
        // Externally locked
        balance_ -= amount;
    }
    void Withdraw(int amount) {
        strict_lock<BankAccount> guard(*this); // Internally locked
        Withdraw(amount, guard);
    }
};

Now, if you want the benefit of internal locking, you simply call Deposit(int) and Withdraw(int). If you want to use external locking, you lock the object by constructing a strict_lock<BankAccount> and then you call Deposit(int, strict_lock<BankAccount>&) and Withdraw(int, strict_lock<BankAccount>&). For example, here's the ATMWithdrawal function implemented correctly:

void ATMWithdrawal(BankAccount& acct, int sum) {
    strict_lock<BankAccount> guard(acct);
    acct.Withdraw(sum, guard);
    acct.Withdraw(2, guard);
}

This function has the best of both worlds: it's reasonably safe and efficient at the same time.

It's worth noting that strict_lock being a template gives extra safety compared to a straight polymorphic approach. In such a design, BankAccount would derive from a Lockable interface, and strict_lock would manipulate Lockable references, so there would be no need for templates. This approach is sound; however, it provides fewer compile-time guarantees. Having a strict_lock object would only tell you that some object derived from Lockable is currently locked. In the templated approach, having a strict_lock<BankAccount> gives a stronger guarantee: it's a BankAccount that stays locked.
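For comparison, here is a minimal sketch of such a polymorphic design (the Lockable interface and the polymorphic_strict_lock class below are illustrative, not part of Boost.Thread):

class Lockable {
public:
    virtual void lock() = 0;
    virtual void unlock() = 0;
    virtual ~Lockable() {}
};

// A non-template strict lock: owning one only tells you that *some* Lockable is locked.
class polymorphic_strict_lock {
public:
    explicit polymorphic_strict_lock(Lockable& obj) : obj_(obj) { obj_.lock(); }
    ~polymorphic_strict_lock() { obj_.unlock(); }
private:
    polymorphic_strict_lock(polymorphic_strict_lock const&);            // no copy
    polymorphic_strict_lock& operator=(polymorphic_strict_lock const&); // no assignment
    Lockable& obj_;
};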

There's a weasel word in there: I mentioned that ATMWithdrawal is reasonably safe. It's not really safe because there's no enforcement that the strict_lock<BankAccount> object locks the appropriate BankAccount object. The type system only ensures that some BankAccount object is locked. For example, consider the following phony implementation of ATMWithdrawal:

void ATMWithdrawal(BankAccount& acct, int sum) {
    BankAccount fakeAcct("John Doe", "123-45-6789");
    strict_lock<BankAccount> guard(fakeAcct);
    acct.Withdraw(sum, guard);
    acct.Withdraw(2, guard);
}

This code compiles warning-free but obviously doesn't do the right thing: it locks one account and uses another.

It's important to understand what can be enforced within the realm of the C++ type system and what needs to be enforced at runtime. The mechanism we've put in place so far ensures that some BankAccount object is locked during the call to BankAccount::Withdraw(int, strict_lock<BankAccount>&). We must enforce at runtime exactly what object is locked.

If our scheme still needs runtime checks, how is it useful? An unwary or malicious programmer can easily lock the wrong object and manipulate any BankAccount without actually locking it.

First, let's get the malice issue out of the way. C is a language that requires a lot of attention and discipline from the programmer. C++ made some progress by asking a little less of those, while still fundamentally trusting the programmer. These languages are not concerned with malice (as Java is, for example). After all, you can break any C/C++ design simply by using casts "appropriately" (if appropriately is an, er, appropriate word in this context).

The scheme is useful because the likelihood of a programmer forgetting about any locking whatsoever is much greater than the likelihood of a programmer who does remember about locking, but locks the wrong object.

Using strict_lock permits compile-time checking of the most common source of errors, and runtime checking of the less frequent problem.

Let's see how to enforce that the appropriate BankAccount object is locked. First, we need to add a member function to the strict_lock class template. The bool strict_lock<T>::owns_lock(Lockable*) function returns true if the passed lockable is the one locked by this strict_lock.

template <class Lockable> class strict_lock {
    ... as before ...
public:
    bool owns_lock(Lockable* mtx) const { return mtx==&obj_; }
};

Second, BankAccount needs to use this function to compare the locked object against this:

class BankAccount
: public basic_lockable_adapter<boost::mutex>
{
    int balance_;
public:
    void Deposit(int amount, strict_lock<BankAccount>& guard) {
        // Externally locked
        if (!guard.owns_lock(this))
            throw "Locking Error: Wrong Object Locked";
        balance_ += amount;
    }
// ...
};

The overhead incurred by the test above is much lower than locking a recursive mutex for the second time.

Now let's assume that BankAccount doesn't use its own locking at all, and has only a thread-neutral implementation:

class BankAccount {
    int balance_;
public:
    void Deposit(int amount) {
        balance_ += amount;
    }
    void Withdraw(int amount) {
        balance_ -= amount;
    }
};

Now you can use BankAccount in single-threaded and multi-threaded applications alike, but you need to provide your own synchronization in the latter case.
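For example, a minimal sketch of caller-provided synchronization for this thread-neutral BankAccount (the JoesAccountMtx mutex and the transfer_salary function are illustrative):

#include <boost/thread/mutex.hpp>
#include <boost/thread/lock_guard.hpp>

BankAccount JoesAccount;
boost::mutex JoesAccountMtx; // client-managed mutex guarding JoesAccount

void transfer_salary() {
    // The caller, not BankAccount, is responsible for the locking.
    boost::lock_guard<boost::mutex> guard(JoesAccountMtx);
    JoesAccount.Deposit(500);
}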

Say we have an AccountManager class that holds and manipulates a BankAccount object:

class AccountManager
: public basic_lockable_adapter<boost::mutex>
{
    BankAccount checkingAcct_;
    BankAccount savingsAcct_;
    ...
};

Let's also assume that, by design, AccountManager must stay locked while accessing its BankAccount members. The question is, how can we express this design constraint using the C++ type system? How can we state "You have access to this BankAccount object only after locking its parent AccountManager object"?

The solution is to use a little bridge template externally_locked that controls access to a BankAccount.

template <typename  T, typename Lockable>
class externally_locked {
    BOOST_CONCEPT_ASSERT((LockableConcept<Lockable>));

public:
    externally_locked(T& obj, Lockable& lockable)
        : obj_(obj)
        , lockable_(lockable)
    {}

    externally_locked(Lockable& lockable)
        : obj_()
        , lockable_(lockable)
    {}

    T& get(strict_lock<Lockable>& lock) {

#ifdef BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED
        if (!lock.owns_lock(&lockable_)) throw lock_error(); // run-time check: throw if the lock doesn't own the same lockable
#endif
        return obj_;
    }
    void set(const T& obj) {
        obj_ = obj; // lockable_ is a reference and cannot be reseated
    }
private:
    T obj_;
    Lockable& lockable_;
};

externally_locked cloaks an object of type T, and actually provides full access to that object through the get and set member functions, provided you pass a reference to a strict_lock<Lockable> object.

Instead of making checkingAcct_ and savingsAcct_ of type BankAccount, AccountManager holds objects of type externally_locked<BankAccount, AccountManager>:

class AccountManager
    : public basic_lockable_adapter<boost::mutex>
{
public:
    typedef basic_lockable_adapter<boost::mutex> lockable_base_type;
    AccountManager()
        : checkingAcct_(*this)
        , savingsAcct_(*this)
    {}
    inline void Checking2Savings(int amount);
    inline void AMoreComplicatedChecking2Savings(int amount);
private:

    externally_locked<BankAccount, AccountManager> checkingAcct_;
    externally_locked<BankAccount, AccountManager> savingsAcct_;
};

The pattern is the same as before: to access the BankAccount object cloaked by checkingAcct_, you need to call get. To call get, you need to pass it a strict_lock<AccountManager>. The one thing you have to take care of is not to hold on to pointers or references obtained by calling get; if you do, make sure that you don't use them after the strict_lock has been destroyed. That is, if you alias the cloaked objects, you're back from "the compiler takes care of that" mode to "you must pay attention" mode.

Typically, you use externally_locked as shown below. Suppose you want to execute an atomic transfer from your checking account to your savings account:

void AccountManager::Checking2Savings(int amount) {
    strict_lock<AccountManager> guard(*this);
    checkingAcct_.get(guard).Withdraw(amount);
    savingsAcct_.get(guard).Deposit(amount);
}

We achieved two important goals. First, the declaration of checkingAcct_ and savingsAcct_ makes it clear to the code reader that those variables are protected by a lock on an AccountManager. Second, the design makes it impossible to manipulate the two accounts without actually locking their AccountManager. externally_locked is what could be called active documentation.

Now imagine that an AccountManager function needs to take a unique_lock in order to reduce the critical regions, and at some point it needs to access checkingAcct_. As unique_lock is not a strict lock, the following code doesn't compile:

void AccountManager::AMoreComplicatedChecking2Savings(int amount) {
    unique_lock<AccountManager> guard(*this, defer_lock);
    if (some_condition()) {
        guard.lock();
    }
    checkingAcct_.get(guard).Withdraw(amount); // COMPILE ERROR
    savingsAcct_.get(guard).Deposit(amount);  // COMPILE ERROR
    do_something_else();
}

We need a way to transfer the ownership from the unique_lock to a strict_lock while we are working with the accounts, and then restore the ownership to the unique_lock.

void AccountManager::AMoreComplicatedChecking2Savings(int amount) {
    unique_lock<AccountManager> guard1(*this, defer_lock);
    if (some_condition()) {
        guard1.lock();
    }
    {
        strict_lock<AccountManager> guard(guard1);
        checkingAcct_.get(guard).Withdraw(amount);
        savingsAcct_.get(guard).Deposit(amount);
    }
    guard1.unlock();
}

In order to make this code compile, the strict lock would need to store either a Lockable reference or a unique_lock<Lockable> reference, depending on the constructor used. It would also need to remember which kind of reference it stores and, in the destructor, either call unlock on the Lockable or restore the ownership to the unique_lock.

This seems too complicated. Another possibility is to define a nested strict lock class. The drawback is that instead of having only one strict lock we have two, and we need either to duplicate every function taking a strict_lock or to make these functions templates. The problem with function templates is that we no longer benefit from the C++ type system: we must add a static metafunction that checks that the Locker parameter is a strict lock. Can we really check this? Not entirely: the is_strict_lock metafunction must be specialized by the strict lock developer, so we have to take it "sur parole" (on its word). The advantage is that we can now handle more than two strict locks without changing our code. This is really nice.

Now we need to state that both classes are strict_locks.

template <typename Locker>
struct is_strict_lock : mpl::false_ {};

template <typename Lockable>
struct is_strict_lock<strict_lock<Lockable> > : mpl::true_ {};

template <typename Locker>
struct is_strict_lock<nested_strict_lock<Locker> > : mpl::true_ {};

Let's see what this nested_strict_lock class looks like and its impact on the externally_locked class and the AccountManager::AMoreComplicatedChecking2Savings function.

First, the nested_strict_lock class stores the Locker in a temporary lock and transfers the lock ownership to it in the constructor. On destruction it restores the ownership. Note the use of lock_traits, and that the Locker needs to hold a reference to the mutex, otherwise an exception is thrown.

template <typename Locker >
class nested_strict_lock
{
    BOOST_CONCEPT_ASSERT((MovableLockerConcept<Locker>));
public:
    typedef typename lockable_type<Locker>::type lockable_type;
    typedef typename syntactic_lock_traits<lockable_type>::lock_error lock_error;

    nested_strict_lock(Locker& lock)
        : lock_(lock)  // Store reference to locker
        , tmp_lock_(lock.move()) // Move ownership to temporary locker 
    {
        #ifdef BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED
        if (tmp_lock_.mutex()==0) {
            lock_=tmp_lock_.move(); // Rollback for coherency purposes 
            throw lock_error();
        }
        #endif
        if (!tmp_lock_) tmp_lock_.lock(); // ensures it is locked 
    }
    ~nested_strict_lock() {
        lock_=tmp_lock_.move(); // Move ownership to nesting locker 
    }
    bool owns_lock() const { return true; }
    lockable_type* mutex() const { return tmp_lock_.mutex(); }
    bool owns_lock(lockable_type* l) const { return l==mutex(); }


private:
    Locker& lock_;
    Locker tmp_lock_;
};

The externally_locked get function is now a function template taking a Locker as parameter instead of a strict_lock. We can add tests in debug mode that ensure that the Lockable object is locked.

template <typename  T, typename Lockable>
class externally_locked {
public:
    // ...
    template <class Locker>
    T& get(Locker& lock) {
        BOOST_CONCEPT_ASSERT((StrictLockerConcept<Locker>));

        BOOST_STATIC_ASSERT((is_strict_lock<Locker>::value)); // locker is a strict locker "sur parole" 
        BOOST_STATIC_ASSERT((is_same<Lockable,
                typename lockable_type<Locker>::type>::value)); // that locks the same type 
#ifndef BOOST_THREAD_EXTERNALLY_LOCKED_DONT_CHECK_OWNERSHIP  // define BOOST_THREAD_EXTERNALLY_LOCKED_DONT_CHECK_OWNERSHIP if you don't want to check locker ownership
        if (! lock ) throw lock_error(); // run-time check: throw if not locked
#endif
#ifdef BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED
        if (!lock.owns_lock(&lockable_)) throw lock_error();
#endif
        return obj_;
    }
};

The AccountManager::AMoreComplicatedChecking2Savings function only needs to replace the strict_lock with a nested_strict_lock.

void AccountManager::AMoreComplicatedChecking2Savings(int amount) {
    unique_lock<AccountManager> guard1(*this, defer_lock);
    if (some_condition()) {
        guard1.lock();
    }
    {
        nested_strict_lock<unique_lock<AccountManager> > guard(guard1);
        checkingAcct_.get(guard).Withdraw(amount);
        savingsAcct_.get(guard).Deposit(amount);
    }
    guard1.unlock();
}

In particular, the library provides a way to lock around the execution of a function.

template <class Lockable, class Function, class... Args>
auto with_lock_guard(
    Lockable& m,
    Function&& func,
    Args&&... args
) -> decltype(func(boost::forward<Args>(args)...)) {
  boost::lock_guard<Lockable> lock(m);
  return func(boost::forward<Args>(args)...);
}

that can be used with regular functions:

int func(int, int&);
//...
boost::mutex m;
int a;
int result = boost::with_lock_guard(m, func, 1, boost::ref(a));

with boost::bind:

int result = boost::with_lock_guard(
    m, boost::bind(func, 2, boost::ref(a))
);

or with a lambda expression:

int a;
int result = boost::with_lock_guard(
    m,
    [&a](int x) {
      // this scope is protected by mutex m
      a = 3;
      return x + 4;
    },
    5
);

A mutex object facilitates protection against data races and allows thread-safe synchronization of data between threads. A thread obtains ownership of a mutex object by calling one of the lock functions and relinquishes ownership by calling the corresponding unlock function. Mutexes may be either recursive or non-recursive, and may grant simultaneous ownership to one or many threads. Boost.Thread supplies recursive and non-recursive mutexes with exclusive ownership semantics, along with a shared ownership (multiple-reader / single-writer) mutex.

Boost.Thread supports four basic concepts for lockable objects: Lockable, TimedLockable, SharedLockable and UpgradeLockable. Each mutex type implements one or more of these concepts, as do the various lock types.

// #include <boost/thread/lockable_concepts.hpp> 

namespace boost
{

  template<typename L>
  class BasicLockable; // EXTENSION
}

The BasicLockable concept models exclusive ownership. A type L meets the BasicLockable requirements if the following expressions are well-formed and have the specified semantics (m denotes a value of type L):

Lock ownership acquired through a call to lock() must be released through a call to unlock().

m.lock();

Requires:

The calling thread doesn't own the mutex if the mutex is not recursive.

Effects:

The current thread blocks until ownership can be obtained for the current thread.

Synchronization:

Prior unlock() operations on the same object synchronize with this operation.

Postcondition:

The current thread owns m.

Return type:

void.

Throws:

lock_error if an error occurs.

Error Conditions:

operation_not_permitted: if the thread does not have the privilege to perform the operation.

resource_deadlock_would_occur: if the implementation detects that a deadlock would occur.

device_or_resource_busy: if the mutex is already locked and blocking is not possible.

Thread safety:

If an exception is thrown then a lock shall not have been acquired for the current thread.

m.unlock();

Requires:

The current thread owns m.

Synchronization:

This operation synchronizes with subsequent lock operations that obtain ownership on the same object.

Effects:

Releases a lock on m by the current thread.

Return type:

void.

Throws:

Nothing.

// #include <boost/thread/lockable_traits.hpp> 

namespace boost
{
  namespace sync
  {
    template<typename L>
    class is_basic_lockable;// EXTENSION
  }
}

Some of the algorithms on mutexes use this trait via SFINAE.

This trait is true_type if the parameter L meets the BasicLockable requirements.

[Warning] Warning

If BOOST_THREAD_NO_AUTO_DETECT_MUTEX_TYPES is defined you will need to specialize this trait for the models of BasicLockable you build.
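For illustration, a minimal sketch of how this trait might be used to reject non-lockable types at compile time (the check_basic_lockable helper below is ours, not part of Boost.Thread):

#include <boost/thread/lockable_traits.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/static_assert.hpp>

// Hypothetical helper: refuses to compile unless L models BasicLockable.
template <typename L>
void check_basic_lockable(L&) {
    BOOST_STATIC_ASSERT(boost::sync::is_basic_lockable<L>::value);
}

void example() {
    boost::mutex m;
    check_basic_lockable(m); // ok: boost::mutex models BasicLockable
}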

// #include <boost/thread/lockable_concepts.hpp> 
namespace boost
{
  template<typename L>
  class Lockable;
}

A type L meets the Lockable requirements if it meets the BasicLockable requirements and the following expressions are well-formed and have the specified semantics (m denotes a value of type L):

Lock ownership acquired through a call to try_lock() must be released through a call to unlock().

m.try_lock();

Requires:

The calling thread doesn't own the mutex if the mutex is not recursive.

Effects:

Attempt to obtain ownership for the current thread without blocking.

Synchronization:

If try_lock() returns true, prior unlock() operations on the same object synchronize with this operation.

Note:

Since lock() does not synchronize with a failed subsequent try_lock(), the visibility rules are weak enough that little would be known about the state after a failure, even in the absence of spurious failures.

Return type:

bool.

Returns:

true if ownership was obtained for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread owns m.

Throws:

Nothing.
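As a small usage sketch (the shared_counter variable and the do_other_work function below are illustrative):

#include <boost/thread/mutex.hpp>

boost::mutex m;
int shared_counter = 0;

void do_other_work();

void attempt_update() {
    if (m.try_lock()) {      // returns immediately; true only if ownership was obtained
        ++shared_counter;    // safe: this thread owns the mutex here
        m.unlock();          // ownership acquired through try_lock() is released with unlock()
    } else {
        do_other_work();     // could not obtain the lock without blocking
    }
}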

// #include <boost/thread/lockable_traits.hpp> 
namespace boost
{
  namespace sync
  {
    template<typename L>
    class is_lockable;// EXTENSION
  }
}

Some of the algorithms on mutexes use this trait via SFINAE.

This trait is true_type if the parameter L meets the Lockable requirements.

[Warning] Warning

If BOOST_THREAD_NO_AUTO_DETECT_MUTEX_TYPES is defined you will need to specialize this trait for the models of Lockable you build.

The user could require that the mutex passed to an algorithm is a recursive one. Whether a lockable is recursive or not cannot be checked using template meta-programming. This is the motivation for the following trait.

// #include <boost/thread/lockable_traits.hpp> 

namespace boost
{
  namespace sync
  {
    template<typename L>
    class is_recursive_mutex_sur_parole: false_type; // EXTENSION
    template<>
    class is_recursive_mutex_sur_parole<recursive_mutex>: true_type; // EXTENSION
    template<>
    class is_recursive_mutex_sur_parole<timed_recursive_mutex>: true_type; // EXTENSION
  }
}

The trait is_recursive_mutex_sur_parole is false_type by default and is specialized for the provided recursive_mutex and timed_recursive_mutex.

It should be specialized by users providing other models of recursive lockables.
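For example, a user who provides their own recursive lockable (the my_recursive_mutex class below is hypothetical) might declare it recursive "sur parole" like this:

#include <boost/thread/lockable_traits.hpp>
#include <boost/type_traits/integral_constant.hpp>

class my_recursive_mutex; // hypothetical user-provided recursive lockable

namespace boost {
namespace sync {
    // Tell the library to take my_recursive_mutex's recursiveness on its word.
    template <>
    struct is_recursive_mutex_sur_parole<my_recursive_mutex> : boost::true_type {};
}
}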

// #include <boost/thread/lockable_traits.hpp> 
namespace boost
{
  namespace sync
  {
    template<typename L>
    class is_recursive_basic_lockable;// EXTENSION
  }
}

This trait is true_type if both is_basic_lockable<L> and is_recursive_mutex_sur_parole<L> are true_type.

// #include <boost/thread/lockable_traits.hpp> 
namespace boost
{
  namespace sync
  {
    template<typename L>
    class is_recursive_lockable;// EXTENSION
  }
}

This trait is true_type if both is_lockable<L> and is_recursive_mutex_sur_parole<L> are true_type.

// #include <boost/thread/lockable_concepts.hpp> 

namespace boost
{
  template<typename L>
  class TimedLockable; // EXTENSION
}

The TimedLockable concept refines the Lockable concept to add support for timeouts when trying to acquire the lock.

A type L meets the TimedLockable requirements if it meets the Lockable requirements and the following expressions are well-formed and have the specified semantics.

Variables:

  • m denotes a value of type L,
  • rel_time denotes a value of an instantiation of chrono::duration, and
  • abs_time denotes a value of an instantiation of chrono::time_point:

Expressions:

Lock ownership acquired through a call to try_lock_for or try_lock_until must be released through a call to unlock.

m.try_lock_until(abs_time);

Requires:

The calling thread doesn't own the mutex if the mutex is not recursive.

Effects:

Attempt to obtain ownership for the current thread. Blocks until ownership can be obtained, or the specified time is reached. If the specified time has already passed, behaves as try_lock().

Synchronization:

If try_lock_until() returns true, prior unlock() operations on the same object synchronize with this operation.

Return type:

bool.

Returns:

true if ownership was obtained for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread owns m.

Throws:

Nothing.

m.try_lock_for(rel_time);

Requires:

The calling thread doesn't own the mutex if the mutex is not recursive.

Effects:

As-if try_lock_until(chrono::steady_clock::now() + rel_time).

Synchronization:

If try_lock_for() returns true, prior unlock() operations on the same object synchronize with this operation.

[Warning] Warning

DEPRECATED since 4.00. The following expressions were required on version 2, but are now deprecated.

Use instead try_lock_for, try_lock_until.

Variables:

  • rel_time denotes a value of an instantiation of an unspecified DurationType arithmetic compatible with boost::system_time, and
  • abs_time denotes a value of an instantiation of boost::system_time:

Expressions:

  • m.timed_lock(rel_time);
  • m.timed_lock(abs_time);

Lock ownership acquired through a call to timed_lock() must be released through a call to unlock().

Effects:

Attempt to obtain ownership for the current thread. Blocks until ownership can be obtained, or the specified time is reached. If the specified time has already passed, behaves as try_lock().

Returns:

true if ownership was obtained for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread owns m.

Throws:

lock_error if an error occurs.
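As a usage sketch, boost::timed_mutex models TimedLockable; the update_with_timeout function below is illustrative:

#include <boost/thread/mutex.hpp>   // boost::timed_mutex
#include <boost/chrono.hpp>

boost::timed_mutex tm;
int shared_value = 0;

bool update_with_timeout() {
    // Give up if ownership cannot be obtained within 100 milliseconds.
    if (tm.try_lock_for(boost::chrono::milliseconds(100))) {
        ++shared_value;   // this thread owns the mutex here
        tm.unlock();      // ownership acquired through try_lock_for() is released with unlock()
        return true;
    }
    return false;         // timed out without acquiring ownership
}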

// #include <boost/thread/lockable_concepts.hpp> 

namespace boost
{
  template<typename L>
  class SharedLockable;  // C++14
}

The SharedLockable concept is a refinement of the TimedLockable concept that allows for shared ownership as well as exclusive ownership. This is the standard multiple-reader / single-writer model: at most one thread can have exclusive ownership, and if any thread does have exclusive ownership, no other threads can have shared or exclusive ownership. Alternatively, many threads may have shared ownership.

A type L meets the SharedLockable requirements if it meets the TimedLockable requirements and the following expressions are well-formed and have the specified semantics.

Variables:

  • m denotes a value of type L,
  • rel_time denotes a value of an instantiation of chrono::duration, and
  • abs_time denotes a value of an instantiation of chrono::time_point:

Expressions:

Lock ownership acquired through a call to lock_shared(), try_lock_shared(), try_lock_shared_for or try_lock_shared_until must be released through a call to unlock_shared().

m.lock_shared();

Effects:

The current thread blocks until shared ownership can be obtained for the current thread.

Postcondition:

The current thread has shared ownership of m.

Throws:

lock_error if an error occurs.

m.try_lock_shared();

Effects:

Attempt to obtain shared ownership for the current thread without blocking.

Returns:

true if shared ownership was obtained for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has shared ownership of m.

Throws:

lock_error if an error occurs.

m.try_lock_shared_for(rel_time);

Effects:

Attempt to obtain shared ownership for the current thread. Blocks until shared ownership can be obtained, or the specified duration is elapsed. If the specified duration is already elapsed, behaves as try_lock_shared().

Returns:

true if shared ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has shared ownership of m.

Throws:

lock_error if an error occurs.

m.try_lock_shared_until(abs_time);

Effects:

Attempt to obtain shared ownership for the current thread. Blocks until shared ownership can be obtained, or the specified time is reached. If the specified time has already passed, behaves as try_lock_shared().

Returns:

true if shared ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has shared ownership of m.

Throws:

lock_error if an error occurs.

m.unlock_shared();

Precondition:

The current thread has shared ownership of m.

Effects:

Releases shared ownership of m by the current thread.

Postcondition:

The current thread no longer has shared ownership of m.

Throws:

Nothing

[Warning] Warning

DEPRECATED since 3.00. The following expressions were required on version 2, but are now deprecated.

Use instead try_lock_shared_for, try_lock_shared_until.

Variables:

  • abs_time denotes a value of an instantiation of boost::system_time:

Expressions:

  • m.timed_lock_shared(abs_time);

Lock ownership acquired through a call to timed_lock_shared() must be released through a call to unlock_shared().

Effects:

Attempt to obtain shared ownership for the current thread. Blocks until shared ownership can be obtained, or the specified time is reached. If the specified time has already passed, behaves as try_lock_shared().

Returns:

true if shared ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has shared ownership of m.

Throws:

lock_error if an error occurs.
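As a usage sketch, boost::shared_mutex models SharedLockable and is usually manipulated through the shared_lock and unique_lock templates rather than by calling these member functions directly (the table data and the two functions below are illustrative):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>
#include <string>

boost::shared_mutex rw_mtx;
std::map<std::string, int> table;   // shared data protected by rw_mtx

int read_entry(const std::string& key) {
    boost::shared_lock<boost::shared_mutex> lock(rw_mtx);   // shared ownership: many concurrent readers
    std::map<std::string, int>::const_iterator it = table.find(key);
    return it == table.end() ? 0 : it->second;
}

void write_entry(const std::string& key, int value) {
    boost::unique_lock<boost::shared_mutex> lock(rw_mtx);   // exclusive ownership: a single writer
    table[key] = value;
}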

// #include <boost/thread/lockable_concepts.hpp> 

namespace boost
{
  template<typename L>
  class UpgradeLockable; // EXTENSION
}

The UpgradeLockable concept is a refinement of the SharedLockable concept that allows for upgradable ownership as well as shared ownership and exclusive ownership. This is an extension to the multiple-reader / single-writer model provided by the SharedLockable concept: a single thread may have upgradable ownership at the same time as others have shared ownership. The thread with upgradable ownership may at any time attempt to upgrade that ownership to exclusive ownership. If no other threads have shared ownership, the upgrade is completed immediately, and the thread now has exclusive ownership, which must be relinquished by a call to unlock(), just as if it had been acquired by a call to lock().

If a thread with upgradable ownership tries to upgrade whilst other threads have shared ownership, the attempt will fail and the thread will block until exclusive ownership can be acquired.

Ownership can also be downgraded as well as upgraded: exclusive ownership of an implementation of the UpgradeLockable concept can be downgraded to upgradable ownership or shared ownership, and upgradable ownership can be downgraded to plain shared ownership.
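As a usage sketch, boost::shared_mutex also models UpgradeLockable; the typical pattern goes through boost::upgrade_lock and boost::upgrade_to_unique_lock (the data container and the append_if_missing function below are illustrative):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <vector>

boost::shared_mutex mtx;
std::vector<int> data;   // shared data protected by mtx

void append_if_missing(int value) {
    // Upgrade ownership coexists with readers, but only one thread may hold it at a time.
    boost::upgrade_lock<boost::shared_mutex> read_lock(mtx);
    for (std::vector<int>::const_iterator it = data.begin(); it != data.end(); ++it)
        if (*it == value)
            return;   // already present: no exclusive ownership needed
    // Atomically upgrade to exclusive ownership before modifying the data.
    boost::upgrade_to_unique_lock<boost::shared_mutex> write_lock(read_lock);
    data.push_back(value);
}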

A type L meets the UpgradeLockable requirements if it meets the SharedLockable requirements and the following expressions are well-formed and have the specified semantics.

Variables:

  • m denotes a value of type L,
  • rel_time denotes a value of an instantiation of chrono::duration, and
  • abs_time denotes a value of an instantiation of chrono::time_point:

Expressions:

If BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS is defined, the following expressions are also required:

Lock ownership acquired through a call to lock_upgrade() must be released through a call to unlock_upgrade(). If the ownership type is changed through a call to one of the unlock_xxx_and_lock_yyy() functions, ownership must be released through a call to the unlock function corresponding to the new level of ownership.

m.lock_upgrade();

Precondition:

The calling thread has no ownership of the mutex.

Effects:

The current thread blocks until upgrade ownership can be obtained for the current thread.

Postcondition:

The current thread has upgrade ownership of m.

Synchronization:

Prior unlock_upgrade() operations on the same object synchronize with this operation.

Throws:

lock_error if an error occurs.

m.unlock_upgrade();

Precondition:

The current thread has upgrade ownership of m.

Effects:

Releases upgrade ownership of m by the current thread.

Postcondition:

The current thread no longer has upgrade ownership of m.

Synchronization:

This operation synchronizes with subsequent lock operations that obtain ownership on the same object.

Throws:

Nothing

m.try_lock_upgrade();

Precondition:

The calling thread has no ownership of the mutex.

Effects:

Attempts to obtain upgrade ownership of the mutex for the calling thread without blocking. If upgrade ownership is not obtained, there is no effect and try_lock_upgrade() immediately returns.

Returns:

true if upgrade ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has upgrade ownership of m.

Synchronization:

If try_lock_upgrade() returns true, prior unlock_upgrade() operations on the same object synchronize with this operation.

Throws:

Nothing

m.try_lock_upgrade_for(rel_time);

Precondition:

The calling thread has no ownership of the mutex.

Effects:

If the tick period of rel_time is not exactly convertible to the native tick period, the duration shall be rounded up to the nearest native tick period. Attempts to obtain upgrade lock ownership for the calling thread within the relative timeout specified by rel_time. If the time specified by rel_time is less than or equal to rel_time.zero(), the function attempts to obtain ownership without blocking (as if by calling try_lock_upgrade()). The function returns within the timeout specified by rel_time only if it has obtained upgrade ownership of the mutex object.

Returns:

true if upgrade ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has upgrade ownership of m.

Synchronization:

If try_lock_upgrade_for(rel_time) returns true, prior unlock_upgrade() operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_lock_upgrade_until(abs_time);

Precondition:

The calling thread has no ownership of the mutex.

Effects:

The function attempts to obtain upgrade ownership of the mutex. If abs_time has already passed, the function attempts to obtain upgrade ownership without blocking (as if by calling try_lock_upgrade()). The function returns before the absolute timeout specified by abs_time only if it has obtained upgrade ownership of the mutex object.

Returns:

true if upgrade ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has upgrade ownership of m.

Synchronization:

If try_lock_upgrade_until(abs_time) returns true, prior unlock_upgrade() operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_unlock_shared_and_lock();

Precondition:

The calling thread must hold a shared lock on the mutex.

Effects:

The function attempts to atomically convert the ownership from shared to exclusive for the calling thread without blocking. For this conversion to be successful, this thread must be the only thread holding any ownership of the lock. If the conversion is not successful, the shared ownership of m is retained.

Returns:

true if exclusive ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has exclusive ownership of m.

Synchronization:

If try_unlock_shared_and_lock() returns true, prior unlock() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS and BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_unlock_shared_and_lock_for(rel_time);

Precondition:

The calling thread shall hold a shared lock on the mutex.

Effects:

If the tick period of rel_time is not exactly convertible to the native tick period, the duration shall be rounded up to the nearest native tick period. The function attempts to atomically convert the ownership from shared to exclusive for the calling thread within the relative timeout specified by rel_time. If the time specified by rel_time is less than or equal to rel_time.zero(), the function attempts to obtain exclusive ownership without blocking (as if by calling try_unlock_shared_and_lock()). The function shall return within the timeout specified by rel_time only if it has obtained exclusive ownership of the mutex object. For this conversion to be successful, this thread must be the only thread holding any ownership of the lock at the moment of conversion. If the conversion is not successful, the shared ownership of the mutex is retained.

Returns:

true if exclusive ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has exclusive ownership of m.

Synchronization:

If try_unlock_shared_and_lock_for(rel_time) returns true, prior unlock() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS and BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_unlock_shared_and_lock_until(abs_time);

Precondition:

The calling thread shall hold a shared lock on the mutex.

Effects:

The function attempts to atomically convert the ownership from shared to exclusive for the calling thread within the absolute timeout specified by abs_time. If abs_time has already passed, the function attempts to obtain exclusive ownership without blocking (as if by calling try_unlock_shared_and_lock()). The function shall return before the absolute timeout specified by abs_time only if it has obtained exclusive ownership of the mutex object. For this conversion to be successful, this thread must be the only thread holding any ownership of the lock at the moment of conversion. If the conversion is not successful, the shared ownership of the mutex is retained.

Returns:

true if exclusive ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has exclusive ownership of m.

Synchronization:

If try_unlock_shared_and_lock_until(abs_time) returns true, prior unlock() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS and BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.unlock_and_lock_shared();

Precondition:

The calling thread shall hold an exclusive lock on m.

Effects:

Atomically converts the ownership from exclusive to shared for the calling thread.

Postcondition:

The current thread has shared ownership of m.

Synchronization:

This operation synchronizes with subsequent lock operations that obtain ownership of the same object.

Throws:

Nothing

m.try_unlock_shared_and_lock_upgrade();

Precondition:

The calling thread shall hold a shared lock on the mutex.

Effects:

The function attempts to atomically convert the ownership from shared to upgrade for the calling thread without blocking. For this conversion to be successful, there must be no thread holding upgrade ownership of this object. If the conversion is not successful, the shared ownership of the mutex is retained.

Returns:

true if upgrade ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has upgrade ownership of m.

Synchronization:

If try_unlock_shared_and_lock_upgrade() returns true, prior unlock_upgrade() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS and BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_unlock_shared_and_lock_upgrade_for(rel_time);

Precondition:

The calling thread shall hold a shared lock on the mutex.

Effects:

If the tick period of rel_time is not exactly convertible to the native tick period, the duration shall be rounded up to the nearest native tick period. The function attempts to atomically convert the ownership from shared to upgrade for the calling thread within the relative timeout specified by rel_time. If the time specified by rel_time is less than or equal to rel_time.zero(), the function attempts to obtain upgrade ownership without blocking (as if by calling try_unlock_shared_and_lock_upgrade()). The function shall return within the timeout specified by rel_time only if it has obtained exclusive ownership of the mutex object. For this conversion to be successful, there must be no thread holding upgrade ownership of this object at the moment of conversion. If the conversion is not successful, the shared ownership of m is retained.

Returns:

true if upgrade ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has upgrade ownership of m.

Synchronization:

If try_unlock_shared_and_lock_upgrade_for(rel_time) returns true, prior unlock_upgrade() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS and BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_unlock_shared_and_lock_upgrade_until(abs_time);

Precondition:

The calling thread shall hold a shared lock on the mutex.

Effects:

The function attempts to atomically convert the ownership from shared to upgrade for the calling thread within the absolute timeout specified by abs_time. If abs_time has already passed, the function attempts to obtain upgrade ownership without blocking (as if by calling try_unlock_shared_and_lock_upgrade()). The function shall return before the absolute timeout specified by abs_time only if it has obtained upgrade ownership of the mutex object. For this conversion to be successful, there must be no thread holding upgrade ownership of this object at the moment of conversion. If the conversion is not successful, the shared ownership of the mutex is retained.

Returns:

true if upgrade ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has upgrade ownership of m.

Synchronization:

If try_unlock_shared_and_lock_upgrade_until(abs_time) returns true, prior unlock_upgrade() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS and BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.unlock_and_lock_upgrade();

Precondition:

The current thread has exclusive ownership of m.

Effects:

Atomically releases exclusive ownership of m by the current thread and acquires upgrade ownership of m without blocking.

Postcondition:

The current thread has upgrade ownership of m.

Synchronization:

This operation synchronizes with subsequent lock operations that obtain ownership of the same object.

Throws:

Nothing

m.unlock_upgrade_and_lock();

Precondition:

The current thread has upgrade ownership of m.

Effects:

Atomically releases upgrade ownership of m by the current thread and acquires exclusive ownership of m. If any other threads have shared ownership, blocks until exclusive ownership can be acquired.

Postcondition:

The current thread has exclusive ownership of m.

Synchronization:

This operation synchronizes with prior unlock_shared() and subsequent lock operations that obtain ownership of the same object.

Throws:

Nothing

m.try_unlock_upgrade_and_lock();

Precondition:

The calling thread shall hold an upgrade lock on the mutex.

Effects:

The function attempts to atomically convert the ownership from upgrade to exclusive for the calling thread without blocking. For this conversion to be successful, this thread must be the only thread holding any ownership of the lock. If the conversion is not successful, the upgrade ownership of m is retained.

Returns:

true if exclusive ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has exclusive ownership of m.

Synchronization:

If try_unlock_upgrade_and_lock() returns true, prior unlock() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

Available only if BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined on Windows platform

m.try_unlock_upgrade_and_lock_for(rel_time);

Precondition:

The calling thread shall hold an upgrade lock on the mutex.

Effects:

If the tick period of rel_time is not exactly convertible to the native tick period, the duration shall be rounded up to the nearest native tick period. The function attempts to atomically convert the ownership from upgrade to exclusive for the calling thread within the relative timeout specified by rel_time. If the time specified by rel_time is less than or equal to rel_time.zero(), the function attempts to obtain exclusive ownership without blocking (as if by calling try_unlock_upgrade_and_lock()). The function shall return within the timeout specified by rel_time only if it has obtained exclusive ownership of the mutex object. For this conversion to be successful, this thread shall be the only thread holding any ownership of the lock at the moment of conversion. If the conversion is not successful, the upgrade ownership of m is retained.

Returns:

true if exclusive ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has exclusive ownership of m.

Synchronization:

If try_unlock_upgrade_and_lock_for(rel_time) returns true, prior unlock() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

On the Windows platform, available only if BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined.

Precondition:

The calling thread shall hold an upgrade lock on the mutex.

Effects:

The function attempts to atomically convert the ownership from upgrade to exclusive for the calling thread within the absolute timeout specified by abs_time. If abs_time has already passed, the function attempts to obtain exclusive ownership without blocking (as if by calling try_unlock_upgrade_and_lock()). The function shall return before the absolute timeout specified by abs_time only if it has obtained exclusive ownership of the mutex object. For this conversion to be successful, this thread shall be the only thread holding any ownership of the lock at the moment of conversion. If the conversion is not successful, the upgrade ownership of m is retained.

Returns:

true if exclusive ownership was acquired for the current thread, false otherwise.

Postcondition:

If the call returns true, the current thread has exclusive ownership of m.

Synchronization:

If try_unlock_upgrade_and_lock_until(abs_time) returns true, prior unlock() and subsequent lock operations on the same object synchronize with this operation.

Throws:

Nothing

Notes:

On the Windows platform, available only if BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined.

Precondition:

The current thread has upgrade ownership of m.

Effects:

Atomically releases upgrade ownership of m by the current thread and acquires shared ownership of m without blocking.

Postcondition:

The current thread has shared ownership of m.

Synchronization:

This operation synchronizes with prior unlock_shared() and subsequent lock operations that obtain ownership of the same object.

Throws:

Nothing

// #include <boost/thread/locks.hpp> 
// #include <boost/thread/locks_options.hpp> 

namespace boost
{
  struct defer_lock_t {};
  struct try_to_lock_t {};
  struct adopt_lock_t {};
  constexpr defer_lock_t defer_lock;
  constexpr try_to_lock_t try_to_lock;
  constexpr adopt_lock_t adopt_lock;
}
#include <boost/thread/locks.hpp>
#include <boost/thread/locks_options.hpp>

struct defer_lock_t {};
struct try_to_lock_t {};
struct adopt_lock_t {};
const defer_lock_t defer_lock;
const try_to_lock_t try_to_lock;
const adopt_lock_t adopt_lock;

These tags are used in the constructors of the lock types to select a specific locking behavior, as illustrated by the sketch after this list.

  • defer_lock_t: is used to construct the scoped lock without locking it.
  • try_to_lock_t: is used to construct the scoped lock trying to lock it.
  • adopt_lock_t: is used to construct the scoped lock without locking it, adopting ownership of a lock already held by the calling thread.
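
For illustration only, a minimal sketch of how these tags select the constructor behavior of boost::unique_lock. The mutex and function names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex mtx;

void tag_examples()
{
    boost::unique_lock<boost::mutex> l1(mtx, boost::defer_lock);   // mtx not locked yet
    l1.lock();                                                     // locked explicitly later
    l1.unlock();

    boost::unique_lock<boost::mutex> l2(mtx, boost::try_to_lock);  // tries to lock, may fail
    if (l2.owns_lock()) {
        // mtx is locked here
        l2.unlock();
    }

    mtx.lock();                                                    // locked manually...
    boost::unique_lock<boost::mutex> l3(mtx, boost::adopt_lock);   // ...ownership adopted by l3
}   // l3 unlocks mtx on destruction
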
// #include <boost/thread/locks.hpp> 
// #include <boost/thread/lock_guard.hpp> 

namespace boost
{

  template<typename Lockable>
  class lock_guard;
#if ! defined BOOST_THREAD_NO_MAKE_LOCK_GUARD
  template <typename Lockable>
  lock_guard<Lockable> make_lock_guard(Lockable& mtx); // EXTENSION
  template <typename Lockable>
  lock_guard<Lockable> make_lock_guard(Lockable& mtx, adopt_lock_t); // EXTENSION
#endif
}
// #include <boost/thread/locks.hpp>
// #include <boost/thread/lock_guard.hpp> 

template<typename Lockable>
class lock_guard
{
public:
    explicit lock_guard(Lockable& m_);
    lock_guard(Lockable& m_,boost::adopt_lock_t);

    ~lock_guard();
};

boost::lock_guard is very simple: on construction it acquires ownership of the implementation of the Lockable concept supplied as the constructor parameter. On destruction, the ownership is released. This provides simple RAII-style locking of a Lockable object, to facilitate exception-safe locking and unlocking. In addition, the lock_guard(Lockable & m,boost::adopt_lock_t) constructor allows the boost::lock_guard object to take ownership of a lock already held by the current thread.
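
A minimal sketch of both constructors follows; the mutex and counter names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/lock_guard.hpp>

boost::mutex mtx;
int shared_counter = 0;

void increment()
{
    boost::lock_guard<boost::mutex> guard(mtx);  // locks mtx
    ++shared_counter;
}   // mtx unlocked here, even if an exception is thrown

void increment_adopting()
{
    mtx.lock();                                                     // already locked by this thread
    boost::lock_guard<boost::mutex> guard(mtx, boost::adopt_lock);  // takes over the existing lock
    ++shared_counter;
}   // mtx unlocked by guard's destructor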

Effects:

Stores a reference to m. Invokes m.lock().

Throws:

Any exception thrown by the call to m.lock().

Precondition:

The current thread owns a lock on m equivalent to one obtained by a call to m.lock().

Effects:

Stores a reference to m. Takes ownership of the lock state of m.

Throws:

Nothing.

Effects:

Invokes m.unlock() on the Lockable object passed to the constructor.

Throws:

Nothing.

template <typename Lockable>
lock_guard<Lockable> make_lock_guard(Lockable& m); // EXTENSION

Returns:

a lock_guard as if initialized with {m}.

Throws:

Any exception thrown by the call to m.lock().

template <typename Lockable>
lock_guard<Lockable> make_lock_guard(Lockable& m, adopt_lock_t); // EXTENSION

Returns:

a lock_guard as if initialized with {m, adopt_lock}.

Throws:

Nothing.

// #include <boost/thread/with_lock_guard.hpp>

namespace boost
{
  template <class Lockable, class Function, class... Args>
  auto with_lock_guard(Lockable& m, Function&& func, Args&&... args) -> decltype(func(boost::forward<Args>(args)...));
}
template <class Lockable, class Function, class... Args>
auto with_lock_guard(
    Lockable& m,
    Function&& func,
    Args&&... args
) -> decltype(func(boost::forward<Args>(args)...));

Precondition:

m must be in an unlocked state.

Effects:

Calls func within a scope in which m is locked.

Returns:

The result of the func(args...) call.

Throws:

Any exception thrown by the call to m.lock() or by func(args...).

Postcondition:

m is in an unlocked state.

Limitations:

Without C++11 variadic template support, the number of arguments is limited to 4.

Without rvalue reference support, a class method called through boost::bind must be const.

For correct use with lambdas, the macro BOOST_RESULT_OF_USE_DECLTYPE may need to be defined.
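
For illustration only, a minimal sketch of calling a free function under a lock with with_lock_guard. The mutex, counter and function are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/with_lock_guard.hpp>

boost::mutex mtx;
int counter = 0;

int add(int value)
{
    counter += value;   // executed while mtx is locked
    return counter;
}

int example()
{
    // mtx is locked before add(5) runs and unlocked afterwards,
    // even if add throws.
    return boost::with_lock_guard(mtx, add, 5);
}
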

// #include <boost/thread/lock_concepts.hpp> 

namespace boost
{

  template<typename Lock>
  class StrictLock;
}

A StrictLock is a lock that ensures that the associated mutex is locked during the lifetime of the lock.

A type L meets the StrictLock requirements if the following expressions are well-formed and have the specified semantics:

  • L::mutex_type
  • is_strict_lock<L>
  • cl.owns_lock(m);

and BasicLockable<L::mutex_type>

where

  • cl denotes a value of type L const&,
  • m denotes a value of type L::mutex_type const*,

The type L::mutex_type denotes the mutex that is locked by this lock.

As the semantic "ensures that the associated mutex is locked during the lifetime of the lock. " can not be described by syntactic requirements a is_strict_lock_sur_parole trait must be specialized by the user defining the lock so that the following assertion is true:

is_strict_lock_sur_parole<L>::value == true

Return Type:

bool

Returns:

true if the strict lock is locking the mutex pointed to by m, false otherwise.

Throws:

Nothing.

The following classes are models of StrictLock:

  • strict_lock: ensured by construction,
  • nested_strict_lock: "sur parole", as the user could have used the adopt_lock_t constructor overload of unique_lock without having locked the mutex,
  • boost::lock_guard: "sur parole", as the user could have used the adopt_lock_t constructor overload without having locked the mutex.
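
As an illustration only, a minimal sketch of a user-defined lock declared strict "sur parole". The class my_strict_lock and the choice of header are assumptions made for the example, not part of the library.

#include <boost/thread/mutex.hpp>
#include <boost/thread/strict_lock.hpp>   // assumed to make is_strict_lock_sur_parole visible

// Hypothetical lock that keeps its mutex locked for its whole lifetime.
class my_strict_lock
{
public:
    typedef boost::mutex mutex_type;

    explicit my_strict_lock(mutex_type& m) : m_(m) { m_.lock(); }
    ~my_strict_lock() { m_.unlock(); }

    bool owns_lock(mutex_type const* m) const { return m == &m_; }

private:
    my_strict_lock(my_strict_lock const&);              // non-copyable
    my_strict_lock& operator=(my_strict_lock const&);
    mutex_type& m_;
};

namespace boost
{
    // Declare "sur parole" that my_strict_lock meets the StrictLock semantics.
    template <>
    struct is_strict_lock_sur_parole<my_strict_lock>
    {
        static const bool value = true;
    };
}
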
// #include <boost/thread/locks.hpp> 
// #include <boost/thread/lock_types.hpp> 

namespace boost
{

  template<typename Lockable>
  class unique_lock;
  template<typename Mutex>
  void swap(unique_lock <Mutex>& lhs, unique_lock <Mutex>& rhs);
  template<typename Lockable>
  class shared_lock; // C++14
  template<typename Mutex>
  void swap(shared_lock<Mutex>& lhs,shared_lock<Mutex>& rhs); // C++14
  template<typename Lockable>
  class upgrade_lock; // EXTENSION
  template<typename Mutex>
  void swap(upgrade_lock <Mutex>& lhs, upgrade_lock <Mutex>& rhs); // EXTENSION
  template <class Mutex>
  class upgrade_to_unique_lock; // EXTENSION
}
// #include <boost/thread/locks.hpp>
// #include <boost/thread/lock_types.hpp> 

template<typename Lockable>
class unique_lock
{
public:
    typedef Lockable mutex_type;
    unique_lock() noexcept;
    explicit unique_lock(Lockable& m_);
    unique_lock(Lockable& m_,adopt_lock_t);
    unique_lock(Lockable& m_,defer_lock_t) noexcept;
    unique_lock(Lockable& m_,try_to_lock_t);

#ifdef BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS
    unique_lock(shared_lock<mutex_type>&& sl, try_to_lock_t); // C++14 
    template <class Clock, class Duration>
    unique_lock(shared_lock<mutex_type>&& sl,
                const chrono::time_point<Clock, Duration>& abs_time); // C++14
    template <class Rep, class Period>
    unique_lock(shared_lock<mutex_type>&& sl,
                const chrono::duration<Rep, Period>& rel_time); // C++14
#endif

    template <class Clock, class Duration>
    unique_lock(Lockable& m_, const chrono::time_point<Clock, Duration>& t);
    template <class Rep, class Period>
    unique_lock(Lockable& m_, const chrono::duration<Rep, Period>& d);
    ~unique_lock();

    unique_lock(unique_lock const&) = delete;
    unique_lock& operator=(unique_lock const&) = delete;
    unique_lock(unique_lock<Lockable>&& other) noexcept;
    explicit unique_lock(upgrade_lock<Lockable>&& other) noexcept; // EXTENSION

    unique_lock& operator=(unique_lock<Lockable>&& other) noexcept;

    void swap(unique_lock& other) noexcept;
    Lockable* release() noexcept;

    void lock();
    bool try_lock();

    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);

    void unlock();

    explicit operator bool() const noexcept;
    bool owns_lock() const noexcept;

    mutex_type* mutex() const noexcept;

#if defined BOOST_THREAD_USE_DATE_TIME || defined BOOST_THREAD_DONT_USE_CHRONO
    unique_lock(Lockable& m_,system_time const& target_time);
    template<typename TimeDuration>
    bool timed_lock(TimeDuration const& relative_time);
    bool timed_lock(::boost::system_time const& absolute_time);
#endif

};

boost::unique_lock is more complex than boost::lock_guard: not only does it provide for RAII-style locking, it also allows for deferring acquiring the lock until the lock() member function is called explicitly, or trying to acquire the lock in a non-blocking fashion, or with a timeout. Consequently, unlock() is only called in the destructor if the lock object has locked the Lockable object, or otherwise adopted a lock on the Lockable object.

Specializations of boost::unique_lock model the TimedLockable concept if the supplied Lockable type itself models TimedLockable concept (e.g. boost::unique_lock<boost::timed_mutex>), or the Lockable concept if the supplied Lockable type itself models Lockable concept (e.g. boost::unique_lock<boost::mutex>), or the BasicLockable concept if the supplied Lockable type itself models BasicLockable concept.

An instance of boost::unique_lock is said to own the lock state of a Lockable m if mutex() returns a pointer to m and owns_lock() returns true. If an object that owns the lock state of a Lockable object is destroyed, then the destructor will invoke mutex()->unlock().

The member functions of boost::unique_lock are not thread-safe. In particular, boost::unique_lock is intended to model the ownership of a Lockable object by a particular thread, and the member functions that release ownership of the lock state (including the destructor) must be called by the same thread that acquired ownership of the lock state.
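
For illustration, a sketch that combines deferred locking with boost::lock to acquire two mutexes without deadlock. The mutex names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex mtx_a;
boost::mutex mtx_b;

void transfer()
{
    // Construct both locks without locking, then lock them together
    // in a deadlock-avoiding order.
    boost::unique_lock<boost::mutex> la(mtx_a, boost::defer_lock);
    boost::unique_lock<boost::mutex> lb(mtx_b, boost::defer_lock);
    boost::lock(la, lb);

    // ... work on the data protected by both mutexes ...
}   // both locks released on destruction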

Effects:

Creates a lock object with no associated mutex.

Postcondition:

owns_lock() returns false. mutex() returns NULL.

Throws:

Nothing.

Effects:

Stores a reference to m. Invokes m.lock().

Postcondition:

owns_lock() returns true. mutex() returns &m.

Throws:

Any exception thrown by the call to m.lock().

Precondition:

The current thread owns an exclusive lock on m.

Effects:

Stores a reference to m. Takes ownership of the lock state of m.

Postcondition:

owns_lock() returns true. mutex() returns &m.

Throws:

Nothing.

Effects:

Stores a reference to m.

Postcondition:

owns_lock() returns false. mutex() returns &m.

Throws:

Nothing.

Effects:

Stores a reference to m. Invokes m.try_lock(), and takes ownership of the lock state if the call returns true.

Postcondition:

mutex() returns &m. If the call to try_lock() returned true, then owns_lock() returns true, otherwise owns_lock() returns false.

Throws:

Nothing.

Requires:

The supplied Mutex type must implement try_unlock_shared_and_lock().

Effects:

Constructs an object of type boost::unique_lock. Let pm be the stored pointer to the mutex and owns the stored ownership state. Initializes pm with nullptr and owns with false. If sl.owns_lock() returns false, sets pm to the return value of sl.release(). Otherwise sl.owns_lock() returns true, and in this case, if sl.mutex()->try_unlock_shared_and_lock() returns true, sets pm to the value returned by sl.release() and sets owns to true.

Note:

If sl.owns_lock() returns true and sl.mutex()->try_unlock_shared_and_lock() returns false, sl is not modified.

Throws:

Nothing.

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS is defined and, on the Windows platform, BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined.

template <class Clock, class Duration>
unique_lock(shared_lock<mutex_type>&& sl,
            const chrono::time_point<Clock, Duration>& abs_time);

Requires:

The supplied Mutex type shall implement try_unlock_shared_and_lock_until(abs_time).

Effects:

Constructs an object of type boost::unique_lock, initializing pm with nullptr and owns with false. If sl.owns_lock() returns false, sets pm to the return value of sl.release(). Otherwise sl.owns_lock() returns true, and in this case, if sl.mutex()->try_unlock_shared_and_lock_until(abs_time) returns true, sets pm to the value returned by sl.release() and sets owns to true.

Note:

If sl.owns_lock() returns true and sl.mutex()->try_unlock_shared_and_lock_until(abs_time) returns false, sl is not modified.

Throws:

Nothing.

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS is defined and, on the Windows platform, BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined.

template <class Rep, class Period>
unique_lock(shared_lock<mutex_type>&& sl,
            const chrono::duration<Rep, Period>& rel_time)

Requires:

The supplied Mutex type shall implement try_unlock_shared_and_lock_for(rel_time).

Effects:

Constructs an object of type boost::unique_lock, initializing pm with nullptr and owns with false. If sl.owns_lock() returns false, sets pm to the return value of sl.release(). Otherwise sl.owns_lock() returns true, and in this case, if sl.mutex()->try_unlock_shared_and_lock_for(rel_time) returns true, sets pm to the value returned by sl.release() and sets owns to true.

Note:

If sl.owns_lock() returns true and sl.mutex()->try_unlock_shared_and_lock_for(rel_time) returns false, sl is not modified.

Throws:

Nothing.

Notes:

Available only if BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS is defined and, on the Windows platform, BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN is defined.

Effects:

Stores a reference to m. Invokes m.timed_lock(abs_time), and takes ownership of the lock state if the call returns true.

Postcondition:

mutex() returns &m. If the call to timed_lock() returned true, then owns_lock() returns true, otherwise owns_lock() returns false.

Throws:

Any exceptions thrown by the call to m.timed_lock(abs_time).

Effects:

Stores a reference to m. Invokes m.try_lock_until(abs_time), and takes ownership of the lock state if the call returns true.

Postcondition:

mutex() returns &m. If the call to try_lock_until returned true, then owns_lock() returns true, otherwise owns_lock() returns false.

Throws:

Any exceptions thrown by the call to m.try_lock_until(abs_time).

Effects:

Stores a reference to m. Invokes m.try_lock_for(rel_time), and takes ownership of the lock state if the call returns true.

Postcondition:

mutex() returns &m. If the call to try_lock_for returned true, then owns_lock() returns true, otherwise owns_lock() returns false.

Throws:

Any exceptions thrown by the call to m.try_lock_for(rel_time).

Effects:

Invokes mutex()->unlock() if owns_lock() returns true.

Throws:

Nothing.

Returns:

true if *this owns the lock on the Lockable object associated with *this.

Throws:

Nothing.

Returns:

A pointer to the Lockable object associated with *this, or NULL if there is no such object.

Throws:

Nothing.

Returns:

owns_lock().

Throws:

Nothing.

Effects:

The association between *this and the Lockable object is removed, without affecting the lock state of the Lockable object. If owns_lock() would have returned true, it is the responsibility of the calling code to ensure that the Lockable is correctly unlocked.

Returns:

A pointer to the Lockable object associated with *this at the point of the call, or NULL if there is no such object.

Throws:

Nothing.

Postcondition:

*this is no longer associated with any Lockable object. mutex() returns NULL and owns_lock() returns false.

// #include <boost/thread/locks.hpp>
// #include <boost/thread/lock_types.hpp> 

template<typename Lockable>
class shared_lock
{
public:
    typedef Lockable mutex_type;

    // Shared locking
    shared_lock();
    explicit shared_lock(Lockable& m_);
    shared_lock(Lockable& m_,adopt_lock_t);
    shared_lock(Lockable& m_,defer_lock_t);
    shared_lock(Lockable& m_,try_to_lock_t);
    template <class Clock, class Duration>
    shared_lock(Lockable& m_, const chrono::time_point<Clock, Duration>& t);
    template <class Rep, class Period>
    shared_lock(Lockable& m_, const chrono::duration<Rep, Period>& d);
    ~shared_lock();

    shared_lock(shared_lock const&) = delete;
    shared_lock& operator=(shared_lock const&) = delete;

    shared_lock(shared_lock<Lockable> && other);
    shared_lock& operator=(shared_lock<Lockable> && other);

    void lock();
    bool try_lock();
    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();

    // Conversion from upgrade locking
    explicit shared_lock(upgrade_lock<Lockable> && other); // EXTENSION

    // Conversion from exclusive locking
    explicit shared_lock(unique_lock<Lockable> && other);

    // Setters
    void swap(shared_lock& other);
    mutex_type* release() noexcept;

    // Getters
    explicit operator bool() const;
    bool owns_lock() const;
    mutex_type* mutex() const;

#if defined BOOST_THREAD_USE_DATE_TIME || defined BOOST_THREAD_DONT_USE_CHRONO
    shared_lock(Lockable& m_,system_time const& target_time);
    bool timed_lock(boost::system_time const& target_time);
#endif
};

Like boost::unique_lock, boost::shared_lock models the Lockable concept, but rather than acquiring unique ownership of the supplied Lockable object, locking an instance of boost::shared_lock acquires shared ownership.

Like boost::unique_lock, not only does it provide for RAII-style locking, it also allows for deferring acquiring the lock until the lock() member function is called explicitly, or trying to acquire the lock in a non-blocking fashion, or with a timeout. Consequently, unlock() is only called in the destructor if the lock object has locked the Lockable object, or otherwise adopted a lock on the Lockable object.

An instance of boost::shared_lock is said to own the lock state of a Lockable m if mutex() returns a pointer to m and owns_lock() returns true. If an object that owns the lock state of a Lockable object is destroyed, then the destructor will invoke mutex()->unlock_shared().

The member functions of boost::shared_lock are not thread-safe. In particular, boost::shared_lock is intended to model the shared ownership of a Lockable object by a particular thread, and the member functions that release ownership of the lock state (including the destructor) must be called by the same thread that acquired ownership of the lock state.
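
A minimal sketch of readers using boost::shared_lock and a writer using boost::unique_lock on the same boost::shared_mutex. The data and mutex names are assumptions made for the example.

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <vector>

boost::shared_mutex data_mtx;
std::vector<int> data;

int read_first()
{
    boost::shared_lock<boost::shared_mutex> lk(data_mtx);  // shared ownership
    return data.empty() ? 0 : data.front();
}   // unlock_shared() on destruction

void append(int value)
{
    boost::unique_lock<boost::shared_mutex> lk(data_mtx);  // exclusive ownership
    data.push_back(value);
}   // unlock() on destruction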

Effects:

Creates a lock object with no associated mutex.

Postcondition:

owns_lock() returns false. mutex() returns NULL.

Throws:

Nothing.

Effects:

Stores a reference to m. Invokes m.lock_shared().

Postcondition:

owns_lock() returns true. mutex() returns &m.

Throws:

Any exception thrown by the call to m.lock_shared().

Precondition:

The current thread owns a shared lock on m, equivalent to one obtained by a call to m.lock_shared().

Effects:

Stores a reference to m. Takes ownership of the lock state of m.

Postcondition:

owns_lock() returns true. mutex() returns &m.

Throws:

Nothing.

Effects:

Stores a reference to m.

Postcondition:

owns_lock() returns false. mutex() returns &m.

Throws:

Nothing.

Effects:

Stores a reference to m. Invokes m.try_lock_shared(), and takes ownership of the lock state if the call returns true.

Postcondition:

mutex() returns &m. If the call to try_lock_shared() returned true, then owns_lock() returns true, otherwise owns_lock() returns false.

Throws:

Nothing.

Effects:

Stores a reference to m. Invokes m.timed_lock_shared(abs_time), and takes ownership of the lock state if the call returns true.

Postcondition:

mutex() returns &m. If the call to timed_lock_shared() returned true, then owns_lock() returns true, otherwise owns_lock() returns false.

Throws:

Any exceptions thrown by the call to m.timed_lock_shared(abs_time).

Effects:

Invokes mutex()->unlock_shared() if owns_lock() returns true.

Throws:

Nothing.

Returns:

true if *this owns the lock on the Lockable object associated with *this.

Throws:

Nothing.

Returns:

A pointer to the Lockable object associated with *this, or NULL if there is no such object.

Throws:

Nothing.

Returns:

owns_lock().

Throws:

Nothing.

Effects:

The association between *this and the Lockable object is removed, without affecting the lock state of the Lockable object. If owns_lock() would have returned true, it is the responsibility of the calling code to ensure that the Lockable is correctly unlocked.

Returns:

A pointer to the Lockable object associated with *this at the point of the call, or NULL if there is no such object.

Throws:

Nothing.

Postcondition:

*this is no longer associated with any Lockable object. mutex() returns NULL and owns_lock() returns false.

// #include <boost/thread/locks.hpp>
// #include <boost/thread/lock_types.hpp> 

template<typename Lockable>
class upgrade_lock
{
public:
    typedef Lockable mutex_type;

    // Upgrade locking

    upgrade_lock();
    explicit upgrade_lock(mutex_type& m_);
    upgrade_lock(mutex_type& m, defer_lock_t) noexcept;
    upgrade_lock(mutex_type& m, try_to_lock_t);
    upgrade_lock(mutex_type& m, adopt_lock_t);
    template <class Clock, class Duration>
    upgrade_lock(mutex_type& m,
                 const chrono::time_point<Clock, Duration>& abs_time);
    template <class Rep, class Period>
    upgrade_lock(mutex_type& m,
                 const chrono::duration<Rep, Period>& rel_time);
    ~upgrade_lock();

    upgrade_lock(const upgrade_lock& other) = delete;
    upgrade_lock& operator=(const upgrade_lock<Lockable> & other) = delete;

    upgrade_lock(upgrade_lock<Lockable> && other);
    upgrade_lock& operator=(upgrade_lock<Lockable> && other);

    void lock();
    bool try_lock();
    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();

#ifdef BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS
   // Conversion from shared locking
    upgrade_lock(shared_lock<mutex_type>&& sl, try_to_lock_t);
    template <class Clock, class Duration>
    upgrade_lock(shared_lock<mutex_type>&& sl,
                   const chrono::time_point<Clock, Duration>& abs_time);
    template <class Rep, class Period>
    upgrade_lock(shared_lock<mutex_type>&& sl,
                   const chrono::duration<Rep, Period>& rel_time);
#endif

    // Conversion from exclusive locking
    explicit upgrade_lock(unique_lock<Lockable> && other);

    // Setters
    void swap(upgrade_lock& other);
    mutex_type* release() noexcept;

    // Getters
    explicit operator bool() const;
    bool owns_lock() const;
    mutex_type* mutex() const;
};

Like boost::unique_lock, boost::upgrade_lock models the Lockable concept, but rather than acquiring unique ownership of the supplied Lockable object, locking an instance of boost::upgrade_lock acquires upgrade ownership.

Like boost::unique_lock, not only does it provide for RAII-style locking, it also allows for deferring acquiring the lock until the lock() member function is called explicitly, or trying to acquire the lock in a non-blocking fashion, or with a timeout. Consequently, unlock() is only called in the destructor if the lock object has locked the Lockable object, or otherwise adopted a lock on the Lockable object.

An instance of boost::upgrade_lock is said to own the lock state of a Lockable m if mutex() returns a pointer to m and owns_lock() returns true. If an object that owns the lock state of a Lockable object is destroyed, then the destructor will invoke mutex()->unlock_upgrade().

The member functions of boost::upgrade_lock are not thread-safe. In particular, boost::upgrade_lock is intended to model the upgrade ownership of a UpgradeLockable object by a particular thread, and the member functions that release ownership of the lock state (including the destructor) must be called by the same thread that acquired ownership of the lock state.

// #include <boost/thread/locks.hpp>
// #include <boost/thread/lock_types.hpp> 

template <class Lockable>
class upgrade_to_unique_lock
{
public:
    typedef Lockable mutex_type;
    explicit upgrade_to_unique_lock(upgrade_lock<Lockable>& m_);
    ~upgrade_to_unique_lock();

    upgrade_to_unique_lock(upgrade_to_unique_lock const& other) = delete;
    upgrade_to_unique_lock& operator=(upgrade_to_unique_lock<Lockable> const& other) = delete;

    upgrade_to_unique_lock(upgrade_to_unique_lock<Lockable> && other);
    upgrade_to_unique_lock& operator=(upgrade_to_unique_lock<Lockable> && other);

    void swap(upgrade_to_unique_lock& other);

    explicit operator bool() const;
    bool owns_lock() const;
    mutex_type* mutex() const;

};

boost::upgrade_to_unique_lock allows for a temporary upgrade of a boost::upgrade_lock to exclusive ownership. When constructed with a reference to an instance of boost::upgrade_lock, if that instance has upgrade ownership on some Lockable object, that ownership is upgraded to exclusive ownership. When the boost::upgrade_to_unique_lock instance is destroyed, the ownership of the Lockable is downgraded back to upgrade ownership.
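
For illustration, a sketch of reading under an upgrade lock and temporarily upgrading to exclusive ownership. The mutex, cache and function names are assumptions made for the example; boost::upgrade_mutex is used as the UpgradeLockable type.

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>
#include <string>

boost::upgrade_mutex cache_mtx;
std::map<std::string, int> cache;

int get_or_insert(const std::string& key)
{
    boost::upgrade_lock<boost::upgrade_mutex> lk(cache_mtx);  // upgrade ownership; readers may coexist
    std::map<std::string, int>::const_iterator it = cache.find(key);
    if (it != cache.end())
        return it->second;

    // Temporarily upgrade to exclusive ownership to modify the cache.
    boost::upgrade_to_unique_lock<boost::upgrade_mutex> ulk(lk);
    int value = static_cast<int>(cache.size());
    cache[key] = value;
    return value;
}   // ulk downgrades back to upgrade ownership; lk then releases it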

class MutexType::scoped_try_lock
{
private:
    MutexType::scoped_try_lock(MutexType::scoped_try_lock<MutexType>& other);
    MutexType::scoped_try_lock& operator=(MutexType::scoped_try_lock<MutexType>& other);
public:
    MutexType::scoped_try_lock();
    explicit MutexType::scoped_try_lock(MutexType& m);
    MutexType::scoped_try_lock(MutexType& m_,adopt_lock_t);
    MutexType::scoped_try_lock(MutexType& m_,defer_lock_t);
    MutexType::scoped_try_lock(MutexType& m_,try_to_lock_t);

    MutexType::scoped_try_lock(MutexType::scoped_try_lock<MutexType>&& other);
    MutexType::scoped_try_lock& operator=(MutexType::scoped_try_lock<MutexType>&& other);

    void swap(MutexType::scoped_try_lock&& other);

    void lock();
    bool try_lock();
    void unlock();

    MutexType* mutex() const;
    MutexType* release();

    explicit operator bool() const;
    bool owns_lock() const;
};

The member typedef scoped_try_lock is provided for each distinct MutexType as a typedef to a class with the preceding definition. The semantics of each constructor and member function are identical to those of boost::unique_lock<MutexType> for the same MutexType, except that the constructor that takes a single reference to a mutex will call m.try_lock() rather than m.lock().
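
For illustration only, a minimal sketch with boost::mutex::scoped_try_lock. The mutex and function names are assumptions made for the example.

#include <boost/thread/mutex.hpp>

boost::mutex mtx;

bool try_do_work()
{
    boost::mutex::scoped_try_lock lk(mtx);  // calls mtx.try_lock()
    if (!lk.owns_lock())
        return false;       // another thread holds the mutex, give up

    // ... work while mtx is locked ...
    return true;
}   // mtx unlocked here if it was acquired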

// #include <boost/thread/locks.hpp> 
// #include <boost/thread/strict_lock.hpp> 

namespace boost
{

  template<typename Lockable>
  class strict_lock;
  template <typename Lock>
  class nested_strict_lock;
  template <typename Lockable>
  struct is_strict_lock_sur_parole<strict_lock<Lockable> >;
  template <typename Lock>
  struct is_strict_lock_sur_parole<nested_strict_lock<Lock> >;

#if ! defined BOOST_THREAD_NO_MAKE_STRICT_LOCK
  template <typename Lockable>
  strict_lock<Lockable> make_strict_lock(Lockable& mtx);
#endif
#if ! defined BOOST_THREAD_NO_MAKE_NESTED_STRICT_LOCK
  template <typename Lock>
  nested_strict_lock<Lock> make_nested_strict_lock(Lock& lk);
#endif

}
// #include <boost/thread/locks.hpp>
// #include <boost/thread/strict_lock.hpp> 

template<typename BasicLockable>
class strict_lock
{
public:
    typedef BasicLockable mutex_type;
    strict_lock(strict_lock const& m_) = delete;
    strict_lock& operator=(strict_lock const& m_) = delete;
    explicit strict_lock(mutex_type& m_);
    ~strict_lock();

    bool owns_lock(mutex_type const* l) const noexcept;
};

strict_lock is a model of StrictLock.

strict_lock is the simplest StrictLock: on construction it acquires ownership of the implementation of the BasicLockable concept supplied as the constructor parameter. On destruction, the ownership is released. This provides simple RAII-style locking of a BasicLockable object, to facilitate exception-safe locking and unlocking.

See also boost::lock_guard
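
A minimal sketch of strict_lock usage; the mutex and data names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/strict_lock.hpp>

boost::mutex mtx;
int shared_value = 0;

void update(int v)
{
    boost::strict_lock<boost::mutex> lk(mtx);  // mtx is locked for the whole scope
    shared_value = v;
}   // mtx unlocked on destruction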

Effects:

Stores a reference to m. Invokes m.lock().

Throws:

Any exception thrown by the call to m.lock().

Effects:

Invokes m.unlock() on the Lockable object passed to the constructor.

Throws:

Nothing.

// #include <boost/thread/locks.hpp>
// #include <boost/thread/strict_lock.hpp> 

template<typename Lock>
class nested_strict_lock
{
public:
    typedef typename Lock::mutex_type mutex_type;
    nested_strict_lock(nested_strict_lock const& m_) = delete;
    nested_strict_lock& operator=(nested_strict_lock const& m_) = delete;
    explicit nested_strict_lock(Lock& lk);
    ~nested_strict_lock() noexcept;

    bool owns_lock(mutex_type const* l) const noexcept;
};

nested_strict_lock is a model of StrictLock.

A nested strict lock is a scoped lock guard ensuring a mutex is locked on its scope, by taking ownership of a nesting lock, locking the mutex on construction if not already locked and restoring the ownership to the nesting lock on destruction.

See also strict_lock, boost::unique_lock
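
For illustration, a sketch that guarantees the mutex is locked inside a function receiving a unique_lock. The mutex, data and function names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/strict_lock.hpp>

boost::mutex mtx;
int shared_value = 0;

void update(boost::unique_lock<boost::mutex>& lk, int v)
{
    // Takes ownership from lk and locks the mutex if lk did not already own it.
    boost::nested_strict_lock<boost::unique_lock<boost::mutex> > nlk(lk);
    shared_value = v;
}   // ownership is restored to lk; the mutex stays locked by lk

void caller()
{
    boost::unique_lock<boost::mutex> lk(mtx, boost::defer_lock);
    update(lk, 42);     // locked inside update()
    // lk owns the (still locked) mutex here
}   // lk unlocks mtx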

Requires:

lk.mutex() != nullptr.

Effects:

Stores a reference to the lock parameter lk and takes ownership of it. If lk does not own the mutex, locks it.

Postcondition:

owns_lock(lk.mutex()).

Throws:

- lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and lk.mutex() == nullptr

- Any exception that lk.lock() can throw.

Effects:

Restores ownership to the nesting lock.

Returns:

true if this lock is locking the mutex pointed to by l, false otherwise.

template <typename Lockable>
strict_lock<Lockable> make_strict_lock(Lockable& m); // EXTENSION

Returns:

a strict_lock as if initialized with {m}.

Throws:

Any exception thrown by the call to m.lock().

template <typename Lock>
nested_strict_lock<Lock> make_nested_strict_lock(Lock& lk); // EXTENSION

Returns:

a nested_strict_lock as if initialized with {lk}.

Throws:

Any exception thrown by the call to lk.lock().

// #include <boost/thread/synchronized_value.hpp> 
// #include <boost/thread/strict_lock_ptr.hpp> 

namespace boost
{

  template<typename T, typename Lockable = mutex>
  class strict_lock_ptr;
  template<typename T, typename Lockable = mutex>
  class const_strict_lock_ptr;
}
// #include <boost/thread/synchronized_value.hpp> 
// #include <boost/thread/strict_lock_ptr.hpp> 


template <typename T, typename Lockable = mutex>
class const_strict_lock_ptr
{
public:
  typedef T value_type;
  typedef Lockable mutex_type;

  const_strict_lock_ptr(const_strict_lock_ptr const& m_) = delete;
  const_strict_lock_ptr& operator=(const_strict_lock_ptr const& m_) = delete;

  const_strict_lock_ptr(T const& val, Lockable & mtx);
  const_strict_lock_ptr(T const& val, Lockable & mtx, adopt_lock_t tag);

  ~const_strict_lock_ptr();

  const T* operator->() const;
  const T& operator*() const;

};
const_strict_lock_ptr(T const& val, Lockable & m);

Effects:

Invokes m.lock() and stores a reference to m and to the value val.

Throws:

Any exception thrown by the call to m.lock().

const_strict_lock_ptr(T const& val, Lockable & m, adopt_lock_t tag);

Effects:

Stores a reference to m and to the value val.

Throws:

Nothing.

~const_strict_lock_ptr();

Effects:

Invokes m.unlock() on the Lockable object passed to the constructor.

Throws:

Nothing.

const T* operator->() const;

Return:

A constant pointer to the protected value.

Throws:

Nothing.

const T& operator*() const;

Return:

A constant reference to the protected value.

Throws:

Nothing.

// #include <boost/thread/synchronized_value.hpp> 
// #include <boost/thread/strict_lock_ptr.hpp> 

template <typename T, typename Lockable = mutex>
class strict_lock_ptr : public const_strict_lock_ptr<T,Lockable>
{
public:
  strict_lock_ptr(strict_lock_ptr const& m_) = delete;
  strict_lock_ptr& operator=(strict_lock_ptr const& m_) = delete;

  strict_lock_ptr(T & val, Lockable & mtx);
  strict_lock_ptr(T & val, Lockable & mtx, adopt_lock_t tag);
  ~strict_lock_ptr();

  T* operator->();
  T& operator*();

};
strict_lock_ptr(T & val, Lockable & m);

Effects:

Invokes m.lock() and stores a reference to m and to the value val.

Throws:

Any exception thrown by the call to m.lock().

strict_lock_ptr(T & val, Lockable & m, adopt_lock_t tag);

Effects:

Stores a reference to m and to the value val.

Throws:

Nothing.

~strict_lock_ptr();

Effects:

Invokes m.unlock() on the Lockable object passed to the constructor.

Throws:

Nothing.

T* operator->();

Return:

A pointer to the protected value.

Throws:

Nothing.

T& operator*();

Return:

A reference to the protected value.

Throws:

Nothing.

// #include <boost/thread/externally_locked.hpp>
template <class T, typename MutexType = boost::mutex>
class externally_locked;
template <class T, typename MutexType>
class externally_locked<T&, MutexType>;

template <typename T, typename MutexType>
void swap(externally_locked<T, MutexType> & lhs, externally_locked<T, MutexType> & rhs);
// #include <boost/thread/externally_locked.hpp>

template <class T, typename MutexType>
class externally_locked
{
  //BOOST_CONCEPT_ASSERT(( CopyConstructible<T> ));
  BOOST_CONCEPT_ASSERT(( BasicLockable<MutexType> ));

public:
  typedef MutexType mutex_type;

  externally_locked(mutex_type& mtx, const T& obj);
  externally_locked(mutex_type& mtx,T&& obj);
  explicit externally_locked(mutex_type& mtx);
  externally_locked(externally_locked const& rhs);
  externally_locked(externally_locked&& rhs);
  externally_locked& operator=(externally_locked const& rhs);
  externally_locked& operator=(externally_locked&& rhs);

  // observers
  T& get(strict_lock<mutex_type>& lk);
  const T& get(strict_lock<mutex_type>& lk) const;

  template <class Lock>
  T& get(nested_strict_lock<Lock>& lk);
  template <class Lock>
  const T& get(nested_strict_lock<Lock>& lk) const;

  template <class Lock>
  T& get(Lock& lk);
  template <class Lock>
  T const& get(Lock& lk) const;

 mutex_type* mutex() const noexcept;

  // modifiers
  void lock();
  void unlock();
  bool try_lock();
  void swap(externally_locked&);
};

externally_locked is a model of Lockable. It cloaks an object of type T and provides full access to that object through its get member functions, provided you pass a reference to a strict lock object.

Only the specifics with respect to Lockable are described here.
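
A minimal sketch of how externally_locked forces callers to prove they hold the lock. The BankAccount and AccountManager types are hypothetical and exist only for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/strict_lock.hpp>
#include <boost/thread/externally_locked.hpp>

class BankAccount
{
    int balance_;
public:
    BankAccount() : balance_(0) {}
    void Deposit(int amount)  { balance_ += amount; }
    void Withdraw(int amount) { balance_ -= amount; }
};

class AccountManager
{
    boost::mutex mtx_;
    boost::externally_locked<BankAccount, boost::mutex> account_;
public:
    AccountManager() : account_(mtx_, BankAccount()) {}

    void Transfer(int amount)
    {
        boost::strict_lock<boost::mutex> guard(mtx_);
        // get() requires a strict lock on mtx_, so unlocked access cannot compile.
        account_.get(guard).Withdraw(amount);
        account_.get(guard).Deposit(amount);
    }
};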

externally_locked(mutex_type& mtx, const T& obj);

Requires:

T is a model of CopyConstructible.

Effects:

Constructs an externally locked object copying the cloaked type.

Throws:

Any exception thrown by the call to T(obj).

externally_locked(mutex_type& mtx,T&& obj);

Requires:

T is a model of Movable.

Effects:

Constructs an externally locked object by moving the cloaked type.

Throws:

Any exception thrown by the call to T(obj).

externally_locked(mutex_type& mtx);

Requires:

T is a model of DefaultConstructible.

Effects:

Constructs an externally locked object by default constructing the cloaked type.

Throws:

Any exception thrown by the call to T().

externally_locked(externally_locked&& rhs);

Requires:

T is a model of Movable.

Effects:

Move constructs an externally locked object by moving the cloaked type and copying the mutex reference

Throws:

Any exception thrown by the call to T(T&&).

externally_locked(externally_locked const& rhs);

Requires:

T is a model of Copyable.

Effects:

Copy constructs an externally locked object by copying the cloaked type and copying the mutex reference

Throws:

Any exception thrown by the call to T(T&).

externally_locked& operator=(externally_locked&& rhs);

Requires:

T is a model of Movable.

Effects:

Move assigns an externally locked object by moving the cloaked type and copying the mutex reference

Throws:

Any exception thrown by the call to T::operator=(T&&).

externally_locked& operator=(externally_locked const& rhs);

Requires:

T is a model of Copyable.

Effects:

Copy assigns an externally locked object by copying the cloaked type and copying the mutex reference

Throws:

Any exception thrown by the call to T::operator=(T&).

T& get(strict_lock<mutex_type>& lk);
const T& get(strict_lock<mutex_type>& lk) const;

Requires:

The lk parameter must be locking the associated mutex.

Returns:

A reference to the cloaked object

Throws:

lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and the run-time preconditions are not satisfied.

template <class Lock>
T& get(nested_strict_lock<Lock>& lk);
template <class Lock>
const T& get(nested_strict_lock<Lock>& lk) const;

Requires:

is_same<mutex_type, typename Lock::mutex_type> and the lk parameter must be locking the associated mutex.

Returns:

A reference to the cloaked object

Throws:

lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and the run-time preconditions are not satisfied.

template <class Lock>
T& get(Lock& lk);
template <class Lock>
T const& get(Lock& lk) const;

Requires:

Lock is a model of StrictLock, is_same<mutex_type, typename Lock::mutex_type> and the lk parameter must be locking the associated mutex.

Returns:

A reference to the cloaked object

Throws:

lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and the run-time preconditions are not satisfied.

// #include <boost/thread/externally_locked.hpp>

template <class T, typename MutexType>
class externally_locked<T&, MutexType>
{
  //BOOST_CONCEPT_ASSERT(( CopyConstructible<T> ));
  BOOST_CONCEPT_ASSERT(( BasicLockable<MutexType> ));

public:
  typedef MutexType mutex_type;

  externally_locked(mutex_type& mtx, T& obj);
  explicit externally_locked(mutex_type& mtx);
  externally_locked(externally_locked const& rhs) noexcept;
  externally_locked(externally_locked&& rhs) noexcept;
  externally_locked& operator=(externally_locked const& rhs) noexcept;
  externally_locked& operator=(externally_locked&& rhs) noexcept;

  // observers
  T& get(strict_lock<mutex_type>& lk);
  const T& get(strict_lock<mutex_type>& lk) const;

  template <class Lock>
  T& get(nested_strict_lock<Lock>& lk);
  template <class Lock>
  const T& get(nested_strict_lock<Lock>& lk) const;

  template <class Lock>
  T& get(Lock& lk);
  template <class Lock>
  T const& get(Lock& lk) const;

 mutex_type* mutex() const noexcept;

  // modifiers
  void lock();
  void unlock();
  bool try_lock();
  void swap(externally_locked&) noexcept;
};

This specialization of externally_locked is also a model of Lockable. It cloaks a reference to an object of type T and provides full access to that object through its get member functions, provided you pass a reference to a strict lock object.

Only the specifics with respect to Lockable are described here.

externally_locked<T&>(mutex_type& mtx, T& obj) noexcept;

Effects:

Constructs an externally locked object copying the cloaked reference.

externally_locked(externally_locked&& rhs) noexcept;

Effects:

Moves an externally locked object by moving the cloaked type and copying the mutex reference

externally_locked& operator=(externally_locked&& rhs);

Effects:

Move assigns an externally locked object by copying the cloaked reference and copying the mutex reference

externally_locked& operator=(externally_locked const& rhs);

Requires:

T is a model of Copyable.

Effects:

Copy assigns an externally locked object by copying the cloaked reference and copying the mutex reference

Throws:

Any exception thrown by the call to T::operator=(T&).

T& get(strict_lock<mutex_type>& lk);
const T& get(strict_lock<mutex_type>& lk) const;

Requires:

The lk parameter must be locking the associated mutex.

Returns:

A reference to the cloaked object

Throws:

lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and the run-time preconditions are not satisfied.

template <class Lock>
T& get(nested_strict_lock<Lock>& lk);
template <class Lock>
const T& get(nested_strict_lock<Lock>& lk) const;

Requires:

is_same<mutex_type, typename Lock::mutex_type> and the lk parameter must be locking the associated mutex.

Returns:

A reference to the cloaked object

Throws:

lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and the run-time preconditions are not satisfied.

template <class Lock>
T& get(Lock& lk);
template <class Lock>
T const& get(Lock& lk) const;

Requires:

Lock is a model of StrictLock, is_same<mutex_type, typename Lock::mutex_type> and the lk parameter must be locking the associated mutex.

Returns:

A reference to the cloaked object

Throws:

lock_error if BOOST_THREAD_THROW_IF_PRECONDITION_NOT_SATISFIED is defined and the run-time preconditions are not satisfied.

template <typename T, typename MutexType>
void swap(externally_locked<T, MutexType> & lhs, externally_locked<T, MutexType> & rhs);
// #include <boost/thread/shared_lock_guard.hpp>
namespace boost
{
  template<typename SharedLockable>
  class shared_lock_guard
  {
  public:
      shared_lock_guard(shared_lock_guard const&) = delete;
      shared_lock_guard& operator=(shared_lock_guard const&) = delete;

      explicit shared_lock_guard(SharedLockable& m_);
      shared_lock_guard(SharedLockable& m_,boost::adopt_lock_t);

      ~shared_lock_guard();
  };
}

shared_lock_guard is very simple: on construction it acquires shared ownership of the implementation of the SharedLockable concept supplied as the constructor parameter. On destruction, the ownership is released. This provides simple RAII-style locking of a SharedLockable object, to facilitate exception-safe shared locking and unlocking. In addition, the shared_lock_guard(SharedLockable &m, boost::adopt_lock_t) constructor allows the shared_lock_guard object to take shared ownership of a lock already held by the current thread.

Effects:

Stores a reference to m. Invokes m.lock_shared().

Throws:

Any exception thrown by the call to m.lock_shared().

Precondition:

The current thread owns a lock on m equivalent to one obtained by a call to m.lock_shared().

Effects:

Stores a reference to m. Takes ownership of the lock state of m.

Throws:

Nothing.

Effects:

Invokes m.unlock_shared() on the SharedLockable object passed to the constructor.

Throws:

Nothing.

// #include <boost/thread/reverse_lock.hpp>
namespace boost
{

  template<typename Lock>
  class reverse_lock
  {
  public:
      reverse_lock(reverse_lock const&) = delete;
      reverse_lock& operator=(reverse_lock const&) = delete;

      explicit reverse_lock(Lock& m_);
      ~reverse_lock();
  };
}

reverse_lock reverses the operations of a lock: it provides RAII-style behavior that unlocks the lock at construction time and locks it again at destruction time. In addition, it temporarily transfers ownership, so that the mutex cannot be locked through the Lock while the reverse_lock exists.

An instance of reverse_lock never owns the lock.
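
For illustration, a sketch of unlocking around a blocking call and re-locking afterwards. The mutex and function names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/reverse_lock.hpp>

boost::mutex mtx;

void do_expensive_io() { /* blocking work that must not hold the lock */ }

void process()
{
    boost::unique_lock<boost::mutex> lk(mtx);
    // ... work on shared state under the lock ...
    {
        boost::reverse_lock<boost::unique_lock<boost::mutex> > unlocker(lk);
        // mtx is unlocked here and lk no longer references it
        do_expensive_io();
    }   // mtx re-locked and given back to lk
    // ... continue working under the lock ...
}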

Effects:

Stores a reference to m. Invokes m.unlock() if m owns its lock, and then stores the mutex obtained by calling m.release().

Postcondition:

!m.owns_lock() && m.mutex()==0.

Throws:

Any exception thrown by the call to m.unlock().

Effects:

Let mtx be the stored mutex pointer. If it is not 0, invokes mtx->lock() and gives mtx back to the Lock using the adopt_lock_t constructor overload.

Throws:

Any exception thrown by mtx->lock().

Remarks:

Note that if mtx->lock() throws an exception while the stack is being unwound, the program will terminate, so don't use reverse_lock if mtx->lock() can throw.

// #include <boost/thread/locks.hpp>
// #include <boost/thread/lock_algorithms.hpp>
namespace boost
{

  template<typename Lockable1,typename Lockable2>
  void lock(Lockable1& l1,Lockable2& l2);

  template<typename Lockable1,typename Lockable2,typename Lockable3>
  void lock(Lockable1& l1,Lockable2& l2,Lockable3& l3);

  template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4>
  void lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4);

  template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4,typename Lockable5>
  void lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4,Lockable5& l5);

}

Effects:

Locks the Lockable objects supplied as arguments in an unspecified and indeterminate order in a way that avoids deadlock. It is safe to call this function concurrently from multiple threads for any set of mutexes (or other lockable objects) in any order without risk of deadlock. If any of the lock() or try_lock() operations on the supplied Lockable objects throws an exception any locks acquired by the function will be released before the function exits.

Throws:

Any exceptions thrown by calling lock() or try_lock() on the supplied Lockable objects.

Postcondition:

All the supplied Lockable objects are locked by the calling thread.
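
A minimal sketch of locking two mutexes together without deadlock risk and handing them to adopting guards. The mutex and function names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/lock_guard.hpp>

boost::mutex mtx_a;
boost::mutex mtx_b;

void update_both()
{
    boost::lock(mtx_a, mtx_b);                                      // both locked, deadlock-free
    boost::lock_guard<boost::mutex> ga(mtx_a, boost::adopt_lock);   // adopt the already-held locks
    boost::lock_guard<boost::mutex> gb(mtx_b, boost::adopt_lock);
    // ... work with the data protected by both mutexes ...
}   // both mutexes released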

template<typename ForwardIterator>
void lock(ForwardIterator begin,ForwardIterator end);

Preconditions:

The value_type of ForwardIterator must implement the Lockable concept

Effects:

Locks all the Lockable objects in the supplied range in an unspecified and indeterminate order in a way that avoids deadlock. It is safe to call this function concurrently from multiple threads for any set of mutexes (or other lockable objects) in any order without risk of deadlock. If any of the lock() or try_lock() operations on the Lockable objects in the supplied range throws an exception any locks acquired by the function will be released before the function exits.

Throws:

Any exceptions thrown by calling lock() or try_lock() on the supplied Lockable objects.

Postcondition:

All the Lockable objects in the supplied range are locked by the calling thread.

template<typename Lockable1,typename Lockable2>
int try_lock(Lockable1& l1,Lockable2& l2);

template<typename Lockable1,typename Lockable2,typename Lockable3>
int try_lock(Lockable1& l1,Lockable2& l2,Lockable3& l3);

template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4>
int try_lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4);

template<typename Lockable1,typename Lockable2,typename Lockable3,typename Lockable4,typename Lockable5>
int try_lock(Lockable1& l1,Lockable2& l2,Lockable3& l3,Lockable4& l4,Lockable5& l5);

Effects:

Calls try_lock() on each of the Lockable objects supplied as arguments. If any of the calls to try_lock() returns false then all locks acquired are released and the zero-based index of the failed lock is returned.

If any of the try_lock() operations on the supplied Lockable objects throws an exception any locks acquired by the function will be released before the function exits.

Returns:

-1 if all the supplied Lockable objects are now locked by the calling thread, the zero-based index of the object which could not be locked otherwise.

Throws:

Any exceptions thrown by calling try_lock() on the supplied Lockable objects.

Postcondition:

If the function returns -1, all the supplied Lockable objects are locked by the calling thread. Otherwise any locks acquired by this function will have been released.
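
For illustration, a sketch of using the return value of try_lock. The mutex and function names are assumptions made for the example.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex mtx_a;
boost::mutex mtx_b;

bool try_update_both()
{
    int failed = boost::try_lock(mtx_a, mtx_b);  // -1 on success, else index of the failed lock
    if (failed != -1)
        return false;            // nothing is locked at this point

    // ... both mutexes are locked here ...
    mtx_a.unlock();
    mtx_b.unlock();
    return true;
}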

template<typename ForwardIterator>
ForwardIterator try_lock(ForwardIterator begin,ForwardIterator end);

Preconditions:

The value_type of ForwardIterator must implement the Lockable concept

Effects:

Calls try_lock() on each of the Lockable objects in the supplied range. If any of the calls to try_lock() returns false then all locks acquired are released and an iterator referencing the failed lock is returned.

If any of the try_lock() operations on the supplied Lockable objects throws an exception any locks acquired by the function will be released before the function exits.

Returns:

end if all the supplied Lockable objects are now locked by the calling thread, an iterator referencing the object which could not be locked otherwise.

Throws:

Any exceptions thrown by calling try_lock() on the supplied Lockable objects.

Postcondition:

If the function returns end then all the Lockable objects in the supplied range are locked by the calling thread, otherwise all locks acquired by the function have been released.

namespace boost
{

  template <typename Lockable>
  unique_lock<Lockable> make_unique_lock(Lockable& mtx); // EXTENSION

  template <typename Lockable>
  unique_lock<Lockable> make_unique_lock(Lockable& mtx, adopt_lock_t); // EXTENSION
  template <typename Lockable>
  unique_lock<Lockable> make_unique_lock(Lockable& mtx, defer_lock_t); // EXTENSION
  template <typename Lockable>
  unique_lock<Lockable> make_unique_lock(Lockable& mtx, try_to_lock_t); // EXTENSION

#if ! defined(BOOST_THREAD_NO_MAKE_UNIQUE_LOCKS)
  template <typename ...Lockable>
  std::tuple<unique_lock<Lockable> ...> make_unique_locks(Lockable& ...mtx); // EXTENSION
#endif
}
template <typename Lockable>
unique_lock<Lockable> make_unique_lock(Lockable& mtx); // EXTENSION

Returns:

a boost::unique_lock as if initialized with unique_lock<Lockable>(mtx).

Throws:

Any exception thrown by the call to boost::unique_lock<Lockable>(mtx).

template <typename Lockable>
unique_lock<Lockable> make_unique_lock(Lockable& mtx, adopt_lock_t tag); // EXTENSION

template <typename Lockable>
unique_lock<Lockable> make_unique_lock(Lockable& mtx, defer_lock_t tag); // EXTENSION

template <typename Lockable>
unique_lock<Lockable> make_unique_lock(Lockable& mtx, try_to_lock_t tag); // EXTENSION

Returns:

a boost::unique_lock as if initialized with unique_lock<Lockable>(mtx, tag).

Throws:

Any exception thrown by the call to boost::unique_lock<Lockable>(mtx, tag).

template <typename ...Lockable>
std::tuple<unique_lock<Lockable> ...> make_unique_locks(Lockable& ...mtx); // EXTENSION

Effect:

Locks all the mutexes.

Returns:

a std::tuple of boost::unique_lock objects, each one owning one of the mutexes.

Throws:

Any exception thrown by boost::lock(mtx...).

#include <boost/thread/mutex.hpp>

class mutex:
    boost::noncopyable
{
public:
    mutex();
    ~mutex();

    void lock();
    bool try_lock();
    void unlock();

    typedef platform-specific-type native_handle_type;
    native_handle_type native_handle();

    typedef unique_lock<mutex> scoped_lock;
    typedef unspecified-type scoped_try_lock;
};

boost::mutex implements the Lockable concept to provide an exclusive-ownership mutex. At most one thread can own the lock on a given instance of boost::mutex at any time. Multiple concurrent calls to lock(), try_lock() and unlock() shall be permitted.

typedef platform-specific-type native_handle_type;
native_handle_type native_handle();

Effects:

Returns an instance of native_handle_type that can be used with platform-specific APIs to manipulate the underlying implementation. If no such instance exists, native_handle() and native_handle_type are not present.

Throws:

Nothing.

#include <boost/thread/mutex.hpp>

typedef mutex try_mutex;

boost::try_mutex is a typedef to boost::mutex, provided for backwards compatibility with previous releases of boost.

#include <boost/thread/mutex.hpp>

class timed_mutex:
    boost::noncopyable
{
public:
    timed_mutex();
    ~timed_mutex();

    void lock();
    void unlock();
    bool try_lock();

    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& t);

    typedef platform-specific-type native_handle_type;
    native_handle_type native_handle();

    typedef unique_lock<timed_mutex> scoped_timed_lock;
    typedef unspecified-type scoped_try_lock;
    typedef scoped_timed_lock scoped_lock;

#if defined BOOST_THREAD_PROVIDES_DATE_TIME || defined BOOST_THREAD_DONT_USE_CHRONO
    bool timed_lock(system_time const & abs_time);
    template<typename TimeDuration>
    bool timed_lock(TimeDuration const & relative_time);
#endif

};

boost::timed_mutex implements the TimedLockable concept to provide an exclusive-ownership mutex. At most one thread can own the lock on a given instance of boost::timed_mutex at any time. Multiple concurrent calls to lock(), try_lock(), try_lock_for(), try_lock_until(), timed_lock() and unlock() shall be permitted.

typedef platform-specific-type native_handle_type;
native_handle_type native_handle();

Effects:

Returns an instance of native_handle_type that can be used with platform-specific APIs to manipulate the underlying implementation. If no such instance exists, native_handle() and native_handle_type are not present.

Throws:

Nothing.

#include <boost/thread/recursive_mutex.hpp>

class recursive_mutex:
    boost::noncopyable
{
public:
    recursive_mutex();
    ~recursive_mutex();

    void lock();
    bool try_lock() noexcept;
    void unlock();

    typedef platform-specific-type native_handle_type;
    native_handle_type native_handle();

    typedef unique_lock<recursive_mutex> scoped_lock;
    typedef unspecified-type scoped_try_lock;
};

boost::recursive_mutex implements the Lockable concept to provide an exclusive-ownership recursive mutex. At most one thread can own the lock on a given instance of boost::recursive_mutex at any time. Multiple concurrent calls to lock(), try_lock() and unlock() shall be permitted. A thread that already has exclusive ownership of a given boost::recursive_mutex instance can call lock() or try_lock() to acquire an additional level of ownership of the mutex. unlock() must be called once for each level of ownership acquired by a single thread before ownership can be acquired by another thread.
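
A minimal sketch of re-entrant locking with boost::recursive_mutex. The mutex, counter and function names are assumptions made for the example.

#include <boost/thread/recursive_mutex.hpp>
#include <boost/thread/lock_guard.hpp>

boost::recursive_mutex mtx;
int counter = 0;

void increment()
{
    boost::lock_guard<boost::recursive_mutex> lk(mtx);  // first (or nested) level of ownership
    ++counter;
}

void increment_twice()
{
    boost::lock_guard<boost::recursive_mutex> lk(mtx);  // outer level of ownership
    increment();   // re-locks the same mutex from the same thread without deadlock
    increment();
}   // each level is released by the matching destructor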

typedef platform-specific-type native_handle_type;
native_handle_type native_handle();

Effects:

Returns an instance of native_handle_type that can be used with platform-specific APIs to manipulate the underlying implementation. If no such instance exists, native_handle() and native_handle_type are not present.

Throws:

Nothing.

#include <boost/thread/recursive_mutex.hpp>

typedef recursive_mutex recursive_try_mutex;

boost::recursive_try_mutex is a typedef to boost::recursive_mutex, provided for backwards compatibility with previous releases of boost.

#include <boost/thread/recursive_mutex.hpp>

class recursive_timed_mutex:
    boost::noncopyable
{
public:
    recursive_timed_mutex();
    ~recursive_timed_mutex();

    void lock();
    bool try_lock() noexcept;
    void unlock();


    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& t);

    typedef platform-specific-type native_handle_type;
    native_handle_type native_handle();

    typedef unique_lock<recursive_timed_mutex> scoped_lock;
    typedef unspecified-type scoped_try_lock;
    typedef scoped_lock scoped_timed_lock;

#if defined BOOST_THREAD_PROVIDES_DATE_TIME || defined BOOST_THREAD_DONT_USE_CHRONO
    bool timed_lock(system_time const & abs_time);
    template<typename TimeDuration>
    bool timed_lock(TimeDuration const & relative_time);
#endif

};

boost::recursive_timed_mutex implements the TimedLockable concept to provide an exclusive-ownership recursive mutex. At most one thread can own the lock on a given instance of boost::recursive_timed_mutex at any time. Multiple concurrent calls to lock(), try_lock(), try_lock_for(), try_lock_until(), timed_lock() and unlock() shall be permitted. A thread that already has exclusive ownership of a given boost::recursive_timed_mutex instance can call lock(), try_lock(), try_lock_for(), try_lock_until() or timed_lock() to acquire an additional level of ownership of the mutex. unlock() must be called once for each level of ownership acquired by a single thread before ownership can be acquired by another thread.

typedef platform-specific-type native_handle_type;
native_handle_type native_handle();

Effects:

Returns an instance of native_handle_type that can be used with platform-specific APIs to manipulate the underlying implementation. If no such instance exists, native_handle() and native_handle_type are not present.

Throws:

Nothing.

#include <boost/thread/shared_mutex.hpp>

class shared_mutex
{
public:
    shared_mutex(shared_mutex const&) = delete;
    shared_mutex& operator=(shared_mutex const&) = delete;

    shared_mutex();
    ~shared_mutex();

    void lock_shared();
    bool try_lock_shared();
    template <class Rep, class Period>
    bool try_lock_shared_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_shared_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock_shared();

    void lock();
    bool try_lock();
    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();

#if defined BOOST_THREAD_PROVIDES_DEPRECATED_FEATURES_SINCE_V3_0_0
    // use upgrade_mutex instead.
    void lock_upgrade(); // EXTENSION
    void unlock_upgrade(); // EXTENSION

    void unlock_upgrade_and_lock(); // EXTENSION
    void unlock_and_lock_upgrade(); // EXTENSION
    void unlock_and_lock_shared(); // EXTENSION
    void unlock_upgrade_and_lock_shared(); // EXTENSION
#endif

#if defined BOOST_THREAD_USES_DATETIME
    bool timed_lock_shared(system_time const& timeout); // DEPRECATED
    bool timed_lock(system_time const& timeout); // DEPRECATED
#endif

};

The class boost::shared_mutex provides an implementation of a multiple-reader / single-writer mutex. It implements the SharedLockable concept.

Multiple concurrent calls to lock(), try_lock(), try_lock_for(), try_lock_until(), timed_lock(), lock_shared(), try_lock_shared_for(), try_lock_shared_until(), try_lock_shared() and timed_lock_shared() are permitted.

Note the lack of reader-writer priority policies in shared_mutex. This is due to an algorithm credited to Alexander Terekhov which lets the OS decide which thread is the next to get the lock without caring whether a unique lock or shared lock is being sought. This results in a complete lack of reader or writer starvation. It is simply fair.
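
A typical use is protecting a data structure that is read far more often than it is written: readers take shared ownership via boost::shared_lock, writers take exclusive ownership via boost::unique_lock. The following sketch is illustrative only (the PhoneBook class is not part of the library):

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <map>
#include <string>

class PhoneBook
{
    mutable boost::shared_mutex mtx_;
    std::map<std::string, std::string> entries_;
public:
    std::string find(std::string const& name) const
    {
        // many readers may hold shared ownership concurrently
        boost::shared_lock<boost::shared_mutex> lk(mtx_);
        std::map<std::string, std::string>::const_iterator it = entries_.find(name);
        return it != entries_.end() ? it->second : std::string();
    }
    void update(std::string const& name, std::string const& number)
    {
        // writers get exclusive ownership
        boost::unique_lock<boost::shared_mutex> lk(mtx_);
        entries_[name] = number;
    }
};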

#include <boost/thread/shared_mutex.hpp>

class upgrade_mutex
{
public:
    upgrade_mutex(upgrade_mutex const&) = delete;
    upgrade_mutex& operator=(upgrade_mutex const&) = delete;

    upgrade_mutex();
    ~upgrade_mutex();

    void lock_shared();
    bool try_lock_shared();
    template <class Rep, class Period>
    bool try_lock_shared_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_shared_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock_shared();

    void lock();
    bool try_lock();
    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock();

    void lock_upgrade();
    template <class Rep, class Period>
    bool try_lock_upgrade_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_upgrade_until(const chrono::time_point<Clock, Duration>& abs_time);
    void unlock_upgrade();

    // Shared <-> Exclusive

#ifdef BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS
    bool try_unlock_shared_and_lock();
    template <class Rep, class Period>
    bool try_unlock_shared_and_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_unlock_shared_and_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
#endif
    void unlock_and_lock_shared();

    // Shared <-> Upgrade

#ifdef BOOST_THREAD_PROVIDES_SHARED_MUTEX_UPWARDS_CONVERSIONS
    bool try_unlock_shared_and_lock_upgrade();
    template <class Rep, class Period>
    bool try_unlock_shared_and_lock_upgrade_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_unlock_shared_and_lock_upgrade_until(const chrono::time_point<Clock, Duration>& abs_time);
#endif
    void unlock_upgrade_and_lock_shared();

    // Upgrade <-> Exclusive

    void unlock_upgrade_and_lock();
#if    defined(BOOST_THREAD_PLATFORM_PTHREAD)
    || defined(BOOST_THREAD_PROVIDES_GENERIC_SHARED_MUTEX_ON_WIN)
    bool try_unlock_upgrade_and_lock();
    template <class Rep, class Period>
    bool try_unlock_upgrade_and_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_unlock_upgrade_and_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
#endif
    void unlock_and_lock_upgrade();
};

The class boost::upgrade_mutex provides an implementation of a multiple-reader / single-writer mutex. It implements the UpgradeLockable concept.

Multiple concurrent calls to lock(), try_lock(), try_lock_for(), try_lock_until(), timed_lock(), lock_shared(), try_lock_shared_for(), try_lock_shared_until(), try_lock_shared() and timed_lock_shared() are permitted.
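
In addition, a thread holding upgrade ownership can be promoted to exclusive ownership without releasing the mutex in between, typically via the upgrade_lock and upgrade_to_unique_lock lock types. A minimal, illustrative sketch:

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::upgrade_mutex mtx;
int cached_value = 0;

void refresh(bool stale)
{
    // Shared ownership that can later be upgraded; only one thread at a time
    // may hold upgrade ownership, but it coexists with ordinary shared locks.
    boost::upgrade_lock<boost::upgrade_mutex> upgrade_lk(mtx);
    if (stale)
    {
        // Atomically convert upgrade ownership into exclusive ownership.
        boost::upgrade_to_unique_lock<boost::upgrade_mutex> unique_lk(upgrade_lk);
        cached_value = 42; // ... recompute under exclusive access ...
    }
    // Exclusive ownership (if taken) is released first, then upgrade ownership.
}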

#include <boost/thread/null_mutex.hpp>

class null_mutex
{
public:
    null_mutex(null_mutex const&) = delete;
    null_mutex& operator=(null_mutex const&) = delete;

    null_mutex();
    ~null_mutex();

    void lock_shared();
    bool try_lock_shared();
 #ifdef BOOST_THREAD_USES_CHRONO
    template <class Rep, class Period>
    bool try_lock_shared_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_shared_until(const chrono::time_point<Clock, Duration>& abs_time);
 #endif
    void unlock_shared();

    void lock();
    bool try_lock();
 #ifdef BOOST_THREAD_USES_CHRONO
    template <class Rep, class Period>
    bool try_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
 #endif
    void unlock();

    void lock_upgrade();
 #ifdef BOOST_THREAD_USES_CHRONO
    template <class Rep, class Period>
    bool try_lock_upgrade_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_lock_upgrade_until(const chrono::time_point<Clock, Duration>& abs_time);
 #endif
    void unlock_upgrade();

    // Shared <-> Exclusive

    bool try_unlock_shared_and_lock();
 #ifdef BOOST_THREAD_USES_CHRONO
    template <class Rep, class Period>
    bool try_unlock_shared_and_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_unlock_shared_and_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
 #endif
    void unlock_and_lock_shared();

    // Shared <-> Upgrade

    bool try_unlock_shared_and_lock_upgrade();
 #ifdef BOOST_THREAD_USES_CHRONO
    template <class Rep, class Period>
    bool try_unlock_shared_and_lock_upgrade_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_unlock_shared_and_lock_upgrade_until(const chrono::time_point<Clock, Duration>& abs_time);
 #endif
    void unlock_upgrade_and_lock_shared();

    // Upgrade <-> Exclusive

    void unlock_upgrade_and_lock();
    bool try_unlock_upgrade_and_lock();
 #ifdef BOOST_THREAD_USES_CHRONO
    template <class Rep, class Period>
    bool try_unlock_upgrade_and_lock_for(const chrono::duration<Rep, Period>& rel_time);
    template <class Clock, class Duration>
    bool try_unlock_upgrade_and_lock_until(const chrono::time_point<Clock, Duration>& abs_time);
 #endif
    void unlock_and_lock_upgrade();
};

The class boost::null_mutex provides a no-op implementation of a multiple-reader / single-writer mutex. It is a model of the UpgradeLockable concept.
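
Because every operation is a no-op, boost::null_mutex is mainly useful as a policy parameter, so that the same component can be instantiated with real locking or with none at all. An illustrative sketch (the counter template is not part of the library):

#include <boost/thread/null_mutex.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

// The mutex type is a policy: boost::mutex for multi-threaded use,
// boost::null_mutex to compile the locking away in single-threaded code.
template <class MutexT>
class counter
{
    MutexT mtx_;
    int value_;
public:
    counter() : value_(0) {}
    int increment()
    {
        boost::lock_guard<MutexT> lk(mtx_);
        return ++value_;
    }
};

counter<boost::mutex> shared_counter;     // thread-safe
counter<boost::null_mutex> local_counter; // no locking overhead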

Synopsis
namespace boost
{
  enum class cv_status
  {
    no_timeout,
    timeout
  };
  class condition_variable;
  class condition_variable_any;
  void notify_all_at_thread_exit(condition_variable& cond, unique_lock<mutex> lk);
}

The classes condition_variable and condition_variable_any provide a mechanism for one thread to wait for notification from another thread that a particular condition has become true. The general usage pattern is that one thread locks a mutex and then calls wait on an instance of condition_variable or condition_variable_any. When the thread is woken from the wait, it checks to see whether the appropriate condition is now true, and continues if so. If the condition is not true, the thread calls wait again to resume waiting. In the simplest case, this condition is just a boolean variable:

boost::condition_variable cond;
boost::mutex mut;
bool data_ready;

void process_data();

void wait_for_data_to_process()
{
    boost::unique_lock<boost::mutex> lock(mut);
    while(!data_ready)
    {
        cond.wait(lock);
    }
    process_data();
}

Notice that the lock is passed to wait: wait will atomically add the thread to the set of threads waiting on the condition variable, and unlock the mutex. When the thread is woken, the mutex will be locked again before the call to wait returns. This allows other threads to acquire the mutex in order to update the shared data, and ensures that the data associated with the condition is correctly synchronized.

In the meantime, another thread sets the condition to true, and then calls either notify_one or notify_all on the condition variable to wake one waiting thread or all the waiting threads respectively.

void retrieve_data();
void prepare_data();

void prepare_data_for_processing()
{
    retrieve_data();
    prepare_data();
    {
        boost::lock_guard<boost::mutex> lock(mut);
        data_ready=true;
    }
    cond.notify_one();
}

Note that the same mutex is locked before the shared data is updated, but that the mutex does not have to be locked across the call to notify_one.
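
Where lambdas and Boost.Chrono are available, the predicate and timed overloads of wait described in the reference below fold the explicit while loop into a single call. The following sketch reuses cond, mut and data_ready from the example above; the two-second timeout is arbitrary:

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>
#include <boost/chrono.hpp>

bool wait_for_data_with_timeout()
{
    boost::unique_lock<boost::mutex> lock(mut);
    // The predicate overload loops internally, so spurious wake-ups are handled
    // for us; it returns false if two seconds elapse before data_ready is true.
    return cond.wait_for(lock, boost::chrono::seconds(2),
                         []{ return data_ready; });
}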

This example uses an object of type condition_variable, but would work just as well with an object of type condition_variable_any: condition_variable_any is more general, and will work with any kind of lock or mutex, whereas condition_variable requires that the lock passed to wait is an instance of boost::unique_lock<boost::mutex>. This enables condition_variable to make optimizations in some cases, based on the knowledge of the mutex type; condition_variable_any typically has a more complex implementation than condition_variable.

#include <boost/thread/condition_variable.hpp>

namespace boost
{
    class condition_variable
    {
    public:
        condition_variable();
        ~condition_variable();

        void notify_one() noexcept;
        void notify_all() noexcept;

        void wait(boost::unique_lock<boost::mutex>& lock);

        template<typename predicate_type>
        void wait(boost::unique_lock<boost::mutex>& lock,predicate_type predicate);

        template <class Clock, class Duration>
        typename cv_status::type
        wait_until(
            unique_lock<mutex>& lock,
            const chrono::time_point<Clock, Duration>& t);

        template <class Clock, class Duration, class Predicate>
        bool
        wait_until(
            unique_lock<mutex>& lock,
            const chrono::time_point<Clock, Duration>& t,
            Predicate pred);

        template <class Rep, class Period>
        typename cv_status::type
        wait_for(
            unique_lock<mutex>& lock,
            const chrono::duration<Rep, Period>& d);

        template <class Rep, class Period, class Predicate>
        bool
        wait_for(
            unique_lock<mutex>& lock,
            const chrono::duration<Rep, Period>& d,
            Predicate pred);

    #if defined BOOST_THREAD_USES_DATETIME
        bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time);
        template<typename duration_type>
        bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time);
        template<typename predicate_type>
        bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::system_time const& abs_time,predicate_type predicate);
        template<typename duration_type,typename predicate_type>
        bool timed_wait(boost::unique_lock<boost::mutex>& lock,duration_type const& rel_time,predicate_type predicate);
        bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::xtime const& abs_time);

        template<typename predicate_type>
        bool timed_wait(boost::unique_lock<boost::mutex>& lock,boost::xtime const& abs_time,predicate_type predicate);
    #endif

    };
}

Effects:

Constructs an object of class condition_variable.

Throws:

boost::thread_resource_error if an error occurs.

Precondition:

All threads waiting on *this have been notified by a call to notify_one or notify_all (though the respective calls to wait or timed_wait need not have returned).

Effects:

Destroys the object.

Throws:

Nothing.

Effects:

If any threads are currently blocked waiting on *this in a call to wait or timed_wait, unblocks one of those threads.

Throws:

Nothing.

Effects:

If any threads are currently blocked waiting on *this in a call to wait or timed_wait, unblocks all of those threads.

Throws:

Nothing.

Precondition:

lock is locked by the current thread, and either no other thread is currently waiting on *this, or the execution of the mutex() member function on the lock objects supplied in the calls to wait or timed_wait in all the threads currently waiting on *this would return the same value as lock->mutex() for this call to wait.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Precondition:

lock is locked by the current thread, and either no other thread is currently waiting on *this, or the execution of the mutex() member function on the lock objects supplied in the calls to wait or timed_wait in all the threads currently waiting on *this would return the same value as lock->mutex() for this call to wait.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), when the time as reported by boost::get_system_time() would be equal to or later than the specified abs_time, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

false if the call is returning because the time specified by abs_time was reached, true otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Precondition:

lock is locked by the current thread, and either no other thread is currently waiting on *this, or the execution of the mutex() member function on the lock objects supplied in the calls to wait or timed_wait in all the threads currently waiting on *this would return the same value as lock->mutex() for this call to wait.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), after the period of time indicated by the rel_time argument has elapsed, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

false if the call is returning because the time period specified by rel_time has elapsed, true otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

[Note] Note

The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.

Effects:

As-if

while(!pred())
{
    if(!timed_wait(lock,abs_time))
    {
        return pred();
    }
}
return true;

Precondition:

lock is locked by the current thread, and either no other thread is currently waiting on *this, or the execution of the mutex() member function on the lock objects supplied in the calls to wait or wait_for or wait_until in all the threads currently waiting on *this would return the same value as lock->mutex() for this call to wait.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), when the time as reported by Clock::now() would be equal to or later than the specified abs_time, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

cv_status::timeout if the call is returning because the time specified by abs_time was reached, cv_status::no_timeout otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Precondition:

lock is locked by the current thread, and either no other thread is currently waiting on *this, or the execution of the mutex() member function on the lock objects supplied in the calls to wait or wait_until or wait_for in all the threads currently waiting on *this would return the same value as lock->mutex() for this call to wait.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), after the period of time indicated by the rel_time argument has elapsed, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

cv_status::timeout if the call is returning because the time period specified by rel_time has elapsed, cv_status::no_timeout otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

[Note] Note

The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.

#include <boost/thread/condition_variable.hpp>

namespace boost
{
    class condition_variable_any
    {
    public:
        condition_variable_any();
        ~condition_variable_any();

        void notify_one();
        void notify_all();

        template<typename lock_type>
        void wait(lock_type& lock);

        template<typename lock_type,typename predicate_type>
        void wait(lock_type& lock,predicate_type predicate);

        template <class lock_type, class Clock, class Duration>
        cv_status wait_until(
            lock_type& lock,
            const chrono::time_point<Clock, Duration>& t);

        template <class lock_type, class Clock, class Duration, class Predicate>
        bool wait_until(
            lock_type& lock,
            const chrono::time_point<Clock, Duration>& t,
            Predicate pred);


        template <class lock_type, class Rep, class Period>
        cv_status wait_for(
            lock_type& lock,
            const chrono::duration<Rep, Period>& d);

        template <class lock_type, class Rep, class Period, class Predicate>
        bool wait_for(
            lock_type& lock,
            const chrono::duration<Rep, Period>& d,
            Predicate pred);

    #if defined BOOST_THREAD_USES_DATETIME
        template<typename lock_type>
        bool timed_wait(lock_type& lock,boost::system_time const& abs_time);
        template<typename lock_type,typename duration_type>
        bool timed_wait(lock_type& lock,duration_type const& rel_time);
        template<typename lock_type,typename predicate_type>
        bool timed_wait(lock_type& lock,boost::system_time const& abs_time,predicate_type predicate);
        template<typename lock_type,typename duration_type,typename predicate_type>
        bool timed_wait(lock_type& lock,duration_type const& rel_time,predicate_type predicate);
        template<typename lock_type>
        bool timed_wait(lock_type& lock,boost::xtime const& abs_time);
        template<typename lock_type,typename predicate_type>
        bool timed_wait(lock_type& lock,boost::xtime const& abs_time,predicate_type predicate);
    #endif
    };
}

Effects:

Constructs an object of class condition_variable_any.

Throws:

boost::thread_resource_error if an error occurs.

Precondition:

All threads waiting on *this have been notified by a call to notify_one or notify_all (though the respective calls to wait or timed_wait need not have returned).

Effects:

Destroys the object.

Throws:

Nothing.

Effects:

If any threads are currently blocked waiting on *this in a call to wait or timed_wait, unblocks one of those threads.

Throws:

Nothing.

Effects:

If any threads are currently blocked waiting on *this in a call to wait or timed_wait, unblocks all of those threads.

Throws:

Nothing.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), when the time as reported by boost::get_system_time() would be equal to or later than the specified abs_time, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

false if the call is returning because the time specified by abs_time was reached, true otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), after the period of time indicated by the rel_time argument has elapsed, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

false if the call is returning because the time period specified by rel_time has elapsed, true otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

[Note] Note

The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.

Effects:

As-if

while(!pred())
{
    if(!timed_wait(lock,abs_time))
    {
        return pred();
    }
}
return true;

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), when the time as reported by Clock::now() would be equal to or later than the specified abs_time, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

cv_status::timeout if the call is returning because the time specified by abs_time was reached, cv_status::no_timeout otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Effects:

Atomically calls lock.unlock() and blocks the current thread. The thread will unblock when notified by a call to this->notify_one() or this->notify_all(), after the period of time indicated by the rel_time argument has elapsed, or spuriously. When the thread is unblocked (for whatever reason), the lock is reacquired by invoking lock.lock() before the call to wait returns. The lock is also reacquired by invoking lock.lock() if the function exits with an exception.

Returns:

cv_status::timeout if the call is returning because the time specified by abs_time was reached, cv_status::no_timeout otherwise.

Postcondition:

lock is locked by the current thread.

Throws:

boost::thread_resource_error if an error occurs. boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

[Note] Note

The duration overload of timed_wait is difficult to use correctly. The overload taking a predicate should be preferred in most cases.

#include <boost/thread/condition.hpp>
namespace boost
{

  typedef condition_variable_any condition;

}

The typedef condition is provided for backwards compatibility with previous boost releases.

#include <boost/thread/condition_variable.hpp>

namespace boost
{
  void notify_all_at_thread_exit(condition_variable& cond, unique_lock<mutex> lk);
}

Requires:

lk is locked by the calling thread and either no other thread is waiting on cond, or lk.mutex() returns the same value for each of the lock arguments supplied by all concurrently waiting (via wait, wait_for, or wait_until) threads.

Effects:

transfers ownership of the lock associated with lk into internal storage and schedules cond to be notified when the current thread exits, after all objects of thread storage duration associated with the current thread have been destroyed. This notification shall be as if

lk.unlock();
cond.notify_all();
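
A usage sketch (all names below are illustrative): the producing thread publishes its result while still holding the lock, and waiters are woken only once that thread has completely finished, including destruction of its thread-local objects.

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/move/move.hpp>

boost::condition_variable result_cond;
boost::mutex result_mutex;
bool result_ready = false;

void producer()
{
    boost::unique_lock<boost::mutex> lk(result_mutex);
    // ... produce the result ...
    result_ready = true;
    // The notification happens when this thread exits, after its
    // thread-local objects have been destroyed; lk is handed over.
    boost::notify_all_at_thread_exit(result_cond, boost::move(lk));
}

void consumer()
{
    boost::unique_lock<boost::mutex> lk(result_mutex);
    while (!result_ready)
        result_cond.wait(lk);
    // ... the producer thread has fully terminated here ...
}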

#include <boost/thread/once.hpp>

namespace boost
{
  struct once_flag;
  template<typename Function, class ...ArgTypes>
  inline void call_once(once_flag& flag, Function&& f, ArgTypes&&... args);

#if defined BOOST_THREAD_PROVIDES_DEPRECATED_FEATURES_SINCE_V3_0_0
  void call_once(void (*func)(),once_flag& flag);
#endif

}
[Warning] Warning

the variadic prototype is provided only on C++11 compilers supporting variadic templates; otherwise the interface is limited to 3 parameters.

[Warning] Warning

move semantics are ensured only on C++11 compilers supporting SFINAE expressions, decltype (N3276) and auto; this is pending a boost::bind that is move-aware.

boost::call_once provides a mechanism for ensuring that an initialization routine is run exactly once without data races or deadlocks.

#ifdef BOOST_THREAD_PROVIDES_ONCE_CXX11
struct once_flag
{
  constexpr once_flag() noexcept;
  once_flag(const once_flag&) = delete;
  once_flag& operator=(const once_flag&) = delete;
};
#else
typedef platform-specific-type once_flag;
#define BOOST_ONCE_INIT platform-specific-initializer
#endif

Objects of type boost::once_flag shall be initialized with BOOST_ONCE_INIT if BOOST_THREAD_PROVIDES_ONCE_CXX11 is not defined:

boost::once_flag f=BOOST_ONCE_INIT;
template<typename Function, class ...ArgTypes>
inline void call_once(once_flag& flag, Function&& f, ArgTypes&&... args);

Requires:

Function and each of the ArgTypes are MoveConstructible and invoke(decay_copy(boost::forward<Function>(f)), decay_copy(boost::forward<ArgTypes>(args))...) shall be well formed.

Effects:

Calls to call_once on the same once_flag object are serialized. If there has been no prior effective call_once on the same once_flag object, the argument f is called as-if by invoking invoke(decay_copy(boost::forward<Function>(f)), decay_copy(boost::forward<ArgTypes>(args))...), and the invocation of call_once is effective if and only if invoke(decay_copy(boost::forward<Function>(f)), decay_copy(boost::forward<ArgTypes>(args))...) returns without exception. If an exception is thrown, the exception is propagated to the caller. If there has been a prior effective call_once on the same once_flag object, the call_once returns without invoking f.

Synchronization:

The completion of an effective call_once invocation on a once_flag object, synchronizes with all subsequent call_once invocations on the same once_flag object.

Throws:

thread_resource_error when the effects cannot be achieved or any exception propagated from func.

Note:

The function passed to call_once must not also call call_once passing the same once_flag object. This may cause deadlock, or invoking the passed function a second time. The alternative is to allow the second call to return immediately, but that assumes the code knows it has been called recursively, and can proceed even though the call to call_once didn't actually call the function, in which case it could also avoid calling call_once recursively.

Note:

On some compilers this function has some restrictions, e.g. if variadic templates are not supported the number of arguments is limited to 3.

void call_once(void (*func)(),once_flag& flag);

This second overload is provided for backwards compatibility and is deprecated. The effects of call_once(func,flag) shall be the same as those of call_once(flag,func).
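
A minimal usage sketch (open_log_file and the other names are illustrative; see the note on BOOST_ONCE_INIT above):

#include <boost/thread/once.hpp>

void open_log_file(); // hypothetical initialization routine

boost::once_flag log_init_flag = BOOST_ONCE_INIT; // with BOOST_THREAD_PROVIDES_ONCE_CXX11
                                                  // defined, default construction also works

void worker()
{
    // Many threads may reach this line, but open_log_file() runs exactly once;
    // other callers block until that effective call has completed.
    boost::call_once(log_init_flag, open_log_file);
    // ... safe to use the log file here ...
}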

A barrier is a simple concept. Also known as a rendezvous, it is a synchronization point between multiple threads. The barrier is configured for a particular number of threads (n), and as threads reach the barrier they must wait until all n threads have arrived. Once the n-th thread has reached the barrier, all the waiting threads can proceed, and the barrier is reset.
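
A minimal usage sketch with three threads meeting at a single barrier (all names are illustrative):

#include <boost/thread/barrier.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

boost::barrier rendezvous(3); // configured for three threads

void worker()
{
    // ... per-thread set-up work ...
    bool leader = rendezvous.wait(); // block until all three threads arrive
    if (leader)                      // exactly one thread per batch sees true
        std::cout << "all threads past the barrier" << std::endl;
}

int main()
{
    boost::thread t1(worker), t2(worker), t3(worker);
    t1.join(); t2.join(); t3.join();
    return 0;
}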

#include <boost/thread/barrier.hpp>

class barrier
{
public:
    barrier(barrier const&) = delete;
    barrier& operator=(barrier const&) = delete;

    barrier(unsigned int count);
    template <typename F>
    barrier(unsigned int count, F&&);

    ~barrier();

    bool wait();
    void count_down_and_wait();
};

Instances of boost::barrier are not copyable or movable.

barrier(unsigned int count);

Effects:

Construct a barrier for count threads.

Throws:

boost::thread_resource_error if an error occurs.

barrier(unsigned int count, F&& completion);

Requires:

The result type of the completion function call completion() is void or unsigned int.

Effects:

Construct a barrier for count threads and a completion function completion.

Throws:

boost::thread_resource_error if an error occurs.

~barrier();

Precondition:

No threads are waiting on *this.

Effects:

Destroys *this.

Throws:

Nothing.

bool wait();

Effects:

Block until count threads have called wait or count_down_and_wait on *this. When the count-th thread calls wait, the barrier is reset and all waiting threads are unblocked. The reset depends on whether the barrier was constructed with a completion function or not. If there is no completion function or if the completion function result is void, the reset consists of restoring the original count. Otherwise the reset consists of assigning the result of the completion function (which must not be 0).

Returns:

true for exactly one thread from each batch of waiting threads, false otherwise.

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

wait() is an interruption point.

void count_down_and_wait();

Effects:

Block until count threads have called wait or count_down_and_wait on *this. When the count-th thread calls wait, the barrier is reset and all waiting threads are unblocked. The reset depends on whether the barrier was constructed with a completion function or not. If there is no completion function or if the completion function result is void, the reset consists of restoring the original count. Otherwise the reset consists of assigning the result of the completion function (which must not be 0).

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

count_down_and_wait() is an interruption point.

Latches are a thread co-ordination mechanism that allow one or more threads to block until one or more threads have reached a point.

Sample use cases for the latch include:

  • Setting multiple threads to perform a task, and then waiting until all threads have reached a common point.
  • Creating multiple threads, which wait for a signal before advancing beyond a common point.

An example of the first use case would be as follows:

void DoWork(thread_pool* pool) {
  latch completion_latch(NTASKS);
  for (int i = 0; i < NTASKS; ++i) {
    pool->submit([&] {
      // perform work
      ...
      completion_latch.count_down();
    });
  }
  // Block until work is done
  completion_latch.wait();
}

An example of the second use case is shown below. We need to load data and then process it using a number of threads. Loading the data is I/O bound, whereas starting threads and creating data structures is CPU bound. By running these in parallel, throughput can be increased.

void DoWork() {
  latch start_latch(1);
  vector<thread*> workers;
  for (int i = 0; i < NTHREADS; ++i) {
    workers.push_back(new thread([&] {
      // Initialize data structures. This is CPU bound.
      ...
      start_latch.wait();
      // perform work
      ...
    }));
  }
  // Load input data. This is I/O bound.
  ...
  // Threads can now start processing
  start_latch.count_down();
}
#include <boost/thread/latch.hpp>

class latch
{
public:
    latch(latch const&) = delete;
    latch& operator=(latch const&) = delete;

    latch(std::size_t count);
    ~latch();

    void wait();
    bool try_wait();
    template <class Rep, class Period>
    cv_status wait_for(const chrono::duration<Rep, Period>& rel_time);
    template <class lock_type, class Clock, class Duration>
    cv_status wait_until(const chrono::time_point<Clock, Duration>& abs_time);
    void count_down();
    void count_down_and_wait();

};

A latch maintains an internal counter that is initialized when the latch is created. One or more threads may block waiting until the counter is decremented to 0.

Instances of latch are not copyable or movable.

latch(std::size_t count);

Effects:

Construct a latch with count as the initial value of the internal counter.

Note:

The counter could be zero.

Throws:

Nothing.

~latch();

Precondition:

No threads are waiting or invoking count_down on *this.

Effects:

Destroys *this latch.

Throws:

Nothing.

void wait();

Effects:

Block the calling thread until the internal count reaches the value zero. Then all waiting threads are unblocked.

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

wait() is an interruption point.

bool try_wait();

Returns:

Returns true if the internal count is 0, and false otherwise. Does not block the calling thread.

Throws:

- boost::thread_resource_error if an error occurs.

template <class Rep, class Period>
cv_status wait_for(const chrono::duration<Rep, Period>& rel_time);

Effects:

Block the calling thread until the internal count reaches the value zero or the duration has elapsed. If there is no timeout, all waiting threads are unblocked.

Returns:

cv_status::no_timeout if the internal count is 0, and cv_status::timeout if the duration has elapsed.

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

wait_for() is an interruption point.

template <class lock_type, class Clock, class Duration>
cv_status wait_until(const chrono::time_point<Clock, Duration>& abs_time);

Effects:

Block the calling thread until the internal count reaches the value zero or the time_point has been reached. If there is no timeout, all waiting threads are unblocked.

Returns:

cv_status::no_timeout if the internal count is 0, and cv_status::timeout if the time_point has been reached.

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

wait_until() is an interruption point.

void count_down();

Requires:

The internal counter is non-zero.

Effects:

Decrements the internal count by 1, and returns. If the count reaches 0, any threads blocked in wait() will be released.

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

count_down() is an interruption point.

void count_down_and_wait();

Requires:

The internal counter is non-zero.

Effects:

Decrements the internal count by 1. If the resulting count is not 0, blocks the calling thread until the internal count is decremented to 0 by one or more other threads calling count_down() or count_down_and_wait().

Throws:

- boost::thread_resource_error if an error occurs.

- boost::thread_interrupted if the wait was interrupted by a call to interrupt() on the boost::thread object associated with the current thread of execution.

Notes:

count_down_and_wait() is an interruption point.


reset( size_t );

Requires:

This function may only be invoked when there are no other threads currently inside the waiting functions.

Effects:

Resets the latch with a new value for the initial thread count.

Throws:

- boost::thread_resource_error if an error occurs.


[Warning] Warning

These features are experimental and subject to change in future versions. There are not many tests yet, so it is possible that you will find some trivial bugs :(

[Note] Note

These features are based on the N3785 - Executors and Schedulers revision 3 C++1y proposal from Chris Mysen, Niklas Gustafsson, Matt Austern, Jeffrey Yasskin. The text that follows has been adapted from this paper to show the differences.

Executors are objects that can execute units of work packaged as function objects. Boost.Thread differs from N3785 mainly in that an Executor doesn't need to inherit from an abstract executor class. Static polymorphism is used instead, and type erasure is used internally.

Multithreaded programs often involve discrete (sometimes small) units of work that are executed asynchronously. This often involves passing work units to some component that manages execution. We already have boost::async, which potentially executes a function asynchronously and eventually returns its result in a future. (“As if” by launching a new thread.)

If there is a regular stream of small work items then we almost certainly don’t want to launch a new thread for each, and it’s likely that we want at least some control over which thread(s) execute which items. It is often convenient to represent that control as multiple executor objects. This allows programs to start executors when necessary, switch from one executor to another to control execution policy, and use multiple executors to prevent interference and thread exhaustion. Several possible implementations of the executor class exist, and in practice a number of main groups of executors have been found to be useful in real-world code (more implementations exist; this is simply a high-level classification of them). These differ along a few main dimensions: how many execution contexts will be used, how they are selected, and how they are prioritized.

  1. Thread Pools
    1. Simple unbounded thread pool, which can queue up an unbounded amount of work and maintains a dedicated set of threads (up to some maximum) which dequeue and execute work as available.
    2. Bounded thread pools, which can be implemented as a specialization of the previous ones with a bounded queue or semaphore, which limits the amount of queuing in an attempt to bound the time spent waiting to execute and/or limit resource utilization for work tasks which hold state which is expensive to hold.
    3. Thread-spawning executors, in which each work always executes in a new thread.
    4. Prioritized thread pools, whose works are not equally prioritized, so that work can move to the front of the execution queue if necessary. This requires a special comparator or prioritization function to allow for work ordering, and is normally implemented as a blocking priority queue in front of the pool instead of a blocking queue. This has many uses but is somewhat specialized in nature and would unnecessarily clutter the initial interface.
    5. Work-stealing thread pools: this is a specialized use case, encapsulated in the ForkJoinPool in Java, which allows lightweight work to be created by tasks in the pool and either run by the same thread for invocation efficiency or stolen by another thread without additional work. These have been left out until there is a more concrete fork-join proposal or until there is a clearer need, as these can be complicated to implement.
  2. Mutual exclusion executors
    1. Serial executors, which guarantee that all work is executed such that no two works will execute concurrently. This allows a sequence of operations to be queued, with their sequential order maintained; work can be queued on a separate thread but with no mutual exclusion required.
    2. Loop executor, in which one thread donates itself to the executor to execute all queued work. This is related to the serial executor in that it guarantees mutual exclusion, but instead guarantees a particular thread will execute the work. These are particularly useful for testing purposes where code assumes an executor but testing code desires control over execution.
    3. GUI thread executor, where a GUI framework can expose an executor interface to allow other threads to queue up work to be executed as part of the GUI thread. This behaves similarly to a loop executor, but must be implemented as a custom interface as part of the framework.
  3. Inline executors, which execute inline in the thread which calls submit(). This has no queuing and behaves like a normal executor, but always uses the caller’s thread to execute. Parallel execution of works is still possible, though, when several threads call submit() concurrently. This type of executor is often useful when an interface requires an executor, but when for performance reasons it’s better not to queue work or switch threads. It is often very useful as an optimization for work continuations which should execute immediately or quickly, and can also be useful when an interface requires an executor but the work tasks are too small to justify the overhead of a full thread pool.

A question arises of which of these executors (or others) should be included in this library. There are use cases for these and many other executors. Often it is useful to have more than one implemented executor (e.g. the thread pool) to have more precise control of where the work is executed, due to the existence of a GUI thread, or for testing purposes. A few core executors are frequently useful, and these have been outlined here as the core of what should be in this library; if common use cases arise for alternative executor implementations, they can be added in the future. The current set provided here is: a basic thread pool basic_thread_pool, a serial executor serial_executor, a loop executor loop_executor, an inline executor inline_executor and a thread-spawning executor thread_executor.

#include <boost/thread/executors/basic_thread_pool.hpp>
#include <boost/thread/future.hpp>
#include <numeric>
#include <algorithm>
#include <functional>
#include <iostream>
#include <list>

template<typename T>
struct sorter
{
    boost::basic_thread_pool pool;
    typedef std::list<T> return_type;

    std::list<T> do_sort(std::list<T> chunk_data)
    {
        if(chunk_data.empty()) {
            return chunk_data;
        }

        std::list<T> result;
        result.splice(result.begin(),chunk_data, chunk_data.begin());
        T const& partition_val=*result.begin();

        typename std::list<T>::iterator divide_point =
            std::partition(chunk_data.begin(), chunk_data.end(),
                           [&](T const& val){return val<partition_val;});

        std::list<T> new_lower_chunk;
        new_lower_chunk.splice(new_lower_chunk.end(), chunk_data,
                               chunk_data.begin(), divide_point);
        boost::future<std::list<T> > new_lower =
             boost::async(pool, &sorter::do_sort, this, std::move(new_lower_chunk));
        std::list<T> new_higher(do_sort(chunk_data));
        result.splice(result.end(),new_higher);
        while(!new_lower.is_ready()) {
            pool.schedule_one_or_yield();
        }
        result.splice(result.begin(),new_lower.get());
        return result;
    }
};

template<typename T>
std::list<T> parallel_quick_sort(std::list<T>& input) {
    if(input.empty()) {
        return input;
    }
    sorter<T> s;
    return s.do_sort(input);
}

The authors of Boost.Thread have taken a different approach with respect to N3785. Instead of basing all the design on an abstract executor class, we define executor concepts. We believe that this is the right direction, as a static polymorphic executor can be seen as a dynamic polymorphic executor using a simple adaptor. We also believe that it makes the library more usable and more convenient for users.

The major design decisions concern deciding what a unit of work is, and how to manage units of work and time-related functions in a polymorphic way.

An Executor is an object that schedules the closures that have been submitted to it, usually asynchronously. There could be multiple models of the Executor class. Some specific design notes:

  • Thread pools are well-known models of the Executor concept, and this library does indeed include a basic_thread_pool class, but other implementations also exist, including the ability to schedule work on GUI threads, scheduling work on a donor thread, as well as several specializations of thread pools.
  • The choice of which executor to use is explicit. This is important for reasons described in the Motivation section. In particular, consider the common case of an asynchronous operation that itself spawns asynchronous operations. If both operations ran on the same executor, and if that executor had a bounded number of worker threads, then we could get deadlock. Programs often deal with such issues by splitting different kinds of work between different executors.
  • Even though there could be strong value in having a default executor that can be used when detailed control is unnecessary, the authors don't know how to implement it in a portable and robust way.
  • The library provides Executors based on static and dynamic polymorphism. The static polymorphism interface is intended to be used in contexts that need the best performance. The dynamic polymorphism interface has the advantage of being able to change the executor a function uses without making it a template, and it makes it possible to pass executors across a binary interface. For some applications, the cost of an additional virtual dispatch is almost certainly negligible compared to the other operations involved.
  • Conceptually, an executor puts closures on a queue and at some point executes them. The queue is always unbounded, so adding a closure to an executor never blocks. (Defining “never blocks” formally is challenging, but informally we just mean that submit() is an ordinary function that executes something and returns, rather than waiting for the completion of some potentially long running operation in another thread.)
Closure

One important question is just what a closure is. This library has a very simple answer: a closure is a Callable with no parameters and returning void.

N3785 chose the more specific std::function<void()>, as it provides only dynamic polymorphism, and states that in practice the implementation of a template-based approach or another approach is impractical. The authors of this library think that the template-based approach is compatible with a dynamic approach. The N3785 authors give several arguments:

The first one is that a virtual function cannot be a template. This is true, but it is also true that the executor interface can provide template functions that call the virtual public functions. Another reason they give is that "a template parameter would complicate the interface without adding any real generality. In the end an executor class is going to need some kind of type erasure to handle all the different kinds of function objects with void() signature, and that’s exactly what std::function already does". We think that it is up to the executor to deal with these implementation details, not the user.

We share all the arguments they give related to the void() interface of the work unit. A work unit is a closure that takes no arguments and returns no value. This is indeed a limitation on user code, but combined with boost::async taking executors as parameters, the user has everything she needs.

The third one is related to performance. They assert that "any mechanism for storing closures on an executor’s queue will have to use some form of type erasure. There’s no reason to believe that a custom closure mechanism, written just for std::executor and used nowhere else within the standard library, would be better in that respect than std::function<void()>". We believe that the implementation can do better than storing the closure in a std::function<void()>; e.g. the implementation can use intrusive data to store the closure and the pointers to other nodes needed to store the closures in a given order.

In addition, std::function<void()> cannot be constructed by moving the closure, so e.g. std::packaged_task could not be a Closure.

Scheduled work

The approach of this library with respect to scheduled work is quite different from the N3785 proposal. Instead of adding the scheduled operations to a specific scheduled_executor polymorphic interface, we opt for a specific scheduler class that is not an executor and knows how to manage the scheduling of timed tasks via submit_at/submit_after.

scheduler provides executor factories at/after, given a specific time_point or duration. The built executors wrap a reference to this scheduler and the time at which the submitted task will be executed.

If we want to schedule these operations on an existing executor (as serial_executor does), these classes provide an on factory taking another executor as a parameter and wrapping both instances in the returned executor.

sch.on(tp).after(seconds(i)).submit(boost::bind(fn,i));

This has several advantages:

  • The scheduled operations are available for all the executors via wrappers.
  • The template functions could accept any chrono time_point and duration respectively as we are not working with virtual functions.

In order to handle all the clocks, this library proposes a generic solution: scheduler<Clock> knows how to manage submit_at/submit_after tasks expressed in terms of Clock::time_point/Clock::duration. Note that durations on different clocks differ.
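
As an illustration, combining a scheduler with a thread pool might look like the sketch below. The on(...).after(...).submit(...) chain is the expression shown above; the <boost/thread/executors/scheduler.hpp> header, the boost::scheduler<> spelling and the default constructors are assumptions made for this sketch:

#include <boost/thread/executors/scheduler.hpp>       // assumed header
#include <boost/thread/executors/basic_thread_pool.hpp>
#include <boost/chrono.hpp>
#include <boost/bind.hpp>

void fn(int i);

void schedule_work()
{
    boost::basic_thread_pool tp; // the executor that will eventually run the work
    boost::scheduler<> sch;      // assumed to default to a steady-clock scheduler

    // Ask the scheduler to hand fn(1) to tp roughly three seconds from now.
    sch.on(tp).after(boost::chrono::seconds(3)).submit(boost::bind(fn, 1));
}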

Not Handled Exceptions

As in N3785, and based on the same design decision as std/boost::thread, if a user closure throws an exception, the executor must call the std::terminate function. Note that when we combine boost::async and Executors, the exception will be caught by the closure associated with the returned future, so that the exception is stored in the returned future, as for the other async overloads.

At thread entry

It is a common idiom to set some thread-local variables at the beginning of a thread. As Executors could instantiate threads internally, these Executors shall have the ability to call a user-specified function at thread entry, supplied via the executor constructor.

For executors that don't instantiate any thread and instead use the current thread, this function shall be called only for the thread calling the at_thread_entry member function.

Cancelation

The library does not yet provide the ability to cancel/interrupt work, though this is a commonly requested feature.

This could be managed externally by an additional cancelation object that can be shared between the creator of the unit of work and the unit of work.

We can think also of a cancelable closure that could be used in a more transparent way.

An alternative is to make async return a cancelable_task, but this would also need a cancelable closure.

Current executor

The library does not provide the ability to get the current executor, though having access to it could greatly simplify user code.

The reason is that the user can always use a thread_local variable and reset it using the at_thread_entry member function.

thread_local current_executor_state_type current_executor_state;
executor* current_executor() { return current_executor_state.current_executor(); }
basic_thread_pool pool(
	// at_thread_entry
	[](basic_thread_pool& pool) {
		current_executor_state.set_current_executor(pool);
	}
);


Default executor

The library authors share some of the concerns of the C++ standard committee (the introduction of a new single shared resource, a singleton, could make it difficult to port to all environments), and so this library doesn't provide a default executor for the time being.

The user can always define a default executor themselves.

boost::generic_executor_ref default_executor()
{
    static boost::basic_thread_pool tp(4);
    return boost::generic_executor_ref(tp);
}

A type E meets the Closure requirements if it is a model of Callable(void()) and a model of CopyConstructible/MoveConstructible.
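
For illustration, a hand-written type satisfying these requirements can be as simple as the following (log_message is a hypothetical name); parameterless lambdas, plain void() functions and boost::bind expressions returning void qualify as well:

#include <iostream>
#include <string>

// No parameters, void result, copyable and movable: a valid Closure.
struct log_message
{
    std::string text; // captured state travels with the closure
    void operator()() const { std::cout << text << std::endl; }
};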

The Executor concept models the common operations of all the executors.

A type E meets the Executor requirements if the following expressions are well-formed and have the specified semantics

  • e.submit(lc);
  • e.submit(rc);
  • e.close();
  • b = e.closed();
  • e.try_executing_one();
  • e.reschedule_until(p);

where

  • e denotes a value of type E,
  • lc denotes an lvalue reference of type Closure,
  • rc denotes an rvalue reference of type Closure,
  • p denotes a value of type Predicate

Effects:

The specified closure will be scheduled for execution at some point in the future. If the invoked closure throws an exception, the executor will call std::terminate, as is the case with threads.

Synchronization:

Completion of the closure on a particular thread happens before the destruction of that thread's thread-local variables.

Return type:

void.

Throws:

sync_queue_is_closed if the executor is closed. Whatever exception can be thrown while storing the closure.

Exception safety:

If an exception is thrown then the executor state is unmodified.

Effects:

The specified closure will be scheduled for execution at some point in the future. If the invoked closure throws an exception, the executor will call std::terminate, as is the case with threads.

Synchronization:

Completion of the closure on a particular thread happens before the destruction of that thread's thread-local variables.

Return type:

void.

Throws:

sync_queue_is_closed if the executor is closed. Whatever exception can be thrown while storing the closure.

Exception safety:

If an exception is thrown then the executor state is unmodified.

Effects:

Close the executor e for further submissions.

Remark:

The worker threads will continue until there are no more closures to run.

Return type:

void.

Throws:

Whatever exception can be thrown while ensuring thread safety.

Exception safety:

If an exception is thrown then the executor state is unmodified.

Return type:

bool.

Return:

Whether the executor is closed for submissions.

Throws:

Whatever exception can be thrown while ensuring thread safety.

Effects:

Try to execute one work item.

Remark:

Whether a work item has been executed.

Return type:

bool.

Return:

Whether a work has been executed.

Throws:

Whatever the current work constructor throws or the work invocation work() throws.

Requires:

This must be called from within a scheduled work item.

Effects:

Reschedule work items until p() returns true.

Return type:

bool.

Return:

Whether a work has been executed.

Throws:

Whatever the current work constructor throws or the work invocation work() throws.

#include <boost/thread/executors/work.hpp>
namespace boost {
  typedef 'implementation_defined' work;
}

Requires:

work is a model of 'Closure'

Executor abstract base class.

#include <boost/thread/executors/executor.hpp>
namespace boost {
  class executor
  {
  public:
    typedef  boost::work work;

    executor(executor const&) = delete;
    executor& operator=(executor const&) = delete;

    executor();
    virtual ~executor() {};

    virtual void close() = 0;
    virtual bool closed() = 0;

    virtual void submit(work&& closure) = 0;
    virtual void submit(work& closure) = 0;
    template <typename Closure>
    void submit(Closure&& closure);

    virtual bool try_executing_one() = 0;
    template <typename Pred>
    bool reschedule_until(Pred const& pred);
  };
}
executor();

Effects:

Constructs an executor.

Throws:

Nothing.

virtual ~executor();

Effects:

Destroys the executor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

Polymorphic adaptor of a model of Executor to an executor.

#include <boost/thread/executors/executor.hpp>
namespace boost {
  template <typename Executor>
  class executor_adaptor : public executor
  {
    Executor ex; // for exposition only
  public:
    typedef  executor::work work;

    executor_adaptor(executor_adaptor const&) = delete;
    executor_adaptor& operator=(executor_adaptor const&) = delete;

    template <typename ...Args>
    executor_adaptor(Args&& ... args);

    Executor& underlying_executor() noexcept;

    void close();
    bool closed();

    void submit(work&& closure);
    void submit(work& closure);

    bool try_executing_one();

  };
}
template <typename ...Args>
executor_adaptor(Args&& ... args);

Effects:

Constructs an executor_adaptor.

Throws:

Nothing.

virtual ~executor_adaptor();

Effects:

Destroys the executor_adaptor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

Executor& underlying_executor() noexcept;

Return:

The underlying executor instance.
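
As a sketch of typical use (assuming the executors feature is enabled; the pool size and the submitted closure are illustrative), the adaptor lets code written against the executor abstract base class run on any concrete executor:

#include <boost/thread/executors/executor.hpp>
#include <boost/thread/executors/executor_adaptor.hpp>
#include <boost/thread/executors/basic_thread_pool.hpp>
#include <iostream>

void run_on(boost::executor& ex)   // only knows the abstract interface
{
    ex.submit([]{ std::cout << "hello from an executor\n"; });
}

int main()
{
    // constructor arguments are forwarded to the wrapped basic_thread_pool
    boost::executor_adaptor<boost::basic_thread_pool> ex(4);
    run_on(ex);
    ex.close(); // no more submissions; closures already queued still run
}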

A type-erased wrapper holding a reference to a model of Executor and providing the common executor operations.

#include <boost/thread/executors/generic_executor_ref.hpp>
namespace boost {
  class generic_executor_ref
  {
  public:
    generic_executor_ref(generic_executor_ref const&);
    generic_executor_ref& operator=(generic_executor_ref const&);

    template <class Executor>
    generic_executor_ref(Executor& ex);
    generic_executor_ref() {};

    void close();
    bool closed();

    template <typename Closure>
    void submit(Closure&& closure);

    bool try_executing_one();
    template <typename Pred>
    bool reschedule_until(Pred const& pred);
  };
}
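
Complementing the default_executor example above, here is a sketch of passing unrelated executor types through the same type-erased reference (the wrapped executor must outlive the reference; the submitted closures are placeholders):

#include <boost/thread/executors/generic_executor_ref.hpp>
#include <boost/thread/executors/basic_thread_pool.hpp>
#include <boost/thread/executors/inline_executor.hpp>

void post_work(boost::generic_executor_ref ex)  // copyable, type-erased reference
{
    ex.submit([]{ /* ... */ });
}

void caller()
{
    boost::basic_thread_pool pool(2);
    boost::inline_executor   inl;

    post_work(pool);  // runs on one of the pool threads
    post_work(inl);   // runs immediately in the calling thread
}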

Scheduler providing time-related functions. Note that scheduler is not an Executor.

#include <boost/thread/executors/scheduler.hpp>
namespace boost {

  template <class Clock=steady_clock>
  class scheduler
  {
  public:
    using work = boost::function<void()> ;
    using clock = Clock;

    scheduler(scheduler const&) = delete;
    scheduler& operator=(scheduler const&) = delete;

    scheduler();
    ~scheduler();

    void close();
    bool closed();

    template <class Duration, typename Closure>
    void submit_at(chrono::time_point<clock,Duration> abs_time, Closure&& closure);
    template <class Rep, class Period, typename Closure>
    void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

    template <class Duration>
    at_executor<scheduler> submit_at(chrono::time_point<clock,Duration> abs_time);
    template <class Rep, class Period>
    at_executor<scheduler> submit_after(chrono::duration<Rep,Period> rel_time);

    template <class Executor>
    scheduler_executor_wrapper<scheduler, Executor> on(Executor& ex);

  };
}
scheduler();

Effects:

Constructs a scheduler.

Throws:

Nothing.

~scheduler();

Effects:

Destroys the scheduler.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

template <class Clock, class Duration, typename Closure>
void submit_at(chrono::time_point<Clock,Duration> abs_time, Closure&& closure);

Effects:

Schedule a closure to be executed at abs_time.

Throws:

Nothing.

template <class Rep, class Period, typename Closure>
void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

Effects:

Schedule a closure to be executed after rel_time.

Throws:

Nothing.

#include <boost/thread/executors/scheduler.hpp>
namespace boost {

  template <class Scheduler>
  class at_executor
  {
  public:
    using work = Scheduler::work;
    using clock = Scheduler::clock;

    at_executor(at_executor const&) = default;
    at_executor(at_executor &&) = default;
    at_executor& operator=(at_executor const&) = default;
    at_executor& operator=(at_executor &&) = default;

    at_executor(Scheduler& sch, clock::time_point const& tp);
    ~at_executor();

    void close();
    bool closed();

    Scheduler& underlying_scheduler();

    template <class Closure>
    void submit(Closure&& closure);
    template <class Duration, typename Closure>
    void submit_at(chrono::time_point<clock,Duration> abs_time, Closure&& closure);
    template <class Rep, class Period, typename Closure>
    void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

    template <class Executor>
    resubmit_at_executor<Scheduler, Executor> on(Executor& ex);

  };
}
at_executor(Scheduler& sch, clock::time_point const& tp);

Effects:

Constructs an at_executor.

Throws:

Nothing.

~at_executor();

Effects:

Destroys the at_executor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

Scheduler& underlying_scheduler() noexcept;

Return:

The underlying scheduler instance.

template <typename Closure>
void submit(Closure&& closure);

Effects:

Schedule the closure to be executed at the abs_time given at construction time.

Throws:

Nothing.

template <class Clock, class Duration, typename Closure>
void submit_at(chrono::time_point<Clock,Duration> abs_time, Closure&& closure);

Effects:

Schedule a closure to be executed at abs_time.

Throws:

Nothing.

template <class Rep, class Period, typename Closure>
void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

Effects:

Schedule a closure to be executed after rel_time.

Throws:

Nothing.

#include <boost/thread/executors/scheduler.hpp>
namespace boost {

  template <class Scheduler, class Executor>
  class scheduler_executor_wrapper
  {
  public:
    using work = Scheduler::work;
    using clock = Scheduler::clock;

    scheduler_executor_wrapper(scheduler_executor_wrapper const&) = default;
    scheduler_executor_wrapper(scheduler_executor_wrapper &&) = default;
    scheduler_executor_wrapper& operator=(scheduler_executor_wrapper const&) = default;
    scheduler_executor_wrapper& operator=(scheduler_executor_wrapper &&) = default;

    scheduler_executor_wrapper(Scheduler& sch, Executor& ex);

    ~scheduler_executor_wrapper();

    void close();
    bool closed();

    Executor& underlying_executor();
    Scheduler& underlying_scheduler();

    template <class Closure>
    void submit(Closure&& closure);
    template <class Duration, typename Closure>
    void submit_at(chrono::time_point<clock,Duration> abs_time, Closure&& closure);
    template <class Rep, class Period, typename Closure>
    void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

    template <class Duration>
    resubmit_at_executor<Scheduler, Executor> at(chrono::time_point<clock,Duration> abs_time);
    template <class Rep, class Period>
    resubmit_at_executor<Scheduler, Executor> after(chrono::duration<Rep,Period> rel_time);

  };
}
scheduler_executor_wrapper(Scheduler& sch, Executor& ex);

Effects:

Constructs a scheduler_executor_wrapper.

Throws:

Nothing.

~scheduler_executor_wrapper();

Effects:

Destroys the scheduler_executor_wrapper.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

Scheduler& underlying_scheduler() noexcept;

Return:

The underlying scheduler instance.

Executor& underlying_executor() noexcept;

Return:

The underlying executor instance.

template <typename Closure>
void submit(Closure&& closure);

Effects:

Submit the closure on the underlying executor.

Throws:

Nothing.

template <class Clock, class Duration, typename Closure>
void submit_at(chrono::time_point<Clock,Duration> abs_time, Closure&& closure);

Effects:

Resubmit the closure to be executed on the underlying executor at abs_time.

Throws:

Nothing.

template <class Rep, class Period, typename Closure>
void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

Effects:

Resubmit the closure to be executed on the underlying executor after rel_time.

Throws:

Nothing.

Executor wrapping a Scheduler, an Executor and a time_point, providing an Executor interface.

#include <boost/thread/executors/scheduler.hpp>
namespace boost {

  template <class Scheduler, class Executor>
  class resubmit_at_executor
  {
  public:
    using work = Scheduler::work;
    using clock = Scheduler::clock;

    resubmit_at_executor(resubmit_at_executor const&) = default;
    resubmit_at_executor(resubmit_at_executor &&) = default;
    resubmit_at_executor& operator=(resubmit_at_executor const&) = default;
    resubmit_at_executor& operator=(resubmit_at_executor &&) = default;

    template <class Duration>
    resubmit_at_executor(Scheduler& sch, Executor& ex, chrono::time_point<clock,Duration> const& tp);
    ~resubmit_at_executor();

    void close();
    bool closed();

    Executor& underlying_executor();
    Scheduler& underlying_scheduler();

    template <class Closure>
    void submit(Closure&& closure);
    template <class Duration, typename Closure>
    void submit_at(chrono::time_point<clock,Duration> abs_time, Closure&& closure);
    template <class Rep, class Period, typename Closure>
    void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

  };
}
template <class Duration>
resubmit_at_executor(Scheduler& sch, Executor& ex, chrono::time_point<clock,Duration> const& tp);

Effects:

Constructs a resubmit_at_executor.

Throws:

Nothing.

~resubmit_at_executor();

Effects:

Destroys the resubmit_at_executor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

Executor& underlying_executor() noexcept;

Return:

The underlying executor instance.

Scheduler& underlying_scheduler() noexcept;

Return:

The underlying scheduler instance.

template <typename Closure>
void submit(Closure&& closure);

Effects:

Resubmit the closure to be executed on the underlying executor at the abs_time given at construction time.

Throws:

Nothing.

template <class Clock, class Duration, typename Closure>
void submit_at(chrono::time_point<Clock,Duration> abs_time, Closure&& closure);

Effects:

Resubmit the closure to be executed on the underlying executor at abs_time.

Throws:

Nothing.

template <class Rep, class Period, typename Closure>
void submit_after(chrono::duration<Rep,Period> rel_time, Closure&& closure);

Effects:

Resubmit the closure to be executed on the underlying executor after rel_time.

Throws:

Nothing.

A serial executor ensuring that no two work units execute concurrently.

#include <boost/thread/executors/serial_executor.hpp>
namespace boost {
  template <class Executor>
  class serial_executor
  {
  public:
    serial_executor(serial_executor const&) = delete;
    serial_executor& operator=(serial_executor const&) = delete;

    template <class Executor>
    serial_executor(Executor& ex);

    Executor& underlying_executor() noexcept;

    void close();
    bool closed();

    template <typename Closure>
    void submit(Closure&& closure);

    bool try_executing_one();
    template <typename Pred>
    bool reschedule_until(Pred const& pred);

  };
}
template <class Executor>
serial_executor(Executor& ex);

Effects:

Constructs a serial_executor.

Throws:

Nothing.

~serial_executor();

Effects:

Destroys the serial_executor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

generic_executor_ref& underlying_executor() noexcept;

Return:

The underlying executor instance.

Throws:

Nothing.
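
A sketch of the intended use (the pool size and the log_line function are illustrative): wrapping a parallel executor so that submissions through the wrapper never run concurrently with each other:

#include <boost/thread/executors/basic_thread_pool.hpp>
#include <boost/thread/executors/serial_executor.hpp>

void log_line(int i); // placeholder: assume it must not be called concurrently

void serialize_logging()
{
    boost::basic_thread_pool pool(4);       // underlying, possibly parallel, executor
    boost::serial_executor   serial(pool);  // work submitted here never overlaps

    for (int i = 0; i < 10; ++i)
        serial.submit([i]{ log_line(i); }); // executed one at a time, on the pool
}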

An executor that runs submitted closures immediately in the submitting thread; consequently no two work units submitted through it execute concurrently.

#include <boost/thread/executors/inline_executor.hpp>
namespace boost {
  class inline_executor
  {
  public:
    inline_executor(inline_executor const&) = delete;
    inline_executor& operator=(inline_executor const&) = delete;

    inline_executor();

    void close();
    bool closed();

    template <typename Closure>
    void submit(Closure&& closure);

    bool try_executing_one();
    template <typename Pred>
    bool reschedule_until(Pred const& pred);

  };
}
inline_executor();

Effects:

Constructs an inline_executor.

Throws:

Nothing.

~inline_executor();

Effects:

Destroys the inline_executor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

A thread pool with up to a fixed number of threads.

#include <boost/thread/executors/basic_thread_pool.hpp>
namespace boost {
  class basic_thread_pool
  {
  public:

    basic_thread_pool(basic_thread_pool const&) = delete;
    basic_thread_pool& operator=(basic_thread_pool const&) = delete;

    basic_thread_pool(unsigned const thread_count = thread::hardware_concurrency());
    template <class AtThreadEntry>
    basic_thread_pool( unsigned const thread_count, AtThreadEntry at_thread_entry);
    ~basic_thread_pool();

    void close();
    bool closed();

    template <typename Closure>
    void submit(Closure&& closure);

    bool try_executing_one();

    template <typename Pred>
    bool reschedule_until(Pred const& pred);

  };
}

basic_thread_pool(unsigned const thread_count = thread::hardware_concurrency());
template <class AtThreadEntry>
basic_thread_pool(unsigned const thread_count, AtThreadEntry at_thread_entry);

Effects:

Creates a thread pool that runs closures on thread_count threads.

Throws:

Whatever exception is thrown while initializing the needed resources.

~basic_thread_pool();

Effects:

Interrupts and joins all the threads and then destroys them.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.
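
A minimal usage sketch (the closure count is arbitrary; the destructor semantics are those documented above):

#include <boost/thread/executors/basic_thread_pool.hpp>
#include <boost/atomic.hpp>

void pool_example()
{
    boost::atomic<int> counter(0);
    {
        boost::basic_thread_pool pool;   // defaults to thread::hardware_concurrency() threads
        for (int i = 0; i < 100; ++i)
            pool.submit([&counter]{ ++counter; });
        // leaving the scope runs ~basic_thread_pool(), which joins the worker threads
    }
    // counter holds the number of closures that had completed by the time the pool shut down
}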

A thread_executor that launches a new thread for each submitted task.

#include <boost/thread/executors/thread_executor.hpp>
namespace boost {
  class thread_executor
  {
  public:

    thread_executor(thread_executor const&) = delete;
    thread_executor& operator=(thread_executor const&) = delete;

    thread_executor();
    ~thread_executor();

    void close();
    bool closed();

    template <typename Closure>
    void submit(Closure&& closure);

  };
}

thread_executor();

Effects:

Creates a thread_executor.

Throws:

Whatever exception is thrown while initializing the needed resources.

~thread_executor();

Effects:

Waits for closures (if any) to complete, then joins and destroys the threads.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

A user-scheduled executor: submitted closures run only when the user calls one of its closure-executing methods.

#include <boost/thread/executors/loop_executor.hpp>
namespace boost {
  class loop_executor
  {
  public:

    loop_executor(loop_executor const&) = delete;
    loop_executor& operator=(loop_executor const&) = delete;

    loop_executor();
    ~loop_executor();

    void close();
    bool closed();

    template <typename Closure>
    void submit(Closure&& closure);

    bool try_executing_one();
    template <typename Pred>
    bool reschedule_until(Pred const& pred);

    void loop();
    void run_queued_closures();
  };
}
loop_executor();

Effects:

Creates an executor that runs closures only when one of its closure-executing methods is called.

Throws:

Whatever exception is thrown while initializing the needed resources.

virtual ~loop_executor();

Effects:

Destroys the executor.

Synchronization:

The completion of all the closures happens before the completion of the executor destructor.

void loop();

Effects:

Runs queued closures until the executor is closed and there are no more closures to run.

Throws:

Whatever the current work constructor throws or the work invocation work() throws.

void run_queued_closures();

Effects:

Runs the closures that are currently queued.

Throws:

Whatever the current work constructor throws or the work invocation work() throws.
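
A sketch of driving a loop_executor from the thread that owns it, while another thread submits work (the number of tasks is arbitrary):

#include <boost/thread/executors/loop_executor.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

void loop_executor_demo()
{
    boost::loop_executor ex;   // closures run only where the user says so

    boost::thread producer([&ex]{
        for (int i = 0; i < 3; ++i)
            ex.submit([i]{ std::cout << "task " << i << "\n"; });
        ex.close();            // no more submissions; loop() will eventually return
    });

    ex.loop();                 // run queued closures on this thread until closed
    producer.join();
}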

The futures library provides a means of handling asynchronous future values, whether those values are generated by another thread, or on a single thread in response to external stimuli, or on-demand.

This is done through the provision of four class templates: boost::future and boost::shared_future, which are used to retrieve the asynchronous results, and boost::promise and boost::packaged_task, which are used to generate the asynchronous results.

An instance of future holds the one and only reference to a result. Ownership can be transferred between instances using the move constructor or move-assignment operator, but at most one instance holds a reference to a given asynchronous result. When the result is ready, it is returned from boost::future<R>::get() by rvalue-reference to allow the result to be moved or copied as appropriate for the type.

On the other hand, many instances of boost::shared_future may reference the same result. Instances can be freely copied and assigned, and boost::shared_future<R>::get() returns a const reference so that multiple calls to boost::shared_future<R>::get() are safe. You can move an instance of future into an instance of boost::shared_future, thus transferring ownership of the associated asynchronous result, but not vice-versa.

boost::async is a simple way of running asynchronous tasks. A call to boost::async returns a future that will contain the result of the task.

You can wait for futures either individually or with one of the boost::wait_for_any() and boost::wait_for_all() functions.

You can set the value in a future with either a boost::promise or a boost::packaged_task. A boost::packaged_task is a callable object that wraps a function or callable object. When the packaged task is invoked, it invokes the contained function in turn, and populates a future with the return value. This is an answer to the perennial question: "how do I return a value from a thread?": package the function you wish to run as a boost::packaged_task and pass the packaged task to the thread constructor. The future retrieved from the packaged task can then be used to obtain the return value. If the function throws an exception, that is stored in the future in place of the return value.

int calculate_the_answer_to_life_the_universe_and_everything()
{
    return 42;
}

boost::packaged_task<int> pt(calculate_the_answer_to_life_the_universe_and_everything);
boost::future<int> fi = pt.get_future();

boost::thread task(boost::move(pt)); // launch task on a thread

fi.wait(); // wait for it to finish

assert(fi.is_ready());
assert(fi.has_value());
assert(!fi.has_exception());
assert(fi.get_state()==boost::future_state::ready);
assert(fi.get()==42);

A boost::promise is a bit more low level: it just provides explicit functions to store a value or an exception in the associated future. A promise can therefore be used where the value may come from more than one possible source, or where a single operation may produce multiple values.

boost::promise<int> pi;
boost::future<int> fi;
fi=pi.get_future();

pi.set_value(42);

assert(fi.is_ready());
assert(fi.has_value());
assert(!fi.has_exception());
assert(fi.get_state()==boost::future_state::ready);
assert(fi.get()==42);

Both boost::promise and boost::packaged_task support wait callbacks that are invoked when a thread blocks in a call to wait() or timed_wait() on a future that is waiting for the result from the boost::promise or boost::packaged_task, in the thread that is doing the waiting. These can be set using the set_wait_callback() member function on the boost::promise or boost::packaged_task in question.

This allows lazy futures where the result is not actually computed until it is needed by some thread. In the example below, the call to f.get() invokes the callback invoke_lazy_task, which runs the task to set the value. If you remove the call to f.get(), the task is not ever run.

int calculate_the_answer_to_life_the_universe_and_everything()
{
    return 42;
}

void invoke_lazy_task(boost::packaged_task<int>& task)
{
    try
    {
        task();
    }
    catch(boost::task_already_started&)
    {}
}

int main()
{
    boost::packaged_task<int> task(calculate_the_answer_to_life_the_universe_and_everything);
    task.set_wait_callback(invoke_lazy_task);
    boost::future<int> f(task.get_future());

    assert(f.get()==42);
}

Detached threads pose a problem for objects with thread storage duration. If we use a mechanism other than thread::join() to wait for a thread to complete its work - such as waiting for a future to be ready - then the destructors of thread-specific variables may still be running after the waiting thread has resumed. This section explains how the standard mechanism can be used to make such synchronization safe, by ensuring that the objects with thread storage duration are destroyed prior to the future being made ready. e.g.

int find_the_answer(); // uses thread specific objects
void thread_func(boost::promise<int>&& p)
{
    p.set_value_at_thread_exit(find_the_answer());
}

int main()
{
    boost::promise<int> p;
    boost::thread t(thread_func,boost::move(p));
    t.detach(); // we're going to wait on the future
    std::cout<<p.get_future().get()<<std::endl;
}

When the call to get() returns, we know that not only is the future value ready, but the thread specific variables on the other thread have also been destroyed.

Such mechanisms are provided for boost::condition_variable, boost::promise and boost::packaged_task. e.g.

void task_executor(boost::packaged_task<void(int)> task,int param)
{
    task.make_ready_at_thread_exit(param); // execute stored task
} // destroy thread specific and wake threads waiting on futures from task

Other threads can wait on a future obtained from the task without having to worry about races due to the execution of destructors of the thread specific objects from the task's thread.

boost::condition_variable cv;
boost::mutex m;
complex_type the_data;
bool data_ready;

void thread_func()
{
    boost::unique_lock<boost::mutex> lk(m);
    the_data=find_the_answer();
    data_ready=true;
    boost::notify_all_at_thread_exit(cv,boost::move(lk));
} // destroy thread specific objects, notify cv, unlock mutex

void waiting_thread()
{
    boost::unique_lock<boost::mutex> lk(m);
    while(!data_ready)
    {
        cv.wait(lk);
    }
    process(the_data);
}

The waiting thread is guaranteed that the thread specific objects used by thread_func() have been destroyed by the time process(the_data) is called. If the lock on m is released and re-acquired after setting data_ready and before calling boost::notify_all_at_thread_exit() then this does NOT hold, since the thread may return from the wait due to a spurious wake-up.

boost::async is a simple way of running asynchronous tasks to make use of the available hardware concurrency. A call to boost::async returns a boost::future that will contain the result of the task. Depending on the launch policy, the task is either run asynchronously on its own thread or synchronously on whichever thread calls the wait() or get() member functions on that future.

You can supply a launch policy of either boost::launch::async, which asks the runtime to create an asynchronous thread, or boost::launch::deferred, which indicates you simply want to defer the function call until a later time (lazy evaluation). This argument is optional - if you omit it, your function will use the default policy.
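
For instance, a minimal sketch of the two policies (work is a placeholder function):

#include <boost/thread/future.hpp>

int work(); // placeholder

void launch_policies()
{
    // starts work() on a new thread straight away
    boost::future<int> f1 = boost::async(boost::launch::async, work);

    // defers work() until get()/wait() is called, then runs it in the calling thread
    boost::future<int> f2 = boost::async(boost::launch::deferred, work);

    int sum = f1.get() + f2.get();
    (void)sum;
}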

For example, consider computing the sum of a very large array. The first concern is not to compute asynchronously when the overhead would be significant. The second is to split the work into two pieces, one executed by the host thread and one executed asynchronously.

int parallel_sum(int* data, int size)
{
  int sum = 0;
  if ( size < 1000 )
    for ( int i = 0; i < size; ++i )
      sum += data[i];
  else {
    auto handle = boost::async(parallel_sum, data+size/2, size-size/2);
    sum += parallel_sum(data, size/2);
    sum += handle.get();
  }
  return sum;
}

shared_future is designed to be shared between threads, that is, to allow multiple concurrent get operations.

Multiple get

The second get() call in the following example is undefined.

void bad_second_use( type arg ) {

  auto ftr = async( [=]{ return work( arg ); } );
    if ( cond1 )
    {
        use1( ftr.get() );
    } else
    {
        use2( ftr.get() );
    }
    use3( ftr.get() ); // second use is undefined
}

Using a shared_future solves the issue:

void good_second_use( type arg ) {

   shared_future<type> ftr = async( [=]{ return work( arg ); } );
    if ( cond1 )
    {
        use1( ftr.get() );
    } else
    {
        use2(  ftr.get() );
    }
    use3( ftr.get() ); // second use is defined
}
share()

Naming the return type when declaring the shared_future is needed, since auto is not available within template argument lists. Here, share() can be used to simplify the code:

void better_second_use( type arg ) {

   auto ftr = async( [=]{ return work( arg ); } ).share();
    if ( cond1 )
    {
        use1( ftr.get() );
    } else
    {
        use2(  ftr.get() );
    }
    use3( ftr.get() ); // second use is defined
}
Writing on get()

The user can either read or write the future variable.

void write_to_get( type arg ) {

   auto ftr = async( [=]{ return work( arg ); } ).share();
    if ( cond1 )
    {
        use1( ftr.get() );
    } else
    {
      if ( cond2 )
        use2(  ftr.get() );
      else
        ftr.get() = something(); // assign to non-const reference.  
    }
    use3( ftr.get() ); // second use is defined
}

This works because the shared_future<>::get() function returns a non-const reference to the appropriate storage. Of course, synchronizing access to this storage is the user's responsibility; the library doesn't ensure that access to the internal storage is thread-safe.

There has been some work by the C++ standard committee on an atomic_future that behaves as an atomic variable, that is, thread-safe, and a shared_future that can be shared between several threads, but there was not enough consensus and time to get it ready for C++11.

Some functions may know the value at the point of construction. In these cases the value is immediately available, but needs to be returned as a future or shared_future. By using make_ready_future a future can be created which holds a pre-computed result in its shared state.

Without these features it is non-trivial to create a future directly from a value. First a promise must be created, then the promise is set, and lastly the future is retrieved from the promise. This can now be done with one operation.
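
A sketch of the contrast (the value 42 is arbitrary; make_ready_future is the extension described below):

#include <boost/thread/future.hpp>

// the three-step promise detour described above
boost::future<int> ready_via_promise()
{
    boost::promise<int> p;
    p.set_value(42);
    return p.get_future();
}

// with the extension, a single call suffices
boost::future<int> ready_via_factory()
{
    return boost::make_ready_future(42);
}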

make_ready_future

This function creates a future for a given value. If no value is given then a future<void> is returned. This function is primarily useful in cases where the return value is sometimes immediately available and sometimes not. The example below illustrates that in an error path the value is known immediately, whereas in other paths the function must return an eventual value represented as a future.

boost::future<int> compute(int x)
{
  if (x == 0) return boost::make_ready_future(0);
  if (x < 0) return boost::make_ready_future<int>(std::logic_error("Error"));
  boost::future<int> f1 = boost::async([x]() { return x+1; });
  return f1;
}

There are two variations of this function. The first takes a value of any type, and returns a future of that type. The input value is passed to the shared state of the returned future. The second version takes no input and returns a future<void>.

In asynchronous programming, it is very common for one asynchronous operation, on completion, to invoke a second operation and pass data to it. The current C++ standard does not allow one to register a continuation to a future. With .then, instead of waiting for the result, a continuation is "attached" to the asynchronous operation, which is invoked when the result is ready. Continuations registered using the .then function will help to avoid blocking waits or wasting threads on polling, greatly improving the responsiveness and scalability of an application.

future.then() provides the ability to sequentially compose two futures by declaring one to be the continuation of another. With .then() the antecedent future is ready (has a value or exception stored in the shared state) before the continuation starts as instructed by the lambda function.

In the example below the future<string> f2 is registered to be a continuation of future<int> f1 using the .then() member function. This operation takes a lambda function which describes how f2 should proceed after f1 is ready.

#include <boost/thread/future.hpp>
#include <string>
using namespace boost;
int main()
{
  future<int> f1 = async([]() { return 123; });
  future<std::string> f2 = f1.then([](future<int> f) {
    return std::to_string(f.get()); // here .get() won't block
  });
}

One key feature of this function is the ability to chain multiple asynchronous operations. In asynchronous programming, it's common to define a sequence of operations, in which each continuation executes only when the previous one completes. In some cases, the antecedent future produces a value that the continuation accepts as input. By using future.then(), creating a chain of continuations becomes straightforward and intuitive:

myFuture.then(...).then(...).then(...).

Some points to note are:

  • Each continuation will not begin until the preceding has completed.
  • If an exception is thrown, the following continuation can handle it in a try-catch block.

Input Parameters:

  • Lambda function: One option which can be considered is to take two functions, one for success and one for error handling. However this option has not been retained for the moment. The lambda function takes a future as its input which carries the exception through. This makes propagating exceptions straightforward. This approach also simplifies the chaining of continuations.
  • Executor: Providing an overload of .then that takes an executor reference places great flexibility over the execution of the future in the programmer's hands. As described above, taking a launch policy alone is often not sufficient for powerful asynchronous operations. The executor must outlive the continuation.
  • Launch policy: if the additional flexibility that the executor provides is not required.

Return values: The decision to return a future was based primarily on the ability to chain multiple continuations using .then(). This benefit of composability gives the programmer incredible control and flexibility over their code. Returning a future object rather than a shared_future is also a much cheaper operation thereby improving performance. A shared_future object is not necessary to take advantage of the chaining feature. It is also easy to go from a future to a shared_future when needed using future::share().

#include <boost/thread/future.hpp>

namespace boost
{
  namespace future_state  // EXTENSION
  {
    enum state {uninitialized, waiting, ready, moved};
  }

  enum class future_errc
  {
    broken_promise,
    future_already_retrieved,
    promise_already_satisfied,
    no_state
  };

  enum class launch
  {
    none = unspecified,
    async = unspecified,
    deferred = unspecified,
    executor = unspecified,
    inherit = unspecified,
    any = async | deferred
  };

  enum class future_status {
    ready,  timeout, deferred
  };

  namespace system
  {
    template <>
    struct is_error_code_enum<future_errc> : public true_type {};

    error_code make_error_code(future_errc e);

    error_condition make_error_condition(future_errc e);
  }

  const system::error_category& future_category();

  class future_error;

  class exceptional_ptr;

  template <typename R>
  class promise;

  template <typename R>
  void swap(promise<R>& x, promise<R>& y) noexcept;

  namespace container {
    template <class R, class Alloc>
    struct uses_allocator<promise<R>, Alloc> : true_type {};
  }

  template <typename R>
  class future;

  template <typename R>
  class shared_future;

  template <typename S>
  class packaged_task;
  template <class S> void swap(packaged_task<S>&, packaged_task<S>&) noexcept;

  template <class S, class Alloc>
  struct uses_allocator<packaged_task <S>, Alloc>;

  template <class F>
    future<typename result_of<typename decay<F>::type()>::type>
    async(F f);
  template <class F>
    future<typename result_of<typename decay<F>::type()>::type>
    async(launch policy, F f);

  template <class F, class... Args>
    future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type>
    async(F&& f, Args&&... args);
  template <class F, class... Args>
    future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type>
    async(launch policy, F&& f, Args&&... args);
  template <class Executor, class F, class... Args>
    future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type>
    async(Executor &ex, F&& f, Args&&... args);

  template<typename Iterator>
    void wait_for_all(Iterator begin,Iterator end); // EXTENSION
  template<typename F1,typename... FS>
    void wait_for_all(F1& f1,Fs&... fs); // EXTENSION

  template<typename Iterator>
    Iterator wait_for_any(Iterator begin,Iterator end); // EXTENSION
  template<typename F1,typename... Fs>
    unsigned wait_for_any(F1& f1,Fs&... fs); // EXTENSION

  template <class InputIterator>
    future<std::vector<typename InputIterator::value_type::value_type>>
    when_all(InputIterator first, InputIterator last);
  template <typename... T>
    future<std::tuple<decay_t<T>...>> when_all(T&&... futures);
  template <class InputIterator>
    future<std::vector<typename InputIterator::value_type::value_type>>
    when_any(InputIterator first, InputIterator last); // EXTENSION
  template <typename... T>
    future<std::tuple<decay_t<T>...>> when_any(T&&... futures);

  template <typename T>
    future<typename decay<T>::type> make_future(T&& value);  // DEPRECATED
  future<void> make_future();  // DEPRECATED

  template <typename T>
    future<typename decay<T>::type> make_ready_future(T&& value);  // EXTENSION
  future<void> make_ready_future();  // EXTENSION

  exceptional_ptr make_exceptional_future(exception_ptr ex);  // EXTENSION
  template <typename E>
    exceptional_ptr make_exceptional_future(E ex);  // EXTENSION
  exceptional_ptr make_exceptional_future();  // EXTENSION


  template <typename T>
  shared_future<typename decay<T>::type> make_shared_future(T&& value);  // DEPRECATED
  shared_future<void> make_shared_future();  // DEPRECATED
}

namespace future_state
{
  enum state {uninitialized, waiting, ready, moved};
}
 enum class future_errc
 {
   broken_promise = implementation defined,
   future_already_retrieved = implementation defined,
   promise_already_satisfied = implementation defined,
   no_state = implementation defined
 }


The enum values of future_errc are distinct and not zero.
enum class launch
{
  none = unspecified,
  async = unspecified,
  deferred = unspecified,
  executor = unspecified,
  inherit = unspecified,
  any = async | deferred
};

The enum type launch is a bitmask type with launch::async and launch::deferred denoting individual bits.

A future created with promise<>, with packaged_task<>, or with make_ready_future/make_exceptional_future has no associated launch policy; it behaves as if it had an implicit launch policy of launch::none.

A future created by async(launch::async, ...) or ::then(launch::async, ...) has an associated launch policy of launch::async. A future created by async(launch::deferred, ...) or ::then(launch::deferred, ...) has an associated launch policy of launch::deferred. A future created by async(Executor, ...) or ::then(Executor, ...) or ::then(launch::executor, ...) has an associated launch policy of launch::executor. A future created by async(...) or ::then(...) has an associated launch policy of launch::none.

A future created by ::then(launch::inherit, ...) inherits the launch policy of the parent future.

The executor and inherit launch policies only make sense when used with then().
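
A sketch of how these two policies are typically used (the pool size and the arithmetic are illustrative; the continuation and executor features are experimental extensions):

#include <boost/thread/future.hpp>
#include <boost/thread/executors/basic_thread_pool.hpp>

void continuation_policies()
{
    boost::basic_thread_pool pool(2);

    // both the task and its continuation run on the pool (launch::executor)
    boost::future<int> f = boost::async(pool, []{ return 40; });
    boost::future<int> g = f.then(pool, [](boost::future<int> h) {
        return h.get() + 2;   // h is ready here, get() does not block
    });

    // this continuation inherits the launch policy of its parent
    boost::future<int> j = g.then(boost::launch::inherit,
                                  [](boost::future<int> h) { return h.get() * 2; });
    j.get();
}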

namespace system
{
  template <>
  struct is_error_code_enum<future_errc> : public true_type {};

}
namespace system
{
  error_code make_error_code(future_errc e);
}

Returns:

error_code(static_cast<int>(e), future_category()).

namespace system
{
  error_condition make_error_condition(future_errc e);
}

Returns:

error_condition(static_cast<int>(e), future_category()).

const system::error_category& future_category();

Returns:

A reference to an object of a type derived from class error_category.

Notes:

The object's default_error_condition and equivalent virtual functions behave as specified for the class system::error_category. The object's name virtual function returns a pointer to the string "future".

class future_error
    : public std::logic_error
{
public:
    future_error(system::error_code ec);

    const system::error_code& code() const noexcept;
};
future_error(system::error_code ec);

Effects:

Constructs a future_error.

Postconditions:

code()==ec

Throws:

Nothing.

const system::error_code& code() const noexcept;

Returns:

The value of ec that was passed to the object's constructor.

enum class future_status {
  ready,  timeout, deferred
};
class exceptional_ptr
{
public:
  exceptional_ptr();
  explicit exceptional_ptr(exception_ptr ex);
  template <class E>
  explicit exceptional_ptr(E&& ex);
};
exceptional_ptr();
explicit exceptional_ptr(exception_ptr ex);
template <class E>
explicit exceptional_ptr(E&& ex);

Effects:

The exception passed to the constructor (or the current exception, if no parameter is given) is moved into the constructed exceptional_ptr if it is an rvalue; otherwise it is copied into the constructed exceptional_ptr.

Postconditions:

valid() == true && is_ready() == true && has_value() == false

Throws:

Nothing.

template <typename R>
class future
{
public:
  typedef R value_type;  // EXTENSION

  future(future const& rhs) = delete;
  future& operator=(future const& rhs) = delete;

  future() noexcept;
  ~future();

  // move support
  future(future&& other) noexcept;
  explicit future(future<future<R>>&& rhs);  // EXTENSION
  future& operator=(future&& other) noexcept;

  // factories
  shared_future<R> share();

  template<typename F>
  future<typename boost::result_of<F(future)>::type>
  then(F&& func); // EXTENSION
  template<typename Ex, typename F>
  future<typename boost::result_of<F(future)>::type>
  then(Ex& executor, F&& func); // EXTENSION
  template<typename F>
  future<typename boost::result_of<F(future)>::type>
  then(launch policy, F&& func); // EXTENSION

  see below unwrap();  // EXTENSION
  future fallback_to();  // EXTENSION

  void swap(future& other) noexcept;

  // retrieving the value
  see below get();
  see below get_or(see below);  // EXTENSION

  exception_ptr get_exception_ptr(); // EXTENSION

  // functions to check state
  bool valid() const noexcept;
  bool is_ready() const; // EXTENSION
  bool has_exception() const; // EXTENSION
  bool has_value() const; // EXTENSION

  // waiting for the result to be ready
  void wait() const;
  template <class Rep, class Period>
  future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
  template <class Clock, class Duration>
  future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;

#if defined BOOST_THREAD_USES_DATE_TIME
  template<typename Duration>
  bool timed_wait(Duration const& rel_time) const; // DEPRECATED SINCE V3.0.0
  bool timed_wait_until(boost::system_time const& abs_time) const; // DEPRECATED SINCE V3.0.0
#endif
  typedef future_state::state state;  // EXTENSION
  state get_state() const;  // EXTENSION
};
future();

Effects:

Constructs an uninitialized future.

Postconditions:

this->is_ready() returns false. this->get_state() returns boost::future_state::uninitialized.

Throws:

Nothing.

~future();

Effects:

Destroys *this.

Throws:

Nothing.

future(future&& other);

Effects:

Constructs a new future, and transfers ownership of the shared state associated with other to *this.

Postconditions:

this->get_state() returns the value of other->get_state() prior to the call. other->get_state() returns boost::future_state::uninitialized. If other was associated with a shared state, that result is now associated with *this. other is not associated with any shared state.

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

explicit future(future<future<R>>&& other);  // EXTENSION
[Warning] Warning

This constructor is experimental and subject to change in future versions. There are not many tests yet, so it is possible that you will find some trivial bugs :(

Requires:

other.valid().

Effects:

Constructs a new future and transfers ownership of the shared state associated with other to *this, unwrapping the inner future (see unwrap()).

Postconditions:

this->get_state() returns the value of other->get_state() prior to the call. other->get_state() returns boost::future_state::uninitialized. The associated shared state is now unwrapped and the inner future shared state is associated with *this. other is not associated with any shared state, ! other.valid().

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

future& operator=(future&& other);

Effects:

Transfers ownership of the shared state associated with other to *this.

Postconditions:

this->get_state() returns the value of other->get_state() prior to the call. other->get_state() returns boost::future_state::uninitialized. If other was associated with a shared state, that result is now associated with *this. other is not associated with any shared state. If *this was associated with an asynchronous result prior to the call, that result no longer has an associated future instance.

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

void swap(future& other) noexcept;

Effects:

Swaps ownership of the shared states associated with other and *this.

Postconditions:

this->get_state() returns the value of other->get_state() prior to the call. other->get_state() returns the value of this->get_state() prior to the call. If other was associated with a shared state, that result is now associated with *this, otherwise *this has no associated result. If *this was associated with a shared state, that result is now associated with other, otherwise other has no associated result.

Throws:

Nothing.

R get();
R& future<R&>::get();
void future<void>::get();

Effects:

If *this is associated with a shared state, waits until the result is ready as-if by a call to boost::future<R>::wait(), and retrieves the result (whether that is a value or an exception).

Returns:

- future<R&>::get() returns the stored reference.

- future<void>::get() has no return value.

- future<R>::get() returns an rvalue-reference to the value stored in the shared state.

Postconditions:

this->is_ready() returns true. this->get_state() returns boost::future_state::ready.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception stored in the shared state in place of a value.

Notes:

get() is an interruption point.

R get_or(R&& v); // EXTENSION
R get_or(R const& v);  // EXTENSION
R& future<R&>::get_or(R& v);  // EXTENSION
void future<void>::get_or();  // EXTENSION
[Warning] Warning

These functions are experimental and subject to change in future versions. There are not many tests yet, so it is possible that you will find some trivial bugs :(

Effects:

If *this is associated with a shared state, waits until the result is ready as-if by a call to boost::future<R>::wait(), and then retrieves the result depending on whether the shared state has a value (has_value()).

Returns:

- future<R&>::get_or(v) returns the stored reference if has_value(), and the passed parameter otherwise.

- future<void>::get_or() has no return value, but the function doesn't throw even if the shared state contains an exception.

- future<R>::get_or(v) returns an rvalue-reference to the value stored in the shared state if has_value(), and an rvalue-reference built from the parameter v otherwise.

Postconditions:

this->is_ready() returns true. this->get_state() returns boost::future_state::ready.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

Notes:

get_or() is an interruption point.

void wait() const;

Effects:

If *this is associated with a shared state, waits until the result is ready. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

this->is_ready() returns true. this->get_state() returns boost::future_state::ready.

Notes:

wait() is an interruption point.

template<typename Duration>
bool timed_wait(Duration const& wait_duration);
[Warning] Warning

DEPRECATED since 3.00.

Use wait_for() instead.

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time specified by wait_duration has elapsed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

true if *this is associated with a shared state, and that result is ready before the specified time has elapsed, false otherwise.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned true, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

timed_wait() is an interruption point. Duration must be a type that meets the Boost.DateTime time duration requirements.

bool timed_wait(boost::system_time const& wait_timeout);
[Warning] Warning

DEPRECATED since 3.00.

Use wait_until() instead.

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time point specified by wait_timeout has passed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

true if *this is associated with a shared state, and that result is ready before the specified time has passed, false otherwise.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned true, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

timed_wait() is an interruption point.

template <class Rep, class Period>
future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time specified by wait_duration has elapsed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

- future_status::deferred if the shared state contains a deferred function. (Not implemented yet)

- future_status::ready if the shared state is ready.

- future_status::timeout if the function is returning because the relative timeout specified by rel_time has expired.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned future_status::ready, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

wait_for() is an interruption point.

template <class Clock, class Duration>
future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time point specified by wait_timeout has passed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

- future_status::deferred if the shared state contains a deferred function. (Not implemented yet)

- future_status::ready if the shared state is ready.

- future_status::timeout if the function is returning because the absolute timeout specified by abs_time has been reached.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned future_status::ready, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

wait_until() is an interruption point.

bool valid() const noexcept;

Returns:

true if *this is associated with a shared state, false otherwise.

Remarks:

The result of this function is not stable: the future could become invalid even though the function returned true, or vice-versa.

Throws:

Nothing.

bool is_ready() const;

Returns:

true if *this is associated with a shared state and that result is ready for retrieval, false otherwise.

Remarks:

The result of this function is not stable: the future could become not ready even though the function returned true, or vice-versa.

Throws:

Nothing.

bool has_value() const;

Returns:

true if *this is associated with a shared state, that result is ready for retrieval, and the result is a stored value, false otherwise.

Remarks:

The result of this function is not stable: the future could lose its value even though the function returned true, or vice-versa.

Throws:

Nothing.

bool has_exception() const;

Returns:

true if *this is associated with a shared state, that result is ready for retrieval, and the result is a stored exception, false otherwise.

Remarks:

The result of this function is not stable: the future could lose its exception even though the function returned true, or vice-versa.

Throws:

Nothing.

exception_ptr get_exception_ptr();

Effects:

If *this is associated with a shared state, waits until the result is ready. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

An exception_ptr, which may or may not store an exception.

Remarks:

The result of this function is not stable: the future could lose its exception even though the function returned a valid exception_ptr, or vice-versa.

Throws:

Whatever mutex::lock()/mutex::unlock() can throw.

future_state::state get_state();

Effects:

Determine the state of the shared state associated with *this, if any.

Returns:

boost::future_state::uninitialized if *this is not associated with a shared state. boost::future_state::ready if the shared state associated with *this is ready for retrieval, boost::future_state::waiting otherwise.

Remarks:

The result of this function is not stable.

Throws:

Nothing.

shared_future<R> share();

Returns:

shared_future<R>(boost::move(*this)).

Postconditions:

this->valid() == false.

template<typename F>
future<typename boost::result_of<F(future)>::type>
then(F&& func); // EXTENSION
template<typename Ex, typename F>
future<typename boost::result_of<F(future)>::type>
then(Ex& executor, F&& func); // EXTENSION
template<typename F>
future<typename boost::result_of<F(future)>::type>
then(launch policy, F&& func); // EXTENSION
[Warning] Warning

These functions are experimental and subject to change in future versions. There are not many tests yet, so it is possible that you will find some trivial bugs :(

[Note] Note

These functions are based on the N3634 - Improvements to std::future<T> and related APIs C++1y proposal by N. Gustafsson, A. Laksberg, H. Sutter, S. Mithani.

Notes:

The three functions differ only by input parameters. The first only takes a callable object which accepts a future object as a parameter. The second function takes an executor as the first parameter and a callable object as the second parameter. The third function takes a launch policy as the first parameter and a callable object as the second parameter.

Requires:

INVOKE(DECAY_COPY (std::forward<F>(func)), std::move(*this)) shall be a valid expression.

Effects:

All the functions create a shared state that is associated with the returned future object. Additionally,

- When the object's shared state is ready, the continuation INVOKE(DECAY_COPY(std::forward<F>(func)), std::move(*this)) is called depending on the overload (see below) with the call to DECAY_COPY() being evaluated in the thread that called then.

- Any value returned from the continuation is stored as the result in the shared state of the resulting future. Any exception propagated from the execution of the continuation is stored as the exceptional result in the shared state of the resulting future.

The continuation is launched according to the specified launch policy or executor, if any.

- When the launch policy is launch::none the continuation is called on an unspecified thread of execution.

- When the launch policy is launch::async the continuation is called on a new thread of execution.

- When the launch policy is launch::deferred the continuation is called on demand.

- When the launch policy is launch::executor the continuation is called on one of the executor's threads of execution.

- When the launch policy is launch::inherit the continuation inherits the parent's launch policy or executor.

- When neither an executor nor a launch policy is provided (first overload), it is as if launch::none had been specified.

- When the executor is provided (second overload) the continuation is called on one of the executor's threads of execution.

- If the parent has a policy of launch::deferred and the continuation does not have a specified launch policy or executor, then the parent is filled by immediately calling .wait(), and the policy of the antecedent is launch::deferred.

Returns:

An object of type future<typename boost::result_of<F( future)>::type> that refers to the shared state created by the continuation.

Notes:

- Note that nested futures are not implicitly unwrapped yet. This could be subject to change in future versions.

- The returned futures behave like the ones returned from boost::async: the destructor of the future object returned from then will block. This could be subject to change in future versions.

Postconditions:

- The future object passed as the parameter of the continuation function is move-constructed from the original future.

- valid() == false on original future; valid() == true on the future returned from then.
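
For illustration, a minimal sketch of attaching a continuation (assuming a C++11 compiler and that boost::future and its then() extension are enabled, e.g. by defining BOOST_THREAD_VERSION to 4 before including the header):

#define BOOST_THREAD_VERSION 4   // assumption: enables boost::future and the then() extension
#include <boost/thread/future.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::future<int> f1 = boost::async([] { return 123; });
    // The continuation receives the (ready) antecedent future as its parameter.
    boost::future<std::string> f2 = f1.then([](boost::future<int> f) {
        return std::to_string(f.get());
    });
    std::cout << f2.get() << std::endl; // prints 123
    return 0;
}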

template <typename R2>
 future<R2>  future< future<R2>>::unwrap();  // EXTENSION
template <typename R2>
 boost::shared_future<R2>  future< boost::shared_future<R2>>::unwrap();  // EXTENSION
[Warning] Warning

These functions are experimental and subject to change in future versions. There are not many tests yet, so it is possible that you will find some trivial bugs.

[Note] Note

These functions are based on the N3634 - Improvements to std::future<T> and related APIs C++1y proposal by N. Gustafsson, A. Laksberg, H. Sutter, S. Mithani.

Notes:

Removes the outermost future and returns a future whose associated state is a proxy for the outer future.

Effects:

- Returns a future that becomes ready when the shared state of the outer and inner future is ready. The validity of the future returned from get() applied on the outer future cannot be established a priori. If it is not valid, this future is forced to be valid and becomes ready with an exception of type future_error, with an error code of future_errc::broken_promise.

Returns:

An object of type future whose associated state is a proxy for the outer future.

Postconditions:

- The returned future has valid() == true.
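
A minimal sketch of unwrap() on a nested future (assuming a C++11 compiler and the future-related extensions enabled, e.g. BOOST_THREAD_VERSION 4):

#define BOOST_THREAD_VERSION 4   // assumption: enables boost::future and the unwrap() extension
#include <boost/thread/future.hpp>
#include <iostream>

int main()
{
    // An asynchronous task that itself returns a future.
    boost::future<boost::future<int> > outer = boost::async([] {
        return boost::async([] { return 42; });
    });
    // unwrap() removes the outer layer: the result is ready when the inner future is ready.
    boost::future<int> inner = outer.unwrap();
    std::cout << inner.get() << std::endl; // prints 42
    return 0;
}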

template <typename R>
class shared_future
{
public:
  typedef future_state::state state; // EXTENSION
  typedef R value_type;  // EXTENSION

  shared_future() noexcept;
  ~shared_future();

  // copy support
  shared_future(shared_future const& other);
  shared_future& operator=(shared_future const& other);

  // move support
  shared_future(shared_future && other) noexcept;
  shared_future( future<R> && other) noexcept;
  shared_future& operator=(shared_future && other) noexcept;
  shared_future& operator=( future<R> && other) noexcept;

  // factories
  template<typename F>
   future<typename boost::result_of<F(shared_future)>::type>
  then(F&& func) const; // EXTENSION
  template<typename S, typename F>
   future<typename boost::result_of<F(shared_future)>::type>
  then(S& scheduler, F&& func) const; // EXTENSION
  template<typename F>
   future<typename boost::result_of<F(shared_future)>::type>
  then(launch policy, F&& func) const; // EXTENSION

  void swap(shared_future& other);

  // retrieving the value
  see below get() const;

  exception_ptr get_exception_ptr(); // EXTENSION

  // functions to check state, and wait for ready
  bool valid() const noexcept;
  bool is_ready() const noexcept; // EXTENSION
  bool has_exception() const noexcept; // EXTENSION
  bool has_value() const noexcept; // EXTENSION

  // waiting for the result to be ready
  void wait() const;
  template <class Rep, class Period>
  future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;
  template <class Clock, class Duration>
  future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;

#if defined BOOST_THREAD_USES_DATE_TIME || defined BOOST_THREAD_DONT_USE_CHRONO
  template<typename Duration>
  bool timed_wait(Duration const& rel_time) const;  // DEPRECATED SINCE V3.0.0
  bool timed_wait_until(boost::system_time const& abs_time) const;  // DEPRECATED SINCE V3.0.0
#endif
  state get_state() const noexcept;  // EXTENSION

};
shared_future();

Effects:

Constructs an uninitialized shared_future.

Postconditions:

this->is_ready() returns false. this->get_state() returns boost::future_state::uninitialized.

Throws:

Nothing.

const R& get() const;
R& get() const;
void get() const;

Effects:

If *this is associated with a shared state, waits until the result is ready as-if by a call to boost::shared_future<R>::wait(), and returns a const reference to the result.

Returns:

- shared_future<R&>::get() returns the stored reference.

- shared_future<void>::get(), there is no return value.

- shared_future<R>::get() returns a const reference to the value stored in the shared state.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

Notes:

get() is an interruption point.
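
A minimal sketch of several threads reading the same result through copies of one shared_future (assuming a C++11 compiler; BOOST_THREAD_VERSION 4 is an illustrative configuration choice):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

int main()
{
    boost::promise<int> p;
    boost::shared_future<int> sf = p.get_future().share();

    // Each thread holds its own copy of the shared_future and may call get().
    boost::thread t1([sf] { std::cout << "t1 sees " << sf.get() << std::endl; });
    boost::thread t2([sf] { std::cout << "t2 sees " << sf.get() << std::endl; });

    p.set_value(7); // makes the shared state ready and wakes both waiters
    t1.join();
    t2.join();
    return 0;
}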

void wait() const;

Effects:

If *this is associated with a shared state, waits until the result is ready. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

this->is_ready() returns true. this->get_state() returns boost::future_state::ready.

Notes:

wait() is an interruption point.

template<typename Duration>
bool timed_wait(Duration const& wait_duration);

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time specified by wait_duration has elapsed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

true if *this is associated with a shared state, and that result is ready before the specified time has elapsed, false otherwise.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned true, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

timed_wait() is an interruption point. Duration must be a type that meets the Boost.DateTime time duration requirements.

bool timed_wait(boost::system_time const& wait_timeout);

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time point specified by wait_timeout has passed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

true if *this is associated with a shared state, and that result is ready before the specified time has passed, false otherwise.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned true, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

timed_wait() is an interruption point.

template <class Rep, class Period>
future_status wait_for(const chrono::duration<Rep, Period>& rel_time) const;

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time specified by rel_time has elapsed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

- future_status::deferred if the shared state contains a deferred function. (Not implemented yet)

- future_status::ready if the shared state is ready.

- future_status::timeout if the function is returning because the relative timeout specified by rel_time has expired.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned future_status::ready, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

wait_for() is an interruption point.

template <class Clock, class Duration>
future_status wait_until(const chrono::time_point<Clock, Duration>& abs_time) const;

Effects:

If *this is associated with a shared state, waits until the result is ready, or the time point specified by abs_time has passed. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

- future_status::deferred if the shared state contains a deferred function. (Not implemented yet)

- future_status::ready if the shared state is ready.

- future_status::timeout if the function is returning because the absolute timeout specified by abs_time has been reached.

Throws:

- boost::future_uninitialized if *this is not associated with a shared state.

- boost::thread_interrupted if the result associated with *this is not ready at the point of the call, and the current thread is interrupted.

- Any exception thrown by the wait callback if such a callback is called.

Postconditions:

If this call returned future_status::ready, then this->is_ready() returns true and this->get_state() returns boost::future_state::ready.

Notes:

wait_until() is an interruption point.

bool valid() const noexcept;

Returns:

true if *this is associated with a shared state, false otherwise.

Throws:

Nothing.

bool is_ready() const;

Returns:

true if *this is associated with a shared state, and that result is ready for retrieval, false otherwise.

Throws:

Whatever mutex::lock()/mutex::unlock() can throw.

bool has_value() const;

Returns:

true if *this is associated with a shared state, that result is ready for retrieval, and the result is a stored value, false otherwise.

Throws:

Whatever mutex::lock()/mutex::unlock() can throw.

bool has_exception() const;

Returns:

true if *this is associated with a shared state, that result is ready for retrieval, and the result is a stored exception, false otherwise.

Throws:

Whatever mutex::lock()/mutex::unlock() can throw.

exception_ptr get_exception_ptr();

Effects:

If *this is associated with a shared state, waits until the result is ready. If the result is not ready on entry, and the result has a wait callback set, that callback is invoked prior to waiting.

Returns:

an exception_ptr that may or may not store an exception.

Throws:

Whatever mutex::lock()/mutex::unlock() can throw.

future_state::state get_state();

Effects:

Determine the state of the shared state associated with *this, if any.

Returns:

boost::future_state::uninitialized if *this is not associated with a shared state. boost::future_state::ready if the shared state associated with *this is ready for retrieval, boost::future_state::waiting otherwise.

Throws:

Whatever mutex::lock()/mutex::unlock() can throw.

template<typename F>
 future<typename boost::result_of<F(shared_future)>::type>
then(F&& func) const; // EXTENSION
template<typename Ex, typename F>
 future<typename boost::result_of<F(shared_future)>::type>
then(Ex& executor, F&& func) const; // EXTENSION
template<typename F>
 future<typename boost::result_of<F(shared_future)>::type>
then(launch policy, F&& func) const; // EXTENSION
[Warning] Warning

These functions are experimental and subject to change in future versions. There are not many tests yet, so it is possible that you will find some trivial bugs.

[Note] Note

These functions are based on the N3634 - Improvements to std::future<T> and related APIs C++1y proposal by N. Gustafsson, A. Laksberg, H. Sutter, S. Mithani.

Notes:

The three functions differ only by input parameters. The first only takes a callable object which accepts a shared_future object as a parameter. The second function takes an executor as the first parameter and a callable object as the second parameter. The third function takes a launch policy as the first parameter and a callable object as the second parameter.

Requires:

INVOKE(DECAY_COPY (std::forward<F>(func)), *this) shall be a valid expression.

Effects:

All the functions create a shared state that is associated with the returned future object. Additionally,

- When the object's shared state is ready, the continuation INVOKE(DECAY_COPY(std::forward<F>(func)), *this) is called depending on the overload (see below) with the call to DECAY_COPY() being evaluated in the thread that called then.

- Any value returned from the continuation is stored as the result in the shared state of the resulting future. Any exception propagated from the execution of the continuation is stored as the exceptional result in the shared state of the resulting future.

The continuation is launched according to the specified launch policy or executor, if any:

- When the launch policy is launch::none the continuation is called on an unspecified thread of execution.

- When the launch policy is launch::async the continuation is called on a new thread of execution.

- When the launch policy is launch::deferred the continuation is called on demand.

- When the launch policy is launch::executor the continuation is called on one of the executor's threads of execution.

- When the launch policy is launch::inherit the continuation inherits the parent's launch policy or executor.

- When neither an executor nor a launch policy is provided (first overload), it is as if launch::none had been specified.

- When the executor is provided (second overload) the continuation is called on one of the executor's threads of execution.

- If the parent has a policy of launch::deferred and the continuation does not have a specified launch policy or executor, then the parent is filled by immediately calling .wait(), and the policy of the antecedent is launch::deferred.

Returns:

An object of type future<typename boost::result_of<F(shared_future)>::type> that refers to the shared state created by the continuation.

Notes:

- Note that nested futures are not implicitly unwrapped yet. This could be subject to change in future versions.

- The returned futures behave like the ones returned from boost::async: the destructor of the future object returned from then will block. This could be subject to change in future versions.

Postconditions:

- The shared_future object passed as the parameter of the continuation function is a copy of the original shared_future.

- valid() == true on original shared_future; valid() == true on the future returned from then.
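
A minimal sketch showing that, unlike future::then, several continuations can be attached to the same shared_future and the original stays valid (assuming a C++11 compiler and BOOST_THREAD_VERSION 4 as an illustrative configuration choice):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <iostream>

int main()
{
    boost::shared_future<int> sf = boost::async([] { return 10; }).share();

    boost::future<int> c1 = sf.then([](boost::shared_future<int> f) { return f.get() + 1; });
    boost::future<int> c2 = sf.then([](boost::shared_future<int> f) { return f.get() + 2; });

    std::cout << c1.get() << " " << c2.get() << std::endl; // prints 11 12
    // sf.valid() is still true here.
    return 0;
}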

template <typename R>
class promise
{
public:
  typedef R value_type;  // EXTENSION

  promise();
  template <class Allocator>
  promise(allocator_arg_t, Allocator a);
  promise & operator=(promise const& rhs) = delete;
  promise(promise const& rhs) = delete;
  ~promise();

  // Move support
  promise(promise && rhs) noexcept;
  promise & operator=(promise&& rhs) noexcept;

  void swap(promise& other) noexcept;
  // Result retrieval
   future<R> get_future();

  // Set the value
  void set_value(see below);
  void set_exception(boost::exception_ptr e);
  template <typename E>
  void set_exception(E e); // EXTENSION

  // setting the result with deferred notification
  void set_value_at_thread_exit(see below);
  void set_exception_at_thread_exit(exception_ptr p);
  template <typename E>
  void set_exception_at_thread_exit(E p);  // EXTENSION

  template<typename F>
  void set_wait_callback(F f); // EXTENSION

  void set_value_deferred(see below);  // EXTENSION
  void set_exception_deferred(exception_ptr p);  // EXTENSION
  template <typename E>
  void set_exception_deferred(E e); // EXTENSION
  void notify_deferred(); // EXTENSION

};
promise();

Effects:

Constructs a new boost::promise with no associated result.

Throws:

Nothing.

template <class Allocator>
promise(allocator_arg_t, Allocator a);

Effects:

Constructs a new boost::promise with no associated result using the allocator a.

Throws:

Nothing.

Notes:

Available only if BOOST_THREAD_FUTURE_USES_ALLOCATORS is defined.

promise(promise && other);

Effects:

Constructs a new boost::promise, and transfers ownership of the result associated with other to *this, leaving other with no associated result.

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

promise& operator=(promise && other);

Effects:

Transfers ownership of the result associated with other to *this, leaving other with no associated result. If there was already a result associated with *this, and that result was not ready, sets any futures associated with that result to ready with a boost::broken_promise exception as the result.

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

~promise();

Effects:

Destroys *this. If there was a result associated with *this, and that result is not ready, sets any futures associated with that task to ready with a boost::broken_promise exception as the result.

Throws:

Nothing.

 future<R> get_future();

Effects:

If *this was not associated with a result, allocate storage for a new shared state and associate it with *this. Returns a future associated with the result associated with *this.

Throws:

boost::future_already_retrieved if the future associated with the task has already been retrieved. std::bad_alloc if any memory necessary could not be allocated.

void set_value(R&& r);
void set_value(const R& r);
void promise<R&>::set_value(R& r);
void promise<void>::set_value();

Effects:

- If BOOST_THREAD_PROVIDES_PROMISE_LAZY is defined and if *this was not associated with a result, allocate storage for a new shared state and associate it with *this.

- Store the value r in the shared state associated with *this. Any threads blocked waiting for the asynchronous result are woken.

Postconditions:

All futures waiting on the shared state are ready and boost::future<R>::has_value() or boost::shared_future<R>::has_value() for those futures shall return true.

Throws:

- boost::promise_already_satisfied if the result associated with *this is already ready.

- boost::broken_promise if *this has no shared state.

- std::bad_alloc if the memory required for storage of the result cannot be allocated.

- Any exception thrown by the copy or move-constructor of R.
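
A minimal sketch of the promise/future handshake (assuming a C++11 compiler; BOOST_THREAD_VERSION 4 is an illustrative configuration choice):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

int main()
{
    boost::promise<int> p;
    boost::future<int> f = p.get_future();

    // The producer fulfils the promise; the consumer's get() unblocks.
    boost::thread producer([&p] { p.set_value(6 * 7); });

    std::cout << f.get() << std::endl; // prints 42
    producer.join();
    return 0;
}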

void set_exception(boost::exception_ptr e);
template <typename E>
void set_exception(E e); // EXTENSION

Effects:

- If BOOST_THREAD_PROVIDES_PROMISE_LAZY is defined and if *this was not associated with a result, allocate storage for a new shared state and associate it with *this.

- Store the exception e in the shared state associated with *this. Any threads blocked waiting for the asynchronous result are woken.

Postconditions:

All futures waiting on the shared state are ready and boost::future<R>::has_exception() or boost::shared_future<R>::has_exception() for those futures shall return true.

Throws:

- boost::promise_already_satisfied if the result associated with *this is already ready.

- boost::broken_promise if *this has no shared state.

- std::bad_alloc if the memory required for storage of the result cannot be allocated.

void set_value_at_thread_exit(R&& r);
void set_value_at_thread_exit(const R& r);
void promise<R&>::set_value_at_thread_exit(R& r);
void promise<void>::set_value_at_thread_exit();

Effects:

Stores the value r in the shared state without making that state ready immediately. Schedules that state to be made ready when the current thread exits, after all objects of thread storage duration associated with the current thread have been destroyed.

Postconditions:

the result associated with *this is set as deferred

Throws:

- boost::promise_already_satisfied if the result associated with *this is already ready or deferred.

- boost::broken_promise if *this has no shared state.

- std::bad_alloc if the memory required for storage of the result cannot be allocated.

- Any exception thrown by the copy or move-constructor of R.
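
A minimal sketch of set_value_at_thread_exit(): the value is stored immediately, but waiters are only woken when the producing thread exits (assuming a C++11 compiler and BOOST_THREAD_VERSION 4):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

int main()
{
    boost::promise<int> p;
    boost::future<int> f = p.get_future();

    boost::thread t([&p] {
        p.set_value_at_thread_exit(42); // stored now, made ready only at thread exit
        // ... the thread may keep working; the future is not ready yet
    });

    std::cout << f.get() << std::endl; // blocks until the thread has exited, prints 42
    t.join();
    return 0;
}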

void set_exception_at_thread_exit(boost::exception_ptr e);
template <typename E>
void set_exception_at_thread_exit(E p);  // EXTENSION

Effects:

Stores the exception pointer p in the shared state without making that state ready immediately. Schedules that state to be made ready when the current thread exits, after all objects of thread storage duration associated with the current thread have been destroyed.

Postconditions:

the result associated with *this is set as deferred

Throws:

- boost::promise_already_satisfied if the result associated with *this is already ready or deferred.

- boost::broken_promise if *this has no shared state.

- std::bad_alloc if the memory required for storage of the result cannot be allocated.

template<typename F>
void set_wait_callback(F f);

Preconditions:

The expression f(t), where t is an lvalue of type boost::promise, shall be well-formed. Invoking a copy of f shall have the same effect as invoking f.

Effects:

Store a copy of f with the shared state associated with *this as a wait callback. This will replace any existing wait callback stored alongside that result. If a thread subsequently calls one of the wait functions on a future or boost::shared_future associated with this result, and the result is not ready, f(*this) shall be invoked.

Throws:

std::bad_alloc if memory cannot be allocated for the required storage.

void set_value_deferred(R&& r);
void set_value_deferred(const R& r);
void promise<R&>::set_value_deferred(R& r);
void promise<void>::set_value_deferred();

Effects:

- If BOOST_THREAD_PROVIDES_PROMISE_LAZY is defined and if *this was not associated with a result, allocate storage for a new shared state and associate it with *this.

- Stores the value r in the shared state without making that state ready immediately. Threads blocked waiting for the asynchronous result are not woken. They will be woken only when notify_deferred is called.

Postconditions:

the result associated with *this is set as deferred

Throws:

- boost::promise_already_satisfied if the result associated with *this is already ready or deferred.

- boost::broken_promise if *this has no shared state.

- std::bad_alloc if the memory required for storage of the result cannot be allocated.

- Any exception thrown by the copy or move-constructor of R.

void set_exception_deferred(boost::exception_ptr e);
template <typename E>
void set_exception_deferred(E e); // EXTENSION

Effects:

- If BOOST_THREAD_PROVIDES_PROMISE_LAZY is defined and if *this was not associated with a result, allocate storage for a new shared state and associate it with *this.

- Store the exception e in the shared state associated with *this without making that state ready immediately. Threads blocked waiting for the asynchronous result are not woken. They will be woken only when notify_deferred is called.

Postconditions:

the result associated with *this is set as deferred

Throws:

- boost::promise_already_satisfied if the result associated with *this is already ready or deferred.

- boost::broken_promise if *this has no shared state.

- std::bad_alloc if the memory required for storage of the result cannot be allocated.

void notify_deferred(); // EXTENSION

Effects:

Any threads blocked waiting for the asynchronous result are woken.

Postconditions:

- All futures waiting on the shared state are ready and boost::future<R>::has_value() or boost::shared_future<R>::has_value() for those futures shall return true.

- The result associated with *this is ready.
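
A minimal sketch of the deferred-notification extension (set_value_deferred followed by notify_deferred); these are EXTENSION members, so their availability depends on the Boost.Thread version and configuration (BOOST_THREAD_VERSION 4 is assumed here for illustration):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <iostream>

int main()
{
    boost::promise<int> p;
    boost::future<int> f = p.get_future();

    p.set_value_deferred(42); // value stored, shared state not ready yet
    // f.is_ready() would still be false here.
    p.notify_deferred();      // now the state is made ready and waiters are woken
    std::cout << f.get() << std::endl; // prints 42
    return 0;
}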

template<typename S>
class packaged_task;
template<typename R
  , class... ArgTypes
>
class packaged_task<R(ArgTypes...)>
{
public:
  packaged_task(packaged_task const&) = delete;
  packaged_task& operator=(packaged_task const&) = delete;

  // construction and destruction
  packaged_task() noexcept;

  explicit packaged_task(R(*f)(ArgTypes...));

  template <class F>
  explicit packaged_task(F&& f);

  template <class Allocator>
  packaged_task(allocator_arg_t, Allocator a, R(*f)(ArgTypes...));
  template <class F, class Allocator>
  packaged_task(allocator_arg_t, Allocator a, F&& f);

  ~packaged_task()
  {}

  // move support
  packaged_task(packaged_task&& other) noexcept;
  packaged_task& operator=(packaged_task&& other) noexcept;

  void swap(packaged_task& other) noexcept;

  bool valid() const noexcept;
  // result retrieval
   future<R> get_future();

  // execution
  void operator()(ArgTypes...);
  void make_ready_at_thread_exit(ArgTypes...);

  void reset();
  template<typename F>
  void set_wait_callback(F f);  // EXTENSION
};
packaged_task(R(*f)(ArgTypes...));

template<typename F>
packaged_task(F&&f);

Preconditions:

f() is a valid expression with a return type convertible to R. Invoking a copy of f must behave the same as invoking f.

Effects:

Constructs a new boost::packaged_task with boost::forward<F>(f) stored as the associated task.

Throws:

- Any exceptions thrown by the copy (or move) constructor of f.

- std::bad_alloc if memory for the internal data structures could not be allocated.

Notes:

The R(*f)(ArgTypes...) overload exists to allow passing a function without needing to use &.

Remark:

This constructor doesn't participate in overload resolution if decay<F>::type is the same type as boost::packaged_task<R>.

template <class Allocator>
packaged_task(allocator_arg_t, Allocator a, R(*f)(ArgTypes...));
template <class F, class Allocator>
packaged_task(allocator_arg_t, Allocator a, F&& f);

Preconditions:

f() is a valid expression with a return type convertible to R. Invoking a copy of f shall behave the same as invoking f.

Effects:

Constructs a new boost::packaged_task with boost::forward<F>(f) stored as the associated task using the allocator a.

Throws:

Any exceptions thrown by the copy (or move) constructor of f. std::bad_alloc if memory for the internal data structures could not be allocated.

Notes:

Available only if BOOST_THREAD_FUTURE_USES_ALLOCATORS is defined.

Notes:

The R(*f)(ArgTypes...) overload exists to allow passing a function without needing to use &.

packaged_task(packaged_task && other);

Effects:

Constructs a new boost::packaged_task, and transfers ownership of the task associated with other to *this, leaving other with no associated task.

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

packaged_task& operator=(packaged_task && other);

Effects:

Transfers ownership of the task associated with other to *this, leaving other with no associated task. If there was already a task associated with *this, and that task has not been invoked, sets any futures associated with that task to ready with a boost::broken_promise exception as the result.

Throws:

Nothing.

Notes:

If the compiler does not support rvalue-references, this is implemented using the boost.thread move emulation.

~packaged_task();

Effects:

Destroys *this. If there was a task associated with *this, and that task has not been invoked, sets any futures associated with that task to ready with a boost::broken_promise exception as the result.

Throws:

Nothing.

 future<R> get_future();

Effects:

Returns a future associated with the result of the task associated with *this.

Throws:

boost::task_moved if ownership of the task associated with *this has been moved to another instance of boost::packaged_task. boost::future_already_retrieved if the future associated with the task has already been retrieved.

void operator()();

Effects:

Invoke the task associated with *this and store the result in the corresponding future. If the task returns normally, the return value is stored as the shared state, otherwise the exception thrown is stored. Any threads blocked waiting for the shared state associated with this task are woken.

Postconditions:

All futures waiting on the shared state are ready

Throws:

- boost::task_moved if ownership of the task associated with *this has been moved to another instance of boost::packaged_task.

- boost::task_already_started if the task has already been invoked.
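
A minimal sketch of packaged_task: the task wraps a callable, get_future() gives access to the result, and invoking the task stores the result in the shared state (assuming a C++11 compiler; the signature form packaged_task<R(Args...)> needs BOOST_THREAD_PROVIDES_SIGNATURE_PACKAGED_TASK, defined explicitly here):

#define BOOST_THREAD_VERSION 4
#define BOOST_THREAD_PROVIDES_SIGNATURE_PACKAGED_TASK
#include <boost/thread/future.hpp>
#include <iostream>

int add(int a, int b) { return a + b; }

int main()
{
    boost::packaged_task<int(int, int)> task(add); // function-pointer overload, no & needed
    boost::future<int> f = task.get_future();

    task(2, 3);                        // invoke the task; the result is stored in the shared state
    std::cout << f.get() << std::endl; // prints 5
    return 0;
}

The task could equally be moved into a boost::thread to run concurrently; direct invocation is used here only to keep the sketch short.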

void make_ready_at_thread_exit(ArgTypes...);

Effects:

Invoke the task associated with *this and store the result in the corresponding future. If the task returns normally, the return value is stored as the shared state, otherwise the exception thrown is stored. In either case, this is done without making that state ready immediately. Schedules the shared state to be made ready when the current thread exits, after all objects of thread storage duration associated with the current thread have been destroyed.

Throws:

- boost::task_moved if ownership of the task associated with *this has been moved to another instance of boost::packaged_task.

- boost::task_already_started if the task has already been invoked.

void reset();

Effects:

Reset the state of the packaged_task so that it can be called again.

Throws:

boost::task_moved if ownership of the task associated with *this has been moved to another instance of boost::packaged_task.

template<typename F>
void set_wait_callback(F f);

Preconditions:

The expression f(t), where t is an lvalue of type boost::packaged_task, shall be well-formed. Invoking a copy of f shall have the same effect as invoking f.

Effects:

Store a copy of f with the task associated with *this as a wait callback. This will replace any existing wait callback stored alongside that task. If a thread subsequently calls one of the wait functions on a future or boost::shared_future associated with this task, and the result of the task is not ready, f(*this) shall be invoked.

Throws:

boost::task_moved if ownership of the task associated with *this has been moved to another instance of boost::packaged_task.

template <class T>
typename decay<T>::type decay_copy(T&& v)
{
  return boost::forward<T>(v);
}

The function template async provides a mechanism to launch a function potentially in a new thread and provides the result of the function in a future object with which it shares a shared state.

Non-Variadic variant
template <class F>
   future<typename result_of<typename decay<F>::type()>::type>
  async(F&& f);
template <class F>
   future<typename result_of<typename decay<F>::type()>::type>
  async(launch policy, F&& f);
template <class Executor, class F>
   future<typename result_of<typename decay<F>::type()>::type>
  async(Executor &ex, F&& f);

Requires:

decay_copy(boost::forward<F>(f))()

shall be a valid expression.

Effects

The first function behaves the same as a call to the second function with a policy argument of launch::async | launch::deferred and the same arguments for F.

The second and third functions create a shared state that is associated with the returned future object.

The further behavior of the second function depends on the policy argument as follows (if more than one of these conditions applies, the implementation may choose any of the corresponding policies):

- if policy & launch::async is non-zero - calls decay_copy(boost::forward<F>(f))() as if in a new thread of execution represented by a thread object with the calls to decay_copy() being evaluated in the thread that called async. Any return value is stored as the result in the shared state. Any exception propagated from the execution of decay_copy(boost::forward<F>(f))() is stored as the exceptional result in the shared state. The thread object is stored in the shared state and affects the behavior of any asynchronous return objects that reference that state.

- if policy & launch::deferred is non-zero - Stores decay_copy(boost::forward<F>(f)) in the shared state. This copy of f constitutes a deferred function. Invocation of the deferred function evaluates boost::move(g)() where g is the stored value of decay_copy(boost::forward<F>(f)). The shared state is not made ready until the function has completed. The first call to a non-timed waiting function on an asynchronous return object referring to this shared state shall invoke the deferred function in the thread that called the waiting function. Once evaluation of boost::move(g)() begins, the function is no longer considered deferred. (Note: If this policy is specified together with other policies, such as when using a policy value of launch::async | launch::deferred, implementations should defer invocation or the selection of the policy when no more concurrency can be effectively exploited.)

- if no valid launch policy is provided the behavior is undefined.

The further behavior of the third function is as follows:

- The Executor::submit() function is given a function<void ()> which calls INVOKE(DECAY_COPY(std::forward<F>(f))). The implementation of the executor is decided by the programmer.

Returns:

An object of type future<typename result_of<typename decay<F>::type()>::type> that refers to the shared state created by this call to async.

Synchronization:

Regardless of the provided policy argument,

- the invocation of async synchronizes with the invocation of f. (Note: This statement applies even when the corresponding future object is moved to another thread.); and

- the completion of the function f is sequenced before the shared state is made ready. (Note: f might not be called at all, so its completion might never happen.)

If the implementation chooses the launch::async policy,

- a call to a non-timed waiting function on an asynchronous return object that shares the shared state created by this async call shall block until the associated thread has completed, as if joined, or else time out;

- the associated thread completion synchronizes with the return from the first function that successfully detects the ready status of the shared state or with the return from the last function that releases the shared state, whichever happens first.

Throws:

system_error if policy is launch::async and the implementation is unable to start a new thread.

Error conditions:

- resource_unavailable_try_again - if policy is launch::async and the system is unable to start a new thread.

Remarks:

The first signature shall not participate in overload resolution if decay_t<F> is boost::launch or boost::is_executor<F> is true_type.
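
A minimal sketch of the non-variadic async overloads with explicit launch policies (assuming BOOST_THREAD_VERSION 4 so that the returned type is boost::future):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <iostream>

int answer() { return 42; }

int main()
{
    // launch::async: answer() runs on a new thread.
    boost::future<int> f1 = boost::async(boost::launch::async, answer);

    // launch::deferred: answer() runs in the thread that first waits on the future.
    boost::future<int> f2 = boost::async(boost::launch::deferred, answer);

    std::cout << f1.get() << " " << f2.get() << std::endl; // prints 42 42
    return 0;
}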

Variadic variant
template <class F, class... Args>
   future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type>
  async(F&& f, Args&&... args);
template <class F, class... Args>
   future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type>
  async(launch policy, F&& f, Args&&... args);
template <class Executor, class F, class... Args>
   future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type>
  async(Executor &ex, F&& f, Args&&... args);
[Warning] Warning

The variadic prototype is provided only on C++11 compilers supporting rvalue references, variadic templates, decltype and a standard library providing <tuple> (waiting for a boost::tuple that is move aware), and when BOOST_THREAD_PROVIDES_SIGNATURE_PACKAGED_TASK is defined.

Requires:

F and each Ti in Args shall satisfy the MoveConstructible requirements.

invoke (decay_copy (boost::forward<F>(f)), decay_copy (boost::forward<Args>(args))...)

shall be a valid expression.

Effects:

- The first function behaves the same as a call to the second function with a policy argument of launch::async | launch::deferred and the same arguments for F and Args.

- The second function creates a shared state that is associated with the returned future object. The further behavior of the second function depends on the policy argument as follows (if more than one of these conditions applies, the implementation may choose any of the corresponding policies):

- if policy & launch::async is non-zero - calls invoke(decay_copy(forward<F>(f)), decay_copy (forward<Args>(args))...) as if in a new thread of execution represented by a thread object with the calls to decay_copy() being evaluated in the thread that called async. Any return value is stored as the result in the shared state. Any exception propagated from the execution of invoke(decay_copy(boost::forward<F>(f)), decay_copy (boost::forward<Args>(args))...) is stored as the exceptional result in the shared state. The thread object is stored in the shared state and affects the behavior of any asynchronous return objects that reference that state.

- if policy & launch::deferred is non-zero - Stores decay_copy(forward<F>(f)) and decay_copy(forward<Args>(args))... in the shared state. These copies of f and args constitute a deferred function. Invocation of the deferred function evaluates invoke(move(g), move(xyz)) where g is the stored value of decay_copy(forward<F>(f)) and xyz is the stored copy of decay_copy(forward<Args>(args)).... The shared state is not made ready until the function has completed. The first call to a non-timed waiting function on an asynchronous return object referring to this shared state shall invoke the deferred function in the thread that called the waiting function. Once evaluation of invoke(move(g), move(xyz)) begins, the function is no longer considered deferred.

- if no valid launch policy is provided the behaviour is undefined.

Note:

If this policy is specified together with other policies, such as when using a policy value of launch::async | launch::deferred, implementations should defer invocation or the selection of the policy when no more concurrency can be effectively exploited.

Returns:

An object of type future<typename result_of<typename decay<F>::type(typename decay<Args>::type...)>::type> that refers to the shared state created by this call to async.

Synchronization:

Regardless of the provided policy argument,

- the invocation of async synchronizes with the invocation of f. (Note: This statement applies even when the corresponding future object is moved to another thread.); and

- the completion of the function f is sequenced before the shared state is made ready. (Note: f might not be called at all, so its completion might never happen.)

If the implementation chooses the launch::async policy,

- a call to a waiting function on an asynchronous return object that shares the shared state created by this async call shall block until the associated thread has completed, as if joined, or else time out;

- the associated thread completion synchronizes with the return from the first function that successfully detects the ready status of the shared state or with the return from the last function that releases the shared state, whichever happens first.

Throws:

system_error if policy is launch::async and the implementation is unable to start a new thread.

Error conditions:

- resource_unavailable_try_again - if policy is launch::async and the system is unable to start a new thread.

Remarks:

The first signature shall not participate in overload resolution if decay<F>::type is boost::launch.
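
A minimal sketch of the variadic overload, where the arguments are decay-copied and forwarded to the callable (assuming a C++11 compiler that satisfies the warning above and BOOST_THREAD_VERSION 4):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <iostream>

int add(int a, int b) { return a + b; }

int main()
{
    boost::future<int> f = boost::async(boost::launch::async, add, 40, 2);
    std::cout << f.get() << std::endl; // prints 42
    return 0;
}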

template<typename Iterator>
  Iterator wait_for_any(Iterator begin,Iterator end); // EXTENSION

template<typename F1,typename F2>
  unsigned wait_for_any(F1& f1,F2& f2); // EXTENSION

template<typename F1,typename F2,typename F3>
  unsigned wait_for_any(F1& f1,F2& f2,F3& f3); // EXTENSION

template<typename F1,typename F2,typename F3,typename F4>
  unsigned wait_for_any(F1& f1,F2& f2,F3& f3,F4& f4); // EXTENSION

template<typename F1,typename F2,typename F3,typename F4,typename F5>
  unsigned wait_for_any(F1& f1,F2& f2,F3& f3,F4& f4,F5& f5); // EXTENSION

Preconditions:

The types Fn shall be specializations of future or boost::shared_future, and Iterator shall be a forward iterator with a value_type which is a specialization of future or boost::shared_future.

Effects:

Waits until at least one of the specified futures is ready.

Returns:

The range-based overload returns an Iterator identifying the first future in the range that was detected as ready. The remaining overloads return the zero-based index of the first future that was detected as ready (first parameter => 0, second parameter => 1, etc.).

Throws:

boost::thread_interrupted if the current thread is interrupted. Any exception thrown by the wait callback associated with any of the futures being waited for. std::bad_alloc if memory could not be allocated for the internal wait structures.

Notes:

wait_for_any() is an interruption point.
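
A minimal sketch of wait_for_any() with two futures (assuming a C++11 compiler and BOOST_THREAD_VERSION 4):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <boost/thread/thread.hpp>
#include <boost/chrono.hpp>
#include <iostream>

int slow() { boost::this_thread::sleep_for(boost::chrono::milliseconds(200)); return 1; }
int fast() { return 2; }

int main()
{
    boost::future<int> f1 = boost::async(boost::launch::async, slow);
    boost::future<int> f2 = boost::async(boost::launch::async, fast);

    // Returns the zero-based index of the first future detected as ready.
    unsigned which = boost::wait_for_any(f1, f2);
    std::cout << "future " << which << " was ready first" << std::endl;
    return 0;
}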

template<typename Iterator>
  void wait_for_all(Iterator begin,Iterator end); // EXTENSION

template<typename F1,typename F2>
  void wait_for_all(F1& f1,F2& f2); // EXTENSION

template<typename F1,typename F2,typename F3>
  void wait_for_all(F1& f1,F2& f2,F3& f3); // EXTENSION

template<typename F1,typename F2,typename F3,typename F4>
  void wait_for_all(F1& f1,F2& f2,F3& f3,F4& f4); // EXTENSION

template<typename F1,typename F2,typename F3,typename F4,typename F5>
  void wait_for_all(F1& f1,F2& f2,F3& f3,F4& f4,F5& f5); // EXTENSION

Preconditions:

The types Fn shall be specializations of future or boost::shared_future, and Iterator shall be a forward iterator with a value_type which is a specialization of future or boost::shared_future.

Effects:

Waits until all of the specified futures are ready.

Throws:

Any exceptions thrown by a call to wait() on the specified futures.

Notes:

wait_for_all() is an interruption point.
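
A minimal sketch of wait_for_all() (same configuration assumptions as above):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <iostream>

int main()
{
    boost::future<int> f1 = boost::async([] { return 1; });
    boost::future<int> f2 = boost::async([] { return 2; });

    boost::wait_for_all(f1, f2);                   // blocks until both futures are ready
    std::cout << f1.get() + f2.get() << std::endl; // prints 3
    return 0;
}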

template <class InputIterator>
  future<std::vector<typename InputIterator::value_type::value_type>>
  when_all(InputIterator first, InputIterator last);

template <typename... FutTypes>
  future<std::tuple<decay_t<FutTypes>...>>
  when_all(FutTypes&&... futures);

Requires:

- For the first overload, InputIterator's value type shall be convertible to future<R> or shared_future<R>. All R types must be the same. If any of the future<R> or shared_future<R> objects are in invalid state (i.e. valid() == false), the behavior is undefined.

- For the second overload, FutTypes is of type future<R> or shared_future<R>. The effect of calling when_all on a future or a shared_future object for which valid() == false is undefined.

Notes:

- There are two variations of when_all. The first version takes a pair of InputIterators. The second takes any arbitrary number of future<R0> and shared_future<R1> objects, where R0 and R1 need not be the same type.

- Calling the first signature of when_all where InputIterator first equals last, returns a future with an empty vector that is immediately ready.

- Calling the second signature of when_all with no arguments returns a future<tuple<>> that is immediately ready.

Effects:

- If any of the futures supplied to a call to when_all refer to deferred tasks that have not started execution, those tasks are executed before the call to when_all returns. Once all such tasks have been executed, the call to when_all returns immediately.

- The call to when_all does not wait for non-deferred tasks, or deferred tasks that have already started executing elsewhere, to complete before returning.

- Once all the futures/shared_futures supplied to the call to when_all are ready, the futures/shared_futures are moved/copied into the associated state of the future returned from the call to when_all, preserving the order of the futures supplied to when_all.

- The collection is then stored as the result in a newly created shared state.

- A new future object that refers to the shared state is created. The exact type of the future is further described below.

- The future returned by when_all will not throw an exception when calling wait() or get(), but the futures held in the output collection may.

Returns:

- future<tuple<>> if when_all is called with zero arguments.

- future<vector<future<R>>> if the input cardinality is unknown at compile time and the iterator pair yields future<R>. The order of the futures in the output vector will be the same as given by the input iterator.

- future<vector<shared_future<R>>> if the input cardinality is unknown at compile time and the iterator pair yields shared_future<R>. The order of the futures in the output vector will be the same as given by the input iterator.

- future<tuple<decay_t<FutTypes>...>> if inputs are fixed in number.

Postconditions:

- All input futures valid() == false.

- All input shared_futures valid() == true.

- valid() == true.
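
A minimal sketch of the heterogeneous when_all overload, which yields a future holding a tuple of the input futures (assuming a C++11 compiler and that the feature is enabled; BOOST_THREAD_PROVIDES_FUTURE_WHEN_ALL_WHEN_ANY is assumed to be the relevant configuration macro):

#define BOOST_THREAD_VERSION 4
#define BOOST_THREAD_PROVIDES_FUTURE_WHEN_ALL_WHEN_ANY
#include <boost/thread/future.hpp>
#include <iostream>
#include <tuple>

int main()
{
    boost::future<int> f1 = boost::async([] { return 1; });
    boost::future<int> f2 = boost::async([] { return 2; });

    auto all = boost::when_all(boost::move(f1), boost::move(f2));
    auto futures = all.get(); // ready once both inputs are ready
    std::cout << std::get<0>(futures).get() + std::get<1>(futures).get()
              << std::endl;   // prints 3
    return 0;
}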

template <class InputIterator>
  future<std::vector<typename InputIterator::value_type::value_type>>
  when_any(InputIterator first, InputIterator last);

template <typename... FutTypes>
  future<std::tuple<decay_t<FutTypes>...>>
  when_any(FutTypes&&... futures);

Requires:

- For the first overload, InputIterator's value type shall be convertible to future<R> or shared_future<R>. All R types must be the same. If any of the future<R> or shared_future<R> objects are in invalid state (i.e. valid() == false), the behavior is undefined.

- For the second overload, FutTypes is of type future<R> or shared_future<R>. The effect of calling when_any on a future or a shared_future object for which valid() == false is undefined.

Notes:

- There are two variations of when_any . The first version takes a pair of InputIterators. The second takes any arbitrary number of future<R0> and shared_future<R1> objects, where R0 and R1 need not be the same type.

- Calling the first signature of when_any where InputIterator first equals last, returns a future with an empty vector that is immediately ready.

- Calling the second signature of when_any with no arguments returns a future<tuple<>> that is immediately ready.

Effects:

- Each of the futures supplied to when_any is checked in the order supplied. If a given future is ready, then no further futures are checked, and the call to when_any returns immediately. If a given future refers to a deferred task that has not yet started execution, then no further futures are checked, that task is executed, and the call to when_any then returns immediately.

- The call to when_any does not wait for non-deferred tasks, or deferred tasks that have already started executing elsewhere, to complete before returning.

- Once at least one of the futures supplied to the call to when_any are ready, the futures are moved into the associated state of the future returned from the call to when_any, preserving the order of the futures supplied to when_any. That future is then ready.

- The collection is then stored as the result in a newly created shared state.

- A new future object that refers to the shared state is created. The exact type of the future is further described below.

- The future returned by when_any will not throw an exception when calling wait() or get(), but the futures held in the output collection may.

Returns:

- future<tuple<>> if when_any is called with zero arguments.

- future<vector<future<R>>> if the input cardinality is unknown at compile time and the iterator pair yields future<R>. The order of the futures in the output vector will be the same as given by the input iterator.

- future<vector<shared_future<R>>> if the input cardinality is unknown at compile time and the iterator pair yields shared_future<R>. The order of the futures in the output vector will be the same as given by the input iterator.

- future<tuple<decay_t<FutTypes>...>> if inputs are fixed in number.

Postconditions:

- All input futures valid() == false.

- All input shared_futures valid() == true.

- valid() == true.
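
A minimal sketch of when_any: the returned future becomes ready as soon as at least one input is ready (same configuration assumptions as for when_all above):

#define BOOST_THREAD_VERSION 4
#define BOOST_THREAD_PROVIDES_FUTURE_WHEN_ALL_WHEN_ANY
#include <boost/thread/future.hpp>
#include <boost/thread/thread.hpp>
#include <boost/chrono.hpp>
#include <iostream>
#include <tuple>

int main()
{
    boost::future<int> fast = boost::async([] { return 1; });
    boost::future<int> slow = boost::async([] {
        boost::this_thread::sleep_for(boost::chrono::milliseconds(200));
        return 2;
    });

    auto any = boost::when_any(boost::move(fast), boost::move(slow));
    auto futures = any.get(); // ready as soon as one of the inputs is ready
    if (std::get<0>(futures).is_ready())
        std::cout << "fast finished first: " << std::get<0>(futures).get() << std::endl;
    return 0;
}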

template <typename T>
  future<V> make_ready_future(T&& value);  // EXTENSION
future<void> make_ready_future();  // EXTENSION
template <typename T>
  future<T> make_ready_future(exception_ptr ex);  // DEPRECATED
template <typename T, typename E>
  future<T> make_ready_future(E ex);  // DEPRECATED

Remark:

where V is determined as follows: Let U be decay_t<T>. Then V is X& if U equals reference_wrapper<X>, otherwise V is U.

Effects:

- value prototype: The value that is passed into the function is moved to the shared state of the returned future if it is an rvalue. Otherwise the value is copied to the shared state of the returned future.

- exception: The exception that is passed into the function is copied to the shared state of the returned future.

Returns:

- a ready future with the value set with value

- a ready future with the exception set with ex

- a ready future<void> with the value set (void).

Postcondition:

- Returned future, valid() == true

- Returned future, is_ready() == true

- Returned future, has_value() == true or has_exception() == true, depending on the prototype.
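
A minimal sketch of make_ready_future(), which produces an already-ready future without a promise or a thread (assuming BOOST_THREAD_VERSION 4 as an illustrative configuration choice):

#define BOOST_THREAD_VERSION 4
#include <boost/thread/future.hpp>
#include <iostream>

int main()
{
    boost::future<int> f = boost::make_ready_future(42);
    // f.valid() == true, f.is_ready() == true, f.has_value() == true
    std::cout << f.get() << std::endl; // prints 42
    return 0;
}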

exceptional_ptr make_exceptional_future(exception_ptr ex);  // EXTENSION
template <typename E>
  exceptional_ptr make_exceptional_future(E ex);  // EXTENSION
exceptional_ptr make_exceptional_future();  // EXTENSION

Effects:

The exception that is passed into the function, or the current exception if no parameter is given, is moved into the returned exceptional_ptr if it is an rvalue. Otherwise the exception is copied into the returned exceptional_ptr.

Returns:

An exceptional_ptr instance implicitly convertible to a future<T>.

template <typename T>
  future<typename decay<T>::type> make_future(T&& value);  // DEPRECATED
future<void> make_future();  // DEPRECATED

Effects:

The value that is passed into the function is moved to the shared state of the returned future if it is an rvalue. Otherwise the value is copied to the shared state of the returned future.

Returns:

- future<T>, if function is given a value of type T

- future<void>, if the function is not given any inputs.

Postcondition:

- Returned future<T>, valid() == true

- Returned future<T>, is_ready() == true

See:

make_ready_future()

template <typename T>
  shared_future<typename decay<T>::type> make_shared_future(T&& value);  // DEPRECATED
shared_future<void> make_shared_future();  // DEPRECATED

Effects:

The value that is passed into the function is moved to the shared state of the returned shared_future if it is an rvalue. Otherwise the value is copied to the shared state of the returned shared_future.

Returns:

- shared_future<T>, if function is given a value of type T

- shared_future<void>, if the function is not given any inputs.

Postcondition:

- Returned shared_future<T>, valid() == true

- Returned shared_future<T>, is_ready() == true

See:

make_ready_future() and future<>::share()


PrevUpHomeNext