But what about the case when we must wait for all results of different types?
We can present an API that is frankly quite cool. Consider a sample struct:
struct Data {
    std::string str;
    double      inexact;
    int         exact;

    friend std::ostream& operator<<( std::ostream& out, Data const& data);
    ...
};
Let's fill its members from task functions all running concurrently:
Data data = wait_all_members< Data >(
        [](){ return sleeper("wams_left", 100); },
        [](){ return sleeper(3.14, 150); },
        [](){ return sleeper(17, 50); });
std::cout << "wait_all_members<Data>(success) => " << data << std::endl;
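These snippets rely on a small sleeper() helper that is not defined in this section. A minimal sketch, assuming it merely suspends the calling fiber for the given number of milliseconds and then returns its first argument unchanged (the real helper used by the library's examples may do more, such as tracing or optionally throwing):

#include <boost/fiber/all.hpp>
#include <chrono>

// Hypothetical stand-in for sleeper(): put the *current fiber* (not the whole
// thread) to sleep for 'ms' milliseconds, then hand back the value unchanged.
template< typename T >
T sleeper( T item, int ms) {
    boost::this_fiber::sleep_for( std::chrono::milliseconds( ms) );
    return item;
}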
Note that for this case, we abandon the notion of capturing the earliest result first, and so on: we must fill exactly the passed struct in left-to-right order.
That permits a beautifully simple implementation:
// Explicitly pass Result. This can be any type capable of being initialized
// from the results of the passed functions, such as a struct.
template< typename Result, typename ... Fns >
Result wait_all_members( Fns && ... functions) {
    // Run each of the passed functions on a separate fiber, passing all their
    // futures to helper function for processing.
    return wait_all_members_get< Result >(
            boost::fibers::async( std::forward< Fns >( functions) ) ... );
}
template< typename Result, typename ... Futures >
Result wait_all_members_get( Futures && ... futures) {
    // Fetch the results from the passed futures into Result's initializer
    // list. It's true that the get() calls here will block the implicit
    // iteration over futures -- but that doesn't matter because we won't be
    // done until the slowest of them finishes anyway. As results are
    // processed in argument-list order rather than order of completion, the
    // leftmost get() to throw an exception will cause that exception to
    // propagate to the caller.
    return Result{ futures.get() ... };
}
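To see the leftmost-exception behavior mentioned in that comment, it is enough for one task function to throw. A sketch, not taken from the original example code (the tag strings and the throwing lambda are invented for illustration; it assumes <stdexcept>):

try {
    Data data = wait_all_members< Data >(
            [](){ return sleeper("wams_x", 100); },
            [](){ return sleeper(2.718, 150); },
            []() -> int { throw std::runtime_error("wams_throw"); });
    std::cout << "wait_all_members<Data>(unexpected success) => " << data << std::endl;
} catch ( std::exception const& e) {
    // The leftmost future whose get() throws -- here the third one -- has its
    // exception rethrown in the caller of wait_all_members().
    std::cout << "wait_all_members<Data>(exception) => " << e.what() << std::endl;
}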
It is tempting to try to implement wait_all_members() as a one-liner like this:

return Result{ boost::fibers::async(functions).get()... };
The trouble with this tactic is that it would serialize all the task functions. The runtime makes a single pass through functions, calling fibers::async() for each and then immediately calling future::get() on its returned future<>. That get() call blocks the implicit loop: the next async() cannot be launched until the previous task function has completed. The above is almost equivalent to writing:

return Result{ functions()... };

in which, of course, there is no concurrency at all.
Passing the argument pack through a function-call boundary (wait_all_members_get()) forces the runtime to make two passes: one in wait_all_members() to collect the future<>s from all the async() calls, the second in wait_all_members_get() to fetch each of the results.
As noted in comments, within the wait_all_members_get() parameter pack expansion pass, the blocking behavior of get() becomes irrelevant. Along the way, we will hit the get() for the slowest task function; after that, every subsequent get() will complete in trivial time.
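To make that concrete, a rough timing sketch (the measurement scaffolding is not part of the original example; it assumes the hypothetical sleeper() shown earlier plus <chrono> and <iostream>): with tasks of 100ms, 150ms and 50ms, the two-pass implementation should finish in roughly the time of the slowest task, about 150ms, whereas the serialized one-liner would take roughly the sum, about 300ms.

// Hypothetical timing check: elapsed time near 150ms means the three tasks
// really ran concurrently; a figure near 300ms would mean they were serialized.
auto start = std::chrono::steady_clock::now();
Data data = wait_all_members< Data >(
        [](){ return sleeper("wams_left", 100); },
        [](){ return sleeper(3.14, 150); },
        [](){ return sleeper(17, 50); });
auto elapsed = std::chrono::duration_cast< std::chrono::milliseconds >(
        std::chrono::steady_clock::now() - start);
std::cout << "wait_all_members<Data>() elapsed: " << elapsed.count() << "ms" << std::endl;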
By the way, we could also use this same API to fill a vector or other collection:
// If we don't care about obtaining results as soon as they arrive, and we
// prefer a result vector in passed argument order rather than completion
// order, wait_all_members() is another possible implementation of
// wait_all_until_error().
auto strings = wait_all_members< std::vector< std::string > >(
        [](){ return sleeper("wamv_left", 150); },
        [](){ return sleeper("wamv_middle", 100); },
        [](){ return sleeper("wamv_right", 50); });
std::cout << "wait_all_members<vector>() =>";
for ( std::string const& str : strings) {
    std::cout << " '" << str << "'";
}
std::cout << std::endl;
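Since Result{ futures.get()... } is ordinary list-initialization, Result need not even be a dedicated struct: a std::tuple works just as well for heterogeneous results. A sketch under that assumption (not part of the original examples; the tag strings are invented, and it assumes <tuple>):

// Hypothetical variation: collect heterogeneous results into a std::tuple.
// The elements are filled in argument-list order, exactly as with Data above.
auto tup = wait_all_members< std::tuple< std::string, double, int > >(
        [](){ return sleeper("wamt_left", 100); },
        [](){ return sleeper(3.14, 150); },
        [](){ return sleeper(17, 50); });
std::cout << "wait_all_members<tuple>() => '" << std::get< 0 >( tup) << "' "
          << std::get< 1 >( tup) << " " << std::get< 2 >( tup) << std::endl;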