Why should I use std::async?

I'm trying to explore in depth all the options of the new C++11 standard, and while using std::async and reading its definition I noticed two things, at least on Linux with gcc 4.8:

  • It's called async, but it has a really "sequential behaviour": basically, on the line where you call get() on the future associated with your async function foo, the program blocks until the execution of foo is complete.
  • It depends on the exact same external library as other, better, non-blocking solutions, namely pthread: if you want to use std::async you need pthread.

At this point it's natural for me to ask: why choose std::async over even a simple set of functors? It's a solution that doesn't scale at all; the more futures you call, the less responsive your program will be.

Am I missing something? Can you show an example that is guaranteed to execute in an async, non-blocking way?

If you need the result of an asynchronous operation, then you have to block, no matter what library you use. The idea is that you get to choose when to block, and, hopefully when you do that, you block for a negligible time because all the work has already been done.

Note also that std::async can be launched with policies std::launch::async or std::launch::deferred. If you don't specify it, the implementation is allowed to choose, and it could well choose to use deferred evaluation, which would result in all the work being done when you attempt to get the result from the future, resulting in a longer block. So if you want to make sure that the work is done asynchronously, use std::launch::async.
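For example, here is a minimal sketch of the difference (compute is just a made-up stand-in for real work):

#include <future>
#include <iostream>

int compute() { return 42; }  // hypothetical placeholder for real work

int main() {
    // Explicit policy: a new thread is spawned eagerly.
    auto eager = std::async(std::launch::async, compute);

    // No policy: the implementation may defer the call, in which case the
    // work runs inside get() on this thread instead.
    auto maybe_deferred = std::async(compute);

    std::cout << eager.get() + maybe_deferred.get() << '\n';
}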

In the reference: http://en.cppreference.com/w/cpp/thread/async

If the async flag is set (i.e. policy & std::launch::async != 0), then async executes the function f on a separate thread of execution as if spawned by std::thread(f, args...), except that if the function f returns a value or throws an exception, it is stored in the shared state accessible through the std::future that async returns to the caller.

It is a nice property to keep a record of exceptions thrown.
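To illustrate that exception propagation, here is a small sketch (risky is a made-up function):

#include <future>
#include <iostream>
#include <stdexcept>

int risky(int x) {
    if (x < 0)
        throw std::invalid_argument("negative input");  // thrown on the worker thread
    return x * 2;
}

int main() {
    auto fut = std::async(std::launch::async, risky, -1);
    try {
        fut.get();  // the exception stored in the shared state is rethrown here
    } catch (const std::exception& e) {
        std::cout << "caught on the calling thread: " << e.what() << '\n';
    }
}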

I think your problem is with std::future blocking on get(). It only blocks if the result isn't already ready.

If you can arrange for the result to be already ready, this isn't a problem.

There are many ways to know that the result is already ready. You can poll the future and ask it (relatively simple), you could use locks or atomic data to relay the fact that it is ready, you could build up a framework to deliver "finished" future items into a queue that consumers can interact with, you could use signals of some kind (which is just blocking on multiple things at once, or polling).
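For the polling option, a minimal sketch using wait_for with a zero timeout (slow_work is a placeholder):

#include <chrono>
#include <future>
#include <iostream>
#include <thread>

int slow_work() {
    std::this_thread::sleep_for(std::chrono::milliseconds(200));  // simulate work
    return 7;
}

int main() {
    auto fut = std::async(std::launch::async, slow_work);

    // Ask "is it ready yet?" without blocking, and do other work in between.
    while (fut.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
        std::cout << "doing other work while waiting...\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }

    std::cout << "result: " << fut.get() << '\n';  // ready, so this returns immediately
}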

Or, you could finish all the work you can do locally, and then block on the remote work.

As an example, imagine a parallel recursive merge sort. It splits the array into two chunks, then does an async sort on one chunk while sorting the other chunk. Once it is done sorting its half, the originating thread cannot progress until the second task is finished. So it does a .get() and blocks. Once both halves have been sorted, it can then do a merge (in theory, the merge can be done at least partially in parallel as well).

This task behaves like a linear task to those interacting with it on the outside -- when it is done, the array is sorted.

We can then wrap this in a std::async task, and have a future sorted array. If we want, we could add in a signally procedure to let us know that the future is finished, but that only makes sense if we have a thread waiting on the signals.
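Here is a rough sketch of that merge sort idea (parallel_merge_sort is a made-up name, and the fixed depth cutoff is just an arbitrary way to limit thread creation):

#include <algorithm>
#include <future>
#include <iterator>
#include <iostream>
#include <vector>

// Sort [first, last): sort one half asynchronously, the other half on this
// thread, then block with get() only after our own half is already done.
template <typename Iter>
void parallel_merge_sort(Iter first, Iter last, int depth) {
    auto len = std::distance(first, last);
    if (len < 2) return;
    Iter mid = std::next(first, len / 2);

    if (depth > 0) {
        auto left = std::async(std::launch::async,
                               parallel_merge_sort<Iter>, first, mid, depth - 1);
        parallel_merge_sort(mid, last, depth - 1);  // sort our half in the meantime
        left.get();  // blocks only if the other half isn't finished yet
    } else {
        std::sort(first, mid);  // stop spawning threads below a fixed depth
        std::sort(mid, last);
    }
    std::inplace_merge(first, mid, last);
}

int main() {
    std::vector<int> v{5, 3, 8, 1, 9, 2, 7, 4, 6, 0};
    parallel_merge_sort(v.begin(), v.end(), 2);
    for (int x : v) std::cout << x << ' ';
    std::cout << '\n';
}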

  • it's called async, but it has a really "sequential behaviour",

No, if you use the std::launch::async policy then it runs asynchronously in a new thread. If you don't specify a policy it might run in a new thread.

basically, on the line where you call get() on the future associated with your async function foo, the program blocks until the execution of foo is complete.

It only blocks if foo hasn't completed, but if it was run asynchronously (e.g. because you use the std::launch::async policy) it might have completed before you need it.
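A quick way to see this for yourself is to compare thread ids (just a sketch):

#include <future>
#include <iostream>
#include <thread>

int main() {
    auto fut = std::async(std::launch::async, [] {
        return std::this_thread::get_id();  // id of the thread running the task
    });

    std::thread::id worker = fut.get();
    std::cout << std::boolalpha
              << "ran on a different thread: "
              << (worker != std::this_thread::get_id()) << '\n';
}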

  • it depends on the exact same external library as other, better, non-blocking solutions, namely pthread: if you want to use std::async you need pthread.

Wrong, it doesn't have to be implemented using Pthreads (and on Windows it isn't; it uses the ConcRT features).

At this point it's natural for me to ask: why choose std::async over even a simple set of functors?

Because it guarantees thread-safety and propagates exceptions across threads. Can you do that with a simple set of functors?

It's a solution that doesn't scale at all; the more futures you call, the less responsive your program will be.

Not necessarily. If you don't specify the launch policy then a smart implementation can decide whether to start a new thread, or return a deferred function, or return something that decides later, when more resources may be available.

Now, it's true that with GCC's implementation, if you don't provide a launch policy then with current releases it will never run in a new thread (there's a bugzilla report for that) but that's a property of that implementation, not of std::async in general. You should not confuse the specification in the standard with a particular implementation. Reading the implementation of one standard library is a poor way to learn about C++11.

Can you show an example that is guaranteed to execute in an async, non-blocking way?

This shouldn't block:

auto fut = std::async(std::launch::async, doSomethingThatTakesTenSeconds);
auto result1 = doSomethingThatTakesTwentySeconds();
auto result2 = fut.get();

By specifying the launch policy you force asynchronous execution, and if you do other work while it's executing then the result will be ready when you need it.

http://www.cplusplus.com/reference/future/async/

There are three types of launch policy:

  1. launch::async
  2. launch::deferred
  3. launch::async|launch::deferred

By default, launch::async | launch::deferred is passed to std::async, so the implementation may choose either.
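If you want to see which choice the implementation made under the default policy, one way (just a sketch, work is a placeholder) is to probe the future with wait_for, which reports future_status::deferred without running the task:

#include <chrono>
#include <future>
#include <iostream>

int work() { return 1; }  // placeholder

int main() {
    // Default policy: the implementation may pick async, deferred, or decide later.
    auto fut = std::async(work);

    auto status = fut.wait_for(std::chrono::seconds(0));
    if (status == std::future_status::deferred)
        std::cout << "task was deferred; it will run inside get()\n";
    else
        std::cout << "task is running (or already finished) on another thread\n";

    std::cout << "result: " << fut.get() << '\n';
}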