Which is more efficient, a basic mutex lock or an atomic integer?

For something simple like a counter that multiple threads will increment, I read that mutex locks are slower because the threads have to wait. So, to me, an atomic counter would be the most efficient, but then I read that internally it is basically a lock anyway? So I am confused how either one can be more efficient than the other.

77,828 views

The atomic variable classes in Java take advantage of the compare-and-swap (CAS) instructions provided by the processor.

Here's a detailed description of the differences: http://www.ibm.com/developerworks/library/j-jtp11234/

Atomic operations leverage processor support (compare-and-swap instructions) and don't use locks at all, whereas locks are more OS-dependent and perform differently on, for example, Windows and Linux.

Locks actually suspend thread execution, freeing up CPU resources for other tasks, but incurring obvious context-switching overhead when stopping/restarting the thread. By contrast, threads attempting atomic operations don't wait; they keep trying until they succeed (so-called busy-waiting), so they don't incur context-switching overhead, but neither do they free up CPU resources.

Summing up: in general, atomic operations are faster as long as contention between threads is sufficiently low. You should definitely benchmark, as there is no other reliable way of knowing which overhead is lower, context switching or busy-waiting.

If you have a counter for which atomic operations are supported, it will be more efficient than a mutex.

Technically, the atomic will lock the memory bus on most platforms. However, there are two ameliorating details:

  • It is impossible to suspend a thread during the memory bus lock, but it is possible to suspend a thread during a mutex lock. This is what lets you get a lock-free guarantee (which doesn't say anything about not locking - it just guarantees that at least one thread makes progress).
  • Mutexes eventually end up being implemented with atomics. Since you need at least one atomic operation to lock a mutex, and one atomic operation to unlock it, it takes at least twice as long to do a mutex lock, even in the best of cases.

An atomic integer is a user-mode object, and is therefore much more efficient than a mutex, which runs in kernel mode. The scope of an atomic integer is a single application, while the scope of a mutex is all the running software on the machine.

A mutex is a kernel-level primitive which provides mutual exclusion even at the process level. Note that it can be helpful for extending mutual exclusion across process boundaries, not just within a process (between threads). It is costlier.

An atomic counter, AtomicInteger for example, is based on CAS, and typically keeps attempting the operation until it succeeds. Essentially, the threads race, or compete, to increment/decrement the value atomically. Here, you may see a good number of CPU cycles being spent by a thread trying to operate on the current value.

Since you want to maintain the counter, AtomicInteger/AtomicLong will be the best for your use case.

Most processors support an atomic read or write, and often an atomic compare-and-swap. This means that the processor itself reads or writes the latest value in a single operation; a few cycles may be lost compared to a normal integer access, especially as the compiler can't optimise around atomic operations nearly as well as around normal ones.

On the other hand, a mutex takes a number of lines of code to enter and leave, and during that execution other processors that access the same location are completely stalled, so clearly a big overhead for them. In unoptimised high-level code, the mutex enter/exit and the atomic operation will be function calls, but for the mutex, any competing processor is locked out from the moment your mutex-enter function returns until your exit function has started. For the atomic, only the duration of the actual operation is locked out. Optimisation should reduce that cost, but not all of it.

If you are trying to increment, then your modern processor probably supports atomic increment/decrement, which will be great.

If it does not, then it is either implemented using the processor's atomic cmp&swap, or using a mutex.

Mutex:

get the lock
read
increment
write
release the lock

Atomic cmp&swap:

atomic read the value
calc the increment
do {
    atomic cmp&swap value, increment
    recalc the increment
} while the cmp&swap did not see the expected value

So this second version has a loop [in case another processor increments the value between our atomic operations, so that the value no longer matches and the increment would be wrong], which can get long [if there are many competitors], but it should generally still be quicker than the mutex version, although the mutex version does allow that processor to task-switch.

A minimal (standards compliant) mutex implementation requires 2 basic ingredients:

  • A way to atomically convey a state change between threads (the 'locked' state)
  • Memory barriers that force the memory operations protected by the mutex to stay inside the protected area.

There is no way you can make it any simpler than this because of the 'synchronizes-with' relationship the C++ standard requires.

A minimal (correct) implementation might look like this:

class mutex {
    std::atomic<bool> flag{false};

public:
    void lock()
    {
        while (flag.exchange(true, std::memory_order_relaxed))
            ;
        std::atomic_thread_fence(std::memory_order_acquire);
    }

    void unlock()
    {
        std::atomic_thread_fence(std::memory_order_release);
        flag.store(false, std::memory_order_relaxed);
    }
};

Due to its simplicity (it cannot suspend the thread of execution), it is likely that, under low contention, this implementation outperforms a std::mutex. But even then, it is easy to see that each integer increment, protected by this mutex, requires the following operations:

  • an atomic store to release the mutex
  • an atomic exchange (read-modify-write) to acquire the mutex (possibly multiple times)
  • an integer increment

If you compare that with a standalone std::atomic<int> that is incremented with a single (unconditional) read-modify-write (e.g. fetch_add), it is reasonable to expect that the atomic operation (using the same ordering model) will outperform the mutex-based version.