Linked by Hadrien Grasland on Fri 27th May 2011 11:34 UTC
General Development After having an interesting discussion with Brendan on the topic of deadlocks in threaded and asynchronous event handling systems (see the comments on this blog post), I just had something to ask the developers on OSnews: could you live without blocking API calls? Could you work with APIs where lengthy tasks like writing to a file, sending a signal, or doing network I/O are done in a nonblocking fashion, with callbacks as the only mechanism to return results and notify your software when an operation is done?
Permalink for comment 475056
RE[3]: Mutex-less computing
by Alfman on Mon 30th May 2011 08:59 UTC in reply to "RE[2]: Mutex-less computing"


"I would first like to declare that I am speaking out of my area of expertise."

I don't mind in the least.

"If you have multiple cores, then even if you dedicate one core to the server, then if, in the rare case, two other cores decide to request something, then there will be a need to care, no?"

Consider a process like a web server using async IO mechanisms. From one thread, it waits for connections and IO with a single blocking call to the OS, like epoll_wait. As requests come in, it dispatches further IO requests on behalf of clients, but since the async thread only schedules IO and doesn't block, it returns immediately to waiting for more IO. From the userspace perspective, no threads or mutexes are needed to implement this model.

One simple way to achieve more parallelism is to run two or more instances of this async application, and there would still be no need for a userspace mutex, since they'd be running in different processes.

"On the other hand, these kinds of atomics, the compare-and-swap and the LL/SC, are hardware accelerated to be (a) non-blocking and (b) interrupt-enabled and (c) runs in one cycle."

Firstly, my assembly knowledge drops off significantly beyond the x86, so I can't generalize this to be true for other architectures.

(a) non-blocking is true in the threaded sense, but false at the hardware level.

(b) I don't know what you mean, but atomics don't interfere with interrupts.

(c) Runs in one cycle? Where do you get that from?

"Why do you claim that they are slower than CPU speed?"

The CPU asserts a lock signal which prevents other CPUs from using the bus, like a tiny mutex. Since this takes place at the bus level, atomic operations run at bus speed rather than CPU speed. And from my hasty measurements, one even takes more than a single bus cycle.

But I won't expect you to take my word for it; here's a microbenchmark from my laptop. I compiled with gcc -O2 (gcc optimized away the loop into a mul, which is why I used two adds):

int x = 0, y = 0, i;
for (i = 0; i < 100000000; i++) {
    //y += x; // normal add = 0.1s
    __sync_add_and_fetch(&y, x); // lock add = 2.8s
}

Anyways, using "y+=x", the loop ran in .1s and the underlying opcode was just "add".

Using the gcc atomic function, the compiler emitted "lock add" and the loop executed in 2.8s.

The first can execute in parallel on SMP, the second becomes serialized.

Ideally I'd write a more precise assembly language test, but I think this example is sufficient to demonstrate my claim.

"Nonetheless, if you combine atomicity and MT, I cannot foresee why a good implementation will not outperform simple threaded and/or async as described."

Here are a few reasons:

Threads are best suited for longer-running tasks, where MT overhead is minimal compared to the other processing. For IO, however, operations are frequently followed by more IO operations (read/write loops). Very little time is spent in the threads doing real work, so CPU context-switching overhead becomes enormous.

The async model can handle the same IO from one thread in a queue with no context switching at all. Using an AIO interface, it's possible to batch many requests into a single syscall.

MT is inherently penalized by the need for synchronization overhead, async is not.

Excessive use of MT in certain designs results in unscalable stack utilization (even with enough RAM, there will be CPU cache performance issues).
