Linked by Hadrien Grasland on Fri 27th May 2011 11:34 UTC
General Development After having an interesting discussion with Brendan on the topic of deadlocks in threaded and asynchronous event handling systems (see the comments on this blog post), I just had something to ask the developers on OSnews: could you live without blocking API calls? Could you work with APIs where lengthy tasks like writing to a file, sending a signal, or doing network I/O are done in a nonblocking fashion, with callbacks as the only mechanism to return results and notify your software when an operation is done?
Thread beginning with comment 474815
RE: Yes, but it would be ugly
by samuelmello on Fri 27th May 2011 14:05 UTC in reply to "Yes, but it would be ugly"
samuelmello
Member since:
2011-05-27

One approach to reducing the number of semaphores/mutexes is to use one thread for each group of resources you may need to lock. Since you know that only a single thread modifies those resources, you don't need mutexes. You can tune scalability across multiple cores by changing the granularity of the resource groups (smaller groups need more threads).

Each of these threads has a queue of work to be done, and you'll have to model your "work" as structs/objects that can be stored in a queue.

So, instead of having:

Component blah:
...do some_stuff with blah;
...wait(lock_for_foobar);
...do X with foobar;
...release(lock_for_foobar);
...do more_stuff with blah;

You will end up with:

Blah.some_stuff:
...do some_stuff with blah;
...enqueue "do X and then notify Blah" to Foobar

Foobar.X:
...do X with foobar;
...enqueue "X done" to Blah

Blah.X_done:
...do more_stuff with blah;

This way you can get rid of mutexes and use only async calls.
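The transformation above can be sketched in runnable form. This is an illustrative example only (the names Blah, Foobar, and the message strings are made up): each resource is owned by exactly one thread, so it can be modified without a mutex, and all cross-resource work travels through queues as messages.

```python
import queue
import threading

blah_q = queue.Queue()    # work destined for the thread that owns "blah"
foobar_q = queue.Queue()  # work destined for the thread that owns "foobar"

blah = {"steps": []}      # touched only by blah_thread after startup
foobar = {"x_count": 0}   # touched only by foobar_thread after startup

def blah_thread():
    # Blah.some_stuff: do local work, then ask Foobar to do X
    blah["steps"].append("some_stuff")
    foobar_q.put("do_X")            # enqueue "do X and then notify Blah"
    while True:
        msg = blah_q.get()
        if msg == "X_done":
            # Blah.X_done: continue once Foobar has finished
            blah["steps"].append("more_stuff")
            foobar_q.put("stop")
            return

def foobar_thread():
    while True:
        msg = foobar_q.get()
        if msg == "do_X":
            foobar["x_count"] += 1  # "do X with foobar" -- no lock needed
            blah_q.put("X_done")    # enqueue "X done" to Blah
        elif msg == "stop":
            return

t1 = threading.Thread(target=blah_thread)
t2 = threading.Thread(target=foobar_thread)
t1.start(); t2.start()
t1.join(); t2.join()
print(blah["steps"])     # ['some_stuff', 'more_stuff']
print(foobar["x_count"]) # 1
```

Note that neither dict is ever touched by two threads at once, which is exactly what makes the mutex unnecessary.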

Empirically, this approach seems to have higher latency but better throughput. Has anybody seen performance comparisons between these kinds of programming models?

Edited 2011-05-27 14:06 UTC

Reply Parent Score: 1

looncraz Member since:
2005-07-24

Performance really depends on the exact implementation.

In my (private) LoonCAFE project I use an almost fully async API (with optional blocking calls).

Performance is actually significantly better overall with callbacks and signals, and the applications feel much more responsive - especially when things really get heavy.

You might expect latency to increase, but I have not experienced that at all. The overhead of callbacks and signals is extremely minimal: no more than the cost of calling a function, or a simple loop in the case of a signal (signals really serve a different purpose, though). An extra 'call' just isn't very expensive.
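To make the cost claim concrete, here is a hypothetical sketch (not LoonCAFE's actual code, which is private): a callback completion is one indirect function call, while a signal fans out to each connected handler with a simple loop.

```python
class Signal:
    """Minimal signal: emitting is just a loop over connected handlers."""
    def __init__(self):
        self.handlers = []

    def connect(self, fn):
        self.handlers.append(fn)

    def emit(self, *args):
        for fn in self.handlers:  # the "simple loop"
            fn(*args)

results = []

def async_read(path, on_done):
    # Stand-in for a nonblocking read: when the I/O "completes",
    # the cost of delivery is a single function call.
    on_done(f"data from {path}")

async_read("/tmp/file", lambda data: results.append(data))

done = Signal()
done.connect(lambda: results.append("handler-1"))
done.connect(lambda: results.append("handler-2"))
done.emit()

print(results)  # ['data from /tmp/file', 'handler-1', 'handler-2']
```

The difference in purpose shows here too: the callback delivers one result to one caller, while the signal broadcasts an event to however many listeners happen to be connected.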

--The loon

Reply Parent Score: 2