Linked by Hadrien Grasland on Fri 27th May 2011 11:34 UTC
General Development After having an interesting discussion with Brendan on the topic of deadlocks in threaded and asynchronous event handling systems (see the comments on this blog post), I just had something to ask the developers on OSnews: could you live without blocking API calls? Could you work with APIs where lengthy tasks like writing to a file, sending a signal, or doing network I/O are done in a nonblocking fashion, with callbacks as the only mechanism to return results and notify your software when an operation is done?
Thread beginning with comment 474808
Yes, but it would be ugly
by tomz on Fri 27th May 2011 12:48 UTC
tomz
Member since:
2010-05-06

Particularly in the case of a semaphore or mutex.

One function of blocking is to give up the CPU when you can't do anything, and to queue for a resource.

It would turn simple code into something like the following, but the pastabilities are endless:

gotit = 0;
waitforit() { gotit = 1; }
func() {
...
getit(resource, waitforit);
while (!gotit) sched_yield(); // heat up the CPU, drain the battery, etc.
...
}

Of course there might not be callbacks, but check-if-done functions:

while( !areWeThereYet(destination) ) burn();

Of course, if it is truly asynchronous, each callback will take a parameter specifying the next callback to invoke once the first one has completed.
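To make the "callbacks taking callbacks" point concrete, here is a minimal Python sketch of that chaining style. All the names (acquire_async, write_async, do_work) are hypothetical, not from any real API; a real system would dispatch these continuations from an event loop rather than invoking them immediately.

```python
# Sketch: each async operation takes a continuation to invoke when it
# finishes, so multi-step logic becomes a chain of nested callbacks.

def acquire_async(resource, on_acquired):
    # Pretend acquisition completes immediately; a real system would
    # fire on_acquired later, from an event loop.
    on_acquired(resource)

def write_async(resource, data, on_written):
    resource.append(data)
    on_written(resource)

def do_work(resource, results):
    # The continuation of one step starts the next step.
    def after_write(res):
        results.append("done")
    def after_acquire(res):
        write_async(res, "data", after_write)
    acquire_async(resource, after_acquire)

log = []
results = []
do_work(log, results)
# after the chain runs: log == ["data"], results == ["done"]
```

Even in this toy version you can see why deeply chained callbacks get ugly: the program's control flow is turned inside out.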

I could live without blocking, just as I could live in a post-apocalyptic world, but neither would be easy, simple, or pleasant.

Reply Score: 1

RE: Yes, but it would be ugly
by tbcpp on Fri 27th May 2011 13:08 in reply to "Yes, but it would be ugly"
tbcpp Member since:
2006-02-06

I think you're missing how async is done in modern languages. For instance, the entire I/O API in Silverlight is implemented via async calls.

So instead of actually warming the CPU in a loop, you provide a callback where execution will continue. Semaphores and mutexes could be coded in the same way (using C#'s () => lambda syntax):

DoStuff();
mutex.Acquire( () => {
DoMoreStuff();
});

In C# 5 there are extensions for doing this in a simpler manner (something like this):

DoStuff();
await mutex.Acquire();
DoMoreStuff();

Basically this is just syntactic sugar around the first example. I don't think any serious async system uses polling loops; instead, they hand the callback handling off to the VM or OS.

Reply Parent Score: 3

RE: Yes, but it would be ugly
by samuelmello on Fri 27th May 2011 14:05 in reply to "Yes, but it would be ugly"
samuelmello Member since:
2011-05-27

One approach to reduce the number of semaphores/mutexes is to use one thread for each group of resources you may need to lock. Then, since you know that only a single thread modifies those resources, you don't need mutexes. You can adjust the scalability for multiple cores by changing the granularity of the resource groups (smaller groups need more threads).

Each of these threads has a queue of work to be done, and you'll have to model your "work" as structs/objects that can be stored in a queue.

So, instead of having:

Component blah:
...do some_stuff with blah;
...wait(lock_for_foobar);
...do X with foobar;
...release(lock_for_foobar);
...do more_stuff with blah;

You will end up with:

Blah.some_stuff:
...do some_stuff with blah;
...enqueue "do X and then notify Blah" to Foobar

Foobar.X:
...do X with foobar;
...enqueue "X done" to Blah

Blah.X_done:
...do more_stuff with blah;

This way you can get rid of mutexes and use only async calls.
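The Blah/Foobar exchange above can be sketched with real threads and queues. In this minimal Python version (all names hypothetical), one worker thread exclusively owns the foobar data, so no mutex is needed; callers enqueue work and receive the "X done" notification through a reply queue.

```python
import queue
import threading

foobar = []                 # owned exclusively by the worker thread below
work_q = queue.Queue()      # carries "do X and then notify Blah" messages
reply_q = queue.Queue()     # carries "X done" notifications back

def foobar_worker():
    # The only thread that ever touches foobar -- no lock required.
    while True:
        item = work_q.get()
        if item is None:            # shutdown sentinel
            break
        foobar.append(item)         # "do X with foobar"
        reply_q.put(("X done", item))

t = threading.Thread(target=foobar_worker)
t.start()

# Blah.some_stuff: do some_stuff, then hand off the foobar work
work_q.put("x")

# Blah.X_done: continues when the notification arrives
msg = reply_q.get()

work_q.put(None)   # tell the worker to shut down
t.join()
```

Note that reply_q.get() here blocks for simplicity; in a fully async design Blah would itself be a worker draining its own queue, exactly as in the pseudocode above.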

Empirically, this approach seems to have larger latency but better throughput. Has anybody seen performance comparisons among these kinds of programming models?

Edited 2011-05-27 14:06 UTC

Reply Parent Score: 1

looncraz Member since:
2005-07-24

Performance really depends on the exact implementation.

In my (private) LoonCAFE project I use an almost fully async API (with optional blocking calls).

Performance is actually significantly better with callbacks and signals overall and the applications feel much better - especially when things really get heavy.

It would seem that latency would increase, but I have not experienced that at all. The overhead of callbacks and signals is extremely minimal, being no more than the overhead of calling a function - or a simple loop in the case of a signal (signals really serve a different purpose, though).

Which is to say, not much in my implementation. An extra 'call' ain't very expensive.

--The loon

Reply Parent Score: 2