Linked by Thom Holwerda on Sat 9th Feb 2013 02:04 UTC, submitted by ac
Linux "Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it today. Our goal (and I use 'goal' in a very rough term, I have 8 pages of scribbled notes describing what we want to try to implement here), is to provide a reliable multicast and point-to-point messaging system for the kernel, that will work quickly and securely. On top of this kernel feature, we will try to provide a 'libdbus' interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system."
Thread beginning with comment 552011
RE[2]: Finally!
by JAlexoid on Sun 10th Feb 2013 00:08 UTC in reply to "RE: Finally!"
JAlexoid
Member since:
2009-05-19

"It's true that multi-process blocking IO is the traditional unix way, and some people prefer that way of programming. But a big reason for phasing it out is because it scales very inefficiently compared to multi-threaded and async models.

As a popular example, take a look at the scalability differences for a traditional multi-process web server (apache MPM) versus modern asynchronous ones:

http://whisperdale.net/11-nginx-vs-cherokee-vs-apache-vs-lighttpd.h..."


Quite a common mistake: comparing async performance to Apache's. Benchmarks like that don't highlight differences in the scalability of non-blocking IO; they just demonstrate that Apache is heavyweight.

Reply Parent Score: 3

RE[3]: Finally!
by Alfman on Sun 10th Feb 2013 01:08 in reply to "RE[2]: Finally!"
Alfman Member since:
2011-01-28

JAlexoid,

"Quite a common mistake. Comparing async performance to performance of Apache. Those don't highlight the differences in scalability of non-blocking IO, but demonstrate that Apache is big."

I've done socket benchmarks like these myself in the past; Apache was just an example, so feel free to substitute whichever server you like. Blocking socket IO implies using either a multi-process or a multi-threaded model, and I hope we can agree that the forking multi-process model is the least scalable.
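
For concreteness, this is roughly what I mean by the forking model: a toy echo server, one whole process per client, each blocked in read()/write() (error handling trimmed, not any particular server's code):

/* Traditional fork-per-connection blocking model: one process per client. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    signal(SIGCHLD, SIG_IGN);               /* reap children automatically */

    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);  /* blocks until a client connects */
        if (cfd < 0)
            continue;
        if (fork() == 0) {                  /* one whole process per client */
            close(lfd);
            char buf[4096];
            ssize_t n;
            while ((n = read(cfd, buf, sizeof buf)) > 0)   /* blocking IO */
                write(cfd, buf, n);
            close(cfd);
            _exit(0);
        }
        close(cfd);                         /* parent returns to accept() */
    }
}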

There's nothing wrong with the multi-threaded blocking model, and I don't criticize anyone for using it. However, it does carry more memory/CPU overhead than the non-blocking model, because each client gets its own stack and you need synchronization primitives that are usually unnecessary with the async model. Additionally, a multi-threaded solution doesn't increase concurrency over an asynchronous one once the number of async handlers equals the number of cores, so right there you've got all the ingredients for async to come out ahead.

Mind you, I think the difference is negligible when the number of concurrent users is low, but async really shines with many concurrent connections (10K+).
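
For comparison, the async model I'm describing looks roughly like this: a single epoll loop servicing every connection from one thread (toy echo server again, error and partial-write handling omitted). In practice you'd run one such loop per core.

/* Non-blocking/async model: one epoll event loop handles all clients. */
#include <arpa/inet.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);
    set_nonblocking(lfd);

    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(ep, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(ep, events, 64, -1);    /* the only blocking call */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == lfd) {                       /* new connection(s) */
                int cfd;
                while ((cfd = accept(lfd, NULL, NULL)) >= 0) {
                    set_nonblocking(cfd);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = cfd };
                    epoll_ctl(ep, EPOLL_CTL_ADD, cfd, &cev);
                }
            } else {                               /* readable client */
                char buf[4096];
                ssize_t r = read(fd, buf, sizeof buf);
                if (r <= 0)
                    close(fd);                     /* closed fds leave epoll */
                else
                    write(fd, buf, r);             /* echo (may short-write) */
            }
        }
    }
}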

Reply Parent Score: 4

RE[4]: Finally!
by JAlexoid on Sun 10th Feb 2013 23:11 in reply to "RE[3]: Finally!"
JAlexoid Member since:
2009-05-19

"Blocking socket IO implies using either a multi-process or a multi-threaded model, and I hope we can agree that the forking multi-process model is the least scalable."

There is no form of IO that gives you high scalability without multi-threading or multiple processes. (nginx, lighttpd, and node.js all use multiple processes to scale.)
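
That's the master/worker pattern those servers use, roughly: fork a handful of workers (typically one per core) that all share the listening socket, each running its own serving loop. A skeletal, illustrative sketch (the worker count and the serving strategy inside the loop are placeholders):

/* Prefork master/worker skeleton: workers share one listening socket. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4                          /* typically the number of cores */

static void worker_loop(int lfd)
{
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);  /* kernel spreads clients across workers */
        if (cfd < 0)
            continue;
        /* ... serve the connection: event loop, threads, or blocking IO ... */
        close(cfd);
    }
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);

    for (int i = 0; i < NWORKERS; i++)      /* master forks the workers... */
        if (fork() == 0) {
            worker_loop(lfd);               /* ...each inherits the listening fd */
            _exit(0);
        }

    for (;;)                                /* master just reaps workers */
        wait(NULL);
}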

"However, it does carry more memory/CPU overhead than the non-blocking model, because each client gets its own stack and you need synchronization primitives that are usually unnecessary with the async model."

What gave you the impression that:
A) threads are expensive in memory or CPU? They are quite cheap these days (see the sketch below).
B) the synchronization you need with blocking IO isn't replaced with something similar in non-blocking IO? (It always is.)
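
To illustrate point A: a thread-per-connection sketch where each thread gets a deliberately small stack (the 64 KiB figure is just an illustrative choice), so the per-client cost stays modest even with blocking IO.

/* Thread-per-connection with cheap threads: small stacks, detached. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <unistd.h>

static void *serve(void *arg)
{
    int cfd = (int)(intptr_t)arg;
    char buf[4096];
    ssize_t n;
    while ((n = read(cfd, buf, sizeof buf)) > 0)   /* plain blocking IO */
        write(cfd, buf, n);
    close(cfd);
    return NULL;
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(8080),
                                .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 128);

    pthread_attr_t attr;                    /* build with: gcc ... -pthread */
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 64 * 1024);             /* ~64 KiB per thread */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    for (;;) {
        int cfd = accept(lfd, NULL, NULL);
        if (cfd < 0)
            continue;
        pthread_t tid;
        pthread_create(&tid, &attr, serve, (void *)(intptr_t)cfd);
    }
}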

Reply Parent Score: 3