Linked by Thom Holwerda on Sat 9th Feb 2013 02:04 UTC, submitted by ac
Linux "Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it today. Our goal (and I use 'goal' in a very rough term, I have 8 pages of scribbled notes describing what we want to try to implement here), is to provide a reliable multicast and point-to-point messaging system for the kernel, that will work quickly and securely. On top of this kernel feature, we will try to provide a 'libdbus' interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system."
Finally!
by butters on Sat 9th Feb 2013 03:05 UTC

A real pub-sub multicast socket implementation in the kernel. This has been at the top of my feature wish list for a long time. Obviously D-Bus will be the first client, but libevent would be next, and then the Wayland devs are beginning to rethink their IPC protocol.

In the beginning, there was select and poll. Then there was epoll. But their days may be numbered. It should be obvious that the socket is the one true UNIX abstraction for IPC. This is the way it was always meant to be. We fork a bunch of small processes, each of which does one thing well, and hook them all together with read() and write().
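Roughly, a minimal sketch of that pattern: a parent forks one small worker and hands it work over a pipe, using nothing but read() and write():

    /* Classic UNIX composition: fork a small worker that does one
     * thing well, and talk to it over a pipe with read()/write(). */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fds[2];                     /* fds[0]: read end, fds[1]: write end */
        if (pipe(fds) == -1)
            return 1;

        pid_t pid = fork();
        if (pid == 0) {                 /* child: the small worker */
            close(fds[1]);
            char buf[64];
            ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("worker got: %s\n", buf);
            }
            _exit(0);
        }

        close(fds[0]);                  /* parent: hand work to the worker */
        write(fds[1], "hello", 5);
        close(fds[1]);
        wait(NULL);
        return 0;
    }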

But we need more flexibility in how we compose networks of communicating processes. We can't do everything with pipelines and the client-server pattern. Even a microkernel purist would agree that IPC should be provided by a point-to-point message bus in the kernel. Anything else we might want can be built on top.

Reply Score: 9

RE: Finally!
by Alfman on Sat 9th Feb 2013 05:58 in reply to "Finally!"

butters,

Well, first of all let me state that I do share your enthusiasm for moving this sort of IPC into the kernel.


"In the beginning, there was select and poll. Then there was epoll. But their days may be numbered. It should be obvious that the socket is the one true UNIX abstraction for IPC."

I'm not sure if you meant exactly what you said here, but select/poll/epoll are complementary mechanisms for using sockets; sockets don't supersede them. I've found that epoll is by far the most efficient way to handle many asynchronous sockets.
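For what it's worth, the skeleton of an epoll loop is tiny; a rough sketch (socket setup and error handling omitted):

    /* Rough sketch of an epoll event loop over many sockets. */
    #include <sys/epoll.h>

    void event_loop(int listen_fd)
    {
        struct epoll_event ev, events[64];
        int epfd = epoll_create1(0);

        ev.events = EPOLLIN;
        ev.data.fd = listen_fd;
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        for (;;) {
            int n = epoll_wait(epfd, events, 64, -1);   /* block until activity */
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listen_fd) {
                    /* accept() the new client, EPOLL_CTL_ADD its fd */
                } else {
                    /* read() from the ready socket and handle the data */
                }
            }
        }
    }

One epoll instance can watch thousands of sockets from a single thread, which is where the efficiency comes from.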


"This is the way it was always meant to be. We fork a bunch of small processes, each of which does one thing well, and hook them all together with read() and write()."

It's true that multi-process blocking IO is the traditional UNIX way, and some people prefer that way of programming. But a big reason for phasing it out is that it scales very inefficiently compared to multi-threaded and async models.
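To see why, here's a sketch of the traditional fork-per-connection model, where every client costs a whole process (zombie reaping and error checks omitted):

    /* Traditional blocking model: one process per connection. */
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    void serve(int listen_fd)
    {
        for (;;) {
            int conn = accept(listen_fd, NULL, NULL);
            if (conn < 0)
                continue;
            if (fork() == 0) {          /* dedicate a process to this client */
                close(listen_fd);
                char buf[512];
                ssize_t n;
                while ((n = read(conn, buf, sizeof(buf))) > 0)
                    write(conn, buf, n);    /* blocking echo loop */
                _exit(0);
            }
            close(conn);                /* parent goes back to accepting */
        }
    }

At ten thousand clients that's ten thousand processes' worth of memory and context switching, versus a single epoll loop.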

As a popular example, take a look at the scalability differences between a traditional multi-process web server (Apache MPM) and modern asynchronous ones:

http://whisperdale.net/11-nginx-vs-cherokee-vs-apache-vs-lighttpd.h...

Reply Parent Score: 7

RE[2]: Finally!
by butters on Sat 9th Feb 2013 23:29 in reply to "RE: Finally!"

I should have said "execution units" instead of processes. Obviously multithreading is an improvement over multiprocessing, and multiplexing coroutines on a non-blocking thread pool is an improvement over multithreading.

But an in-kernel message bus would still ease the implementation and improve the performance of a modern concurrent runtime platform such as Go, whose channel type would map nicely onto AF_BUS.
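Purely hypothetical, since AF_BUS only ever existed as a proposed (unmerged) patch set, but creating an endpoint would presumably look like any other address family; the AF_BUS value and the SOCK_SEQPACKET choice below are assumptions:

    /* Hypothetical sketch: AF_BUS is not in mainline, so this
     * placeholder constant and the semantics are assumptions. */
    #include <sys/socket.h>

    #ifndef AF_BUS
    #define AF_BUS 38   /* placeholder; not defined by mainline headers */
    #endif

    int open_bus_endpoint(void)
    {
        /* Like AF_UNIX, but with reliable multicast delivery:
         * every peer joined to the bus sees each message. */
        return socket(AF_BUS, SOCK_SEQPACKET, 0);
    }

A Go channel send or receive could then compile down to little more than a send()/recv() pair on such a socket.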

Reply Parent Score: 4

RE[2]: Finally!
by JAlexoid on Sun 10th Feb 2013 00:08 in reply to "RE: Finally!"

"It's true that multi-process blocking IO is the traditional UNIX way, and some people prefer that way of programming. But a big reason for phasing it out is that it scales very inefficiently compared to multi-threaded and async models.

As a popular example, take a look at the scalability differences between a traditional multi-process web server (Apache MPM) and modern asynchronous ones:

http://whisperdale.net/11-nginx-vs-cherokee-vs-apache-vs-lighttpd.h..."


Quite a common mistake: comparing async performance to the performance of Apache. Those benchmarks don't highlight the scalability differences of non-blocking IO; they demonstrate that Apache is big.

Reply Parent Score: 3