Linked by Thom Holwerda on Sat 9th Feb 2013 02:04 UTC, submitted by ac
Linux "Both of these articles allude to the fact that I'm working on putting the D-Bus protocol into the kernel, in order to help achieve these larger goals of proper IPC for applications. And I'd like to confirm that yes, this is true, but it's not going to be D-Bus like you know it today. Our goal (and I use 'goal' in a very rough term, I have 8 pages of scribbled notes describing what we want to try to implement here), is to provide a reliable multicast and point-to-point messaging system for the kernel, that will work quickly and securely. On top of this kernel feature, we will try to provide a 'libdbus' interface that allows existing D-Bus users to work without ever knowing the D-Bus daemon was replaced on their system."
RE[5]: Finally!
by Alfman on Mon 11th Feb 2013 00:47 UTC in reply to "RE[4]: Finally!"

JAlexoid,

"There is no form of IO that will imply high scalability without multi-threading or multiple processes.(nginx, lighttpd and node.js use multiple processes to scale)"

That's not exactly true; at best it's misleading. The only reason nginx uses multiple processes is to distribute the load across all cores, and that is a *fixed* number of processes with a *fixed* overhead. All further scaling under the non-blocking asynchronous model is achieved without ANY more processes or threads, REGARDLESS of the size of the workload.
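
To make the distinction concrete, here's a minimal sketch of that fixed-worker pattern (hypothetical code, not nginx's actual source; run_event_loop() is a stand-in for an epoll loop like the one sketched further down):

/* Fork one worker per core up front; each worker's event loop then
 * absorbs all further load without spawning anything new. */
#include <unistd.h>
#include <sys/sysinfo.h>

void run_event_loop(void); /* hypothetical: services all of this worker's clients, never returns */

int main(void)
{
    int ncpus = get_nprocs(); /* one worker per online core */
    for (int i = 0; i < ncpus; i++) {
        if (fork() == 0)
            run_event_loop();
    }
    pause(); /* parent idles; more clients never means more processes */
    return 0;
}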

With a blocking model, each client requires another thread/process, which adds up to gross inefficiencies at the upper end of the scale.


"What gave you the impression that:
A) threads are expensive in memory or CPU. They are quite cheap these days."

Well, if it means the difference between ~8-32KiB per client thread versus a couple hundred bytes or so for the client data structure used by the asynchronous approach, I'd say that's a significant difference, both in theory and in practice.
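
For a rough idea of what that per-client structure might look like (a purely hypothetical layout, just to show the scale):

/* Everything needed to resume a client lives in one small object. */
#include <stddef.h>

struct client {
    int    fd;       /* the client's socket */
    int    state;    /* position in the protocol state machine */
    size_t buf_used; /* bytes currently buffered */
    char   buf[192]; /* partial read/write data */
};
/* sizeof(struct client) is on the order of 200 bytes, versus the
 * 8-32KiB stack reserved for every thread in the blocking model. */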

"B) synchronization of blocking IO isn't replaced with something similar in nonblocking IO. (It always is)"

I'm not sure what you mean here. I was referring to the fact that multi-threaded programming has to implement synchronization around data objects shared between threads (for client accounting, shared caches, or whatever), whereas with the async model there's usually no need to grab a mutex at all, because each client is handled entirely within one thread.
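
A tiny contrast using a shared request counter (names made up, but the shape is representative):

#include <pthread.h>

static long total_requests;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Blocking model: every client thread touches the shared counter,
 * so the access has to be locked. */
void count_request_threaded(void)
{
    pthread_mutex_lock(&lock);
    total_requests++;
    pthread_mutex_unlock(&lock);
}

/* Async model: every client on the loop runs on the same thread,
 * so the identical update needs no lock at all. */
void count_request_async(void)
{
    total_requests++;
}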

Your responses give me the impression you've never written an epoll-based handler; am I right?

See the epoll documentation:
http://linux.die.net/man/2/epoll_wait

They've done a number of things to make the interface highly efficient. For one, we can retrieve hundreds of events from the kernel in a single syscall. For another, unlike the earlier select/poll syscalls, we don't have to re-specify the set of sockets to be monitored on every call. It may not be for everyone, but I swear by it.
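
For anyone curious, the core of such a handler looks something like this (a bare-bones sketch with error handling omitted; handle_client() is hypothetical):

#define _GNU_SOURCE         /* for accept4() */
#include <sys/epoll.h>
#include <sys/socket.h>

#define MAX_EVENTS 512      /* pull hundreds of events per syscall */

void handle_client(int fd); /* hypothetical per-client state machine */

void run_event_loop(int listen_fd)
{
    struct epoll_event ev, events[MAX_EVENTS];
    int epfd = epoll_create1(0);

    /* Register the listener once; unlike select()/poll(), the fd set
     * is not re-submitted on every call. */
    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                /* New connection: add it to the same epoll set. */
                int cfd = accept4(listen_fd, NULL, NULL, SOCK_NONBLOCK);
                ev.events = EPOLLIN;
                ev.data.fd = cfd;
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &ev);
            } else {
                /* All clients are serviced on this one thread. */
                handle_client(events[i].data.fd);
            }
        }
    }
}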


Since you seem not to believe me about the better efficiency of the non-blocking/async model, I challenge you to find an example where a multi-threaded/multi-process solution performs as well as or better than the asynchronous model at very large scale.
