Linked by Hadrien Grasland on Sun 29th May 2011 09:42 UTC
It's funny how trying to have a consistent system design makes you constantly jump from one area of the designed OS to another. I initially just tried to implement interrupt handling, and now I'm cleaning up the design of an RPC-based daemon model, which will be used to implement interrupt handlers, along with most other system services. Anyway, now that I get to something I'm personally satisfied with, I wanted to ask everyone who's interested to check that design and tell me if anything in it sounds like a bad idea to them in the short or long run. That's because this is a core part of this OS' design, and I'm really not interested in core design mistakes emerging in a few years if I can fix them now. Many thanks in advance.
RE[6]: RPC considered harmful
by Neolander on Tue 31st May 2011 07:26 UTC in reply to "RE[5]: RPC considered harmful"

"While I can see some similarities between this and asynchronous messaging, there's also significant differences; including the overhead of creating (and eventually destroying) threads, which (in my experience) is the third most expensive operation microkernels do (after creating and destroying processes)."

Ah, Brendan, Brendan, how do you always manage to be so kind and helpful with people who play at OSdeving? Do you teach it in real life or something?

Anyway, has your investigation gone far enough to tell which step of the thread creation process is expensive? Maybe it's something whose impact can be reduced...

"On top of that, (because you can't rely on the queues to serialise access to data structures) programmers would have to rely on something else for reentrancy control; like traditional locking, which is error-prone (lots of programmers find it "hard" and/or screw it up) and adds extra overhead (e.g. mutexes with implied task switches when under lock contention)."

Alfman pointed this out, and it is solved by introducing an asynchronous operating mode in which pending calls are queued and run one after the other. Sorry for not mentioning it in the post where I try to describe my model; by the time I noticed the omission it was already too late to edit.

"I also wouldn't underestimate the effect that IPC overhead will have on the system as a whole (especially for "micro-kernel-like" kernels)."

I know, I know, but then we reach one of those chicken-and-egg problems which are always torturing me: how do I know that my IPC design is "light enough" without doing measurements on a working system for real-world use cases? And how do I perform these measurements on something which is still being designed and not yet implemented?

"For example, if IRQs are delivered to device drivers via. IPC, then on a server under load (with high speed ethernet, for e.g.) you can expect thousands of IRQs per second (and expect to be creating and destroying thousands of threads per second). Once you add normal processes communicating with each other, this could easily go up to "millions per second" under load. If IPC costs twice as much as it does on other OSs, then the resulting system as a whole can be 50% slower than comparable systems (e.g. other micro-kernels) because of the IPC alone."

The first objection that spontaneously comes to my mind is that this OS is not designed to run on servers, but rather on desktops and smaller single-user computers.

Maybe desktop use cases also include the need to endure thousands of IRQs per second, though, but I was under the impression that desktop computers are ridiculously powerful compared to what is asked of their OSes, and that their responsiveness issues rather come from things like poor task scheduling ("running the divx encoding process more often than the window manager") or excessive dependence on disk I/O.

"In general, any form of IPC can be implemented on top of any other form of IPC. In practice it's not quite that simple because you can't easily emulate the intended interaction with scheduling (blocking/unblocking, etc) in all cases; and even in cases where you can there's typically some extra overhead involved."

Understood.

"The alternative would be if the kernel has inbuilt support for multiple different forms of IPC; which can lead to a "Tower of Babel" situation where it's awkward for different processes (using different types of IPC) to communicate with each other."

Actually, I tend to lean towards this solution, even though I know of the Babel risk and have regularly thought about it, because each IPC mechanism has its strengths and weaknesses. As an example, pipes and messaging systems are better for processing a stream of data, while remote calls are better suited to handing a process a task to do.

You're right that I need to keep the number of available IPC primitives very small regardless of the benefits of each, though; so there's a compromise to strike, and I have to investigate how useful each primitive really is.

Edited 2011-05-31 07:28 UTC
