Linked by Hadrien Grasland on Sun 29th May 2011 09:42 UTC
It's funny how trying to have a consistent system design makes you constantly jump from one area of the designed OS to another. I initially just tried to implement interrupt handling, and now I'm cleaning up the design of an RPC-based daemon model, which will be used to implement interrupt handlers along with most other system services. Anyway, now that I've gotten to something I'm personally satisfied with, I wanted to ask everyone who's interested to check that design and tell me if anything in it sounds like a bad idea to them, in the short or long run. That's because this is a core part of this OS's design, and I'm really not interested in core design mistakes emerging in a few years if I can fix them now. Many thanks in advance.
Thread beginning with comment 475652
RE[9]: RPC considered harmful
by Neolander on Thu 2nd Jun 2011 10:47 UTC in reply to "RE[8]: RPC considered harmful"

"You can estimate; but how fast would be 'too fast'?

In my opinion, there's no such thing as too fast. It's mostly a question of whether any extra overheads are worth any extra benefits."

For me, "too fast" would be when gaining extra speed means sacrificing too much of another desirable characteristic of the OS. Speed has its trade-offs, and to resolve those trade-offs it helps to have clear goals. Well, I think we agree there anyway.

"The difference between a server OS and a desktop OS is mostly artificial product differentiation made in higher levels of the OS (e.g. if it comes with HTTP/FTP servers and no GUI, or if it comes with a GUI and no HTTP/FTP servers; licensing, advertising, availability of support contracts, etc). It makes no difference at the lowest levels; until/unless you start looking at fault tolerance features (redundancy, hot-plug RAM/CPUs, etc)."

That's the way it's done today, but it has not always been done like that. Classic Windows and Mac OS, for example, were designed for desktop use and would have been terrible as server OSs for a number of reasons.

With TOSP, I design solely for "desktop" (more precisely, personal) computers, because I believe that reducing the number of target use cases will in turn simplify the design in some areas and reduce the number of trade-offs, resulting in a design that's lighter and better suited for the job. It's the classic "generic vs specialized" debate, really.

"Well, not quite (I'm not sure you fully understand the differences between messaging and pipes)."

For me, the difference lies in the atomic transmission unit that the recipient processes.

In a pipe, that atomic unit is a fixed-size chunk of binary data, typically a byte in the UNIX world. In a message, the atomic unit is a variable-sized message, which is not actually processed by the recipient until the message's terminator has been received.

Am I right?

"Pipes would work well for streams of bytes, but messaging wouldn't be ideal (there'd be extra/unnecessary overhead involved with breaking a stream into smaller pieces). Most things aren't streams of bytes though - they're streams of "some sort of data structure". Maybe a stream of "video frames", a stream of "keypresses", a stream of "commands/instructions", a stream of "compressed blocks of audio", etc. In these cases there's natural boundaries between the "some sort of data structures" - messages would be ideal (one message per "some sort of data structure") and pipes wouldn't be ideal (there'd be extra/unnecessary overhead involved with finding the natural boundaries)."
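(To make that boundary-finding overhead concrete, here is a rough C sketch of how a structured message would have to be sent over a plain byte pipe; pipe_read() and pipe_write() are invented stand-ins for whatever byte-stream primitives the OS would provide, not an existing API.)

    /* Illustrative only: length-prefix framing over a byte pipe.  The receiver
     * has to reassemble each message itself -- exactly the boundary-finding
     * overhead that a native messaging primitive would avoid. */
    #include <stdint.h>
    #include <stddef.h>

    /* Assumed byte-stream primitives; return the number of bytes transferred. */
    size_t pipe_write(int pipe, const void *buf, size_t len);
    size_t pipe_read(int pipe, void *buf, size_t len);

    static void read_exact(int pipe, void *buf, size_t len)
    {
        uint8_t *p = buf;
        while (len > 0) {              /* a byte pipe may return partial reads */
            size_t n = pipe_read(pipe, p, len);
            p += n;
            len -= n;
        }
    }

    /* Sender: prepend the payload length so the receiver can find the boundary. */
    void send_framed(int pipe, const void *payload, uint32_t len)
    {
        pipe_write(pipe, &len, sizeof len);
        pipe_write(pipe, payload, len);
    }

    /* Receiver: read the length, then the payload. */
    uint32_t recv_framed(int pipe, void *payload, uint32_t max)
    {
        uint32_t len;
        read_exact(pipe, &len, sizeof len);
        if (len > max)
            len = max;                 /* a real version would handle oversized
                                          payloads properly, not just truncate */
        read_exact(pipe, payload, len);
        return len;
    }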

But what about a kind of pipe that takes something larger than a byte as its basic transmission unit?

Like, you know, if a keypress is defined by a 32-bit scancode, a 32-bit scancode pipe?

You could avoid the overhead of a messaging protocol for fixed-size structures this way.
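(Here is a rough sketch of what such a fixed-unit pipe could look like, modelled in C as a simple ring buffer; typed_pipe_create() and the other names are made up, and a real kernel version would obviously add blocking, locking and so on.)

    /* Illustrative sketch: a pipe whose atomic unit is a fixed-size element
     * chosen at creation time (e.g. a 32-bit scancode), so no per-message
     * framing or terminator is needed.  All names are invented. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        uint8_t *buf;
        size_t   elem_size;    /* size of one transmission unit, in bytes */
        size_t   capacity;     /* capacity in elements */
        size_t   head, tail;   /* write/read indices, in elements */
    } typed_pipe_t;

    typed_pipe_t *typed_pipe_create(size_t elem_size, size_t capacity)
    {
        typed_pipe_t *p = malloc(sizeof *p);
        p->buf = malloc(elem_size * capacity);
        p->elem_size = elem_size;
        p->capacity = capacity;
        p->head = p->tail = 0;
        return p;
    }

    /* Write one element; returns 0 if the pipe is full. */
    int typed_pipe_write(typed_pipe_t *p, const void *elem)
    {
        size_t next = (p->head + 1) % p->capacity;
        if (next == p->tail)
            return 0;
        memcpy(p->buf + p->head * p->elem_size, elem, p->elem_size);
        p->head = next;
        return 1;
    }

    /* Read one element; returns 0 if the pipe is empty. */
    int typed_pipe_read(typed_pipe_t *p, void *elem)
    {
        if (p->tail == p->head)
            return 0;
        memcpy(elem, p->buf + p->tail * p->elem_size, p->elem_size);
        p->tail = (p->tail + 1) % p->capacity;
        return 1;
    }

    /* A 32-bit scancode pipe: one element per keypress, no framing needed.
     *   typed_pipe_t *kbd = typed_pipe_create(sizeof(uint32_t), 256);
     *   uint32_t sc = 0x1C;
     *   typed_pipe_write(kbd, &sc);                                          */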

"Also, for messaging each message typically has a "message type" associated with it. This means that the same receiver can handle many different things at the same time (e.g. it could be receiving a stream of "video frames", a stream of "keypresses" and a stream of "compressed blocks of audio" at the same time, and be able to distinguish between them using the message types). Pipes don't work like that - you'd need a different pipe for each stream. This means that rather than waiting to receive messages of any type, you end up needing something special like "select()" to monitor many different pipes."
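(For illustration, a rough C sketch of the tagged-message approach being described; the message layout and the msg_receive() primitive are invented for the example, not an actual API.)

    /* Illustrative only: one receiver distinguishes several streams by a
     * "message type" field, so a single blocking receive covers everything. */
    #include <stdint.h>
    #include <stddef.h>

    enum msg_type { MSG_KEYPRESS, MSG_VIDEO_FRAME, MSG_AUDIO_BLOCK };

    typedef struct {
        enum msg_type type;      /* lets the receiver tell the streams apart */
        size_t        length;    /* payload size in bytes */
        uint8_t       payload[]; /* variable-sized body */
    } message_t;

    /* Assumed primitive: blocks until any message arrives for this process. */
    message_t *msg_receive(void);

    void server_loop(void)
    {
        for (;;) {
            message_t *m = msg_receive();   /* one wait point for all streams */
            switch (m->type) {
            case MSG_KEYPRESS:    /* handle_keypress(m);    */ break;
            case MSG_VIDEO_FRAME: /* handle_video_frame(m); */ break;
            case MSG_AUDIO_BLOCK: /* handle_audio_block(m); */ break;
            }
        }
    }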

What if a program could monitor several streams at the same time by having a different callback triggered when a message arrives on each of the pipes?
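(And a rough sketch of that callback-per-pipe alternative; pipe_set_callback() and the surrounding names are invented for the example, not an existing interface.)

    /* Illustrative only: each pipe gets its own handler, so the program never
     * needs a central select()-style wait across many pipes. */
    #include <stddef.h>

    typedef void (*pipe_callback_t)(void *pipe, void *context);

    /* Assumed primitive: ask the OS to run 'cb' whenever data arrives on 'pipe'. */
    void pipe_set_callback(void *pipe, pipe_callback_t cb, void *context);

    static void on_keypress(void *kbd_pipe, void *ctx)      { /* read scancodes */ }
    static void on_video_frame(void *video_pipe, void *ctx) { /* read a frame   */ }

    void setup_streams(void *kbd_pipe, void *video_pipe)
    {
        pipe_set_callback(kbd_pipe, on_keypress, NULL);
        pipe_set_callback(video_pipe, on_video_frame, NULL);
    }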

(Again, if the OSnews comment system has decided to archive this discussion by the time you answer, feel free to continue replying on my blog.)



RE[10]: RPC considered harmful
by Alfman on Thu 2nd Jun 2011 18:59 in reply to "RE[9]: RPC considered harmful"

Neolander,

"Like, you know, if a keypress is defined by a 32-bit scancodes, a 32-bit scancode pipe ?

You could avoid the overhead of a messaging protocols for fixed-size structures this way."

Unless there's a compelling reason, I wouldn't limit devs to fixed-size messages.



"What if a program could monitor several streams at the same time by having a different callback being triggered when a message comes in each of the pipes ?"


Hmm, all this talk of pipes is making me wonder: why aren't pipes and RPC unified into one fundamental concept?

The typical use case for pipes is that they are explicitly "read", whereas for RPC a function is explicitly called with parameters "passed in".

But wouldn't it be possible for them to share the same paths in the OS and just let userspace determine which access method is more appropriate?

Would there be a shortcoming in doing so?

...Just a thought.
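(A rough sketch of what that unification might look like, just to make the thought concrete; channel_read(), channel_bind() and the rest are invented names, not a proposal for TOSP's actual API.)

    /* Illustrative only: one kernel "channel" object, and the receiving process
     * picks the access style -- pipe-like explicit reads, or RPC-style dispatch. */
    #include <stddef.h>

    typedef struct channel channel_t;   /* opaque kernel object */
    typedef void (*rpc_handler_t)(const void *args, size_t len, void *reply_ctx);

    /* Pipe-style access: the process pulls data explicitly. */
    size_t channel_read(channel_t *ch, void *buf, size_t max);

    /* RPC-style access: the OS pushes each incoming message into a handler. */
    void channel_bind(channel_t *ch, rpc_handler_t handler);

    /* The same channel could serve either role, chosen purely in userspace:
     *   channel_read(kbd_channel, scancodes, sizeof scancodes);  // pipe usage
     *   channel_bind(disk_channel, handle_disk_request);         // RPC usage  */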
