Linked by Hadrien Grasland on Sun 29th May 2011 09:42 UTC
It's funny how trying to have a consistent system design makes you constantly jump from one area of the designed OS to another. I initially just tried to implement interrupt handling, and now I'm cleaning up the design of an RPC-based daemon model, which will be used to implement interrupt handlers along with most other system services. Anyway, now that I've reached something I'm personally satisfied with, I wanted to ask everyone who's interested to check that design and tell me if anything in it sounds like a bad idea to them in the short or long run. This is a core part of this OS' design, and I'd really rather not have core design mistakes surface in a few years when I can fix them now. Many thanks in advance.
Thread beginning with comment 475219
RE[14]: Comment by Kaj-de-Vos
by xiaokj on Tue 31st May 2011 07:19 UTC in reply to "RE[13]: Comment by Kaj-de-Vos"
xiaokj Member since:
2005-06-30

I see most of that as quite pragmatic, so I don't think I can argue much further than I already have. However:

The question is: can you, with declarative data, transform an old instance of the class into a new instance of the class without putting inappropriate data in the "identifier" class member? My conclusion was that it is impossible in my "sort of like RPC" model, but maybe declarative data can do the trick.

You may not be able to make the old code suddenly be new, but without recompiling, you can make the old code speak in the new slang. The parser can just export glue code (as thin as possible, hopefully).
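
To make the "glue code" idea concrete, here is a minimal C sketch (every name in it is invented for illustration, not taken from either project): the old prototype is kept alive by a thin wrapper that merely forwards to the new one.

    #include <stdio.h>

    /* New "slang": the current interface takes an extra flags argument.
     * All names here are made up for illustration. */
    static int read_sensor_v2(int sensor_id, unsigned int flags)
    {
        (void)flags;               /* unused in this toy implementation */
        return 42 + sensor_id;     /* pretend measurement */
    }

    /* Thin glue exported for old callers: it only forwards the call,
     * filling in a default for the argument the old prototype lacks. */
    static int read_sensor_v1(int sensor_id)
    {
        return read_sensor_v2(sensor_id, 0u);
    }

    int main(void)
    {
        /* An old, unrecompiled caller keeps "speaking" through the glue. */
        printf("%d\n", read_sensor_v1(3));
        return 0;
    }

The point is that the wrapper carries no logic of its own; it only translates between the two slangs.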

"Also, once versioning is done, it also means you can provide function overloading (in versions, not parameters, this time), (...) It also means that you can employ temporary fixes as you go along, which is definitely powerful.


Not sure I understand this concept, can you give more details ?
"
Nah, it's simple stuff. For the moment, think of a design loop: maybe, to implement something important, you found that your own design had a circular dependency. To implement A, you had to first implement B, which requires A. Then what you can do is implement proto-A and proto-B and get it over with. The mechanism can take over from there, really.

Or, if you found yourself in a temporary crisis: something important crashed in the middle of your computing, and your debugging options are in peril. Then you may find yourself implementing temporary fixes in your codebase that you intend to remove and reconstruct later (something you do just to restore order as quickly as possible, so that you can still get some rest). Something like the BKL (Big Kernel Lock) Linux had.

The feature of versioning can definitely be added to an RPC mechanism: at the time when prototypes are broadcast to the kernel, the client and server processes only have to also broadcast a version number. From that point on, it works like function overloading: the kernel investigates whether a compatible version of what the client is looking for is available.

If all you have is a version number, then it is really troublesome trying to keep the details intact. Having a complete spec sheet makes interoperability easier. With a version number, you can guarantee that the functionality is provided in a later version, but you cannot guarantee that the functionality works exactly as prescribed before. It also means that you absolutely have to provide said functionality in future versions of the library code -- you cannot do a turnabout and remove it. With a spec sheet, you can guarantee that the client code can still run, as long as it does not use the removed functionality.
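
For concreteness, a rough C sketch of what such a broadcast could carry -- every structure and field name here is hypothetical, not the actual interface under discussion -- pairing a full prototype string with a Breaking.Compatible version number:

    #include <stdio.h>

    /* Hypothetical descriptor sent to the kernel when a server starts up. */
    struct rpc_descriptor {
        const char *prototype;   /* full signature, e.g. "int read_sensor(int id)" */
        int version_breaking;    /* Breaking part of Breaking.Compatible */
        int version_compatible;  /* Compatible part */
    };

    /* Stand-in for the real system call that would hand this to the kernel. */
    static void broadcast(const struct rpc_descriptor *d)
    {
        printf("registered %s (v%d.%d)\n",
               d->prototype, d->version_breaking, d->version_compatible);
    }

    int main(void)
    {
        const struct rpc_descriptor services[] = {
            { "int read_sensor(int id)",           2, 1 },
            { "void log_message(const char *msg)", 1, 0 },
        };
        for (size_t i = 0; i < sizeof services / sizeof services[0]; i++)
            broadcast(&services[i]);
        return 0;
    }

Whether the prototype string alone counts as a "spec sheet" is exactly the question being debated here.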

Reply Parent Score: 2

RE[15]: Comment by Kaj-de-Vos
by Neolander on Tue 31st May 2011 07:44 in reply to "RE[14]: Comment by Kaj-de-Vos"
Neolander Member since:
2010-03-08

You may not be able to make the old code suddenly be new, but without recompiling, you can make the old code speak in the new slang. The parser can just export glue code (as thin as possible, hopefully).

Okay, so it is at the same point as my approach. More generally, I have had the impression for some time that in this topic we are talking about very similar things, with a very similar set of advantages and drawbacks, even though we don't realize it yet.

Nah, it's simple stuff. For the moment, think of a design loop: maybe, to implement something important, you found that your own design had a circular dependency. To implement A, you had to first implement B, which requires A. Then what you can do is implement proto-A and proto-B and get it over with. The mechanism can take over from there, really.

*laughs* Do you want to know the ugliest hack ever usable to do this in my "RPC" system? Have the server process broadcast a prototype that is associated with a NULL function pointer. Any attempt to run this prototype during the design phase would crash the server, but if you make sure that the functionality gets implemented...

More seriously, common development practices like using placeholder implementations of the "myfunc() { return 0; }" kind can also be envisioned. As usual, the trick is to always remember to implement the functionality in the end.
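
A toy C illustration of those two flavours of placeholder (names invented; a real server would register through a system call rather than a local table): one slot holds a trivial stub, the other holds only the prototype with a NULL pointer.

    #include <stdio.h>
    #include <stddef.h>

    typedef int (*rpc_fn)(int);

    /* Placeholder of the "myfunc() { return 0; }" kind. */
    static int myfunc_stub(int arg) { (void)arg; return 0; }

    struct rpc_slot {
        const char *prototype;
        rpc_fn      impl;        /* NULL means "declared but not implemented yet" */
    };

    static struct rpc_slot slots[] = {
        { "int proto_a(int x)", myfunc_stub },  /* temporary stub */
        { "int proto_b(int x)", NULL },         /* the ugly hack: prototype only */
    };

    static int call_slot(size_t i, int arg)
    {
        if (slots[i].impl == NULL) {
            fprintf(stderr, "%s not implemented yet\n", slots[i].prototype);
            return -1;           /* a real server would likely just crash here */
        }
        return slots[i].impl(arg);
    }

    int main(void)
    {
        printf("%d\n", call_slot(0, 5));   /* stub: prints 0 */
        printf("%d\n", call_slot(1, 5));   /* missing: error path, prints -1 */
        return 0;
    }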

If all you have is a version number, then it is really troublesome trying to keep the details intact. Having a complete spec sheet makes interoperability easier. With a version number, you can guarantee that the functionality is provided in a later version, but you cannot guarantee that the functionality works exactly as prescribed before. It also means that you absolutely have to provide said functionality in future versions of the library code -- you cannot do a turnabout and remove it. With a spec sheet, you can guarantee that the client code can still run, as long as it does not use the removed functionality.

I broadcast a version number ALONG WITH a prototype; doesn't the whole thing qualify as a spec sheet?

Besides, removing functionality can be done. Here's how (a small sketch of the kernel-side check follows the two scenarios):

BEFORE:
-The server process provides functionality A and B
-A client process shows up and, during initialization, asks the kernel for access to functionality A of the server process
-The kernel checks that the server process indeed broadcasts functionality A, and tells the client that everything is okay
-The client can later make calls to A

AFTER:
-The server process now only provides functionality A
-A client process shows up and, during initialization, asks the kernel for access to functionality A of the server process
-The kernel checks that the server process indeed broadcasts functionality A, and tells the client that everything is okay
-The client can later make calls to A
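
Here is the promised sketch of that kernel-side check, in C (names invented for illustration): the kernel only verifies at connection time that the requested functionality is still in the server's broadcast list, so removing B is harmless to a client that only ever asked for A.

    #include <stdio.h>
    #include <string.h>

    /* The server's broadcast list, before and after B is removed. */
    static const char *before[] = { "A", "B" };
    static const char *after[]  = { "A" };

    /* Kernel-side check performed when the client initializes. */
    static int can_connect(const char *wanted, const char **list, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(list[i], wanted) == 0)
                return 1;
        return 0;
    }

    int main(void)
    {
        printf("before, A: %d\n", can_connect("A", before, 2));  /* 1 */
        printf("after,  A: %d\n", can_connect("A", after, 1));   /* 1: still fine */
        printf("after,  B: %d\n", can_connect("B", after, 1));   /* 0: only B users break */
        return 0;
    }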

About interoperability between versions, I thought about using semantic version numbers of the Breaking.Compatible form.
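
One possible reading of such a Breaking.Compatible check, sketched in C (the exact rule isn't spelled out here, so treat this as an assumption): the Breaking parts must be equal, and the server must offer at least the Compatible level the client was built against.

    #include <stdio.h>

    struct semver { int breaking; int compatible; };

    /* Hypothetical compatibility rule for Breaking.Compatible numbers:
     * equal Breaking parts, and the server's Compatible part must be at
     * least what the client expects. */
    static int is_compatible(struct semver server, struct semver client)
    {
        return server.breaking == client.breaking &&
               server.compatible >= client.compatible;
    }

    int main(void)
    {
        struct semver srv = { 2, 3 };
        printf("%d\n", is_compatible(srv, (struct semver){ 2, 1 }));  /* 1 */
        printf("%d\n", is_compatible(srv, (struct semver){ 2, 4 }));  /* 0 */
        printf("%d\n", is_compatible(srv, (struct semver){ 1, 0 }));  /* 0 */
        return 0;
    }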

Edited 2011-05-31 07:47 UTC

Reply Parent Score: 1

RE[16]: Comment by Kaj-de-Vos
by Alfman on Tue 31st May 2011 18:30 in reply to "RE[15]: Comment by Kaj-de-Vos"
Alfman Member since:
2011-01-28

Neolander,

"About interoperability between versions, I thought about using semantic version numbers of the Breaking.Compatible form."

Is your OS going to behave differently based on the exposed version numbers? If so, I think it'd be wise to manage versioning internally, since programmers are bound to screw it up doing it manually.

I'm queasy about the use of versioned models like this, though. It could be irrational, but it reminds me of ActiveX hell.

In ActiveX, if two developers tried to update one component, then the component was permanently diverged (at the binary level). If an application was compiled against one of the divergent branches, it would not be compatible with the other divergent branch.

Personally, I'm leaning towards an "if the prototypes match, then the link should succeed" approach.
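
That approach could be as simple as a textual comparison of the broadcast prototypes, as in this rough C sketch (hypothetical signatures): no version number is consulted at all, and the link succeeds exactly when the two signatures are identical.

    #include <stdio.h>
    #include <string.h>

    /* Link rule: succeed exactly when the client's expected prototype and
     * the server's exported prototype are textually identical. */
    static int link_ok(const char *client_proto, const char *server_proto)
    {
        return strcmp(client_proto, server_proto) == 0;
    }

    int main(void)
    {
        printf("%d\n", link_ok("int read_sensor(int id)",
                               "int read_sensor(int id)"));                  /* 1 */
        printf("%d\n", link_ok("int read_sensor(int id)",
                               "int read_sensor(int id, unsigned flags)"));  /* 0 */
        return 0;
    }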

Reply Parent Score: 2

RE[16]: Comment by Kaj-de-Vos
by xiaokj on Tue 31st May 2011 23:05 in reply to "RE[15]: Comment by Kaj-de-Vos"
xiaokj Member since:
2005-06-30

More generally, I have had the impression for some time that in this topic we are talking about very similar things, with a very similar set of advantages and drawbacks, even though we don't realize it yet.

*laughs*

Same here. |0|
(in-joke, somebody read that as lol)

I broadcast a version number ALONG WITH a prototype; doesn't the whole thing qualify as a spec sheet?

Still too crude, since you can have minor behavioural changes. Mathematica is one such example -- with each update, it tweaks some commands a little, and even though the parameters are exactly the same, the functionality may be changed so drastically that automated updating is impossible.

Version numbers do little, especially if you jump a gap of a few versions, depending upon the scale of the problem (determined mainly by the coder, really).

I am really more interested in compatible breakage -- for example, a previously provided functionality A is now replaced by B and C, whereby most cases go to B and some go to C under certain conditions. If automation can still be of use, I do not see why the original code needs to be recompiled -- the slight performance hit should be okay for most. Even after a few more rounds, I see no problem. It really should be the translator's job, to me (the translator will kill me, hehe).
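
A sketch of what such translator-generated glue might look like in C (the functions and the dispatch condition are purely illustrative): old callers keep calling A, and the glue routes the common case to B and the special case to C.

    #include <stdio.h>

    /* New interface: A was split into B (common case) and C (special case). */
    static int do_b(int x) { return x * 2; }
    static int do_c(int x) { return -x; }

    /* Translator-generated glue: old callers still see A; the glue picks
     * B or C based on a condition that, here, is purely illustrative. */
    static int do_a(int x)
    {
        if (x < 0)
            return do_c(x);      /* the "some cases go to C" branch */
        return do_b(x);          /* the common case goes to B */
    }

    int main(void)
    {
        printf("%d %d\n", do_a(5), do_a(-5));   /* 10 5 */
        return 0;
    }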

Come to think of it, this is really an abuse of translation. Some things just cannot be handled that way. For example, the old C format-string handling was changed drastically because of massive security holes, to the point where one of the tokens was recognised as completely dangerous and is no longer even allowed (let alone supported)! Most new implementations will just "politely segfault" if you try to use it. (I'm talking about %n, the one that writes the number of bytes output so far to a memory address.) I don't know how the translator should handle this: should it barf (as is currently done), or should it silently drop the message? Or something in between? This is a huge thing to judge, because of the myriad implications.

Sigh.

Reply Parent Score: 2