Linked by Hadrien Grasland on Sat 5th Feb 2011 10:59 UTC
So you have taken the test and you think you are ready to get started with OS development? At this point, many OS-deving hobbyists are tempted to go looking for a simple step-by-step tutorial that would guide them through making a binary boot, doing some text I/O, and other "simple" stuff. The implicit plan is more or less as follows: any time they think of something that would, in their opinion, be cool to implement, they'll implement it. Gradually, feature after feature, their OS would supposedly build up, slowly becoming superior to anything out there. This is, in my opinion, not the best way to get somewhere (if getting somewhere is your goal). In this article, I'll try to explain why, and what I think you should be doing at this stage instead.
RE[4]: Not always rational
by Morin on Mon 7th Feb 2011 08:58 UTC in reply to "RE[3]: Not always rational"
Morin
Member since:
2005-12-31

"1. Code could be written in a type-safe language under a VM such as Java or Mono. The calls for IPC could be implemented by exchanging data pointers between VMs sharing a common heap or memory space, without changing CPU rings."


I used to consider this a plausible approach, too. However, any shared-memory approach will make the RAM a bottleneck. It would also enforce a single shared RAM by definition.
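(To make the quoted idea concrete before picking it apart: here is a minimal sketch of shared-heap "IPC", with two Java threads standing in for VM-isolated processes and a BlockingQueue as the channel. All names here are made up for illustration; the point is that passing a reference to an immutable object is zero-copy and needs no ring switch.)

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: "IPC" between two managed processes sharing one heap.
// Passing a reference to an immutable message is zero-copy; the type system
// (not CPU rings) guarantees the receiver cannot mutate the sender's state.
final class Message {
    final String payload;          // immutable, so sharing a reference is safe
    Message(String payload) { this.payload = payload; }
}

public class SharedHeapIpc {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Message> channel = new ArrayBlockingQueue<>(16);

        Thread server = new Thread(() -> {
            try {
                Message m = channel.take();   // receives a pointer, not a copy
                System.out.println("server got: " + m.payload);
            } catch (InterruptedException ignored) {}
        });
        server.start();

        channel.put(new Message("hello"));    // no ring switch, no serialization
        server.join();
    }
}
```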

This made me consider isolated processes and message passing again, with shared RAM to boost performance but avoiding excessive IPC wherever possible. One of the concepts I think is useful for that is uploading (bytecode) scripts into server processes. This avoids needless IPC round-trips, and it even lets server processes handle events like keypresses in client-supplied scripts instead of IPC-ing to the client, making the system more responsive.
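(A minimal sketch of what "uploading a script into a server process" might look like, with plain Java lambdas standing in for verified bytecode and a hypothetical ScriptHost class as the server; none of these names come from a real system:)

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: a server process that lets clients upload small
// "scripts" (here plain Java lambdas standing in for verified bytecode).
// The server runs them locally on each event and only performs a real
// IPC round-trip when a script explicitly asks for one.
public class ScriptHost {
    // A script maps an event to an optional message for the client;
    // null means "handled locally, no IPC needed".
    private final List<Function<String, String>> scripts = new ArrayList<>();

    public void upload(Function<String, String> script) {
        scripts.add(script);                  // a real system would verify first
    }

    public void onEvent(String event) {
        for (Function<String, String> script : scripts) {
            String reply = script.apply(event);
            if (reply != null) {
                System.out.println("IPC to client: " + reply);
            }
        }
    }

    public static void main(String[] args) {
        ScriptHost host = new ScriptHost();
        // Client-supplied logic runs server-side: TAB is handled locally,
        // only ENTER triggers an actual round-trip.
        host.upload(e -> e.equals("ENTER") ? "input submitted" : null);
        host.onEvent("TAB");    // handled in the server, no IPC
        host.onEvent("ENTER");  // one IPC message
    }
}
```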

The idea isn't new, though. SQL does this with complex expressions and stored procedures. X11 and OpenGL do this with display lists. Web sites do this with JavaScript. Windows 7 does it to a certain extent with retained-mode drawing in WPF. There just doesn't seem to be an OS that does it everywhere, presumably using some kind of configurable bytecode interpreter to enable client-script support in server processes in a generalized way.

Example: a GUI server process would know about a client's widget tree and have client scripts installed, like: "on key press: ignore if the key is (...). On TAB, cycle the GUI focus. On ESC, close the window (window reference). On ENTER, run input validation (validation constraints) and send the client process an IPC message if successful. (...)"

There you have a lot of highly responsive, application-specific code running in the server process and sending the client an IPC message only when absolutely needed, while still being "safe" because it is interpreted and every action can be checked.
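(A tiny data-driven variant of that example: here the uploaded "script" is just a table of key-to-action rules that the server interprets, so every action can be checked before it runs. The KeyScriptInterpreter class and Action names are made up for illustration:)

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: the uploaded "script" is pure data (key -> action),
// interpreted by the GUI server, so each action can be validated up front.
public class KeyScriptInterpreter {
    enum Action { IGNORE, CYCLE_FOCUS, CLOSE_WINDOW, VALIDATE_AND_NOTIFY }

    private final Map<String, Action> rules = new LinkedHashMap<>();

    void install(String key, Action action) {
        // The server can reject actions a client isn't allowed to request.
        rules.put(key, action);
    }

    void onKeyPress(String key) {
        Action action = rules.getOrDefault(key, Action.IGNORE);
        switch (action) {
            case CYCLE_FOCUS:         System.out.println("focus -> next widget"); break;
            case CLOSE_WINDOW:        System.out.println("window closed");        break;
            case VALIDATE_AND_NOTIFY: System.out.println("validated; IPC sent");  break;
            default:                  /* handled locally, nothing to do */        break;
        }
    }

    public static void main(String[] args) {
        KeyScriptInterpreter gui = new KeyScriptInterpreter();
        gui.install("TAB",   Action.CYCLE_FOCUS);
        gui.install("ESC",   Action.CLOSE_WINDOW);
        gui.install("ENTER", Action.VALIDATE_AND_NOTIFY);
        gui.onKeyPress("TAB");    // no IPC round-trip
        gui.onKeyPress("ENTER");  // the only message sent to the client
    }
}
```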

"2. Segmentation has been declared a legacy feature in favor of flat memory models, but hypothetically memory segmentation could provide isolation among microkernel modules while eliminating the need for expensive IPC."


That would be a more elegant way of achieving what can already be done with paging. On 64-bit CPUs the discussion becomes moot anyway: those can emulate segments by using subranges of the address space, and virtual address space is so abundant that you can afford to. The only thing you lose is implicit bounds checking, but even without it a process still cannot access memory locations outside what it could access anyway.
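(A rough illustration of the subrange idea, with ByteBuffer slices standing in for address-space subranges; unlike hardware segments the bounds check here is explicit (the slice's limit), which is exactly the trade-off described above:)

```java
import java.nio.ByteBuffer;

// Hypothetical sketch: emulating "segments" as subranges of one flat space.
// Each module gets a slice; it cannot address bytes outside that slice,
// which mimics segment isolation without segment registers.
public class SegmentEmulation {
    public static void main(String[] args) {
        ByteBuffer flatSpace = ByteBuffer.allocate(1 << 20); // one flat "address space"

        // Carve out two non-overlapping 4 KiB "segments".
        flatSpace.position(0);
        flatSpace.limit(4096);
        ByteBuffer segmentA = flatSpace.slice();

        flatSpace.limit(8192);
        flatSpace.position(4096);
        ByteBuffer segmentB = flatSpace.slice();

        segmentA.putInt(0, 42);          // fine: offset 0 is within segment A
        System.out.println(segmentA.getInt(0));

        try {
            segmentB.putInt(8000, 1);    // outside segment B's 4 KiB bounds
        } catch (IndexOutOfBoundsException e) {
            System.out.println("bounds check caught: " + e);
        }
    }
}
```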

"3. User-mode CPU protections may not be necessary if the compiler can generate binary modules which are inherently isolated even though running in the same memory space."


If used for "real" programs, this argument is the same as using a JVM or .NET runtime.

On the other hand, if you allow interpreted as well as compiled programs, and run them in the context of a server process, you get my scripting approach.

Reply Parent Score: 2

RE[5]: Not always rational
by Alfman on Mon 7th Feb 2011 16:17 in reply to "RE[4]: Not always rational"
Alfman Member since:
2011-01-28

Morin,

"I used to consider this a plausible approach, too. However, any shared-memory approach will make the RAM a bottleneck. It would also enforce a single shared RAM by definition."

That's a fair criticism - the shared RAM and cache-coherency model used by x86 systems is fundamentally unscalable. However, considering that shared memory is the only form of IPC possible on multicore x86 processors, we can't really view it as a weakness of the OS.

"This made me consider isolated processes and message passing again, with shared RAM to boost performance but avoiding excessive IPC whenever possible. One of the concepts I think is useful for that is uploading (bytecode) scripts into server processes."

I like that idea a lot, especially because it could be used across computers on a network without any shared memory.

Further still, if we had a language capability that could extract and submit the logic surrounding web service calls, instead of submitting the calls individually, that would be a killer feature of these "bytecodes".
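(Something like that could look as follows. The submitScript() function and lookupPrice() service are hypothetical stand-ins; the point is that the branching and looping around the calls travel to the server as one unit, replacing N network round-trips with one:)

```java
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: instead of issuing several dependent web-service calls
// (several network round-trips), the client submits the surrounding logic as
// one "script" which the server executes locally against its own data.
public class BatchedCalls {
    // Stand-in for the remote service's data.
    static int lookupPrice(String item) { return item.length() * 10; }

    // Pretend transport: a real system would compile this function to
    // portable bytecode, verify it, and run it inside the server process.
    static <R> R submitScript(Function<Void, R> script) {
        return script.apply(null);   // "executes remotely", one round-trip
    }

    public static void main(String[] args) {
        List<String> cart = List.of("apple", "bread", "milk");

        // All the per-item calls and the branching happen server-side.
        int total = submitScript(ignored -> {
            int sum = 0;
            for (String item : cart) {
                int price = lookupPrice(item);
                if (price < 100) sum += price;   // logic travels with the calls
            }
            return sum;
        });
        System.out.println("total computed in one round-trip: " + total);
    }
}
```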

"That would be a more elegant way to do the same as could be done with paging. On 64-bit CPUs the discussion becomes moot anyway."

See my other post as to why this isn't so if we're not using a VM for isolation, but your conclusion is correct.

Reply Parent Score: 1

RE[6]: Not always rational
by Morin on Tue 8th Feb 2011 17:12 in reply to "RE[5]: Not always rational"
Morin Member since:
2005-12-31

"That's a fair criticism - the shared RAM and cache-coherency model used by x86 systems is fundamentally unscalable."


I was referring to the shared RAM and coherency model used by Java specifically. That one is a lot better than what x86 does, but it still makes the RAM a bottleneck. For example, a (non-nested) "monitorexit" instruction (end of a non-nested "synchronized" code block) forces all pending writes to be committed to RAM before continuing.
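(In Java terms, the cost shows up at the end of every synchronized block. A small example of the construct being discussed; the publish-on-exit behavior is mandated by the Java memory model rather than visible in the code itself:)

```java
// The end of each synchronized block compiles to a "monitorexit", and the
// Java memory model requires all writes made inside the block to be visible
// to the next thread that acquires the lock -- in practice, pending stores
// must be published before other cores may observe the unlock.
public class MonitorExitDemo {
    private static final Object lock = new Object();
    private static int sharedCounter = 0;

    static void increment() {
        synchronized (lock) {       // monitorenter
            sharedCounter++;        // write may sit in a store buffer...
        }                           // monitorexit: ...but must now be published
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) increment(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(sharedCounter);  // always 2000: no lost updates
    }
}
```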

"However, considering that shared memory is the only form of IPC possible on multicore x86 processors, we can't really view it as a weakness of the OS."


If you limit yourself to single-chip, multi-core x86 systems, then yes. That's a pretty harsh restriction, though: there *are* multi-chip x86 systems (e.g. high-end workstations), there *are* ARM systems (much of the embedded stuff, as well as netbooks), and there *are* systems with more than one RAM (e.g. clusters, and I'd expect single boxes that technically contain clusters to be not far off).

Reply Parent Score: 2