Linked by Hadrien Grasland on Sat 5th Feb 2011 10:59 UTC
So you have taken the test and you think you are ready to get started with OS development? At this point, many OS-deving hobbyists are tempted to go looking for a simple step-by-step tutorial that will guide them through making a binary boot, doing some text I/O, and other "simple" stuff. The implicit plan is more or less as follows: any time they think of something that would, in their opinion, be cool to implement, they'll implement it. Gradually, feature after feature, their OS would supposedly build up, slowly becoming superior to anything out there. This is, in my opinion, not the best way to get somewhere (if getting somewhere is your goal). In this article, I'll try to explain why, and what I think you should be doing at this stage instead.
RE[8]: Not always rational
by Valhalla on Tue 8th Feb 2011 07:04 UTC in reply to "RE[7]: Not always rational"


"the generated code would always be for the current processor. Some JVMs go as far as to optimize code paths on the fly as the system gets used."

Yes, this is often stated as a pro of JIT-compiled code, but since it is JIT (just-in-time), the optimizations that can actually be performed within an acceptable timeframe are VERY limited.
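And for what it's worth, targeting the current processor isn't exclusive to JIT either: ahead-of-time compiled C can pick an optimized code path at runtime. A rough sketch using GCC's __builtin_cpu_supports builtin (the function names and the AVX2 choice are just mine for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* Plain fallback: sum an array of 32-bit integers. */
    static uint64_t sum_generic(const uint32_t *a, size_t n)
    {
        uint64_t s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Same loop, but compiled with AVX2 enabled for this one
     * function (GCC/Clang extension); the compiler can vectorize it. */
    __attribute__((target("avx2")))
    static uint64_t sum_avx2(const uint32_t *a, size_t n)
    {
        uint64_t s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Resolve once at first call, then dispatch directly. */
    uint64_t sum_u32(const uint32_t *a, size_t n)
    {
        static uint64_t (*impl)(const uint32_t *, size_t);
        if (!impl) {
            __builtin_cpu_init();
            impl = __builtin_cpu_supports("avx2") ? sum_avx2
                                                  : sum_generic;
        }
        return impl(a, n);
    }

So the "compiled for the current processor" advantage can be had without paying the JIT's compile-at-runtime cost.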

"'C' is only a language; there is absolutely nothing about it that is inherently faster than Ada or Lisp (for instance). It's like saying Assembly is faster than C; that's not true either. We need to compare the compilers rather than the languages."

Well, assembly allows more control than C, so given two expert programmers, the assembly programmer will be able to produce at least as good and often better code than the C programmer, since some of the control is 'lost in translation' when programming in C as opposed to assembly. Obviously this gets worse as we move to even higher-level languages, where the flexibility offered by low-level code is traded for generalized solutions that work across a large set of problems but are much less optimized for each of them.
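A small (made-up) example of the control I mean: C has no way to ask for a specific instruction, so if the compiler doesn't spot the pattern, the assembly programmer simply wins:

    #include <stdint.h>

    /* Portable C (Kernighan's method): the compiler may or may not
     * recognize this as a population count and emit one instruction. */
    static unsigned popcount_c(uint64_t x)
    {
        unsigned n = 0;
        while (x) {
            x &= x - 1;   /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    /* x86-64 inline assembly (GCC syntax): the programmer dictates
     * the exact instruction, something the C language cannot express. */
    static unsigned popcount_asm(uint64_t x)
    {
        uint64_t r;
        __asm__ ("popcnt %1, %0" : "=r" (r) : "r" (x));
        return (unsigned) r;
    }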

"GNU C generates sub-par code compared with some other C compilers, and yet we still use it for Linux."

Which compilers would those be? The Intel compiler?

"I don't understand this criticism, doesn't the kernel need to do these things regardless?"

While a GC and manual memory management have roughly the same cost for the actual allocating and freeing of memory (well, almost: a GC running in a VM has to ask the host OS for more memory should it run out of heap space, which is VERY costly, and in order to reduce memory fragmentation it often compacts the heap, which means MOVING memory around, again VERY costly, though hopefully less so than a heap resize), a GC adds the overhead of deciding IF/WHEN memory can be reclaimed, which is itself a costly process.
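To make the IF/WHEN point concrete, here's a toy sketch (types and names invented by me): with manual management the reclamation decision was made by the programmer and costs nothing at runtime, while a tracing GC has to walk the live object graph before it can free anything:

    #include <stdbool.h>
    #include <stdlib.h>

    /* Manual management: lifetime is known statically, no scanning. */
    void manual_example(void)
    {
        char *buf = malloc(4096);
        if (!buf)
            return;
        /* ... use buf ... */
        free(buf);   /* the IF/WHEN decision is already encoded here */
    }

    /* Tracing GC: every live object must be visited before anything
     * can be reclaimed; this is the overhead I'm talking about. */
    struct obj {
        bool marked;
        struct obj *children[2];   /* toy object graph */
    };

    static void mark(struct obj *o)
    {
        if (!o || o->marked)
            return;
        o->marked = true;              /* reachable, must be kept */
        for (int i = 0; i < 2; i++)
            mark(o->children[i]);
    }

    /* A collection starts by marking from every root; whatever is
     * left unmarked afterwards is garbage (sweep phase not shown). */
    void gc_mark_roots(struct obj **roots, size_t nroots)
    {
        for (size_t i = 0; i < nroots; i++)
            mark(roots[i]);
    }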

I've been looking forward to seeing a managed-code OS happen, because I am very interested in how it would perform. My experience tells me that it will be very slow, and that programs running on it will be even slower. Last I perused the source code of a managed-code OS, it was filled with unsafe code; part of it was there to access hardware registers, but a lot of it was also there for speed, a luxury a program RUNNING on that OS will NOT have.
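For reference, the kind of hardware-register access that forces those unsafe blocks is a one-liner in C; a sketch (the device and its address are made up):

    #include <stdint.h>

    /* Memory-mapped transmit register of an imaginary UART. The
     * address is invented; 'volatile' forces every access to really
     * touch the device instead of being optimized away. */
    #define UART_TX ((volatile uint32_t *) 0x10000000u)

    static void uart_putc(char c)
    {
        *UART_TX = (uint32_t) c;   /* raw pointer store: the part a
                                      managed language must mark
                                      'unsafe' */
    }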

Hopefully we will someday have a managed-code OS capable of more than printing 'hello world' to a terminal, which might give us a good performance comparison, but personally I'm sceptical. I think Microsoft sent Singularity to academia for a reason.

Score: 2

RE[9]: Not always rational
by Alfman on Tue 8th Feb 2011 07:52 UTC in reply to "RE[8]: Not always rational"

The problem is that an efficient microkernel implementation needs a lot of optimization planning up front, which, if you've read the previous article in the series, you know is a very unpopular notion.

Therefore, most programmers will start writing the OS the easiest way they know how, which more often than not means modeling it after existing kernels. Unfortunately, this often results in new operating systems sharing the same inefficiencies as the old ones.

I was trying to highlight some areas of improvement, but of course people are highly resistant to any changes.

Score: 1