Linked by Hadrien Grasland on Sat 5th Feb 2011 10:59 UTC
OSNews, Generic OSes So you have taken the test and you think you are ready to get started with OS development? At this point, many OS-deving hobbyists are tempted to go looking for a simple step-by-step tutorial that would guide them into making a binary boot, doing some text I/O, and other "simple" stuff. The implicit plan is more or less as follows: any time they think of something which in their opinion would be cool to implement, they'll implement it. Gradually, feature after feature, their OS would supposedly build up, slowly becoming superior to anything out there. This is, in my opinion, not the best way to get somewhere (if getting somewhere is your goal). In this article, I'll try to explain why, and what I think you should be doing at this stage instead.
Obvious and mundane
by rom508 on Sat 5th Feb 2011 12:09 UTC
rom508
Member since:
2007-04-20

I appreciate the author took time to write this article, but I'm sorry to say this article is obvious and mundane. There is really nothing interesting or insightful, just a list of motivational steps.

From reading the article I just get the feeling the author does not know much about operating system design, but he tries to look clever and gives some advice to would-be operating system developers from a lame user's perspective.

Reply Score: 0

RE: Obvious and mundane
by Kroc on Sat 5th Feb 2011 12:20 UTC in reply to "Obvious and mundane"
Kroc Member since:
2005-11-10

You'd be surprised how many people don't stop to consider the obvious. Nobody gets taught to do that in school. Thinking of the obvious is not an obvious thing to many.

Reply Score: 4

RE[2]: Obvious and mundane
by Neolander on Sat 5th Feb 2011 13:13 UTC in reply to "RE: Obvious and mundane"
Neolander Member since:
2010-03-08

Exactly. I've got a nice book about website usability where, if you only read the author's twelve principles on the subject, you'd think she's just stating the obvious and wonder why you bothered buying the book at all.

Then, if you take the time to read the rest of the book, you discover many, many examples of famous websites which don't follow these "obvious" rules.

Afterwards, you can consider that all website designers are idiots. Or admit that even when something sounds obvious, it's not necessarily so.

This, plus some time spent on OSdev's forums, is why I felt it was a good idea to include this part in my tutorial's plan. The "throw random features at a raw educational kernel and hope it sticks" attitude is much more prevalent than one would spontaneously think.

If an OS is meant to be used, considering it from the user's point of view first is a truly vital step, because it helps guide further design decisions and keeps you from making something which tries to do everything at once and ends up sucking in every area.

Edited 2011-02-05 13:30 UTC

Reply Score: 1

RE: Obvious and mundane
by demetrioussharpe on Mon 7th Feb 2011 20:15 UTC in reply to "Obvious and mundane"
demetrioussharpe Member since:
2009-01-09

"I appreciate the author took time to write this article, but I'm sorry to say this article is obvious and mundane. There is really nothing interesting or insightful, just a list of motivational steps.

From reading the article I just get the feeling the author does not know much about operating systems design, but he tries to look clever and gives some advice to would be operating system developers from a lame user's prospective."


Consider your perspective now in comparison to your perspective when you first started with computers. With that in mind, consider the perspective of someone who's never worked on creating an OS before & compare that to the perspective of someone who has not only worked on one, but has also released one. That's quite a large gap in perspectives between the new guys & the guys who've been there and done that, correct? I'm sure that you'll agree that hindsight is 20/20 & there are many things that are obvious now, but once weren't.

Reply Score: 1

Crap
by spaskie on Sat 5th Feb 2011 13:30 UTC
RE: Crap
by Neolander on Sat 5th Feb 2011 13:34 UTC in reply to "Crap"
Neolander Member since:
2010-03-08

So, when are you reading the rest of the article?

Reply Score: 3

RE: Crap
by fran on Sat 5th Feb 2011 15:11 UTC in reply to "Crap"
fran Member since:
2010-08-06

The more comments like this I read, the more cynical I become about the human race.

Reply Score: 2

RE: Crap
by Soulbender on Sat 5th Feb 2011 21:59 UTC in reply to "Crap"
Soulbender Member since:
2005-08-18

So where is your awesome, mind-blowing non-Unix OS?

Reply Score: 2

RE: Crap
by demetrioussharpe on Mon 7th Feb 2011 20:20 UTC in reply to "Crap"
demetrioussharpe Member since:
2009-01-09

"their OS would supposedly build up, slowly getting superior to anything out there. This is, in my opinion, not the best way to get somewhere (if getting somewhere is your goal)"

"It is the ONLY way of getting somewhere, and the sole motivation of any OS dev hobbyist getting anywhere.

The author is a moron who tries to make us write yet another "unix like" system.

There's a ton of ways to be superior to anything out there, and everyone in this business needs the audacity to believe it."


Could you possibly be more wrong? That's exactly what the author's trying to avoid. Did you even read the whole article or are you just spouting nonsense?

Reply Score: 1

wow
by fran on Sat 5th Feb 2011 15:00 UTC
fran
Member since:
2010-08-06

Your target audience should be world of warcraft;-)
The distro would jump to the number three spot in no time

On a more serious note: Microsoft's monopoly on the DX graphics engine. One of the last deal-breakers for other OS adoption?

Reply Score: 2

RE: wow
by Neolander on Sat 5th Feb 2011 15:26 UTC in reply to "wow"
Neolander Member since:
2010-03-08

It's important, but I think the biggest problem for now is that most computers are sold bundled with Windows, to the point where for the average Joe, PC = Windows. This is the main reason why everyone expects DX games to work everywhere, and gets pissed off when it doesn't happen.

Look at a traditionally multi-OS environment like the mobile space: the life of newcomers is much easier, because people are used to seeing lots of devices with similar HW capabilities but incompatible software. You mostly choose a phone based on hardware and experience of the brand; you don't expect all phones to work in exactly the same way and run the same apps...

Edited 2011-02-05 15:44 UTC

Reply Score: 2

RE: wow
by demetrioussharpe on Mon 7th Feb 2011 20:22 UTC in reply to "wow"
demetrioussharpe Member since:
2009-01-09

"Your target audience should be world of warcraft;-)
The distro would jump to the number three spot in no time

On a more serious note. Microsoft monopoly on DX graphics engine. One of the last deal breakers to other OS adoption?"


You know, DX isn't the only API used for game development. It's probably the most popular on Windows, but none of the other OSes have it, and there are still quite a few games out there that run on more than just Windows.

Food for thought.

Reply Score: 1

Not always rational
by renox on Sat 5th Feb 2011 16:21 UTC
renox
Member since:
2005-07-06

Frankly, when I look at things like Haiku and Wayland, I'm not 100% sure that the decisions behind them are very rational...

For Haiku, not starting from a FreeBSD or Linux kernel is probably a severe case of NIH(*); for Wayland, I still don't understand why they didn't create a major version change of the X protocol instead, but I'm not yet 100% sure that it's only NIH syndrome.

* Apple and Google have shown that you can reuse a kernel core for something totally different in userspace...

Reply Score: 3

RE: Not always rational
by bogomipz on Sat 5th Feb 2011 17:39 UTC in reply to "Not always rational"
bogomipz Member since:
2005-07-11

"For Haiku, not starting from a FreeBSD or Linux kernel, is probably a severe case of NIH(*)"

Haiku did not write their kernel from scratch, they forked NewOS, so it's not a clear cut case of NIH.

I do agree, however, that using Linux or kFreeBSD would mean less work and better hardware support. Video drivers are an exception, though, because I doubt they would have kept X11. Whether the devs would be happy with the result is another question, and since they didn't choose this road, I suspect they decided it was too much of an architectural compromise.

Before Haiku started to show signs of success, there was a project aiming to recreate the BeOS APIs on top of Linux. This basically was the rational approach you seem to wish for, but it failed: http://blueeyedos.com/

Still, you may be right that it would be saner to start with a widespread kernel.

Reply Score: 3

RE: Not always rational
by Neolander on Sat 5th Feb 2011 18:08 UTC in reply to "Not always rational"
Neolander Member since:
2010-03-08

I don't know what has happened in Haiku's case, but there are rationales behind not using Linux/FreeBSD's kernels, if your design goals don't match theirs.

Examples: if you want something customizable and reliable, you'd probably want a microkernel. If you want something good for highly interactive tasks, there are better options than Linux and the BSDs out there too (just look at the huge number of RTOS projects).

Besides NIH, there's also the "there's so much to fix, it's just better to start over" aspect of things. This doesn't prevent reusing some code either; as an example, Haiku reuses some Linux driver code.

Reply Score: 2

RE[2]: Not always rational
by abstraction on Sat 5th Feb 2011 22:26 UTC in reply to "RE: Not always rational"
abstraction Member since:
2008-11-27

Just because it is a microkernel doesn't mean it is more customizable or reliable. The only difference is that if a driver runs in userland and dies, it might not bring down the entire system. The system is as reliable as the quality of its drivers.

And if by interactive task you mean real-time task, then you shouldn't compare Linux with an RTOS, since their goals are different; Linux until recently couldn't even run real-time tasks.

Edited 2011-02-05 22:33 UTC

Reply Score: 1

RE[3]: Not always rational
by Neolander on Sat 5th Feb 2011 22:43 UTC in reply to "RE[2]: Not always rational"
Neolander Member since:
2010-03-08

"Just because it is a microkernel doesn't mean it is more customizable or reliable. The only difference is that if a driver runs in userland and dies it might not bring down the entire system. The system is as reliable as the quality of it's drivers."

By customizable, I meant that putting a process boundary between things makes sure that they are much more independent from each other than they would be if they were part of the same codebase. The microkernel model enforces modularity by its very nature.

Better reliability is enforced because a much more fine-grained security model can be used, where even drivers only have access to the system capabilities they need (and thus are prevented from doing some sorts of damage when they run amok). As you mention, putting drivers in userspace also allows them to crash freely, without taking the rest of the OS with them.

"And if by interactive task you mean real-time task then you shouldn't compare Linux with an RTOS since the goal of them are different meaning Linux until recently couldn't even run real-time tasks."

To the contrary, that's precisely the point.

Not every OS project should be based on Linux, because it's only good for a limited set of things. If you have RTOSes or desktop reactivity in mind, Linux is a very poor base to start from. It's still good to take a look at its source when coding drivers, though, due to its wide HW support.

Reply Score: 1

RE[4]: Not always rational
by Alfman on Sun 6th Feb 2011 06:08 UTC in reply to "RE[3]: Not always rational"
Alfman Member since:
2011-01-28

"By customizable, I meant that putting a process boundary between things makes sure that they are much more independent from each other than they would be if they were part of the same codebase. The microkernel model enforces modularity by its very nature."


The lack of modularity in linux is a serious problem. We continue to have problems with graphics support in new kernels due to factors which are completely out of the user's hands.

Driver writers blame kernel developers for constantly changing interfaces (the total lack of a kernel ABI whatsoever), meanwhile kernel developers blame driver writers for not releasing source code. The ideological battle, which is rational on both sides, causes end users to suffer.

Even when everyone plays fairly (by releasing source code), there's a great deal which is rejected by linux mainline (let's use AUFS as an example of something that linux users want, but kernel maintainers reject).

This means the driver developers need to either release a binary compiled against every single kernel/distro variant the users might be using, or the end users are forced to compile their own kernel and hope that the mainline is compatible with the patches they want to use.

The situation gets potentially much worse if the user wants to install additional patches.

These problems stem directly from the lack of modularity in linux. Modularization would be an excellent topic to iron out in the initial OS design rather than doing a ginormous macro-kernel.

As much as I despise MS for using DRM to lock open source developers out of the kernel, I have to say they did get the driver modularization aspect right.


In an ideal world, driver interfaces would be standardized such that they'd be portable across operating systems. Not that this is likely to happen; Windows kernel DRM means open source devs are no longer welcome there. And Linux maintainers don't love the idea of defining ABI interfaces, because it enables driver writers to distribute binaries easily without source.


"Not every OS project should be based on Linux, because it's only good for a limited set of things."

Nobody wants another *nix clone which works on less hardware than the original.

Reply Score: 2

RE: Not always rational
by Valhalla on Sun 6th Feb 2011 05:27 UTC in reply to "Not always rational"
Valhalla Member since:
2006-01-24


"For Haiku, not starting from a FreeBSD or Linux kernel, is probably a severe case of NIH(*)"

Well, I think I disagree. Linux/BSD are very capable kernels, but they are not optimized for desktop use (interactivity), and while they certainly could be rewritten for such a purpose, from a programmer's perspective I would rather spend the time necessary for rewriting an existing kernel on making a new one better suited for the task instead. Either way they didn't have to start from scratch, as the previously mentioned NewOS kernel was available (written by an ex-BeOS engineer).

As for the talk of microkernels, I'd like to point out that Haiku is not a microkernel. Hardware drivers will bring down the system if they fail (same goes for networking and the filesystem, which reside in kernel space IIRC). However, it does have a stable-ish driver API, which means updating the kernel won't break existing drivers.

Microkernels offer stability at the expense of performance; RTOSes offer fine-grained precision at the expense of performance. There are lots of places where these characteristics are worth the loss of performance, but the desktop isn't one of them.

Reply Score: 2

RE[2]: Not always rational
by Neolander on Sun 6th Feb 2011 07:46 UTC in reply to "RE: Not always rational"
Neolander Member since:
2010-03-08

"Microkernels offers stability at the expense of performance, rtos'es offers fine grained precision at the expense of performance. There are lots of places where these characteristics are worth the loss performance, but the desktop isn't one of them."

Are you sure that microkernels wouldn't be worth it?

AFAIK, desktop computers have had plenty of power for years (just look at the evolution of games). The problem is just to use that power wisely.

On a KDE 4 Linux desktop, having some disk-intensive task in the background is all it takes to cause intermittent freezes. This, simply put, shouldn't happen. If the computer is powerful enough to interact smoothly with the user when no power-hungry app is running, it should be powerful enough when there's one around. I think what is currently needed is not necessarily raw firepower, but wise and scalable resource use. Given that, I bet that microkernels could provide a smooth desktop experience on something as slow as an Atom.

Edited 2011-02-06 07:48 UTC

Reply Score: 1

RE[3]: Not always rational
by Valhalla on Sun 6th Feb 2011 09:49 UTC in reply to "RE[2]: Not always rational"
Valhalla Member since:
2006-01-24

"On a KDE 4 Linux desktop, having some disk-intensive taks in the background is all it takes to cause intermittent freezes. This, simply put, shouldn't happen."

This has nothing to do with monolithic vs hybrid vs micro. It has to do with the Linux kernel being optimized for throughput rather than responsiveness (as in, not really optimized for the desktop). You can have the exact same optimization for throughput in hybrid and micro kernels. There have been patches around for ages which help alleviate this problem; IIRC the latest '200 lines blabla' patch is supposed to be included in the kernel.

What a microkernel offers is separation between its components, so that if one fails the rest of the system will continue to function. This results in components having to pass messages around to intercommunicate, which is much slower than accessing memory directly, hence the loss of performance.

Now, the likelihood that, say, my keyboard driver would malfunction while I'm using my computer is very low; in fact it has never happened during my entire lifetime. If it did happen, though, it would bring my system down with it, and it would be a bummer for sure. But I still don't feel the need for a microkernel just so that IF this happened I'd be able to save whatever I was working on, particularly since it comes with a definite performance penalty.

However, if I were sitting in the space shuttle and the same unlikely thing happened, a malfunctioning keyboard driver taking down my computer would be a disaster. So in this case, yes, I'd certainly be willing to sacrifice performance so that in the unlikely event this happened the whole system would not shut down.


"Given that, I bet that microkernels could provide a smooth desktop experience on something as slow as an Atom."

Obviously it depends on what you are doing, but why would anyone bother? The current operating systems are not so unstable that microkernels are needed for mainstream usage. I'd rather take well-written drivers and the performance, thankyouverymuch.

Reply Score: 2

RE[4]: Not always rational
by Neolander on Sun 6th Feb 2011 09:57 UTC in reply to "RE[3]: Not always rational"
Neolander Member since:
2010-03-08

Well, it's a choice. Myself, I'd rather take something guaranteed to be rock-solid and not to crash simply due to some buggy NVidia/AMD driver. I do only a few things which require performance on my computer (compilation and image editing), and none of these would be much affected by a kernel->microkernel switch.

(Sorry for the misunderstanding; I meant that the overhead of a microkernel is nothing compared to the amount of power "lost" due to inefficient scheduling.)

Edited 2011-02-06 09:58 UTC

Reply Score: 1

RE[3]: Not always rational
by Alfman on Sun 6th Feb 2011 11:17 UTC in reply to "RE[2]: Not always rational"
Alfman Member since:
2011-01-28

"Microkernels offers stability at the expense of performance...There are lots of places where these characteristics are worth the loss performance, but the desktop isn't one of them."

I'd say stability problems such as corruption and overflow stem more from the choice of highly "unsafe" languages than from the choice of micro-kernel vs. macro-kernel.


Your argument in favor of a macrokernel in order to achieve performance is somewhat dependent on the assumption that a microkernel cannot perform well. However, I think there are various ways to boost the performance of a microkernel design.


The microkernel does not have to imply expensive IPC. If modules are linked together at run or compile time, they could run together within a privileged CPU ring to boost performance.

As for stability and module isolation, there are a few things we can try:

1. Code could be written in a type-safe language under a VM such as Java or Mono. The calls for IPC could be implemented by exchanging data pointers between VMs sharing a common heap or memory space, without changing CPU rings. Individual modules would never step on each other despite existing in the same memory space.

Not only is this approach plausible, I think it's realistic given the current performance and transparency of JIT compilers.

2. Segmentation has been declared a legacy feature in favor of flat memory models, but hypothetically memory segmentation could provide isolation among microkernel modules while eliminating the need for expensive IPC.

3. User-mode CPU protections may not be necessary if the compiler can generate binary modules which are inherently isolated even though running in the same memory space. Therefore, the compiler rather than the CPU would be enforcing module isolation.




As much as people hated my opinion on up-front performance analysis, I'd say this is an instance where the module inter-communication interface should be performance-tested up front. Obviously, as more of the kernel modules get built, this will be very difficult to change later on when we notice an efficiency problem.

Reply Score: 1

RE[4]: Not always rational
by Neolander on Sun 6th Feb 2011 14:11 UTC in reply to "RE[3]: Not always rational"
Neolander Member since:
2010-03-08

"I'd say the stability problems such as corruption and overflow stem more from the choice of highly "unsafe" languages rather than choice of micro-kernel/macro-kernel."

I've spent hours arguing on this precise subject with moondevil, I won't start over. In short, I'll believe that it's possible to write a decent desktop OS in a "safe" language when I see it.

In the meantime, microkernels offer the advantage of greatly reducing the impact of failures and exploits, when there are some. A buggy process can only have the impact it's authorized to have.

"You're argument in favor of a macrokernel in order to achieve performance is somewhat dependent on the assumption that a microkernel cannot perform well. However I think there are various things to boast the performance of a microkernel design."

That's not what I said. My take on the subject is that microkernels obviously cannot have the same performance as a macrokernel (some optimizations are only possible when kernel components share a common address space), but that they can have sufficient performance for desktop use.

"The microkernel does not have to imply expensive IPC. If modules are linked together at run or compile time, they could run together within a privileged cpu ring to boast performance."

Then you do not have a microkernel, but a modular monolithic kernel. Putting components in separate processes is afaik a defining characteristic of microkernels.

"As for stability and module isolation, there are a few things we can try:

1. Code could be written in a type safe language under a VM such as Java or Mono. The calls for IPC could be implemented by exchanging data pointers between VMs sharing a common heap or memory space without changing CPU rings. Individual models would never step on each other despite existing in the same memory space.

Not only is this approach plausible, I think it's realistic given the current performance and transparency of JIT compilers."

As said before, I'll believe it when I see it.

Note that microkernels are not incompatible with shared memory regions between processes, though. It's one of the niceties which paging permits. In fact, I believe they are the key to fast IPC.

"2. Segmentation has been declared a legacy feature in favor of flat memory models, but hypothetically memory segmentation could provide isolation among microkernel modules while eliminating the need for expensive IPC."

Segmentation is disabled in AMD64 and non-existent in most non-x86 architectures, so I'm not sure it has much of a future. Besides... how would you want to use it? If you prevent each process from peeking into other processes' address spaces, then they need IPC to communicate with each other. But perhaps you had something more subtle in mind?

"3. User mode CPU protections may not be necessary if the compiler can generate binary modules which are inherently isolated even though running in the same memory space. Therefor, the compiler rather than the CPU would be enforcing module isolation."

But then hand-crafted machine code, and code from compilers other than yours, could bypass system security... unless you would forbid those?

"As much as people hated my opinion on up front performance analysis, I'd say this is an instance where the module inter-communications interface should be performance tested up front. Obviously, as more of the kernel modules get built, this will be very difficult to change later on when we notice an efficiency problem."

It is possible to stress-test inter-module/process communication after implementing it and before implementing modules, or even while implementing it. The problem is determining what counts as good enough performance at this early stage. Better to make the code as flexible as possible.

Edited 2011-02-06 14:15 UTC

Reply Score: 1

RE[5]: Not always rational
by Alfman on Sun 6th Feb 2011 23:10 UTC in reply to "RE[4]: Not always rational"
Alfman Member since:
2011-01-28

"That's not what I said."

Sorry, I responded to your post quoting something which was from someone else.

"I've spent hours arguing on this precise subject with moondevil, I won't start over."

Fair enough, but that's not really adequate grounds to dismiss my argument; there isn't even a citation.

"As said before, I'll believe it when I see it."

It doesn't exist yet, therefore you don't believe it could exist?

Neolander, I appreciate your view, but I cannot let you get away with that type of reasoning.

All of today's (major) kernels predate the advent of efficient VMs. With some original out-of-the-box thinking, plus the benefit of the technological progress in the field over the past 15 years, a type-safe efficient kernel is not far-fetched at all.

Per usual, the main impediments are political and financial rather than technological.


"Segmentation is disabled in AMD64 and non-existent in most non-x86 architectures, so I'm not sure it has much of a future."

That's exactly what I meant when I called it a legacy feature. However, conceivably the feature might not have been dropped if we had popular microkernels around using it.


"But then hand-crafted machine code and code from other compilers than yours could bypass system security... Unless you would forbid those ?"

You need to either trust that your binaries are not malicious, or validate them for compliance somehow. If we're running malicious kernel modules which are nevertheless "in spec", then there's not much any kernel can do. In any case, this is not a reason to dismiss a microkernel.

"It is possible to stress-test inter-module/process communication after implementing it and before implementing modules, or even while implementing it."

I am glad we agree here.

Reply Score: 1

RE[6]: Not always rational
by Neolander on Mon 7th Feb 2011 08:25 UTC in reply to "RE[5]: Not always rational"
Neolander Member since:
2010-03-08

"It doesn't exist yet, therefor you don't believe it could exist?

Neolander, I appreciate your view, but I cannot let you get away with that type of reasoning.

All of today's (major) kernels predate the advent of efficient VMs. With some original out of the box thinking, plus the benefit of the technological progress in the field in the past 15 years, a type safe efficient kernel is not far-fetched at all.

Per usual, the main impediments are political and financial rather than technological."

Okay, let's explain my view in more details.

First, let's talk about performance. I've been seeing claims for some time now that interpreted, VM-based languages can replace C/C++ everywhere. That they are now good enough. I've seen papers, stats, and theoretical arguments for this to be true. Yet when I run a Java app, that's not what I see. As of today, I've seen exactly one complex Java application which had almost no performance problems on a modern computer: the Revenge of the Titans game. Flash is another good example of a popular interpreted language which eats CPU (and now GPU) time for no good reason. It's also fairly easy to reach the limits of Python's performance; I've done it myself with some very simple programs. In short, these languages are good for light tasks, but still not for heavy work, in my experience.

So considering all of that, what I believe now is that either the implementation of current interpreters sucks terribly, or they only offer the performance they claim to offer when developers use specific programming practices that play to the interpreter's strengths.

If it's the interpreter implementation, then we have a problem. Java has been here for more than 15 years, yet it would still not have reached maturity? Maybe what this means is that although theoretically feasible, "good" VMs are too complex to actually be implemented in practice.

If it's about devs having to adopt specific coding practices in order to make code which ran perfectly well in C/C++ run reasonably well in Java/Flash/Python... then I find it quite ironic, for something which is supposed to make developers' lives easier. Let's see if the "safe" language clan will one day manage to make everyone adopt these coding practices; I'll believe it when I see it.

Apart from the performance side of things, in our specific case (coding a kernel in a "safe" language that we'll now call X), there's another aspect to look at. I'm highly skeptical that these languages could work well at the OS level AND bring their usual benefits at the same time.

If we only code a minimal VM implementation, ditching all the complex features, what we end up with is a subset of X that is effectively equivalent to C, albeit maybe with slightly worse performance. Code a GC implementation, and your interpreter now has to do memory management. Code threads, and it has to manage multitasking and schedule things. Code pointer checks, and all X code which needs lots of pointers sees its performance sink. In short, if you get something close to the desktop language X experience, and get all of the usual X benefits in terms of safety, your interpreter ends up becoming a (bloated) C/C++ monolithic kernel in its own right.

Then there are hybrid solutions, of course. If you want some challenge and want to reduce the amount of C/C++ code to a minimal level, you can code memory management in a subset of X that does not have GC yet. You can code pointer-heavy code in a subset of X where pointer checks are disabled. And so on. But except for proving a point, I don't see a major benefit in doing this instead of assuming that said code is dirty by its very nature and just coding it in C/C++ right away.

"That's exactly what I meant when I called it a legacy feature. However, conceivably the feature might not have been dropped if we had popular microkernels around using it."

Yes, but you did not answer my question. Why would they have used segmentation instead of flat segments + paging? What could segmentation have permitted that paging cannot?

"You need to either trust your binaries are not malicious, or validate them for compliance somehow. If we're running malicious kernel modules which are never the less "in spec", then there's not much any kernel can do. In any case, this is not a reason to dismiss a microkernel."

Again, I do not dismiss microkernels. But I do think that forcing a specific, "safe" compiler into the hands of kernel module devs is a bad idea.

Edited 2011-02-07 08:41 UTC

Reply Score: 1

RE[7]: Not always rational
by Alfman on Mon 7th Feb 2011 15:51 UTC in reply to "RE[6]: Not always rational"
Alfman Member since:
2011-01-28

"First, let's talk about performance. I've been seeing claims that interpreted, VM-based languages, can replace C/C++ everywhere for some times...."

Firstly, I agree about not using interpreted languages in the kernel, so let's get that out of the picture right away.

Secondly, to my knowledge, the performance problems with Java stem from poor libraries rather than poor code generation. For instance, Java graphics were designed to be easily portable rather than high-performing, therefore they're very poorly integrated with the lower-level drivers. Would you agree this is probably where Java gets its reputation for bad performance?

Thirdly, many people run generic binaries which aren't tuned for the system they're using. Using JIT technology (and the generated machine code could be cached too, to save compilation time), the emitted code would always be tuned for the current processor. Some JVMs go as far as to optimize code paths on the fly as the system gets used.

I do have some issues with the Java language, but I don't suppose those are relevant here.



"I'm highly skeptical about the fact that those languages could work well at the OS level AND bring their usual benefits at the same time."

Can you illustrate why a safe language would necessarily be unsuitable for use in the kernel?


"If we only code a minimal VM implementation, ditching all the complex features, what we end up having is a subset of X that is effectively perfectly equivalent to C, albeit maybe with slightly worse performance."

'C' is only a language; there is absolutely nothing about it that is inherently faster than Ada or Lisp (for instance). It's like saying Assembly is faster than C, which isn't true either. We need to compare the compilers rather than the languages.

GNU C generates sub-par code compared with some other C compilers, and yet we still use it for Linux.


"Code only a GC implementation, and your interpreter now has to do memory management. Code threads, and it has to manage multitasking and schedule things."

I don't understand this criticism, doesn't the kernel need to do these things regardless? It's not like you are implementing memory management or multitasking just to support the kernel VM.


"Code pointer checks, and all X code which needs lots of pointers see its performance sink."

This is treading very closely to a full-blown optimization discussion, but the only variables which must be range checked are those whose values are truly unknown within the code path. The compiler can optimize away all range checks on variables whose values are implied by the code path. In principle, even in an unsafe language the programmer would have to range check those variables explicitly (otherwise they've left themselves vulnerable to things like buffer overflows); omitting such checks is a bug, so counting their absence as a performance "advantage" is unfair.


"Why would they have used segmentation instead of flat seg + paging ? What could have segmentation permitted that paging cannot ?"

In principle, paging can accomplish everything selectors did. In practice, though, switching selectors is much faster than adjusting page tables. A compiler could trivially ensure that a kernel module didn't overwrite data from other modules by simply enforcing the selectors except in well-defined IPC calls, thus simultaneously achieving good isolation and IPC performance. Using page tables for isolation would imply that well-defined IPC calls could not communicate directly with other modules without an intermediary helper or mucking with page tables on each call.

Of course, the point is moot today anyway with AMD64.

Reply Score: 1

RE[7]: Not always rational
by lucas_maximus on Wed 9th Feb 2011 14:12 UTC in reply to "RE[6]: Not always rational"
lucas_maximus Member since:
2009-08-18

Swing UI performance is not the same as Java performance, and the former is probably what you are complaining about.

Reply Score: 2

RE[5]: Not always rational
by Kochise on Mon 7th Feb 2011 19:42 UTC in reply to "RE[4]: Not always rational"
Kochise Member since:
2006-03-03

"In short, I'll believe that it's possible to write a decent desktop OS in a "safe" language when I see it."

http://programatica.cs.pdx.edu/House/
http://web.cecs.pdx.edu/~kennyg/house/

Now bend down and praise the Lords...

Kochise

Reply Score: 2

RE[6]: Not always rational
by Neolander on Mon 7th Feb 2011 20:11 UTC in reply to "RE[5]: Not always rational"
Neolander Member since:
2010-03-08


This thing has an awful tendency to have one of my CPU cores run amok, I wonder if it uses timer interrupts properly... But indeed, I must admit that apart from that it does work reasonably well.

Now bend down and praise the Lords...

*bends down indeed, impressed by how far people have gone with what looks like a language only suitable for mad mathematicians when browsing code snippets*

However, I wonder: if Haskell is a "safe" language, how do they manage to create a pointer targeting a specific memory region, which is required e.g. for VESA VBE? Or to trigger BIOS interrupts?

Edited 2011-02-07 20:21 UTC

Reply Score: 1

RE[7]: Not always rational
by Alfman on Mon 7th Feb 2011 22:45 UTC in reply to "RE[6]: Not always rational"
Alfman Member since:
2011-01-28

"However, I wonder : if haskell is a 'safe' language, how do they manage to create a pointer targeting a specific memory region, which is required e.g. for VESA VBE ? Or to trigger BIOS interrupts ?"

A safe compiler can assure us that all pointers are in bounds before being dereferenced. Most of these bounds checks would be "free" code since the values are implied within the code paths.

The compiler might track two pointer variable attributes: SAFE & UNSAFE.

A function could explicitly ask for validated pointers.
This way, the compiler knows that any pointer it gets is already safe to use without a bounds check.

void Func(attribute(SAFE) char *x) {
// x is guaranteed to be in bounds; dereference it safely
}

for (char *p = (char*)0xA0000; p < (char*)0xB0000; p++) {
Func(p); // no extra bounds check needed: the loop bounds imply the pointer stays in valid range
}

char *p;
scanf("%p", (void**)&p); // yucky dangerous pointer from user input
Func(p); // here the compiler is forced to insert a bounds check, since Func requests a SAFE pointer

This is not a performance penalty, because code which does not validate such a pointer is a bug waiting to be exploited anyway.


The SAFE/UNSAFEness of pointers can be tracked under the hood and need not complicate the language. Although if we wanted to, we could certainly make it explicit.


Developers of safe languages have been doing this type of safe code analysis for a long time. It really works.
Unfortunately for OS developers, most safe languages are interpreted rather than compiled, but JVM and CLR show that it is possible.

Reply Score: 1

RE[7]: Not always rational
by Kochise on Tue 8th Feb 2011 17:22 UTC in reply to "RE[6]: Not always rational"
Kochise Member since:
2006-03-03

This is about as impressive as my own attempt to do something similar, yet using an existing kernel code base (Minix 3) and a functional language (Erlang).

The biggest problem so far is that Minix 3 is only "self-compiling": you can only develop and hack Minix FROM Minix (no cross development possible, due to hackish code targeting their own C compiler, ACK, no pun intended), plus Erlang's heavy dependence on GCC-specific extensions.

C portability has never been so debatable...

On a brighter note, I also seriously considered this thesis project as a programming interface:

http://www.csse.uwa.edu.au/~joel/vfpe/index.html

It is written in Java, and might scale better with Haskell than with Erlang, yet I find some "operations" more intuitive using emacs (plain text editing) than creating "tempo" objects to fit functional laziness :/

My 0.02€ :p

Kochise

EDIT : typo

Edited 2011-02-08 17:23 UTC

Reply Score: 2

RE[6]: Not always rational
by renox on Tue 8th Feb 2011 09:50 UTC in reply to "RE[5]: Not always rational"
renox Member since:
2005-07-06

"In short, I'll believe that it's possible to write a decent desktop OS in a "safe" language when I see it."
http://programatica.cs.pdx.edu/House/
http://web.cecs.pdx.edu/~kennyg/house/

Now bend down and praise the Lords...
Kochise


Well, I for one, consider that a *decent desktop OS* needs to be able to run:
-HW-accelerated games such as Doom 3.
-a fully featured web browser.
-a fully featured office suite (say LibreOffice).

Wake me up when they reach this point...
And then there is also the issue of hardware support...

Reply Score: 2

RE[7]: Not always rational
by Neolander on Tue 8th Feb 2011 10:12 UTC in reply to "RE[6]: Not always rational"
Neolander Member since:
2010-03-08

Well, I for one, consider that a *decent desktop OS* needs to be able to run:
-HW-accelerated games such as Doom 3.

If they can support PCI and VESA, they can support HW-accelerated graphics as well; it's only a matter of writing lots of chipset-specific code, which is a brute-force development task.

-a fully featured web browser.
-a fully featured office suite (say LibreOffice).

Again, porting WebKit/Gecko or an office suite to a new platform is a brute-force development task once some prerequisites (like a libc implementation and a graphics stack) are there. I sure would like to see some complex applications around to see how well they perform, but Kochise's example does show that it's possible to write a simple GUI desktop OS in Haskell.

Wake me up when they reach this point..
And then there is also the issue of hardware support..

Again, that's not the point of such a proof-of-concept OS. They only have to show that it's possible to implement support for any hardware by implementing support for various hardware (which they've done); the rest is only a matter of development time.

Of course, I'd never use this OS as it stands. But it does prove that it's possible to write a desktop OS in this language, which is my original concern.

Edited 2011-02-08 10:15 UTC

Reply Score: 1

RE[4]: Not always rational
by Morin on Mon 7th Feb 2011 08:58 UTC in reply to "RE[3]: Not always rational"
Morin Member since:
2005-12-31

1. Code could be written in a type-safe language under a VM such as Java or Mono. The calls for IPC could be implemented by exchanging data pointers between VMs sharing a common heap or memory space, without changing CPU rings.


I used to consider this a plausible approach, too. However, any shared-memory approach will make the RAM a bottleneck. It would also enforce a single shared RAM by definition.

This made me consider isolated processes and message passing again, with shared RAM to boost performance but avoiding excessive IPC whenever possible. One of the concepts I think is useful for that is uploading (bytecode) scripts into server processes. This avoids needless IPC round-trips, and even allows server processes to handle events like keypresses in client-supplied scripts instead of IPC-ing to the client, thus being more responsive.

The idea isn't new, though. SQL does this with complex expressions and stored procedures. X11 and OpenGL do this with display lists. Web sites do this with Javascript. Windows 7 does it to a certain extent with retained-mode drawing in WPF. There just doesn't seem to be an OS that does it everywhere, presumably using some kind of configurable bytecode interpreter to enable client script support in server processes in a generalized way.

Example: a GUI server process would know about the widget tree of a process and would have client scripts installed like "on key press: ignore if the key is (...). For TAB, cycle the GUI focus. On ESC, close the window (window reference). On ENTER, run input validation (validation constraints), and send the client process an IPC message if successful. (...)"

There you have a lot of highly responsive application-specific code, running in the server process and sending the client an IPC message only if absolutely needed, while still being "safe" due to being interpreted, with every action checked.

2. Segmentation has been declared a legacy feature in favor of flat memory models, but hypothetically memory segmentation could provide isolation among microkernel modules while eliminating the need for expensive IPC.


That would be a more elegant way to do the same thing that can be done with paging. On 64-bit CPUs the discussion becomes moot anyway: those can emulate segments by using subranges of the address space; virtual address space is so abundant that you can afford it. The only thing you don't get with that is implicit bounds checking, but you still can't access memory locations which the process cannot access anyway.

3. User mode CPU protections may not be necessary if the compiler can generate binary modules which are inherently isolated even though running in the same memory space.


If used for "real" programs, this argument is the same as using a JVM or .NET runtime.

On the other hand, if you allow interpreted as well as compiled programs, and run them in the context of a server process, you get my scripting approach.

Reply Score: 2

RE[5]: Not always rational
by Alfman on Mon 7th Feb 2011 16:17 UTC in reply to "RE[4]: Not always rational"
Alfman Member since:
2011-01-28

Morin,

"I used to consider this a plausible approach, too. However, any shared-memory approach will make the RAM a bottleneck. It would also enforce a single shared RAM by definition."

That's a fair criticism: the shared RAM and cache coherency model used by x86 systems is fundamentally unscalable. However, considering that shared memory is the only form of IPC possible between cores on a multicore x86 processor, we can't really view it as a weakness of the OS.

"This made me consider isolated processes and message passing again, with shared RAM to boost performance but avoiding excessive IPC whenever possible. One of the concepts I think is useful for that is uploading (bytecode) scripts into server processes."

I like that idea a lot, especially because it could be used across computers on a network without any shared memory.

Further still, if we had a language capability which could extract and submit the logic surrounding web service calls instead of submitting web service calls individually, that would be a killer feature of these "bytecodes".

"That would be a more elegant way to do the same as could be done with paging. On 64-bit CPUs the discussion becomes moot anyway."

See my other post as to why this isn't so if we're not using a VM for isolation, but your conclusion is correct.

Reply Score: 1

RE[6]: Not always rational
by Morin on Tue 8th Feb 2011 17:12 UTC in reply to "RE[5]: Not always rational"
Morin Member since:
2005-12-31

That's a fair criticism - the shared ram and cache coherency model used by x86 systems is fundamentally unscalable.


I was referring to the shared RAM and coherency model used by Java specifically. That one is a lot better than what x86 does, but it still makes the RAM a bottleneck. For example, a "monitorexit" instruction (the end of a non-nested "synchronized" code block) forces all pending writes to be committed to RAM before continuing.

However, considering that shared memory is the only form of IPC possible on multicore x86 processors, we can't really view it as a weakness of the OS.


If you limit yourself to single-chip, multi-core x86 systems, then yes. That's a pretty harsh restriction though: There *are* multi-chip x86 systems (e.g. high-end workstations), there *are* ARM systems (much of the embedded stuff, as well as netbooks), and there *are* systems with more than one RAM (e.g. clusters, but I'd expect single boxes that technically contain clusters to be "not far from now").

Reply Score: 2

RE[7]: Not always rational
by Alfman on Wed 9th Feb 2011 01:12 UTC in reply to "RE[6]: Not always rational"
Alfman Member since:
2011-01-28

Morin,

"There *are* multi-chip x86 systems (e.g. high-end workstations), there *are* ARM systems (much of the embedded stuff, as well as netbooks), and there *are* systems with more than one RAM..."

Sorry, but I'm not really sure what your post is saying.

Reply Score: 1

RE[2]: Not always rational
by renox on Sun 6th Feb 2011 18:38 UTC in reply to "RE: Not always rational"
renox Member since:
2005-07-06

Well, it depends on your goals: if you want your OS to be used by a lot of people, then the time needed to write/adapt the huge number of drivers required is probably much more than the time needed to adapt the FreeBSD or Linux kernel(*).

Remember that even with Linux, the number one criticism is that there are still not enough good drivers for it
(as seen with the recent discussion about Firefox and HW acceleration)!

For a toy OS or a specialised OS, sure, writing your own kernel or reusing a small one makes sense, but the number of usable HW configurations will probably stay very small.

*For example, Con Kolivas maintains his own scheduler for better 'desktop usage'; there was BeFS for Linux, but it's obsolete (2.4).

Reply Score: 2

RE: Not always rational
by demetrioussharpe on Mon 7th Feb 2011 20:28 UTC in reply to "Not always rational"
demetrioussharpe Member since:
2009-01-09

Frankly, when I look at things like Haiku and Wayland, I'm not 100% sure that the decisions behind them are very rational..

For Haiku, not starting from a FreeBSD or Linux kernel is probably a severe case of NIH(*); for Wayland, I still don't understand why they didn't create a major version change of the X protocol instead, but I'm not yet 100% sure that it's only NIH syndrome.

* Apple and Google have shown that you can reuse a kernel core for something totally different in userspace..


If that's your stance, then you've missed the whole point of the BeOS & why Haiku wants to recreate it. Be's motto was a play on Apple's: don't just THINK different, BE different. There are plenty of 'would-be' OSes using the BSD & Linux kernels. That's not exactly BEing different, now is it? Haiku's kernel is based on an OS started by an ex-Be software engineer (NewOS - http://newos.org/ ). It has its own heritage, & it's a heritage that's just as valid as any other OS's.

Reply Score: 1

RE[2]: Not always rational
by renox on Mon 7th Feb 2011 21:44 UTC in reply to "RE: Not always rational"
renox Member since:
2005-07-06

There are plenty of 'would-be' OSs using the BSD & Linux kernels.


Not really: there are plenty of distributions (which mostly configure the applications), but as for really different OSes using the BSD or Linux kernels, I can only think of MacOS X (not really BSD; it's a Mach kernel) and Android (Linux).

(NewOS - http://newos.org/ ). It's has it's own heritage; & it's a heritage that's just as valid as any other OS's heritage.

Uhm, the webpage doesn't give any specific reason why it would be as good as the *BSD or Linux kernels, which have a lot more drivers.

And frankly, marketing BS like 'BE different' is for fanboys not for developers..

Reply Score: 2

RE[3]: Not always rational
by demetrioussharpe on Tue 8th Feb 2011 16:08 UTC in reply to "RE[2]: Not always rational"
demetrioussharpe Member since:
2009-01-09

Not really, there are plenty of distributions (which configures mostly the applications), for OS really different using the BSD kernels|Linux, I can only think of MacOS X (not really BSD, it's a Mach kernel) and Android (Linux).


More BSD peeks out from under the MacOS X hood than Mach. Mach is less exposed in the parts of the kernel that actually reach out into userland. And though it does have Mach inside it, most people (developers, users, Apple, & even journalists) tally MacOS X in the BSD column. Also, a distribution doesn't constitute a different OS; all Linux distributions are basically the same OS. And why is that? Because the system calls, memory management, & other kernel parts didn't change, & the basic userland framework didn't change. Slapping on new paint & a different package management system isn't enough to call it a new OS.

Uhm, the webpage doesn't give any specific reason why it would be as good as the *BSD or Linux kernels, which have a lot more drivers.

And frankly, marketing BS like 'BE different' is for fanboys not for developers..


Nobody said that NewOS was better; better is relative. The key point is that it's not the same rehashed bs that keeps getting praised simply for the sake of being the same rehashed bs. To be honest, purely from a desktop perspective, Be was more successful on the desktop in the 90's than Linux is now, though Be's OS would never have had the slightest chance in the server world. And just to give you a hint, it's not about who has more drivers; it's about whose drivers are better & which drivers are available. No one cares if NetBSD or Linux are ultra-portable with thousands of drivers if they don't have the drivers for the particular hardware in use at that particular time. Also, on the topic of marketing, there's only one company that's a marketing genius, & that's Microsoft. They've constantly found a way to shove their OSs down most people's throats whether they want them or not. With that in mind, surely you understand that marketing isn't a measure of how good a system is 'technically'. However, great marketing usually means that you'll be around a lot longer regardless of how good or bad your OS is. Fanboys are inevitable regardless of the OS, with or without marketing. So, if you think that 'BE different' is just marketing bs that's for fanboys & not developers, bring your development skills to the table. Let's see you create a better OS than the BeOS from scratch & come up with a better motto.

Reply Score: 1

Almafeta
Member since:
2007-02-22

"First, heavy reliance on tutorials is bad for creativity." - but good for arcane technical bits!

After you've got your very own Hello World OS running, you have the bare minimum platform with which to experiment. Don't be afraid to make lots of these and see how they work; compiling a bad idea is cheap, designing around a bad idea is expensive.

(Maybe this is obvious too? I dunno.)

Reply Score: 2

Thanks!
by scribe on Mon 7th Feb 2011 15:12 UTC
scribe
Member since:
2009-07-14

Just a quick "Thanks!" to Hadrien for the information and needed "prodding" to continue with my own OS! It's good to know that others "suffer" as I do and share the same concerns and constraints. Regardless of other comments, I greatly appreciated the article. Peace! Fritz

Reply Score: 1