Linked by Nicholas Blachford on Thu 15th Jul 2004 20:14 UTC
Editorial This series explores the sort of technologies we could use if we were to build a new platform today. In the first part I described a system with a multi-core multi-threaded CPU, FPGA and Cell processors. In this second part we start looking at the Operating System.
Why reinvent the wheel?
by dr_gonzo on Thu 15th Jul 2004 20:40 UTC

If MS had used an open source OS as the base for Longhorn, they could have cut their development time drastically. This is why Apple will have a stable, mature OS (I'm talking about Tiger here) with lots of 3rd-party support and all the features of Longhorn (and more) out about two years before Longhorn ships.

What's the point in writing a kernel from scratch when there are a few free ones out there already that work fine? What's the point in developing a new printing system when CUPS is already there? There's lots of free stable software out there already for UNIX (and Linux and all the other *nixes out there) so why not use it?

Wouldn't it be better to use an existing OS that's stable and secure and then build cool interesting stuff on top of it like searching, video, sound, nice UI, easy networking etc... I'm not talking about Apple here specifically but they've obviously made really great decisions with OS X.

RE: Why reinvent the wheel?
by Thom Holwerda on Thu 15th Jul 2004 20:45 UTC

Because people want to differentiate, to set themselves apart. If you are an OS enthusiast like me, then after distro #478 you've seen it, y'know, you want something else.

That's why people still create other OSs; it's called innovation. I mean, if we were to follow your idea, we'd all drive the same cars except they'd have a different color.

NOT Unix?
by Anonymous on Thu 15th Jul 2004 20:50 UTC

I'm a little confused by his statement that you should not use Unix (because it uses a monolithic kernel, I guess?), when he then cites Linux and OS X as examples of Unix. I know we like to refer to them as Unix, because they behave like Unix in multiple ways, but there are major architectural differences between each of these OS's. I'm not sure where we draw the line between "Unix" and "Not Unix", and why.

For example, OS X's kernel is actually Mach, which is a microkernel, which is what he's advocating. Why is Haiku the best choice? Didn't BeOS have many Unix-like qualities, too? Didn't it have a bash-like command line (I don't remember that well)? If Mac OS is Unix because it mimics some Unix behaviors, then might we say BeOS is too, or might we say that everything is, except Windows? Or even Windows has borrowed some things from Unix.

I mean, I liked BeOS, and was really hoping it'd succeed, but if you wanted to make the 'ideal OS', might it be better to take the best fully working OS and fix what you don't like about it than to take a non-working OS, make it work, and then fix what you don't like about it?

Summary: You say Haiku is the best candidate for the 'Next Generation OS' because it uses a microkernel. But why is this not-yet-working attempt to reverse engineer BeOS superior to other (working) OS's which have microkernels?

Hurd Day's Night?
by Yuval on Thu 15th Jul 2004 20:51 UTC
ADDITION TO: "NOT Unix?"
by Anonymous on Thu 15th Jul 2004 20:52 UTC

From the author: This wasn't a flame or anything. It's an honest question.

Throw it on L4 or Fiasco.
by Anonymous on Thu 15th Jul 2004 20:54 UTC

I believe you will get more mileage for your money by using parts of OSS. Designing an entirely new OS from the ground up is a cool hobby but could never become mainstream.

RE: Throw it on L4 or Fiasco
by Anonymous on Thu 15th Jul 2004 21:00 UTC

Designing an entirely new OS from the ground up is a cool hobby but could never become mainstream.

Yes, because no one has ever created an OS from scratch....

[remembers all OS' were, at some point, made from scratch]

...I mean...no private individual has ever started an OS from scratch as a hobby, and had it become mainstream. Use Linux instead....

[starts making rapid hand movements to distract you from mention of Linux]

...Linus who?

UNIX?
by Daan on Thu 15th Jul 2004 21:00 UTC

UNIX IS NOT PERFECT!!!

Indeed, it is NOT.

Sorry for putting it in bold-face, but a common misconception here in the comments section seems to be that UNIX is perfect, because it is open source and stable and reliable and and and and and and that therefore every OS that is not based on UNIX can only be crap.

Yet that is not true. You only have to look at the tons of "Is Linux ready for the desktop?" articles to see that it is not perfect. And the argument about open source is no more valid than saying that the Itanium, the PowerPC and the AMD64 should be abandoned because there is more software for x86.

Yet I hear you already: "But Mac OS X? Doesn't it solve many of the problems of other unices?" Yes, it does. But to do this, it hides everything that is reminiscent of UNIX from the end user. So in the end, I think Apple could just as well have started from scratch; it would only have cost more money, and the result wouldn't have been a nice layer covering up the problems of the underlying system.

One more option
by Anonymous on Thu 15th Jul 2004 21:06 UTC

It appears to me that the best candidate for a 'Next Generation OS' would be based on the QNX microkernel. The kernel is quick and has great interrupt handling for drivers. Although it is not open source, the OS also has great potential in its core set of applications. The Photon GUI is a very clean graphical interface, and many open-source development tools and libraries have already been ported over to the OS. It seems to me that with more development on that platform, it would make a wonderful next-gen OS.

RE: Anonymous (IP: ---.adelphia.net)
by Thom Holwerda on Thu 15th Jul 2004 21:09 UTC

I believe you will get more mileage for your money by using parts of OSS. Designing an entirely new OS from the ground up is a cool hobby but could never become mainstream.

And Linus started Linux as what, hmm?

RE: RE: Anonymous (IP: ---.adelphia.net)
by Anonymous on Thu 15th Jul 2004 21:23 UTC

Linus Torvalds did not write an operating system from scratch. He only wrote the kernel (or at least the largest part of the kernel), which comprises only a small part of an entire operating system.

That said, I believe there are a few operating systems which have been written mostly from scratch by a single person. SkyOS and AtheOS come to mind. Could anyone comment on those?

RE: Unix
by Anonymous on Thu 15th Jul 2004 21:25 UTC

'a common misconception here in the comments section seems to be that UNIX is perfect, because it is open source and stable and reliable'

Unix isn't open source. Linux is open source. Linux was designed to behave like Unix under many conditions, but it isn't Unix. Neither is OS X. At least if you want to get technical about it. Both Darwin and Linux share a lot of the same strengths and weaknesses as Unix, but that doesn't make them the same operating system.

Yes, they are often called "Unices" or "Flavors of Unix" or some variation on that theme, but it's an oversimplification. They've all chosen to adhere to certain file-system conventions and certain ways of dealing with devices, and they use similar (or the same) command-line shell. Things like that. And they tend to share things in common, like tools, utilities, and applications, and windowing systems, but that's not the OS.

Under the surface, they aren't all the same thing. And when you get down to it, you can keep a lot of the 'under the surface' and ditch the surface, and make the OS not appear very Unix-y at all. Or, you can do what Apple did (or is it more correct to say NeXT, or someone else?), and take a microkernel, add a surface to make it look/work like Unix (for the benefit of unix-geeks), and add another surface to look/work sort of like the old MacOS (for the benefit of MacOS users).

But don't let BASH fool you. These are all different operating systems.

RE: Thom Holwerda
by Anonymous on Thu 15th Jul 2004 21:27 UTC

Don't forget it was a UNIX clone, and that was a long time ago. Writing a full OS from the ground up is too large a task because it takes so much time before it is functional enough to use (and to make money to support itself). It just is not practical; building a new OS without using at least parts of open source (gcc, toolkits, applications, shell environment, etc.) is a costly task.

The Linux kernel is fine, and using it (or some of it) reduces a great deal of overhead in writing drivers etc. An example of this is L4Linux.

Doing all of this from the ground up is something even Microsoft could not afford to do. Even if they could, it would take them several years.

RE: Throw it on L4 or Fiasco.
by Ignacio on Thu 15th Jul 2004 21:34 UTC

Linus who ??? ;0)

so is Itanium/ EPIC dead?
by Anonymous on Thu 15th Jul 2004 22:13 UTC

Supposedly VLIW and EPIC were as far above RISC as RISC was beyond CISC. Itanium was supposed to embody the VLIW/EPIC architecture, yet AMD came out with amd64.

I noticed that neither article mentions Itanium and VLIW as the CPU of the future.

I would think a compact RISC like SuperH or ARM, with multiple cores, would be the future.

The game consoles do not have to worry as much about backward compatibility.

Why doesn't M$ or Sony or Nintendo implement these futuristic technologies? I know Sony is going multi-Cell.

QNX anyone ?
by Jean-Louis on Thu 15th Jul 2004 22:17 UTC

Nice article Nicholas! .. good reading :-)

Why linux is not the right kernel
by Michael Matloob on Thu 15th Jul 2004 22:18 UTC

I did not read the article- I just skimmed through it, so please excuse any errors I have.

I think that it is a bad idea to use Linux for a next-generation computer because Linux is not next-generation. The author has a point about the macro-kernel design of Linux, but the real problem is that any OS that uses the Linux kernel is going to look a lot like Linux/Unix. GNOME/KDE do, and so does Mac OS X (it does not use the Linux kernel, but has a Unix-like base). Sure, you might say something about TiVo, but you wouldn't know it ran Linux unless you found out from someone else. If you spend all your time in one program and put a heavy cover on it, it will not look like Linux, but what is the point of doing that?

A next-generation OS needs new ideas, but I have not seen many new ideas in Linux (not the kernel, the whole package, so that might be better said GNU/Linux, but I am saying Anything/Linux). If anyone reading this knows of an operating system package like GNU that runs on Linux and does not "taste" like Linux, please tell me. I don't think it is easy to find one. We need to write OSs from scratch, with no old ideas, if we want one on a next-generation computer.

one problem ...
by houp on Thu 15th Jul 2004 22:44 UTC

I see one big problem with most of the comments here. Many people are asking why Linux "is bad" - "why not use Linux?" Well, I think you should all re-read this article, which is talking about a next-generation OS and NOT about a today's-generation OS. There is no perfect OS, and there is no perfect solution. Linux might be good for now, and it may be good for the next 5 or 10 years, but the world is changing. We need new OSes, we need new hardware, we need progress - new, fresh ideas. And it's quite simple to understand: when we have NEW hardware and new types of CPU architectures, we also need new models of OS kernels. I don't know if exokernels will be the future, and I don't know if they are the best solution. But I believe we need to think about new types of kernels... and simply try them. Writing a fast, secure and stable exokernel-based OS is a great task for a hobbyist, for a scientist and even for a commercial company. I personally can't wait to see something really new! And from my point of view, discussions like "why not use Linux?" are quite pointless.

microkernels
by jonas.kirilla on Thu 15th Jul 2004 22:46 UTC

BeOS hasn't been a true microkernel in a very long time, if ever, and neither has Linux. (Minix is, though.) Both Linux and the BeOS kernel can load modules, like drivers, running in kernel space - as opposed to being completely monolithic, where everything would have to be compiled into one single kernel image. Some honest-to-God microkernels are QNX/Neutrino, GNU Hurd, L4, Mach, and many more. Mac OS X does not, however, have a microkernel. Mac OS X, or rather its open-source underpinnings, named Darwin, is a combination of Mach and a BSD kernel in the same memory space - almost the opposite of a microkernel. It may sound like L4Linux, but it's not. At least that's not the impression I've got. Feel free to correct me! ;)

That's old, not innovative.
by Anonymous on Thu 15th Jul 2004 23:15 UTC

You're rehashing the same old Andrew Tanenbaum and Richard Stallman tripe that got us GNU Hurd and other even more hopeless projects. Microkernels were politically correct back in 1990.

An async API is neat, but not neat enough to justify breaking away from UNIX.

If you want something cool, implement a persistent system. When I yank out the power cord and then put it back in, I should get back to where I was, losing no more than 5 seconds' worth of work.

Make filenames optional, or eliminate them entirely. Let me look up files by attributes that a human would remember: "open the project that I was working on 2 weeks before last Christmas" or "open the pictures that I sent to Jason"

Why Windows?
by Anonymous on Thu 15th Jul 2004 23:35 UTC
Interesting ideas and problems:
by jbett on Thu 15th Jul 2004 23:40 UTC

When you start trying to implement revolutionary changes in an operating system you have to look at more than just the foundation of the system; you have to look at the theories and concepts at work in that foundation. Computers operate in a binary state, on or off; our languages are built on the fundamentals of being 0 or 1, true or false - all binary. To have a truly smart and innovative system we have to think of ways to move away from a two-state system. Maybe I'm talking out of my ass, but it's just an idea.

Please someone enlighten and correct me.

RE: Why windows
by Michael Matloob on Thu 15th Jul 2004 23:51 UTC

I have heard enough of that. This article was about creating a next-generation system. Windows (Longhorn) is much more likely to do that than a Unix-based system. I have seen NOTHING 'next-generation' from a Unix-like system since the 1970s.

Also, if you were a 'power user' of Windows the way you probably are of a Unix-based system, none of that would happen to you. Do you even use Windows, or have you experienced any of that yourself? I think that sure, Windows is buggy and insecure, but many of the problems people have with Windows are caused by their own stupidity.

- Adware is caused by people blindly clicking the 'yes' button in Internet Explorer's 'do you want to run this program?' dialog.

- Popups can be avoided by switching to a better web browser.

- Viruses and worms are downloaded from emails.

- If you even know what a trojan is, you can just as easily have one on a unix-like system.

If you knew what you were doing you wouldn't let your computer store any of that information.

Use Windows sensibly, and then say those things.

(By the way, I am using Mac OS 9.)

scriptable OS
by asko on Thu 15th Jul 2004 23:55 UTC


"Writing an OS from scratch these days is a very serious business, it is a multi-year undertaking and requires many skilled individuals to do it. And that's just the OS, then you need drivers, a compiler, developer documentation and applications."


Well, yes, but aren't you now "building another Unix", which you were opposing just a few lines before? Imho, many of these issues become irrelevant (or at least carry less weight) with the NewOS approach.

Reason? An OS based on Ruby or other such technology could _build upon_ any given flavor of Unix, or another TraditionalOS. Therefore, current compilers will do, current drivers will do, even current OSes will do. ;) People just won't see them and - more importantly - applications won't see them either. That's why they become (nearly) weightless.

Java tries to do this, .NET tries to do it. The real, totally scriptable OS environment remains to be seen. But I think we will see it.

-ak

Linux is a kernel.
by Anonymous on Fri 16th Jul 2004 00:10 UTC

Linus didn't write my desktop operating system; he started and wrote parts of the kernel. If every person that has contributed to what is "Linux" (the distro) today were paid for all their time, the cost would be in the ballpark of hundreds of billions of dollars.

Blah blah blah
by Steve on Fri 16th Jul 2004 00:14 UTC

But even the best next generation OS will fail without some sort of goal.

What exactly is the goal of such a next generation OS?

What will it do?

Why build it?

You've taken too much of a bottom-up design approach. Maybe it's coming in the next article, but what's the point of such an OS without a design for the applications that will run on it? After all, nobody computes with an OS; they use applications. The OS allows applications to share the hardware.

That's all the OS is, a set of APIs and code that allows the hardware to be shared (hopefully by multiple programs at the same time).

i like
by eric on Fri 16th Jul 2004 00:19 UTC

good article.

And good point by poster about persistent system.

That would be very useful. Heads up on that.


The Linus vs. Tanenbaum flamewar article:
http://www.dina.dk/~abraham/Linus_vs_Tanenbaum.html

Micro vs. exo:
http://www.cbbrowne.com/info/microkernel.html

Slideshow, traditional versus exokernel. Good stuff.
http://amsterdam.lcs.mit.edu/exo/exo-slides/sld003.htm

Exokernel:
http://www.pdos.lcs.mit.edu/exo.html

Re: dr_gonzo (IP: ---.bas502.dsl.esat.net)
by drsmithy on Fri 16th Jul 2004 00:23 UTC

If MS had used an open source OS to base their Longhorn OS on, they could have cut their development time drastically.

They already have a perfectly good OS to "base" Longhorn on - Windows NT.

This is why Apple will have a stable, mature OS (I'm talking about Tiger here) with lots of 3rd party support and all the features of Longhorn (and more) out about 2 years before Longhorn will be out.

It remains to be seen if it will have all the features.

The difference between Apple with OS X and Microsoft with Longhorn is that Microsoft had already done the hard yards of writing a new OS with NT and could use that - Apple had nothing (after trying several times), so they bought NeXT.

The language is the OS
by Lunar on Fri 16th Jul 2004 00:28 UTC

Language, API and operating system are tightly coupled together, as Nicholas states.

Thinking about an FPGA, DSP, and multiple-core processors means that you actually need languages to use them easily. Specific processors need specific languages; otherwise it is hard to get good performance. Ideally, these specific languages could be embedded in the main operating system language. Think about projects like Lava, a language for designing FPGAs embedded in Haskell ( http://www.xilinx.com/labs/lava/ ), Bossa for designing schedulers (recently featured on OSNews), or Devil for writing drivers ( http://compose.labri.fr/prototypes/devil/ ).

Old languages like C (or C++) are not well-suited for concurrent programming, for reasons of both efficiency and safety. Some features (locking, thread management, memory management, communication) need to be deeply wired into the language to be easy to use. Java and C# are quite successful on these points.
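To make the "wired-in" point concrete (sketched in Python rather than the Java/C# the comment names): when the language's standard runtime supplies a synchronized queue, two threads can cooperate without a single hand-rolled mutex.

```python
import queue
import threading

# The Queue does all the locking internally - the kind of built-in
# thread support the comment asks for, versus manual mutexes in C.
jobs = queue.Queue()
results = []

def worker():
    while True:
        item = jobs.get()
        if item is None:  # sentinel: no more work
            break
        results.append(item * item)

t = threading.Thread(target=worker)
t.start()
for n in (1, 2, 3):
    jobs.put(n)
jobs.put(None)
t.join()
print(results)  # [1, 4, 9]
```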

Memory management is something that should be done by computers, because computers are meant to track small details over time, not humans. To be called "next-gen", an operating system would need OS-wide garbage-collected memory management.
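As a tiny illustration of what collector-managed memory buys (Python here; the `Buffer` class is a made-up stand-in, and `weakref` just lets us watch the object disappear without keeping it alive):

```python
import gc
import weakref

class Buffer:
    """Stand-in for some resource a program allocates."""
    pass

buf = Buffer()
watcher = weakref.ref(buf)  # observe the object, don't keep it alive
del buf                     # drop the only strong reference
gc.collect()                # CPython's refcounting has already freed it
print(watcher())            # None - the runtime reclaimed the memory
```

No explicit free, no leak, no dangling pointer: the runtime tracks the small details, which is exactly the division of labor the comment argues for.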

Haiku is missing all these features: it's written in C/C++ without using templates or exceptions. Even though the API was carefully designed, it was designed with these languages in mind: just look at how much the BeBook could be shortened without programmer-managed memory.

Great article!
by Duncan Domingue on Fri 16th Jul 2004 00:41 UTC

I read the article and the author makes some really good points, especially about context switching, asynchronous communication between programs and parts of the kernel, and OSes in general. The idea of using messages between parts of the kernel and programs is really good, and is basically what VTech used in their OS for the Helio, their one and only PDA. Abstraction of the hardware and software by using an exokernel is also a really good idea, kind of a hardware implementation of Java. All in all, some really good ideas that should be understood before bashing. And Unix is definitely not perfect, but rewriting it, or any other variant of Unix, to suit the purpose would be an arduous task that would take longer than writing a new kernel.

RE: scriptable os
by Duncan Domingue on Fri 16th Jul 2004 00:44 UTC

Wasn't there a LISP OS some time ago? The whole OS was written in LISP! I feel sorry for the first developer!

Tojan
by Pliskin. on Fri 16th Jul 2004 01:14 UTC

I'm going to put my money on Tojan being a... at that, I won't go there.

Anyone else want to take it?

Moving along, it seems more like what he wants IS Mac OS X.4/5 and so on.

Pliskin.

well
by hobgoblin on Fri 16th Jul 2004 01:23 UTC

The article was interesting, at least the kernel bit, but I want to comment on some of the comments. Every time you do something revolutionary with an OS you end up with a whole lot of bugs and problems that have never been encountered before. The reason for Linux's and the BSDs' success is that they are evolutionary, just like we humans have evolved from monkeys over years of refinement. A hammer is a hammer on the basis that its shape is perfect to get the job done. A *nix is not perfect, but it's one of the oldest OSes out there still in development in one form or another, and that alone gives points to the underlying ability of the system.

Windows survives on the basis that you can't take existing know-how and go to any other OS but a new version of Windows. But a person can go from Red Hat Linux to OpenBSD to Solaris to Mac OS X and still find much of the same stuff (with OS X being the oddest child out, I guess; never really looked into it)...

You can implement a microkernel or even an exokernel and still have a Unix. Nothing changes. Basically, the OS described in the article makes me think of an extreme version of Hurd.

To have an app access hardware in Unix you have it locate the device under /dev, where you will find everything the kernel supports. What you want to do is add a part to the microkernel that works with the drivers to fill that file tree, rather than have a static one as some of the old Unixes have (Linux is in fact getting this too; hell, when it's done you will see icons pop up on your desktop when you insert a USB storage device, just like in OS X - yes, I'm talking about Project Utopia). And you can add and remove drivers in Linux without shutting the system down: just have them compiled as modules and modprobe them into place. It's slightly slower than having it in the kernel itself, but I can't say I'm complaining.

Yes, it would be great to have memory protection for drivers so that a memory fault in one didn't total the entire system, but this does not look like it's saving Windows. I have seen that OS bluescreen so many times over a flawed driver that I don't think memory protection alone will help. Basic sanity checking of the drivers' I/O may be even more important.

Microkernel master plan
by QuantumG on Fri 16th Jul 2004 01:23 UTC

Why doesn't someone write a Linux kernel module that implements the L4 API? Then you could migrate the parts of the Linux kernel that you want in user space out of the kernel one at a time, maintaining a working system throughout. Indeed, you could have a system tuned to having just the right amount of components inside and outside the kernel for your desired stability/performance trade-off. I think it's just a case of extremism of ideals.

re: article
by grey on Fri 16th Jul 2004 01:33 UTC

Weird - I've been thinking about a lot of these same concepts (especially exokernels and new OS stuff) for a while. Some meanderings on the subject are in an Aug 2003 blog entry: http://advogato.org/person/grey/ I've written about some of the exokernel stuff elsewhere.

That said, is this just an article of wishful thinking, or are you planning on writing something?

Re: LispOS
by asko on Fri 16th Jul 2004 01:42 UTC


:) Thanks! ...but syntax matters. It does.

That's why C won over Pascal and so on. There are simply too many (((((these))))) in Lisp for me. ;) Guess which one is my favorite!

-ak

RE: That's old, not innovative.
by juggernaut on Fri 16th Jul 2004 02:31 UTC

>Let me look up files by attributes that a human would remember:
>"open the project that I was working on 2 weeks before
>last Christmas" or "open the pictures that I sent to Jason"

Check out Mac OS X 10.4 - that's what Spotlight is supposed to be able to do.

Lisp OS
by zeph on Fri 16th Jul 2004 02:34 UTC

There have been a few, I'm sure (Google for a list, I guess).

I remember a guy working on one sort of based on Scheme, using a design with no memory protection etc.; it was all supposed to use a trusted compiler to enforce which Scheme objects a program could touch.

http://vapour.sourceforge.net/ - it seems DOA though.

re: Liisp OS
by Debman on Fri 16th Jul 2004 02:50 UTC

please tell me that the project was a joke because the name just screams irony ;-)

Starting over vs. Standing on the Shoulders of Giants
by Erik on Fri 16th Jul 2004 02:53 UTC

His arguments for not using Linux are not very strong, and are getting weaker and weaker as time goes by. Right now I'm running a Gentoo system with a patched kernel -- it uses Nick Piggin's patch set. Performance? Very smooth, no jitter or freezing, and this is on a box that runs like a 500MHz PIII (it's a Hush PC 1GHz on a Via chip). You want to differentiate? Patch the kernel, the display, or whatever. But starting completely over makes almost no sense whatsoever. It would only make sense if there were nothing that could easily be patched to get what you want (there is, several options in fact -- if you need a proprietary solution, use BSD and not Linux).

I've seen alternative OS after alternative OS -- they poke around for a while, then suddenly "take off", developmentally, at the point where the developers start to port a lot of existing open-source apps and tools to them. Face it: creating a full, usable, app-rich desktop OS (in my opinion, there are only three modern OSes that qualify: Linux/BSD plus open-source apps, Windows, and Mac OS) is not possible anymore for one person or even a small team. Yeah, yeah, I know about Syllable, AROS, and numerous others. None qualify; none are "usable out of the box" for real work, and never will be. BeOS once was, in its specific niche, but no more -- it's obsolete. Its drivers don't run on modern hardware, its apps and even ports of apps are outdated, and frankly, the other OSes, all three of them, have caught up in usefulness in BeOS's niche. Yes, I'm aware that there are still die-hard Amiga fans plugging away on their boxes, god bless 'em, but that's not the image of the "next-gen OS" we are talking about here.

I really do believe that defining what we want in a desktop OS (low latency and responsiveness, ease of use, apps, eye candy, for example), then taking either of the mature existing open-source OSes (BSD or Linux) and building it out of that, is the best way to go. A lot of work is being done, independently, on each of these features, and huge progress is already being made.

But if you wanted to differentiate, you could grab hold of some existing technologies that are as yet immature (Y-Windows, kdrive and so on), or put an experimental new interface paradigm on it, or whatever, and there you go. Throw out the excess and don't cry over dramatic changes, which are necessary for what you want but break compatibility (although each change like that does have a cost -- compute it in).

Starting over just isn't going to work.

Erik


re:
by arielb on Fri 16th Jul 2004 04:13 UTC

I have to say that I think it is quite sad to have to compile for hours and patch up your OS just to get respectable performance. Sorry, but good performance should be standard right out of the box.

@drsmithy
by Mike Reid on Fri 16th Jul 2004 04:40 UTC

> Microsoft had already done the hard yards of writing a new OS with NT and could use that

Ahem. IIRC, IBM also did a large amount of the shared work (with Microsoft) that would become NT; after they decided to split the joint project, Microsoft extended it into NT and IBM into OS/2. Some of the stuff you laud should be credited to the talented IBM engineers. No, I don't work for IBM, nor ever have, but it wasn't solely Microsoft that came up with the technology they use today.

Next Gen OS
by BFG on Fri 16th Jul 2004 04:43 UTC

DragonFly BSD is taking a pragmatic approach to the future of kernel design. They are migrating a majority of the current kernel functions to userland and making evolutionary changes towards a microkernel-like kernel (including messaging where it matters, MP-safe kernel internals, LWKT, etc.) with a goal of SSI, while still maintaining a high degree of stability and compatibility - including porting in patches and updates from the other BSDs. Couldn't this qualify as a next-gen OS project that still keeps to its UNIX roots?

Using the GPU
by Jon Smirl on Fri 16th Jul 2004 04:44 UTC

A big trend is missing: using the GPU. Within five years we should be able to run the windowing system entirely on the GPU instead of the main CPU. Things like animations and font generation will also be done on the GPU.

We are going to get truly resolution-independent displays where programs don't worry about pixels anymore. This is coupled with font generation by the GPU: with these features you can zoom in on the GPU, and more detailed fonts will be computed on the fly.

no memory protection
by PlatformAgnostic on Fri 16th Jul 2004 04:47 UTC

In a future system, why do you even need a division between kernel and user space? With the increasing acceptance of managed languages, everything could be in kernel space with no security context switches. The only distinction that would remain would be between managed and unmanaged services. Legacy apps could be run in a user-mode, memory-protected space, but they would be the exception. Perhaps the extra overhead of managed execution would be less than the savings from eliminating ring changes.

I read the article and I found it very interesting. Just one comment: what about virtualization? I have been using VMware for quite some time now, and the performance has increased dramatically since I first installed VMware 2.0. I understand the point of this article is to envision the operating system of the future, but I think there is a historical trend being followed. The first computing systems ran applications right on the metal (or the bulb, for that matter); as computer hardware becomes more powerful, the operating environment increases in size. Looking at *nix and Windows-based systems, I can't ignore the fact that applications on these operating systems have so many ties to the underlying OS that in intense compute environments it is impractical to run more than one app per hardware box due to stability issues. Virtualization allows you to consolidate multiple workloads within their own virtual environments without having to worry about Windows .dll hell or *nix dependency conflicts.

Virtualization in enterprise mainframes has been common for a while (system partitioning), and I believe the future of computing will have us running our home machines in some virtual environment hosted somewhere on the net, with ultra-high-bandwidth links to our home, car, office - everywhere.

The PC box sitting by the desk will be gone; the computer and the operating environment will be everywhere - our TV, tablet-type machines, and whatever other form factors make sense.

Sony "Cell-based" PlayStations will be our computers in the living room. It won't matter what the kernel type is; each device will have the kernel architecture that makes the most sense, and the services we access will be provided via virtualization.

Look at the specs for the Xbox 2 - it will have enough power to virtualize an Xbox 1 for backwards game compatibility.
With the awesome hardware of the future, virtualizing any type of OS, service, system, or program will be practical.

Sorry for the rant; I hope that I have not offended anyone.

Thank you for your patience,

RE: Starting Over...
by Michael Matloob on Fri 16th Jul 2004 04:55 UTC

The problem with using Linux is that you are not writing something from scratch. You might ask what the problem with that is, and it is this: Linux has not had many new ideas in the operating-system department. Maybe I have not been following Linux too well, but I don't think anything 'next-generation' has come from Linux. On the other hand, when things are written from scratch (and no copying ideas!) you get something cool. Take Unix: it introduced many good ideas. Or BeOS. If we use things like Linux for backward compatibility, the first thing we do with our OS is write in the POSIX API so we can port the coolest programs - and then we won't have any time for new 'next-generation' ideas.

AROS and Syllable and the others will be usable, just not as early as Linux was. They need some time to work on ideas instead of taking them all from Unix. That does not mean they won't be usable later. It takes time to write programs. Just because Unix had a 30-year head start does not mean the others are not going to catch up; they can, and hopefully they will.

I don't think that Linux is usable out of the box now either. Get a Windows user, and I am sure that even with GNOME 2.x or KDE 3.x they won't think it is usable out of the box.

It is a waste that all the open-source developers are working on the Unix copy and not using their talents on anything new.

LOL, wow, this is out of touch...
by V. Velox on Fri 16th Jul 2004 05:00 UTC

Look at FreeBSD 5x branch and the like for what the future holds...

Why not use binary compatibility or the like instead of shoehorning in a special environment for it?

Dude, unix does not mean Linux. Linux is a unix, but far from the only one.

Unix means a broad family of OSes including the BSDs, Linux, and all the other assorted ones. The front end does not matter, since you can make X look like whatever you want. It just requires a proper CLI, /dev, <distro>/bin, and the like...

BTW, UNIX = AIX or Solaris (depending on the year in question, possibly SCO or HP-UX)

Unix will dominate all and will in the end most likely be some BSD/Linux bastardization. ^_^

He did not start from scratch. It started as a Minix clone.

Hence UNIX and unix...

There is no next generation...
by V. Velox on Fri 16th Jul 2004 05:25 UTC

Simply put there is no next generation of OS that will not be badly designed.

Unix is the end-of-the-line product as far as what an OS needs to take care of; the question to ask is just what you want to do with it. That has never been the job of an OS. The job of the OS is to let you accomplish that, and to do it in the best way possible.

BTW, there is no need to rewrite and throw out the past for the stuff of today. Just change the parts as needed. That is far simpler than redesigning from the ground up every time you have the slightest change. About the only thing I can think of that would justify it is going to ternary or the like.

Hurd ??
by joking on Fri 16th Jul 2004 05:30 UTC

Last I checked there is hardly anything to report from the Hurd mailing list - just a few messages and lots of spam.

--------------------------------
March 6th, 2004

After a long time of not being updated, new CVS snapshots of the Hurd and GNU Mach are uploaded.
July 31st, 2003

The K4 CD images are now available. See the Hurd CD page for further information.
April 30th, 2003
----------------------------------

Wow, so after a year they came out with nothing.

Hurd is dead - face it.

There is no such thing as a waste of talent when it comes to OSS, for the most part.

If you are not getting paid for it, it is not a waste on your part. You are not doing anything for any higher good. You do it because you want it to exist, you want it, something is ticking you off, you want to see something go down in flames, or the like. But it is always because you want it. Anyone who says otherwise is delusional.

BTW, show me a Unix user who thinks Windows is usable out of the box. It goes both ways, and it is pointless to play that lame game... it all depends on what you were taught to use...

POSIX and other APIs
by Best on Fri 16th Jul 2004 05:39 UTC

I see the biggest problem with any new OS project being a lack of drivers and software. The easiest way I can see to alleviate this is to implement the POSIX standard (at least mostly), so that you are at least source compatible with a large base, and you could then code a service that would allow you to use the Linux graphics, sound, network, etc. drivers.

Also, while you may not want to use GNOME or KDE wholesale for whatever reason, the source compatibility would allow you to take advantage of underlying technologies like GStreamer or D-BUS. And in the GNOME HIG you'd have a good set of interface guidelines.

Re: Mike Reid (IP: ---.cable.paradise.net.nz)
by drsmithy on Fri 16th Jul 2004 05:53 UTC

Ahem. IIRC, IBM also did a large amount of the shared work (with Microsoft) that would become NT (after they both decided to split the project, Microsoft extended the joint work into NT and IBM into OS/2). Some of the stuff you laud should be credited to the talented IBM engineers. No, I don't work for IBM, nor ever have, but it wasn't solely Microsoft that came up with the technology they use today.

No, the NT project was run under Dave Cutler at Microsoft and initially almost completely by a bunch of ex-DEC people Cutler had brought with him. IBM was working on the "old" OS/2 that went on to become the OS/2 2.0 and Warp (which was a completely separate code base and product to the "OS/2 NT" that was renamed to Windows NT). AFAIK there was little (if any) IBM involvement in the NT project - it would barely have been into pre-Alpha coding stages when IBM and Microsoft split (1989-90, the NT project started in 1988). Indeed, it would be more correct to reverse your statement, since the early versions of OS/2 were produced by Microsoft for IBM. Certainly HPFS and large chunks of the OS guts were Microsoft's work - IBM were still paying Microsoft royalties for HPFS in the mid 90s.

RE V. Velox
by Michael Matloob on Fri 16th Jul 2004 06:15 UTC

The thing about making X look like whatever you want sounds nice, but you also said it requires a good command line. The Unix-like operating systems (using that term so I won't be corrected) were designed for the command line, and people who are seriously using them have an xterm up or have some Unix-like-operating-system guru at their side. You can't change that: Unix-like operating systems will generally stay command-line driven (there are a few exceptions - TiVo? not really an operating system... Mac OS X I think is a really good job by Apple, but a whole lot of code to cover it up). I don't think there will ever be a desktop Linux that home users (without a guru at their side or command-line knowledge) will be able to use - but who knows? I might be wrong. Back to what I was saying: the front end to the computer includes the command line, if that is what you use to communicate with the computer. It does not matter what the WM does.

I don't understand what you meant by the "Unix is the end of the line..." thing. Were you saying it is a good thing or a bad thing? I probably got this wrong, but what is wrong with communicating with the computer by asking it what to do? (If you are confused, so am I - please clarify what you said.)

Anyway, I think that after we finish with the GUI phase of operating-system interfaces, we will go back to asking the computer what we want it to do, but without learning a command line - in English (or Spanish, French, German, Japanese...). If the point of a computer is to make it easier to get stuff done, we should use the method we use to tell people to get stuff done: language. I have never heard someone complain that telling someone to do something was not usable out of the box!

No clue about Linux, but for X it is entirely possible. Just because no one has done it for X yet (or at least in a way you like) does not mean it is impossible - if you want something, learn C/C++ and make it so ;)


What I said was neither good nor bad; it just is. Everything in an OS that will be needed from now on has been done in UNIX first, so basically all that comes after it falls under unix. ^_^ Looks may change, as well as the name, but it will all behave like unix. Windows is getting more and more that way all the time. lol

Making a computer easy to use is simple... require that everything be pluggable.

As far as languages go... just worry about English; in the long run, thanks to the English, it will kill all the others off in time. Thus the world is left a better place thanks to their imperialistic policies ^_^

Uhm, it has always been about making things as easy to do as possible, since day one... the problem boils down to people not wanting to learn, and lame schools.

Amiga & BeOs
by melgross on Fri 16th Jul 2004 07:35 UTC

Please stop with the "BeOS (Amiga) is the future, and the best OS ever designed" stuff.

Other than the (few) people who just can't let go, there is NO interest in these OS's. That's why they failed.

The Amiga had some great forward-looking ideas, but it was poorly implemented. It was very hard (as I remember) to program for, and was very unstable as a programming environment. Despite its sophistication for its day, it's hopelessly outclassed now. We keep hearing promises about new versions, new machines, new programs, etc., but it never happens. It won't, either, because they would have to sell a million machines to be viable, and that can't happen.

The BeOS was originally expected to be sold to Mac users. I know, because Gassée came down to my MUG in NYC to show the BeBox and said that he considered it an alternative OS for Mac users. It only went to x86 when Apple stopped giving them the APIs for it to run on Macs, and the BeBox died. It was very nice, and we oohed and aahed when it ran two video streams at once. But most people, myself included, had many problems with it, and most of the promises Be made never came to pass. I bought almost all of the software made for it, but never opened most of it.

For either of these OSes to be thought of as the future OS, even with extensive mods, is a bit naive.

Linux started as a server OS with Apache; when that fairly simple task seemed to work, it slowly moved further afield, being added to bit by bit (literally).

Then the hobbyist programmers came into it, and now there is a slew of distros, each one thinking that it's the only ONE. Most of those people are here on the Linuxnews.com, er, OSNews.com, web site.

Somehow, whatever the topic starts out as, it ends up with "my distro is better than yours".

Please, let's have an author who is a pro, with years of experience, giving us articles of this complexity. And Linux, for all its virtues, is not the OS of the future in the sense that is meant here, though it will no doubt be successful.

I think that Microsoft has "Copland-ed" Longhorn. It seems that they have gotten themselves into something that they can't finish. Someone here said something about Longhorn and NT. Longhorn is not based on NT. It will be at least three years late, and if it does come out, it will arrive without its most defining features. Microsoft said that the OS as database concept will be pushed back to 2009. That's assuming that Longhorn will come out "sometime in 2006", and not in 2007, or 2008, as some have said.

Good luck to them.

I think that we will see Windows, Mac OS X, and Linux around for a long time to come. It's too late for new systems at this point at the desktop level.

Re: melgross (IP: ---.nycmny83.covad.net)
by drsmithy on Fri 16th Jul 2004 07:53 UTC

"Someone here said something about Longhorn and NT. Longhorn is not based on NT."

That was me, and Longhorn *is* NT. It is (or will be) Windows NT 6.0 (maybe even 6.1 by the time it gets released). Do not believe the marketing spin - it's not a "from scratch" project in any way, shape or form (for a start, they don't have enough time). If you want something a bit more concrete, consider screenshots like this: http://www.winsupersite.com/images/reviews/lh_alpha_054.gif

It's a significant revision, to be sure - even more significant than the shifts from 3.51 to 4.0 and 4.0 to 5.0 (Win2k) were, but it's still just a major point revision, not a new product or codebase.

"Microsoft said that the OS as database concept will be pushed back to 2009. That's assuming that Longhorn will come out 'sometime in 2006', and not in 2007, or 2008, as some have said."

Microsoft have been talking about this concept and other pie-in-the-sky ideas for NT (veterans of the industry should remember the codename "Cairo") since the early 90s. They've been pushing it back since then as well, so I wouldn't be holding my breath either.

"I think that we will see Windows, Mac OS X, and Linux around for a long time to come. It's too late for new systems at this point at the desktop level."

If Microsoft went bankrupt tomorrow it would still take 5+ years to reduce Windows to a non-majority market share, and probably closer to 10 to get it down to the levels that non-Windows OSes have today. The computer industry is starting its maturation process, and inertia is becoming an even more significant issue.

RE: V. Velox
by Daan on Fri 16th Jul 2004 08:02 UTC

"'a common misconception here in the comments section seems to be that UNIX is perfect, because it is open source and stable and reliable and and and and and and (...)'

Dude, unix does not mean Linux. Linux is a unix, but far from the only one."


Sure, I know that, I have run Solaris for some time myself and am typing this on OpenBSD!

And why I chose the word Linux? Simple, because all UNIX-like OS'es have the same reputation of being stable and reliable. And in the comments section of an article that doesn't talk about Mac OS X, Solaris or some other commercial OS, the general idea seems to be that UNIX(-like) == open-source. So that's why I wrote UNIX instead of Linux.

os for mainframes?
by PdC on Fri 16th Jul 2004 08:45 UTC

Most comments are about OSes that run on a desktop-like computer, but what about all the techniques used in a mainframe OS?

I think the future OS will and can run on your wristwatch as well as on distributed/clustered/networked mainframe hardware.

RE: melgross
by Thom Holwerda on Fri 16th Jul 2004 08:48 UTC

"Please stop with the 'the BeOs (Amiga) is the future, and the best Os ever designed' stuff.

Other than the (few) people who just can't let go, there is NO interest in these OS's. That's why they failed."


Errm, Zeta is trying to commercially revive BeOS, and Haiku is trying to do that in an open-source way. And SkyOS follows more or less the same vision the creators of BeOS had: power through simplicity.

BeOS itself might be pretty much dead (although some of us, including me, still use it on a daily basis), but its vision and goals and ideas are far from dead.

Sorry for the preacherlike tone everyone ;) .

I know, I know
by hmmm on Fri 16th Jul 2004 08:55 UTC

Hey everyone, let's build a Next-Next-Generation OS. Let's start early and beat these Next-Gen-OS guys to the punch.

Can someone pass me that J?

After reading some posts I swear I'm the only pothead commenting in this thread who isn't high.

Linux is cool technology, but don't let that stop you. I encourage creativity. Even when you're stoned.

about the mac os x kernel....
by resonate on Fri 16th Jul 2004 09:12 UTC

here is some rant....
http://www.linuxjournal.com/article.php?sid=6105&mode=&order=&thold...

and let's not forget the excellent kernelthread...
http://www.kernelthread.com/mac/osx/

For me, in the end it is the user experience that counts. Sure, I would like some working _desktop_ Linux that Windows people can turn to and never look back from, but looking at the last 10 years of development... it's not going to happen.
What do we need another Linux flavour for?

Unix = Neanderthal OS
by Futt Bucker on Fri 16th Jul 2004 11:08 UTC

Unix was deprecated the moment Plan 9 came out. And Plan 9 was developed by the same people who developed Unix.

What matters more is the UI
by dr_gonzo on Fri 16th Jul 2004 12:44 UTC

Let's face it: users decide whether they like an OS or app by the way they interact with it. I wonder if people would hate Windows as much if it had all the nice GNU command-line utilities and KDE and KDM running on it.

This is the point I was trying to make in my initial post. If you want to make a cool and interesting OS, use the already made "boring" stuff like networking and kernel and then innovate with the GUI and the apps.

BeOS, It's THE OS!
by Galley on Fri 16th Jul 2004 14:29 UTC

Microsoft should've bought BeOS a few years ago and based Longhorn on that. They would've had an incredibly responsive OS, and they would've saved millions of dollars and several years of development time.

re:BeOS, It's THE OS!
by helf on Fri 16th Jul 2004 16:04 UTC

Not really. What would have happened is that they'd have seriously f*cked BeOS up to get it compatible with 'legacy' Windows programs and then added a bunch of their other crap to it. It would have ended up bloated and slow ;) Or maybe not, who knows... Windows 2000 is plenty fast on my 1GHz box.

Bull, complete bull.

Linux is just an OSS unix clone, and far from the only one. unix does mean Linux, but UNIX does not mean Linux.

Exokernels, baby, YEAH!
by EyeAm on Sat 17th Jul 2004 00:40 UTC

Quoting melgross...

"Please stop with the 'the BeOs (Amiga) is the future, and the best Os ever designed' stuff.

Other than the (few) people who just can't let go, there is NO interest in these OS's. That's why they failed.

The Amiga had some great forward looking ideas, but it was poorly implemented. It was very hard (as I remember) to program for, and was very unstable as a programming environment. Despite it's sophistication for it's day, it's hopelessly outclassed now. We keep hearing promises about new versions, new machines, new programs, etc, but it never happens. It won't either because they will have to sell a million machines to be viable, and that can't happen."



I wouldn't say that it has "NO" interest; just that the interest the Amiga OS currently has is extremely small by comparison to other operating systems or companies.

There were a lot of good ideas inherent to the Amiga OS, way back when. Those ideas could have been extended and enhanced to rival anything today, if the people in charge had only listened. For example, I tried to get Amiga, Inc. to rewrite the kernel as an exokernel, so that a new 64-bit OS could be made backward compatible with all previous Classic Amiga software, and they wouldn't have come to market with anything less than the hundreds of thousands of Classic Amiga programs (including the shareware, etc., found on Aminet).

Like this article illustrates, exokernels rock! High flexibility there. It's not going away, it's the next new thing. Evolve or die, so to speak.

--EyeAm
"Rebel to the status quo!"
http://s87767106.onlinehome.us/mes/NovioSite/index.html

lisp machines, etc.
by grey on Sat 17th Jul 2004 00:51 UTC

For those asking about LispOS etc.: Google for Lisp machines, and you'll see that Symbolics was the last to manufacture any (though there were other vendors), about 10 years ago. LISPM users were largely responsible for the Unix-Haters Handbook, which leveled many of the same criticisms at Unix that you hear MS bashers rant about now. There's some very interesting reading and writing on the subject, particularly on Genera (the OS), which for being over 20 years old is still ahead of where we tend to be today - buffer overflows and the like did not exist on the platform, so there goes a huge chunk of the problems we deal with security-wise now. They appear to have been ahead of their time, though hopefully the need for such systems will come again, and perhaps we'll see something taking their concepts up at the helm again (the language Genera was implemented in isn't necessarily as important as some of the other concepts it allowed).

Nicholas: did you do more digging around MIT's XOK papers? There's actually already a paper on virtual machines with xok - granted, a few years old (everything in the xok project appears to have gotten dusty). I think at a minimum some sort of x86 emulator is a requirement for any future successful OS attempt. But again, looking at history, emulators used to run at a lower level: the first emulator I used on the Amiga emulated a Mac, and despite running on the same 68000 it was actually _FASTER_ than a Mac Plus. By implementing emulation functions in an exokernel-styled OS, performance would likely not suffer much (there are other problems, but also other interesting solutions; again, refer to the xok documentation on file system and partitioning possibilities). Not building everything in a userland on top of a bloated OS can afford some real improvements, but at the same time, with an exokernel/emulation mindset, you can still leverage existing products, albeit perhaps in a sandbox of sorts (which, given the lack of security most of those products have inherently, would probably be a good thing).

Making a new OS
by Hagge on Sat 17th Jul 2004 12:13 UTC

I don't understand how you people think. No, just because "unix" (Linux) works does not make it the best alternative. If that were the case, building on Windows would be the best alternative, because obviously it works best for most people.

Bullshit like "Don't forget it was a UNIX clone and that was a long time ago. Writing a full OS from the ground up is too large of a task becasue it would take so long and so much time before it is functional enough to use (and make money to support it). This just is not practical, to build a new OS and not use at least parts of open source (gcc, toolkits, applications, shell env etc.) is a costly task.", is also boring.

Linux was written from scratch, as someone said. So was the OS the author proposes, so was SkyOS, so is MorphOS, and so on. Linux, SkyOS, and MorphOS seem developed enough to have a chance of becoming good OSes. So it's NOT impossible.

Reads on... (I started to read the article a long time ago but not the comments.)

Migrating Threads
by Marven Lee on Sat 17th Jul 2004 15:20 UTC

"Unfortunately much of the research into microkernels has been on Unix and the synchronous nature of Unix's APIs means asynchronous messaging is not used and this results in more context switches and thus lower performance."

Actually, a lot of microkernel research over the past decade has focused on optimizing synchronous IPC/RPC between clients and servers on the same machine, usually in the form of Migrating Threads.

Migrating Threads allows the active entity (the thread) to transfer execution from one address space to another in a similar manner to how a system call or interrupt transfers a thread from user-mode to kernel-mode in the same address space. In other words threads are able to jump across address spaces in a controlled manner.

Examples include the Pebble Operating System, Doors in Solaris/Spring Nucleus, Migrating Threads added to Mach and a few others.

The paper, "Microkernels Should Support Passive Objects", covers some of the arguments in favour of the migrating thread model. Most of the research papers can be downloaded from the CiteSeer website.
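A toy sketch may make the contrast concrete. This is purely illustrative Python (threads and queues standing in for address spaces and kernel IPC - nothing here is the actual Doors or Pebble API): the door-style call runs the server procedure on the caller's own thread, while the message-passing version needs a queue hop and a second scheduled thread in each direction.

```python
import queue
import threading

# The "server" procedure. In a migrating-thread system, the *client's*
# thread would execute this directly inside the server's address space.
def server_proc(x):
    return x * 2

# Door-style synchronous call: the caller's thread runs the server code
# itself - no queuing and no extra scheduling, modeled here as a plain
# function call.
def door_call(x):
    return server_proc(x)

# Asynchronous message-passing equivalent: the request and the reply each
# cross a queue, and a separate server thread must be scheduled to handle
# the request - the extra context switches the article mentions.
def async_call(x):
    requests, replies = queue.Queue(), queue.Queue()

    def server_loop():
        replies.put(server_proc(requests.get()))

    t = threading.Thread(target=server_loop)
    t.start()
    requests.put(x)       # first hop: client -> server
    result = replies.get()  # second hop: server -> client
    t.join()
    return result

print(door_call(21))   # 42
print(async_call(21))  # 42
```

Both paths compute the same answer; the point of the migrating-thread research is that the first path crosses protection domains without paying for the queue hops and thread wakeups of the second.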

plan9
by kmiller on Sat 17th Jul 2004 15:41 UTC

Whenever you're looking into the future of OSes, I recommend not forgetting one very special OS: Plan 9. On the whole, it sucks - nobody uses it, and there are reasons for that - but I think anyone interested in OS design/implementation should at least check it out.

It takes a drastically different approach to so many of the ideas we've long forgotten even had room to be different.

It was not from scratch; it was meant as a replacement for the Minix clone, so what it had to do was already known.

missed point
by Peter Fergusson on Mon 19th Jul 2004 03:49 UTC

The advantage/point of a fresh start is to abandon the baggage of past mistakes. It allows the best ideas currently available to be used, whether they are new or 30 years old.
With the new hardware, which is so different from what we have today, old OS assumptions should be re-examined, because many will fail in the new environment and so should be left behind.
I see the best way to proceed as deciding on what features are wanted, e.g. the best features from all other OSes plus any new ideas, and then looking to see whether any other system could be a good foundation.
Now, I suspect that the suggested Haiku/OpenBeOS is more suitable than most for the new OS. Be Inc. started with a fresh design, with many up-to-date ideas, and the Haiku/OpenBeOS team's reimplementation avoids the junk/mistakes of old versions, so it is probably the closest to what is wanted.

New OS needs new hardware
by Charles Esson on Mon 19th Jul 2004 10:53 UTC

While we use the Von Neumann architecture and its derivatives, the currently available OSes are more than adequate.

Things will get interesting when the basic structure of the compute engine changes. Think small things happening in parallel, how to deal with vision properly, how the mind creates a model of the environment, love and hate.

This article is just another whistle in the wind. Linux will be a satisfactory commodity OS for the current day commodity compute engine.

Licenses and ideas
by Bob Robertson on Mon 19th Jul 2004 12:01 UTC

"the commercially friendly ultra-free MIT license."

I am so sick of FUD concerning the GPL. Lots of commercial enterprises are using GPL'd software; the only thing they're not doing is taking the code for their own closed products and selling it as if it were their own. I would rather put my work into the public domain if I wanted people to use it without attribution. Remember the public domain? Anyone? Bueller?

Technically, I very much liked the article. What I'd thought was "micro" kernel is now "exo" kernel, but I'm not a kernel developer. I'm just a long time user and administrator, from TRS-80 to ES/9000, RSTS/E to CP/M to Linux.

The article also makes an error, I think, in saying "Not Unix" before saying "Not Macrokernel", since in a micro/exo kernel environment, whether it looks like "unix" to a program or not is just a matter of having the API available. The user will see the applications, not the OS.

QuantumG's comment about implementing the L4 API in a Linux module goes a long way toward demonstrating that imagination is the most important technical skill. I realize that a micro/exo kernel goes one better, since there would be no need to load such a module into kernel space in order to have the API available: the API becomes just another process.

BeOS, along with AtariOS and CP/M, I think has demonstrated one of the problems with "commercial" answers to technical problems: commercial answers are a matter of marketing, not technical quality. That's why DOS, then Windows, beat so many technically superior OSes.

I like the exokernel concept; it's fun, and I think it is the way the OS "ought" to be going. That said, I will use whatever OS provides the applications that I use. For a while that was Windows, now it's Linux; only time will tell what comes next.

OS development
by Nicholas Donovan on Mon 19th Jul 2004 14:46 UTC

Let me see if I can add a little something to this argument.
As an OS developer, I've seen many trends come and go. Most of my company's efforts are in the development of exokernel-like OSes to serve real-time functions (i.e. communication arrays, etc.).

Exokernels at this time do not make a viable general-purpose OS, in my opinion, as the threading mechanisms (while having fine pre-emptive capabilities) don't have the cast/drop_thread abilities of, say, a Linux 2.6-series or BSD 5.x-series kernel.

Essentially, if we do an allocation of memory (which is arbitrary and goes to FIFO on exo), this would cause serious issues on a general-purpose OS where many applications are being run.

Lack of pre-emptive caching aside, exo makes a fine real-time OS, as in the real-time space such caching is more of a detriment than an asset.


Hope this helps,


Nick


http://www.eros-os.org

http://www.dragonflybsd.org

Given that Matt Dillon is aware of capability-based systems (like EROS) and has slated some of those features as future nice-to-haves, and that DragonFly is literally tailor-made for dual-core chips, I'd bet my money on DragonFly being the OS of the future.