OK, so we have some cutting-edge hardware whose performance is going to leave even the best desktop systems in the dust. But we need an OS.
One thing it should not be is Unix. That's right, it should NOT be Unix. How many versions of Unix do we need? If you want a desktop, get OS X; if you want a server, pretty much any version of Unix will do. If you want "Freedom", get Linux or BSD, depending on which flavour of Freedom you like; if you want security, get OpenBSD. There is a Unix or variant for almost every task, and that's before you start looking at Linux distros. Do we really need another Unix?
I'd much rather have something new, something which can take advantage of advances in technology and use the best features from different OSs. The Ruby OS (ROS) project even has pages online where they collect these best ideas [RubyList].
Writing an OS from scratch these days is a very serious business; it is a multi-year undertaking and requires many skilled individuals. And that's just the OS - then you need drivers, a compiler, developer documentation and applications. The alternative, which almost everyone takes these days, is either to base a new OS on an existing one or to clone an existing OS. In this case I would do both: I'd take Haiku [Haiku] (formerly known as OpenBeOS) and branch it.
There are a number of choices for operating systems, but I would choose Haiku because, being based on BeOS, it's going to thrive on multiple processors and multiple threads. It also doesn't have a lot of legacy, so there aren't a lot of workarounds, and breaking things doesn't matter much (in this case it's only serving as the base of a new OS, so backwards compatibility can be completely ignored). The modern API and media focus will also be beneficial, as is the commercially friendly, ultra-free MIT license. In any commercial project - which this would have to be, given its scope - licensing issues are a lot more complicated than advocates would suggest, and their implications need to be considered carefully [Licenses].
What is very important is the fact that Haiku will of course be structurally similar to BeOS, that is, "microkernel-like". I would change it, however, moving as much as possible outside the kernel so it becomes more like a pure microkernel [Micro]. In fact the kernel would technically become an exokernel [Kernel], but the system would act like a microkernel OS.
Microkernel systems are used in hard real-time and safety-critical systems (i.e. where a system failure is likely to kill someone). Hard real-time behaviour and bomb-proof stability are very nice attributes to aim for, but pure microkernels are almost never used commercially in desktop systems; when they are, they are often modified later to become "microkernel-like", with parts migrated back into the kernel (e.g. graphics in Windows NT, networking in BeOS Dano).
With a microkernel, parts of the OS's kernel are moved into "user-land", that is, run as individual tasks. Doing this has advantages from a stability, security and simplicity point of view, but breaking the kernel up into tasks means the CPU has to "context switch" between them, and this can incur a heavy penalty which lowers performance. For this reason parts are often put back into the kernel to reduce the switching, which is why "pure" microkernels are not generally used in desktop systems. Exokernels remove even more from the kernel, so all it does is act as a hardware multiplexer; I do not know of any common OS which uses this approach at this time.
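As an illustrative sketch (in Python rather than the C a real kernel would use), a microkernel-style user-land service can be modelled as an ordinary process that receives requests and sends replies over a message channel. The `fs_server` name and the message format are invented for this example; the point is that a "system call" becomes a message round trip, which is where the context-switch cost comes from:

```python
from multiprocessing import Process, Pipe


def fs_server(conn):
    """Toy 'file system server' running as an ordinary user-space task.

    It talks to clients only via messages, so it never shares memory
    with the rest of the system.
    """
    while True:
        request = conn.recv()
        if request == "shutdown":
            break
        # Pretend to service a read request for the named path.
        conn.send(("ok", "contents of " + request))


def read_file(path):
    """Client side: what would be a system call becomes a message exchange.

    Each send/recv pair implies a context switch between client and server,
    which is the overhead the article describes.
    """
    client_end, server_end = Pipe()
    server = Process(target=fs_server, args=(server_end,))
    server.start()
    client_end.send(path)
    status, data = client_end.recv()
    client_end.send("shutdown")
    server.join()
    return status, data


if __name__ == "__main__":
    print(read_file("/etc/motd"))  # ('ok', 'contents of /etc/motd')
```

In a real microkernel the server would be long-lived and the message transport would be kernel-provided IPC rather than a Python pipe, but the structure - isolated server task, request/reply messages - is the same.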
Why not Linux?
So, I'm choosing a new OS which isn't finished at the time of writing, then I want to tear the core apart and use a technique known to reduce performance... There is method in my madness. Linux is tried and tested, with many drivers and good software support, and it's fast. Let's be pragmatic: apart from being different, is there any good reason for not just starting with Linux?
Yes. Macrokernel-based OSs (e.g. Linux) are faster than microkernel OSs because they were primarily designed for, and run on, single-processor systems. I assume most microkernel experiments have also been on single-processor systems.
This is important because, as I explained in part 1, this new system will not be a single-processor system. It is based on the idea that the hardware will have at least 2 general-purpose cores, and that number will increase in the future, so the OS should take full advantage of this fact.
By breaking the components of the kernel into user-space tasks, as a microkernel does, they can be run in parallel across multiple CPU cores; each component does its own job and does not need to explicitly support multiprocessing.
Running a macrokernel-based OS over multiple CPUs adds complexity, as many already complex parts have to be able to run simultaneously. And since a macrokernel uses a single address space, its components do not have the benefit of memory protection: a toy app like Xeyes gets its own protected memory, while the network stack, file system and all the most critical parts work together in a single shared memory space.
Breaking up functionality into discrete parts means each component is simpler, so it is less likely to have bugs and easier to maintain if it does. This design also reduces the possibility of a bug in one part crashing another, since everything is compartmentalised in its own memory-protected space; reliability is built into the design, not just the code.
This is not to say Linux is unreliable; in my own experience Linux is a highly reliable system. What it does mean is that Linux could suffer the same issue as Windows, where badly written drivers can lead to system instability. In the system I am advocating, a badly written driver could not crash other system components (in reality it probably could still cause problems, but it would have to be really badly written to do so).
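A minimal sketch of that isolation property, using ordinary Unix processes to stand in for compartmentalised drivers (assumes a Unix system with `os.fork`; the `run_isolated` helper and driver names are invented for the example):

```python
import os
import signal


def run_isolated(component):
    """Run a 'driver' in its own process and report whether it crashed.

    The parent process -- standing in for the rest of the OS -- survives
    either way, because the child has its own protected address space.
    """
    pid = os.fork()
    if pid == 0:
        component()
        os._exit(0)          # clean exit from the child
    _, status = os.waitpid(pid, 0)
    return os.WIFEXITED(status) and os.WEXITSTATUS(status) == 0


def well_behaved_driver():
    pass                     # does its job and returns


def badly_written_driver():
    # Simulate a fatal memory error, confined to this process only.
    os.kill(os.getpid(), signal.SIGSEGV)


if __name__ == "__main__":
    print(run_isolated(well_behaved_driver))   # True
    print(run_isolated(badly_written_driver))  # False -- crash contained
```

In a macrokernel the equivalent fault happens inside the shared kernel address space, so there is no parent left standing to notice it; here the failure is just a status code the rest of the system can react to, e.g. by restarting the driver.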