This series explores the sort of technologies we could use if we were to build a new platform today. In the first part I described a system with a multi-core multi-threaded CPU, FPGA and Cell processors. In this second part we start looking at the Operating System.
OK, so we have some cutting-edge hardware whose performance is going to leave even the best desktop systems in the dust. But we need an OS.
One thing it should not be is Unix. That's right, it should NOT be Unix. How many versions of Unix do we need? If you want a desktop, get OS X; if you want a server, pretty much any version of Unix will do. If you want "Freedom", get Linux or BSD, depending on which flavour of freedom you like; if you want security, get OpenBSD. There is a Unix or variant for almost every task, and that's before you start looking at Linux distros. Do we really need another Unix?
I'd much rather have something new, something which can take advantage of advances in technology and use the best features from different OSs. The Ruby OS (ROS) project even has pages online where it collects these best ideas [RubyList].
Writing an OS from scratch these days is a very serious business: it is a multi-year undertaking and requires many skilled individuals. And that's just the OS; then you need drivers, a compiler, developer documentation and applications. The alternative, which almost everyone takes these days, is to base an OS on an existing one or to clone an existing OS. In this case I would do both: I'd take Haiku [Haiku] (formerly known as OpenBeOS) and branch it.
There are a number of choices of operating system, but I would choose Haiku because, being based on BeOS, it is going to thrive on multiple processors and multiple threads. It also doesn't carry a lot of legacy, so there aren't many workarounds, and breaking things doesn't matter much (in this case it only serves as the base of a new OS, so backwards compatibility can be completely ignored). The modern API and media focus will also be beneficial, as is the commercially friendly, ultra-free MIT license. In any commercial project, which this would have to be given its scope, licensing issues are a lot more complicated than advocates would suggest and their implications need to be considered carefully [Licenses].
What is very important is that Haiku will of course be structurally similar to BeOS, that is, "microkernel-like". I would change it, however, moving as much as possible outside the kernel so it becomes more like a pure microkernel [Micro]. In fact, the kernel would technically become an exokernel [Kernel], but the system would act like a microkernel OS.
Microkernel systems are used in hard real-time and safety-critical systems (i.e. where a system failure is likely to kill someone). Hard real-time behaviour and bomb-proof stability are very nice attributes to aim for, but pure microkernels are almost never used commercially in desktop systems; when they are, they are often modified later to become merely "microkernel-like", with parts migrated into the kernel (e.g. graphics in Windows NT, networking in BeOS Dano).
With a microkernel, parts of the OS's kernel are moved into "user-land", that is, run as individual tasks. Doing this has advantages from a stability, security and simplicity point of view, but breaking up the kernel into tasks means the CPU has to "context switch" between them, and this can incur a heavy penalty which lowers performance. For this reason parts are often put back into the kernel to reduce the switching; thus "pure" microkernels are not generally used in desktop systems. Exokernels remove even more from the kernel, so that all it does is act as a hardware multiplexer; I do not know of any common OS which uses this approach at this time.
Why not Linux?
So, I'm choosing a new OS which isn't finished at the time of writing, then I want to tear the core apart and use a technique known to reduce performance... There is method in my madness. Linux is tried and tested, has many drivers and good software support, and it's fast. Let's be pragmatic: apart from being different, is there any good reason for not just starting with Linux?
Yes. Macrokernel-based OSs (e.g. Linux) are faster than microkernel OSs because they were primarily designed for, and run on, single-processor systems. I assume most microkernel experiments have also been on single-processor systems.
This is important because, as I explained in part 1, this new system will not be a single-processor system; it is based on the idea that the hardware will have at least two general-purpose cores, and that that number will increase in the future. As such, the OS should take full advantage of this fact.
By breaking the components of the kernel into user-space tasks, as a microkernel does, they can be run in parallel across multiple CPU cores; the individual components each do their own job and do not need to explicitly support multiprocessing.
Running a macrokernel-based OS over multiple CPUs adds complexity: many already complex parts have to be able to run simultaneously. And since a macrokernel uses a single address space, its parts do not have the benefit of memory protection from one another: a toy app like Xeyes gets memory protection, yet the network stack, file system and all the most critical parts work together in a single shared memory space.
Breaking functionality into discrete parts means components are simpler, so they are less likely to have bugs and are easier to maintain if they do. This design also reduces the possibility of bugs in one part crashing another, since everything is compartmentalised in its own memory-protected space: reliability is built into the design, not just the code.
This is not to say Linux is unreliable; in my own experience Linux is a highly reliable system. What it does mean is that Linux can suffer the same issue as Windows, where badly written drivers can lead to system instability. In the system I am advocating, a badly written driver could not crash other system components (in reality it could probably still cause problems, but it would have to be really badly written to do so).
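The containment idea can be shown with a toy model (a Python sketch, not real OS code; all the names here are illustrative): each component runs as a task with its own private memory, and a fault in one task is caught at the task boundary rather than corrupting its neighbours.

```python
# Toy model of microkernel-style fault isolation.
# Each OS component is a task with its own private "address space";
# a crash in one is contained and merely marks that task as dead.

class Task:
    def __init__(self, name, step):
        self.name = name
        self.memory = {}      # private, protected memory for this task
        self.alive = True
        self.step = step      # one unit of the component's work

def run_all(tasks):
    for t in tasks:
        if not t.alive:
            continue
        try:
            t.step(t.memory)
        except Exception:
            # The fault stays confined to this task.
            t.alive = False

def bad_driver(mem):
    raise RuntimeError("badly written driver")

def net_stack(mem):
    mem["packets"] = mem.get("packets", 0) + 1

tasks = [Task("driver", bad_driver), Task("net", net_stack)]
run_all(tasks)
print([(t.name, t.alive) for t in tasks])
# The driver has died, but the network stack kept running.
```

In a macrokernel the equivalent of `bad_driver` would be scribbling over the same memory the network stack uses, with no boundary to catch it.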
Will this not lead to performance issues?
In single-processor systems there is no doubt that there is a negative performance impact. However, as I explained in part 1, the hardware for this system is based on a multi-core processor, and this changes things.
In order to understand what will happen we first need to know exactly what causes the microkernel performance impact in the first place.
A context switch occurs when one task has to stop and let another run. It involves saving the processor "state" (the contents of the data and control registers) to RAM, an operation which can take tens of thousands of clock cycles. Context switches happen more often in a microkernel-based OS because the kernel functionality is broken up into different tasks, which all need to be switched in and out of the CPU to operate.
This is a problem because performing the switch takes time during which the CPU cannot do any useful work; if there are thousands of switches per second, performance will clearly suffer. More importantly, a context switch can cause part of the cache to be flushed, and this has a bigger negative effect on subsequent performance than the switching itself.
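A quick back-of-envelope calculation shows why this matters. The figures below are assumptions for illustration only (a 2 GHz clock, 20,000 cycles per switch including cache effects, 10,000 switches per second), not measurements of any real system:

```python
# Back-of-envelope cost of context switching; all numbers are
# illustrative assumptions, not measurements.
clock_hz = 2_000_000_000        # assumed 2 GHz CPU
cycles_per_switch = 20_000      # assumed save/restore cost plus cache refills
switches_per_second = 10_000    # assumed switch rate

cycles_lost = cycles_per_switch * switches_per_second   # 200 million cycles
overhead = cycles_lost / clock_hz
print(f"{overhead:.0%} of the CPU spent just switching")  # prints "10% ..."
```

Even with these rough numbers, a tenth of the machine disappears into bookkeeping, which is why kernel designers care so much about the switch rate.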
The macrokernel approach does not suffer these performance issues: everything is in the kernel, so no context switch is needed when the flow of execution moves between its different internal parts.
All that said, a well-designed microkernel need not be slow. It can to some extent get around the context-switch hit by putting messages together and transferring them en masse (asynchronously), which reduces the number of context switches necessary. Unfortunately, much of the research into microkernels has been done on Unix, and the synchronous nature of Unix's APIs [async] means asynchronous messaging is not used; this results in more context switches and thus lower performance. As such, microkernels' reputation for being slow may be at least partially undeserved.
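The effect of batching is easy to sketch. In this toy Python model (the batch size and message count are arbitrary assumptions), a synchronous design pays one switch per message, while an asynchronous design queues messages and pays one switch per batch:

```python
# Toy comparison: context switches needed to deliver N messages
# synchronously (one switch per message) versus asynchronously
# (one switch per batch). Numbers are purely illustrative.

def sync_send(messages):
    switches = 0
    for _ in messages:
        switches += 1          # every call crosses to the server and back
    return switches

def async_send(messages, batch_size):
    queue, switches = [], 0
    for m in messages:
        queue.append(m)
        if len(queue) == batch_size:
            switches += 1      # one switch delivers the whole batch
            queue.clear()
    if queue:
        switches += 1          # flush any remainder
    return switches

msgs = list(range(1000))
print(sync_send(msgs))         # 1000 switches
print(async_send(msgs, 64))    # 16 switches
```

The per-message cost doesn't vanish, but it is amortised: a 64-message batch cuts the number of switches by roughly a factor of 64.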
Remember, though, that at the base of this system is an exokernel. A traditional microkernel passes messages through the kernel to their destination. An exokernel doesn't deal with the message itself; it just tells the destination there is a message and leaves it to deal with it. This reduces the messaging overhead.
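The distinction can be sketched in a few lines (again a toy Python model; the dictionary standing in for shared memory and the function names are my own invention, not any real kernel API):

```python
# Toy model: a traditional microkernel copies the payload through the
# kernel on every send; an exokernel only delivers a notification and
# lets the receiver fetch the payload itself from shared memory.

shared_buffer = {}   # stands in for memory both tasks can map

def microkernel_send(inbox, payload):
    inbox.append(bytes(payload))      # kernel copies the data into the inbox

def exokernel_send(inbox, key, payload):
    shared_buffer[key] = payload      # the data stays where it is
    inbox.append(key)                 # kernel only signals its location

inbox = []
exokernel_send(inbox, "msg0", b"hello")
key = inbox.pop(0)                    # receiver is woken with the key...
print(shared_buffer[key])             # ...and reads the payload directly
```

In the exokernel case the kernel's work per message is constant, no matter how large the payload is; the copy in the microkernel case grows with the message.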
BeOS used this asynchronous messaging technique and was indeed a very fast OS. However, the network stack, being outside the kernel, proved to be a performance burden; it was later moved inside, and this did boost networking performance. That version was never commercially released by Be, but it is part of Zeta [Zeta] (the only legal way to get a full BeOS today).
Using Multiple Cores
The difference with multiple CPU cores is that separate kernel components can run simultaneously on different cores, and so will not need to context switch as often. Messages still need to be sent between the cores, but this does not carry the same overhead as a context switch and does not have the same impact on cache performance.
Using multiple cores along with asynchronous message passing could lead to our exo/microkernel-based OS outperforming a synchronous macrokernel OS. Each message pass or function call carries a fixed cost; asynchronous message passing allows bigger chunks of data to be passed in one go, reducing the number of messages sent compared to a system which uses synchronous API calls.
The very technique which reduces microkernel performance on a single core, and favours macrokernels, may have the complete opposite effect on multi-core CPUs, leaving microkernels as the higher-performing OS architecture. This won't happen immediately, but the effect will become more apparent as the number of cores increases and the different parts of the OS can each get their own core.
It could be argued that even when spread out like this, an application running on multiple cores will force the OS components to switch out, causing performance loss. This is of course a risk, but anything with high computation needs is more likely to be running on the Cell processors, which will not be handling the OS. Remember, this is a desktop, so the CPU cores are likely to be sitting around doing nothing most of the time. Many like to discuss the relative merits of OS and hardware performance, but very, very few actually utilise that performance.
It should be pointed out that the 2.6 Linux kernel includes asynchronous I/O, and asynchronous messaging is being added to a FreeBSD-based OS by the DragonFly BSD project [Dragon].
So no Linux?
Linux (or *BSD) still has advantages of course, but the microkernel approach looks like it can deliver not only all the inherent advantages of a microkernel design but perhaps also a performance advantage in its favour. This approach is also consistent with the guiding principle of simplicity I set out in part 1, and gives us a chance to explore OS design from a new angle and see what the results are.
While the system will for the most part act like a microkernel-based OS, the fact that it really uses an exokernel adds the ability for applications to almost completely bypass the OS and hit the hardware directly in a safe, shared manner. Hitting the hardware is a somewhat frowned-upon approach, but it will be useful for applications which use the FPGA, and it has the potential to allow massive application speed-ups [Exo]. It also allows something else which will be rather useful...
Compatibility With Legacy Systems
Just because you are producing a new system doesn't mean you have to do without useful software; these days there are some interesting ways of getting good applications running quickly even on a completely new system.
A major problem for any new platform is the lack of applications. Emulators are available of course, but they are complex and generally don't perform as well as native execution, often a great deal worse.
These days you can get almost every application you need for Linux or other Unix-like operating systems. Open Source means one option for achieving legacy compatibility is to include an entire OS; all you need to do is make sure it works on your hardware.
Dual booting is a pain, however, and ultimately you want to run the legacy applications within a single environment, alongside your new platform's applications.
You can do something like this using a virtualisation layer; for example, MacOnLinux allows you to run OS X on top of PowerPC variants of Linux. But virtualisation always impacts performance, and the applications are still in their own environment.
There is another way, however: combining a specific high-performance virtualisation layer with existing X Windows technology.
The virtualisation layer is called Xen [Xen]; it enables multiple operating systems to run on the same hardware at almost 100% performance [XenPerf] by sharing the hardware between the OSs. The use of X Windows would allow the guest's display to appear within the main OS.
So, by including a Unix or Unix-like OS and running it alongside the main OS, we can get a whole set of up-to-date applications without ever having to leave the main environment. This could potentially be a usability nightmare, but that can be fixed with care.
The main OS could also take advantage of subsystems in the secondary OS; for example, the secondary OS's USB support could be used if the primary OS did not support all USB devices (many USB devices do not adhere to the standard and thus have to be specially supported). This is a hack, but a useful and justifiable one for a new OS, which will be limited when starting out.
By incorporating a Xen-like technology into our new OS we can get full performance in the main system and close to 100% in the second OS. The microkernel/exokernel design I have proposed would allow this; Xen itself is an exokernel, and drivers already exist which allow Linux to run on it.
So after all that we do use Linux (or BSD) after all…
What about drivers?
The advantage of integrated systems (where hardware and software are done by one company) is that the main drivers are going to be relatively easy to get from component suppliers, since you are purchasing their products. The problem comes when you want to support products you do not supply, as anyone who has worked on alternative OSs can tell you. If you can get documentation, that's fine, but short of paying for them, getting drivers for any new platform is going to be a hard slog.
Starting from a clean sheet will always have costs, but it also has many advantages. With the microkernel approach, drivers in particular can be treated differently, giving the OS abilities like drag-and-drop installation of drivers while the system is running (a la BeOS). It will also be possible to restart even fairly critical parts while the system is live, which is rather useful if you are working on something you haven't saved yet.
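Because drivers are ordinary tasks, restarting one is just a matter of tearing the task down and starting a fresh instance. A toy Python supervisor (all names here are illustrative, not any real driver API) sketches the idea:

```python
# Toy supervisor: a crashed driver task is restarted in place while
# the rest of the system keeps running. Purely illustrative names.

class Driver:
    """A toy driver task; a malformed request makes it crash."""
    def handle(self, request):
        if request == "malformed":
            raise RuntimeError("driver bug")
        return f"handled {request}"

def supervise(requests):
    driver, restarts, log = Driver(), 0, []
    for req in requests:
        try:
            log.append(driver.handle(req))
        except RuntimeError:
            driver = Driver()              # hot-restart the driver task
            restarts += 1
            log.append(f"dropped {req}")   # only the bad request is lost
    return restarts, log

restarts, log = supervise(["read", "malformed", "write"])
print(restarts, log)
# 1 ['handled read', 'dropped malformed', 'handled write']
```

One request is lost, but the system never goes down, which is the whole point: in a macrokernel the same bug would have taken your unsaved work with it.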
So we have an exokernelified (spell-check that!) version of Haiku for a base OS, but the system can equally run existing Unix software in the same environment. The OS services would be provided by the upper-level Haiku "kits", so we get the same capabilities as BeOS. But that's not all: if we are talking about a new system, we can go further and make them better.
In part 3 I shall cover what I'd like to do with security, the file system and file management.
References / Further Information
[RubyList] Ruby OS list of best features.
[Haiku] Haiku (previously OpenBeOS).
[Licenses] Discussion of the use of Open Source in commercial projects.
[Micro] Page about microkernels.
[Kernel] Introduction to different kernel types.
[async] Unix uses synchronous messaging and this puts microkernels at a disadvantage; an informative posting.
[Zeta] Zeta is the continuation of BeOS from YellowTAB.
[Dragon] There are efforts to add asynchronous messaging to Unix; one of the goals of the DragonFly BSD project is to add asynchronous messaging to a FreeBSD-based kernel.
[Exo] Further reading on exokernels: slideshow, papers.
[Xen] Xen allows multiple OSs to share hardware: the Xen project.
[XenPerf] Xen performance is very close to native systems.
© Nicholas Blachford July 2004
About the Author:
Nicholas Blachford is a 33 year old British ex-pat, who lives in Paris but doesn’t speak French (yet). He is interested in various geeky subjects (Hardware, Software, Photography) and all sorts of other things especially involving advanced technologies. He is not currently working.