This series explores the sort of technologies we could use if we were to build a new platform today. In the first part I described a system with a multi-core multi-threaded CPU, FPGA and Cell processors. In this second part we start looking at the Operating System.
OK, so we have some cutting-edge hardware with performance that is going to leave even the best desktop systems in the dust. But we need an OS.
One thing it should not be is Unix. That’s right, it should NOT be Unix. How many versions of Unix do we need? If you want a desktop, get OS X; if you want a server, pretty much any version of Unix will do. If you want “Freedom”, get Linux or BSD, depending on which flavour of Freedom you like; if you want security, get OpenBSD. There is a Unix or variant for almost every task, and that’s before you start looking at Linux distros. Do we really need another Unix?
I’d much rather have something new, something which can take advantage of advances in technology and use the best features from different OSs. The Ruby OS (ROS) project even has a page online where they collect these best ideas [RubyList].
Writing an OS from scratch these days is a very serious business: it is a multi-year undertaking and requires many skilled individuals. And that’s just the OS; then you need drivers, a compiler, developer documentation and applications. The alternative – which almost everyone takes these days – is to build an OS either based on an existing one or by cloning an existing OS. In this case I would do both: I’d take Haiku [Haiku] (formerly known as OpenBeOS) and branch it.
There are a number of choices for operating systems, but I would choose Haiku because, being based on BeOS, it is going to thrive on multiple processors and multiple threads. It also doesn’t have a lot of legacy, so there aren’t a lot of workarounds, and breaking things is not that important (in this case it’s only serving as the base of a new OS, so backwards compatibility can be completely ignored). The modern API and media focus will also be beneficial, as is the commercially friendly, ultra-free MIT license. In any commercial project – which this would have to be by its scope – licensing issues are a lot more complicated than advocates would suggest and their implications need to be considered carefully [Licenses].
What is very important is the fact that Haiku will of course be structurally similar to BeOS, that is, it will be “microkernel-like”. I would change it, however, moving as much as possible outside the kernel so it becomes more like a pure microkernel [Micro]; in fact the kernel would technically become an exokernel [Kernel], but the system would act like a microkernel OS.
Microkernel systems are used in hard real-time and safety critical systems (i.e. where a system failure is likely to kill someone). Hard real-time behaviour and bomb-proof stability are very nice attributes to aim for, but pure microkernels are almost never used commercially in desktop systems; when they are, they are often modified later to become “microkernel-like” as parts are migrated into the kernel (e.g. graphics in Windows NT, networking in BeOS Dano).
With a microkernel, parts of the OS’s kernel are moved into “user-land”, that is, run as individual tasks. Doing this has advantages from a stability, security and simplicity point of view, but breaking up the kernel into tasks means the CPU has to “context switch” between them, and this can incur a heavy penalty which lowers performance. For this reason parts are often put back into the kernel to reduce the switching, and thus “pure” microkernels are not generally used in desktop systems. Exokernels remove even more from the kernel so all it does is act as a hardware multiplexer; I do not know of any common OS which uses this approach at this time.
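To make the structure concrete, here is a minimal sketch of the idea in plain C – my own illustration, not Haiku or BeOS code, with the server and message format entirely invented: an OS service (a “file server”) runs as an ordinary process in its own protected address space and receives requests over an IPC channel, with a POSIX pipe standing in for the kernel’s message port.

```c
/* A minimal sketch of the microkernel idea: an OS service (here a "file
 * server") runs as an ordinary user-space process in its own protected
 * address space and receives requests over an IPC channel. A POSIX pipe
 * stands in for the kernel's message port; the request format and the
 * server itself are invented purely for illustration. */
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

struct request { int op; char path[64]; };   /* hypothetical wire format */

static void file_server(int rd)              /* the user-land service */
{
    struct request req;
    while (read(rd, &req, sizeof req) == (ssize_t)sizeof req) {
        if (req.op == 0)                     /* op 0: shut down */
            break;
        printf("file server: op=%d path=%s\n", req.op, req.path);
    }
}

int main(void)
{
    int ch[2];
    pipe(ch);                                /* stand-in for an IPC port */

    if (fork() == 0) {                       /* the service gets its own  */
        close(ch[1]);                        /* process and address space */
        file_server(ch[0]);
        _exit(0);
    }
    close(ch[0]);

    struct request open_req = { 1, "/boot/home/notes.txt" };
    struct request stop_req = { 0, "" };
    write(ch[1], &open_req, sizeof open_req);/* a "client" sends messages */
    write(ch[1], &stop_req, sizeof stop_req);
    close(ch[1]);
    wait(NULL);
    return 0;
}
```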
Why not Linux?
So, I’m choosing a new OS which isn’t finished at the time of writing, then I want to tear the core apart and use a technique known to reduce performance… There is method in my madness. Linux is tried and tested, with many drivers and good software support, and it’s fast. Let’s be pragmatic: apart from being different, is there any good reason for not just starting with Linux?
Yes. Macrokernel based OSs (i.e. Linux) are faster than microkernel OSs because they were primarily designed for, and run on, single processor systems. I assume most microkernel experiments have also been on single processor systems.
This is important because, as I explained in part 1, this new system will not be a single processor system; it is based on the idea that the hardware will have at least 2 general purpose cores and that number will increase in the future, so the OS should take full advantage of this fact.
By breaking up the components of the kernel into user-space tasks, as a microkernel does, they can be run in parallel across multiple CPU cores; the individual components each do their own job and do not need to explicitly support multiprocessing.
Running a macrokernel based OS over multiple CPUs adds complexity, as many already complex parts have to be able to run simultaneously. And since a macrokernel uses a single address space, its components do not have the benefit of memory protection: a toy app like Xeyes gets memory protection, yet the network stack, file system and all the most critical parts work together in a single shared memory space.
Breaking up functionality into discrete parts means components are simpler, so they are less likely to have bugs and are easier to maintain if they do. This design also reduces the possibility of bugs in one part crashing another, since everything is compartmentalised in its own memory protected space; reliability is built into the design, not just the code.
This is not to say Linux is unreliable; in my own experience Linux is a highly reliable system. What it does mean is that Linux could suffer the same issue as Windows, where badly written drivers can lead to system instability. In the system I am advocating a badly written driver could not crash other system components (in reality it probably can cause problems, but it would have to be really badly written to do so).
Will this not lead to performance issues?
In single processor systems there is no doubt that there is a negative performance impact. However as I explained in part 1 the hardware for this system is based on a multi-core processor and this changes things.
In order to understand what will happen we first need to know exactly what causes the microkernel performance impact in the first place.
A context switch occurs when one task has to stop and let another run. A context switch involves saving the processor “state” to RAM (the contents of the data and control registers), an operation which can take tens of thousands of clock cycles. These happen more in a microkernel based OS because the kernel functionality is broken up into different tasks and they all need to be switched in and out of the CPU to operate.
This is a problem because performing the switch takes time during which the CPU cannot do any useful work. Obviously, if there are thousands of switches per second, performance is going to suffer. More importantly, a context switch can cause part of the cache to be flushed, and this has a bigger negative effect on subsequent performance than the switching itself.
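To get a rough feel for this cost, here is a toy measurement of my own (not from the article): two processes bounce a single byte back and forth through pipes, forcing the scheduler to switch between them on every hop. The absolute numbers depend entirely on the machine, and the test says nothing about the cache-flush effect just described, only the raw cost of crossing between tasks.

```c
/* A rough way to feel this cost (my own toy benchmark, not from the
 * article): two processes bounce one byte back and forth through pipes,
 * so the kernel must switch between them on every hop. Results vary
 * enormously between machines and ignore cache effects entirely. */
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

int main(void)
{
    enum { ROUNDS = 100000 };
    int ping[2], pong[2];
    char b = 0;

    pipe(ping);
    pipe(pong);

    if (fork() == 0) {                       /* child: echo every byte */
        for (int i = 0; i < ROUNDS; i++) {
            read(ping[0], &b, 1);
            write(pong[1], &b, 1);
        }
        _exit(0);
    }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < ROUNDS; i++) {       /* parent: send, wait for echo */
        write(ping[1], &b, 1);
        read(pong[0], &b, 1);
    }
    gettimeofday(&t1, NULL);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    printf("%.2f microseconds per round trip\n", us / ROUNDS);
    wait(NULL);
    return 0;
}
```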
The macrokernel approach does not suffer these performance issues as everything is in the kernel and it doesn’t need to context switch when instruction flow switches between different internal parts of the kernel.
All that said, a well designed microkernel need not be slow. Microkernels can, to some extent, get around part of the context switch speed hit by putting messages together and transferring them en masse (asynchronously), which reduces the number of context switches necessary. Unfortunately much of the research into microkernels has been done on Unix, and the synchronous nature of Unix’s APIs [async] means asynchronous messaging is not used; this results in more context switches and thus lower performance. As such, microkernels’ reputation for being slow may be at least partially undeserved.
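As a sketch of what batching buys – again my own toy code, not BeOS’s real message API – small messages are queued locally and handed to the kernel in a single writev() call, so dozens of messages cost one kernel crossing instead of one each. A pipe stands in for whatever IPC channel the kernel provides, and the message format is invented.

```c
/* Toy illustration of message batching: instead of one kernel crossing
 * per message (the synchronous style), messages are queued locally and
 * handed over in a single writev() call. The pipe stands in for an IPC
 * channel and the message format is invented for illustration only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/uio.h>

#define BATCH 32

struct msg { int type; char body[28]; };     /* 32 bytes per message */

static struct msg   batch[BATCH];
static struct iovec iov[BATCH];
static int          pending;

static void send_async(int fd, int type, const char *body)
{
    struct msg *m = &batch[pending];
    m->type = type;
    strncpy(m->body, body, sizeof m->body - 1);
    m->body[sizeof m->body - 1] = '\0';
    iov[pending].iov_base = m;
    iov[pending].iov_len  = sizeof *m;
    if (++pending == BATCH) {                /* full: flush the whole   */
        writev(fd, iov, pending);            /* batch in one crossing   */
        pending = 0;
    }
}

static void flush_now(int fd)                /* e.g. before blocking */
{
    if (pending) {
        writev(fd, iov, pending);
        pending = 0;
    }
}

int main(void)
{
    int ch[2];
    pipe(ch);
    for (int i = 0; i < 100; i++)            /* 100 messages end up as  */
        send_async(ch[1], 1, "hello");       /* only 4 kernel crossings */
    flush_now(ch[1]);
    printf("sent 100 messages in 4 writev() calls\n");
    return 0;
}
```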
Remember though, at the base of this system is an exokernel. A traditional microkernel passes messages through the kernel to their destination. An exokernel doesn’t deal with the message itself; it just tells the destination there is a message and leaves it to deal with it. This reduces the messaging overhead.
BeOS used this asynchronous messaging technique and it is indeed a very fast OS. However, the network stack proved to be a performance burden outside the kernel and was later moved inside, which did boost networking performance. That version was never commercially released by Be, but it is part of Zeta [Zeta] (the only legal way to get a full BeOS today).
Using Multiple Cores
The difference with multiple CPU cores is that separate kernel components can run simultaneously on different cores, so they will not need to context switch as often. Messages still need to be sent between the cores, but this will not have the same overhead as a context switch and will have a much smaller impact on cache performance.
Using multiple cores along with asynchronous message passing could lead to our exo/microkernel based OS outperforming a synchronous macrokernel OS. Each message pass or function call takes a fixed amount of time; asynchronous message passing allows bigger chunks of data to be passed in one go, which reduces the number of times messages are passed compared to a system which uses synchronous API calls.
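A sketch of that multi-core argument, using plain pthreads and invented component names (threads stand in for the separate servers, which in the real design would live in their own address spaces): two OS “components” run at the same time on different cores and hand work to each other through a shared queue rather than taking turns on a single CPU.

```c
/* Sketch of the multi-core argument (invented names, plain pthreads,
 * nothing BeOS/Haiku-specific): two OS "components" run as their own
 * threads, so on a machine with 2+ cores they execute in parallel and
 * hand work to each other through a shared queue rather than taking
 * turns on a single CPU. */
#include <pthread.h>
#include <stdio.h>

#define SLOTS 64
#define ITEMS 100000

static int queue[SLOTS];
static long head, tail;
static int  done;
static pthread_mutex_t lock     = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  nonfull  = PTHREAD_COND_INITIALIZER;

static void *net_stack(void *arg)            /* "network stack" thread */
{
    (void)arg;
    for (int pkt = 0; pkt < ITEMS; pkt++) {
        pthread_mutex_lock(&lock);
        while (tail - head == SLOTS)         /* queue full: wait */
            pthread_cond_wait(&nonfull, &lock);
        queue[tail++ % SLOTS] = pkt;         /* hand a packet over */
        pthread_cond_signal(&nonempty);
        pthread_mutex_unlock(&lock);
    }
    pthread_mutex_lock(&lock);
    done = 1;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void *file_system(void *arg)          /* "file system" thread */
{
    (void)arg;
    long handled = 0;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !done)        /* queue empty: wait */
            pthread_cond_wait(&nonempty, &lock);
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            break;
        }
        head++;                              /* consume one item */
        pthread_cond_signal(&nonfull);
        pthread_mutex_unlock(&lock);
        handled++;
    }
    printf("file_system handled %ld items\n", handled);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, net_stack, NULL);
    pthread_create(&b, NULL, file_system, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```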
The very technique which reduces microkernel performance on a single core, and boosts macrokernels, may have the complete opposite effect on multi-core CPUs, leaving microkernels as the higher performing OS architecture. This won’t happen immediately, but the effect will become more apparent as the number of cores increases and the different parts of the OS can get their own core.
It could be argued that even when spread apart like this, an application running on multiple cores will make the OS components switch out, causing performance loss. This is of course a risk, but anything with high computation needs is more likely to be running on the Cell processors, which will not be handling the OS. Remember, this is a desktop, so the CPU cores are likely to be sitting around doing nothing most of the time. Many like to discuss the relative merits of the performance of OSs and hardware, but very, very few actually utilise that performance.
It should be pointed out that the 2.6 Linux kernel includes asynchronous I/O. Asynchronous messaging is being added to a FreeBSD based OS by the DragonFly BSD project [Dragon].
So no Linux?
Linux (or *BSD) still has advantages of course, but the microkernel approach looks like it can deliver not only all the inherent advantages of a microkernel design but may also have a performance advantage in its favour. This approach is also consistent with the guiding principle of simplicity I set out in part 1 and gives us a chance to explore OS design from a new angle and see what the results are.
While the system will for the most part act like a microkernel based OS, the fact that it’s really using an exokernel adds the ability to have applications almost completely bypass the OS and hit the hardware directly in a safe, shared manner. Hitting the hardware is a somewhat frowned upon approach, but it will be useful for applications which use the FPGA and has the potential to allow massive application speed-ups [Exo]. It also allows something else which will be rather useful…
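The closest everyday analogue I can offer is mapping the Linux framebuffer device straight into an application’s address space: pixels are written with no drawing API in between. The sketch below is only an analogy for the style of direct, safely multiplexed access an exokernel would hand out, not part of the proposed system; it needs access to /dev/fb0 and will scribble over whatever is on the console.

```c
/* Analogy only: mapping the Linux framebuffer device into a normal
 * application's address space, so pixels are written with no drawing
 * API in between. This is the style of direct, multiplexed hardware
 * access an exokernel would grant; it needs /dev/fb0 and will scribble
 * over whatever is currently on the console. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

int main(void)
{
    int fd = open("/dev/fb0", O_RDWR);
    if (fd < 0) { perror("open /dev/fb0"); return 1; }

    struct fb_var_screeninfo vi;
    if (ioctl(fd, FBIOGET_VSCREENINFO, &vi) < 0) { perror("ioctl"); return 1; }

    size_t len = (size_t)vi.yres * vi.xres * (vi.bits_per_pixel / 8);
    uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

    for (size_t i = 0; i < len; i++)         /* write straight into the */
        fb[i] = (uint8_t)i;                  /* mapped video memory     */

    munmap(fb, len);
    close(fd);
    return 0;
}
```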
Compatibility With Legacy Systems
Just because you are producing a new system doesn’t mean you have to do without any useful software; these days there are some interesting ways of getting good applications running quickly even in a completely new system.
A major problem for any new platform is the lack of applications. Emulators are available of course but are complex and don’t generally perform as well as native processing, often a great deal worse.
These days you can get almost every application you need for Linux or other Unix-like operating systems. Open source means that one option for achieving legacy compatibility is to include an entire OS; all you need to do is make sure it works on your hardware.
Dual booting is a pain however and ultimately you want to run the applications from within a single environment alongside your new platform applications.
You can do something like this using a virtualising layer, e.g. Mac-on-Linux allows you to run OS X on top of PowerPC variants of Linux, but as always the virtualisation impacts performance and the applications are still in their own environment.
There is another way, however: combining a specific high performance virtualisation layer with existing X Windows technology.
The virtualisation layer is called Xen [Xen]; it enables multiple operating systems to run on the same hardware at almost 100% of native performance [XenPerf] by sharing the hardware between the OSs. The use of X Windows would allow the display to be used within the main OS.
So, we can get a whole set of up-to-date applications without ever having to leave the main OS, by including a Unix or Unix-like OS and running it alongside the main OS. This could potentially be a usability nightmare, but that can be fixed with care.
The main OS could also take advantage of subsystems in the secondary OS; e.g. the USB support in the secondary OS could be used if the primary OS did not support all USB devices (many USB devices do not adhere to the standard and thus have to be specially supported). This is a hack, but a useful and justifiable one for a new OS which will be limited when it is starting out.
By incorporating a Xen-like technology into our new OS we can get full performance in the main system and close to 100% in the second OS. The microkernel/exokernel design I have proposed for the new OS would allow this; Xen itself is an exokernel and drivers already exist which allow Linux to run on it.
So after all that we do use Linux (or BSD) after all…
What about drivers?
The advantage of integrated systems (where the hardware and software are done by one company) is that the main drivers are going to be relatively easy to get from companies, since you are purchasing their products. The problem comes when you want to support products you do not supply, as anyone who has worked on alternative OSs can tell you. If you can get documentation that’s fine, but short of paying for them, getting drivers for any new platform is going to be a hard slog.
Starting from a clean sheet will always have costs but also has many advantages. By using the microkernel approach, drivers in particular can be treated differently, giving the OS abilities like drag-and-drop installation of drivers while the system is running (à la BeOS). It will also be possible to restart even fairly critical parts while the system is live – rather useful if you are working on something you haven’t saved yet.
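A toy supervisor along those lines – entirely my own sketch, with a hypothetical driver binary name – runs a driver as its own process and, if it crashes, simply starts a fresh instance while everything else keeps running:

```c
/* Toy supervisor for the "restartable drivers" point: the driver runs
 * as its own process and, if it dies, a fresh instance is started while
 * the rest of the system keeps running. The driver binary name is
 * hypothetical; this is a sketch, not the proposed OS's mechanism. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static pid_t start_driver(const char *path)
{
    pid_t pid = fork();
    if (pid == 0) {
        execl(path, path, (char *)NULL);     /* become the driver */
        _exit(127);                          /* exec failed */
    }
    return pid;
}

int main(void)
{
    const char *driver = "./usb_driver";     /* hypothetical driver binary */
    pid_t pid = start_driver(driver);

    for (int restarts = 0; restarts < 5; restarts++) {
        int status;
        waitpid(pid, &status, 0);            /* block until it exits/crashes */
        fprintf(stderr, "driver exited (status %d), restarting\n", status);
        sleep(1);                            /* simple back-off */
        pid = start_driver(driver);          /* the system never went down */
    }
    return 0;
}
```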
Conclusion
So we have an exokernelified (spell check that!) version of Haiku for a base OS, but the system can equally run existing Unix software in the same environment. The OS services would be provided by the upper level Haiku “kits”, so we get the same capabilities as BeOS. But that’s not all: if we are talking about a new system we can go further and make them better.
In part 3 I shall cover what I’d like to do with security, the file system and file management.
References / Further Information
[RubyList] Ruby OS list of best features.
[Haiku] Haiku (previously OpenBeOS).
[Licenses] Discussion of the use of open source in commercial projects.
[Micro] Page about microkernels.
[Kernel] Introduction to different kernel types.
[async] Unix uses synchronous messaging and this puts microkernels at a disadvantage; an informative posting.
[Zeta] Zeta is the continuation of BeOS from yellowTAB.
[Dragon] There are efforts to add asynchronous messaging to Unix; one of the goals of the DragonFly BSD project is to add asynchronous messaging to a FreeBSD based kernel.
[Exo] Further reading on exokernels: slideshow, papers.
[Xen] Xen allows multiple OSs to share hardware: the Xen project.
[XenPerf] Performance is very close to that of native systems: Xen performance.
© Nicholas Blachford July 2004
About the Author:
Nicholas Blachford is a 33 year old British ex-pat, who lives in Paris but doesn’t speak French (yet). He is interested in various geeky subjects (Hardware, Software, Photography) and all sorts of other things especially involving advanced technologies. He is not currently working.
If MS had used an open source OS to base their Longhorn OS on, they could have cut their development time drastically. This is why Apple will have a stable, mature OS (I’m talking about Tiger here) with lots of 3rd party support and all the features of Longhorn (and more) out about 2 years before Longhorn will be out.
What’s the point in writing a kernel from scratch when there are a few free ones out there already that work fine? What’s the point in developing a new printing system when CUPS is already there? There’s lots of free stable software out there already for UNIX (and Linux and all the other *nixes out there) so why not use it?
Wouldn’t it be better to use an existing OS that’s stable and secure and then build cool interesting stuff on top of it like searching, video, sound, nice UI, easy networking etc… I’m not talking about Apple here specifically but they’ve obviously made really great decisions with OS X.
Because people want to differentiate, set themselves apart. If you are an OS enthusiast like me, then after distro #478 you’ve seen it, y’know, you want something else.
That’s why people still create other OSs; it’s called innovation. I mean, if we were to follow your idea, we’d all drive the same cars except they’d have a different color.
I’m a little confused by his statement that you should not use Unix (because it uses a monolithic kernel, I guess?), and then cites Linux and OSX as examples of Unix. I know we like to refer to them as Unix, because they behave like Unix in multiple ways, but there are major architectural differences between each of these OS’s. I’m not sure where we draw the line between “Unix” and ‘Not Unix”, and why.
For example, OSX’s kernel is actually Mach, which is a microkernel, which is what he’s advocating. Why is Haiku the best choice? Didn’t BeOS have many Unix-like qualities, too? Didn’t it have a bash-like command line (I don’t remember that well)? If MacOS is Unix because it mimics some Unix behaviors, then might we say BeOS is too, or might we say that everything is, except Windows? Or even Windows has stolen some things from Unix.
I mean, I liked BeOS, and was really hoping it’d succeed, but if you wanted to make the ‘ideal OS’, might it be better to take the best fully working OS and fix what you don’t like about it than to take a non-working OS and make it work and then fix what you don’t like about it?
Summary: You say Haiku is the best candidate for the ‘Next Generation OS’ because it uses a microkernel. But why is this not-yet-working attempt to reverse engineer BeOS superior to other (working) OS’s which have microkernels?
Hurd Day’s Night?
http://www.gnu.org/software/hurd/hurd.html
From the author: This wasn’t a flame or anything. It’s an honest question.
I believe you will get more mileage for your money by using parts of OSS. Designing an entirely new OS from the ground up is a cool hobby but could never become mainstream.
Designing an entirely new OS from the ground up is a cool hobby but could never become mainstream.
Yes, because no one has ever created an OS from scratch….
[remembers all OS’ were, at some point, made from scratch]
…I mean…no private individual has ever started an OS from scratch as a hobby, and had it become mainstream. Use Linux instead….
[starts making rapid hand movements to distract you from mention of Linux]
…Linus who?
UNIX IS NOT PERFECT!!!
Indeed, it is NOT.
Sorry for putting it in bold-face, but a common misconception here in the comments section seems to be that UNIX is perfect, because it is open source and stable and reliable and and and and and and that therefore every OS that is not based on UNIX can only be crap.
Yet that is not true. You only have to look at the tons of “Is Linux ready for the desktop” articles to see that it is not perfect. And the argument about open source is no more valid than saying that the Itanium, the PowerPC and the AMD64 should be abandoned because there is more software for x86.
Yet I hear you already: “But Mac OS X? Doesn’t it solve many of the problems of other unices?” Yes, it does. But to do this, it hides everything that reminds you of UNIX from the end-users. So in the end, I think Apple could just as well have started from scratch; it would only have cost more money. And the result wouldn’t have been a nice layer to cover up the problems with the underlying system.
It appears to me that the best candidate for a ‘Next Generation OS’ would be based on the QNX microkernel. The kernel is quick and has great interrupt handling for drivers. Although it is not open source, the OS also has great potential in its core set of applications. The Photon GUI is a very clean graphical interface. Many open-source development tools and libraries have already been ported over to the OS. It seems to me that with more development on that platform, it would make a wonderful next-gen OS.
I believe you will get more mileage for your money by using parts of OSS. Designing an entirely new OS from the ground up is a cool hobby but could never become mainstream.
And Linus started Linux as what, hmm?
Linus Torvalds did not write an operating system from scratch. He only wrote the kernel (or at least the largest part of the kernel), which comprises only a small part of an entire operating system.
That said, I believe there are a few operating systems which have been written mostly from scratch by a single person. SkyOS and AtheOS come to mind. Could anyone comment on those?
‘a common misconception here in the comments section seems to be that UNIX is perfect, because it is open source and stable and reliable‘
Unix isn’t open source. Linux is Open source. Linux was designed to behave like unix under many conditions, but it isn’t Unix. Neither is OSX. At least if you want to get technical or anything. Both Darwin and Linux have a lot of the same strengths and weaknesses of Unix, but that doesn’t make them the same operating systems.
Yes, they are often called “Unices” or “Flavors of Unix” or some variation on that theme, but it’s an oversimplification. They’ve all chosen to adhere to certain file-system conventions and certain ways of dealing with devices, and they use similar (or the same) command-line shell. Things like that. And they tend to share things in common, like tools, utilities, and applications, and windowing systems, but that’s not the OS.
Under the surface, they aren’t all the same thing. And when you get down to it, you can keep a lot of the ‘under the surface’ and ditch the surface, and make the OS not appear very Unix-y at all. Or, you can do what Apple did (or is it more correct to say NeXT, or someone else?), and take a microkernel, add a surface to make it look/work like Unix (for the benefit of unix-geeks), and add another surface to look/work sort of like the old MacOS (for the benefit of MacOS users).
But don’t let BASH fool you. These are all different operating systems.
Don’t forget it was a UNIX clone and that was a long time ago. Writing a full OS from the ground up is too large a task because it would take so much time before it was functional enough to use (and make money to support it). It just is not practical; building a new OS without using at least parts of open source (gcc, toolkits, applications, shell env etc.) is a costly task.
The Linux kernel is fine, and using it (or some of it) reduces a great deal of overhead in writing drivers etc. An example of this is L4Linux.
Doing all of this from the ground up is something even Microsoft could not afford to do. Even if they could, it would take them several years.
Linus who ??? ;0)
Supposedly VLIW and EPIC were as far above RISC as RISC was beyond CISC. Itanium was supposed to embody the VLIW/EPIC architecture, yet AMD came out with amd64.
I noticed in both articles no mention of Itanium and VLIW as the CPU of the future.
I would think a 16-bit RISC like SuperH or ARM with multiple cores would be the future.
The game consoles do not have to worry as much about backward compatibility.
Why doesn’t M$ or Sony or Nintendo implement these futuristic technologies? I know Sony is going multi-Cell.
Nice article Nicholas! .. good reading 🙂
I did not read the article- I just skimmed through it, so please excuse any errors I have.
I think that it is a bad idea to use Linux for a next-generation computer because Linux is not the next generation. The author has a point about the macro-kernel design of Linux, but the real problem with that is that any OS that uses the Linux kernel is going to look a lot like Linux/Unix. GNOME/KDE do, and so does Mac OS X (it does not use the Linux kernel, but has a Unix-like base). Sure, you might say something about TiVo, but you wouldn’t know it ran Linux unless you found out from someone else. If you spend all your time in one program and put a heavy cover on it, it will not look like Linux, but what is the point of doing that?
A next-generation OS needs new ideas, but I have not seen many new ideas in Linux (not the kernel, the whole package, so that might be better said as GNU/Linux, but I am saying Anything/Linux). If anyone reading this knows of an operating system package like GNU that runs on Linux but does not “taste” like Linux, please tell me. I don’t think it is too easy. We need to write OSs from scratch, with no old ideas, if we want them to run on a next-generation computer.
I see one big problem with most of the comments here. Many people are asking why Linux “is bad” – “why not use Linux”? Well, I think that you all should re-read this article, which is talking about a next-generation OS and NOT about today’s generation. There is no perfect OS, and there is no perfect solution. Linux might be good for now, it may be good for the next 5 or 10 years, but the world is changing. We need new OSes, we need new hardware, we need progress – new, fresh ideas. And it’s quite simple to understand: when we have NEW hardware and new types of CPU architectures, we also need new models of OS kernels. I don’t know if exokernels will be the future, and I don’t know if they are the best solution. But I believe we need to think about new types of kernels… and simply try them. Trying to write a fast, secure and stable exokernel based OS is, I think, a great task for a hobbyist, for a scientist and even for a commercial company. I personally can’t wait to see something really new! And from my point of view, discussions like “why not use Linux?” are quite pointless.
BeOS hasn’t been a true microkernel in a very long time, if ever, and neither has Linux. (Minix is, though.) Both Linux and the BeOS kernel can load modules, like drivers, etc., running in kernel space – as opposed to being completely monolithic, where everything would have to be compiled into one single kernel image. Some honest-to-God microkernels are QNX/Neutrino, GNU Hurd, L4, Mach, and many more. MacOS X does not, however, have a microkernel. MacOS X, or rather its open-source underpinnings, named Darwin, is a combination of Mach and a BSD kernel in the same memory space, almost the opposite of a microkernel. It may sound like L4Linux, but it’s not. At least that’s not the impression I’ve got. Feel free to correct me!
You’re rehashing the same old Andrew Tanenbaum and Richard Stallman tripe that got us GNU Hurd and other even more hopeless projects. Microkernels were politically correct back in 1990.
An async API is neat, but not neat enough to justify breaking away from UNIX.
If you want something cool, implement a persistent system. When I yank out the power cord and then put it back in, I should get back to where I was, losing no more than 5 seconds worth of work.
Make filenames optional or eliminate them entirely. Let me look up files by attributes that a human would remember: “open the project that I was working on 2 weeks before last Christmas” or “open the pictures that I sent to Jason”.
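A brute-force sketch of that lookup-by-attribute idea (my own illustration; a real attribute or query system such as BeOS’s BFS queries or Spotlight would use an index rather than walking the disk): match files on when they were last touched instead of on their names.

```c
/* Brute-force sketch of looking files up by an attribute a human would
 * remember (here: modified in the last two weeks) rather than by name.
 * A real query system (BFS queries, Spotlight) would use an index; the
 * tree walk is only to show the idea. */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <time.h>
#include <sys/stat.h>

static time_t cutoff;

static int visit(const char *path, const struct stat *st,
                 int type, struct FTW *ftw)
{
    (void)ftw;
    if (type == FTW_F && st->st_mtime >= cutoff)
        printf("%s\n", path);                /* matched on the attribute */
    return 0;                                /* keep walking */
}

int main(int argc, char **argv)
{
    const char *root = argc > 1 ? argv[1] : ".";
    cutoff = time(NULL) - 14 * 24 * 3600;    /* "about two weeks ago" */
    return nftw(root, visit, 16, FTW_PHYS) == 0 ? 0 : 1;
}
```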
When you start trying to implement revolutionary changes in an operating system you have to look at more than just the foundation of the system; you have to look at the theories and concepts that are working in the foundation. Computers operate in a binary state, on or off; our languages are built on the fundamentals of being 0 or 1, true or false, all binary. To have a truly smart and innovative system we have to think of ways to move away from a two-state system. Maybe I’m talking out of my ass but it’s just an idea.
Please someone enlighten and correct me.
I have heard enough of that. This article was about creating a next generation system. Windows (longhorn) is much more likely to do that than a Unix based system. I have seen NOTHING ‘next-generation’ from a unix-like system after the 1970’s.
Also, if you were a ‘power user’ of Windows the way you probably are of a Unix based system, none of that would happen to you. Do you even use Windows, or have you experienced any of that yourself? I think that sure, Windows is buggy and insecure, but much of the problems that people have with Windows are caused by their own stupidity.
– Adware is caused by people clicking the ‘yes’ button blindly in the Internet Explorer ‘do you want to run this program’ dialog.
– Popups can be avoided by switching to a better web browser.
– Viruses and worms are downloaded from emails.
– If you even know what a trojan is, you can just as easily have one on a Unix-like system.
If you knew what you were doing you wouldn’t let your computer store any of that information.
Use Windows, and not stupidly, and then say those things.
(By the way I am using Mac OS 9)
“Writing an OS from scratch these days is a very serious business, it is a multi-year undertaking and requires many skilled individuals to do it. And that’s just the OS, then you need drivers, a compiler, developer documentation and applications.”
Well, yes, but aren’t you now “building another Unix”, which you were opposing just a few lines before? IMHO, many of these issues become irrelevant (or at least carry less weight) with the NewOS approach.
Reason? An OS based on Ruby or other such technology could _build upon_ any given flavor of Unix, or other TraditionalOS. Therefore, current compilers will do, current drivers will do, even current OSes will do. People just won’t see them and – more importantly – applications won’t see them either. That’s why they become (nearly) weightless.
Java tries to do this, .NET tries to do it. The real, totally scriptable OS environment still remains to be seen. But I think we will see it.
-ak
Linus didn’t write my desktop operating system; he started and wrote parts of the kernel. If every person that has contributed to what is “Linux” (as in a distro) today was paid for all their time, the cost would be in the ballpark of hundreds of billions of dollars.
But even the best next generation OS will fail without some sort of goal.
What exactly is the goal of such a next generation OS?
What will it do?
Why build it?
You’ve taken too much of a bottom up design, maybe it’s coming in the next article, but what’s the point of such an OS without a design for the applications that will run on it? After all nobody computes with an OS, they use applications, the OS allows applications to share the hardware.
That’s all the OS is, a set of APIs and code that allows the hardware to be shared (hopefully by multiple programs at the same time).
good article.
And good point by poster about persistent system.
That would be very useful. Heads up on that.
Linus vs Tanenbaum flamewar article.
http://www.dina.dk/~abraham/Linus_vs_Tanenbaum.html
Micro vs exo.
http://www.cbbrowne.com/info/microkernel.html
Slideshow: traditional versus exokernel. Good stuff.
http://amsterdam.lcs.mit.edu/exo/exo-slides/sld003.htm
exokernel.
http://www.pdos.lcs.mit.edu/exo.html
If MS had used an open source OS to base their Longhorn OS on, they could have cut their development time drastically.
They already have a perfectly good OS to “base” Longhorn on – Windows NT.
This is why Apple will have a stable, mature OS (I’m talking about Tiger here) with lots of 3rd party support and all the features of Longhorn (and more) out about 2 years before Longhorn will be out.
It remains to be seen if it will have all the features.
The difference between Apple with OS X and Microsoft with Longhorn is that Microsoft had already done the hard yards of writing a new OS with NT and could use that – Apple had nothing (after trying several times) so they bought NeXT.
Language, API and operating system are tightly coupled together, as Nicholas states.
Thinking about an FPGA, DSPs and multiple core processors means that you actually need languages to use them easily. Specific processors actually need specific languages, otherwise it is hard to get good performance. Ideally, these specific languages could be embedded in the main operating system language. Think about projects like Lava, a language for designing FPGAs embedded in Haskell ( http://www.xilinx.com/labs/lava/ ), Bossa for designing schedulers (recently featured on OSNews) or Devil for writing drivers ( http://compose.labri.fr/prototypes/devil/ ).
Old languages like C (or C++) are not well suited to concurrent programming, for reasons of efficiency and safety. Some features (locking, thread management, memory management, communication) need to be deeply wired into the language to be easy to use. Java and C# are quite successful on these points.
Memory management is something that should be done by computers, because computers are meant to track small details over time, not humans. To be called “next-gen”, an operating system would need OS-wide garbage-collected memory management.
Haiku is missing all these features: it’s written in C/C++ without using templates or exceptions. Even though the API was carefully designed, it was designed with these languages in mind: just look at how much the BeBook could be shortened without programmer memory management.
I read the article and the author makes some really good points, especially about context switching, asynchronous communication between programs and parts of the kernel, and OSes in general. The idea of using messages between parts of the kernel and programs is really good, and is basically what VTech used in their OS for the Helio, their one and only PDA. Abstraction of the hardware and software by using an exokernel is also a really good idea, kind of a hardware implementation of Java. All in all, some really good ideas that should be understood before bashing. And Unix is definitely not perfect, but rewriting it, or any other variant of Unix, to suit the purpose would be an arduous task that would take longer than writing a new kernel.
Wasn’t there a LISP OS some time ago? The whole OS was written in LISP! I feel sorry for the first developer!
I’m going to put my money on that Tojan is a ….at that, I won’t go there.
Anyone else want to take it?
Moving along, it seems more like what he wants IS Mac OS X.4/5 and so on.
Pliskin.
The article was interesting, at least the kernel bit, but I want to comment on some of the comments. Every time you do something revolutionary with an OS you end up with a whole lot of bugs and problems that have never been encountered before. The reason for Linux and the BSDs’ success is that they are evolutionary, just like we humans have evolved from monkeys over years of refinement. A hammer is a hammer on the basis that its shape is perfect to get the job done. A *nix is not perfect, but it’s one of the oldest OSs out there that is still in development in one form or another, and just that alone gives points to the underlying ability of the system.
Windows survives on the basis that you can’t take existing knowhow and go to any other OS but a new version of Windows. But a person can go from Red Hat Linux to OpenBSD to Solaris to Mac OS X and still find much of the same stuff (with OS X being the most odd child out I guess, never really looked into it)…
You can implement a microkernel or even an exokernel and still have a Unix. Nothing changes. Basically the OS described in the article makes me think of an extreme version of Hurd.
To have an app access hardware in Unix you have it locate it under /dev. There you will find everything the kernel supports. What you want to do is to add a part to the microkernel that works with the drivers to fill that file tree, rather than have a static one as some of the old Unixes have (Linux is in fact getting this too; hell, when it’s done you will see icons pop up on your desktop when you insert a USB storage device just like in OS X – yes, I’m talking about Project Utopia). And you can add and remove drivers in Linux without shutting the system down; just have it compiled as a module and modprobe it into place. It’s slightly slower than having it in the kernel itself, but I can’t say I’m complaining.
Yes, it would be great to have memory protection for drivers so that a memfault in one didn’t total the entire system, but this does not look like it is saving Windows. I have seen that OS bluescreen so many times over a flawed driver that, well, I don’t think memory protection alone will help. Basic sanity checking of the I/O of the drivers may be even more important.
Why doesn’t someone write a Linux kernel module that implements the L4 API? Then you could migrate the parts of the Linux kernel that you want in user space out of the kernel one at a time, maintaining a working system. Indeed, you could have a system that is tuned to having just the right amount of components inside and outside the kernel for your desired stability/performance trade-off. I think it’s just a case of extremism of ideals.
Weird, I’ve been thinking about a lot of these same concepts (especially exokernels and new OS stuff) for a while. Some meanderings on the subject are in an Aug 2003 blog: http://advogato.org/person/grey/ I’ve written about some of the exokernel stuff elsewhere.
That said, is this just an article of wishful thinking, or are you planning on writing something?
🙂 Thanks! …but syntax matters. It does.
That’s why C won over Pascal and so on… There are simply too many (((((these))))) in Lisp for me… Guess again which is my favorite!
-ak
>Let me look up files by attributes that a human would remember:
>”open the project that I was working on 2 weeks before
>last Christmas” or “open the pictures that I sent to Jason”
Check out MacOS-X 10.4 — that’s what Spotlight is supposed to be able to do.
There have been a few, I’m sure (Google for a list, I guess).
I remember a guy working on one sort of based on Scheme; it used a design with no memory protection etc., all supposed to use a trusted compiler to enforce what Scheme objects a program could touch.
http://vapour.sourceforge.net/ it seems DOA though.
please tell me that the project was a joke because the name just screams irony 😉
His arguments for not using Linux are not very strong, and are getting weaker and weaker as time goes by. Right now I’m running on a Gentoo system with a patched kernel — it uses Nick Piggin’s patch set. Performance? Very smooth, no jitter or freezing, and this is on a box that runs like a 500MHz PIII (it’s a Hush PC 1GHz on a Via chip). You want to differentiate? Patch the kernel, display, or whatever. But starting completely over makes almost no sense whatsoever. It would only make sense in the scenario that there is nothing that can be easily patched to get what you want (there is, several options in fact — if you needed a proprietary solution, use BSD and not Linux). I’ve seen alternative OS after alternative OS after alternative OS — they poke around for a while, then suddenly “take off”, developmentally, at the point where the developers start to port a lot of existing open-source apps and tools to them. Face it, creating a full, usable, app-laden desktop OS (in my opinion, there are only 3 modern OSs that qualify: Linux/BSD plus open-source apps, Windows, and Mac OS) is not possible anymore for one person or even a small team. Yeah, yeah, I know about Syllable, AROS, and numerous others. None qualify, none are “usable out of the box” for real work, and never will be. BeOS once was, in its specific niche, but no more — it’s obsolete. Its drivers don’t run on modern hardware, its apps and even ports of apps are outdated, and frankly, the other OSs, all three of them, have caught up in usefulness in BeOS’s niche. Yes, I’m aware that there are still die-hard Amiga fans plugging away on their boxes, god bless ’em, but that’s not the image of the “next gen OS” that we are talking about here.
I really do believe that defining what we want in a desktop OS (low latency and responsiveness, ease of use, apps, eye candy, for example), then taking either of the mature existing open-source OSs (BSD or Linux) and building it out of that is the best way to go. A lot of work is being done, independently, on each of these features and huge progress is being made already.
But if you wanted to differentiate, you could grab hold of some existing technologies that are as yet immature (Y-Windows, kdrive and so on) or put an experimental new interface paradigm on it, or whatever, and there ya go. Throw out the excess and don’t cry over dramatic changes, which are necessary for what you want but break compatibility (although each change like that does have a cost — compute it in).
Starting over just isn’t going to work.
Erik
I have to say that I think it is quite sad to have to compile for hours and patch up your OS just to get respectable performance. Sorry, but good performance should be standard right out of the box.
> Microsoft had already done the hard yards of writing a new OS with NT and could use that
Ahem. IIRC IBM also did a large amount of the shared work (with Microsoft) that would become NT (Microsoft extended the joint project into NT (after they both decided to split the project) and IBM into OS/2). Some of the stuff you laud should be credited to the talented IBM engineers. No, I don’t work for IBM, nor ever have, but it wasn’t solely Microsoft that came up with the technology they use today.
Dragonfly BSD is taking a pragmatic approach to the future of kernel design. They are migrating a majority of the current kernel functions to userland and making evolutionary changes towards a microkernel-like kernel (including messaging where it matters, MP-safe kernel internals, LWKT, etc.) with a goal of SSI, while still maintaining a high degree of stability and compatibility – including porting in patches and updates from other BSDs. Couldn’t this qualify as a next-gen OS project that still keeps with its UNIX roots?
A big trend is missing: using the GPU. Within five years we should be able to run the windowing system entirely on the GPU instead of the main CPU. Things like animations and font generation will also be done on the GPU.
We are going to get true resolution independent displays where programs don’t worry about pixels anymore. This is coupled with font generation by the GPU. With these features you can do zoom-in on the GPU and more detailed fonts will be computed on the fly by the GPU.
In a future system, why do you even need a division between kernel and user space? With the increasing acceptance of managed languages, everything could be in kernel space with no security context switches. The only distinction that would remain would be the managed versus unmanaged services. Legacy apps could be run in a usermode, memory-protected space, but they would be the exception. Perhaps the extra overhead of managed execution would be less than the savings of eliminating ring changes.
I read the article and I found it very interesting – just one comment: what about virtualization? I have been using VMware for quite some time now and the performance has increased dramatically since I first installed VMware 2.0. I understand the point of this article is to envision the operating system of the future, but I think there is a historical trend that is being followed. The first computing systems ran applications right on the metal (or the bulb, for that matter). As computer hardware becomes more powerful, the operating environment increases in size. Now, looking at *nix and Windows based systems, I can’t ignore the fact that applications that run on these operating systems have so many ties to the underlying operating system that in intense compute environments it is impractical to run more than 1 app per hardware box due to stability issues. Virtualization allows you to consolidate multiple workloads within their own virtual environments without having to worry about Windows .dll hell or *nix dependency conflicts.
Virtualization in enterprise mainframes has been common for a while (system partitioning), and I believe that the future of computing will have us running our home machines on some virtual environment hosted somewhere on the net, with ultra high bandwidth links to our home, car, office, everywhere.
The PC box sitting by the desk will be gone; the computer and the operating environment will be everywhere – our TV, tablet type machines and whatever other form factors make sense.
Sony “cell-based” PlayStations will be our computer in the living room. It won’t matter what the kernel type is; each device will have the type of kernel architecture that makes the most sense. The “services” we will access will be provided via virtualization.
Look at the specs for the Xbox 2: it will have enough power to virtualize an Xbox 1 for backwards game compatibility.
With the awesome hardware of the future, virtualizing any type of OS, service, system or program will be practical.
Sorry for the rant, I hope that I have not offended anyone.
Thank you for your patience,
The problem with using Linux is that you are not writing something from scratch. You might ask what the problem with that is, and it is this: Linux has not had many new ideas in the operating system department. Maybe I have not been following Linux too well, but I don’t think that anything ‘next-generation’ has come from Linux. On the other hand, when things are written from scratch (and no copying ideas!) you get something cool. Take Unix: it introduced many good ideas. Or BeOS. If we use things like Linux and backward compatibility, and the first thing we do with our OS is write in the POSIX API so we can port the coolest programs, we won’t have any time for new ‘next-generation’ ideas.
AROS and Syllable and the others will be usable, just not as early as Linux. So they need some time to work on ideas instead of stealing them all from Unix. That does not mean that they won’t be usable later. It takes time to write programs. Just because Unix had a head start of 30 years does not mean that the others are not going to catch up; they can, and hopefully they will.
I don’t think that Linux is usable out of the box now either. Get a Windows user and I am sure that even with GNOME 2.x or KDE 3.x they won’t think it is usable out of the box.
It is a waste that all the open-source developers are working on the Unix copy and not using their talents for anything new.
Look at FreeBSD 5x branch and the like for what the future holds…
Why not use binary compatibility or the like instead of shoehorning in a special environment for it?
Dude, unix does not mean Linux. Linux is a unix, but far from the only one.
Unix means either a broad family of OSes including the BSDs, Linux, and all the other assorted ones. The front end does not matter, since you can make X look like whatever you want. It just requires a proper CLI, /dev, <distro>/bin, and the like…
BTW UNIX = AIX or Solaris(depending on the year in question possibly SCO or HPUX)
Unix will dominate all and will in the end most likely be some BSD/Linux bastardization. ^_^
He did not start from scratch. It started as a Minix clone.
Hence UNIX and unix…
Simply put there is no next generation of OS that will not be badly designed.
Unix is the end-of-the-line product for what needs to be taken care of; the question to ask is just what you want to do with it. That has never been the job of an OS. The job of the OS is to allow you to accomplish that, and to do it in the best way possible.
BTW there is no need to rebuild, rewrite and throw out the past for the stuff of today. Just change the parts as needed. Far simpler than redesigning from the ground up every time you have the slightest change. About the only thing I can think of that would justify it is going to trinary or the like.
Last I checked there is hardly anything to report from the Hurd mailing list. Just a few messages and lots of spam.
——————————–
March 6th, 2004
After a long time of not being updated, new CVS snapshots of the Hurd and GNU Mach are uploaded.
July 31st, 2003
The K4 CD images are now available. See the Hurd CD page for further information.
April 30th, 2003
———————————-
Wow, so after 1 year they came out with nothing.
Hurd is dead, face it.
There is no such thing as a waste of talent when it comes to OSS, for the most part.
If you are not getting paid for it, it is yours to waste. You are not doing anything for any higher good. You do it because you want it to exist, you want it, something is ticking you off, you want to see something go down in flames, or the like. But it is always because you want it. Anyone that says otherwise is delusional.
BTW show me a unix user that thinks windows is usable out of the box. It goes both ways and is pointless to play that lame game… it all depends on what you were taught to use…
I see the biggest problem with any new OS project being a lack of drivers and software. The easiest way that I can see to alleviate this is to implement the POSIX standard (at least mostly) so that you are at least source compatible with a large base, and you could then code a service that would allow you to use the Linux graphics, sound, network, etc. drivers.
Also, while you may not want to use GNOME or KDE wholesale for whatever reason, the source compatibility would allow you to take advantage of underlying technologies like GStreamer or D-BUS. Also, in the GNOME HIG you’d have a good set of interface guidelines.
Ahem. IIRC IBM also did a large amount of the shared work (with Microsoft) that would become NT (Microsoft extended the joint project into NT (after they both decided to split the project) and IBM into OS/2). Some of the stuff you laud should be credited to the talented IBM engineers. No, I don’t work for IBM, nor ever have, but it wasn’t solely Microsoft that came up with the technology they use today.
No, the NT project was run under Dave Cutler at Microsoft and initially almost completely by a bunch of ex-DEC people Cutler had brought with him. IBM was working on the “old” OS/2 that went on to become the OS/2 2.0 and Warp (which was a completely separate code base and product to the “OS/2 NT” that was renamed to Windows NT). AFAIK there was little (if any) IBM involvement in the NT project – it would barely have been into pre-Alpha coding stages when IBM and Microsoft split (1989-90, the NT project started in 1988). Indeed, it would be more correct to reverse your statement, since the early versions of OS/2 were produced by Microsoft for IBM. Certainly HPFS and large chunks of the OS guts were Microsoft’s work – IBM were still paying Microsoft royalties for HPFS in the mid 90s.
The thing about making X look like whatever you want sounds nice, but you said it needs a good command line. The unix-like operating systems (using that term so I won’t be corrected) were designed for the command line, and people who are seriously using unix-like OSs (grammatically correct?) have an xterm up or have some unix-like-operating-system guru at their side. You can’t change it. Unix-like operating systems will generally stay command line (there are a few exceptions – TiVo? not really an operating system… Mac OS X I think is a really good job by Apple, but a whole lot of code to cover it up). I don’t think there will ever be a desktop Linux that home users (without a guru at their side or without command-line knowledge) will be able to use – but who knows? I might be wrong. Back to what I was saying: the front end to the computer includes the command line if that is what you use to communicate with the computer. It does not matter what the WM does.
I don’t understand what you meant by the “Unix is the end of the line…” thing. Were you saying it was a good thing or a bad thing? I probably got this wrong, but what is wrong with communicating with the computer by asking it what to do? (If you are confused, so am I; please clarify what you said.)
Anyway, I think that after we finish with the GUI phase of operating system interfaces, we will go back to asking the computer what you want it to do, but without learning a command line – in English (or Spanish, French, German, Japanese……..). If the point of a computer is to make it easier to get stuff done, we should use the method we use to tell people to get stuff done: language. I have not ever heard someone complain that telling someone to do something was not usable out of the box!
No clue about Linux, but for X it is entirely possible. Just because no one has done it for X yet (or at least in a way you like) does not mean it is impossible; if you want something, learn C/C++ and make it so.
What I said was neither good nor bad, it just is. Everything in an OS that will be needed from now on has been done in UNIX first, so basically all that comes after it falls under unix. ^_^ Looks may change as well as names, but it will all behave like unix. Windows is getting more and more that way all the time. lol
Making a computer easy to use is simple… require that everything be pluggable.
As far as languages go… just worry about English; in the long run, thanks to the English, it will kill all the others off in time. Thus the world is left a better place thanks to their imperialistic policies ^_^
Uhm, it has always been about making things as easy to do as possible since day one… the problem boils down to people not wanting to learn, and lame schools.
Please stop with the “BeOS (Amiga) is the future, and the best OS ever designed” stuff.
Other than the (few) people who just can’t let go, there is NO interest in these OS’s. That’s why they failed.
The Amiga had some great forward looking ideas, but it was poorly implemented. It was very hard (as I remember) to program for, and was very unstable as a programming environment. Despite its sophistication for its day, it’s hopelessly outclassed now. We keep hearing promises about new versions, new machines, new programs, etc., but it never happens. It won’t either, because they would have to sell a million machines to be viable, and that can’t happen.
The BeOS was originally expected to be sold to Mac users. I know, because Gassé (I spelled that wrong) came down to my MUG in NYC to show the BeBox and said that he considered it an alternative OS for Mac users. It only went to x86 when Apple stopped giving them the APIs for it to run on Macs, and the BeBox died. It was very nice, and we oohed and aahed when it ran two video streams at once. But most people, such as myself, had many problems with it, and most of the promises Be made never came to pass. I bought almost all of the software made for it, but never opened most of it.
For either of these OS’s to be thought of as the future OS, even with extensive mods is a bit naive.
Linux started as a server OS with Apache; when that fairly simple task seemed to work, it slowly moved further afield, being added to bit by bit (literally).
Then the hobbyist programmers came into it, and now there is a slew of distros, each one thinking that it’s the only ONE. Most of those people are here on the Linuxnews.com, er, OSnews.com, web site.
Somehow, whatever the topic starts out as, it ends up with “my distro is better than yours”.
Please, let’s have authors who are pros, with years of experience, giving us articles of this complexity. And Linux, for all its virtues, is not the OS of the future in the sense that that is meant, though it will no doubt be successful.
I think that Microsoft has “Coplanded” Longhorn. It seems that they have gotten themselves into something that they can’t finish. Someone here said something about Longhorn and NT; Longhorn is not based on NT. It will be at least three years late, and if it does come out, it will arrive without its most defining features. Microsoft said that the OS-as-database concept will be pushed back to 2009. That’s assuming that Longhorn will come out “sometime in 2006”, and not in 2007 or 2008, as some have said.
Good luck to them.
I think that we will see Windows, Mac OS X, and Linux around for a long time to come. It’s too late for new systems at this point at the desktop level.
Someone here said something about Longhorn and NT. Longhorn is not based on NT.
That was me, and Longhorn *is* NT. It is (or will be) Windows NT 6.0 (maybe even 6.1 by the time it gets released). Do not believe the marketing spin – it’s not a “from scratch” project in any way, shape or form (for a start, they don’t have enough time). If you want something a bit more concrete, consider screenshots like this: http://www.winsupersite.com/images/reviews/lh_alpha_054.gif
It’s a significant revision, to be sure – even more significant than the shifts from 3.51 to 4.0 and 4.0 to 5.0 (Win2k) were, but it’s still just a major point revision, not a new product or codebase.
Microsoft said that the OS as database concept will be pushed back to 2009. That’s assuming that Longhorn will come out “sometime in 2006”, and not in 2007, or 2008, as some have said.
Microsoft have been talking about this concept and other pie-in-the-sky ideas for NT (veterans of the industry should remember the codename “Cairo”) since the early 90s. They’ve been pushing it back since then as well, so I wouldn’t be holding my breath either.
I think that we will see Windows, Mac OS X, and Linux around for a long time to come. It’s too late for new systems at this point at the desktop level.
If Microsoft went bankrupt tomorrow it would still take 5+ years to reduce Windows to a non-majority market share, and probably closer to 10 to get it down to the levels that non-Windows OSes have today. The computer industry is starting its maturation process, and inertia is becoming an even more significant issue.
“‘a common misconception here in the comments section seems to be that UNIX is perfect, because it is open source and stable and reliable and and and and and and (…)’
Dude, unix does not mean Linux. Linux is a unix, but far from the only one.”
Sure, I know that, I have run Solaris for some time myself and am typing this on OpenBSD!
And why did I choose that word? Simple: because all UNIX-like OSes have the same reputation for being stable and reliable. And in the comments section of an article that doesn’t talk about Mac OS X, Solaris, or some other commercial OS, the general idea seems to be that UNIX(-like) == open source. So that’s why I wrote UNIX instead of Linux.
Most comments are about OSes that run on a desktop-like computer.
But what about all the techniques used in a mainframe OS?
I think the future OS will and can run on your wristwatch as well as on distributed/clustered/networked mainframe hardware.
Please stop with the “BeOS (Amiga) is the future, and the best OS ever designed” stuff.
Other than the (few) people who just can’t let go, there is NO interest in these OSes. That’s why they failed.
Errm, Zeta is trying to commercially revive BeOS, and Haiku is trying to do that in an open-source way. And SkyOS uses more or less the same vision as the creators of BeOS had: power through simplicity.
BeOS itself might be pretty much dead (although some of us still use it, including me, on a daily basis), but its vision, goals and ideas are far from dead.
Sorry for the preacher-like tone, everyone.
Hey everyone, let’s build a Next-Next-Generation OS. Let’s start early and beat these Next-Gen-OS guys to the punch.
Can someone pass me that J?
After reading some posts I swear I’m the only pothead commenting in this thread who isn’t high.
Linux is cool technology, but don’t let that stop you. I encourage creativity. Even when you’re stoned.
Here is some rant…
http://www.linuxjournal.com/article.php?sid=6105&mode=&order=&thold…
And let’s not forget the excellent kernelthread…
http://www.kernelthread.com/mac/osx/
For me, in the end it’s the user experience that counts.
Sure, I would like some working _desktop_ Linux that Windows people can turn to and never look back from, but looking at the last 10 years of development… it’s not going to happen.
What do we need another Linux flavour for?
Unix was deprecated the moment Plan 9 came out. And Plan 9 was developed by the same people who developed Unix.
Let’s face it, users decide if they like an OS or app by the way they interact with it. I wonder if people would hate Windows as much if it had all the nice GNU command-line utilities and KDE and KDM running on it.
This is the point I was trying to make in my initial post. If you want to make a cool and interesting OS, use the already-made “boring” stuff like the networking and the kernel, and then innovate with the GUI and the apps.
Microsoft should’ve bought BeOS a few years ago and based Longhorn on that. They would’ve had an incredibly responsive OS, and they would’ve saved millions of dollars and several years of development time.
Not really. What would have happened is that they’d have seriously f*cked BeOS up to get it compatible with ‘legacy’ Windows programs and then added a bunch of their other crap to it. It would have ended up bloated and slow. Or maybe not. Who knows… Windows 2000 is plenty fast on my 1 GHz box.
Bull, complete bull.
Linux is just an OSS unix clone, and far from the only one. unix in the generic sense does include Linux, but UNIX does not mean Linux.
Quoting melgross…
“Please stop with the ‘BeOS (Amiga) is the future, and the best OS ever designed’ stuff.
Other than the (few) people who just can’t let go, there is NO interest in these OSes. That’s why they failed.
The Amiga had some great forward-looking ideas, but it was poorly implemented. It was very hard (as I remember) to program for, and was very unstable as a programming environment. Despite its sophistication for its day, it’s hopelessly outclassed now. We keep hearing promises about new versions, new machines, new programs, etc., but it never happens. It won’t either, because they would have to sell a million machines to be viable, and that can’t happen.”
I wouldn’t say that it has “NO” interest, just that the interest AmigaOS currently has is extremely small in comparison to other operating systems or companies.
There were a lot of good ideas inherent to the Amiga OS, way back when. Those ideas could have been extended and enhanced to rival anything today, if the people in charge had only listened. For example, I tried to get Amiga, Inc. to rewrite the kernel as an exokernel, so that they wouldn’t come to market with anything less than the hundreds of thousands of Classic Amiga programs (including the shareware, etc., found at AmiNet), because a new 64-bit OS could be made backward compatible with all previous Classic Amiga software.
As this article illustrates, exokernels rock! High flexibility there. They’re not going away; they’re the next new thing. Evolve or die, so to speak.
–EyeAm
“Rebel to the status quo!”
http://s87767106.onlinehome.us/mes/NovioSite/index.html
For those asking about LispOS etc.: Google for Lisp machines and you’ll see that Symbolics was the last to manufacture any (though there were other vendors), about 10 years ago. LISPM users were largely responsible for the Unix-Haters Handbook, which leveled many of the same criticisms at Unix that you hear MS bashers rant about now. There is some very interesting reading and writing on the subject, particularly on Genera (the OS), which, for being over 20 years old, is still ahead of where we tend to be today: buffer overflows and the like did not exist on the platform, so there goes a huge chunk of the problems we deal with security-wise now. They appear to have been ahead of their time, though hopefully the need for such systems will come again and perhaps we’ll see something taking their concepts up at the helm again (the language Genera was implemented in isn’t necessarily as important as some of the other concepts it allowed).
Nicholas: did you do more digging around MIT’s XOK papers? There’s actually already a paper on virtual machines with xok. Granted, it’s a few years old (everything in the xok project appears to have gotten dusty). I think at a minimum some sort of x86 emulation is a requirement for any future successful OS attempt. But again, looking at history, emulators used to run at a lower level: the first emulator I used on the Amiga emulated a Mac, and despite running on the same 68000, it was actually _FASTER_ than a Mac Plus. By implementing emulation functions in an exokernel-styled OS, performance would likely not suffer much (there are other problems, but also other interesting solutions; again, refer to the xok documentation on file system and partitioning possibilities). Not building everything in a userland on top of a bloated OS can afford some real improvements, but at the same time, with an exokernel/emulation mindset, you can still leverage existing products, albeit perhaps in a sandbox of sorts (which, given the lack of security most of those products have inherently, would probably be a good thing).
I don’t understand how you people think. No, just because “unix” (Linux) works doesn’t mean it’s the best alternative. If that were the case, building on Windows would be the best alternative, because obviously it works best for most people.
Bullshit like “Don’t forget it was a UNIX clone and that was a long time ago. Writing a full OS from the ground up is too large a task because it would take so long before it is functional enough to use (and make money to support it). It just is not practical; to build a new OS and not use at least parts of open source (gcc, toolkits, applications, shell environment, etc.) is a costly task.” is also boring.
Linux was written from scratch, as someone said. So would be the OS the author proposes; so was SkyOS, so is MorphOS, and so on. Linux, SkyOS and MorphOS seem well enough developed to have a chance of becoming good OSes. So it’s NOT impossible.
Reading on… (I started to read the article a long time ago, but not the comments.)
“Unfortunately much of the research into microkernels has been on Unix and the synchronous nature of Unix’s APIs [async] means asynchronous messaging is not used and this results in more context switches and thus lower performance.”
Actually, a lot of microkernel research over the past decade has focused on optimizing synchronous IPC/RPC between clients and servers on the same machine, usually in the form of Migrating Threads.
The migrating threads model allows the active entity (the thread) to transfer execution from one address space to another, in a similar manner to how a system call or interrupt transfers a thread from user mode to kernel mode within the same address space. In other words, threads are able to jump across address spaces in a controlled manner.
Examples include the Pebble Operating System, Doors in Solaris/Spring Nucleus, Migrating Threads added to Mach and a few others.
The paper, “Microkernels Should Support Passive Objects”, covers some of the arguments in favour of the migrating thread model. Most of the research papers can be downloaded from the CiteSeer website.
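Of those examples, Solaris Doors is probably the easiest one to actually play with, since it ships with the stock OS. Below is a minimal sketch of a Doors server, assuming a Solaris box with the standard door.h API; the /tmp/demo_door path and the serve() function name are just inventions for this example, and error handling is omitted.

/* door_server.c: a tiny Solaris Doors server.
 * A door_call() from a client makes the calling thread cross into this
 * process and run serve() directly, much like a system call crosses from
 * user mode to kernel mode. Sketch only; error handling omitted. */
#include <door.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <stropts.h>
#include <sys/types.h>
#include <unistd.h>

/* Runs in this server's address space, on the client's (migrated) thread. */
static void serve(void *cookie, char *argp, size_t arg_size,
                  door_desc_t *dp, uint_t n_desc)
{
    char reply[64];
    snprintf(reply, sizeof(reply), "echo: %.*s", (int)arg_size, argp);
    door_return(reply, strlen(reply) + 1, NULL, 0);  /* hand control back */
}

int main(void)
{
    int fd = door_create(serve, NULL, 0);            /* create the door      */
    close(open("/tmp/demo_door", O_CREAT | O_RDWR, 0644));
    fattach(fd, "/tmp/demo_door");                   /* publish it as a file */
    pause();                                         /* serve until killed   */
    return 0;
}

A client would open /tmp/demo_door and invoke door_call() with a door_arg_t describing its request and reply buffers; the kernel shepherds the caller’s thread into serve() and back, which is exactly the controlled jump across address spaces described above.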
Whenever you’re looking into the future of OSes, I recommend not forgetting one very special OS: Plan 9. On the whole, it sucks. Nobody uses it, and there are reasons for that, but I think anyone interested in OS design/implementation should at least check it out.
It takes a drastically different approach to so many of the ideals we’ve long forgotten even had room to be different.
It was not from scratch; it was meant as a replacement for Minix (itself a Unix clone), so what it was supposed to do was already known.
The advantage/point of a fresh start is to abandon the baggage of past mistakes. It allows the best currently available ideas to be used, whether they are new or 30 years old.
With the new hardware, which is so different from what we have today, old OS assumptions need to be looked at again, because many will fail in the new environment and so should be left behind.
I see the best way to proceed as deciding on what features are wanted (e.g. the best features from all the other OSes, plus any new ideas) and then looking to see whether any existing system could be a good foundation.
Now, I suspect that the suggested Haiku/OpenBeOS is more suitable than most as the base for the new OS. Be Inc. started with a fresh design with many up-to-date ideas, and the Haiku/OpenBeOS team’s reimplementation avoids the junk and mistakes of old versions, so it is probably the closest to what is wanted.
While we use the von Neumann architecture and its derivatives, the currently available OSes are more than adequate.
Things will get interesting when the basic structure of the compute engine changes. Think small things happening in parallel, how to deal with vision properly, how the mind creates a model of the environment, love and hate.
This article is just another whistle in the wind. Linux will be a satisfactory commodity OS for the current-day commodity compute engine.
the commercially friendly ultra-free MIT license.
I am so sick of FUD concerning the GPL. Lots of commercial enterprises are using GPL’d software; the only thing they’re not doing is stealing the code for their own closed products and selling it as if it were their own. I would rather put my work into the public domain if I wanted people to use my work without attribution. Remember the public domain? Anyone? Bueller?
Technically, I very much liked the article. What I’d thought of as a “micro” kernel is now an “exo” kernel, but I’m not a kernel developer. I’m just a long-time user and administrator, from the TRS-80 to the ES/9000, from RSTS/E to CP/M to Linux.
The article also makes an error, I think, in saying “not Unix” before saying “not macrokernel”, since in a micro/exokernel environment, whether it looks like “unix” to a program or not is just a matter of having the API available. The user will see the applications, not the OS.
QuantumG’s comment about implementing the L4 API in a Linux module goes a long way towards demonstrating that imagination is the most important technical skill. I realize that a micro/exokernel goes one better, since there would be no need to load such a module into kernel space in order to have the API available. The API becomes just another process.
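To make that last sentence concrete, here is a minimal sketch of the idea. It is not the real L4 API (whose calls I won’t reproduce from memory); it is an invented “personality server”, an ordinary user process that answers a made-up GETPID_OF_SERVER request over a Unix domain socket, so what looks like a system service is really just IPC to another process.

/* personality_server.c: a toy "OS personality" running as a plain process.
 * Clients send a one-byte request over a Unix domain socket and get a text
 * reply back. The request code and socket path are invented for this sketch;
 * they are not part of any real OS API. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

#define SOCK_PATH         "/tmp/personality.sock"
#define GETPID_OF_SERVER  0x01   /* hypothetical request code */

int main(void)
{
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);
    unlink(SOCK_PATH);                        /* remove any stale socket    */
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 8);

    for (;;) {                                /* one request per connection */
        int c = accept(srv, NULL, NULL);
        unsigned char req;
        if (read(c, &req, 1) == 1 && req == GETPID_OF_SERVER) {
            char reply[32];
            int n = snprintf(reply, sizeof(reply), "pid=%ld\n", (long)getpid());
            write(c, reply, n);               /* the "API call" returns here */
        }
        close(c);
    }
}

A small client library would wrap the socket round trip in an ordinary C function, so application code never knows whether the service lives in the kernel, in a module, or in a user process, which is exactly the point being made above.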
BeOS, along with AtariOS and CP/M, I think, has demonstrated one of the problems with “commercial” answers to technical problems. Commercial answers are a matter of marketing, not technical quality. That’s why DOS, then Windows, beat so many technically superior OSes.
I like the exokernel concept; it’s fun, and I think it is the way the OS ought to be going. That said, I will use whatever OS provides the applications that I use. For a while that was Windows, now it’s Linux, and only time will tell what comes next.
Let me see if I can add a little something to this argument.
As an OS developer, I’ve seen many trends come and go. Most of my company’s efforts go into the development of exokernel-like OSes to serve real-time functions (e.g. communication arrays, etc.).
Exokernels at this time do not make a viable general-purpose OS, in my opinion, as their threading mechanisms (while having fine pre-emptive capabilities) don’t have the cast/drop_thread abilities of, say, a Linux 2.6-series or BSD 5.x-series kernel.
Essentially, if we do an allocation of memory (which is arbitrary and goes to FIFO on an exokernel), this would cause serious issues on a general-purpose OS where many applications are being run.
Lack of pre-emptive caching aside, an exokernel makes for a fine real-time scenario, as in the real-time space such caching is more of a detriment than an asset.
Hope this helps,
Nick
http://www.eros-os.org
http://www.dragonflybsd.org
Given that Matt Dillon is aware of capability-based systems (like EROS) and has slated some of those features as future nice-to-haves, and that DragonFly is literally tailor-made for dual-core chips, I’d bet my money on DragonFly being the OS of the future.