On Saturday, November 8, I received an email from someone inquiring if I would be interested in “doing a first interview/introduction into a new operating system”. We get these emails and news submissions all the time, and most of the time “new operating system” means Ubuntu-with-a-black-theme, so we don’t bother. I figured this time wouldn’t be any different, but after a bit of digging around, it turns out there’s a little more to it.
We all know AROS, the Free software re-implementation of the Amiga operating system. While AROS is mostly feature complete, it’s not yet ready for prime time, and it of course lacks applications. The whole vibe around AROS is one of, excuse my wording, flipping the finger at the legal bickering and tangled web of intrigue surrounding Amiga. “No schedule and rocking” is AROS’ motto. While that might still be the case, a few AROS developers have defected from the motherland and have started an operating system project of their own, called Anubis.
The two main forces behind Anubis are Michal Schulz and Hogne ‘m0ns00n’ Titlestad; Paul J. Beel, who hosts The AROS Show, is also involved. Rumour has it that more than just these three are involved. So, what exactly is this Anubis, which has a big “coming soon” sign on its website? Since the new project was announced on The AROS Show’s website, let me just quote them:
This will not be a fork of AROS. Anubis will not be aimed at an Amiga 3.1 compatible operating system. However this will be an Amiga inspired OS. Dr. Schulz will kick this off by stripping the Linux kernel. As for the API, it will be something that can be programmed by using C or C++. This gives developers a choice.
That’s all we know. Reading various forum threads and comments within the Amiga community, it becomes rather clear that most believe this defection is caused by two things: one, AROS is moving too slowly, and two, it was already outdated the day they began, and we’re 13 years down the road now. Memory protection is really something an OS should have these days.
This is not the first time someone believes that taking the Linux kernel as a base is a good idea for re-implementing a dead (or near-dead) operating system. Sadly, all those that came before Anubis failed quite miserably. There was BlueEyedOS, an effort to implement BeOS APIs on top of Linux. Failed. Cosmoe, same story. Dead. Zebuntu. Gone. I’m sure there are many others that I’m leaving out. Even though in theory it appears as if using the Linux kernel is a nice leg-up, practice is much different.
I’m reserving judgement until Anubis shows its first code.
All the best with the effort; sadly I’m as cynical as Thom, but I hope they prove me wrong!
If you really want to take a kernel to build on top of, don’t take Linux. Take something like QNX. While there are fewer drivers for it, the kernel itself is more stable and has far less overhead. With all due respect for Linux (and I mean Linux, the kernel), there are better things to use as a template for creating a “new” OS. NetBSD 5’s code base would be another good example: small, fast, modular. But I wish them the best of luck.
Also, as far as OSes that have taken the Linux kernel and tried their own thing, there is also the Athene Operating System ( http://www.rocklyte.com/athene/ ). I happen to like them quite a bit, and their Omega workbench GUI reminds me of my old Amiga 4000T. Good times.
poundsmack wrote:
–“While there are fewer drivers for it, the kernel itself is more stable and has far less overhead.”
Far less overhead? IIRC it’s a hard real-time kernel, hence it sacrifices some efficiency in order to execute prioritized threads at exact times. Also, since basically everything runs as a user process, it seems to me it pretty much must have more overhead than Linux; if you have any facts you can point to that show otherwise, I’d be very interested.
As for Anubis’ choice of Linux, my guess is that it mainly boils down to hardware support. Don’t quite understand the ‘stripping’ part though; it’s not as if the Linux kernel is bloated, particularly not for what I assume is a desktop-oriented OS.
While I’ve always liked AROS due to my Amiga nostalgia, lack of memory protection (guru meditation memories come back to haunt me) and SMP etc. isn’t the best foundation on which to build a system for modern use. Add to that an API which really hasn’t stood the test of time (IMO).
So yes, I can definitely understand that some developers might want to implement an Amiga-ish environment over a small, very fast kernel with broad hardware support. On the other hand, I can also see why people who like AROS will see this as a bad thing.
However it is their spare time and I’m absolutely certain that they are the experts on how they want to spend it.
I wish them luck and will file this under ‘another one to keep an eye on’ while I passionately continue to stalk… follow the Haiku development.
While there are tons of QNX vs. Linux articles I could find, none of them were comparing QNX to the 2.6 Linux kernel, so I chose this one: ( http://www.qnx.com/news/pr_2870_1.html ).
I can verify that QNX can boot in a few seconds, is lightning quick, and utilizes (at least on Intel) multi-core CPUs better than Linux currently does (as of 2.6.27.5), as I have and develop for both. This has actually prompted me to do a benchmark of the two systems, both in kernel/text mode only and with lightweight GUIs (Linux with something like Fluxbox, and QNX with Photon).
But as someone who uses embedded versions of QNX and Linux daily, I can honestly say QNX is faster in boot, application load, and data writes to the filesystem. As far as application usage and responsiveness, well, that usually depends on the app, so it’s a toss-up.
poundsmack wrote:
–“and utilizes (at least on Intel) multi-core CPUs better than Linux currently does (as of 2.6.27.5), as I have and develop for both. This has actually prompted me to do a benchmark of the two systems, both in kernel/text mode only and with lightweight GUIs (Linux with something like Fluxbox, and QNX with Photon).”
Well, the link was pretty pointless as it contained no comparison whatsoever. Also, while it’s good that you have benchmarked it, a total lack of data as well as of how you’ve benchmarked them makes the statement pretty pointless and lumps it together with all the other subjective ‘it feels faster’ nonsense scattered across the web. By design, monolithic kernels should be faster than microkernels; is anyone disputing this? There are advantages to running everything in its own process (stability being number one, modularity also comes to mind), but speed is not one of them. In a monolithic kernel the system call cost is setting and resetting the supervisor bit, and there is no overhead at all once in kernel space, where all memory is accessible. In a microkernel you have to pass messages through the kernel out to different processes, and they again have to respond through the same message mechanism, which is a lot slower than accessing process memory directly.
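To make that round trip concrete, here is a minimal, hedged sketch in C (plain POSIX userland code, not actual QNX or Linux kernel code): the same logical request is made once as a direct syscall and once as a message to a user-space “server” thread, which costs two extra copies and at least two extra context switches.

```c
/* Hedged sketch (plain POSIX, not QNX or kernel code): the same request made
 * (a) as one direct syscall and (b) as a message round trip to a user-space
 * "server" thread, mimicking the extra hops a microkernel design pays for. */
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

static int chan[2];               /* chan[0]: client end, chan[1]: server end */

static void *server(void *arg)    /* stands in for a user-space file server */
{
    char req[64];
    ssize_t n = read(chan[1], req, sizeof(req));  /* context switch: into server */
    if (n > 0)
        write(STDOUT_FILENO, req, (size_t)n);     /* server does the real work   */
    write(chan[1], "ok", 2);                      /* context switch: reply sent  */
    return NULL;
}

int main(void)
{
    const char msg[] = "hello\n";
    char reply[8];
    pthread_t tid;

    /* (a) monolithic style: one trap into the kernel, work done in kernel space */
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    /* (b) microkernel style: send a message, block for the reply - two extra
     * copies and at least two extra context switches for the same operation */
    socketpair(AF_UNIX, SOCK_STREAM, 0, chan);
    pthread_create(&tid, NULL, server, NULL);
    write(chan[0], msg, sizeof(msg) - 1);
    read(chan[0], reply, sizeof(reply));
    pthread_join(tid, NULL);
    return 0;
}
```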
Now one can certainly question just how much this overhead actually costs (I know there has been a lot of improvement in messaging and context switching, which should help lower the speed penalty), and this is where some up-to-date hard benchmark data would come in handy.
AFAIK most kernels today that employ microkernel characteristics are so-called ‘hybrid’ kernels, which use ideas from both microkernels and monolithic kernels. Haiku (my favourite OS project) uses a hybrid kernel where hardware drivers and (I think) the filesystem run in kernel space (and thus can potentially crash the system), just as they can in a monolithic kernel. Personally I prefer speed over the chance that a buggy driver may cause havoc. If my system goes down due to a buggy driver, I will blame the buggy driver, not the system. If this happened to me often then maybe I’d sing another tune, but I seriously can’t remember when I last had a system crash related to hardware/driver malfunction. Of course, if the system were somehow responsible for keeping me alive or some such, then I’d probably go with maximum stability.
I am surely no expert in the OS internals area, but here are some possible things to consider:
1) “monolithic kernels suck” is the general attitude of many people, myself included, and you can’t do anything about it 🙂 (you know, Amigan here, message-based system)
2) Now really – what was interesting was back when Amiga, under Gateway’s wing, was supposed to use QNX as the base for a new OS. I remember when Linus joined the message board with some claims, and he left as quickly as he joined, because the real gurus there were with QNX. You could see many people claiming that QNX had some 20-30 (micro?)second latency, whereas Linux at that time had some 600? Well, it was 1997/98? I do remember Dave Haynie (one of the Amiga designers) stating something like Linux was not at all usable for things like multimedia, e.g. sound, the way BeOS was at that time – just because of latency. So – why did it have such bad latency, while being monolithic? I suppose nowadays the latency issue is gone, and who knows, maybe my understanding of the issue is not correct anyway …
Haha, that whole Gateway/Amiga deal was one of the things I was going to reference, but being as it is now so far out of date, I didn’t bother. You are right, though, about the events that transpired. Glad I am not the only person who remembered that/took part in it.
You could implement message-based IPC in a macrokernel too 😉
Preface: I prefer the word “macrokernel” because “monolithic” is really the opposite of “structured”, as in a system that enforces logical data/function separation between components and defines boundaries and communication interfaces for them, be they in the same address space or not.
Actually, the microkernel/macrokernel difference depends on the number of abstractions the kernel supports via its facilities, and is orthogonal to the monolithic/structured one… but anyway.
So, the word “monolithic” or “macrokernel” says something about the way a kernel appears when it has been loaded into memory (i.e. what its binary image contains, from a functional point of view – it may contain device support code – drivers – it may contain VFS node handling code – filesystems – etc.).
If one doesn’t take the above digression into account, “monolithic” may also denote the way the kernel has been organized, code- and structure-wise (that is, implemented with global data structures instead of per-subsystem private ones and access APIs).
But it says nothing about the implementation details of those data structures, or the algorithmic efficiency of the code that manipulates them.
In practice, one can have a kernel with an internal FS layer and drivers performing, or appearing to perform (depending on the use case and the evaluation metric), worse than a system in which they’re external – for some applications global throughput is far more important than latency, for others the converse is true.
The problem with latency mainly lies in the cost of resource sharing across processes and, nowadays, threads.
On Unix and operating systems derived from or inspired by Unix, the kernel completely virtualizes the underlying system, and then *is* the system for all intents and purposes of applications, which can only access the HW via kernel calls.
Thus the kernel itself is a shared resource; on older kernels this translated into a global mutual exclusion lock that ensured only one process at a time could be running inside the kernel (meaning any other process would be left waiting on its pending syscall while the kernel executed its part).
This kept the rest of the kernel code lean for the sake of (relative) simplicity, but it also made it not very scalable.
On the other hand, a similar mutual exclusion was introduced by the “hardware layer” of the kernel – when executing some privileged HW operation on a bus or device, HW interrupts were disabled, to be re-enabled when the operation completed – which effectively means the system does not react to user input during that timeframe, since mouse or keyboard events are ignored.
What time brought to Linux and BSD was incremental algorithmic updates that pushed locking “down” to individual data structures, making resource contention more granular and increasing scalability at the cost of some added complexity – and, with preemption points, a reduction (yet not a complete elimination) of the interrupts-disabled window – which yielded better responsiveness and lower *interrupt* latency (which, it must be noted, is a separate thing from syscall latency – the time taken by system calls to execute varies greatly across types of calls and with the size of the working set, so a worst-case value is usually quoted).
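As a hedged illustration of that “pushing locking down” (hypothetical tables, userland pthreads rather than real kernel locking primitives), compare one global lock guarding everything with per-structure locks that let unrelated updates proceed in parallel:

```c
/* Hypothetical sketch: a single "big lock" vs. finer per-structure locking. */
#include <pthread.h>

struct table { pthread_mutex_t lock; int entries[256]; };

static struct table inode_table = { .lock = PTHREAD_MUTEX_INITIALIZER };
static struct table file_table  = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Coarse: one lock serializes every subsystem, like the old big kernel lock. */
static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

void update_coarse(struct table *t, int slot, int value)
{
    pthread_mutex_lock(&big_lock);   /* blocks all other "in-kernel" work */
    t->entries[slot] = value;
    pthread_mutex_unlock(&big_lock);
}

/* Fine-grained: each table carries its own lock, so threads touching
 * inode_table no longer contend with threads touching file_table. */
void update_fine(struct table *t, int slot, int value)
{
    pthread_mutex_lock(&t->lock);    /* blocks only users of this table */
    t->entries[slot] = value;
    pthread_mutex_unlock(&t->lock);
}

int main(void)
{
    update_coarse(&inode_table, 0, 1);  /* contends with everything          */
    update_fine(&file_table, 0, 1);     /* contends only with file_table users */
    return 0;
}
```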
OTOH, other systems developed from scratch have been able to tackle the above problems earlier and more effectively by taking a different design right from the start
Many if not all modern microkernels have a couple of things in common – one is a fast, usually message-based IPC, necessary to achieve adequate throughput while retaining user-space servers.
Another is their claim to be “fully preemptible” (a new operation may be submitted to the kernel at any time, and the kernel is nearly always ready for interrupts – so the nominal latency becomes the latency of serving an interrupt).
Since there are always some lowest-level privileged operations that cannot be preempted, the kernel cannot really be preempted at *any* time, but since these are usually extremely short (orders of magnitude shorter than the normal kernel code path), atomic operations, they can become the actual granularity unit without imposing noticeable overhead.
The third partly derives from the former: if you have userland services and a message-based IPC system, chances are you’ll implement a flexible, so-not-old-Unix transaction dispatch mechanism.
One interesting side effect is that, if versatile enough, the dispatch mechanism may go as far as supporting both user-space and in-kernel servers.
Then one may build primary services back into the kernel for performance reasons.
But because of that dispatch mechanism, and because of what it is “at heart”, it probably won’t be a monolithic kernel, rather a structured, extended kernel based on a microkernel.
You’ll have effectively reinvented NT ^_^
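A toy, hypothetical sketch of such a dispatch mechanism (invented names, no real kernel or NT API): the same request is routed either to an in-process handler or out over a pipe to a user-space server, so a service can move between the two without clients changing.

```c
/* Toy transaction dispatcher: hypothetical, illustrative only. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

struct request { int opcode; char payload[56]; };

typedef void (*handler_fn)(const struct request *);

/* "In-kernel" path: the handler lives in the same address space. */
static void fs_read_handler(const struct request *r)
{
    printf("in-process handler: opcode %d, payload '%s'\n", r->opcode, r->payload);
}

/* "User-space server" path: hand the request over a pipe to another context. */
static int server_pipe[2];

struct service {
    handler_fn local;   /* non-NULL: dispatch in place             */
    int remote_fd;      /* otherwise: forward the message instead  */
};

static void dispatch(const struct service *svc, const struct request *r)
{
    if (svc->local)
        svc->local(r);                          /* direct call, no copy     */
    else
        write(svc->remote_fd, r, sizeof(*r));   /* message send to a server */
}

int main(void)
{
    pipe(server_pipe);

    struct service in_kernel  = { .local = fs_read_handler, .remote_fd = -1 };
    struct service user_space = { .local = NULL, .remote_fd = server_pipe[1] };

    struct request r = { .opcode = 1 };
    strcpy(r.payload, "read block 42");

    dispatch(&in_kernel, &r);      /* served locally                 */
    dispatch(&user_space, &r);     /* queued for an external server  */

    /* Pretend to be the user-space server draining its message queue. */
    struct request got;
    read(server_pipe[0], &got, sizeof(got));
    printf("user-space server received opcode %d\n", got.opcode);
    return 0;
}
```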
“Also, while it’s good that you have benchmarked it, a total lack of data as well as of how you’ve benchmarked them makes the statement pretty pointless and lumps it together with all the other subjective ‘it feels faster’ nonsense scattered across the web.”
By “prompted me to do a benchmark”, I mean I haven’t officially done one yet, but because of this I will be doing one, and it will be fairly extensive. I will likely do it this weekend when I get time.
We still need developers to complete our world domination plans!
http://moobunny.dreamhosters.com/cgi/mbmessage.pl/amiga/159911.shtm…
🙂
Haiku or Syllable. Both will dominate the world.
If you read the comments on The AROS Show blog, they clarified that there won’t be any “stripping”. Also, Michal Schulz told some commenters that they chose Linux mostly because it’s available and stable on the three major platforms (x86, x64, PPC) and because it has the biggest hardware support of the available choices.
Ahh, I love the smell of a fresh operating system
<troll>How does using Linux for the kernel smell fresh?</troll>
They seem to intend to replace the userspace tools, i.e. the GNU part of common operating systems sometimes subsumed under the name of “Linux”. Replacing GNU certainly is ambitious.
While replacing GNU is ambitious, it is definitely exciting to hear someone is considering it.
No doubt about it!
Replacing the GNU tools will alter system usage more substantially than replacing the kernel, which I think is what most people fail to realize.
I wish them good luck.
If true, why? What will their ls do that other ls commands don’t?
Best of luck to them anyway. AROS is a great project.
<counter troll>About as fresh as recreating an OS from 2000</counter troll>
…something new to play with. I hope it all works out for them and I’ll be rooting for them if only for trying something new.
I think that building an OS from scratch is a wonderful way to learn a huge amount about the internals of all operating systems. But I don’t think there is any real utility in it aside from improving the skills of the participants. That said, I can’t see any reason to build an OS on top of the Linux kernel, since it seems to me that building the kernel itself is the real deal. Building an OS means getting very well acquainted with memory management, interrupt handling, and all the other myriad details that make up a modern CPU. Go ahead and give it a shot, you will learn a lot.
The kernel isn’t the make or break of an OS. Sure, it’s a major deciding factor, but the tools you place on top of the kernel can be just as relevant to how the OS will behave.
Take NT/Vista, for example. The NT kernel isn’t at all bad, yet Vista’s UAC et al. destroy any confidence in the OS that the kernel earned.
Whoever marked me down, I respect your opinion and right to disagree with me, but please at least reply with your reasoning rather than “hit and run” voting.
I’m interested to hear why you disagree with my point that user-space tools can make or break an OS. If I’m wrong, I’d want to know so.
>>That said, I can’t see any reason to build an OS on top of the Linux kernel<<
I’m afraid you’re lacking imagination:
1) You can say that your OS will only use filesystems which have snapshots, attributes (whatever..), so this changes the assumptions for the ‘native’ applications running on your OS: as they can be sure that these features will be available, more applications will use these services.
2) More difficult, but you could provide a BeOS-like API on top of the Linux kernel with, for example, the famous ‘one thread per window’ model, and applications built with this API would probably have the same responsiveness BeOS applications had (see the sketch after this comment)..
So in effect what you’re doing here is trying to have better userspace applications, even if you have fewer of them..
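A rough, hypothetical sketch of that ‘one thread per window’ idea using plain POSIX threads (not the actual BeOS API, where BWindow/BLooper provide this for you): each window owns a thread draining its own message queue, so a slow handler in one window never freezes the others.

```c
/* Hypothetical sketch of "one thread per window" on POSIX threads. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_LEN 16

struct window {
    const char     *title;
    int             events[QUEUE_LEN];   /* ring buffer of event codes */
    int             head, tail;
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    pthread_t       thread;
};

static void post_event(struct window *w, int code)   /* called by the "app server" */
{
    pthread_mutex_lock(&w->lock);
    w->events[w->tail++ % QUEUE_LEN] = code;
    pthread_cond_signal(&w->ready);
    pthread_mutex_unlock(&w->lock);
}

static void *window_loop(void *arg)    /* each window drains its own queue */
{
    struct window *w = arg;
    for (;;) {
        pthread_mutex_lock(&w->lock);
        while (w->head == w->tail)
            pthread_cond_wait(&w->ready, &w->lock);
        int code = w->events[w->head++ % QUEUE_LEN];
        pthread_mutex_unlock(&w->lock);

        if (code == 0)
            return NULL;                 /* quit message */
        printf("%s handling event %d\n", w->title, code);
        if (code == 42)
            sleep(1);                    /* a slow handler stalls only this window */
    }
}

static void open_window(struct window *w, const char *title)
{
    w->title = title;
    w->head = w->tail = 0;
    pthread_mutex_init(&w->lock, NULL);
    pthread_cond_init(&w->ready, NULL);
    pthread_create(&w->thread, NULL, window_loop, w);
}

int main(void)
{
    struct window a, b;
    open_window(&a, "window A");
    open_window(&b, "window B");

    post_event(&a, 42);     /* slow work in A ...            */
    post_event(&b, 7);      /* ... B keeps responding anyway */
    post_event(&a, 0);
    post_event(&b, 0);

    pthread_join(a.thread, NULL);
    pthread_join(b.thread, NULL);
    return 0;
}
```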
Reminds me of Syllable. They start out with an interesting, unfinished idea and not too many developers to begin with, and then they decide to work on something else with a Linux kernel.
The comparatively tiny amount of work put into Syllable Server does not mean that work on Syllable Desktop has stopped or that we have replaced the current Syllable code with a Linux kernel. The two (Syllable Desktop, Syllable Server) are separate entities with separate purposes.
It’s not stopped, but surely it slows down development when you have to work on two OSes with completely different cores.
No. Kaj can provide more detail, but the bulk of the work for Syllable Server has so far been in writing Builder recipes. Builder was already in a state that made it perfect for creating Syllable Server. It has not taken Kaj away from anything he would already be doing for Desktop anyway (i.e. developing Builder).
Writing the compatibility library and Linux-specific drivers for the appserver will take a small amount of my time, but after that everything else pretty much is shared development between both Desktop & Server.
Things are not this way. They are porting GenodeOS to their kernel.
Who is going to take seriously the notion that they’re actually going to finish what they start if they just give up on AROS??
If they had just completed their original goal of porting OS 3.1 to x86, instead of trying to get AROS to work on every latest whiz-bang piece of hardware, they could have finished AROS a long time ago.
Then again, who is going to take the 3.1 API seriously in 2008? I sure don’t see an explosion of new users for OS4 or MOS, do you? There is a reason for that, even though you probably wouldn’t want to admit it. Using a Linux kernel is the smartest thing anyone has done in a long, long time. A decent kernel with a ton of drivers: perfect.
Somebody should poke him in the direction of Haiku. They could always do with additional OS experts on the team, and they have an alpha-ready OS that is fast, small, and well designed, and thus aligned with the Amiga principles.
If they choose to use a new windowing environment, then perhaps we can call it a new OS; otherwise, if they choose to go with X.org, it will be just another desktop Linux.
I think they should fork the whole AROS windowing environment on top of Linux, start from there, and not even consider X.org.
AROS already runs on Linux on top of X and has done for years.
i386-hosted:
http://aros.sourceforge.net/cgi-bin/nightly-download?20081110/Binar…
x86-64 hosted:
http://aros.sourceforge.net/cgi-bin/nightly-download?20081110/Binar…
PPC hosted:
http://aros.sourceforge.net/cgi-bin/nightly-download?20081110/Binar…
Have fun!
I think most people don’t realize what AROS is, what it isn’t and what it’s already capable of…
Could you explain why you say this?
X is a low-level API, so if it were wrapped in something higher-level, why would your applications care?
The biggest problems I can see with X are its thread safety, which could have an impact on the higher-level API, and the fact that X.org itself requires quite a lot of Linux system libraries which you don’t necessarily want applications written against your new OS API to see..
It would be nice to see a successful desktop OS using the Linux kernel. Everyone always says ‘Linux is only the kernel’, yet there are a zillion distributions with the same crufty crap that makes Linux as a desktop OS suck. Develop an OS around the kernel and start from scratch with a new design for everything else.
Thom pointed out several attempts that have failed, but I think those were more proof-of-concept attempts by one-man teams.
I think MorphOS turned out pretty decent lately… 😉
http://www.morphzone.org/modules/myalbum/photo.php?lid=100
All those “pseudo” OSes which take a Linux kernel and put their “my concept is the bestest eva!” thing on top are still just another flavour of Linux. So many times all their good work amounts to nothing more in the end than just another personal nerd playground. Unfortunately…
As for the rest, I believe that piggybacking on an existing kernel (Linux or FreeBSD) is the way to go.
Let me explain: an OS provides two main functions:
1- making the hardware work
2- making the software work by providing a base model
All these new OSes which reinvent the wheel focus on (2); of course that’s the fun part, the interesting one, but in the meantime there are several hundred engineers working on (1) for the Linux kernel, so it’s nearly impossible to catch up..
And unfortunately, Unix/Linux’s software model has many limitations: BeOS really showed this; its application responsiveness has not been reproduced on Linux even on much more powerful hardware, and I don’t expect this to change anytime soon..
🙁 🙁
Now, is it possible to build a new OS with the Linux or FreeBSD kernel underneath?
I don’t know.. But I would point out that Blue Eyed OS isn’t a good failure example: I don’t think it was a serious effort. If memory serves, at its start I tried to ask what license was going to be used for the OS, and I never got a clear answer..
In this day and age, an open source project without a clear license policy is doomed to fail!
MacOSX is actually such a thing: at the lower levels, it runs a forked FreeBSD (Darwin).
OSX runs on a Mach kernel, not a FreeBSD kernel.
OSX runs on a Mach kernel, not a FreeBSD kernel.
“Darwin is built around XNU, a hybrid kernel that combines the Mach 3 microkernel, various elements of BSD (including the process model, network stack, and virtual file system),[2] and an object-oriented device driver API called I/O Kit.”
So, it does have BSD elements in it, and it is not a pure Mach 3 microkernel either. Also, the userland consists of code taken from NEXTSTEP, FreeBSD and several others.
http://en.wikipedia.org/wiki/Darwin_(operating_system)
Right, it’s a modified Mach kernel with parts inspired by and taken from FreeBSD. Very different from being based on a FreeBSD kernel.
This is my observation as well.
I wonder what exactly it is in the Linux stack that makes responsiveness so poor?
X?
GTK/Qt?
The kernel?
Every OS that makes applications more responsive than Linux has my support.
I wonder what exactly it is in the Linux stack that makes responsiveness so poor?
X?
GTK/Qt?
The kernel?
I’d probably say the biggest issue is X itself. It is getting rather dated and carries a lot of outdated baggage with it. X does of course have features not available to Windows users, for example, but nowadays there are many things lurking in there that could be done more efficiently with modern hardware and software. For example, there simply is no graphics hardware sold today that can’t accelerate any kind of video output or graphics operation, and hardware many generations old can do those things as well.
I don’t have the skills to do anything even remotely as good as X is now, but someone who has the skills and the knowledge should perhaps start working on something new and optimize it for more modern hardware.
XNU evolved from a merge of Mach 3 (old NeXTSTEP’s kernel) with the upper halves of the network and VFS stacks from 4.4BSD, and a new (object-oriented) hardware subsystem and API (IOKit, based on and written in Embedded C++).
Apple took code from FreeBSD, but there’s not enough of it to consider the kernel a fork – OTOH the whole platform is, because of the inclusion of the BSD userland.
As has often been said, one of the main reasons BeOS was responsive was pervasive multithreading – if every BeOS program was to be multithreaded, then yes, programmers always had to develop with concurrency concerns in mind, but it also meant the kernel was designed to support large numbers of lightweight threads (so, say, every new Tracker window could run its event loop in a different thread).
Linux, instead, wasn’t born with threading as a major feature (I actually recall Torvalds being very vocal against mechanisms for thread support in the kernel for a long time) – threads were implemented (as an afterthought) by mapping them onto normal processes, and the (kernel-level, not userland-library-level) API for them was based on a (costly) syscall that clones the calling process to create a new one.
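For reference, a minimal sketch of that threads-as-processes model, assuming Linux and glibc’s clone() wrapper (the flag set shown is illustrative of the old LinuxThreads approach, not how modern NPTL actually sets things up):

```c
/* Hedged sketch: a "thread" the old LinuxThreads way - a clone()d process
 * sharing the parent's address space. Linux-specific, illustrative only. */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (64 * 1024)

static int shared_counter = 0;          /* visible to both tasks via CLONE_VM */

static int worker(void *arg)
{
    shared_counter += 10;               /* same address space as the parent */
    printf("worker: pid %d, counter %d\n", getpid(), shared_counter);
    return 0;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (!stack)
        return 1;

    /* Share memory, filesystem info and file descriptors, but the child is
     * still a schedulable process with its own PID - threads as processes. */
    pid_t pid = clone(worker, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, NULL);
    if (pid == -1)
        return 1;

    waitpid(pid, NULL, 0);
    printf("parent: pid %d sees counter %d\n", getpid(), shared_counter);
    free(stack);
    return 0;
}
```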
With time, internal mechanisms have been refined (for instance, all threads belonging to a single process now carry the same PID – interestingly, it was not so in the beginning – locks have been optimized, and the influence of locking inside the kernel reduced, to yield better preemptibility and lower latencies).
But the overall structure and low-level kernel interfaces seem not to have changed much, in order to retain compatibility – so apparently threads still carry higher inherent overhead than on other systems, and their usage in applications still comes with the “use sparingly” warning.
On the other hand, BeOS windows were managed by a single process, which also managed the input loop and the focus mechanism, which avoided the kernel-X-kernel-WM-kernel-X round trip.
The IPC mechanism itself was (as in other microkernel, or desktop-optimized, OSes) more modern and efficient (due to BeOS initially being microkernel-based) than what was (and still is nowadays) available from the classic Unix kernel.
Also interesting: message-based IPC and hardware events on Linux are other afterthoughts – D-Bus actually does in a daemon what other systems do with native kernel facilities (message ports) inside the kernel, requiring an extra round trip (thus overhead) for every message exchange.
To reply to the original question, I’d say it’s both each single component and the rather conservative overall structure of a server-derived system.
I suggest reading Mark Kilgard’s (former SGI, now NVIDIA, employee) paper about D11.
Dated 1995, it pretty much sums up X11’s inefficiencies (which are the same ones we try to tackle today; not much has changed) and redesigns the X11 code path, optimizing for the local case – basically bypassing everything:
bypassing server-side rendering (rendering on client-private surfaces)
bypassing protocol en/decoding, shared memory, sockets, and IPC altogether (instead relying on an innovative (for Unix) procedure call mechanism to share data between the client and the serv.. pardon, graphics kernel)
The drawback was that applications needed to adopt the protected procedure call model and the XClientPrivateSurface API – but as a matter of fact, Kilgard also thought about legacy compatibility, and the option of running a ported X11 server on the D11 kernel.
Those with the skills have already started, or actually delivered, such new solutions.
The problem is, the community at large is actually ignorant – meaning people often *ignore* the very existence of whatever is born outside of *nix, or outside of FOSS – OTOH X has become so entrenched in the current state of the community that the community accepting its replacement anytime soon is not a realistic scenario…
I think it is the combination of the three. The designs of the three components are independent of each other in the case of a Linux system; they are not built with the sole purpose of working well together. X.org’s implementation of the X protocol is multiplatform, so its design accounts for that. The same goes for GTK and Qt: they are meant to make the development of applications on multiple platforms easier. Working across different platforms has its disadvantages; you can’t take for granted that a certain feature is available at all times, nor that the programming model is the same on all platforms (like multithreading model differences). As for the kernel, the Linux kernel only provides basic services for the desktop, and as a generic kernel, its objective is not only to improve desktop performance but also to serve the server use case.
In systems like BeOS and others, where all components are designed under the same roof and aim at the same objective, some assumptions can be made to make the system more responsive or efficient. They have the flexibility to design their display server, the display driver API and the user API together, without worrying about affecting other platforms. This enables a cleaner, more cohesive, consistent and, in some cases, more efficient design.
My answer would be ‘none of the above’: the traditional way to write an application on Unix/Linux is the ‘event loop’: you have a main loop; when the user clicks on something, it executes the corresponding action and then resumes. Of course, while it executes an action the HMI isn’t responsive, as the main loop isn’t processing events.
So to avoid being too unresponsive, the application delegates some of the very long actions to another process/thread, but developers do this as little as possible, as it increases complexity and overhead (on a single CPU, any additional thread reduces overall performance).
Whereas BeOS was initially designed for a dual-CPU computer, so they used threads everywhere to use the two CPUs efficiently, and this had the very nice side effect of producing a responsive OS even on a single CPU.
If I’m right, then it means we’ll get responsive applications on Linux only when the userspace applications and frameworks (X) are recoded to use parallelism (threads/processes). This will probably happen, as someday everyone is going to have a quad-core under their desk, so the incentive will be there, but it’ll take a long time..
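A hedged sketch of the pattern described above (plain POSIX threads, invented event codes): a classic main loop that stays responsive by delegating the long action to a worker thread instead of blocking in the handler.

```c
/* Illustrative sketch: an event loop that stays responsive by handing
 * long-running work to a detached worker thread (POSIX, hypothetical events). */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

enum { EV_CLICK, EV_LONG_TASK, EV_QUIT };

static void *long_task(void *arg)
{
    sleep(2);                              /* stands in for a slow operation */
    printf("long task finished\n");
    return NULL;
}

static void handle_event(int ev)
{
    pthread_t tid;

    switch (ev) {
    case EV_CLICK:
        printf("click handled immediately\n");
        break;
    case EV_LONG_TASK:
        /* Blocking here would freeze the HMI; delegate to a thread instead. */
        pthread_create(&tid, NULL, long_task, NULL);
        pthread_detach(tid);
        break;
    }
}

int main(void)
{
    int events[] = { EV_CLICK, EV_LONG_TASK, EV_CLICK, EV_QUIT };

    for (int i = 0; events[i] != EV_QUIT; i++)   /* the classic main loop */
        handle_event(events[i]);

    sleep(3);                                    /* let the worker finish (demo only) */
    return 0;
}
```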
Another Linux distro.
Yes, I know, I’m oversimplifying.
But using the Linux kernel doesn’t seem like the way to create an ‘Amiga-inspired OS’.
Why not?
The UNIX-like “personality” of Linux comes from two factors: its POSIX interface and its GNU tools. The POSIX interface is not a big deal, because a lot of non-Unix OSes implement it [including Windows, in some way], and the GNU tools live in userland. Removing the GNU tools, or developing a parallel set of tools to replace them, creates a brand new operating system with a totally different personality (see the case of MacOSX).
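To give a feel for how small the first step of such a ‘parallel set of tools’ could be, here is a minimal, illustrative ls-like utility written against nothing but POSIX calls (no GNU code involved; a real replacement would obviously need options, sorting, column output and so on):

```c
/* Minimal, illustrative ls replacement using only POSIX interfaces. */
#include <dirent.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : ".";
    DIR *dir = opendir(path);
    if (!dir) {
        perror(path);
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL)
        if (entry->d_name[0] != '.')      /* skip hidden entries, like plain ls */
            puts(entry->d_name);

    closedir(dir);
    return 0;
}
```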
“Even though in theory it appears as if using the Linux kernel is a nice leg-up, practice is much different. ”
Actually, that’s quite a sad and miserable way of making “new-old” OSes…