This article is based on a master’s thesis project that investigated the possibility of turning Linux into a real-time operating system. The authors primarily focus on available “hard real-time” solutions for Linux, but also look…
Has anyone heard any news about the development of the L4 microkernel?
L4 claims to be a better microkernel than Mach. There are early ports that run Linux on top of it. I would someday like to see a “distro” that replaces the Linux kernel entirely with L4. Not that I’m an expert, but I think it would scale better than the Linux kernel. I believe the l4ka.org project intends to do this, and the IBM SawMill project also uses L4. Linux was created by one guy because he thought it would be cool to make a “free” *nix; his original intention was to duplicate the functionality, bad or good.
That should not be too hard.
Microkernels actually scale poorly: there is a maximum performance they can reach because of all the message passing that takes place.
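To see roughly why, compare a direct function call with a message round trip between two address spaces. Here is a crude user-space analogy (a hypothetical microbenchmark; pipes stand in for real microkernel IPC, which is faster but pays the same kind of per-message cost):

    /* crude analogy: a service reached by direct function call
       (monolithic) vs. by message round trip between processes
       (microkernel-style). pipes stand in for real IPC. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define ROUNDS 100000

    static int add_one(int x) { return x + 1; }   /* the "service" */

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        int to_srv[2], to_cli[2], i, v = 0;
        double t0;

        t0 = now();                     /* direct calls */
        for (i = 0; i < ROUNDS; i++)
            v = add_one(v);
        printf("direct:   %d calls in %.4f s\n", ROUNDS, now() - t0);

        pipe(to_srv);
        pipe(to_cli);
        if (fork() == 0) {              /* "server" process */
            int x;
            close(to_srv[1]);
            close(to_cli[0]);
            while (read(to_srv[0], &x, sizeof x) == sizeof x) {
                x = add_one(x);
                write(to_cli[1], &x, sizeof x);
            }
            _exit(0);
        }
        close(to_srv[0]);
        close(to_cli[1]);

        v = 0;
        t0 = now();                     /* message round trips */
        for (i = 0; i < ROUNDS; i++) {
            write(to_srv[1], &v, sizeof v);
            read(to_cli[0], &v, sizeof v);
        }
        printf("messages: %d round trips in %.4f s\n", ROUNDS, now() - t0);

        close(to_srv[1]);               /* EOF lets the server exit */
        wait(NULL);
        return 0;
    }

The round trips lose by orders of magnitude, and that per-message cost is exactly what second-generation microkernels like L4 work so hard to shrink.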
I think this used to be the case with the first-generation (true) microkernels (like Mach or Chorus). The second generation is not a true microkernel design in that it does not use separate modules for every task like memory management, disk I/O, etc., and thus eliminates quite a bit of the overhead from message passing. It’s difficult to benchmark L4 against Linux for apples/oranges reasons, but there have been some benchmarks (on the SawMill page) using L4Linux, which runs Linux on top of L4 in user space, and the numbers are kind of impressive. Mach is horrible compared to L4, and OS X seems to run on it (but I didn’t say smoothly). The Windows NT microkernel is based on QNX, which is a small second-generation microkernel.

As Linux grows in code it becomes more difficult to maintain. A modular system would arguably be easier to maintain, and there are a number of other advantages as well, such as drivers being easier to load/write. You would have to compile support for things like USB 2 right in the kernel. You can kill one module without bringing down the entire system. Imagine adding USB 2 support to your server without having to reboot it. In Windows you can kill explorer in Task Manager, then press Windows + R (Run) and run explorer. Try that in Linux. The move might create some additional overhead (maybe), but (my opinion) it would also remove quite a few limitations.
“You would have to compile …”
should be
“You wouldn’t have to …”
How would this be an advantage compared to insmod and rmmod?
For starters I would have a working sound card driver. It was not until the end of my old Compaq’s life that most distros were able to install a working driver for its integrated SiS sound card. My last Athlon system used an integrated C-Media sound card, which most popular distros were able to install a working driver for. So far with my Dell 8250, none of Red Hat, SuSE, or Mandrake has been able to install a working driver for my SoundBlaster Live! sound card. The most popular PC with the most popular sound card on the market, and with hundreds of thousands of developers, many millions of lines of code, and billions of dollars in development costs, the three largest Linux distributors cannot manage to install a working driver for my sound card this late in the game. Admit it or not, this is a problem.
They claim that the Etrax runs the “standard 2.4 kernel”, while the specs document clearly states “Axis’ Linux port is based on the 2.0.38 Linux kernel modified to work with MMU-less processors as developed by the uClinux project.”
Which is neither standard nor 2.4.
That said, the Etrax is a fantastic, absolutely super platform! It would be a joy to work with the developer board…
As Linux grows in code it becomes more difficult to maintain.
This is true with any large software project.
A modular system would arguably be easier to maintain, and there are a number of other advantages as well, such as drivers being easier to load/write.
Linux already is modular and becomes more so with each version.
You {wouldn’t} have to compile support for things like USB 2 right in the kernel.
You don’t have to as it stands – this is what the loadable module system exists for. On this machine, for example, USB services are provided by the uhci and usbcore modules. I can remove these from the running kernel with “rmmod” and reload them with “insmod”.
What you’ve just described is exactly what Linux currently offers.
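For reference, the whole contract a module has to meet is tiny. A minimal, hypothetical “hello” module in the 2.4-era style:

    /* hypothetical "hello" module, 2.4-era API: loaded with insmod,
       removed with rmmod, no reboot involved. */
    #include <linux/module.h>
    #include <linux/kernel.h>

    int init_module(void)                /* runs at insmod time */
    {
        printk(KERN_INFO "hello: loaded into the running kernel\n");
        return 0;
    }

    void cleanup_module(void)            /* runs at rmmod time */
    {
        printk(KERN_INFO "hello: removed, kernel keeps running\n");
    }

    MODULE_LICENSE("GPL");

Compile it against your kernel headers, “insmod hello.o” to load, “rmmod hello” to unload. No reboot anywhere in sight.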
In Windows you can kill explorer in Task Manager, then press Windows + R (Run) and run explorer.
…
/me kills nautilus
/me presses alt-f2 and runs nautilus
what on earth has this got to do with microkernels?
For starters I would have a working sound card driver.
Uh, how does that follow? Having a working sound card is dependent on two things: having a suitable driver and having this driver loaded (alternatively, having a suitable driver and a daemon to probe for new hardware and load said driver).
the three largest Linux distributors cannot manage to install a working driver for my sound card this late in the game.
Three of my machines use SBLive cards and I haven’t had a problem with RedHat or Suse loading drivers for them for a couple of years now. It’s not like Windows gets it right all the time either…
You really haven’t got the faintest idea about what you’re talking about, do you?
Considering that you have never asked for help, why should you be taken seriously?
btw, disable PnP in the BIOS; it wreaks havoc when installing Linux. For some reason the sound card can’t be detected properly otherwise.
Just so cool….
Speaking of which, do you want to weigh in on why a monolithic architecture is better than a microkernel system, or is “shut up, Linux is fine” your official stance on the matter?
This article suggests adding real-time support by building a Real Time Application Interface (RTAI) extension into the kernel. That does not make Linux a real-time operating system like QNX or DROPS (which runs on the L4 kernel).
This sort of clears up what I was getting at, it’s from the RTAI Page.
“The Linux kernel design is similar to that of classic Unix systems: it uses a monolithic architecture with file systems, device drivers, and other pieces statically linked into the kernel image to be used at boot time. The use of dynamic kernel modules allows you to write portions of the kernel as separate objects that can be loaded and unloaded on a running system.”
RTAI is not a patch; it would use a hardware abstraction layer (HAL) to run alongside the Linux kernel rather than in it, which is more like a microkernel. If this were implemented you would probably see some additional gains afterwards once the kernel was optimized to work in that manner (i.e., at present the kernel is designed around its existing scheduler). This is a step in the right direction, but a native microkernel would simplify support for real-time processing and probably even interface more efficiently with an X server. This would cut down on bloat in the kernel to support things like specific file systems. I have not seen an updated study, but the Linux kernel grew 316% over the 4.5 years between 2.0.1 and 2.4.0. The 2.4.0 kernel tar.gz is 23.2 MB; 2.5.9 is 32.2 MB.
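For concreteness, here is roughly what a hard real-time task looks like under RTAI. This is only a sketch: the function names follow the RTAI documentation of the time, but treat the exact signatures and values here as my assumptions rather than gospel.

    /* sketch of an RTAI periodic hard real-time task (kernel module) */
    #include <linux/module.h>
    #include <rtai.h>
    #include <rtai_sched.h>

    #define PERIOD_NS 1000000            /* 1 ms period */

    static RT_TASK task;

    static void loop(int arg)            /* arg unused in this sketch */
    {
        while (1) {
            /* hard real-time work goes here */
            rt_task_wait_period();       /* sleep until the next period */
        }
    }

    int init_module(void)
    {
        RTIME period = nano2count(PERIOD_NS);

        rt_set_periodic_mode();
        start_rt_timer(period);
        /* args: data, stack size, priority, uses_fpu, signal handler */
        rt_task_init(&task, loop, 0, 4096, 0, 0, NULL);
        rt_task_make_periodic(&task, rt_get_time() + period, period);
        return 0;
    }

    void cleanup_module(void)
    {
        stop_rt_timer();
        rt_task_delete(&task);
    }

    MODULE_LICENSE("GPL");

The point is that the task is scheduled by RTAI’s own scheduler underneath Linux; Linux itself just runs as the lowest-priority task.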
ps. I downloaded the 2.5.9 source and extracted it; on disk it’s 160 MB.
Speaking of which, do you want to weigh in on why a monolithic architecture is better than a microkernel system, or is “shut up, Linux is fine” your official stance on the matter?
Microkernels are theoretically superior to monolithic kernels, but they haven’t been conclusively demonstrated to be so in practice, mainly for performance reasons.
All the mainstream kernels have migrated to a similar kind of hybrid design.
Windows NT, for example, was pretty much a classical microkernel in its 3.1 guise. With NT 3.51 (or it may have been 4.0; my memory is a bit hazy) this changed: GDI was moved into the kernel because performance was abysmal. We’re even seeing Microsoft putting an HTTP daemon into kernel space, very much against the classic microkernel philosophy.
Linux has moved in from the opposite end: it now supports dynamically loadable modules, and the design philosophy is very much “can this be done in userspace?”.
The end result is a system with a large-ish kernel that lives in a single memory space, but with clearly defined internal APIs and lots of functionality pushed out to userspace.
This sort of clears up what I was getting at, it’s from the RTAI Page.
The part after your bolded words seems quite clearly to demolish your proclamations as to what Linux does and does not support. Are you blind?
but a native microkernel would simplify support for real-time processing
Demonstrate how.
and probably even interface more efficiently with an X server.
Demonstrate how.
This would cut down on bloat in the kernel to support things like specific file systems.
No it wouldn’t. You’d still require code to provide support for specific file systems. On my Linux boxes I only have ext3 support statically linked into the kernel (and that’s because I’m too lazy to mess around with initrds) – everything else, such as vfat, reiserfs, JFS, UDF, ISO9660 and jffs2, is built as a module. Those are only loaded into memory when I need to access devices which use those filesystems. The same applies to things like USB, network protocols and FireWire.
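The kernel even triggers the loading itself. This is roughly what 2.4’s VFS does when you try to mount a filesystem whose driver isn’t registered yet (a simplified sketch, with locking and refcounting omitted; don’t mistake it for the verbatim fs/super.c):

    /* simplified sketch of 2.4's get_fs_type() (fs/super.c): if the
       driver for a filesystem isn't registered, ask kmod to load the
       module, then look the filesystem up again. */
    static struct file_system_type *get_fs_type(const char *name)
    {
        struct file_system_type *fs = find_filesystem(name);

        if (!fs) {
            request_module(name);   /* kmod runs modprobe, e.g. "vfat" */
            fs = find_filesystem(name);
        }
        return fs;
    }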
ps. I downloaded the 2.5.9 source and extracted it; on disk it’s 160 MB.
Great.
My patched 2.4.20 is 189MiB in size. Of that, 117MiB is drivers. Most of the so-called “bloat” can be directly attributed to the addition of new architectures and drivers. The only “bloat” these cause is in the storage of the actual source code. @_@
I have to wonder if they made it to this site.
http://www.realtimelinuxfoundation.org/
Or read this paper:
http://www.ccur.com/realtime/downloads/shielded-cpu.pdf
The Windows NT microkernel is based on QNX
Any references to support this ?
Jim, can you stay focused? The article discussed real-time kernels. Not microkernels, not your stinking sound card.