The good folks over at Debian have released the first alpha CD distribution ‘G1’ of Debian GNU/Hurd. This release consists of three CDs, but only the first one is necessary to get a usable system running. They have a page with install information that you’ll definitely want to read before trying it out. (Hint: Don’t forget to run /native-install and reboot, then repeat, or you’ll be stuck in single-user mode.) The primary download site at gnu.org is often busy, so try one of the GNU mirrors or this temporary mirror if you have trouble getting in. We had fun installing and playing around with the Hurd on a spare machine. As they warn, this is still very raw and experimental, but this kinder, gentler release finally makes the Hurd more accessible to the non-diehard crowd. See the GNU Hurd main website for more project information and history.
Debian GNU/Hurd CD release ‘G1’
2001-10-18 OS News 10 Comments
why does linus hate hurd?
As I understand it, Linus does not like a microkernel architecture. He prefers a monolithic kernel design. HURD uses a microkernel architecture. Can anyone explain the benefits of a monolithic kernel design? Thanks.
Monolithic kernels are generally faster and easier to implement. However, adding a new device driver requires integrating the source code, recompiling the kernel, and rebooting. A monolithic kernel also allows a lot of internal short-cutting (which increases speed), but a bad driver will bring down the whole kernel.
Microkernels are slightly slower (mainly due to IPC/HAL overhead) and harder to implement (again due to IPC/HAL). However, microkernels are more flexible in design: you can add/remove device drivers and services at run-time, and driver/service source can be kept closed. Also, since all drivers are isolated from the kernel, the design is more robust.
All in all, there are pros and cons to each side, I personally believe that they are both equal when everything is weighed up, so it’s just a personal taste thing.
PS. When I said microkernels are slower, I’m only talking about 0.001% slower. But the design robustness easily makes up for this.
Can anyone explain the benefits of a monolithic kernel design?
Because you don’t have “pervasive” message passing going on in your kernel, things are much easier to design and debug. One of the most hailed advantages of microkernels is the fact that you can theoretically rip out any subsystem of your running kernel, say the file system server, and replace it with an updated/debugged version, without rebooting. In reality, updates and bugfixes to one subsystem usually require fixes in other areas of the kernel too, so you still end up rebooting anyway. And insertion/deletion of functionality is done with the kernel module system in Linux. Whenever NVIDIA upgrades their videocard drivers, for example, I can simply quit X, load the new kernel module, and restart X with the new driver, without rebooting.
There’s also the speed advantage. Monolithic kernels will always be faster than microkernels. Microkernels do tend to be more predictable in latencies, but a well designed monolithic kernel can have good/predictable latencies also.
In short, IMHO microkernels look good on paper, but in practice monolithic kernels, or better yet, “Hybrids” (modular monolithic kernels) work out better. No wonder Be decided to pull the networking stack into kernel space in the end. Not that BeOS was ever a true microkernel.
PS. Eugenia, a preview button would be really nice. Thank you
First, a little background.
1) In any OS, the kernel (micro or monolithic) looks like a shared library to the application. When the program makes a system call, the processor switches to a privileged mode, runs the kernel function, and returns to the user code.
2) In a monolithic kernel, major parts of the OS, like filesystems, drivers, networking, etc, are put into the kernel. This has advantages and disadvantages. One advantage is speed and ease of implementation. While in the kernel, memory accesses are not protected. Thus, the kernel can easily share the data structures it uses between multiple processes. The disadvantage of this is that there is a lot more code in the kernel, and thus a lot more bugs that can crash the system. In a microkernel, the system-call mechanism is exactly the same, but the kernel includes far less functionality. It mainly just implements messaging, low-level hardware management, and (on x86, usually) parts of drivers. The stuff that used to be in the kernel is relegated to separate processes that serve user programs through a messaging interface. The advantage of this is that the kernel is very small, so it can be made relatively bug-free. It also forces the design to be much more modular and well-thought-out. The disadvantage is that messaging is much slower than shared memory (it involves a lot of expensive switches between processes, and is certainly MUCH bigger than the 0.001% someone on the board said) and that it is more complex to implement, in the sense that you have to use messaging to share data structures.
3) The two designs are converging in terms of advantages and disadvantages. Through the use of dynamically loaded modules, macrokernels (like Linux) have become very modular (adding code doesn’t require a recompile, as someone also said*), and through much work on the code, have become very stable. Meanwhile, new methods of communication have eased the messaging bottleneck and have made microkernels much faster.
4) There is a new type of design, called an exokernel, that puts as much OS code as possible in dynamically loaded libraries in the program. These have the potential to be very fast, because there are no process/process or even kernel/process transitions, and very stable, because buggy code can only crash the program. There are some limitations in current hardware that prevent this type of OS from being as useful as it could be. Mainly, current chips can’t assign different protection levels to different pieces of code. Such a design** could be used to give different protection levels to library, kernel, and user code so they couldn’t crash each other.
*> The reason many things require recompiling on Linux is that Linus is hostile to binary-only modules and changes the interface very often.
**> Something like this exists on x86, except it is based on the obsolete segmentation mechanism. This allows a process to gain a privilege based on the privilege of the segment its code is in. Something like this based on a paging architecture would allow you to do a simple jmp to more privileged code.
damn. i didn’t know there was a kernel architecture. but anyway, that’s great that debian has a distro where only one cd is needed instead of, say, like before where all three were needed. i hope debian makes a desktop version of linux soon for the normal desktop linux user like me, with pre-installed xwindows and a graphical installer. mostly cause every desktop version of debian seems to either go bankrupt (storm linux), sell its version and rights with no R&D (corel), or just drop it altogether (progeny). i hope G1 comes with a graphical installer, pre-loaded xwindows and the latest linux 2.4.X for my drivers.
What about Apple’s Mach kernel? Is this in fact a microkernel based on the same sources as the hurd? If so, then does it show that there is viability to the design?
Yes and no. If they can get off the ground with a Mach microkernel, it shows that the microkernel doesn’t need to be an unsupportable burden. But the Hurd makes very different use of it. At least when I read about it a ways back, it doesn’t lay a monolithic OS down on top of the microkernel – a BSD or Linux (MkLinux) “single server” – rather, the rest of the operating system is a bunch of functionally specialized server components. I think the “viability of design” issues are down there somewhere. Not “microkernel, yes or no”, but given that a modular, microkernel-based OS has some obvious attractions, how to work out the details. BeOS’ network server is a good example someone else already mentioned.
Incidentally, interested in Mach + BSD single-server + MacOS? Check out http://www.tenon.com, they did it years ago and are still there, at least up to now.
The OS-X microkernel and the GNU microkernel are both Mach. However, OS-X is based on Mach version 3.x, while the GNU microkernel, called GNUMach, is based on the 4.x series. GNU is transitioning to oskit-Mach, which is a heavily reworked version of GNUMach with the oskit integrated in. See here: http://debian.fmi.uni-sofia.bg/~ogi/hurd/hurd.gnu.org/gnumach-oskit…
HURD has a much better idea of how to do it than Apple does. Putting a monolithic BSD server on a microkernel eliminates most of the advantages of a microkernel, while retaining the speed loss. By putting code in multiple servers, like HURD does, you gain a lot of stability. A networking bug, for example, will just crash the network server, not the entire monolithic system server.
Trakal, please note that this is not Linux! When you say, “I hope G1 comes with … the latest Linux 2.4.X” I can respond with certainty that you will be disappointed. The Hurd is a totally different kernel from Linux, and it is still very experimental. That doesn’t mean you shouldn’t try it out, but just be aware of what you’re getting into.