Linked by JRepin on Mon 29th Apr 2013 09:24 UTC
After ten weeks of development Linus Torvalds has announced the release of Linux kernel 3.9. The latest version of the kernel adds a device mapper target which allows a user to set up an SSD as a cache for hard disks to boost disk performance under load. There's also kernel support for multiple processes waiting for requests on the same port, a feature which allows server work to be distributed better across multiple CPU cores. KVM virtualisation is now available on ARM processors, and RAID 5 and 6 support has been added to Btrfs's existing RAID 0 and 1 handling. Linux 3.9 also has a number of new and improved drivers, which means the kernel now supports the graphics cores in AMD's next generation of APUs and also works with the high-speed 802.11ac Wi-Fi chips likely to appear in Intel's next mobile platform. Read more about the new features in What's new in Linux 3.9.
Thread beginning with comment 560140
RE[5]: Load of works there
by Brendan on Tue 30th Apr 2013 11:01 UTC in reply to "RE[4]: Load of works there"
Brendan
Member since:
2005-11-16

Hi,

"The 'student project' got 2 million euros from the EU for what? To run what kind of stuff with 'more security'? Students' puppy projects? Nope, I'd guess ATMs and more commercial things. I bet the EU expects a return on investment some day. Just telling..."


Let's take a more accurate view of it. Linux was first released in 1991, when people were looking for a free alternative to commercial Unix. Due to very fortunate timing (and no technical reason whatsoever), a large number of people (including large companies) volunteered a massive amount of time and were able to convert the original dodgy/crappy kernel into something that was actually usable/stable.

Minix 1, 1.5 and 2 were intended as a tool for teaching and were never meant to be used as a serious OS. Minix 3 is the first version that was intended as something more (but still leans towards teaching and research rather than actual use). It was released in 2005 (about 14 years after the first release of Linux) at a time when at least 3 good free Unix clones already existed, and therefore didn't attract a large number of volunteers to make it good.

The only thing we can really say from this comparison is that very fortunate timing is far more important than anything else. It doesn't say anything about monolithic vs. micro-kernel. If Minix 3 was released in 1991 and Linux was released in 2005, then I doubt anyone would know what Linux was.

- Brendan

Edited 2013-04-30 11:02 UTC

Reply Parent Score: 3

RE[6]: Load of works there
by Alfman on Tue 30th Apr 2013 12:19 in reply to "RE[5]: Load of works there"
Alfman Member since:
2011-01-28

Brendan,

"The only thing we can really say from this comparison is that very fortunate timing is far more important than anything else. It doesn't say anything about monolithic vs. micro-kernel. If Minix 3 was released in 1991 and Linux was released in 2005, then I doubt anyone would know what Linux was."


Very true, timing was everything. The same is true of the commercial players. In early computing history there were many competitors; over time they consolidated and fell away to the point where we have only a few options. For better or worse, it would take insane amounts of money to budge the current market leaders and get consumers to discard their collective investments in incumbent technologies.

Userspace/kernel context switches used to be much more expensive, which may have been a historical factor in monolithic kernels pulling ahead of microkernels. As CPUs have evolved and those switches have become cheaper, the original motivation for the monolithic design should have faded, but it's stuck around because the alternatives have been marginalized in the market.

http://kerneltrap.org/node/531
(Does anyone have more recent benchmarks?)


It's funny that whenever I've mentioned being able to write operating systems to less-technical people, many automatically equate that with being filthy rich; they don't realize how many of us struggle to find any work in OS tech. We could be just as good, but we're too late.

Reply Parent Score: 3

RE[7]: Load of works there
by Kochise on Tue 30th Apr 2013 13:21 in reply to "RE[6]: Load of works there"
Kochise Member since:
2006-03-03

"The only thing we can really say from this comparison is that very fortunate timing is far more important than anything else. It doesn't say anything about monolithic vs. micro-kernel. If Minix 3 was released in 1991 and Linux was released in 2005, then I doubt anyone would know what Linux was."

Just what I've said in another post :

http://www.osnews.com/thread?560129

"It's funny that whenever I've mentioned being able to write operating systems to less-technical people, many automatically equate that with being filthy rich; they don't realize how many of us struggle to find any work in OS tech. We could be just as good, but we're too late."

So true, my case.

Kochise

Reply Parent Score: 2

RE[7]: Load of works there
by Brendan on Tue 30th Apr 2013 17:41 in reply to "RE[6]: Load of works there"
Brendan Member since:
2005-11-16

Hi,

"Userspace/kernel context switches used to be much more expensive, which may have been a historical factor in monolithic kernels pulling ahead of microkernels. As CPUs have evolved and those switches have become cheaper, the original motivation for the monolithic design should have faded, but it's stuck around because the alternatives have been marginalized in the market.

http://kerneltrap.org/node/531
(Does anyone have more recent benchmarks?)"


It's not the context switches between user space and kernel that hurt micro-kernels; it's context switches between processes (e.g. drivers, etc).

But it's not really the context switches between processes that hurt micro-kernels; it's the way that synchronous IPC requires so many of these context switches. E.g. the sender blocks (forcing a task switch to the receiver), then the receiver replies (forcing a task switch back).

But it's not really the IPC that hurts micro-kernels; it's APIs that are designed to require "synchronous behaviour". If the APIs were different you could use asynchronous messaging (e.g. where a message gets put onto the receiver's queue without requiring any task switching, and task switches don't occur as frequently).

But it's not really the APIs that are the problem (it's easy to implement completely different APIs); it's existing software (applications, etc) that are designed to expect the "synchronous behaviour" from things like the standard C library functions.

To fix that problem, you'd have to design libraries, APIs, etc. to suit, and redesign/rewrite all applications to use those new libraries and APIs.

Of course this is a lot of work - it's no surprise that a lot of micro-kernel projects (Minix, L4, Hurd) never attempted it. The end result is benchmarks showing that applications using APIs/libraries designed for monolithic kernels perform better on monolithic kernels (and worse on a "micro-kernel trying to pretend to be monolithic").

- Brendan

Edited 2013-04-30 17:42 UTC

Reply Parent Score: 2