Linked by Thom Holwerda on Mon 14th May 2007 19:06 UTC, submitted by FreeRhino
Linus Torvalds has announced the first release candidate for version 2.6.22 of the Linux kernel, noting that the changelog for this release is just too big to put on the mailing list. According to the kernel-meister himself: "The diffstat and shortlogs are way too big to fit under the kernel mailing list limits, and the changes are all over the place. Almost seven thousand files changed, and that's not double-counting the files that got moved around. Architecture updates, drivers, filesystems, networking, security, build scripts, reorganizations, cleanups... You name it, it's there."
Thread beginning with comment 240215

Things broke just too often lately

Like what? (And I don't doubt there are regressions.)

I've had no problems with the kernel in recent years. The development model seems to work. There's a reason why features may be merged only during the first two weeks of a cycle, while the following two months or more are about stabilizing. It's certainly much better than the "keep a stable version, start a development version, keep developing it for two years, and after two years release a new stable version that is full of crap and will take another year to stabilize" model.

The Linux kernel has the big advantage of merging new features gradually, instead of once every n years, as most projects do. Because only a few features are merged each cycle, it's much easier to debug, fix, and stabilize them than a big pile of code that hasn't been tested for two years and is full of thousands of new things.

For example, the tickless feature rewrote a LOT of the x86 timer code, and the scheduler. Very low-level code. Code REALLY prone to creating bugs. That was a major feature. And it works. There were only some marginal regressions, which are being tracked and fixed for -stable. I'd say it's a big success.
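As an aside, the core idea behind tickless operation can be illustrated outside the kernel: instead of waking on a fixed periodic tick (an idle CPU with a 1 ms tick still wakes 1000 times a second), the system sleeps until the next pending timer actually expires. A toy sketch of that scheduling decision, assuming a simple list of timer deadlines (all names hypothetical, nothing here is actual kernel code):

```python
def next_wakeup(pending_timers, now):
    """Tickless-style decision: sleep until the earliest pending
    timer fires instead of waking on every fixed tick interval.
    Returns seconds to sleep, or None if the CPU can stay fully idle."""
    if not pending_timers:
        return None  # no timers pending: no wakeup needed at all
    earliest = min(pending_timers)
    return max(0.0, earliest - now)

# Timers due at t=5.0 and t=9.0, current time t=2.0:
# a periodic tick would wake ~3000 times; tickless sleeps once, for 3.0s.
print(next_wakeup([5.0, 9.0], 2.0))  # 3.0
print(next_wakeup([], 2.0))          # None
```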

Notice that OpenSolaris is also merging new features rapidly; in fact, OpenSolaris is the "unstable" Solaris tree. There are reasons why OpenSolaris exists, and using the community as beta testers is one. So it's not just Linux.

Edited 2007-05-14 22:10


Ford Prefect:

I didn't mean to question it in general, and I also know that 2.4/2.5 didn't work as well as the new development model.

I just missed some "stepping back", because there were many releases with "silly" bugs in them, making some hardware fail completely, and so on.

I heard about 2.6.16 being a "maintained" release and think this is going in the right direction. But not maintaining the same release for 2 years while everybody starts backporting ;)


ormandj:

Notice that OpenSolaris is also merging new features rapidly; in fact, OpenSolaris is the "unstable" Solaris tree. There are reasons why OpenSolaris exists, and using the community as beta testers is one. So it's not just Linux.

I feel classifying OSOL as "unstable" and comparing it to Linux is rather silly. I'd trust OSOL CE on a server more than I'd trust Ubuntu 6.06 LTS; we'll put it that way.


butters:

I also think the development model is working. In traditional software development, you wouldn't think of changing 7,000 source files every two months in any kind of software project, let alone a kernel. But that's what's happening in the Linux kernel these days, and although some objective measurements show some quality decreases, it isn't anywhere near as bad as you'd expect. The quality of the Linux kernel in spite of the ridiculous breakneck pace of development is astonishing.

But I think they have to slow it down a tad. They're trying to manage the patch backlog, and they're succeeding for the time being. But the recent torrent of patch submissions isn't going to abate. It's going to turn into a deluge, and then an all-out assault. As Linux really hits its stride, with ISV/IHVs and enterprise IT really buying into it, the current pace will seem like a trickle in hindsight. Will the current model scale, or will it creak under the enormous pressure?

They're going to have to let the backlog grow, and they're going to have to turn to automated testing. Everyone's going to have to commit unit test code along with their patches. Perhaps someone will make a nifty KVM-based screensaver that makes it brain-dead simple for everybody to use their spare resources to test development builds. We've got to get smarter about the way we ensure quality, because in two years we're going to want to change over 10,000 source files per release cycle.
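To make that automated-testing idea concrete, here's a rough sketch of what the checking half of such a harness might look like: boot a fresh build under QEMU/KVM, capture the serial console, and classify the result. The QEMU invocation and all log markers below are assumptions for illustration, not any actual kernel-testing tool (a real harness would also need a root filesystem image):

```python
import subprocess

# Hypothetical command for booting a freshly built kernel headlessly,
# with the console redirected to stdout via the emulated serial port.
QEMU_CMD = ["qemu-system-x86_64", "-kernel", "arch/x86/boot/bzImage",
            "-nographic", "-append", "console=ttyS0 panic=1"]

def verdict(console_log: str) -> str:
    """Classify a boot attempt from its captured serial console output."""
    if "Kernel panic" in console_log or "Oops:" in console_log:
        return "FAIL"
    if "login:" in console_log:          # init reached a login prompt
        return "PASS"
    return "INCONCLUSIVE"                # hung, or the log was cut short

def boot_test(timeout_s: int = 120) -> str:
    """Run the boot and return a verdict; a timeout means a likely hang."""
    try:
        out = subprocess.run(QEMU_CMD, capture_output=True, text=True,
                             timeout=timeout_s).stdout
    except subprocess.TimeoutExpired as e:
        out = e.stdout.decode(errors="replace") if isinstance(e.stdout, bytes) \
              else (e.stdout or "")
    return verdict(out)

print(verdict("... Kernel panic - not syncing: VFS ..."))  # FAIL
print(verdict("... buildbox login:"))                      # PASS
```

The screensaver idea amounts to wrapping something like `boot_test()` in a client that fetches the latest -rc build and reports the verdict upstream; the hard part is the infrastructure, not the check itself.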
