According to open-source insiders, the move to create separate kernel trees for technology testing and bug fixes, which are then incorporated into the stable kernel when ready, has been a huge success, pleasing both kernel developers and the vendors who distribute the open-source operating system. Torvalds: “I’m certainly pleased, and judging from the reactions we had at the Linux Kernel Summit in Ottawa a few weeks ago, most everybody else is too.”
That is all.
But [Linus] did leave the door open to create a 2.7 tree “if we hit some fundamental change that makes us split into a 2.7.x tree. We haven’t hit anything yet, and people seem to be doing well, but I want to point out that if something really fundamental rears its ugly head, we still accept the possibility that we’d have to do a full unstable branch split.”
One wonders if the kernel isn’t pretty much feature-complete, and thus whether the developers are more interested in consolidating the current position.
Nothing evil in that at all.
I doubt it. They just haven’t thought of anything new yet. Besides, in a kernel, perfection is a big feature.
Well, it’s basically feature-complete, until some new crazy hardware comes out, or a good reason emerges to turn Linux into a fully distributed system, etc. Then there’s no telling what will happen.
I don’t know if a fully distributed Linux would ever be merged into the stable kernel. I believe that sort of thing will remain in the realm of Mosix and OpenMosix (www.mosix.org and http://openmosix.sourceforge.net/ respectively). A Mosix-like kernel extension *could* be backported to the 2.6 series kernel, but that would probably be unnecessary for most Linux installations. It will probably remain a kernel extension.
Not to mention it slows down the development cycle. Only when Linux reaches a point where major changes are rare will it be a viable and supportable platform for third-party developers.
Perhaps the unhappy ones are silent, because they code as part of their job or just walked away and don’t bother to complain…
No, this is free software development. If people are unhappy about something, they’re never reluctant to make that known.
Use a vendor kernel. Use A Vendor Kernel! USE A VENDOR KERNEL!!! That has been the message on LKML for a long time now.
Choose whichever one you are comfortable with, from Gentoo to Debian to RHEL. These have gone through a formal testing process (to one degree or another) and are what the “unhappy” people are (or should be) looking for.
This is not 2001. Times change. Let the kernel developers do what they do best: develop. And let those with whom the QA responsibility should rest and who have the resources to do it, the distros, do the formal QA (and for God’s sake, send the fixes back upstream!).
That plan allocates the resources more efficiently than the old “we kernel developers are going to do both development and QA on 24 different platforms all by ourselves and everything is going to come out perfect” model.
If you are one of those people who insists on running a vanilla kernel.org kernel, you need to redefine yourself as a tester. Now, I’m sure the 2.6.x.x series is reasonably safe. But like I say, times change, and Linux has changed. And that means that the users need to adapt, as well.
I for one run each of the -rc releases. I like to know what is going to be in the final release, and I just like working with the latest. -mm is at times too risky.
First, this article is surprisingly good by eWeek standards. I was not expecting the author to demonstrate any grasp of the facts and quotes presented, but they were actually presented in a way that made sense. In fact, the only line that bothered me at all was the reference to Linus being the “founder of the Linux operating system,” which isn’t really something to make a fuss over these days.
I think that the development system is working, that it is a reflection of the kernel maturing, and that there is no other OS kernel project that works as efficiently as the Linux kernel project. It is not the hype or even the license that makes Linux the de facto free software kernel; it’s the development model. It allows for free involvement from individual and corporate contributors, and there is integrity and fairness among the top-level subsystem maintainers as well as from Linus and Andrew.
I agree for the most part with the assessment of the 2.4.x kernel series and the associated backporting from 2.5.x. I don’t know if the state of the Linux kernel would be where it is today if the project had not identified the flaws in the system. In particular, I appreciate how nimble the project remains despite commercial interests and a growing userbase. The project seems to evolve in the same way that the kernel itself evolves: through consideration of feedback, with decisiveness at the top. The project doesn’t operate in a particular way because that’s how it has always worked or because of a constitution of sorts. It operates on the basis of learning from the past and letting the good ideas float to the top.
The current model is clearly the result of incremental changes: there was no immediate need for an unstable branch alongside 2.6 because the kernel had finally “caught up” to where it needed to be at the core level. The -mm tree started as an experimental memory management patchset and evolved into a great way to distribute aggressive technologies to early adopters. After a while it started to serve a more-or-less official purpose in 2.6.x development.
The 2.6.8.1 kernel wasn’t in any way inferior because of its fourth digit, and it was used in big distributions nonetheless. It came about because of an unfortunate and embarrassing NFS bug in 2.6.8, but I think it set a precedent for the .y kernels. The need for the .y kernels was in part natural and in part artificial. The development of the kernel was naturally stabilizing because of the relative feature-completeness of 2.6.x, the lack of a true unstable tree, and the maturity of the 2.6.x series (2.6.5 was more mature for its series than 2.4.10 was for its). Release cycles were growing because the time between “upgrade-worthy” improvements was increasing. This didn’t change the fact that the Linux kernel has always thrived on a rapid release schedule and small diffs to drive the quantity and quality of bug reports. The artificial release delay imposed by the switch off of BitKeeper (roughly two months) made it necessary to get the current development work out to the test community in the meantime. Hence the .y kernels.
Perhaps the best concept emphasized in this article is the size of the diffs generated by .y kernels vs. the early days of 2.6.x development. It is so much easier to test the effectiveness of a bugfix and check for any side-effect regressions when the diffs are small. There is also a fundamental difference between bugfixes and features when it comes to maintaining the quality of a software product. Customers of all software (proprietary or open source, commercial or free) benefit from different levels of testing for bugfixes and features. They want the bugfixes to be tested for effectiveness and possible regressions and released as soon as possible. Most customers don’t want a new feature until it has been tested in multiple installations, configurations, and loads. The situation that angers customers/users the most is when a feature that they don’t need or use negatively impacts the performance or reliability of a feature that they need to run their business or to be productive. What proprietary software companies do internally in test labs, the Linux kernel project does with early adopters and testers using the -mm tree. The .y kernels are a successful adaptation that lends credibility and exposure to what would previously have been -rc releases, and it allows for more of them.
Finally, we have the formalization of the development cycle timeframe, whereby all patches (mostly features from -mm or from commercial contributors) must be submitted in the week following a stable kernel release. This does a lot of good things. First, it imposes some discipline and consideration on the contributor. You can’t stay up all night on caffeine hacking a poorly conceived patch and submit it in the morning after it compiles and boots… unless that coding binge happens to occur in the submission window. Next, it makes things easier for the maintainers. They receive all their patches in a big bundle, and then they go to work finding the good ones and weeding out the bad ones. They don’t feel pressured to green-light a promising patch submitted late in the cycle without taking the time to properly test it. Finally, it separates the processes of functional verification and deployment/stress testing. It takes significantly more man-hours and clock cycles to widely distribute, deploy, and test a piece of software in a real-world environment than to verify that the software is sound and functional in theory. To deploy software and have bug reports stream in saying that there are very noticeable problems that make it impossible to use is a huge waste. This may seem fairly obvious, or just a lack of discipline in what grew out of a hobby project. But this is still a problem in the biggest and most experienced proprietary software development teams on the planet. It remains to be seen whether structuring the development cycle will alienate some potential patch contributors.
I am a longtime Mandrake Linux user, and would definitely like to switch to Debian.
The ONLY thing holding me back is the lack of a CD automounter (I know there is a user-level program, but it simply is too dumb).
Because konqueror stays alive after being closed, the auto-umounting is blocked and I cannot get my CD out of the tray without killing the konqui process and manually unmounting the CD as root.
I have a computer which is also used by my girlfriend, and she is used to computers behaving somewhat strangely, but NOT GETTING THE DAMN CD OUT is the worst thing that can happen to a user.
So what we would need for a happy desktop experience with Linux is an unfailable automounter in the kernel.
If you need that feature (SuperMount?), go find the patch and apply it (for your current kernel). That’s it!
How to apply the patch?
RTFM (in the kernel source dir… some text files).
Actually this is pretty easy. The grandparent sounds familiar with process handling and mounting things, and honestly, running the patch command is no more difficult.
If he knows how to use the command line and compile a kernel, he is pretty much set.
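For the record, the usual dance looks roughly like this (a minimal sketch; the tree location and patch file name here are made-up examples, substitute your own):

    # from the top of your kernel source tree
    cd /usr/src/linux-2.6.12
    # rehearse first; --dry-run touches nothing
    patch -p1 --dry-run < ../supermount-2.6.12.patch
    # apply for real (-p1 strips the leading path component,
    # which is what nearly all kernel patches expect)
    patch -p1 < ../supermount-2.6.12.patch
    # then the usual rebuild: enable the new option and install
    make menuconfig && make && make modules_install install

If the dry run complains about rejected hunks, the patch doesn’t match your kernel version; get the one made for your tree.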
>Because konqueror stays alive after being closed, the auto-umounting is blocked (*)
I’m confused: why would konqueror staying alive prevent the CD from being unmounted? Are you launching konqueror from the CD?
That’s the only reason I can think of for why this happens…
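Either way, before killing anything you can see exactly which process is keeping the mount point busy (a minimal sketch; I’m assuming the CD is mounted on /mnt/cdrom, adjust for your distro):

    # list every process with an open file or working directory on the CD
    fuser -vm /mnt/cdrom
    # or the more verbose equivalent
    lsof /mnt/cdrom

If konqueror shows up there with its current working directory on the CD, that would explain the blocked umount.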
I think that in 2.6.13, inotify will be part of the Linux kernel. Unlike dnotify, which forces an application to hold an open file descriptor on every directory it watches (which is exactly what keeps a filesystem busy), inotify should allow applications to monitor files without preventing unmounting. So hopefully once the applications support it, there will be fewer blocked CDs 🙂
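For what it’s worth, a minimal sketch of the difference, assuming a 2.6.13+ kernel and the userspace inotify-tools package (its inotifywait tool is my assumption of how the tooling will look, not something from this thread):

    # dnotify-based apps must hold an open fd on each watched directory,
    # which keeps the filesystem busy and makes umount fail with EBUSY.
    # An inotify watch holds no fd on the directory:
    inotifywait -m /mnt/cdrom &
    umount /mnt/cdrom   # still works; the watcher just receives an unmount event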
Open-source apps progress slowly but surely: I remember being annoyed by a stuck CD four years ago just like you, and in a year (time for the app to be updated, for the distro to integrate the new version), this will hopefully be a forgotten problem…
(*) “auto-uNmounting”: just because the one who named the command made a dumb mistake (how often does one unmount, and was it really useful to remove that one character?) doesn’t mean we should perpetuate it.
Isn’t the -mm tree just like -CURRENT in the *BSD family? And on the Linux side we seem to be getting an equivalent of -STABLE and/or -RELEASE, too.
What surprises even more is that the change originates with the man who said, “Testing? What’s that? If it compiles, it is good; if it boots up, it is perfect.”
And the same man who admitted that, had he known of FreeBSD, he would not have started his own project.
Oh my, my, a few more moves like this and I could just lose my certainty that the project will fail by becoming unmaintainable.
Heh, some BSD zealots are really funny. I see many of them complaining about how bad the Linux development process is and how broken everything that touches Linux is, and of course how perfect the *BSDs are. For those people: “take your operating system, make it for _general_use_ (and not only for servers and barely for workstations) and _try_to_scale_ your development process to handle Linux’s rate of incoming features”. Of course it will all start to look like Linux does now. Furthermore, the Linux kernel mailing list remains the same: if you have a better idea you’re most welcome to post it, and be ready to prove why it’s better than the current one.
Use subfs instead. SuSE 9.2 and 9.3 use it.