Ongoing discussions about improving the kernel release structure and naming have led the kernel developers to adopt a finer-grained release numbering for major fixes.
Does everything have to be fine-grained today? (FBSD5)
Where’s the problem with a development branch, and a stable branch? (stable as in feature-freeze-stable)
…but I’d still prefer a longer freeze of the new features (make 2.6 truly stable, open up a new branch, say 2.7, and throw all the new features you could possibly want in it). That way everybody’s happy. 😉
Good to see the debate is finally reaching the masses at the practical level. In practice the debate has already taken place (and was decided in favor of finer and finer degrees of discreteness) at one level or another in mathematics (calculus), hardware (digital), software (multitasking), and now development. Hopefully everybody is getting their feet wet for the inevitable quantum computer.
Before you come here and tell people that the way to go is a “stable” 2.6 branch and a 2.7 development branch, you would do well to read the long and well-argued discussion on the kernel mailing list.
There are great reasons why waiting 3 years between kernel releases is no longer welcomed or wanted. The 2.6 kernel, problems and all, has been an absolute success by any standard, particularly considering how young it is. The hardware support just gets better all the time, desktop responsiveness is awesome, and things will only improve as X.org and other efforts that complement the kernel by working at a higher level continue to improve.
I’m very satisfied with the 2.6.x kernel too. There have been some problems (the worst for me was the ATAPI change that broke cdrecord in 2.6.8, but I also had a small problem with radeonfb in one of the early releases), but nothing insurmountable, and I get the feeling that things become better with every kernel release.
> There are great reasons why waiting 3 years between kernel releases is no longer welcomed or wanted.
Yes. Think of all the beta-teste…*cough*…users you’d be missing out on.
I agree. Despite all the complaints here and there about the 2.6.x kernel series, I’ve never really had any problems. It has been *very* stable for me… and it has certainly shown its strength when I’m in desktop mode.
Given so many experimental items, the 2.6 kernel’s track record (major problems and all) is actually very impressive. I think it shows that this development model works well.
Faster development is important, as Linux really needs to play catch-up in a few certain areas. Linux, Love, Morton, and all the others involved have been doing an outstanding job… and I hope they continue to do so at this pace.
*whoops* … Linux = Linus in that last post =)
I think the kernel is more for developers to mess with than for end users; end users would be using a distro, which takes care of everything the end user needs. Actually, I understand why Linus leaves kernel stabilization to the distros.
The people who download and compile their own kernels know what they’re doing, and I’m sure they are capable of helping with bug reporting, fixing and whatnot.
If you are an end user, you’d better stick with your distro’s kernels, because it is very easy to mess everything up.
I’m running linux-2.6.20.2.3-4.6.3.45.6.29.4.554.3-1/2.3.5.67.84.34.546.7.6.3.56-2 -rc2.tar.bz2
It is simply pitiful… It is strange that they are unable to stabilize the current 2.6 branch, yet they keep putting new features into the kernel. Why did they not open the 2.7 branch?
I have a bad feeling that the way kernel development is done needs to change. An expert group should supervise kernel development instead of Linus.
Just my 2 cents.
You really must read Andrew Morton’s post (about customers of the kernel). He explains why the old model is just not applicable anymore.
> He explains why the old model is just not applicable anymore.
You may be right but it does not change the fact that if the Linux Kernel Team is not able to provide a stable kernel then the “Linux dream” will collapse.
Well, I wasn’t a Linux user when all that 3-year waiting happened. But I can say that without the 2.6 kernel I wouldn’t be using Gentoo on my laptop right now. I understand both sides: the ones that want stability and the ones that want to be up to date hardware-wise. Can both be achieved at the same time? I move from kernel to kernel, and if it breaks, well, bad luck (I know this doesn’t apply to production environments), but that is how Linux works. It is not like Microsoft, which keeps the same version for 2, 3, 4 years. Linux is not that mature yet, which by the way is not a bad thing. Linus’s proposal of changes at an even lower level, 2.6.xx.xx, doesn’t sound like a bad idea. Those who want stability stay on 2.6.xx with only bug fixes. The development group can go to the xx.xx part.
With this scheme our grandchildren will probably still be dabbling with kernel 3.0.
My problem is that the volume and complexity of kernel development have reached a level where one person is not enough to review, study and accept all of the patches and changes. The structure of kernel development is not SCALABLE.
In my mind this is one of the main reasons kernel development is stagnating today.
Linus’ suggestion only scratches the surface…
I have been watching this discussion for a while and have formed my own thoughts about it; I also have my own little open source project (see link) and had to think about my own version numbering there.
I personally think it is a good idea to move away from the odd and even branches to another model because:
the odd branches get too little testing; you need a certain amount of time under heavy testing by the masses anyway.
My solution for my project was to use goal-driven numbering where each level is aimed at a different audience: stable 0.2, 0.3, 0.4; testing 0.2.2, 0.2.3, …; and dev 0.2.2.16, 0.2.2.17, …. That’s not exactly the solution for Linux, because the projects differ a lot, but with some small adjustments it could maybe be something better than what we have now.
What’s up with that?
Why does it have security problems?
Buffer overflow?
The Linux development model isn’t the same as it was several years ago. Linus isn’t the sole arbiter of patches. Instead, he has set up something of a hierarchy of trees, with other people (like Andrew Morton) vetting stuff in their tree (-mm, in his case) before it goes into mainline.
2.4 was awful. People, face it: aside from minor issues (there have been no really big issues in the recent 2.6 releases), the new development process works much better. I can remember the 2.4 days, when you bought hardware and had to hunt down backports and patches to get it up and running; in the end that kernel was much more unstable than any 2.6 release could get away with being. 2.6 has had minor issues, but those were all on the desktop driver side, not on the server side (and it is mainly server people complaining here about the new scheme).
For that reason I welcome the new model; it seems sort of like Debian’s model of handling things: an unstable branch (the -mm tree), a testing branch (the 2.6.x scheme), and a stable branch (the 2.6.x.y scheme).
I really love the new model, because you don’t have to wait for ages to get important driver updates or important new features. To all the “we want the old 2.4/2.5 scheme” criers: look at the latest incarnations of 2.4 put out by the distros; with all their backports they resemble 2.6 more than 2.4, so I don’t think going back to a new stable release every 3-4 years is a good idea.
Overall, 2.6 in my opinion is a big success and a good kernel, but what we need is a rock-solid branch, which has now basically been opened with the 2.6.x.y releases.
Dude… there are CRAZY amounts of patches in the changelogs… what do you mean by stagnant kernel development?
The kernel development model is fine in its current state; I do not know what the big fuss is about.
Distributions should stabilize their kernels; by stabilize I mean make sure new kernels do not break userland the way the 2.6.8 cdrw changes did. However, mainline should still be stable in the sense that it maintains performance and has minimal bugs.
I think that Linus should only release stable kernels after OSDL gives them a stamp of approval. All the members of OSDL, and the OSDL labs themselves, should be putting the kernel through QA and backwards-compatibility testing (after all, they have boatloads of machines from members like IBM/HP/Sun/etc.). People who want to play can do a bk download and mess around to their heart’s content, but the community at large needs to be a bit more protected from such brown-bag bug releases. Nvidia drivers no longer work on 2.6.11, and I kinda feel for Nvidia being put on a treadmill (so are we, but we’ve fixed it up in OSS).
OSDL has no major backing to do this. Linus, Andrew Morton etc. know better than to rely on corporate bureaucracy. These developers have been doing this stuff for a decade; it’s better to rely on them than on any single company.
well, it seems that 2.6.11.1 is out
http://www.ussg.iu.edu/hypermail/linux/kernel/0503.0/1451.html
> OSDL has no major backing to do this.
So IBM and Novell and Redhat don’t matter?
> Linus, Andrew Morton etc. know better than to rely on corporate bureaucracy
But OSDL is not a corporation in the pure sense – they exist at the mercy of Linus and Andrew, and in turn Linus and Andrew are being paid by the stakeholders of OSDL.
> These developers have been doing this stuff for a decade; it’s better to rely on them than on any single company
Times have changed and Linux is no longer a hobbyist OS. There’s serious infrastructure (and money) built around it, so are you saying that companies who’ve bet the farm on Linux should be held over a barrel?
A stable driver API, or at least a HAL that sits on top of the existing driver API and allows backwards compatibility. The problem isn’t binary drivers per se; it is that when only binary drivers are available and the company goes under, the company that buys up the remaining assets fails to continue developing that particular driver.
This is where, IMHO, Solaris will really pick up developers: the fact that there is a stable API makes life easier for the developer.
I’m not saying that it is impossible for something to occur on Linux, but with Linus’s arrogant attitude of “all or nothing”, little wonder there is very little movement by hardware companies to support Linux in regards to drivers.
> I’m not saying that it is impossible for something to occur on Linux, but with Linus’s arrogant attitude of “all or nothing”, little wonder there is very little movement by hardware companies to support Linux in regards to drivers.
I’m sorry, but isn’t the binary driver issue due to GPL requirements? (i.e. “tainting” the kernel with closed-source modules?) In that case Linus has little to do with that: the Linux kernel has been GPL’ed for a long time and there’s no turning back (it would be virtually impossible even if Linus wanted it).
The question is why hardware companies insist on closed-source drivers. They make money from people buying their hardware, not their software…
> I’m not saying that it is impossible for something to occur on Linux, but with Linus’s arrogant attitude of “all or nothing”, little wonder there is very little movement by hardware companies to support Linux in regards to drivers.
> I’m sorry, but isn’t the binary driver issue due to GPL requirements? (i.e. “tainting” the kernel with closed-source modules?) In that case Linus has little to do with that: the Linux kernel has been GPL’ed for a long time and there’s no turning back (it would be virtually impossible even if Linus wanted it).
Nope, that isn’t the case. There have been closed-source modules for a while – take the linmodem/winmodem modules that are available, or the Aureal driver, which is another example.
> The question is why hardware companies insist on closed-source drivers. They make money from people buying their hardware, not their software…
Oh please, get a clue about software development. If they reveal the source of their drivers, they reveal information about the hardware – and if they reveal information about their hardware, they become uncompetitive.
You assume that EVERYTHING is done in hardware, which is as far from the truth as you can get. Just take built-in RAID on motherboards: do you really think the whole thing is implemented in hardware? The same goes for SCSI – do you *really* think that the el-cheapo cards have everything implemented in hardware?
The issue shouldn’t be about source; what it should be about is catering to your bread and butter – that is, the people who make your platform what it is – or as sweaty Ballmer likes to scream, “developers, developers, developers”.
> Nope, that isn’t the case. There have been closed-source modules for a while – take the linmodem/winmodem modules that are available, or the Aureal driver, which is another example.
Err… you have totally missed the point. The issue I was pointing to is that it’s not that Linus has an “all or nothing” attitude towards closed-source, binary drivers: it’s the GPL, i.e. the license under which the kernel is released, which has a problem with them. That doesn’t prevent binary-only modules from being used (I myself use the nvidia driver), but doing so “taints” the kernel – there’s a warning to that effect when the module is loaded.
In other words, when you load a binary-only module in the kernel you are not accepting the terms under which the kernel is licensed. However, no one is going to enforce this and therefore no one really cares.
> Oh please, get a clue about software development.
No need to be so arrogant. I work for a software developer.
> If they reveal the source of their drivers, they reveal information about the hardware – and if they reveal information about their hardware, they become uncompetitive.
Show me proof that producing open-source drivers for a piece of hardware has ever made a company uncompetitive. There’s nothing so secret about hardware that you could learn through an open-source driver that you couldn’t also learn through reverse engineering. Sure, that takes a bit more time and manpower, but competitors will have access to those.
> You assume that EVERYTHING is done in hardware, which is as far from the truth as you can get. Just take built-in RAID on motherboards: do you really think the whole thing is implemented in hardware? The same goes for SCSI – do you *really* think that the el-cheapo cards have everything implemented in hardware?
That’s completely beside the point. There are excellent open-source SCSI drivers, for example; their existence does not negatively affect the revenue of SCSI hardware makers in any way.
> The issue shouldn’t be about source; what it should be about is catering to your bread and butter – that is, the people who make your platform what it is – or as sweaty Ballmer likes to scream, “developers, developers, developers”.
…except that we’re talking about hardware companies, here. They just want to sell their stuff, not sell software. I’m sorry but you’re not making logical sense, jumping from hardware manufacturers to developers. These are two completely different things.
The fact is that hardware manufacturers should encourage open-source drivers for their equipment, as it provides them with free labor (i.e. people developing drivers for them, as opposed to them needing to put resources into in-house driver development).
Since I like Linux, I think I’ve earned the right to criticize it :)
So, IMHO, the pace of kernel development is getting slower and slower. That is quite natural, since the kernel includes more and more technologies and hardware support. But we should face the fact that testing and stabilizing take much more time than before (and the growth is definitely not linear).
2.4.0 (01/2001) -> 2.4.11 (10/2001) = ~10 months (half the team)
2.6.0 (12/2003) -> 2.6.11 (03/2005) = ~14 months (full team)
Stabilizing the 2.4.x branch was different from 2.6.x, because 2.4.x was in maintenance mode and only part of the team was working on it. In the case of 2.6.x the whole team is working on it, and it has taken more time.
The current way of development has reached its limit and has become a little bit “chaotic”. There are no well-defined roles, and the whole organisation is changing.
Q: Who is the kernel maintainer?
A: Andrew.
Q: Does he create and announce new kernel releases?
A: No, Linus does it instead of him.
Q: What is Andrew’s role (as maintainer)?
A: Err… the -mm tree?
Q: For what?
A: To put new features into the -mm tree for testing before they go into the mainline kernel.
Q: Does anybody use the -mm tree? I mean, is it well tested?
A: ?
So in my mind the problems which should be solved are:
1. An expert team should be created to control development, with a well-defined voting and decision-making process.
2. A testing team should be organized the same way as the development team, to accelerate the testing and stabilizing process.
” OSDL has no major backing to do this.
So IBM and Novell and Redhat don’t matter? ”
No. Nobody except the developers matters.
I really don’t understand this eagerness for kernel version 3.0. I mean, it’s just a number; I personally hate this Opera 8 or MS Office 11 (or 12, I’ve lost count) business. IMHO, changing the first figure in the version number should be done when something really worthy of the change happens, like that rewrite in Visual Basic Linus has promised.
I hope one of the new releases is called “2.6.x.we_fixed_the_usb_malloc_bug”.
My webcam is gathering dust
Chuck-
> I hope one of the new releases is called
> “2.6.x.we_fixed_the_usb_malloc_bug”.
I think it was in 2.6.11.
I frankly think that this is the best decision for everyone. The kernel needs to be stabilized before companies will actually start supporting Linux wholeheartedly as a platform in and of itself.
>> These developers have been doing this stuff for a decade; it’s better to rely on them than on any single company
>
> Times have changed and Linux is no longer a hobbyist OS. There’s serious infrastructure (and money) built around it, so are you saying that companies who’ve bet the farm on Linux should be held over a barrel?
Times may have changed, but the way the kernel is developed (and its license) has not, nor should it.
If people want to bet the farm on Linux they are free to do so, but development should continue as it always has – it’s not the kernel developers’ problem what you do with the kernel being produced. If you want a super-super-super stable kernel, then use one from a vendor. Vendors take the kernel.org kernel, put it through serious QA, then release and support it in their enterprise products – use *that*.
Just because a lot of companies have all of a sudden decided to invest a lot of money in Linux doesn’t mean the kernel devs have to care – that’s *not* their problem. Their problem is developing a useful kernel that other people can use as a sane base for the stuff they want (including as a base for enterprise vendor kernels).
Linux needs to stay independent of big commercial interests or it’ll slowly be swallowed up and end up not being a free OS anymore.
Oh, and for the people complaining that the nvidia drivers don’t work out of the box on 2.6.11 – well, that’s really nvidia’s problem. They could GPL their driver and such things would be taken care of. They choose to keep their drivers closed; well, tough. And if you need to use those closed proprietary drivers, you have 3 options: stay with the 2.6.10 kernel, wait for nvidia to update their drivers, or get a different video card.
Is this true?
Or should I stick to using Windows XP for my binary-driver-only hardware?
The stable 2.6.x.y series will stop being maintained once 2.6.(x+1) comes out. That way they can do whatever they want with the 2.6.(x+1)-preX releases while patches for bugs and other fixes are applied to the 2.6.x.y versions. I suppose those patches would also be applied to 2.6.(x+1)-preX, so that when 2.6.(x+1) is released it has collected all the stability fixes from the previous version.
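For what it’s worth, that is also why the extra digit shouldn’t disturb out-of-tree code. Here is a minimal sketch (assuming the standard <linux/version.h> macros; as I understand it, the .y part only lives in EXTRAVERSION and never bumps LINUX_VERSION_CODE, and the MYDRV_NEW_API name is made up for illustration): a driver that has to straddle an interface change between 2.6.x releases typically does something like this, and a 2.6.x.y fix release should never force a new #ifdef.

```c
/* Hypothetical out-of-tree module snippet, not taken from any real driver. */
#include <linux/version.h>

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 11)
/* build against the interface as it looks in 2.6.11 and later */
#define MYDRV_NEW_API 1
#else
/* fall back to the pre-2.6.11 interface */
#define MYDRV_NEW_API 0
#endif
```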
And I thought 3 numbers was too many.
> Oh, and for the people complaining that the nvidia drivers don’t work out of the box on 2.6.11 – well, that’s really nvidia’s problem. They could GPL their driver and such things would be taken care of. They choose to keep their drivers closed; well, tough. And if you need to use those closed proprietary drivers, you have 3 options: stay with the 2.6.10 kernel, wait for nvidia to update their drivers, or get a different video card.
Actually there’s a 4th option. Use the patches that are provided in the linux forum at http://www.nvnews.net.
> The issue I was pointing to is that it’s not that Linus has an “all or nothing” attitude towards closed-source, binary drivers: it’s the GPL, i.e. the license under which the kernel is released, which has a problem with them. That doesn’t prevent binary-only modules from being used (I myself use the nvidia driver), but doing so “taints” the kernel – there’s a warning to that effect when the module is loaded.
Which clause of the GPL are binary-only modules meant to be in conflict with?
As I understand it, the “tainted” message just means that the kernel developers will not look into problems arising with such a kernel, because they don’t have the complete source.
I could be mistaken. This had always been my interpretation, but then again I was just assuming this was the case.
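To illustrate the mechanics, here is a minimal sketch (a toy module, not anyone’s real driver; the module name and messages are made up): the kernel decides whether to taint itself from the MODULE_LICENSE() string a module declares. A string like "Proprietary" here is what triggers the tainted warning and, as far as I know, also locks the module out of symbols exported with EXPORT_SYMBOL_GPL.

```c
/* hello_taint.c – hypothetical example module, built against the kernel headers */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

/* "GPL" keeps the kernel untainted; a string like "Proprietary" sets the taint flag */
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Toy module showing where the license declaration lives");

static int __init hello_init(void)
{
	printk(KERN_INFO "hello_taint: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	printk(KERN_INFO "hello_taint: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```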