Accommodating large patch sets in Linux is expected to mean forking off a 2.7 version of the platform, according to Andrew Morton, lead maintainer of the Linux kernel for Open Source Development Labs (OSDL). Commenting on the planned 2.7 release of the Linux kernel, Morton said OSDL expects a few individuals with big patch sets will want to include them in the kernel, but there will be no place to put them.
I don’t think forking is a good idea; on the other hand, most vendors already ship their kernels with patches etc. But I really hope there will be one main kernel source tree.
Are they talking about forking 2.7 from 2.6 (which qualifies as “well, DUH!”), or forking 2.7 within itself, as in 2.7.3-rh, 2.7.3-sgi, etc.?
The article heading is misleading: Andrew Morton is talking about the possibility of a 2.7 development branch. This is not generally considered a “fork” in the traditional sense of the word.
There is still only one stable branch; this isn’t something new to Linux.
I was wondering the same thing.
AM is talking about a *single* 2.7 development branch for when disruptive changes are submitted to lkml.
Everyone knew that they would at some point have to fork off the 2.7 development branch, just as they have done many times before. There was no doubt that it was coming, but now it seems to be looming on the horizon. The only difference is that there has been more active development in the 2.6 stable tree itself than in earlier stable kernels.
So, really nothing unexpected happening here.
I think maybe it would be best to fork the code. The Linux kernel is being used to target everything from the smallest of computers to the biggest of computers. The kernel code for a handheld should be significantly different from the kernel code of a supercomputer; there is no reason to make features that will never be used together work together. I personally think they should have three flavors of Linux: one for embedded devices, one for personal computers and low-end servers (up to 8 CPUs), and one for supercomputers. They could have a general code section and then specialized code for each type of device. Just my opinion.
OK. If this is the case, then a better title may have been “2.6 seeks development code fork” or “2.7 development fork imminent”. 😉 But that’s just me.
This means that the developers are seeing that “One Kernel To Rule Them All” may not be such a good idea. I would love to see kernel projects solely devoted to desktop and multimedia uses, with another for embedded projects and another for security and high availability. Each of these markets demands specific design philosophies, which cannot all be met with a single kernel. I would especially love to see a desktop-specific Linux kernel that can handle hardware as easily and simply as a Mac does. Forking will only become a problem when the specific kernels start forking themselves; for that, a standards body for each kernel rendition would be essential.
I like the current pace of 2.6 – mainly (only?) because of the non-existent 2.7 tree. Some may see that as a weak point, since if devs are working on the stable tree then it’s not really stable… If there are larger changes to be done, I think it’s a good idea to open up the gates of 2.7; the drawback is that good things will have to be ported back, which costs human resources. I don’t know how it would turn out, and since I’m not a kernel dev I’ll leave it up to them what they want to do.
An exercise for you!
Define in simple and clear terms the range of hardware that counts as “desktop”.
You’re absolutely right. Five years ago, you would have defined “desktop” as single or maybe dual CPU SMP machines. When the dual-core Opterons come out, a 4-way NUMA architecture is not at all out of the question for a desktop. At that point, I’ll bet you’ll regret that you did all the NUMA scheduler work in the “supercomputer” branch instead of the desktop branch!
Limited forking can be useful, but I doubt that there would be any good way to start forking the kernel. Remember that one of the major reasons the “true” Unices are dying today is too many mutually incompatible forks.
Yeah, I agree with you: the current setup spreads Linux too thin.
I’m not an expert on this, nor am I a kernel dev, but I’m wondering if it’s possible to have a “base Linux” kernel that applies to all platforms (from handhelds to the big tin) and appoint delegates to oversee the respective forks/trunks. Code that is common to all platforms would live in this base kernel, while enhancements made especially for embedded/handheld use, like memory-management code, would be placed in the corresponding branch. The desktop version of the kernel might have a different scheduling algorithm from the server version. Since everything is open-source GPL, I figure it shouldn’t be difficult to backport an enhancement from one version that would be good to apply across the board.
I’ve been thinking about this for many years. Maybe it’s not being done because it’s not as easy as it looks.
Bad heading, read the enlightened posts. This is just the usual game of versioning leapfrog with a stable branch and a development branch. Still one standard vanilla kernel…
Andre, what you’re talking about is already being done: different scheduling techniques, some targeted at desktop use, others at servers, all applied via patches.
It’s currently up to the individual user to apply whatever patches they want or just use the vanilla code. Many distros patch their own kernels.
The Linux kernel is that general base you were referring to, and all the patchsets are the specialization you mentioned.
Isn’t this a bit like when Alan Cox had his own fork of the 2.2 kernels?
Those trees merged back for the 2.2 release. Nothing to see here.
So the point of the article can be summed up with the headline “Linux development to continue as normal?”
Truly groundbreaking.
The real news in this article is not the upcoming creation of the 2.7 branch (as has been pointed out, that isn’t real news) but that Andrew Morton claims that “no leading-edge” development occurs in the open-source world. (He includes Linux as non-leading-edge, since it’s a Unix clone.) He also adds that anyone who wants to do leading-edge development should make it closed source and “go out and make money with it!”
Maybe RMS’s misgivings concerning the Linux / Open Source movement are starting to make *some* sense. I’m not a zealot myself, by any means, but I’m just pointing out that in the article a leading Linux developer is denigrating the free software movement.
“Maybe RMS’s misgivings concerning the Linux / Open Source movement are starting to make *some* sense. I’m not a zealot myself, by any means, but I’m just pointing out that in the article a leading Linux developer is denigrating the free software movement.”
He isn’t. Read the words more carefully: he never said people should make it proprietary. He said to form a company around it.
Moreover, there is a transcript of AM’s talk at the Linux symposium:
http://www.icims.csl.uiuc.edu/~lheal/doc/Linux/morton_speech.html
“My talk will be kernel-centric but is presumably, hopefully, applicable to and generalizable to other projects within the free software stack — capital “F” Free.”
He also praises RMS for his contributions to Free software and says that it is important and should be appreciated. Moreover, his argument is that free software tends to be more successful as system software. Read the whole thing; it’s a very interesting read.
This is funny, because Linus said a goal of 2.6 and further kernel development was to avoid forking into several different versions of the kernel.
Anonymous is right. He said to try and make money out of it. That doesn’t imply making it proprietary.
BTW, he’s right in that most open source development is boring thirty year old nuts and bolts stuff. But then, most proprietary software development is, too. Bleeding-edge projects are the exception *everywhere*.
Not again. The title is misleading: this just talks about the possibility of a development branch, as usual.
Linus and AM are on the same track.
The OSDL says that, in their opinion, driver quality for other OSes is not the same as that of Linux.
I see: because only the Linux community can write drivers? Give me a break. The hype is starting to warp their brains.
Well, they are right. The driver quality for other OSes is not the same: many drivers for some hardware are significantly better in other OSes than in Linux…
But hey, they can’t do miracles when they don’t have the full specs.
So, will the BSD guys do the “forking”?
mod me down
Eu: so why on earth are you still reading this website?!
And BTW, isn’t your post “inciting flamewar” in itself?!
Thanks for the laugh…
“I don’t think forking is a good idea … I really hope there will be one main kernel source tree.”
This is truly funny: why on earth would a desktop kernel be the same as an embedded one? rofl
“This is truly funny: why on earth would a desktop kernel be the same as an embedded one?”
Why not? (At least as far as the source goes.) While the Linux kernel is “monolithic”, you can make it pretty tiny by compiling in only what is required for a device’s hardware.
Much of uClinux was merged into mainline 2.6, and a lot more 386isms were removed as well, so I think Linus is trying to keep the same kernel for every platform.
You still need to patch or use “real” uClinux for MMU-less chips, but those are getting less common nowadays.
“Why not? (At least as far as the source goes.) While the Linux kernel is ‘monolithic’, you can make it pretty tiny by compiling in only what is required for a device’s hardware.”
It would be more correct to call it “modular” instead.
“Much of uClinux was merged into mainline 2.6, and a lot more 386isms were removed as well, so I think Linus is trying to keep the same kernel for every platform. ”
Of course. Linus has clear reasons for this: it helps with end-user testing. He has explained it very clearly on lkml.
I’m not sure I like the idea. Developers have lives; that’s why development is moving at the pace it is, and I like that pace. Forking off another kernel tree will split the developers apart and slow down development of the 2.6 kernel.
What seems to me like a good idea is to modularize the code so that you can just plug things in and out. That way, if the kernel got forked, it wouldn’t be much work to remove and add support. I would also like to see projects dedicated to only certain parts of the kernel: for example, one group does networking, another does video, and maybe one checks and approves the code. From then on the code could be pieced together in whatever way suits people, and because there’s only one group working on a particular part of the kernel, there would be no duplication. “One size fits all”, so to speak: one driver or piece of code to support some hardware would work on all forks. Each fork would then be kind of like a distribution of pieced-together code.
Don’t confuse one particular build from the Linux sources with “the Linux kernel”. Linux (vanilla) is just a reference implementation including many examples for different deployment scenarios, and the source tree also contains an excellent tool for selecting suitable code parts to compile.
For specific tuning there are many options:
* Using the included source selector (i.e. make menuconfig), compile only the parts of the source tree suitable for your application.
* Tune the kernel with sysctl and kernel boot arguments (see the sketch below).
* If needed, apply suitable patches from any of the non-vanilla kernel forks available.
We need it; the changes queued in 2.6-mm and elsewhere are starting to show. Everyone knew that a 2.7 would come when significant changes could no longer be absorbed while keeping 2.6 and 2.6-mm “sane”.
Let’s bring on the 2.7 dev branch.