Linked by Thom Holwerda on Thu 14th Feb 2008 17:34 UTC, submitted by anonymous
"Make no mistake about it, the Linux 2.6.x kernel is a large undertaking that just keeps getting bigger and bigger. Apparently it's also getting harder to maintain in terms of ensuring that regressions don't occur and that new code is fully tested. That's where the new 'Linux Next' effort comes in."
About time
by diegocg on Thu 14th Feb 2008 19:27 UTC
diegocg
Member since:
2005-07-08

The current "-mm -> mainline" maintenance model has been in use since before git was announced.

It looks like they're just using git and its decentralized development style to offer a public "unstable" branch, instead of relying on Andrew Morton to do it all by hand.

Reply Score: 2

-mm to mainline
by MrEcho on Fri 15th Feb 2008 02:56 UTC
MrEcho
Member since:
2005-07-07

From what I understand mainline is just a bunch of patches from the -mm branch that may or may not have been fully tested with the upcoming release of patches.

The 'Next' branch would be a pre-mainline test branch.
I think this would cut down on the -rc's.

It's like: let's pull patches B, T, A, R, O, I, M, E and see if they even work together in 'Next', and fix things if they don't. Once all the major issues have been fixed, move it to mainline testing for the small things.
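A rough sketch of what that "merge every tree, see if they work together" step might look like with git. Everything here is made up for illustration (a toy repo with two pretend subsystem trees), not the actual linux-next scripts:

```shell
#!/bin/sh
# Toy sketch of a 'Next'-style integration branch: start from the
# mainline tip, merge each subsystem tree in turn, and stop loudly
# the moment two trees conflict. All names here are illustrative.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email tester@example.com
git config user.name tester
main=$(git symbolic-ref --short HEAD)

echo base > file
git add file && git commit -qm "mainline base"

# Two pretend subsystem trees, each branching from mainline.
for tree in mm-tree net-tree; do
    git checkout -q -b "$tree" "$main"
    echo "$tree work" > "$tree.c"
    git add "$tree.c" && git commit -qm "$tree: new work"
done

# Rebuild the integration branch from scratch and merge every tree.
git checkout -q -B next "$main"
for tree in mm-tree net-tree; do
    git merge -q --no-ff -m "Merge $tree into next" "$tree" \
        || { echo "conflict merging $tree"; exit 1; }
done
echo "merged $(git rev-list --count "$main"..next) commits on top of mainline"
```

The point of rebuilding `next` from scratch each time is that cross-tree conflicts surface at merge time, before anything reaches mainline.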

Yes, it may be more work, but the kernel is getting huge, with so many parts. As a sysadmin I wouldn't mind if it took longer to get a better kernel.

Reply Score: 1

RE: -mm to mainline
by elsewhere on Fri 15th Feb 2008 03:46 UTC in reply to "-mm to mainline"
elsewhere Member since:
2005-07-13

From what I understand mainline is just a bunch of patches from the -mm branch that may or may not have been fully tested with the upcoming release of patches.


No, it's not quite that bad. -mm is the testing ground for patches and new technologies, and Andrew Morton is pretty good about allowing people to submit their work even if he thinks little of it.

The idea is that -mm should be getting enough testing that issues can be worked out on certain components or updates, allowing them to be safely moved to mainline. Doesn't always work that way.

Linus accepts patches from various branches aside from -mm, mainly the subsystem maintainers' trees, and he may or may not accept patches pushed up from -mm. Or he may pull patches from -mm even if they're not ready, because he's said in the past that the best way to get exposure and find the bugs is to make the code public (though such features will usually be marked as "EXPERIMENTAL" in the kernel config).

I think the real problem is that these patches and updates get pulled in from various branches, but aren't actually tested together (particularly regression tested) until they're in mainline. Each of the branches generally syncs itself on a regular basis with the git version of Linus' mainline branch, but since each branch generally emphasizes testing of a particular component subset, it's not always possible to find the regressions until they hit mainline. If you scan through the LKML, particularly around an upcoming -rc when everyone is scrambling to get their patches pushed upstream, you'll find a constant stream of regressions that need to be further patched, reverted, or left as is. The growing complexity simply magnifies the problem.
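When a regression does slip into mainline, the usual recovery tool is git bisect, which binary-searches the history for the first bad commit. A toy sketch of that workflow; the five-commit repo and the grep-based check are stand-ins for a real kernel build-and-boot test:

```shell
#!/bin/sh
# Toy sketch of hunting a mainline regression with git bisect.
# Commit 3 of 5 plants a "BUG" line; the bisect run script treats
# any tree containing it as bad (stand-in for build-and-test).
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email tester@example.com
git config user.name tester

for i in 1 2 3 4 5; do
    echo "change $i" >> file
    if [ "$i" = 3 ]; then echo "BUG" >> file; fi   # the regression lands here
    git add file && git commit -qm "commit $i"
done

# HEAD (commit 5) is known bad, HEAD~4 (commit 1) is known good.
git bisect start HEAD HEAD~4
# Exit 0 = good, non-zero = bad; bisect halves the suspect range each step.
git bisect run sh -c '! grep -q BUG file' | grep "first bad commit"
```

With a real kernel the run script would build and boot-test each candidate, which is exactly the kind of legwork the small -mm testing pool can't cover across every hardware config.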


The 'Next' branch would be a pre-mainline test branch.
I think this would cut down on the -rc's.


I don't know about that... it would find many of the structural problems that can break kernel compilation or cause clearly defined conflicts, but oftentimes these problems don't become visible until well into the -rc cycle. Frankly, there is the same problem with -mm. While it is supposed to be the testing ground for new tech, the pool of testers who actually make an effort to use it and document issues is relatively small, which leads to a bit of an inbred environment that can't realistically scope far-reaching issues. They're good at snagging some of the gaping problems, but issues that only come up with particular hardware, software, or hardware/software configs, for instance, will frequently slip through because the testing pool isn't large enough.

Yes it may be more work, but the Kernel is getting to be very huge with so many parts. As a sysadmin I wouldn't mind if it took longer to get a better kernel.


Best to stay away from the bleeding edge, then. This is where the enterprise-oriented distros offer value. They will often backport security and stability fixes, and on rare occasion functionality enhancements, to an existing stable kernel, rather than forcing an upgrade. I think this new process will help streamline testing, but it is a band-aid only.

Reply Score: 4

This will be like nothing compared to...
by Meridian on Sat 16th Feb 2008 09:56 UTC
Meridian
Member since:
2007-12-18

The next grand step for Linux will be to re-architect the core as a C++ microkernel. However, before the official announcement is made, we need to place Mr Torvalds at least 2 AU beyond Earth's orbit, thereby ensuring life on our planet is safe when he reaches critical mass.

Reply Score: 3