Linus Torvalds, the creator of Linux and the maintainer of the development kernel, expressed concerns that the kernel development process may need to be changed to make sure that Morton is not overworked. “One issue is that I actually worry that Andrew will at some point be where I was a couple of years ago – overworked and stressed out by just tons and tons of patches,” said Torvalds. “If Andrew burns out, we’ll all suffer hugely.”
It’s not hard to find, amongst the lieutenants, some who could relieve the chief maintainer; a bit of delegation doesn’t hurt.
Today’s kernel is nothing without Morton, so it’s good that Torvalds is worried, and I hope Morton can avoid being overworked.
I agree; they should also get more chief maintainers. It’s one of the weakest points in the chain (and in Linux). If I’m correct, the FreeBSD team is using a different approach; I think I read something about that some months ago.
IIRC there are over 300 people in the FreeBSD community with commit access.
It’s good that people are starting to talk about this now.
Of course, Andrew does not totally agree with Linus; after this posting by Linus, he replied:
“I’m doin OK.” He went on to explain, “patch volume isn’t a problem [with regards to] the simple mechanics of handling them. The problem we have at present is lack of patch reviewing bandwidth. I’ll be tightening things up in that area. Relatively few developers seem to have the stomach to do a line-by-line through large patches, and it would be nice to refocus people a bit on that. Christoph’s work is hugely appreciated, thanks.” He also suggested that the flow of major features lined up for the kernel has been slowing down, hinting that some day the kernel will be a completed project: “as I said, famous last words. But we have to finish this thing one day.”
Just like they divided the work up between Linus and Andrew because Linus was overloaded, maybe it’s time to spread out Andrew’s load a little. Maybe Robert Love or Alan Cox (again) could take on a more formal role reviewing incoming patches to help.
I am sure this will get worked out in the end.
I doubt the kernel will ever be totally finished; new devices (which need drivers) and new technology (new filesystems, etc.) will always keep haunting the developers…
This is why the various BSD projects have a number of committers, and in the case of FreeBSD and NetBSD, core teams as well.
If one dev or another gets hit by a bus (which seems to be a scenario thrown about far too often WRT Linus, BTW), it’s a tragedy for the dev and his family, but life in the project goes on.
It is Linus’ fault that the Linux development model sucks so much that there have been so many articles of late about the changes he’s making to address it. There will be more.
Indeed… it’s too bad that the Linux development model keeps Linux SOOO far behind the BSDs in terms of development.
Oh, wait, it’s not behind, and in fact in some (many?) areas it’s ahead. Oops.
That is true. But I think the parent poster was referring to the idea that the BSD development process is far more resilient in some regards; notably, that they do not rely so much on one or two developers to maintain the projects.
Sure the Linux kernel has gobs and gobs of submitters, but only one or two people at the helm. And while it is true that if say Torvalds or Morton were to die, some other group would become the “standard” kernel, can you honestly say that the direction of that project would be the same as Linus’ tree?
Something to ponder I guess.
“That is true. But I think the parent poster was referring to the idea that the BSD development process is far more resilient in some regards; notably, that they do not rely so much on one or two developers to maintain the projects.”
The BSD model with core groups only applies to centralised SCMs like CVS. With a distributed system like Git, the concept of commit access is irrelevant. Besides, the development methodology of using Andrew Morton’s -mm tree as a staging area is different enough that comparisons are not one-to-one.
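Just to illustrate the point about commit access (a minimal sketch only; the URL and the “my-mirror” remote are placeholders, not any official workflow), with a distributed SCM everyone has full commit rights to their own copy of the history:

    # clone a complete copy of the kernel history (URL is only illustrative)
    git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    cd linux

    # commit freely to your own clone; no central "commit bit" is required
    git commit -a -m "my local change"

    # publish your clone wherever you like so others can pull from it
    # ("my-mirror" is a hypothetical remote you would have configured)
    git push my-mirror master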
Yes, I agree with you. What if Morton dies? What if Linus dies? Have they appointed crown princes?
You may laugh at this, but since death is the only thing we all have in common, it is far from unlikely. It would be nice if a kernel that so many people around the world depend on had decent backup dev team leaders.
“You may laugh at this, but since death is the only thing we all have in common, it is far from unlikely. It would be nice if a kernel that so many people around the world depend on had decent backup dev team leaders.”
The ever present danger of the cathedral (hierarchical, interdependent) model. What would happen if Bill Gates got hit by a bus?
Seeing as how Bill Gates does not do any actual coding, I’d say that things would go on pretty much the same as before.
There’s more to Microsoft than Bill Gates.
The most underrated thing in technology is leadership.
Gates and Torvalds are both excellent leaders, albeit with different aims.
The question of how to nurture the next generation of such is not a new one.
The death of Bill G. would be disastrous.. Earth would spin out of orbit from all the ppl jumping!
I don’t think you’ll have to worry. There are lots of people ready who could handle the job if needed. As previously stated, the SCM is distributed, and Linus and Andrew aren’t the only top dogs on the kernel team.
I have said for a while that Linux outgrew Linus a long time ago. If his genuine concern were the ultimate success of Linux, he would pass the torch to a group that can advance the kernel properly and bring Linux to the enterprise quality it needs. Half the reason there is so little third-party software for Linux is because of this.
That’s just silly. Linux is already at enterprise-quality level.
Why would the fact that there is little (in your opinion) third-party software for Linux be related in any way to Linus being at the helm of kernel development? I’m sorry, but that just doesn’t make sense.
Maybe for your enterprise, but not the one I work in. I get tired of explaining why Linux is not a serious enterprise player compared to Unix. Developers are not going to spend money developing for a platform that changes every day on the whim of a single individual, competent or not.
“Maybe for your enterprise, but not the one I work in.”
Linux is used by enterprises around the world. It is used by Wall Street companies. It is used by DaimlerChrysler. It is used by Unilever. It is used by Autozone. It is used by Weta Digital. It is used by dozens of other large and successful enterprises.
What company do you work for?
“I get tired of explaining why Linux is not a serious enterprise player compared to Unix.”
Oh, we should just take your word for it, then? Hey, here’s a novel idea: stop making these claims if you can’t be bothered to back them up.
“Developers are not going to spend money developing for a platform that changes every day on the whim of a single individual, competent or not.”
What platform would that be? It certainly isn’t Linux: I upgrade kernels often and I’ve never had any app or lib breakage.
So it seems you’re either mistaken, or lying. Which one is it?
“Maybe for your enterprise, but not the one I work in. I get tired of explaining why Linux is not a serious enterprise player compared to Unix.”
I’m sorry, maybe I’m parsing your comment incorrectly, but are you really saying that Linux isn’t ready for the enterprise you work in because you keep telling people it isn’t? That’s pretty comical. And you do know that plenty of large enterprises (I’m guessing DaimlerChrysler and IBM are bigger than where you work) actually use Linux, right?
Maybe this is too hard to control, but it seems like a distributed model might be better. Imagine a Wikipedia-like system where each code file had its own page. When you applied a patch, the edit would be recorded, as on Wikipedia, so it could be reviewed side by side with the old version. If you gave a few hundred people access to the repository to commit changes, and then left the system open so that the public could submit corrections (which might have to be vetted by one of those few hundred people before being committed), you could have a system that is more distributed amongst more people, as well as more visible and open, in the sense that it would be far easier for developers to do code reviews and help out by keeping the kernel clean.
I could be missing something here, but it seems like something along those lines might help.
That is how it works now.
Linus approves patches approved by Andrew, who approves patches approved by the various subsystem maintainers. You submit your patch to LKML to have it reviewed, and then cc the subsystem maintainer when you think it’s ready for inclusion.
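As a rough sketch of that submission step (the maintainer address below is a placeholder, not a real contact), a patch typically goes to the list and the relevant maintainer by mail:

    # turn the most recent commit into a mailable patch file
    git format-patch -1

    # post it to LKML for review, cc'ing the subsystem maintainer
    # (the maintainer address is a placeholder)
    git send-email --to=linux-kernel@vger.kernel.org \
                   --cc=some-maintainer@example.org \
                   0001-*.patch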
That doesn’t sound very distributed to me. It sounds like all the patches flow up the chain until they ultimately get to Linus. Perhaps several people have commit access, and once a patch is approved, Linus may not do all of the actual committing – but the approval process still seems to boil down to one guy.
What I suggested was more like Wikipedia – once someone with commit access (of which there would be hundreds) approves a patch, they submit it themselves and let the community approve it over time by editing or requesting edits to the repository, thus removing the archaic chain of approvals that ultimately depends on one person.
> What I suggested was more like Wikipedia
Terrific idea! Make Linux more like Wikipedia! Then we could all settle down happily with Windows, and forget about the rest.
Goran J,
Sweden
The thing is that you, as the community, then approve Linus’ patch approvals by using his tree. You could use Andrew’s tree if you wanted, or any tree for that matter. The only thing special about Linus’ tree is Linus. You could even start your own tree and pull patches from all over the place, just as Linus does.
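For instance, something like this is all it takes to run your own tree and pull from others (a sketch only; the remote names and URLs are made up for illustration):

    # add other people's public trees as remotes of your own tree
    git remote add linus  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
    git remote add friend https://example.org/friend/linux.git

    # merge whichever work you choose, from whichever tree you trust
    git pull linus master
    git pull friend experimental-branch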
You could say the exact same thing about any of the BSDs’ CVS repositories. It’s just as stupidly simple to pull an entire copy of, say, the FreeBSD CVS repo as it is to have your own Linux kernel tree, and I know that various developers on all of the BSD projects do exactly this.
Kind of makes me wonder what real differences there are WRT “centralized” vs. “distributed” SCM systems. You can do the same things with both, as only the specifics vary. The only difference is that in the BSD projects, there is more than one developer with commit access to the “official” source code repositories.
I’m just saying that your suggestion is based on removing Linus as a bottleneck for the main tree.
The problem is that the ONLY thing that makes his tree the main tree is him.
If you forked the tree and put it up as a wiki or something, no one would ship it. They ship Linus’ tree because they trust him.
“I’m just saying that your suggestion is based on removing Linus as a bottleneck for the main tree.”
I personally have never made the claim that Linus is a bottleneck. I *am* making the claim that only having one or two people in charge of the development of the “Official” tree is a liability.
Having a single developer in charge is often required when a project gets started, but once the project has established itself and has a clear set of goals and a direction, it is foolish to continue with an infrastructure that is best suited to small projects.
I’m sorry, I confused you with the OP.
Still, Linus is in “charge” of the “official” tree because it’s his tree, and it’s “official” because people like his work.
Remove Linus and you’ll get a new “official” tree and a new man in “charge”, possibly after a transition period with several “equal” trees and maintainers.
“Having a single developer in charge is often required when a project gets started, but once the project has established itself and has a clear set of goals and a direction, it is foolish to continue with an infrastructure that is best suited to small projects.”
Ohh, ohh…
Let’s form a committee!!!
Better yet a consortium!!!
Maybe we could just elect the lead developers!!!
Maybe even make the kernel democratic, we could vote!!!
Bullshit. I’ll take my kernel with a Linus thanks. The best way to dissolve having a single reference tree (Linus’), and to make the kernel development process moribund, is to design by committee. You show me one large ‘democratic’ / ‘committee run’ open source project, and I’ll show you a project that has rules and bureaucracy that will make you want to bash your head against the keyboard.
The example that comes to mind here is Debian. I use it, I love it, but the ‘Sarge of the ages’ wouldn’t have been tolerated by a Linus. Indeed, if you think the amount of time it took to get ‘Sarge’ out the door was long, you should follow the debate on reforming the process. It’s all good and democratic and all, and you can pry my Debian box away from my cold dead fingers, but kernel development is Utopian by comparison [and I run testing / unstable]. When I buy a new motherboard with some exotic chip-set I want that support in the kernel yesterday, not the great debate about how to get it there…
{I’ll reply to myself, bad form I know}
Note, I think this is probably a good thing for Debian… At the end of the day, the existence of ‘stable’ is a good thing… But you don’t really want that for the kernel process – you want the new stuff in yesterday. The distros have always cherry-picked a ‘good’ kernel and patched that for the life of a release cycle… With the new ‘sucker’ kernels (x.y.z.d’oh) even this should get easier.
I’ve been giving this some more thought. A tree with a wiki interface could be used as a repository to pull one-liners and small janitorial fixes from.
If, on the other hand, it was possible to apply major patches against the whole tree I guess it would just explode.
Imagine running a kernel compiled from 50% professional C code and 50% “Hey, check out my porn site [url]”…
That is assuming anyone would be bothered to actually use it…
There was an article in the WSJ recently about how Microsoft has reduced the number of reported bugs in their beta software by an order of magnitude in part by instituting a “bug jail”. If a developer has too many bugs, that person isn’t allowed to write any new code until his/her bugs have been reduced. This makes the developers concentrate harder on writing correct code the first time.
If the same policy could be instituted for the Linux kernel, there would (in theory) be fewer patches and therefore less work for Morton to do.
Linus has about 30 lieutenants, but the structure is too top-down, I think. Furthermore, I can hardly believe those 30 are the only ones gifted enough to hack the kernel.
Perhaps it’s time to revise the kernel headquarters a bit and seek a situation where Linus, Morton, and some others do more designing and innovating and less blunt patching.
Andrew isn’t concerned about patching. Git does a very good job of automating that process. His concern is the level of testing done on the patches he receives from the subsystem maintainers. Andrew ends up doing a fair bit of debugging and fixing of patches that come upstream, whereas he should be skimming the patches and filtering out the ones that don’t embody the correct design decisions.
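A sketch of that mechanical step, assuming the patches arrive as a mailbox (the mbox filename here is hypothetical):

    # apply a mailed patch series mechanically, one commit per patch
    git am subsystem-series.mbox

    # if a patch doesn't apply cleanly, fix it by hand and continue
    git am --resolved    # or back the whole series out with: git am --abort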
He needs more commitment from his subsystem maintainers. Perhaps the Microsoft “bug-jail” concept would be somewhat appropriate: “look, if you keep giving me broken patches, I’m going to…”