This study focuses on the distinguishing traits of the Linux managing model. It introduces the concept of process to capture the idea of impermanence, dissolvability and change. Far from being a predictable flow of programming, assembling and releasing activities, it is suggested that the Linux development process displays a stream of activities that keep feeding back into each other, thus creating a complex and unpredictable outcome.
A change from the ill-informed propaganda that is the usual staple of IT publishing.
You’re right – but we’d be lucky to see more than 5 comments on this topic. The majority prefers bashing and trolling…
I’m still not sure I like the code evolution model though. I do see how the development of the kernel code can look like that from a macro level, but once you get down to the lower levels of kernel functionality it starts to break down IMHO. The O(1) scheduler, for example, is most definitely NOT a descendant of the previous scheduler. In evolutionary terms it would be like two cats mating and producing a dog.
IMO the code governing low-level functionality is initially created using what the article describes as:
“Traditional software development processes are based upon a four-stage cycle involving: a) Planning; b) Analysis; c) Design; d) Implementation.”
It is only after the initial implementation that the feedback loops the article mentions come into effect.
So in brief, I think there are actually two processes affecting kernel development: one short initial phase of isolated creation, and a far longer (sometimes continuous) phase of evolutionary progress. This is why I’m not too keen on the author’s ESR model when applied to the creation of new functionality. New functionality is most often driven by a specific demand to be met, not by evolutionary selection from a large code pool. So new code HAS a firm direction, a direction dictated by the demands made by the users. The kernel might take the scenic route to get there, but the destination is becoming apparent: a responsive, secure, desktop-oriented kernel and accompanying environment. This seems inevitable because that’s what the userbase appears to want.
Yes, this is indeed a nice article, and it is indeed what is happening.
The fact that some parts of open source software are being developed in the ‘traditional’ way is implicitly explained in the article, especially if you take ‘contributor’ to mean both an individual and a company.
As for the O(1) scheduler, it has also been available for quite some time, as patches or in private trees.
So the exact way something is coded does not really matter much (for the purposes of this article); what matters more is how something gets included in an open source program.
We made it this far…
If we skip planning, analysis and design until the first implementation, maybe we’ll save a little time.
I love the ideas of patch management and modular code, libraries of methods, etc. Personally I like to write libraries that wrap those complex libraries into easy, usable functions. But I mostly write Perl.
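For instance, here’s a rough sketch of the kind of wrapper I mean (fetch_url is just an invented name for this example; it folds LWP::UserAgent’s object-oriented interface into one simple call):

    use strict;
    use warnings;
    use LWP::UserAgent;

    # Hide LWP::UserAgent's OO plumbing behind one simple function:
    # give it a URL, get the page body back (or undef on failure).
    sub fetch_url {
        my ($url) = @_;
        my $ua = LWP::UserAgent->new(timeout => 10);
        my $response = $ua->get($url);
        return $response->is_success ? $response->decoded_content : undef;
    }

    my $page = fetch_url('http://www.kernel.org/');
    print defined $page ? "got " . length($page) . " bytes\n" : "fetch failed\n";

The caller never has to know about user agents, response objects or timeouts; that’s the whole point of the wrapper.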
Code is just a bunch of algorithms slapped together. If we could communicate these concepts and algorithms to each other and teach each other (for free) through the use of our modern media and technology, maybe we could learn how to write better code faster. Maybe. Or we could all get high and go chill and play video games. It’s all good.