*"The whole 'microkernels are simpler' argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. The whole argument that microkernels are somehow 'more secure' or 'more stable' is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure. And the argument that you can 'just reload' a failed service and not take the whole system down is equally flawed."*

**My take:** While I am not qualified to reply to Linus, there is one thing I want to say: just because something is difficult to program does not make it the worse design.


*But it seems to me you are saying you have both a logical model and an implementation model, and that they are better--across the board--than any and all previous approaches to any and all computing problems. Am I right? How do you go about proving this? Is there a single mathematical model I can look at to get a grasp of this?*

The COSA model is a synchronous model. It is the same model used by logic circuits. Essentially, COSA transforms software into a signaling system comparable to hardware. My claim is simple and is not an extraordinary claim that requires extraordinary proof: if logic circuits can be proven correct, so can COSA programs.
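To make the analogy concrete, here is a minimal Python sketch of what a synchronous, signal-based network of cells might look like. This is my own illustration, not actual COSA code: the `Cell` and `tick` names are invented, but the idea matches the claim above, since every cell updates in lockstep from the previous tick's signals, exactly as clocked gates do.

```python
# Hypothetical sketch (not actual COSA code): a tiny synchronous,
# signal-based "cell" network evaluated in discrete ticks, the way
# a clocked logic circuit updates all of its gates at once.

class Cell:
    """A cell fires an output as a pure function of its input signals."""
    def __init__(self, fn, inputs):
        self.fn = fn          # combinational behavior of the cell
        self.inputs = inputs  # names of the signals it listens to

def tick(cells, signals):
    """One synchronous step: every cell reads the *previous* signal
    values and all outputs are computed simultaneously."""
    next_signals = dict(signals)
    for name, cell in cells.items():
        next_signals[name] = cell.fn(*(signals[i] for i in cell.inputs))
    return next_signals

# A two-cell network: a NOT cell feeding an AND cell, like wired gates.
cells = {
    "n":   Cell(lambda a: 1 - a, ["a"]),
    "out": Cell(lambda n, b: n & b, ["n", "b"]),
}
signals = {"a": 0, "b": 1, "n": 0, "out": 0}
signals = tick(cells, signals)  # n becomes NOT a = 1; out still sees old n
signals = tick(cells, signals)  # out becomes n & b = 1
```

Because every state transition is explicit and finite, such a network can in principle be checked the same way a circuit is.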

*The COSA model is a synchronous model. It is the same model used by logic circuits. Essentially, COSA transforms software into a signaling system comparable to hardware. My claim is simple and is not an extraordinary claim that requires extraordinary proof: if logic circuits can be proven correct, so can COSA programs.*

I recommend that the author read David Gries' polemic against "starwars" software development.

Logic circuits can be proven correct (to the extent that they're accepted as never being metastable) *because* they implement a small, enumerable set of state transitions.

You can prove a basic logic circuit correct by inspection of all of its states and transitions in a short period of time.
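For example, a one-bit full adder has only 2**3 = 8 input states, so the "inspection" the comment describes is literally an exhaustive check against the specification. A short sketch:

```python
# Proof by inspection: enumerate every input state of a one-bit
# full adder and check it against the arithmetic specification.
from itertools import product

def full_adder(a, b, cin):
    s = a ^ b ^ cin                   # sum bit
    cout = (a & b) | (cin & (a ^ b))  # carry-out bit
    return s, cout

# Eight states in total; checking them all *is* the proof.
for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert cout * 2 + s == a + b + cin  # spec: binary addition
```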

From this, you can build on the mathematics of formal logic and build up pieces into larger logic circuits and rely on the formalism. This works to a certain level.

But then logic circuits reach a certain level of complexity, and hey presto, they get buggy. (Ever see the errata sheet for a complex processor chip?)

But general software is far more difficult to prove correct. Take a simple mathematical algorithm. It has, in theory, an infinite domain and a potentially infinite range. You can't prove it correct by inspection, and formal techniques don't get you much farther.
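To illustrate the contrast: even a trivially simple integer algorithm has an unbounded domain, so testing can only ever sample it. The sketch below (my example, not from the original discussion) checks an integer square root over a finite sample; that says nothing about inputs outside the sample, which is why a real proof needs invariants and induction rather than inspection.

```python
# A simple algorithm with an infinite domain: integer square root
# via Newton's method. No finite amount of inspection covers it.

def isqrt(n):
    """Largest r such that r*r <= n, for non-negative integer n."""
    if n < 0:
        raise ValueError("n must be non-negative")
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

# A finite sample of an infinite domain -- evidence, not proof.
for n in range(10_000):
    r = isqrt(n)
    assert r * r <= n < (r + 1) ** 2
```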

Neither of these, however, addresses Gries' simple but elegant point: you can't prove programs correct, anyway. All you can prove is that they perform according to specification. But that leaves you wondering whether the specification is correct....

*The COSA model is a synchronous model. It is the same model used by logic circuits. Essentially, COSA transforms software into a signaling system comparable to hardware. My claim is simple and is not an extraordinary claim that requires extraordinary proof: if logic circuits can be proven correct, so can COSA programs.*

But that answers nothing. I can chain some simple conditional statements together and make a provably correct program. A sufficiently large collection of these can conceivably handle any software problem. But then the whole problem becomes one of the programmer's conception, rather than whether the if-then-else statements actually do what they are supposed to. I don't see how a signal-based program can escape this scenario. One may as well try to escape logic altogether.

And by the way, I wasn't asking whether your model is "provably correct", but whether it is provably superior. Big difference. It may have seemed like a non sequitur, but I brought up the relational model because it is a perfect example of the sort of thing the programming world should be striving towards: an expression of logic in programming that has complete conceptual integrity. I believe it can be demonstrated through logic to be provably superior to any other method of data management yet created. That sounds like a big claim, but I make it on the strength of a simple syllogism:

1. Logic is the ideal tool to handle computational tasks, since it is the only way to prove correctness at any level.

2. The relational model is simply a complete application of logic to data management, allowing one to express any and all manipulations of data in a logically consistent way.

3. Thus, the relational model is the ideal data management tool, and any deviation from that is only an undesirable limitation.

(For any who want to flame me for this, at least please do yourself the credit of spending some time thinking about this syllogism first)

Within the scope of data management, the relational model allows one to create systems that are provably correct, through data definition and constraints, allowing one to express exactly what should and should not be allowed to happen to the data. Now the problem with the relational model is one of implementation--namely that it has never been truly implemented, and the systems that exist almost all allow grievous violations and circumventions of the logic that one might express. But at the conceptual level it has that certain... completeness.
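The point that the relational model is "a complete application of logic" can be made concrete: its core operators reduce to ordinary set theory. The sketch below is illustrative only (relations modeled as Python sets of frozen attribute-value pairs, with invented example data), showing selection and projection as plain set comprehensions.

```python
# Illustrative only: relational operators as set theory.
# A relation is a set of rows; each row is a frozenset of
# (attribute, value) pairs so that rows are hashable.

def select(rel, pred):
    """Relational selection: keep rows satisfying a predicate."""
    return {row for row in rel if pred(dict(row))}

def project(rel, attrs):
    """Relational projection: keep only the named attributes."""
    return {frozenset((k, v) for k, v in row if k in attrs) for row in rel}

employees = {
    frozenset({"name": "ann", "dept": 10}.items()),
    frozenset({"name": "bob", "dept": 20}.items()),
}

dept10 = select(employees, lambda r: r["dept"] == 10)
names = project(employees, {"name"})
```

Because these are just set operations, claims about the result (such as a constraint holding after every manipulation) can be argued within set theory itself, which is exactly the appeal described above.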

I fail to see any such conceptual integrity or completeness in the greater scope of programming paradigms, including the COSA model. I'm waiting for it anxiously, and just not seeing it. I think functional programming may be the closest we have come to that sort of thing in the programming world, but it still seems disappointingly difficult to handle any but the most self-contained, algorithmic sorts of tasks. Meanwhile, I still maintain that whether your programming is signal-based, algorithmic, functional, object-oriented, monolithic, distributed--whatever, you still can't escape the fact that complex logical systems are difficult to prove correct, and the more complex they become, the closer you come to requiring an infinite amount of time simply to *make* the proof. It is not a limitation of the paradigm, but of the human mind itself. I don't see any way around it any more than there is a way to escape the scenario we live in called "universe" (to paraphrase Buckminster Fuller).


*"Programming will not come of age until a single software model and development method is used universally. This is not currently the case because we are doing it the wrong way. The truth is that, once one understands the true nature of computing (it's all about communication between sensors and effectors), one realizes that the signal-based synchronous approach is the only way to do things. There is nothing that can be done the old ways that cannot be done better and more reliably with the signal-based model."*

Wow...

You see, that's the sort of statement that just begs the question.

Maybe you're right, but are you *demonstrably* right? Provably right? How about the next person who comes along with the ultimate paradigm? How do I judge between that paradigm and yours? Will it take me 10+ years of study to understand which is truly the right one? The fact is, we developers have to make judgement calls all the time, just as one does everywhere else in life. Is there one and only one approach to mechanical engineering? Manufacturing? Management? Finance?

In my life as a developer there is only one paradigm I have come across that one can argue is "provably" better than those that predated it, and that is the relational model. The reason it is provably better than previous DB models is that one can use formal logic to prove it, because the relational model is purely a logical model and has nothing to say about specific implementation. It is also easier to prove because it is of limited scope and fits well within a recognized logical framework: set theory.

But it seems to me you are saying you have both a logical model and an implementation model, and that they are better--across the board--than any and all previous approaches to any and all computing problems. Am I right? How do you go about proving this? Is there a single mathematical model I can look at to get a grasp of this?

Don't get me wrong... I agree with much of your assessment about what is wrong in computing today. There is a ridiculous plethora of different formats, protocols, syntaxes, official and unofficial standards. It gets dizzying to think of it all. But I am not sure the real world can support the polar opposite of that, which is one standard to handle everything.

(I'm imagining that a more capable theorist than I could come up with some sort of reductio ad absurdum, but I am more interested in exploring right now than arguing.)