Linked by Thom Holwerda on Tue 9th May 2006 21:25 UTC, submitted by luzr
Torvalds has indeed chimed in on the micro vs. monolithic kernel debate. Going all 1992, he says: "The whole 'microkernels are simpler' argument is just bull, and it is clearly shown to be bull by the fact that whenever you compare the speed of development of a microkernel and a traditional kernel, the traditional kernel wins. The whole argument that microkernels are somehow 'more secure' or 'more stable' is also total crap. The fact that each individual piece is simple and secure does not make the aggregate either simple or secure. And the argument that you can 'just reload' a failed service and not take the whole system down is equally flawed." My take: While I am not qualified to reply to Linus, there is one thing I want to say: just because something is difficult to program does not make it the worse design.
Thread beginning with comment 123062
Mapou Member since:
2006-05-09

> I suggest the author read Minsky's early work. "signal-based" programming is the Nth in an infinite series of attempts to model the asynchronous processes that make up software as if they were synchronous. It is doomed to the same failure of all such attempts.

This makes no sense, IMO. Synchronous, signal-based applications abound. There are spiking (pulsed) neural networks, spreadsheet cells, etc... all over the place and they work perfectly.
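A rough sketch of the spreadsheet case (hypothetical Python with invented names, not how any real spreadsheet engine is written): each cell recomputes when a cell it depends on changes, so setting a value is the signal and recomputation is the response.

    # Hypothetical sketch: spreadsheet-style reactive cells.
    # Setting a value "signals" every dependent cell to recompute.
    class Cell:
        def __init__(self, value=0):
            self.value = value
            self.dependents = []          # cells that must recompute when we change

        def set(self, value):
            self.value = value
            for dep in self.dependents:
                dep.recompute()           # propagate the change signal

    class FormulaCell(Cell):
        def __init__(self, formula, inputs):
            super().__init__()
            self.formula = formula
            self.inputs = inputs
            for cell in inputs:
                cell.dependents.append(self)
            self.recompute()

        def recompute(self):
            self.set(self.formula(*(c.value for c in self.inputs)))

    a, b = Cell(2), Cell(3)
    total = FormulaCell(lambda x, y: x + y, [a, b])
    print(total.value)   # 5
    a.set(10)
    print(total.value)   # 13, recomputed automatically after 'a' changed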

Reply Parent Score: 1

corentin Member since:
2005-08-08

> This makes no sense, IMO. Synchronous, signal-based applications abound. There are spiking (pulsed) neural networks, spreadsheet cells, etc... all over the place and they work perfectly.

You fail to understand the following: there is no "one size fits all" solution to every imaginable engineering problem. Synchronous programming languages are great for solving certain classes of problems, as are functional languages, imperative languages, logic languages, declarative languages, and so on.

It's OK to be excited about a cool technology; just keep in mind it won't bring world peace (nor automagically solve all software reliability problems).

The best way to obtain software quality is to put the right tools for the job in the hands of open-minded people.

Reply Parent Score: 3

axilmar Member since:
2006-03-20

> You fail to understand the following: there is no "one size fits all" solution to every imaginable engineering problem. Synchronous programming languages are great for solving certain classes of problems, as are functional languages, imperative languages, logic languages, declarative languages, and so on.

Actually, all languages are computationally equal and can solve the same problems; this has already been proven (they are all Turing-equivalent). Programming languages differ in how easy they make it to solve a given problem, but signal-based programming rises above all these models, because it is a modelling technique that reflects reality so well that it becomes very easy to solve most, if not all, problems.

All the other programming models end up emulating signal-based programming. Most programs are a bunch of signals (GUI or other kinds of events) and responses. Even functional programs are.
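To make the "signals and responses" reading concrete, here is a hypothetical sketch (my own, not taken from any particular toolkit) of the signal/slot pattern that most GUI frameworks implement under the hood:

    # Hypothetical sketch: a tiny signal/slot dispatcher.
    class Signal:
        def __init__(self):
            self.slots = []

        def connect(self, slot):
            self.slots.append(slot)

        def emit(self, *args):
            for slot in self.slots:
                slot(*args)              # each connected response runs when the signal fires

    clicked = Signal()
    clicked.connect(lambda x, y: print(f"button pressed at ({x}, {y})"))
    clicked.connect(lambda x, y: print("logging the click"))
    clicked.emit(40, 25)                 # one signal, two responses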

Reply Parent Score: 1

Mapou Member since:
2006-05-09

> The best way to obtain software quality is to put the right tools for the job in the hands of open-minded people.

This is the often-repeated argument from people who are used to thinking in terms of the only software model they know, the algorithmic model: multiple tools for different jobs. This is precisely why algorithmic programming is flawed. It demands a plurality of tools, hence the current mess.

Programming will not come of age until a single software model and development method is used universally. This is not currently the case because we are doing things the wrong way. The truth is that, once one understands the true nature of computing (it is all about communication between sensors and effectors), one realizes that the signal-based synchronous approach is the only way to do things. There is nothing that can be done the old way that cannot be done better and more reliably with the signal-based model.
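To give the sensor/effector framing a concrete shape (a purely hypothetical sketch of the idea, not a description of any existing system), a single reactive rule senses a value and drives an actuator:

    # Hypothetical sketch of the sensor -> effector framing: a reading from a
    # sensor is the signal, and the effector is whatever reacts to it.
    def thermostat(read_temperature, switch_heater, setpoint=20.0):
        temperature = read_temperature()          # sensor
        switch_heater(temperature < setpoint)     # effector

    readings = iter([18.5, 19.0, 21.2])           # fake sensor data for the example
    for _ in range(3):
        thermostat(lambda: next(readings),
                   lambda on: print("heater on" if on else "heater off"))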

Edited 2006-05-10 19:01

Reply Parent Score: 1

corentin Member since:
2005-08-08

> This is the often-repeated argument from people who are used to thinking in terms of the only software model they know, the algorithmic model: multiple tools for different jobs.

Blah.
I'm used to thinking in terms of whatever abstraction fits the job: signals when I am developing reactive systems, functions and trees when I am writing Lisp code, and so on.

Electrical engineers, too, use many intellectual tools to do their job. They don't rely on a single all-encompassing tool to solve every EE problem.

Reply Parent Score: 2

rycamor Member since:
2005-07-18

> Programming will not come of age until a single software model and development method is used universally. This is not currently the case because we are doing things the wrong way. The truth is that, once one understands the true nature of computing (it is all about communication between sensors and effectors), one realizes that the signal-based synchronous approach is the only way to do things. There is nothing that can be done the old way that cannot be done better and more reliably with the signal-based model.

Wow...

You see, that's the sort of statement that just begs the question.

Maybe you're right, but are you *demonstrably* right? Provably right? What about the next person who comes along with the ultimate paradigm? How do I judge between that paradigm and yours? Will it take me 10+ years of study to understand which is truly the right one? The fact is, we developers have to make judgement calls all the time, just as one does everywhere else in real life. Is there one and only one approach to mechanical engineering? Manufacturing? Management? Finance?

In my life as a developer there is only one paradigm I have come across that one can argue is "provably" better than those that predated it, and that is the relational model. The reason it is provably better than previous DB models is that one can use formal logic TO prove it, because the relational model is purely a logical model and has nothing to say about specific implementation. It is also easier to prove because it is of limited scope and fits well within a recognized logical framework: set theory.
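As a small, hypothetical illustration of that point (hand-built relations, nothing more), the core relational operators really are plain set theory, which is what makes them amenable to proof:

    # Hypothetical sketch: selection, projection and natural join expressed
    # directly as operations on sets of tuples.
    employees = {("alice", "eng"), ("bob", "sales"), ("carol", "eng")}
    salaries  = {("alice", 90), ("bob", 70), ("carol", 95)}

    # selection (restriction): rows satisfying a predicate
    engineers = {row for row in employees if row[1] == "eng"}

    # projection: keep only some attributes
    names = {row[0] for row in employees}

    # natural join on the shared name attribute
    joined = {(name, dept, pay)
              for (name, dept) in employees
              for (name2, pay) in salaries
              if name == name2}

    print(engineers)        # the two 'eng' rows, in some order
    print(names)            # {'alice', 'bob', 'carol'}, in some order
    print(sorted(joined))   # [('alice', 'eng', 90), ('bob', 'sales', 70), ('carol', 'eng', 95)]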

But it seems to me you are saying you have both a logical model and an implementation model, and that they are better--across the board--than any and all previous approaches to any and all computing problems. Am I right? How do you go about proving this? Is there a single mathematical model I can look at to get a grasp of this?

Don't get me wrong... I agree with much of your assessment about what is wrong in computing today. There is a ridiculous plethora of different formats, protocols, syntaxes, official and unofficial standards. It gets dizzying to think of it all. But I am not sure the real world can support the polar opposite of that, which is one standard to handle everything.

(I'm imagining that a more capable theorist than I could come up with some sort of reductio ad absurdum, but I am more interested in exploring right now than arguing.)

Reply Parent Score: 1

Cloudy Member since:
2006-02-15

"The best way to obtain software quality is to put the right tools for the job in the hands of open-minded people."

"This is the often-repeated argument from people who are used to thinking in terms of the only software model they know, the algorithmic model: multiple tools for different jobs. This is precisely why algorithmic programming is flawed. It demands a plurality of tools, hence the current mess."


I use the argument because of the analogy to other kinds of tools. Amazingly enough, in carpentry, not all fasteners are nails, and the hammer is the wrong tool for inserting screws.

I've used both imperative and declarative languages. For some jobs, imperative languages are the best tool, and those seem to be the majority of problems. For others, I prefer declarative languages. There are even times when functional programming makes sense. That's just one of many dimensions in language choice.
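A small hypothetical example of one such dimension: the same computation written imperatively and then as a functional pipeline; which reads better depends on the problem at hand, not on either style being universally superior.

    # Hypothetical sketch: the same task, imperative loop vs. functional pipeline.
    data = [3, 1, 4, 1, 5, 9, 2, 6]

    # Imperative: explicit state and control flow.
    total = 0
    for x in data:
        if x % 2 == 0:
            total += x * x

    # Functional: declare what is wanted, not how to loop over it.
    total_fp = sum(x * x for x in data if x % 2 == 0)

    assert total == total_fp
    print(total)   # 56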

The "true nature" of computing is not merely about communication. It is also about command and control, algorithmic transformation, and randomness.

By throwing out the algorithmic part, you throw out the mathematical foundation of symbolic manipulation. You may wish to live in a world without mathematics, but I do not.

Reply Parent Score: 3

Cloudy Member since:
2006-02-15

"I suggest the author read Minsky's early work. "signal-based" programming is the Nth in an infinite series of attempts to model the asynchronous processes that make up software as if they were synchronous. It is doomed to the same failure of all such attempts."

"This makes no sense, IMO. Synchronous, signal-based applications abound. There are spiking (pulsed) neural networks, spreadsheet cells, etc... all over the place and they work perfectly."

That there are some problems that can be modeled synchronously does not imply that all problems are effectively modeled this way.

Reply Parent Score: 1