That's an excellent link.
I'm not entirely in agreement with everything he says, but he makes some strong points.
I disagree with him quite strongly that microkernel IPC should be limited to byte/block streams. I'd much prefer working with objects directly (i.e., transferred atomically). Object serialization over pipes is inefficient and often difficult, particularly when the encapsulated structures need to be reassembled from reads of unknown length. I find it ironic that he views IPC pipes as the equivalent of OOP. Sure, they hide structure, but they also hide a real interface.
I know Tanenbaum was merely responding to Linus' remark about how microkernels make it extremely difficult to manipulate structures across kernel borders. In a proper OOP design, one shouldn't be manipulating structures directly. Arguably, Linux components wouldn't break as often if they didn't.
There are good arguments for either approach. But I do think microkernels have more merit as systems become more and more complex.
I also take this paper with a significant grain of salt, but for different reasons. While I agree with the need for standard interfaces, I do not agree with the pure OOP vision that data structures cannot constitute an interface and that their inner information should always be hidden away like atomic weapon plans. In some situations, a good data structure is better than a thousand accessors.
I feel the same with respect to shared memory. Okay, it's easy to shoot yourself in the foot if you use it improperly, but it is also by far the fastest way to pass large amounts of data to an IPC buddy. And if you make sure that only one process is manipulating the "shared" data at a given time, for example by temporarily marking the shared data as read-only in the caller process, it is also perfectly safe.

Edited 2011-11-23 17:59 UTC