As a programmer and manager of embedded software products for a living, I think that operating system programming is so much fun that it will eventually be outlawed. I’ve previously published two articles on OSNews, “So, you want to write an operating system” and “Climbing the kernel mountain”, in which I tried to summarize my experience in designing operating system kernels, as well as the technical traps that can easily be avoided.
You don’t know that you are wasting your time
I meant to write follow-up articles on the subject, but instead was sent by court order to Article Title School for two years. In the meantime, the world, and my perception of things – not just article titles – evolved. While I hope that someone, somewhere, found the advice helpful enough to start writing a kernel, I’ve realized in the meantime that the fun of doing that from scratch has – for all intents and purposes – actually been outlawed.
I hope I catch you before you burn any proof that you ever downloaded the Intel System Programming Manuals, as you look nervously through the window for the Code Police to come and send you to coder rehab. Cheer up, writing your own kernel is not really outlawed and you don’t have to move to writing accounts receivable software in C#. A lot of people with commitment, strong programming skills and free time are working on such projects as you read this, even though there is no point anymore. Some projects, such as Syllable and SkyOS, have made major progress towards a usable and stable operating environment and never fail to legitimately impress OSNews readers with their skills. The irony of such an opinion on OSNews is not lost on me. I expect kernel hobbyists to dismiss this article in the same way as I would have two years ago. Giving up rare skills that you built over a long time is not easy to accept. As a recovering kernaholic, I would love nothing more than being proven wrong by a strong, one-two punch demonstration, and to be conned into participating in a hobby OS project.
To be honest, I’m not holding my breath. Unless your only objective is to displace Robert Szeleney as the most admired underdog OS developer, your effort is probably wasted. I need to explain how I slowly evolved to this conclusion, and how this is a positive development. There is one piece of information and two colliding storylines that come into play.
How you can learn from being punched in the nose repeatedly
The piece of information first. I became a manager out of experience, not out of a Harvard MBA, so I like to hope that anyone with decent business training will read the rest of the article and say “yes, so what?”. Well, pfft! to you – that’s the sound of the tongue sticking out. I wasn’t taught product positioning in school, or how to focus on what you’re really good at. I learned by coming back from customers with a bloody nose.
Moving on to the first storyline. Last year, I seriously considered joining the Syllable project. It is the only desktop OS project I know of that is at the same time usable and open source, and to which I could make a significant contribution – unlike Linux, where an individual contribution is a drop in the ocean. The Syllable kernel has significant shortcomings. You can feel the round-robin scheduler and dysfunctional VM as you use the desktop. The lack of consistent primitives or a clear notion of processes has obnoxious side effects, such as the application server not closing an application’s windows when it crashes. I thought I could help out instead of criticizing from my armchair. I did a lot of work on replacing the basic kernel with something that could support the rest of Syllable being dropped on top, and compete performance-wise with the Linux kernel.
Now for the second storyline. I created a company that develops, sells and promotes a software component architecture for consumer electronics products. The company has been around since 1998, and the product started out as software that I enjoyed writing – an operating system. Essentially, the product offered two distinct features. First, a component-based operating system for consumer electronics products, the first of its kind, that lets you replace any system policy, such as scheduling, memory management and power management, with your own. Second, a component model that transfers the well-known benefits of CORBA, DCOM or .NET, such as increased code re-use and cleaner isolation, to the consumer electronics world. Re-use is a massive problem in the industry at the moment, since manufacturers have essentially moved from being hardware companies producing VCRs and analog TVs that had little custom software, to being software companies producing DVD-RWs and digital TVs that require staggering amounts of custom software. We tried really, really hard to pitch our operating system to customers.
If they wanted the component model and the re-use benefits, they needed the OS, take it or leave it. Well, they left it, consistently, in Japanese, Korean, Dutch, French, English, and other languages. Whoops. We blamed the failure on a lot of things, but eventually, we faced the obvious facts.
One, the interface of a desktop, server, or embedded OS kernel has been refined over time into a machine more programmable than the instruction set itself. It’s a standard conceptual interface: the same way you have the Intel and ARM instruction sets that support similar operations, you have the POSIX, Windows and µITRON APIs for memory management and semaphores, conceptually identical to one another. If you offer radically different primitives, application programmers will not make use of them, for the sake of portability of either their code or their knowledge.
Two, a platform needs applications and a developer community in order to succeed. You have neither when you create a kernel from scratch. You could copy an existing design and API in order to have applications and developers, but it has already been done. Linux and BSD are free and fit almost any purpose.
Three, a new OS and code re-use are contradictory. For some reason, we couldn’t sell to our prospects that they could re-use a lot more code than they currently do, and could architect their software a lot better, by throwing all their code away and rewriting it for our OS. Our prospects had the nerve not to want to invest millions of dollars to rewrite software they already had.
Four, you can be the world’s best, or even really, really good, at only one thing. Everyone acknowledges this is true for a small to mid-size team, and I believe it applies to any organization, even one the size of Microsoft or with Google’s coffers. Microsoft does a lot of things, all of which lose a remarkable amount of money on fire and motion, except for their operating system and tightly related core applications. Google’s managers won’t touch anything not clearly related to search. Management droids even have a name for this: the hedgehog concept. Hedgehogs are really stupid animals, but they are superior to many others in the sense that they know only one thing: rolling into a ball of spikes when attacked.
Five, there is no entitlement when you design something. Your users, whether they pay for a commercial product or use a Free/Open Source piece of code, do not care that you spent thousands of hours of really hard work to come up with what you offer them. Syllable and SkyOS are really neat technical achievements, but they are less functional than Linux and are not attracting ordinary users in droves. Linux itself still has to figure out how to be better than Windows on the home desktop in order to win users over.
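Point one can be made concrete with a small sketch. This is my own illustration under stated assumptions, not code from the article: it uses the POSIX counting-semaphore calls, whose conceptual equivalents in the Win32 and µITRON APIs are noted in the comments.

```c
#include <semaphore.h>

/* Conceptually identical primitives under different names:
 *   POSIX:   sem_init / sem_trywait / sem_post
 *   Win32:   CreateSemaphore / WaitForSingleObject(h, 0) / ReleaseSemaphore
 *   uITRON:  cre_sem / pol_sem / sig_sem
 * The demo creates a semaphore with two slots and shows that only two of
 * three non-blocking acquisitions succeed. */

static sem_t slots;

static int try_take_slot(void)
{
    return sem_trywait(&slots) == 0;  /* 1 if a slot was acquired */
}

int demo(void)
{
    sem_init(&slots, 0, 2);           /* counting semaphore, initial count 2 */
    int got = 0;
    for (int i = 0; i < 3; i++)
        got += try_take_slot();
    sem_destroy(&slots);
    return got;                        /* the third attempt fails */
}
```

Swap in the Win32 or µITRON calls and the structure of the code is unchanged, which is the point: the primitives form a standard conceptual interface.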
Armed with this reality check, my company repositioned its product away from the operating system and sharply on the component model and code-reuse. Essentially that meant porting our OS on top of the popular platforms in Japanese Consumer Electronics – Linux and TRON at the moment – then removing what wasn’t needed anymore. All of a sudden, customers started calling us to buy our product. By focusing on our one good idea, we were then able to make our component model cover a lot more needs, and today I think it is really a decent product. In retrospect, trying to shift them to a new OS had felt like pushing a rope.
Do I like working on what we sell now? Along with a couple of the programmers who were attached to kernel development, I really did mind that the market chose the least fun feature to work on. I couldn’t use my kernel programming experience anymore. Then I realized that, in comparison to writing kernel code, anything else is easier. It’s like training for a tennis match with weights on, then removing them for the big game.
The skills you build for kernel development – the precision, rigor, testing, careful debugging – can be reused with nine times the efficiency when working on something above the kernel. You just have to find something to work on that really motivates you and serves a purpose.
My company is not the only one that had this epiphany, it looks like. The Tao Group used to sell, guess what, an operating system for consumer electronics. You have probably heard of them as part of the Amiga saga. Their claim to fame is the Virtual Processor architecture, which lets you write portable but hand-tuned assembly code. Somehow they make this work. When the penny dropped, their OS, as a product, was taken out and shot, and now they are very successfully selling their one good idea: their VP architecture allows very efficient graphical content that is portable across all CE devices, and the JVM they designed on top of it is mighty. They’ve found their one good idea and swept the rest under the rug.
Okay, so the second storyline was a bit longer than the first. If you don’t hear from me for the next two years, please send me care packages at the Article Writing School. Now, both storylines are high speed trains going towards each other and they just collided.
How you can make the world a better place
The epiphany about our product and the lack of purpose for yet another OS spilled into my intention to work on Syllable. In the end, I didn’t. The Syllable project has assembled a small group of talented people and I believe that they are absolutely right in trying to fix the lack of integration on a desktop that most distributions of Linux suffer from. Today, the user experience that the Linux desktops offer is best described as goofy. A well integrated kernel and desktop API, with a managed code approach to RAD such as what Mono offers, would make things a lot more consistent for the user. However, in light of the epiphany, my opinion is that the Syllable team is severely misguided about how to solve the problem. They want to be the best at such an integrated desktop while also having a good kernel, drivers and what have you.
Let’s ignore the typical kernel that is still stuck in “bootloader stage”. Most kernel projects start with an idea such as: let’s create an OS around this new filesystem thingie. Then, night after night, a small, expanding team of programmers duplicates scheduling, virtual memory, a POSIX-ish API, the Windows registry, device drivers for ISA network cards and a USB stack. Two years later, the filesystem thingie is implemented. You have to commend the project developers for getting that far, but then the thingie is not as useful as it sounded, and the project is repositioned as a general-purpose OS of sorts. Worse, it actually is useful, but nobody will ever benefit from it because their favorite application doesn’t work on this strange OS. Worse still for the project, somebody steals the idea and releases it for Linux with great success. In any case, a lot of effort was spent on duplication, while producing nothing new for the OS world or for the users – you know, the ones who throw us a bone once in a while so that we can afford to keep having fun programming.
In terms of desktop or server operating systems, I cannot think of a single new idea that cannot be implemented as part of an existing OS. With the source code of time-tested, free kernels readily available, and given the previous point that the basic kernel interface can be built on top of, there is no excuse not to experiment with the new idea as part of an existing kernel or userland. For instance, I think there is a lot of merit to having device drivers and other modules run as their own task, even a kernel one, working asynchronously and communicating with the kernel exclusively through messages. This is a cool, challenging project, and for end users it means they can unload and reload modules without any chance of failure, since you cannot have threads still left in the code being unloaded. The system also scales up on SMP a lot better than locking all over the place. You might be tempted to write a kernel just around that idea, but there is no justification for not patching Linux to work this way. It will be a lot of work to modify and re-test device drivers, but a lot less work than doing it from scratch. It can be done incrementally, always shipping an OS that works even though not all device drivers and kernel modules benefit from the new idea yet.
Guess what, the DragonFlyBSD project is exactly about that, and they started with the FreeBSD code base.
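The message-driven driver model described above can be sketched as a minimal, single-threaded toy. All names here are hypothetical and this has nothing to do with any real kernel’s implementation: the “kernel” only enqueues messages, the driver’s own loop drains them, and unloading reduces to sending a stop message, after which no thread can be executing driver code.

```c
/* Toy sketch of a message-driven driver task. The kernel never calls
 * into the driver directly; it only enqueues messages. The driver's own
 * task drains the queue, so once it acknowledges MSG_STOP, no thread is
 * left inside the module and it can be unloaded safely. */

enum msg_type { MSG_READ, MSG_WRITE, MSG_STOP };

struct msg { enum msg_type type; int payload; };

#define QCAP 16
static struct msg queue[QCAP];
static int head = 0, tail = 0;

/* "Kernel side": post a message to the driver's queue. */
static void kmsg_send(struct msg m) { queue[tail++ % QCAP] = m; }

/* "Driver side": service messages until told to stop.
 * Returns the number of requests handled. */
int driver_task(void)
{
    int handled = 0;
    for (;;) {
        struct msg m = queue[head++ % QCAP];
        if (m.type == MSG_STOP)
            return handled;   /* safe point: module may now be unloaded */
        handled++;            /* pretend to service the read/write */
    }
}
```

In a real kernel the queue would be lock-protected or lock-free and the driver loop would block on an empty queue; the shape of the protocol is what matters here.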
I’m a bit more familiar with the dynamics of embedded operating systems than with server and desktop ones. Their users, software engineers who work for consumer electronics giants, are really smart. They are a tough and rewarding crowd to sell products to. They have understood that their customers – you and me, buying basically the same DVD recorder every year – no longer care about whatever obscure edge a custom OS would give them, so they’ve standardized on OS platforms. It used to be commercial RTOS products, or free specifications such as TRON; now the industry is pretty much migrating to Linux as the lowest common denominator for products. Even if your project is open source and free, if it’s basically a me-too kernel chasing Linux’s tail lights, nobody will care. Take Linux and apply your one good idea to it, and a lot more people will care.
I don’t believe there is a market, either in terms of paying customers, or in terms of people that will actually use your code as their daily environment, for writing a kernel from scratch anymore. This is depressing when you have mastered the art of architecting your wait queues, scheduler and semaphores so that everything is really efficient and maintainable. Any domain of human knowledge gets commoditized over time, this is good for everyone, and the experts have to learn to adapt their expertise to a higher-level problem set.
The good news is that the operating system is no longer limited to the kernel and the open-read-write-close interface. A surprisingly minuscule amount of university research has gone into concepts that extend an existing OS so that applications can be developed more quickly and with higher quality. Almost all of the inroads have been made by private corporations or unfunded open source efforts, and as modestly demonstrated by my company, there is room for a lot of new, really cool projects to work on.
As kernel developers, your skills are an edge over other people in terms of writing and delivering code to plug a hole that you have identified somewhere in the operating system. If you shift your focus to plugging it instead of duplicating existing and time-tested software, I can’t imagine how much better the software environment will become for all of us.
About the author
Emmanuel Marty is a founder and the Chief
Technical Officer of
NexWave Solutions, the supplier of the first commercially available component architecture for consumer electronics, in use at top manufacturers. He has been working with computers since the age of 10. Currently aged 28, he lives in Montpellier, France, with his wife and twin daughters.
Solar, the OSI ( http://www.opensource.org/ ) could answer all your questions quite clearly, but the basic premise is that the BSDL allows you to re-licence the code. This means you can take BSDL code, re-use it in your product, and licence that product however you see fit. This includes proprietary, closed-source products. You cannot take BSDL code and make it PD though, as you do not control the Copyright for that code. Only the Copyright holder can dedicate a work to the Public Domain.
> Does the BSD license make derived work fall under the BSD license? (I understand it does.)
No, that is the way the GPL works not the BSD.
> Does that mean that such derived work is inherently freely redistributable? (I understand it is.)
No: do you really think that Microsoft would use a license which would imply that their OS would be freely redistributable?
Vanders’ explanation is correct.
@ reno X:
Of *course* the BSD doesn’t apply to the *whole* work. I know that.
But whatever part Microsoft took from the BSD pool *still* is under BSD, no? I don’t really believe Vanders’ explanation, as the BSDL clearly says “must retain the above copyright notice, this list of conditions and the following disclaimer” – which means, it’s still BSDL, unless my grasp of the English language completely fails me.
@ Vanders:
> You can not take BSDL code and make it PD though,
> as you do not control the Copyright for that code.
German law doesn’t even *allow* me to forfeit my copyright in the sense of PD. All I can do is license my copyrighted code with a “carte blanche”. (Which is what I do.)
“That means whatever I write that’s based on BSD code falls under BSD, and must hence be made freely distributable, right? Except for it doesn’t necessarily expands towards anything linked with it.”
OK, you’ve raised two issues here.
Code under the BSD license is freely distributable, but you aren’t required to distribute it yourself. If you used BSD code from Author A and modified it, you can distribute binaries without including the source. You are required to include the copyright notice from Author A, so if I want it, I can get his source from him. You can do whatever you want with the part that you wrote – you own it, and can use whatever license that you want. A’s code is under BSD, but your code is under the license of your choice. The combination of A’s code and your code must satisfy the BSD license, and the BSD license allows you to impose additional restrictions.
The second issue that you raise has to do with linking and derived works. If you use my code, and add to or modify it, the part that I wrote is still owned by me and you need my permission to distribute it. You own the part that you wrote. The combination is a derived work, which is shorthand for what I just stated. Your program is a derived work of mine. I don’t own it all, you don’t own it all, we each own the portions that we wrote. Distribution requires permission from both of us.
If you link my code into your program, then you have included some portion of my code in your binary. So it’s a derived work. There’s no question with a statically linked program – you have included my code. Dynamic linking is trickier, as you have included certain interface code. The FSF takes the position that linking of any sort is inclusion. I’m not so sure, as copyright law permits the use of copyrighted material without permission under certain conditions, and linking may fall under that. Or it may not. Only a court can make that decision. For the sake of good manners, I’d respect the author’s view of linking.
To summarize, no, the fact that you based your code on BSD code does not mean that your code must be freely distributable. The BSD code is freely distributable, but you can put limits on your code. If you had based your code on GPL code, then the GPL prohibits the imposition of any additional restrictions, so the derived work must be freely distributable. That’s the core difference between BSD and GPL. With BSD, the original work is free, but the derived work may or may not be. With the GPL, the derived work is as free as the original. That’s why linking matters for the GPL, but not for BSD.
Perhaps our readers can go through this thread and see who brought up the “Linux is teh suckage” fiasco. I said one could write an operating system kernel using an existing free, well-tested, very stable, scalable, customizable kernel; not once did I mention the name of any kernel.
Then your “Linux is teh suckage” rant ensued: it doesn’t have a HAL, it’s not written in C++, it’s POSIX, and assorted nonsense. Even after countless people told you Linux is not the only free kernel you could use, you insisted on spewing your unfounded rubbish.
Read through our discussions and see who mentioned Linux first, then come back and tell me I am forcing Linux down your throat. And as for your “drivers don’t crash Windows” comment, congratulations, you’ve just exposed your naivety. I wonder what causes the venerable “BLUE SCREEN OF DEATH”?
Solar does not even understand that there is no such thing, legally speaking, as public domain code – except for code which has become public domain through expiration of copyright. Let alone the fact that there is no such thing as a public domain, period, in most legal systems in the world. Try releasing public domain code in most of Europe, Solar, and see how far you get, legally speaking.
He *knows* that BSD is *almost* as *evil* as GPL. What a joke: he complains that they are not *free* enough for his purposes, justifying it by saying they are not *free* according to his, and only his, definition of some non-existent public domain license.
I almost feel for him: he so desperately wants something and is not willing to give anything back… what a shame…
Oh God. You’re clueless!
Clueless is someone that doesn’t give a single explanation about why the windows kernel architecture is inferior to linux
> Clueless is someone that doesn’t give a single explanation about why the windows kernel architecture is inferior to linux
Was that a joke?
> Was that a joke?
Was that an explanation of why the Windows kernel architechture is inferior?
I always figured that a HAL worked for the benefit of apps, not drivers: when you have a HAL, the drivers go under it and turn generic HAL calls into hardware-specific ones. At least this definition of a HAL is in the works for Linux as well, over on freedesktop.org. It will work alongside udev and D-BUS, so that you can write an app for the USB storage part of the HAL and it will work with any USB storage device that Linux has a driver for and that the HAL understands.
Linux was already the example from the article.
I think that the author wasn’t saying that one shouldn’t write a kernel, but that if you do write one, don’t expect people to come knocking at your door with money in hand.
The idea of writing one for fun, or as a learning experience, would, I’m sure, be supported by him. His supposition, though, is correct. We may not like it, but it’s the way the world works.
Linux survived because it came at the right time, with a lot of publicity. With Apache and the start of the internet boom, it was the right product, at the right price for internet start-ups without much money. When it worked without much more than that one application, people took a closer look at it.
It really was a one-trick pony at the time, but got better.
The problem for new people trying to duplicate Torvalds’ success and fame is that there isn’t, at this time, a need for another system out there.
Despite what some people think about their own little projects, it takes thousands of programmers years to make an operating system (it’s not JUST the kernel, after all) stable and sophisticated enough for more than a few people to like and use.
I can tell you that if the only systems available were XP and OS X, most of those here who just LOVE their barely distributed OS’s would have picked one of the above two, and would defend it just as strongly.
Some people just like to be different, just to be different. That’s fine, don’t get me wrong, but it’s not the real world.
tech_user wrote: there is a need for new ideas and new ways of doing things … but we are running into the limits of the inflexible i386 architecture…
Yes. The only good reason to write a new kernel (other than the fun of it) is to try to solve the software reliability and productivity crisis. In my opinion, there is something fundamentally wrong with the way we program our computers. The main reason that software is so unreliable and so hard to develop has to do with a custom that is as old as the computer: the practice of using the algorithm as the basis for software construction. Moving to a pure signal-based, synchronous software model will result in at least an order of magnitude improvement in both reliability and productivity.
For an alternative view of software engineering, check out the info at this site:
http://users.adelphia.net/~lilavois/Cosas/Reliability.htm
The main reason why software is so unreliable is because users demand, and developers push, more features than can be tested in any reasonable development cycle.
There is no best way of development. There have been arguments back and forth over this for decades, and it hasn’t been resolved yet. Every model has its good and bad points. As long as software development relies on humans, it will be flawed. Unfortunately, automated development systems haven’t worked well to this point, either.
Let’s face it, computer technology is still in a primitive stage of development. No one knows enough to say what is best. Perhaps in 50 years, or 100, things will settle down.
If we’re still here, of course.
Melgross writes: The main reason why software is so unreliable is because users demand, and developers push, more features than can be tested in any reasonable development cycle.
Adding features under pressure is a reliability problem only in algorithmic software. As I wrote earlier, the main reason for unreliability is that we are using an antiquated paradigm of software construction: the algorithm. The reliability of algorithmic software is inversely proportional to its complexity. In a synchronous signal-based environment, the opposite is true: The more complex a given system is, the more reliable it becomes. This is because adding new signal pathways and connections also add new temporal constraints to the system, making it more robust.
http://users.adelphia.net/~lilavois/Cosas/Reliability.htm
And also take a look at Project COSA and the proposed COSA operating system links on the same page.
Now, I guess I won’t get a useful system if I add random connections, so I guess a human would have to place them, or not?
Legend writes: Now, I guess I won’t get a useful system if I add random connections, so I guess a human would have to place them, or not?
Thanks for replying. Certainly a human developer must make the connections but a signal-based synchronous environment will not let you make random connections. Connections must follow strict compatibility rules (high-level plug-compatible components) and stringent temporal constraints (low-level cells). Bad connections are simply not allowed. The system will even find missing connections (dependencies) automatically, resulting in a high level of consistency.
Temporal rules are only possible in a synchronous environment where timing is deterministic. Like integrated circuits, synchronous programs will not work if the timing is faulty.
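The deterministic, tick-based timing being described can be illustrated with a toy of my own (an assumption-laden sketch, not Project COSA code): every cell reads the previous tick's snapshot and writes the next one, so evaluation order inside a tick cannot affect the result.

```c
/* Two cells wired to each other's outputs. Because each tick computes
 * from a snapshot of the previous state, the update is deterministic:
 * cell order within the loop never changes the outcome. */

#define NCELLS 2

static int state[NCELLS] = {1, 0};

void tick(void)
{
    int next[NCELLS];
    next[0] = state[1];   /* cell 0 reads cell 1's previous output */
    next[1] = state[0];   /* cell 1 reads cell 0's previous output */
    for (int i = 0; i < NCELLS; i++)
        state[i] = next[i];
}
```

The two cells simply pass a token back and forth: the smallest example of a timing-dependent computation that stays reproducible tick after tick.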
I’m familiar with the concept of signal-based synchronous environments. They are rather simplistic in their assumptions.
The analogy with the human brain is flawed, at best. I suppose if you believe in Scientology, you might believe in the perfection of the human mind. But if you have studied psychology and biology, you will know that the amazing fact about the brain is that it works at all. The flaws are many, as we all know. We don’t know if the brain is parallel and synchronous. In fact, we don’t know how the brain works at all. All we know at this point is where certain processes seem to be and when some of the few we know are functioning. Why, or how?
COSA and other such projects seem to function about as well as anything else for real-time machine control, but have yet to show that they can be useful for deeper programming purposes. I would love to see an entire OS written in this way. For the most part, even the developers of these systems use standard programming methods to interpret the machine control code, and to handle other processes.
The writer of the article has yet to show any achievement, such as writing a word processor, spreadsheet, video editing program, or the like. So far as I know, all programs have been relatively small, and related to external control.
I do find it interesting that Louis Savain says:
“Temporal rules are only possible in a synchronous environment where timing is deterministic. Like integrated circuits, synchronous programs will not work if the timing is faulty.”
What, flaws and bad programming? I thought that we were just assured that it couldn’t happen.
Well, back to the drawing board.
Melgross writes: What, flaws and bad programming? I thought that we were just assured that it couldn’t happen.
At this point, Melgross, I don’t think continuing this discussion is going to benefit either of us. We are obviously not on the same frequency.
The Silver Bullet:
Why Software Is Bad and What We Can Do to Fix it
http://users.adelphia.net/~lilavois/Cosas/Reliability.htm
I don’t know Louis, this hasn’t been much of a discussion so far, as you seem to be disinclined to discuss it at all.
The difference that I see between this and functions like in C and C++ is that here you don’t spawn a new copy of the function when it gets called, but instead have the same one standing around every time, with the same internal variables and such.
That is, unless we are talking about neural-net-like stuff, where you fire off different sets of output channels when specific sets of input signals come along. How to build a GUI, much less a working app, from that is way beyond my understanding. The sheer number of parts that will be needed is staggering; in fact, I think that getting a small app working will take an amount of parts similar to the number of cells in a human brain. And I fear for the speed of the app, as in many ways you will be emulating an IC by use of an IC (the computer’s CPU).
Another reason for using algorithms is that the CPU is an overgrown calculator: it crunches some numbers, and then the results are used by a different IC to do some stuff (like, say, drawing an image on a screen by using the numbers as coordinates).
Every time a program chooses a step to take it’s using an algorithm. Boolean Logic just can’t be prevented from being used. We use it all the time.
Hmm… If I do this rather than that…
That’s Boolean. It’s tough to avoid.
hobgoblin writes: …that I see between this and functions like in C and C++ is that here you don’t spawn a new copy of the function when it gets called, but instead have the same one standing around every time, with the same internal variables and stuff.
This is an excellent observation. However, your objection is not a problem in COSA. The model provides for message servers. These are special components whose job is to deliver messages to other components. There are two ways to deliver messages in COSA, either via a queue (FIFO) or a stack (LIFO). This way, a single component can render a service for many others. The end result is that components need not be duplicated indefinitely, as you suppose. The only drawback (if you want to call it that) is that COSA forbids recursion at this point in time. It may be possible (and safe) but I haven’t given it enough thought yet.
The Silver Bullet:
http://users.adelphia.net/~lilavois/Cosas/Reliability.htm
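The FIFO/LIFO message-server idea described above can be sketched in a few lines. This is a hypothetical toy, not the COSA implementation: one server buffers signals for many receivers and hands them out either queue-style or stack-style.

```c
/* A tiny "message server": components post integer signals, and the
 * server delivers them either FIFO (queue) or LIFO (stack). */

#define CAP 8

struct mserver {
    int buf[CAP];
    int count;
};

static void post(struct mserver *s, int signal)
{
    if (s->count < CAP)
        s->buf[s->count++] = signal;
}

/* FIFO delivery: the oldest pending signal first. */
int next_fifo(struct mserver *s)
{
    int sig = s->buf[0];
    for (int i = 1; i < s->count; i++)
        s->buf[i - 1] = s->buf[i];   /* shift the rest down */
    s->count--;
    return sig;
}

/* LIFO delivery: the most recent signal first. */
int next_lifo(struct mserver *s)
{
    return s->buf[--s->count];
}
```

One such server can service many components, which is how a single component avoids being duplicated for every caller.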
> Every time a program chooses a step to take it’s using an algorithm.
Not really. An algorithm is a one-dimensional sequence of steps. Here’s an excerpt from my site:
[begin quote]
Consider that a computer program is really a communication system even though we are not accustomed to think of it as such. During execution, every statement in a procedural code essentially sends a signal to the next statement, meaning: ‘I’m done, now it’s your turn.’ A statement can be seen as an elementary object having a single input and a single output. In an algorithm, the objects are linked together to form a sequential chain. Communication is limited to only two objects at a time, a sender and a receiver. My thesis is that this mechanism is way too rigid and restrictive and leads to unreliable software. Why? Because there are occasions when a particular event or action must be communicated to several objects simultaneously. Algorithmic development environments make it hard to attach orthogonal signaling branches to a sequential thread and therein lies the problem. The burden is on the programmer to remember to add code to handle delayed reaction cases: something that occurred previously in the procedure needs to be addressed at the earliest opportunity. Every so often we either forget to add the necessary code or we fail to spot the dependency. The result is what I call ‘blind code.’
[end quote]
The Silver Bullet:
http://users.adelphia.net/~lilavois/Cosas/Reliability.htm
I know that C (and by extension C++) only allows many different parts of the code to pass data into the same function via global variables; the only part that can hand over multiple variables directly is the function that calls the new function. So what you’re saying is that every function would in effect become its own thread, and use a special “mesh” (or base thread) to pass messages back and forth? And every thread could have multiple input and output “ports” that other threads could pass signals off to? If one thread fails and crashes, it would respawn without (in theory) taking down the whole system? It would still need to raise a warning flag to the base thread, though, to indicate that it failed; the base thread would then output an error message to the log for a sysadmin to look at.
Inside each thread the process would still be a chain, but a much shorter chain, and therefore easier to debug and maintain (doing one simple job and only that).
Recursion could happen in that the destination put on an output is the input of the same thread that made the output.
The idea of inheritance becomes a bit of a problem, though, unless one allows for the creation of mini-meshes: you poke the thread that is the mini-mesh, and it then passes it on to the threads that make up the mini-mesh via some translation rules in the thread itself. This will lead to layers of threads, though, and I wonder if a CPU will be able to juggle them all…
Still, it’s an interesting concept. But I’m no real programmer…