Yeah, I might just be re-inventing the wheel here, who knows? But I had this (original? I doubt it) idea a few months ago and I have been meaning to write about it for some time now. My idea is about creating a new operating system that is like none of the current ones. It would be so different that porting applications from other “traditional” systems wouldn’t be possible. But the gains would far outweigh what we would lose by implementing a brand new system.

So, here is the idea: you get a fully multi-threaded, multi-buzzworded OS with all the latest modern goodies in it, but no applications. Yes, the OS would not include any application in the traditional sense of the word. Instead, all it would support would be modules. The UI would be task-based and file-based at the same time. You access your work files through a panel that lists all the MIME types; depending on which MIME type you pick from the list, you can sort the resulting files found on your hard drive by how much you use them, how recently they were modified, file size, and other such criteria (sometimes different criteria for files of different MIME types).
Let’s say that you want to load a PNG graphics file. You open the file and you get the image in front of you. Now, you must decide what you want to do with this file.
Let’s say that you want to apply a filter, resize it and then save it. But there is no application like The Gimp, Photoshop or PaintShopPro to do so. Instead, the OS provides modules to carry out such actions. Each module is like a very small library/application, loaded by the system for use with specific MIME types (each module should include a header section that describes itself and its relationship with other modules and the system), and each of these modules does one very specific job. So, you use one module to save as PNG, you will need another module to apply the “watermark” filter, and another one to resize the image. All these modules can be written by completely different authors, but because they each do a very specific job, they can be implemented quickly and easily by developers. So, even if you think “wow, no applications for this OS?”, the reality is that the most common needs would be satisfied within a very short period after a general release.
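To make the idea concrete, here is a minimal sketch (in Python, purely for illustration; the article defines no real API, so the `Module` class, the `apply` method and the module names are all invented) of three single-purpose modules chained on one PNG file:

```python
# Hypothetical sketch: three single-purpose modules chained on one file.
# Each "module" does exactly one job, possibly written by a different author.

class Module:
    """A tiny unit of functionality bound to specific MIME types."""
    def __init__(self, name, mime_types, action):
        self.name = name
        self.mime_types = mime_types   # MIME types this module handles
        self.action = action           # the one specific job it does

    def apply(self, mime, data):
        if mime not in self.mime_types:
            raise ValueError(f"{self.name} does not handle {mime}")
        return self.action(data)

resize    = Module("resize",    {"image/png"}, lambda d: d + ["resized"])
watermark = Module("watermark", {"image/png"}, lambda d: d + ["watermarked"])
save_png  = Module("save-png",  {"image/png"}, lambda d: d + ["saved"])

# The system chains whichever modules the user picks for this MIME type.
image = ["pixels"]
for module in (resize, watermark, save_png):
    image = module.apply("image/png", image)
print(image)  # ['pixels', 'resized', 'watermarked', 'saved']
```

The point of the sketch is only the shape of the thing: every step is a separate, replaceable unit keyed on a MIME type, rather than a feature inside one monolithic application.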
As you can see, with this system, there will never be a Photoshop and a PaintShopPro fighting for market share. There will be NO full applications whatsoever for any kind of usage (no sound apps, no CAD apps, no full DTP apps, etc.). Some pros and cons for this system, off the top of my head:
* Naturally multi-threaded code; a responsive implementation by design.
* The user gets a default OS installation with only the most needed features, and nothing more that he/she might not need.
* Adding new modules will be easy, downloaded and installed via the web, and each module will hardly be more than 20-100 KB (depending on what it is for), so download times are not an issue at all; downloads are almost instant even for modem users.
* The user will know exactly how to use his/her computer, because the set of modules installed is decided by the user, and they are probably there because he/she needs them in the first place. The operation of the computer is now simpler and fully customized.
* The price of operating a computer is now lower (e.g. no need to buy Photoshop for $500 and then only use 20% of its capabilities; you only have installed what you need each time). Renting software is now much easier, and lots of free modules will be available anyway.
* Fewer bugs and security concerns overall, as debugging a smaller piece of code is far easier.
* Infinite functionality. The more modules you install, the more functionality you get. There are no functionality barriers.
* No more “leaning” on a particular application or on big corporations who might charge large amounts of money for full-fledged applications that include features you might never use.
* Modules are able to communicate with other modules, and some modules can only be used if another third-party module is installed. This way, even modules themselves can be extended, not just the root level of OS functionality.
* Such modularity, with the help of a (maybe XML-based) description for each module, would make an artificial-intelligence system on top of the base OS much easier to implement.
* It will be a challenge to create an OS in such a way that it doesn’t deadlock with all these modules and at the same time is able to interoperate between them seamlessly.
* Designing the UI for such an OS is also a challenge: having, let’s say, 300 modules particular to sound files is a challenge of its own in how to sort them out logically so the user can easily find what he/she needs each time.
* Destroying big markets of applications can never be good for the IT market. However, a new kind of market would emerge, where the user is more in control.
* This system is fundamentally different from everything around today in the small server/desktop OS market, so nothing will be able to be ported, as the whole philosophy is different. Developers might have trouble programming for this OS, at first.
* Too much work to have something like this up and running. Today, most new OSes just use some freely available tools and libraries to speed up their development instead of re-inventing the wheel (e.g. GNU tools). For this OS, a lot of re-invention and re-coding will have to be done.
* General consistency between modules might be an issue, as they come from completely different developers. General interoperability between modules might be an issue too, at least until the system matures.
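The (maybe XML-based) self-description mentioned in the list above could look something like the following sketch. Every element and attribute name here is invented for illustration; the point is only that a module’s header can declare its MIME types and its dependencies in a machine-readable way, parsed here with Python’s standard library:

```python
# Sketch of a hypothetical XML module descriptor and how the system might
# read it. The schema (module/handles/requires) is invented for this example.
import xml.etree.ElementTree as ET

DESCRIPTOR = """
<module name="watermark" version="0.1">
  <handles mime="image/png"/>
  <handles mime="image/jpeg"/>
  <requires module="image-core"/>
  <description>Applies a watermark filter to raster images.</description>
</module>
"""

root = ET.fromstring(DESCRIPTOR)
mimes = [h.get("mime") for h in root.findall("handles")]
deps  = [r.get("module") for r in root.findall("requires")]
print(root.get("name"), mimes, deps)
# watermark ['image/png', 'image/jpeg'] ['image-core']
```

With descriptors like this, the system (or an AI layer on top of it) could reason about which modules exist, what they handle, and what they need, without running any of them.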
Microsoft’s .NET and Oberon have some similar concepts, but neither goes to the extreme of implementing this architecture. (Update: readers now tell me that the Taligent/OpenDoc architecture had some similar concepts, but still, not quite the same.)
What do you think? Would something like it work? Try to think “out of the box” before trying to answer though.
Reminds me a lot of the OpenDoc/Taligent paradigm… As nice as the idea seems to be, I think it would be rather unusable per se… but a (perhaps) better deal would be to have applications on top of a classic OS (Linux…) functioning this way. So you could have an application, say for editing sound, as a sort of “shell” for the type of plugins you describe. You keep the component approach, but with specialised applications instead of a general OS. Thus, it could perhaps be used… dunno.
The idea also sounds like what you can do with some software for composing sounds (chaining “sound” modules), or like software compositing (aspect programming with composition filters)…
BeOS partly has a concept like this. Why not join GE and share your thoughts in order to get OBOS r2 more modular?
Couldn’t this just be a shell to an existing OS?
It doesn’t seem that there is anything so advanced in it that it couldn’t fit on top of most OSes. It just seems to be a different way of designing the shell.
I think there are many projects that have a similar idea, but I haven’t heard of anything taking it to such an extreme of wiping out apps altogether.
I believe that many changes would have to be made in the kernel of such an existing OS to have it working the way described in the article. It is possible, but it won’t be as “pure” when trying to implement it on top of an existing OS. It will hit limitations that wouldn’t naturally be there if the whole architecture were thought out and implemented from scratch.
In many ways, you can sort of do these things with OS X services. Once you get a piece of data into the clipboard (or simply selected), then a whole host of options becomes available for processing this information. It’s just not done, currently, in place. It’s essentially “pasted” into the new app and processed there.
But, yeah, it’s essentially OpenDoc. Nothing but canvas and you get to drop bits of data wherever you want.
You can probably come pretty close to something like this today with MS Office and Visio. At least it makes a good demo.
But it is interesting; the whole hubbub over this kind of thing has pretty much died down.
Sure, modules aren’t a new concept. You can find them in many places. I have found the BeOS kits and KDE’s KParts most interesting in this sense… KDE has its limitations, though, being on top of a lot of monolithic technology…
Very interesting idea, but like some of the others, I can’t see why this couldn’t be implemented on top of an existing OS or kernel, especially one like OpenBeOS.
Conversely, creating a new OS from scratch means trying to figure out how, if at all, the docucentric approach actually affects the kernel, drivers, and the like.
Still, if anybody wants to do this, I’d certainly like to follow it. Once the basic groundwork and structure is figured out, “building” the OS with the modules should be relatively simple.
Read my above comment as to why I would recommend to use a base OS that is made for something like this from Day 1. Patchwork is not something I endorse when it comes to innovation. Do it right, or don’t do it at all…
It’s a good idea in theory, but it would take a lot of work and time to implement, before people even consider it.
It sounds like it would be great for editing existing files, but for creation of new files it might not be as effective as the traditional application way of doing things.
I really like the idea of downloading just what you want, pretty much instantly. If it ever got advanced enough, I might consider it, but in the mean time, it could be difficult to adopt, and would take a long time to develop enough apps, or modules, for it to be completely functional (and usable)
Good idea, but I think the current method of computing has too much of a stranglehold for it to work.
> It sounds like it would be great for editing existing files, but for creation of new files it might not be as effective as the traditional application way of doing things.
The system would work based on MIME types, so it will be as easy to work with new MIME types as it currently is under MIME-based OSes, like BeOS. As for very generic files, like a .o for example, some default operations will be available unless you install new modules for the new kind of files. These default operations could be something like “open with a hex editor module”, or similar.
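The default-operation fallback described above can be sketched as a simple dispatch table. The registry contents and module names below are hypothetical; the only real idea is that unknown MIME types fall through to a generic module such as a hex editor:

```python
# Sketch: MIME-based dispatch with a default fallback for unknown types.
# Registry contents and module names are invented for illustration.

registry = {
    "image/png":  ["png-viewer", "resize", "watermark"],
    "text/plain": ["text-viewer", "spell-check"],
}

def modules_for(mime):
    # Unknown or very generic types (e.g. a .o object file) fall back
    # to a default operation such as "open with a hex editor module".
    return registry.get(mime, ["hex-editor"])

print(modules_for("image/png"))             # ['png-viewer', 'resize', 'watermark']
print(modules_for("application/x-object"))  # ['hex-editor']
```

Installing a new module would amount to appending its name to the list for the MIME types its descriptor declares.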
This could easily be built on top of an existing OS. All that would be needed is a new filesystem, or a database built on top of a filesystem, and an intelligent MIME-type system.
Yeah, let’s do yet another patchwork. Voila!
To add the AI system I mention there on top of this, we are talking about a lot of redundant layers… I would once again urge a clean approach.
This is a very inefficient approach to a lot of things. How would word processing be done? First load up the text input module. Then load up the formatting module. Then load up the spell check module. Then load up the text input module again. Repeat. Or how about a video game? What MIME type would that be?
A viewer would be there by default for most MIME types, so loading and formatting would be there by default. As for spellchecking, sure, you will have to load it; even today many word processors come with spell checking turned off.
As for video games, that is the “executable” MIME type; what else?
It’s not patchwork. A clean OS doesn’t need to be patched to have a different approach to a file system. You don’t need to patch the OS to have an intelligent MIME system. There is no reason to re-invent the wheel to put it on a new kind of vehicle. All you are proposing is a different approach to the user interface; nothing about the lower level of the system.
> As for video games, this is the “executable” mime type, what else?
This defeats your whole idea of not having an application-based OS. The only way I would see it is having all the maps as files; then loading a “Map1.doom3map” file would start you off in the game. Or, to play multiplayer, you would have to run a ‘module’ that adds a bunch of servers to your file system, then have a ‘doom3server’ MIME type that would load up the doom3 ‘module’.
>Nothing about the lower level of the system.
I honestly believe that the lower level of the system will need changes to sufficiently accommodate this kind of interface, at least for multi-threading/driver/API/performance reasons.
The fact that I didn’t write about these needs doesn’t mean that they don’t exist.
The way I first pictured this is sort of from the user point of view. We have an interface to the files and a way to select the files. Selecting a file triggers the “module” that handles the file’s mime-type. This module then processes the file.
That seems to work when I have, say, one image viewer. Suppose I want multiple modules to process one MIME type (e.g. a text editing module and a printing module to handle text). Then I start thinking of a desktop where I can have icons, and I select a file then send (“submit”?) it to a module of my choice.
I’m not an interface person but it seems sort of “friendly”. (Personally I don’t mind a command line interface too much.)
From the OS level I feel kind of worried about letting modules be loaded into my kernel without some serious checks. (I never have a production machine with module support.)
Overall though I do like it. It’s really interesting. I’m sure the ideas will bounce around my head for a bit now.
So you say…
I believe that many changes would have to be made in the kernel of such an existing OS to have it working the way described in the article. It is possible, but it won’t be as “pure” when trying to implement it on top of an existing OS. It will hit limitations that wouldn’t naturally be there if the whole architecture were thought out and implemented from scratch.
Then when someone broaches the issue that this doesn’t really require any sort of kernel support, you again mention…
Read my above comment as to why I would recommend to use a base OS that is made for something like this from Day 1.
I’m still failing to see what part the kernel plays in this at all. Perhaps you’d need an entry in the filesystem for a MIME type, but other than that, I see two implementation avenues:
Whenever you wish to manipulate a resource (I’m guessing this system would support network resources in addition to files) you start a new HWP which looks at the MIME type and decides the best way to “display” the contents.
From here you may either dynamically link with a set of modules which are loaded as LWPs, so everything shares the same memory space, or…
You can start additional HWPs which perform additional functions.
The latter is essentially the Unix philosophy… a bunch of simple applications which perform a single task well, linked together with pipes.
I think you’re probably aiming more toward the former. I think implementing this would require some sort of abstract filter graph architecture which can determine how to plug the modules together.
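That filter-graph idea could be sketched as a search over modules that declare their input and output MIME types; the system then finds a chain from the source type to the target type. All module names and types below are illustrative, and breadth-first search is just one plausible way to keep the chain shortest:

```python
# Sketch of an abstract "filter graph": each module declares an input and
# an output MIME type, and the system finds a chain between two types.
from collections import deque

# (name, input_mime, output_mime) -- hypothetical module declarations
MODULES = [
    ("png-decoder",  "image/png", "raster"),
    ("watermark",    "raster",    "raster"),
    ("jpeg-encoder", "raster",    "image/jpeg"),
]

def find_chain(src, dst):
    """Breadth-first search for the shortest module chain src -> dst."""
    queue = deque([(src, [])])
    seen = {src}
    while queue:
        mime, chain = queue.popleft()
        if mime == dst:
            return chain
        for name, mime_in, mime_out in MODULES:
            if mime_in == mime and mime_out not in seen:
                seen.add(mime_out)
                queue.append((mime_out, chain + [name]))
    return None  # no way to plug the modules together

print(find_chain("image/png", "image/jpeg"))
# ['png-decoder', 'jpeg-encoder']
```

A real system would also have to handle modules like the `watermark` one above, whose input and output types are the same (the user picks whether it takes part in the chain), which is exactly the kind of policy question such an architecture would need to settle.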
You’d also need lots of talented people designing tons and tons of APIs for all resource types you wish to handle. The OS would have to come with some sort of “base applications” into which the modules could be plugged (for example, if you open a text document, you’d need a base text editor application onto which you could tack your spell checker/various format exporters/etc)
So yes, a cool idea to be sure, but I don’t see how this can’t be done completely in userspace on top of the multithreaded kernel of your choice.
You can find an interesting article on throwing years of work and experience away to do it “cleanly” here: http://www.joelonsoftware.com/articles/fog0000000069.html
Really, any decently designed piece of software has room for changing needs, innovations, new features, … That is one of the primary goals of software design, that it is easily extendable. You don’t need to rewrite everything from scratch if you want to do something new.
>This defeats your whole idea of not having an application
Not at all. I never said that there won’t be any executables around! I just said that there won’t be full-fledged applications that can make coffee as well as render. Some basic form of executable that defines the rules by which the rest of the modules load/interact might be needed for things like games or other “integrated” and “non-file-based” actions.
Nico is right…
It sounds a LOT like TALIGENT. I read their book on the architecture and was waiting for something to be delivered, and then they decided to change to a framework instead of an operating system, and it kinda went downhill from there.
I think throwing something new into the fray would only detract from what is out there and make things more confusing.
I love LINUX, but until they hammer out a few usability functions, I keep going back to Windows (for games, for specialized apps that only run on Windows like DameWare Utilities for NT/2000 networks, etc.)
And yes, I have heard of and used VMware, but booting up a VM session of Windows can be time consuming – so I use Linux on my home machine and Windows on my PC. (I would still be using OS/2 if it were brought up to date or new versions were released – darn IBM).
Keep hope alive!
As a Windows user trying to “defect” to Linux, I have one specific complaint: While a major feature of open-source operating systems is a lack of bloat, this lack of bloat results in a myriad of little applets, all of which are developed by various members of the Linux community.
There are competing applets, applets that require specific libraries, applets that only work with particular other applets, applets with cute little names that mean something only to their developers, and so on.
We need to follow the KISS rule here. If your little idea is to have any hope of success, there needs to be a specific list of applets developed by specific groups and updated by a specific timetable. Otherwise, you end up with the same newbie confusion and return to Microsoft that happens practically every day.
…I don’t see how this can’t be done completely in userspace on top of the multithreaded kernel of your choice.
Everything can be done in userspace; theoretically you can run a whole Mac system with a lot of nifty hardware, MacOS and other apps – ALL emulated on an x86 machine, or on a scientific calculator if you want. It’s all just a question of how much you want to abstract. Personally I like staying as close to the hardware as possible.
>While a major feature of open-source operating systems is a lack of bloat…
Erm… this comment is kind of off-topic. I never said that this should be an open source effort. Whoever can do it (including individuals, projects or even commercial companies) is free to try it.
Some basic form of an executable that can define the rules that the rest of the modules should load/interact might be needed for things like games or other “integrated” and “non-file based” actions.
This sounds more and more like what BeOS was/is about to become, to me!
I also think current kernels are satisfactory. Applications are fine too; just breaking them into functional modules will not solve the problems of usability. When working with modules, other problems will arise, like too many dependencies or duplications, or incompatible interfaces, and we can end up with the same problems as with applications.
IMHO we just need a completely new OS GUI with a document-centric approach. It should completely hide the file system layout from the user, and it should lack a file manager/explorer application. Users will generally not care where their files are stored: on a hard disk, a removable medium or on the internet; the system will take care of that. Are my files backed up? Leave it to the system. What can I do with this picture? The system will tell me.
BTW, changes at the kernel or intermediate levels may still be needed, but I don’t see them as first priority. This is where open source flexibility may come in useful.
A key part of what makes really great apps so great is that they provide a clean environment for the user to do work. I hear you with regards to everything-and-the-kitchen-sink apps like Photoshop, but to the people who have careers that revolve around these products (Flash devs and Macromedia’s studio tools, or Visual Studio, come to mind) it’s not just about having a thousand tiny little tools to get the job done. There is something to be said for having an integrated toolbox.
Your proposal doesn’t address how it impacts the biggest concern of all in terms of providing ‘context’. Now, this is not an easily definable term, but it’s what a coder feels in his or her favorite text editor, or what a musician feels in his or her favorite piece of music software. It’s a feeling of oneness with the workflow and the mental/virtual spacing of the tools, and the feeling that one can ‘consume’ the application into one’s mind. You gloss over this by saying that it is something the system would have to address, but what you are describing is the exact opposite of context in the workspace sense, without giving any detail of how that might work. Modules and file-type-based context? Well… no duh. But that’s a _TINY_ part of the usability picture once you do away with modality.
And as for AI.. gimmie a break…
First off: self-modifying code is the key to AI, and x86 chips don’t allow it in 32-bit mode (data/code pages in memory are kept totally separate). The code in your mind rewrites itself CONSTANTLY. The difference between ‘information’ and ‘logic’ is an illusion.
Second, consider this thought experiment: Suppose you could have a conventional computer (not quantum, just like the one you are looking at) of whatever complexity and power that you wish. And suppose you had two bits of data:
– the PRECISE layout of the molecules of your brain at the moment of birth
– a dataset that describes EVERY SINGLE BIT OF INPUT that you have experienced in your life and the exact moment it occurred (touch, taste, smell, vision, etc.), as well as the environmental conditions (chemical influences, injuries, etc.) that directly acted upon your brain.
So basically we have all the data that defines your entire life. Now… the magic question is: at what level of detail do you need to simulate this in order to recreate EXACTLY who you are today? Cell level? Molecular level? Lower? Individual protons, etc.? The problem is that at so low a level, quantum influence takes hold and you get genuine randomness that cannot be factored out. The truth at the core of this discussion is, of course, that because we exist in physical reality, our intellect is a product of the inherent randomness in that reality. Any time we create an artificial reality, we cannot expect intelligence to rise out of that reality until we can introduce the same level of randomness to that creation. Which we can’t do right now, and it’s not clear that we’ll ever be able to with conventional machines.
There were some conversations about this stuff in the project’s message group.
Eugenia’s OS proposal doesn’t, afaik, prevent you from designing an app that LOOKS and ACTS like Photoshop.
Say you’re building Photoshop in this environment. You start by sketching out what functions and routines you will need in your application. When you have an idea, you make use of as many existing modules as you can. Some modules may not be advanced enough – then you write an extension module for them. Some may not exist – then you write them. When you’re done with all the core functions needed as modules, you can start designing the “app itself”, which in this case only needs to be a framework to connect all the modules and call the right modules at the right time. It isn’t really that revolutionary, just one step further than existing solutions like KParts.
The point of writing a new OS for this is that a traditional OS is written so that traditional monolithic applications can run well on top of the kernel. A lot of these traditional functions in kernel-to-userland communication could be cut. Cleanest, and probably easiest in the long run, would be to have a kernel specialized for this.
If OBOS plays their cards right they might have the answer to this OS request one day. If not, I’m sure someone else will. Time will tell.
Sounds almost exactly like one of my posts in the OS development forum a while back.
Honestly I see no similarities… could you please explain yourself?
Ah, cool (I don’t normally read the OS forums, except the BeOS and AtheOS forums). This kind of idea is not exactly too complicated, so a lot of people have thought of it, it seems (I just received an email from another project who said that they were working on it for real!!), but no one has actually brought it into existence yet… I think the time is nigh, though. It will be interesting to see what comes out of it.
Ealm, it seems that Null_Pointer_Us mentioned something similar at that URL, but you will need to scroll down to the last 3-4 messages to find his two replies to the thread. I think only his second reply is really relevant to the current article.
Ok, no offence, but this seems like a waste of time. #1: Who in their right mind would help you eliminate the need for app programmers?? #2: This sounds like a god-awful, user-unfriendly OS. #3: Get with the program, Linux is the way to go.
Sorry – didn’t scroll down… now I see
I like this approach as well… why not take it to the extreme and let the kernel be part of this modularity too… just have a few lines of code in the first partition sectors that point to a certain module, which in its own turn initializes driver modules and different parts of the kernel. Of course, not every module needs to appear as a module of its own to the user. They can easily be grouped in meta-modules to make things simpler… one module isn’t locked into one meta-module, though…
>#1: Who in their right mind would help you eliminate the need for app programmers??

You are wrong. They are not going to be eliminated _at all_. It is just that common operations will be easier and faster to implement, so their role will be somewhat different.

>#2: This sounds like a god-awful, user-unfriendly OS.

Could be, I don’t know. There is nothing to compare it with right now.

>#3: Get with the program, Linux is the way to go.

I could mod down your whole comment just for this trolling sentence. You get with the program: OSNews is not OS-specific. Open your mind and accept new ideas, or go and read only gnu.org.

The title of OSNews under the logo is “Exploring the Future of Computing”, not “Exploring the Future of Linux”.
This idea sounds vaguely reminiscent of piping Unix commands…
Without really having all the knowledge to talk too deeply about this yet, I would say that the kernel needs to be able to read from hundreds of modules at the same time (in a practical sense) without losing too much speed on switching and linking between the modules. I guess this means that the FS will need to be very efficient in handling module data from parts all over the HD; this might be really tough to design.
I just received an email by another project who said that they were working on it for real
Eugenia, the first thing I thought of was OpenDoc while reading your article (and then you mentioned it yourself at the end). I’ve used OpenDoc extensively and the philosophy behind it has remained with me. By the way, you can still use OpenDoc and Cyberdog with Mac OS 9, believe it or not. And there is a dedicated group of people who still do use it. Of course, what happened was the drive to get developers to make “parts” (like your modules) started to fizzle, and then, when Jobs came back to Apple, he killed it off completely to try to get Apple’s house back in order.
The upshot of all that is that – and it’s a shame – we never got to see what it would be like full-blown. There were quite a few parts for some basic things and I remember Claris had alpha parts for ClarisWorks, which I wish I would have saved. But, there was never a chance to see it in its full potential glory. I have often thought that GoBe was, in a way, a sort of morph of that…since they were ClarisWorks guys…and they used the idea of GoBe being document-centric.
Now, the thing is, if one is talking about this modular approach, it reminds me of an article I read in a Mac mag back when OpenDoc was still alive. The guy who wrote it was anti-OpenDoc and said he was tired of people saying, “Well, you can use this part for this and that part for that and so on and so forth…”. He said something like, “Isn’t this why we have huge, bloated applications in the first place – so we don’t have to fiddle around with all these different parts???”. LOL, I got a big kick out of that because it made me realize that this was the dividing line, this is where the face-off is: In a modular approach, does one come to a point where using many different modules becomes more trouble and more confusing than using a big, traditional application? And I certainly don’t know the answer, because it never got that far with OpenDoc. Ultimately, though, I think that is the big question that would arise. It raises questions… like would ordinary users catch on to the modular approach, or would it confuse them if too many modules came into play?
At any rate, a great, thought provoking article. I’d certainly give modular computing a whirl!
I got a big kick out of that because it made me realize that this was the dividing line, this is where the face-off is: In a modular approach, does one come to a point where using many different modules becomes more trouble and more confusing than using a big, traditional application? And I certainly don’t know the answer, because it never got that far with OpenDoc. Ultimately, though, I think that is the big question that would arise. It raises questions… like would ordinary users catch on to the modular approach, or would it confuse them if too many modules came into play?
There could be different levels of modularity. This is why I suggested using “meta-modules”. E.g. the “HTML render” module could consist of many smaller modules doing specific rendering tasks. Maybe the meta-module could also let the modules it consists of communicate more efficiently, as if they were actually part of the same file. So what would then happen if one of the modules that is “glued” inside the “HTML render” meta-module is called by a second module? I guess this would be a file system issue.
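A meta-module along these lines might be sketched as follows. The class, the member names, and the list-based “documents” are all invented for illustration; the point is only that a group can look like one module from the outside while its members remain individually addressable:

```python
# Sketch of a "meta-module": a group that appears to the user as one module
# but delegates to its members, which stay individually callable.

class MetaModule:
    def __init__(self, name, members):
        self.name = name
        self.members = members  # {sub-module name: callable}, run in order

    def run(self, data):
        # Run every member in sequence, as if they were one glued module.
        for step in self.members.values():
            data = step(data)
        return data

html_render = MetaModule("html-render", {
    "tokenize": lambda d: d + ["tokens"],
    "layout":   lambda d: d + ["boxes"],
    "paint":    lambda d: d + ["pixels"],
})

# A second module may still call one member directly...
print(html_render.members["layout"](["doc"]))  # ['doc', 'boxes']
# ...or the whole meta-module at once.
print(html_render.run(["doc"]))                # ['doc', 'tokens', 'boxes', 'pixels']
```

Whether a direct call into a “glued” member should bypass the meta-module’s internal fast path is exactly the open question raised above.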
A few problems I can think of that one might encounter while designing an OS like this:
1. Speed – traditionally, modularity costs performance.
2. Simplicity – the user shouldn’t need to handle tens of thousands of modules by hand.
3. Compatibility – a module that does its task fine in one app might not do the same thing as well in another, slightly different context.
4. Security – what if there is a security hole in a module that virtually every application makes use of? Cheddar cheese… And how much permission will each module have? Who will check that the permissions are set appropriately?
Probably this approach could be really nice as long as everything is well coded, because it all relies on good code. For obvious reasons I would be skeptical of running closed-source modules.
I don’t think this modular approach to apps will appeal to non techies.
Say I am learning how to make home videos. I start learning a full-fledged app (e.g. Ulead). Bit by bit I learn how to edit, add music, add titles, add effects, etc. I learn to do things that:
– I didn’t know were possible
– had not imagined
In your modular approach, a user has to:
– know what he/she wants to do
– go find the applet for it
– download applet
The drawback is that many users may not even know what is possible. And this applies to all sorts of applications – spreadsheets, word processing, presentations…….
Yes, excessive bloat in apps is bad. But zero bloat leaves you with no framework, and it becomes much more difficult to learn new things.
Athene is already designed this way – it’s a MOO system (Modular Object Orientation). In fact on the first page of the SDK manual, it talks about modular operating system design right from the second paragraph :-).
Athene already supports over 80 classes and you can easily add new ones to extend the system. Want to add support for TIFF pictures? Just write a TIFF class, add it to the system and voila, every program will now magically support TIFF without any changes necessary.
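As a rough illustration of that class-registration idea (this is not Athene’s actual API; the registry and class names below are invented), one class added to a central registry makes every caller that goes through the registry support the new format:

```python
# Hypothetical sketch: a central registry of picture classes. Registering
# one new class makes every program that loads pictures through the
# registry support the new format, with no changes to the programs.

PICTURE_CLASSES = {}

def register_picture_class(extension, cls):
    PICTURE_CLASSES[extension] = cls

def load_picture(filename):
    # Every program calls this one entry point instead of decoding itself.
    ext = filename.rsplit(".", 1)[-1].lower()
    cls = PICTURE_CLASSES.get(ext)
    if cls is None:
        raise ValueError(f"no class registered for .{ext}")
    return cls(filename)

class PNGPicture:
    def __init__(self, filename):
        self.filename = filename
        self.format = "PNG"

# Later, a third party "just writes a TIFF class" and registers it:
class TIFFPicture:
    def __init__(self, filename):
        self.filename = filename
        self.format = "TIFF"

register_picture_class("png", PNGPicture)
register_picture_class("tiff", TIFFPicture)

print(load_picture("photo.tiff").format)  # every caller now handles TIFF
```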
I could talk on and on about it, but if you are interested in the technical details you can read the SDK here:
I like the concept, but (joining the masses here, not meaning to piss you off, but because I truly believe this) I don’t see why this cannot evolve from another OS or even multiple OSes.
Most fully modern OSes seem to be heading in this direction. I am not talking Windows, MacOS, or Linux. Those OSes are all rooted in the monolithic approach.
OSBOS, QNX, and MorphOS are some examples of modern OSes that seem to be evolving into what you are talking about. These OSes, IMHO, seem to take care of most of the buzzword compliance, and the modular interface is already implemented to some degree. They all get rid of the silt you seek to get rid of, and at the same time make it easier to totally rewrite major chunks of the architecture (not that this would be an easy task any way you slice it, but easier than, say, starting totally from scratch).
An evolutionary path may be much better for this type of computing paradigm. Most of the stuff you mention is fully (or near fully) supported (supported but not yet implemented in some cases) by the modern OSes. This may not come about via the revolution you envisioned in your article, but current trends in current OSes seem to be heading this way.
The previous has been brought to you by the Department of Redundancy Department previously.
You’d definitely have to do a big think about how to get this one off the ground, but it sounds like there are some good ideas flowing, and it doesn’t seem very productive for some people to get grumpy just because they don’t like it.
ps. I’m not grumpy… 🙂
I think what you suggest is basically what we are doing in Unununium ( http://uuu.sf.net ), maybe we even push it further.
Everything, from the deepest recesses (scheduler, memory manager, module engine) to the most volatile apps (ls, memstat), is based around the same technology.
Everything is loaded in memory and dynamically linked; there’s no static component, yet once a part is loaded it is directly able to use any other part as if it had been compiled with it right from the start.
We call that design ‘VOID’, since it has basically no kernel at all, only modules. Applications are loaded exactly like modules, except that their actual lifetime is expected (only a user expectation, not a system one) to be shorter.
The only way this could work is if the modules were transparent to the user. In other words, the user doesn’t directly deal with modules and need not even know they exist for the most part. The only difference as far as they are concerned is that their computer usage is more document-oriented than application-oriented, as many people have already mentioned. They would have their document or image, and there would be relevant tools readily available for them to use. They would no longer think in terms of “what program do I need to run”, but instead just open the file and have the appropriate tools pop into view.
This certainly doesn’t eliminate applications in the absolute sense. It merely changes how they are created and how they are used. Programs like Photoshop would still exist, they would simply be installed as a collection of targeted modules that are loaded on the fly depending on what the user is doing. Adding new modules would be just as transparent. In the same way that applications today are upgraded or patched to fix bugs or add features, modules would also be added or replaced as needed.
This whole idea is still pretty vague, but fascinating to me. It’s funny that this news item appeared today on OSNews, as I was just today brainstorming on my own OS project, trying to think through similar ideas (eliminating applications, making the computer a single unified system, applications acting as plugins to the OS, etc.).
Great points by both of you. And, to make modules transparent to the user – wow, that would be something!
The article mentioned something very close to BeOS, though.
First was the MIME-type signature for every file. This makes an action dependent on the MIME type, figuring out what kind of app would launch the file.
The second one is definitely the Translation Kit: all translations are defined above the OS and below the user apps. Calling the services defined within this Kit provides all the translation services for any kind of file format.
Actually, I think this guy has the right idea.
There have to be layers, like modules for different video and music types, but at the same time a framework app for each certain type of app. This way loads of different front-end programs can use the same back-end architecture, and it still provides a great deal of functionality for the user.
Think of the GStreamer project. It is a multimedia backend project getting a lot of attention from the GNOME camp, but there is a Qt player based on its backend construction as well. All by itself it is useless. However, what it does is provide developers with a backend multimedia framework for the rapid development of multimedia apps. You still need avifile and mpeg and mpg123 libs and such. This project simply pulls them together as plugins and provides the framework for taking the divergent parts and making something useful out of them.
COM servers and CORBA agents also provide these as well, though.
All it needs are specifications to implement these translators.
It seems, to me anyway, that a mime-type-centric interface only works if you assume interrelated documents are of the same type. I’m not sure this is always the case. For example, if I have a document I’m creating that has pictures, text, and graphs/tables, I’ll have to constantly flip between three different types. In the end, I’d run into the same problem I have in current environments, where I have to have Excel, Word, and Photoshop all running next to each other, trying to make each one work with the others.
I’m not dismissing the idea at all, but I just think it might be a good idea to remember that integration between modules would be important (and likely challenging) to implement.
The main problem with such a module-based system is who decides upon the API for each task. Anyone who has ever programmed knows that an API is a compromise. And too often, that compromise must be revisited and changed because a new caller or implementation adds a requirement or cool feature that was not foreseen and is unimplementable with the current API.
The Unix piping analogy is a very good example of this. All the simple filters were written early, and they are the ones that newbies see first and that excite them. But then you hit harder problems. So what do you do? You add “glue”: first shell scripts, then awk, then Perl… In the end, you need to be a full Perl programmer to do the non-toy stuff.
The same applies to the modular OS approach. Yes, the basic and easy stuff can spur your adrenaline. But then to build, say, an email client that keeps messages around, can reply, handles attachments, MIME encoding, filtering, etc., all integrated into a usable whole, you need an integrator. That is what programs are in a classical OS: low-level module integrators.
That may well be the opportunity of the modular design: instead of having the same vendor doing the work modules and their integration, you could choose the vendor that has produced the best integration.
After a while, all your nice vision will be buried under an integrated framework, as normal users are more interested in usability than elegance, and they will be using something that for all intents and purposes looks just like current OSes and applications.
Think about this: the Linux kernel has a myriad of modules available. Linus chooses the ones he deems worthy. Linux vendors choose the ones they deem worthy. Most people just buy or build the one from their favorite integrator.
Note that trying to avoid the API problem by specifying a very small and dumb common base (usually saying that everything is a stream, à la pipe) simply dumps the problem onto a higher level of abstraction. At some point you have to pass meaningful content with meaningful data and interpret that content and data. At some point, some intelligence has to do the integration. For now, that still requires a human being. I wouldn’t count on AI for a while.
…this concept is actually very close to the “back end” that would be needed behind the “front end” that Jef Raskin describes in The Humane Interface. It’s nice to see that OS News readers are being consistent in disliking both sides.
Those of you with more open minds might want to read Raskin’s book. I’ve seen an awful lot of bashing of it from people who clearly haven’t read it in depth (or at all). I don’t agree with it all, but it’s all thought-provoking, as long as you’re willing to actually think about the ideas rather than dismiss them on the “that’s too different; if it ain’t broke, don’t fix it” grounds most tech-heads seem inclined toward.
The mimetype idea is cool and would be a great addition to current OSes, and not too hard to implement either; the module idea, however, I am not so sure of.
What will they do? Right-click a mimetype and wait for a 100-module list to show up? Why not just open with a default application like normal OSes do? It would include all of these modules and make it easier to be productive, because you would have everything at hand. Maybe advanced and novice buttons would help the new user and meet the power user’s needs. This really seems like something even more confusing and unfriendly.
Again though, the first idea, I think, would be awesome if it were implemented in KDE or GNOME, for example.
I don’t think the rest would happen even if it were possible and could be a little user-friendly. It would never be able to achieve wide acceptance or hardware recognition fast enough to gain the support it needs from big names like Adobe or Macromedia. The applications would be mediocre at best and would not suit the power user. It would take too much work re-inventing already existing technologies, and it would always be playing catch-up with other OSes; it would be nothing more than a hobbyist OS.
I also believe that OSes like OS X Jaguar and WinXP are already pretty friendly and don’t need to be modular. Your grandma needs to learn the world of computers sometime!
This is basically the approach used by x-kernel and Scout from the U of AZ (and now Princeton: http://www.cs.princeton.edu/nsg/scout/), and the Strings system which BeComm (http://www.becomm.com – but you probably won’t get much technical info from their site) tried to commercialize, but that company was massively crippled in the financial meltdowns of 2000. Scout uses this approach to construct very efficient network devices, and Strings for multimedia and network applications. There are other similar systems, I’m sure. From my work on these systems, I think this paradigm is delightfully useful for data-centric applications (protocols, video streams, things like that) but rather unuseful for traditional applications like Photoshop. Which is all right by me.
I ran across a project which resembles Eugenia’s idea exactly: ChallengeOS (challangeos.sf.net). I promised to redo the project’s webpage over a year ago, but never did, and I didn’t stay up to date with the development; it seems pretty dead by now.
If anyone is really interested in the idea, you should _really_ read the ChallengeOS concept (http://challangeos.sourceforge.net/docframes.php?doc=concept2/Chall…) though.
1. Would the “save” module be able to save all filetypes (not just a single filetype per module)? Creating generic/universal modules would reduce bloat and reinventing the wheel…
1a. If SAVE were a universal module (I can foresee groups of programmers battling over whose module supported the most filetypes…), then a set of standard modules would be created, as saving and printing and whatnot are very, very basic commands to use. If this is how it would work, it almost starts to sound like the POSIX (don’t know the correct term) of open, save, read, write, etc…
2. A program could still be “made”; bundling the 30 modules together into a package in a way creates the same feeling of a program, correct? Maybe I do not quite understand the idea properly.
3. What are MIME types, exactly? And would there be generic modules, like a tcp/ip module for connecting to the internet?
Interesting idea Eugenia.
When can we quit playing with improving these machines, get to the crux of it, and improve our brains? Can anyone hook our brains up with telepathy and more RAM and whatever databases? Then we can all stream our thoughts and visions and knowledge. Hook me up, Scotty.
A.I. will never be like H.I. until sensation is introduced. H.I. is built on experiences of pleasure and pain. Until then, A.I. will remain alien.
My machine is my pet.
Firstly I would like to say: Eugenia, very interesting idea. Perhaps I don’t understand it fully, because it seems to me that this system would be a nightmare for UI design. The problem is that it makes the assumption that large applications such as Photoshop, Word, Gimp, Outlook, etc., are simply a collection of “modules” that could be seamlessly connected somehow with MIME types and such to form a useful application at the will of the user. I think that this concept is false. Modern computer applications have to be specifically designed to be friendly and, to use a cliché, are more than just the sum of their parts.
However, this approach sounds very useful from the perspective of developers. This sounds very much like the idea behind object-oriented programming, a collection of “modules” or objects that the programmer can select from to form a useful base for an application. For instance, if you were a developer creating a text editor, you could simply include a spellchecker module instead of having to include and maintain one yourself. That way, when the developer of the spellchecker module updated his code, you could include the newer version in your application.
Now, what I think your goal with all this razzmatazz is, is to create an OS designed such that each application performs a specific purpose, avoiding everything-and-the-kitchen-sink apps like Office. I like this because kitchen-sink apps are often confusing (more on this in the next paragraph). This sort of sounds like the UNIX philosophy of “everything is a file” but at a lower level. In your idea, everything is an object (you call it a module). This makes sense to me from a user interface perspective.
Think of electronic devices that are easy to use, such as the microwave, toaster, television, radio, etc. The reason these appliances are easy to use is that they perform one and only one function, and they do it well. Unfortunately, this concept kind of breaks down when it comes to the computer, because computers perform so many different operations that it would be impossible (and uneconomical) to split these functions into separate appliances. So, figure out how someone can combine all of the computer’s operations without the overhead and complexity of today’s operating system, and you have solved the human-computer interface problem. Your idea seems to be a stab at that, but IMHO, it still doesn’t solve the problem, because the OS still has to juggle all of these different modules to create useful apps and has not eliminated the overhead of the OS. There is still no measure to hide or simplify the insides of the OS inside the kernel: talking to disks, devices, memory, processor(s), threads, multi-tasking, SMP, intra-computer communication, multi-computer communication (aka networking), and all kinds of different things.
Lastly, please comment on and criticize this post; let me know if there are any concepts/ideas that I missed.
I think Rudyatek and Pierre are on the right track talking about user transparency and API concerns. If we’re talking about a user’s experience of the OS, a module idea is a step backward – adding more complexity to an already overwhelming learning “curse”. If we’re talking about a developer’s experience while building software, it is a good idea that is worth discussing more.
The roles of component developer and application assembler illustrate this idea. When developing an enterprise app, you typically have a server-side developer developing re-usable components that are strung together by the client app developer. Similarly, modules could be exposed by the OS developer and used by the application developer to develop a fully-blown app which provides the user a context.
Context should not be minimized for the user and definitely not done away with. Show up for a wedding in your Halloween costume to fully understand context. Users understand context in everyday life and therefore understand they need to run a program to create the desired context to interact with the computer. A user can’t get any work done if they are constantly asking themselves fundamental questions like “Where am I?” or “What can I do here?”.
The project I’m working on, [url=http://dynapad.swiki.net]Dynapad[/url], is for developing an OS/environment particularly for PDAs, but useful on desktops and other systems as well.
Squeak’s UI system is very componentized, with the Morphic UI system.
Squeak already lays down this framework, but Dynapad is tying it together into something cohesive, like what is described in this article.
There are many similarities between Dynapad and what is described here, but it takes out the needless complexities, like an XML-based module description system, among others, and trades them in for reflective objects. In Smalltalk, it’s easy to find objects, and these objects know what they can do.
Let’s say I release a set of filters for our Not-Photoshop not-application, for use by anything that needs it in the system. Put all the publicly available API calls in a category called “filter-api.” Any application can very straightforwardly ask the object for all methods in the “filter-api” category, and either process the image based on calling those methods, or if it’s appropriate, build a menu of options for the user.
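In Python terms (a hypothetical stand-in for Smalltalk’s method categories, not Squeak’s actual protocol mechanism), the reflection idea might look like this: methods tagged with a “filter-api” category are discoverable at runtime, so any caller can ask the object what it offers and build a menu from the answer.

```python
# Hypothetical stand-in for Smalltalk method categories: methods tagged
# "filter-api" are discoverable at runtime via reflection.

def filter_api(fn):
    fn._category = "filter-api"   # tag the method with its category
    return fn

class Filters:
    @filter_api
    def blur(self, image):
        return f"blur({image})"

    @filter_api
    def sharpen(self, image):
        return f"sharpen({image})"

    def _internal_helper(self):   # untagged: not part of the public API
        pass

def methods_in_category(obj, category):
    # Reflect over the object to find all methods tagged with the category.
    return sorted(
        name for name in dir(obj)
        if getattr(getattr(obj, name), "_category", None) == category
    )

menu = methods_in_category(Filters(), "filter-api")
print(menu)  # a UI could turn these names into menu entries
```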
The most important concept in Dynapad is the View. In Dynapad, one of the capabilities of all model objects is that it knows how to get itself viewed and how to edit itself. Of course, not all objects would have only one viewer or editor, and these objects can just as easily provide the system and the user with the possible options- or it can simply use the system or user-set default.
So far in Dynapad, there are applications. But the majority of anything that is used for manipulating data of any kind can be done in one “program,” the InfoBrowser. It’s a simple setup, but quite elegant: just a View that shows you what’s in the database and lets you browse through its hierarchy. When you want to view the specifics of an object, it asks the object to bring up its GUI view, if that’s your preference. Or, you could have it set to a summary mode, in which case the InfoBrowser requests the summary textual View, so you can view many objects on the screen at once.
In Dynapad, everything is built on the object database, Magma. All data is stored in it. It knows not only about flat text or binary data, but the relationships between objects- between the Names and Dates and Todos on your PDA.
Look forward to a huge article about this soon on OS News, but it may not come until the end of December due to certain constraints (working almost full time between 4 jobs and doing a full-time load of college class work).
As long as you’re going to all this work, why not have the underlying core not be a traditional file system, but go for a pure database?
You’d gain advantages over Eugenia’s suggestion of an XML description of each module by having each module have another table that describes its relationship to other modules, plus various information. That way, you don’t have to parse XML every time; you just have to do a query. Furthermore, you could do some really simple gestalts of things that way.
Since I know of no good UI’s for using a database in this way, one could build a layer around the database that seems to the user to be an actual file system. Then, if someone comes up with a really nice database UI for working on these things, it’d be really easy to adopt.
Just my two cents.
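A minimal sketch of that registry-as-database idea: modules and their relationships live in tables, so finding what handles a MIME type becomes a query instead of an XML parse. The schema and module names below are invented for illustration.

```python
# Sketch: a module registry as a database instead of per-module XML.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE modules (
        name      TEXT PRIMARY KEY,
        mime_type TEXT,    -- what it operates on
        task      TEXT     -- the one job it does
    );
    CREATE TABLE relationships (
        module     TEXT,   -- describes a module's ties to other modules
        depends_on TEXT
    );
""")
db.executemany("INSERT INTO modules VALUES (?, ?, ?)", [
    ("png-save",   "image/png", "save"),
    ("png-resize", "image/png", "resize"),
    ("watermark",  "image/*",   "filter"),
])
db.execute("INSERT INTO relationships VALUES (?, ?)",
           ("png-save", "png-resize"))

# "What can I do with a PNG?" becomes a single query, no XML parsing.
rows = db.execute(
    "SELECT name, task FROM modules "
    "WHERE mime_type IN ('image/png', 'image/*') ORDER BY name"
).fetchall()
print(rows)
```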
From the sound of it, Linux fills this role already: it is a monolithic kernel with a module interface, so you can compile everything into it and still install new device drivers on the fly through modules.
Seriously, there would be tens of thousands of modules if this were to be usable; a user would never have any overview of what’s installed, what he should install, etc.
Besides, it sounds like COM/XPCOM/Bonobo does a way way better job at your goal.
Hasn’t anyone learnt from the BeOS saga?
Great, go out, re-invent the wheel then find that you are completely and utterly incompatible with the rest of the world.
Funny still, there are people out there who still believe in this “no legacy”, “new idea” concept rubbish.
Look what happened when people tried to port Apache, X11 and Mozilla to BeOS. Features were missing from the basic OS, resulting in the software stuck in flux until Be finally got off its behind and did something about it.
Be, 10 years ago, would have been better off licensing an existing base (my personal choice would be BSD, from WindRiver) and building an OS on top of that, while fine-tuning the core. Meaning, you would have POSIX and UNIX compliance plus a new, snazzy interface on top; Be could have attacked the server and workstation space while maintaining compatibility with the UNIX world.
This compatibility would also mean they would have a large development community willing to port and develop for the platform as no “re-learning” would be required.
Let’s look at Itanium vs. x86-64. Itanium is a clean break from x86, yet it has arrived 2-3 years late, is still way overpriced, and underperforms against so-called “legacy” CPUs like the Alpha, PA-RISC and UltraSPARC III.
x86-64, on the other hand, is what I call “house cleaning”: remove the old, unused crap, replace it with new features, tune up existing features, and voilà, we have a CPU that performs very well and yet remains compatible with what is out there in application userland.
This is our project concept. We are creating a modular desktop environment for Linux systems, based on our new technology, the plugin manager. Everything in our system is a plugin. Let’s say that you want to hear an mp3. The plugin manager (pm) will load the appropriate plugin upon request, and it will unload it when it is not needed. We have a lot of other features too, like a VFS, and we do not depend on X: the arm0nia desktop can work through drivers on the framebuffer, SDL, SVGAlib and eventually X11. Eugenia already knows about our project, and I told her that I would contact her as soon as we had our first GUI ready. But we are heavily under development on the low-level parts of our project, and we need help. If you know C and you know what a pointer is (and Linux), feel free to contact us at email@example.com
You can find more info at: http://www.arm0nia.org/doxy/pm_for_apps/html/index.html (our page is under construction too)
I think a system like this would make programming a much less lucrative field.
The added level of granularity from app -> module would eventually narrow things down to just one or two modules with similar functionality. Competing modules would innovate until their feature set was nearly identical and then they would compete only in price. Eventually only one module per function would remain.
Today, similar apps can compete because they offer different feature sets, but they also have some shared features. What you’ve explained basically breaks the main features of an app into separate modules. The value of a packaged application is no longer there.
True, this would give users a lot of choice in what they can do. Interface standardization would be key in order for this not to turn into a usability nightmare.
The idea sounds great to me; it would be a step away from editing data with applications and towards just editing data. I have one question and one suggestion.
Why use a monolith?
A few of the pros you mention seem to tie in with the idea of a microkernel OS eg: debugging smaller code parts.
My suggestion: make all the content in the system one type, e.g. XML. So below we have document types like PNG or text, and above, all of them are converted to SVG. This way editing components only have to deal with one data type, and piping between apps is trivial (they don’t have to agree on data types).
This may kill performance (leave that to Intel/AMD/Sun etc.). What about dumping all existing formats (PNG, .doc, .mp3) and just dealing with a single type?
There may have to be only one data type, as otherwise modules dealing with one MIME type may not work with another MIME type, since the data structures are different.
Thus a user needs only specify that they’re writing a letter to their aunt and the specific modules are loaded.
Editing a document of a certain MIME type seems to be a similar system to editing .png (image/png) or .jpg (image/jpeg) files.
In short: single data type
two tier system: existing file types are converted to a single data type, manipulated and converted back.
single tier system: All data is of the same type and is manipulated by the user. Extra advantage: users need not specify what they want to do.
Am I way off base?
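The two-tier scheme described above can be sketched in a few lines: incoming formats are converted to one canonical in-memory type, editing modules work only on that type, and the result is converted back on save. The converters here are hypothetical stubs, not real format code:

```python
# Sketch of the "two-tier" scheme: convert in, edit canonical form,
# convert back. Readers/writers are stubs standing in for real codecs.

canonical_readers = {
    "png": lambda data: {"kind": "image", "pixels": data},
    "txt": lambda data: {"kind": "text", "body": data},
}
canonical_writers = {
    "png": lambda doc: doc["pixels"],
    "txt": lambda doc: doc["body"],
}

def edit(doc, op):
    # Editing modules only ever see the one canonical type.
    if op == "uppercase" and doc["kind"] == "text":
        doc["body"] = doc["body"].upper()
    return doc

def roundtrip(fmt, data, op):
    doc = canonical_readers[fmt](data)    # tier 1: convert in
    doc = edit(doc, op)                   # manipulate canonical form
    return canonical_writers[fmt](doc)    # tier 2: convert back

print(roundtrip("txt", "dear aunt", "uppercase"))
```

The single-tier variant would simply drop the reader/writer tables and store everything in the canonical form to begin with.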
This project reminds me of the unused Replicant technology of BeOS. Why not create only replicant applications? For example, a Photoshop-like app that can drag in tools from Gimp, Photoshop… and then, to manipulate data, we can use the power of all these apps. We could save, via a preferences panel, the tools we use as a function of the MIME type, so we could have completely modular apps for text, pictures… And why not a “super MIME type” that we could modify with Gimp tools and OpenOffice tools, and then decide to save in the file format we want?
I don’t see the need for a new OS.
Eugenia, you’re thinking like a toolmaker…..
Users think a toolkit is a hammer, a saw, an adjustable wrench, and a bigger hammer. Don’t confuse the poor souls:)
(Great idea – reminds me of OpenDoc etc – it would make for very small fast specialised tools)
Nice project you’re working on! I think everyone in this thread should read your docs at http://sourceforge.net/docman/display_doc.php?docid=8969&group_id=2…
Your JIT-linking technology is simple and elegant, and it solves one of the problems I had in mind with this kind of system.
I guess JIT-Linking is part of your file system?
I think this is a very good idea. I have thought of something like that too. This would change the way computers work completely, to a more data-centric way.
The basic idea is not so new. Many have mentioned Unix pipes. This idea was also the driving force behind the transition from single applications (word processors, spreadsheets …) to whole office packages. The next step was component models like COM.
So far the idea was to create components that every application can use. This proposal does it the other way round: you create one big “application” which can be extended with modules.
This would offer consistency and therefore ease of use.
The question is not, whether this breaks compatibility or if this is easy to do. The question is if this improves computing and to me the answer is a yes.
Today’s operating systems are not much more than application programming toolkits with some maintenance tools. Such a system would not be an operating system anymore; it would make the computer a tool to work with data.
The downside is that such a system would not work with anything that does not fit the metaphor of working with data, e.g. Games. At least it would not work well with them.
But corporate desktops, for example, don’t do anything else (at least they shouldn’t), and the home user can always boot into a system which is optimized for games, or use a PlayStation.
cat > file << EOF
(type your letter here)
EOF
aspell -c file && mail firstname.lastname@example.org < file
This sounds like the way you can chain stuff together on the Unix command line. In this graphical age I know the paradigm needs updating (Plan 9 graphical pipes), but the basic concept is one of “tasks” being done with the help of many small programs loosely interacting.
With the addition of “perl -e” it even allows dynamic scripting in Perl, and Python allows it too (I think). Why is this not the wave of the future?
PS: I apologise if someone has already said “unix pipes” and I missed it.
This reminds me of distributed computing and related ideas, i.e. Java (applets, class packages, etc.) and .NET (shudders), only lower in the computer system and potentially more powerful.
In this case, instead of having this as the actual kernel, write a script/binary/bytecode-based interface (like the JDK, etc.) that runs on top of the basic kernel and does all the actual calling of MIME modules and their dependencies.
A possibly useful addition could be to allow container modules, i.e. as a stop-gap if there was a missing dependency (so a simple standard app could be wrapped up in such a module, so as not to require reworking every part of the OS/interface/programs unless absolutely necessary).
I think, as mentioned in the article, the best factor in this is the standard interface for all programs (who knows, probably have the interface as a loadable module, to allow a command line or GUIs of various flavours to be used). However, one if/but: what if there were two ways to display/edit/alter a file/buffer/pipe? You’d need to allow the user to stipulate beforehand, or choose, which to use; this could get interesting if there are a large number of options.
If this is implemented I’ll definitely give it a go (even though the learning curve will be quite steep), or if I can spare the time I might even toy with implementing it (but very unlikely).
If a database is used to keep track of all modules, why not make it network-transparent? This way modules and meta-modules could be distributed out on any compatible hardware, all united in “real” distributed computing. (Kind of like Java, only Sun didn’t think this big design-wise.)
As it happens, I’ve had very similar (but still different 😉) ideas. I found Clemens Szyperski’s book on component software to be very interesting. It discusses key concepts, and does design reviews of things like OpenDoc and BlackBox. Good luck.
Pierre is right in his comment about API.
Basically, your OS kernel is performing the “glue” function, replacing pipes in Unix.
The pipe is the most primitive IPC mechanism after file locking.
Any serious application has a modular design; you are now proposing to keep only the modules, hoping it won’t break applications. Well, these modules exchange data by some protocol the designers agreed upon, and it’s much more complex than MIME. The complexity of the data exchange protocol eliminates duplicating the code that converts data to some standard and back, and speeds up the whole application itself.
This modular design will produce slow and bloated applications with limited functionality and inefficient code. BeOS uses MIME for file manipulation, but its BMessages between modules are not MIME.
If I were to write an app for network monitoring, I would need several modules: a trap listener, a MIB compiler, an event correlation engine, etc. Let’s imagine that some of these modules are around and I need to write my own modules for the ones that are missing. And if, for example, the MIB compiler module doesn’t satisfy me, I create my own. Now there are two different MIB compiler modules available for everyone, with different functionality and different sets of bugs. Which module would people use? None; they would create their own, just as I would.
This sounds like something you implement on top of an OS, not in the OS. The kernel needs only to handle things like passing memory to various applications in userland.
If you wanted, you could write this on top of the Linux kernel.
I can only imagine the API-specification hell that you’d have to go through to cover all bases in such an extremely modular architecture.
If someone thinks outside of THAT box, they’re toast, basically.
> I believe that many changes will have to be done in the
> kernel of such an existing OS to have it working the way
> described in the article. It is possible, but it won’t
> be as “pure” when trying to implement it on top of an
> existing OS. It will hit limitations that they wouldn’t
> naturally be there, if the whole architecture was
> thought out and implemented from scratch.
First of all, let me point you to AmigaDE, the way it was originally supposed to be and the way it will probably be on the desktop. All the concepts you’ve exposed are present there, and it reuses the TAO/Elate operating system.
Then, the AROS team, and specifically me, had exactly those ideas about a year ago, and we slowly started designing the whole architecture. Again, let me tell you that it could run on top of the current AROS, perfectly and transparently integrating with it, in a backward-compatible way.
Then, there’s also Nemesis (http://nemesis.sf.net), which inspired me a lot in the design of CABOOM (Component Architecture Based upon an Object Oriented Model, the new AROS component model system, which will implement exactly what you thought of).
Forgot to say that AROS won’t use MIME types internally, and not even registry-like databases and all that kind of stuff. It will be a truly dynamic system with nothing decided at installation or compilation time that cannot be changed at runtime. Nonetheless, programming in this environment will be as safe as programming in C++, providing type safety at compile time even in C, with some transparent magic from the standard preprocessor.
You have a VERY serious problem: the interface.
-> you have a MIME type, a viewer, and 500 options for operations on that file
-> AUTOMATICALLY design a user interface for this… good luck!
Now don’t take this the wrong way. I think it is very interesting, and could be done very well, but I truly fear for the UI.
Also, you need the cooperation of each and every module writer for this to work decently (nobody writing a viewer that only lets you pick its own modules…), and with the recent demonstration of spyware/adware programs by very big software makers… I don’t see this happening.
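The 500-options problem is real, but the self-describing module headers the article proposes could at least let the shell group operations automatically rather than presenting a flat list. A toy sketch, in Python purely for readability; the header fields and module names are invented:

```python
from collections import defaultdict

# Each module's header describes itself: which MIME types it accepts
# and which menu category its operation belongs in.
modules = [
    {"name": "png-save",  "mime": "image/png", "category": "File",      "label": "Save as PNG"},
    {"name": "watermark", "mime": "image/*",   "category": "Filter",    "label": "Watermark"},
    {"name": "resize",    "mime": "image/*",   "category": "Transform", "label": "Resize"},
]

def mime_matches(pattern, mime):
    """Match 'image/*' style patterns against a concrete MIME type."""
    major, _, minor = pattern.partition("/")
    return mime.startswith(major + "/") and minor in ("*", mime.split("/", 1)[1])

def build_menu(mime):
    """Group every module that accepts `mime` by its declared category."""
    menu = defaultdict(list)
    for mod in modules:
        if mime_matches(mod["mime"], mime):
            menu[mod["category"]].append(mod["label"])
    return dict(menu)

print(build_menu("image/png"))
```

Grouping by declared category is only a partial answer, of course; it tames the list but doesn’t design a good workflow by itself.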
I’ve already started implementing a hobby OS based on
similar ideas. In this system, there are only modules:
You won’t really find an OS kernel, drivers, apps.
Of course, some modules are “more important” than others
and can thus be seen as something like a kernel, but in
fact they are only modules like others.
I even go one step further: my system offers many
features known from OO programming for modules:
modules can be instantiated, derived (single and multiple inheritance),
there’s polymorphism, abstract functions / modules, …
Modules are strictly separated from each other; they can
only communicate through special “meta functions” (offered
as int handlers): with those, another module’s functions
can be called, either the normal way or in parallel, as
a new process. That simplifies the use of parallelism.
Modules can easily use monitors to be protected from
problems that arise from parallelism.
I also use “events” which are similar to Qt’s signals & slots
for a powerful and easy way of communication.
I’ve already written a document describing the system in
more detail; you can find it along with the sources at:
Your JIT-linking technology is simple and elegant; it solves one of the problems I had in mind with this kind of system.
I guess JIT-Linking is part of your file system?
What happens at the moment is that at build time, there is a linker that links all of the cells (modules) together; it’s sorta like a kernel, but there is really nothing in there except the cells, all one after the other, in a core file. What will happen in the future is that the bootloader will do this for us. You tell the bootloader which device to mount as root, and it will look for something like /conf/cells or similar, which gives it a list of cells to link; it links them all together, initializes them, and gives the kernel-less code control of the computer.
We haven’t used the name JiT in a while, mainly because people confused it with JiT compiling, which it is not. It’s just finding which global functions a cell/app needs and correcting the pointers, so that it can make direct calls. The only really tricky part is linking the linker cell. Heh.
I hope I explained this properly; I’m not the one who coded that part of the system.
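A rough sketch of that pointer-correction idea, assuming a global table keyed by numeric function IDs (the IDs, names, and `Cell` shape here are invented for illustration; the real system patches machine code, not Python dictionaries):

```python
globals_table = {}            # function ID -> callable, filled as cells load

def export(func_id):
    """Register a cell's global function under a numeric ID."""
    def register(fn):
        globals_table[func_id] = fn
        return fn
    return register

class Cell:
    def __init__(self, needs):
        self.needs = needs        # function IDs this cell imports
        self.resolved = {}        # slots patched to direct references

    def link(self):
        # "JiT-linking": look each ID up once and patch the call slot,
        # so later calls go straight through with no search.
        for func_id in self.needs:
            self.resolved[func_id] = globals_table[func_id]

@export(0x01)
def add(a, b):
    return a + b

cell = Cell(needs=[0x01])
cell.link()
print(cell.resolved[0x01](2, 3))   # 5: a direct call after linking
```

The one-time lookup cost at link time is what buys the zero-overhead calls afterwards.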
This was the vision for OS/2 Warp as set out in the latest (and last, I think) IBM CUA definition. With multithreading, the Workplace Shell built on an OO approach (SOM, if you remember that), and context-sensitive menus based on the object’s type, it was all there, so you could right-click on your simple text file and choose to edit/delete/… There were two problems which apply just as much today: 1) no one wrote the applets/viewers/editors for all the data types (ISVs just kept on with monolithic apps), and 2) users care about compatibility, which today means Office and PDFs and so on, so you need to get MS, Adobe and the rest on board. Funnily enough, they prefer to carry on selling their monolithic apps!
Your vision is appealing; I just think there are some business realities which prevent it being useful to the mass of end-users.
My company is developing a Java application server and developer environment based on some of the same ideas.
Check it out and send me any comments.
..maybe so.. but this sort of idea would give a big boost to OSS, as open-source code which previously would only become a component of a bigger app (like AbiWord’s spellchecker, for instance) could compete directly with the spellchecker of the equivalent of MS Word or another writing package.
Choose an HTML rendering engine for your browser, etc.
The theory is great.
That’s pretty much the original concept behind Smalltalk: there were no “applications,” only extensions to the OS to give it new capabilities.
The Squeak project is rather like this. It can be used as an OS or hosted on an existing OS.
I must say that I fully agree with you on this one. This is something that I’ve been envisioning for quite some time, and have only not put into practice because my coding skill is rather limiting in this matter. I see this as being broken down into three different types of “modules”:
1) Access. These modules are present for every type of MIME type supported by the system. They can offer RW, RO, or WO abilities for a particular MIME type. This is *very* akin to BeOS’s translators. The presence of one allows global access. So you can have one module that does RW operations (for example, text can be RW), one that does RO operations (MP3 decoding), and another that does only WO operations (Ogg encoding).
2) Interface. These modules allow the user to interface and interact with the data. In essence, they are a front-end to the access modules. Say you have a Sound interface; it will have access to audio/* MIME types, giving it universal access to all types of sound files you have on your system. When you want to open a sound file, this interface module scans all RW and RO modules that implement any audio MIME type. Then you can do what you like with your plug-in modules (see below), and then save the file (or data) by having the interface module scan all access modules that implement an RW or WO interface.
3) Plug-in. These modules are plug-ins for the interface modules. The interface module calls these modules to perform some type of manipulation of the data. (I haven’t mentally worked out whether the interface modules perform basic editing themselves, i.e. text entry, or whether this is handled by a plug-in module; if it’s a plug-in module, that seems to make the system inherently complex.)
An interface can be as specific or general as the developer wants. So one text/* interface module would solely handle plain text, while another interface module would also implement (or render, if you will) styled text, via RTF, (X)HTML, Microsoft .doc, etc. Or perhaps one interface module handles text/* but presents the display differently (via a plug-in) to allow syntax highlighting for your development language of choice.
This n-tiered approach also makes some things easier to facilitate. What if you only wanted to batch-resize a collection of images to 50% of their original size? There would be an interface (perhaps a CLI, or built into the file manager) that would access each file via an RW/RO module, apply the plug-in module, then save the file to the same location with an RW/WO module, in one fell swoop.
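The three roles above can be chained for that batch-resize case roughly like this (everything here is invented for illustration: the module table, the fake codec lambdas, and the bitmap-as-dict):

```python
# Access modules: MIME type -> (capabilities, decode, encode).
# The lambdas stand in for real codecs; a bitmap is just a dict here.
access_modules = {
    "image/png": ("RW", lambda raw: dict(raw), lambda img: img),
}

def resize_50(img):
    """Plug-in module: halve the bitmap's dimensions."""
    img["w"] //= 2
    img["h"] //= 2
    return img

def batch(files, plugin):
    """Interface role: decode via an access module with read ability,
    apply the plug-in, re-encode via write ability, in one sweep."""
    out = []
    for mime, raw in files:
        caps, decode, encode = access_modules[mime]
        assert "R" in caps and "W" in caps, "need read and write access"
        out.append(encode(plugin(decode(raw))))
    return out

print(batch([("image/png", {"w": 800, "h": 600})], resize_50))
```

Note that the interface role never touches the file format itself; it only wires access modules to plug-ins, which is what keeps each piece small.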
I have taken this idea a little further in concept, applying it to device drivers and input. Internally, all data of a certain type would be handled as a single canonical type. All sound files would be internally manipulated as PCM wave data, for instance. Then any access module for RW or RO access has to convert the input format to PCM, and any access module implementing RW or WO has to convert PCM to the output format. Likewise with an XML schema for styled text, bitmaps for raster images, perhaps SVG for vector images, etc.
This approach can also be applied to encodings, where internally everything is UTF-16, and there are RW/RO/WO access modules for each encoding (ASCII, CP-1252, UTF-8, UTF-7, MacRoman, etc.). Device drivers would merely be modules that interface with the system in some way. Interface modules would interface with the video driver to present data on the screen. (Perhaps there could even be middleware in between that drives the UI, allowing a GUI, CLI, and possibly a TUI (a text user interface; think curses-based, like a GUI but implemented on a console) to be presented to the user. This would allow remote use to be easily implemented, via telnet, rlogin, ssh, etc.)
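The encoding case makes the payoff easy to see: with a canonical internal form, n encodings need n converter pairs instead of n*n pairwise converters. A toy sketch, with Python’s built-in codecs standing in for the access modules:

```python
# Each "access module" only converts to/from the internal form (UTF-16,
# as proposed above), never directly to another external encoding.
converters = {
    # encoding -> (to_internal, from_internal)
    "utf-8": (lambda b: b.decode("utf-8").encode("utf-16-le"),
              lambda i: i.decode("utf-16-le").encode("utf-8")),
    "ascii": (lambda b: b.decode("ascii").encode("utf-16-le"),
              lambda i: i.decode("utf-16-le").encode("ascii")),
}

def transcode(data, src, dst):
    """Route any-to-any conversion through the internal representation."""
    to_internal, _ = converters[src]
    _, from_internal = converters[dst]
    return from_internal(to_internal(data))

print(transcode(b"hello", "ascii", "utf-8"))   # b'hello'
```

Adding a new encoding means writing exactly one new converter pair; every existing encoding can immediately transcode to and from it.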
Also, input can be abstracted in this manner, allowing a layered, n-tiered approach to input as well. There would be the keyboard driver, then the keymap, then possibly an IME, and finally the result would be input to the interface (or plug-in) module. (I believe this is pretty much the way modern operating systems work: your keymap (US 101-key, US 104-key, Dvorak, etc.) is applied first, and any IME (MSIME2000 for Japanese input on Windows 2000) then converts the keymap’s output. This would allow you to input Japanese characters while your keyboard lets you type with a Dvorak layout.) Interfaces would also allow (possibly handled globally by the system/kernel itself) font syncing with the selected keymap/IME: if a Japanese or Vietnamese keymap/IME were selected, only fonts covering the full gamut of those characters would be available to the user (assuming a GUI and an interface module that supports selection of multiple fonts).
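That layering can be sketched as a simple chain of transforms; the keymap and IME tables below are toy stand-ins, not real layouts or conversion rules:

```python
# scancode -> character (a tiny invented subset, not a real Dvorak table)
toy_keymap = {30: "a", 31: "o", 32: "e"}

def toy_ime(text):
    """Stand-in IME stage: one hard-coded romaji-to-kana rule."""
    return text.replace("ae", "アエ")

def input_pipeline(scancodes, keymap, ime=None):
    """Keyboard driver output -> keymap -> optional IME -> interface module."""
    text = "".join(keymap[s] for s in scancodes)
    return ime(text) if ime else text

print(input_pipeline([30, 32], toy_keymap))           # 'ae'
print(input_pipeline([30, 32], toy_keymap, toy_ime))  # 'アエ'
```

Each stage is swappable without the others knowing, which is exactly the property the n-tiered proposal is after.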
These are just a few thoughts I’ve had on the matter that have been swirling around my mind for a while, and it’s nice to know that there are other people out there that share similar ideas. Thanks,
For the best OS, first we need revolutionary hardware to run it on. Stock beige boxes will not work. The Tablet PC spec really is the future of computing, in my opinion. Most computer users (and many who aren’t users yet!) need a simpler device. This explains the rising popularity of Palm and Pocket PC. Computers need to be available where users want them, just like this other thing called paper. A book can be picked up, shared, stored, carried around, and used in many unusual positions. A simple yellow pad on a clipboard is an extremely powerful data device: it’s simple, shareable, easily accessible, extremely portable. The user interface of a book is really simple: turn the pages by the edges or corners. Quickly finding information relies on humans’ innate ability for 3D perception and photographic-style memory. Example: grab a phone book or dictionary and look up “mechanics”. After you found the phone book, how long did that take? Most people couldn’t access that information on a computer in an open window in that time. Another example: grab your notepad. Write a note to remember this page. Now write one on your computer. Come back tomorrow and try to find both again. Look at delivery media. A typical hardcover is about 8″ x 10″, a notepad 8.5″ x 12″. These are standards set by 100 years of experience and user preference. Try using wildly different sizes of paper. It’s not the same; not by a mile. On a normal desk, even 6″ x 9″ notepaper is disruptive to workflow due to the standardization of the office, and so are different colored paper, red ink, and pencil.
Computing is at an evolutionary leap. Vast changes are needed to the way we use computers and how they work, in order to make the leap from the desk into everyday life. Jef Raskin’s stuff is a good starting point, but more radical changes are needed. Look at the most populous computing devices out there: video games. They have simple controls for every screen action possible. I’ll grant they leave out typing, but by now it really should be optional on a simple machine. Also, the mouse should die. It’s distracting and awkward. I have fingers; they offer infinitely more control than a stupid mouse. FingerWorks is an excellent example of the alternative.
Any revolutionary OS should throw off all the chains of the past. It should incorporate AI from the very basic level all the way up to the UI. The idea of modules is a good start. BeOS was an excellent example, with changeable drivers, replicants, and translators. BeOS also had a database-like file system that allowed it to remember simple user preferences for files, and allowed direct searching of the data for information. The final revolutionary idea was the BMessage. Now adapt web-based programming to this OS. All of the BMessages should be “push” APIs rather than “pull” ones. The BMessage server should follow an Apache model of sorts, passing the message to another running server which would execute the command and pass the requested data along to the next operation with another BMessage. This would allow for simple module changes. All API errors would be caught by the message server and handled according to standard, defined programming. Rather than getting a “not-found/shut-down-program” error, the program would respond with a controlled “you-are-missing-x” error.
The missing part is the user interface. Designing every possible combination of useful screens for a task, using all the modules, is impossible. The whole system needs to be designed so that an AI can monitor every message and “learn” how they are used. Laying the groundwork early, and developing with open source and the collaboration of AIs on the internet, can achieve this far faster than any business entity. The same AI could monitor system health, disk space, and hardware configuration. Eventually the AI would “build” the UI by recalling your instructions for a task; it’s really no different from a dynamic, database-driven web page, but it is an evolutionary leap that will take real inspiration to achieve.
Of course the next step is robotics and automatons. Again, the missing link is understanding how to react to the data presented, finding patterns, slight variations and exceptions, and reacting accordingly. Until we make the next “leap”, computers will continue to be nothing more than glorified word processors. Remember, open source doesn’t need conventional business models, and developing a revolutionary system can only be done outside the system of any current business model. The “golden nuggets” needed for this aren’t out there, even in the massive companies that throw billions at trying to achieve this. Open source can cover far more programmers and ideas in far less time if the initial system is constructed to search for and capture the ideas.
I think you have indeed described the way an OS should work: do away with the monolithic kernel AND applications. However, I don’t think we have solved the reuse/component problem completely. Why are applications not written this way? We do it occasionally with parts (GUI components or graphic format readers and such), but we still prefer to link everything into one binary.
I think besides this idea you need to solve three problems:
1. How to have stability in the system: if you replace/enhance a component, not everything in the system should break (Eiffel’s Design By Contract probably).
2. What if you install something that needs a new version of X (which needs a new version of Y, which needs a new version of Z)? You might end up having to replace your entire OS just by moving to a new PNG component.
3. And the hardest: how to write a reusable component. That’s not solved at all. Often we have unique requirements that existing components don’t handle or were never written to support.
For example, while reading a picture you might want to display it as it loads. But you only have a component that can read; only after that can you display. I think there is an uncountable number of such requirements that make reuse perhaps impossible.
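Point 2 is easy to demonstrate: if every component was built against exact versions of its dependencies, replacing one component dirties everything that transitively depends on it. A sketch with an invented component graph:

```python
# component -> the components it was built against (invented graph)
depends_on = {
    "png":   ["zlib"],
    "image": ["png"],
    "shell": ["image"],
    "sound": ["zlib"],
}

def cascade(changed):
    """Return every component that must be replaced when `changed` is."""
    dirty = {changed}
    grew = True
    while grew:
        grew = False
        for comp, deps in depends_on.items():
            if comp not in dirty and any(d in dirty for d in deps):
                dirty.add(comp)      # built against a replaced dependency
                grew = True
    return dirty

# Swapping out zlib drags in the PNG component, the image interface,
# the shell, and the sound stack: point 2 in action.
print(sorted(cascade("zlib")))
```

Versioned interfaces or contract-checked components (point 1) are precisely attempts to cut this cascade short.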
Without being a software engineer or other such OS expert (I am a telecomms engineer), please allow me to chip in.
In many situations we encounter the idea of the hierarchical model vs. the distributed/flat model in solving problems. Whether we are looking at P2P networking, the communist/capitalist economic models, mesh vs. star telecommunications networks, RISC vs. CISC, or business organisation, the problems and solutions can often be distilled down to the same basic expressions, and the same pros and cons develop from there.
To consider functionally abstracting the sub-tasks of a computer for a minute: the primary abstraction for us here is that the user shouldn’t have to know how the job gets done, just that it gets done correctly and on time. In between are the layers that the developers create, built on the layers that the hardware engineers create, etc.
I understand the suggestion being made is to move the processing environment from an abstraction layer which we can call the “plug-in” level to what has traditionally been the application layer, and with this comes moving the plug-in gluing/coordination function from the application to one of the OS layers. Essentially, it is the removal of one layer from the hierarchy (or a flattening of the system).
Apple basically did the same thing with their system-level Audio Units concept by providing an OS level audio effect plug-in framework as an alternative to the application-level proprietary plug-in interfaces. They did, however, leave the application layer to call the modules and coordinate their execution, giving the benefits of a coherent environment from which to operate (see the anonymous, ‘integrated toolbox’ post above).
Considering what I said above, that these problems are often only manifestations of the one, another example could be the use of private companies in public construction projects. They are more efficient but less economical to employ as they have more overheads and profit to make. The private companies (plumbing, electrical etc.) are the layer that we are considering removing, but we can do this only if the government (the OS) is willing to employ the workers (the functioning code) directly and co-ordinate the workers itself (the coordination role). The effect is a flatter model with less specialisation through everyone having a common employer. (To extend the analogy, Apple did this but by leaving the plug-in coordination to the application layer effectively outsourced the project management).
In many such examples of flatter/flattened systems history shows this proposed idea will find the most problems in coordination and performance.
From what I gather, the proposed idea is to have the one layer with small software modules that perform similar types of processing tasks, called by the user to be part of a larger, more complex task.
It is more complex to coordinate because the further you take this levelling and modularisation, the more entities you create, and these need to be managed with increasingly sophisticated methods.
A practical example: I try to help a friend by having a shot at editing a photograph for him, hit on the best choice and sequence of filters to apply, and tell him “do this, now that… what, you don’t have that module? Get it off the net… I don’t know what version!” The more you apply the distributed/independent-module concept, the more you tie the user into the processes he should be abstracted from.
THIS IS UNLESS we design into the OS, from the start, a framework that includes tight version control, installation registries (gasp!), central repositories and other such design-stage considerations (a stage that the open-source community, as well as many companies, simply does not excel at working through).
Bloatware is the clumsy solution to this – avoid the problem by sticking it all in the installation (see the Kingrocky post “another con”).
But a system also becomes slower with this levelling and modularisation as more and more modules are called (with their associated overheads) to perform what was previously a relatively simple task. (Like a bad programmer who creates an inordinate number of general functions in a program to perform a simple function).
Silicon is becoming faster, so the speed issue will matter less with time, just as cheaper memory has made bloatware more viable over time.
So, in conclusion: the further we take modularisation, the smarter we have to be at managing those modules to achieve the same benefits, and performance will always suffer by a proportional amount. Using abstraction to contextualise the idea, and distilling the problem into imaginable scenarios, I have come up with about a dozen cases which, from my personal experience, indicate that complexity and performance costs grow such that an optimal amount of modularisation will need to be found; only when weighed against the benefits will your idea be best capitalised upon.
Damn, it’s good to see people thinking up this stuff.
(BTW, Sun obviously did the opposite and added a hierarchy when it developed Java, by creating a new layer of abstraction; sometimes solutions come from going either way.)
I am sure that Douglas Adams (the author of The Hitch Hiker’s Guide to the Galaxy) had this idea in MacUser magazine a while ago.
Your idea isn’t that strange, but it’s not a new one! I’m a member of a team working on an OS kernel which supports such modules. We’ve been working on it for six years now, and although it’s rather unfinished, our studies and simulations show that it’s possible, reliable and (reasonably) fast!
But it’s a complex problem to use this architecture to its full potential. Any help is welcome.
Thinking I was the only IT user tired of the current paradigm, this thread hit me like a bomb. I pray to God people are even considering it.
The whole process of getting this great new OS can also be modularized. You don’t have to develop kernel, OS and UI all at once.
Gee, what an engineering adventure; it’s a shame I don’t have a spare life to waste.
Richard Fillion described a part of the system, the boot sequence. You asked if it was part of the file system; well, it isn’t. A file system interface could be designed easily (all it would have to do is create a list of function ID codes with their available offsets), but right now there’s no ‘fs’ interface to it.
What we have at run-time is a dynamic linker. When a file is loaded, a list of dependencies is read, those dependencies are resolved and every point in the file needing recomputation is processed. Thus, in the sources you have something like:
‘externfunc’ expands into an ‘extern’ mnemonic and a ‘call’ instruction, so it’s just a normal ‘call’ CPU instruction that gives access to the external module, which makes it extremely fast after linking.
Anyway, I could probably go on about our system for some time; just join our mailing list or #uuu on irc.oftc.net.
A new, *better* OS should result from thinking about what information is.
This is the key, IMHO.
see : <http://www.10191.com/inferno/be++/index.html>
Dunno if anyone has pointed this out, but this sure sounds a lot like Jef Raskin’s Humane Interface. http://humane.sourceforge.net/jefweb-compiled/index.html