Because everything is stored as an object, there needs to be a simple editor, something very similar to a text editor, that allows you to view and manipulate objects. It should work on any object, assuming you have permission to view and edit it. It won't, of course, be able to view or edit the parts of an object that are encrypted, unless you have the key.
For example, many programs have a configuration file today; under this system they would have a configuration object instead, and you would change the configuration by editing that object, just as you edit a configuration file with a text editor now.
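As a rough illustration of the idea (the class and helper names here are my own invention, not part of any design above), a generic object editor only needs two operations: list an object's fields, and change one of them in place.

```python
# Hypothetical sketch: a generic "object editor" that can view and modify
# any object's fields, the way a text editor works on any file.
class Config:
    """Stand-in for a program's configuration object."""
    def __init__(self):
        self.volume = 7
        self.theme = "dark"

def view(obj):
    """List an object's editable fields and their current values."""
    return dict(vars(obj))

def edit(obj, field, value):
    """Change one field in place, like editing a line in a config file."""
    if not hasattr(obj, field):
        raise AttributeError(f"{field!r} is not a field of {type(obj).__name__}")
    setattr(obj, field, value)

cfg = Config()
edit(cfg, "theme", "light")
print(view(cfg))  # {'volume': 7, 'theme': 'light'}
```

A real editor would also have to consult the object's class definition to know which fields are encrypted and skip those without the key.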
And the OS Has Continuity Too
I think the whole operating system should be able to do what the apps do: it should start up with more saved state, essentially restoring itself to where it was when you turned it off.
My memory is so bad that I pretty much have to leave my machines running all the time. That has been one nice thing about Linux: I have been able to leave it running for months, and when I come back it's just as I left it. The bad thing is when the power fails. It usually takes me an hour to get back some semblance of what I was doing, and it is never exactly everything I had going on. I just can't see why it's not possible to have the machine come back in almost the same state it was in when the power was lost. Obviously, if you are in the middle of typing something, some of it is going to be lost. But why do we have to start over from nothing?
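One simple way to get most of this, sketched here purely as an illustration and not as anything proposed above, is to checkpoint the session state to disk periodically, so a power failure loses at most the work since the last snapshot.

```python
# Illustrative sketch: periodic checkpointing of session state with pickle.
# The state dictionary and file name are made up for the example.
import os
import pickle
import tempfile

def checkpoint(state, path):
    # Write to a temporary file and rename it into place, so a crash
    # mid-write never corrupts the previous snapshot.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)

def restore(path, default=None):
    """Return the last snapshot, or the default if none exists yet."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

path = os.path.join(tempfile.mkdtemp(), "session.pkl")
checkpoint({"open_files": ["notes.txt"], "cursor": 42}, path)
print(restore(path))  # {'open_files': ['notes.txt'], 'cursor': 42}
```

In an object-based system, where everything already lives as objects, snapshotting "everything that was going on" becomes much more natural than it is today.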
Modules and Software On Demand
One of the problems I've had on several occasions in Linux is the installation of software. Often I have wanted to install a piece of software to do one function, but when I went to install it, say via RPM, it had a dependency list a mile long. Sure, "apt" and "yum" can handle the dependencies in most cases, but it still bugs me to have all that additional stuff, which I never use, on my hard drive wasting space.
As an example, the other day I had done a minimal installation of SuSE on a machine (because it had very little disk space). I wanted to use K3b just to create data CDs and DVDs. However, when I tried to install it, it wanted MP3 libraries, Ogg Vorbis libraries, FLAC libraries; the list was quite long. I did not need any of that on this machine; it was completely superfluous.
I think software should be broken into smaller modules that work independently. Furthermore, only a minimal amount of software should be installed on the system to begin with. Then, when you need to perform a certain task, the necessary software is installed automatically (with your authorization, of course), and ONLY the software you actually require gets installed.
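The flow above can be sketched in a few lines. Everything here, the task registry, the module names, the consent prompt, is a hypothetical stand-in for a real package manager; the point is only the shape of install-on-demand.

```python
# Toy sketch of install-on-demand. Nothing is installed up front; a module
# is fetched (with the user's consent) only the first time a task needs it.
AVAILABLE = {"burn_data_cd": "k3b-core"}   # task -> module providing it (made up)
installed = set()                          # starts empty: minimal system

def authorize(module):
    # Stand-in for asking the user; a real system would prompt here.
    return True

def require(task):
    """Ensure the module for this task is present, installing it if needed."""
    module = AVAILABLE[task]
    if module not in installed and authorize(module):
        installed.add(module)              # a real system would download it
    return module

require("burn_data_cd")
print(sorted(installed))  # ['k3b-core']
```

The key property is that only "k3b-core" lands on disk; the MP3, Ogg, and FLAC modules from the example above would each be separate entries that never get pulled in unless some task actually asks for them.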
I alluded to this before, but I want to mention it again. I would like to develop a mechanism for independent modules to communicate with each other, similar to the way in Unix/Linux you can hook standard output to standard input in a pipe, but with other types of interfaces (audio/video/whatever) that can be connected at run time to perform some new function. For example, if you had a filter plug-in, instead of only being able to use that plug-in inside the audio editor, you could use it at the OS level and plug it between the output of some device and the mixer. Or perhaps you could plug it into the mixer itself.
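Here is one way the idea might look, a sketch only, with Python generators standing in for the modules: each "module" consumes one stream and yields another, and they are wired together at run time, the way shell pipes join stdout to stdin.

```python
# Sketch: independent modules connected at run time like a pipe, but over
# an audio stream instead of text. All names are illustrative.
def source():
    """Stand-in for the output of some audio device."""
    for sample in [0.1, 0.5, 0.9]:
        yield sample

def gain_filter(stream, gain):
    """The reusable 'filter plug-in', usable anywhere a stream flows."""
    for sample in stream:
        yield sample * gain

def mixer(stream):
    """Stand-in for the OS mixer consuming the stream."""
    return [round(s, 2) for s in stream]

# Plug the filter between the device and the mixer at run time:
print(mixer(gain_filter(source(), 2.0)))  # [0.2, 1.0, 1.8]
```

Because the filter only cares about the stream interface, not about which audio editor hosts it, the same plug-in works inside an editor or between any two endpoints at the OS level.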
Can't Change Everything
While it would be tempting to try to change everything about a computer to see if it could be improved upon, I have decided there are limits to what makes sense.
The hardware is, well, still "hard". It would be expensive to change, and doing so would make it impossible for anyone without the special hardware to use the software, making the whole point of open source software moot. So we have to stick with the existing hardware.
I feel like the same holds true for networking. It doesn't make sense to use something other than TCP/IP; it seems to be flexible enough to support everything I have envisioned, and it lets us communicate with the rest of the world. That can only be a good thing.
One of the problems I have not decided how to handle yet is the endianness of different machines and their data. Since the machine has access to the class definition of each object, it could automatically convert data to match the current processor.
The dilemma is which byte order the data should be stored in. Or should it always be stored on disk in the format of the CPU the disk is connected to? It is tempting to always store the information in one format, say big-endian, to match the network byte order. But since the most common machines (x86) are little-endian, they would always have to take the conversion hit.
Another alternative would be to do as is done now and store the data on the hard drive in the native order for the CPU. This has speed benefits: the data only needs to be byte-swapped when it goes over the network.
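The conversion itself is mechanical once the field layout is known. In this sketch a struct format string stands in for the class definition mentioned above; the field layout is invented for the example.

```python
# Sketch: because the system knows each object's field layout (a struct
# format string here, standing in for the class definition), it can
# byte-swap automatically when data crosses machine boundaries.
import struct

FIELDS = "ih"  # assumed layout: one int32 followed by one int16

def store(values):
    # Native-order-on-disk alternative: here, explicit little-endian
    # as it would be on an x86 machine.
    return struct.pack("<" + FIELDS, *values)

def to_network(blob):
    # Re-pack in big-endian (network byte order) only when sending.
    return struct.pack(">" + FIELDS, *struct.unpack("<" + FIELDS, blob))

blob = store((1, 2))
print(struct.unpack(">" + FIELDS, to_network(blob)))  # (1, 2)
```

With the store-native approach, `to_network` is the only place a swap ever happens, and a little-endian machine talking to another little-endian machine never pays for it at all.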