What will future operating systems look like? How will the user interface, the inner workings, the security policies and the networking interact? In any case, innovation is the key.
If you visit OSNews once in a while, you will of course know everything about the present and future of operating systems. Somewhere between 2005 and 2007, Microsoft will release Windows codename Longhorn, and until that happens Gnome and KDE need to close the gap between themselves and Windows XP. And if everything goes well, they will implement some Longhorn features as well. On the other side, we have the innovative Mac OS X: the user-friendliest computer system on earth, built on UNIX, with OpenGL-accelerated screen drawing.
Wait. Read that again, and think for yourself: how much innovation has there been, and how much will there be? Let’s start with Gnome and KDE. They are mainly copying the user interface of Windows. Yes, Gnome places the application menu at the top of the screen instead of the bottom, and KDE has invented KIO. But almost everything else is plain copying; KDE even puts the window buttons in exactly the same place as Windows. There is a reason for this, and a quite simple one: most people today work with Windows, and the developers fear that a desktop environment that behaves radically differently would scare people into sticking with Windows.
But how is Windows doing? Is Windows innovative? This page says Windows is innovating, and that Windows is to the MacOS what Java is to C++. That’s not entirely true: C++ was a bad fix for C, and Java cleaned everything up. MacOS, on the other hand, was a clean, new implementation of a graphical OS, while Windows was just a way to patch up DOS. From that, we can say that Windows is to MacOS as C++ would have been had it been invented as a reaction to Java. And when we look a bit closer: what has Microsoft actually invented? They copied the overlapping windows. The Explorer is a copy of the Finder, SMB is a copy of AppleTalk, Word was a reaction to WordPerfect, and Internet Explorer is just an improved version of NCSA Mosaic. And there is a reason Windows does not really innovate: Microsoft doesn’t want to lose its market share, so it takes care not to scare users. If the Windows interface changed radically, users could just as well switch to Linux as upgrade to the new Windows version.
You might have noticed that Windows borrowed quite a few things from Apple. So, is Apple innovating? In 1984, they were. The Macintosh was a nice new computer: one of the first (if not the first) home computers that was no longer character-based and had the mouse as a mandatory input device. Shortly thereafter, they invented AppleTalk, which made networking computers as easy as plugging in the network cable. After that, only minor system updates came out until Mac OS X was released. It was called innovative. But what does it do? It’s effectively a MacOS-like GUI on a UNIX core, so in fact it does nothing more than combine two technologies, both decades old. That has a reason, too: Apple’s market share is small, and this way they can keep their former customers while also attracting new ones: their OS is now built on the “proven reliability” of UNIX, thanks to it being 30 years old. Apparently, they have not read the Unix-Haters Handbook, from which it seems UNIX was rather unstable even 10 years ago.
Does that mean the current operating systems are the best; that better is simply impossible? Most likely not: the most logical reason for the lack of innovation is the fear of losing market share by inventing something better, er, different. So here is my proposal: if you build an entirely new operating system, why not make it different from the existing ones, so that it can try out ideas that might be better than the current ones? It might even attract users, namely those who want a different operating system for a change, one with an identity. In the rest of this article, I will lay out such a proposal. I’ll have to see whether I find time to work on an actual implementation, but thanks to the nature of the proposal, it luckily isn’t necessary to start with the bootloader 🙂
1 Virtual machine
Nowadays, new processors are being introduced: the Itanium and the AMD64. To take advantage of them, the operating system and all applications running on it need at least to be recompiled, and parts of them need to be rewritten. That is not very practical, something Sun realized when it invented Java. Microsoft has also seen this and started the .NET project. Both implement a virtual machine that runs binaries specially adapted to it. The advantage is that the same binaries always run on the virtual machine, no matter what the host OS or the hardware is.
As this is very practical, I will take such a virtual machine (VM for short) as the basis of the OS idea. Not very innovative, I know, but rather practical. It makes the OS and its applications completely hardware-independent, and it also has the advantage that the VM can first be implemented on top of another OS, so that work can immediately start on the VM and the OS itself without needing to code a boot loader and extensive hardware support first.
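At its core, such a VM is just an interpreter loop over a hardware-neutral instruction set. Here is a minimal sketch in Python; the four-opcode stack machine is entirely invented for illustration and says nothing about what the real instruction set would look like:

```python
# Minimal sketch of a stack-based VM. The opcodes (PUSH/ADD/MUL) are
# invented here purely to illustrate the idea of hardware-neutral binaries.

def run(program):
    """Interpret a list of (opcode, operand) pairs on a stack machine."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack

# The same "binary" runs unchanged on any host OS and CPU the VM is ported to:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
print(run(program))  # [20]
```

The point is that porting the whole system to the Itanium or AMD64 means porting only this loop, not every application.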
2 The user interface
The user interface should be friendly and practical, for the newbie as well as for the experienced computer user. Therefore, no POSIX compatibility is needed and no GNU utilities need to be ported. And why should they be? In this modern world, we want to use more than text. We want fonts, webpages, Flash animations, music, pictures and movies. The command line is not suitable for them, so a graphical interface (GI) is really necessary.
2.1 The general layout
However, this does not mean copying the GUIs of Windows or MacOS, which can be rather confusing. For example, most GUIs have overlapping windows, which are confusing (the Xerox Star designers already knew this and therefore didn’t allow windows to overlap). The confusion works like this: imagine you have two windows, say a maximized Outlook Express and a normal New Message window on top. When you accidentally click the Outlook Express window, it looks like the message you were typing is lost. Of course, it’s just hidden behind the window you just clicked, but that is not obvious. The solution is to take the idea of the original MacOS even further: not only hide other applications when you activate one, but make all windows maximized. That solves the overlapping-window problem and does away with title bars taking precious screen space.
Now you will probably notice that drag and drop is no longer possible, at least not between applications or between windows. That is not practical, because it is a much more visual way of moving objects than the copy-paste method Windows introduced. Therefore, the GI should offer a split-screen mode in which two windows can be visible next to each other.
2.2 Dialog windows
Of course, configuration and property windows don’t need the entire screen. They can therefore appear in smaller, document-modal (see below) windows. If you open one, the full-screen view behind it should be grayed out, so that the window containing the things you can do appears lit up and really gets your attention.
The appearance of MacOS 8/9 also has this effect, but it is far more confusing because it makes no distinction between windows that can’t be activated because of a dialog (as in my proposal) and windows that are simply inactive and can be switched to with a single mouse click.
2.3 The widgets themselves
Nowadays everybody points out that Gnome should be used instead of KDE because it looks more polished, that you should use MacOS X because Aqua looks so cool, and that Longhorn is even better because it provides hardware-accelerated control drawing. Sounds great? It actually isn’t. Those fancy user interfaces waste precious CPU and GPU cycles, making your computer slower than it needs to be, thus making you work slower. On top of that, you are distracted from your work, so your productivity drops even further.
Looking at future developments, however, it seems rather practical to have a resolution-independent GUI. That way, applications have no problems running on low-res devices such as a palmtop or a TV, while still being able to take advantage of high-res computer screens. To have something to brag about, the VM graphics system should offer nested canvases that remember their content, so that one could say that “the new OS has a GUI in which each control is drawn with hardware acceleration”.
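Resolution independence essentially means specifying layout in physical units and converting to device pixels only at draw time. A tiny sketch of that conversion, with illustrative (assumed) device DPI values:

```python
# Sketch of resolution-independent layout: widgets are specified in
# millimetres and converted to device pixels only at draw time.
# The device list and DPI values below are illustrative assumptions.

def mm_to_px(mm, dpi):
    """Convert millimetres to device pixels (25.4 mm per inch)."""
    return round(mm / 25.4 * dpi)

button_width_mm = 25  # the same logical size on every device

for device, dpi in [("palmtop", 96), ("TV", 72), ("desktop", 200)]:
    print(device, mm_to_px(button_width_mm, dpi))
```

A button stays 25 mm wide everywhere; only the number of pixels spent on it changes with the screen.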
3 The document model
After having done away with the windows, we should also get rid of applications, because they too are confusing. On Windows, you can open documents in two ways: from an empty application window and from the Explorer. The same applies to creating new documents, but strangely enough not to saving them. On the MacOS it is the same, and even more confusing: an application can be active and running without displaying any windows. You see the desktop and the Finder, but the menu bar is different because you are effectively running another application. Those are reasons enough to leave the application model behind.
Instead, the only thing the user should see are documents. Nothing more and nothing less. When the user clicks a document, it is opened, and when he closes it, the document is closed. What software is used to accomplish this should not be visible in any way. The way this can be implemented is by making applications effectively applets (like KParts or OLE objects). When a document is opened, a new full-screen view is created and the document is embedded into it, along with the application.
The advantage of the applet model over applications is that no separate logic is needed to provide document embedding – the parent document’s applet just embeds another applet, and that applet cannot tell whether it runs full-screen or embedded.
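The key property is that rendering is the same call whether an applet is the full-screen root or embedded in a parent. A hypothetical sketch (all class and method names are invented, not an existing API):

```python
# Hypothetical sketch of the applet model: every applet renders into a
# canvas, and a host applet embeds children simply by asking them to
# render too. An applet cannot tell whether it is the full-screen root
# or embedded in another document.

class Applet:
    def render(self, canvas):
        raise NotImplementedError

class TextApplet(Applet):
    def __init__(self, text):
        self.text = text
    def render(self, canvas):
        canvas.append(self.text)

class DocumentApplet(Applet):
    """A host applet that embeds other applets; no special embedding logic."""
    def __init__(self, children):
        self.children = children
    def render(self, canvas):
        for child in self.children:
            child.render(canvas)   # same call whether root or embedded

doc = DocumentApplet([TextApplet("title"), DocumentApplet([TextApplet("body")])])
screen = []
doc.render(screen)   # opening a document = rendering its root applet
print(screen)        # ['title', 'body']
```

Note that `DocumentApplet` embeds another `DocumentApplet` with the exact same code path it uses for a leaf applet – that is the “no separate logic” claim made concrete.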
For normal documents, data will come from a file, either on disk or embedded in another document. That’s also the way it works on Windows, MacOS and KDE. However, sometimes that is not practical. Imagine that you want to make a chart applet. You will probably want to link its data to the spreadsheet it is embedded in, and a static file obviously cannot provide that. Therefore, we need to create the GUI equivalent of pipes, so that you can use (a selection of) your spreadsheet data as the input of the chart applet. That allows powerful systems: functions like bibliographies and tables of contents can be placed in separate applets that are embedded in the document they are used in.
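The essence of such a GUI pipe is that the chart applet holds a live connection to a cell selection rather than a copy of the data. A sketch under invented names (no real spreadsheet API is implied):

```python
# Sketch of a "GUI pipe": a chart applet takes a live selection of
# spreadsheet cells as its input instead of a file. All names invented.

class Spreadsheet:
    def __init__(self):
        self.cells = {}
    def set(self, ref, value):
        self.cells[ref] = value
    def selection(self, refs):
        # The pipe: a callable that always yields the *current* values.
        return lambda: [self.cells[r] for r in refs]

def chart_applet(pipe):
    data = pipe()                  # pull fresh data through the pipe
    return "bar chart of " + ", ".join(str(v) for v in data)

sheet = Spreadsheet()
sheet.set("A1", 10); sheet.set("A2", 30)
pipe = sheet.selection(["A1", "A2"])
print(chart_applet(pipe))          # bar chart of 10, 30
sheet.set("A2", 50)                # edit the spreadsheet...
print(chart_applet(pipe))          # ...and the chart sees the new data
```

Edit a cell and redraw: the chart follows, with no copy-paste step in between – exactly what a Unix pipe does for byte streams, transplanted to documents.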
The documents can be stored in any format – the applet can determine it. To make embedding more flexible, however, a standard output format should be defined – call it the DDF. For this new OS, it should be designed from the ground up, to support a few things that are important for embedding to work properly. Besides the well-known object embedding, it should also provide text embedding, to make things like table-of-contents applets possible.
Embedding objects is easy. The host applet defines, within the DDF, a region containing a sub-DDF, and pastes the output of the embedded applet into it.
Embedding text is more difficult, and for a reason. With embedded objects, the host decides how large the frame is in which the object is embedded. With embedded text, however, the applet needs to decide the size, as you can’t just rescale text. Additionally, it would not look nice if embedded texts lacked advanced features like, say, automatic hyphenation. A solution would be to let the host applet decode the DDF the embedded object outputs. That is not a clean solution, however, as the host applet would need to sport a complete DDF interpreter.
I believe the solution comes from breaking a monolithic program into several applets. The usual word processor can be split into two pieces. The first one composes formatted text: let it support features like word wrap and the embedding of other texts. The other applet does the layout: it puts the text (and images) into frames, possibly on multiple pages, supporting text flow, page numbers and so on. That way, you can still edit complex documents, but you have more flexibility, and you can use the same advanced text formatting in both the word processor and the spreadsheet program.
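The split above can be sketched as two tiny, independent stages piped together – a composer that wraps words into lines and a layout stage that flows lines into pages. Both functions are minimal invented stand-ins, not real typesetting:

```python
# Sketch of splitting the word processor: a composer applet wraps text
# into lines, a separate layout applet flows those lines into pages.

def compose(text, width):
    """Greedy word wrap: break a string into lines of at most `width` chars."""
    lines, line = [], ""
    for word in text.split():
        if line and len(line) + 1 + len(word) > width:
            lines.append(line)
            line = word
        else:
            line = word if not line else line + " " + word
    if line:
        lines.append(line)
    return lines

def layout(lines, lines_per_page):
    """Flow composed lines into fixed-size pages."""
    return [lines[i:i + lines_per_page]
            for i in range(0, len(lines), lines_per_page)]

lines = compose("the quick brown fox jumps over the lazy dog", 15)
pages = layout(lines, 2)
print(pages)
```

A spreadsheet cell could reuse `compose` alone, while the word processor pipes `compose` into `layout` – the same formatting logic in both places.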
4 The Network is the Computer
These days, networking has become an essential part of every computer system, be it a standalone PC, a file server or a mobile phone. Well, you are right, the dishwasher has no network connection… yet. So the new OS certainly needs to be network-enabled. That does not mean there is no room for improvement, however. We have the static, DHCP and Zeroconf methods of getting IP addresses; NFS and SMB to share files; CUPS, LPR and SMB to share printers; NIS, NIS+ and LDAP to have the same user accounts everywhere; and Remote Desktop, VNC and X for remote logins.
These existing systems work. Sometimes. After editing a lot of settings and configuration files. And that is not how it should be. If networking is to be practical, it should be really practical, for everyone. After all, a home user wanting to take advantage of the network they have set up for internet sharing does not want to dive into the world of TCP/IP, DHCP servers, gateways, DNS and so on. They want a network that just works. And it would be rather practical if you were able to edit the same document no matter whether you are working on the desktop computer, the laptop or the refrigerator.
Such a thing cannot be accomplished easily. Rendezvous is a step in the right direction, but it is still bound to a single computer: you don’t instantly have access to your documents – you need to search through other computers for the resource you need and log in to that computer before you have access.
4.1 The basic idea
How should this work? Imagine you have a home network with two computers. On the one hand, you want to be able to log in to both of them, even when the other one is down. On the other hand, you don’t want a hacker to be able to enter the network with his laptop and have access to everything. And in a larger network, you don’t want each PC to store all user data; such a network probably has a server running 24/7 anyway.
That already implies there should be two “modes” of operation: one for the home user, where each PC knows all accounts, and another for centralized networks, where a server knows them. In a perfect world, the two could be unified, so let’s look at how that can be done.
4.2 Peer-to-peer and server-client implementation
In principle, each PC operates in decentralized mode. Without a network, that means it has one user (with an associated ID) who owns everything. When two such computers meet, each learns the user data from the other. Now you can log in to both computers with exactly the same result.
In a larger network, a server can be added. In a similar peer-to-peer fashion as in decentralized mode, the server information is shared (but only its address, not the accounts themselves). When someone wants to log in, the local user database is checked first, and when there is no match, the computer also asks the server. The server sends the account information to the local PC, and if everything checks out you are logged in and have access to the network – most likely the printers and drive space attached to the server. Additionally, the account now exists on your local PC too, so you can use it even when you aren’t connected to the network.
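The two-step lookup with local caching can be sketched in a few lines; the dictionary structures and plaintext comparison are gross simplifications for illustration only (a real system would store hashes and use a proper protocol):

```python
# Sketch of the two-step login: check the local account database first,
# then fall back to the previously discovered server, caching the account
# locally on success. Data structures are invented for illustration.

def login(user, password, local_db, server_db):
    account = local_db.get(user)
    if account is None and server_db is not None:
        account = server_db.get(user)       # ask the 24/7 server
        if account is not None:
            local_db[user] = account        # cache for offline use
    if account is None or account["password"] != password:
        return False
    return True

server = {"alice": {"password": "s3cret"}}
laptop = {}                                  # fresh PC, no accounts yet

print(login("alice", "s3cret", laptop, server))  # True, fetched from server
print(login("alice", "s3cret", laptop, None))    # True even offline, thanks to the cache
```

The second call succeeds with `server_db` set to `None`, which is exactly the “works when disconnected” property the proposal asks for.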
4.3 Account modification
The only problem left is changing your password, as the new password needs to be propagated through the network without allowing hackers to change it. Luckily, there is a solution for this, too, and it is rather easy. The new password carries the old one “within itself”, so that the password change can authenticate itself. This way, no hacker can change your password without knowing the current one, while you can. To resolve the case where two password changes meet, the date of each password can be stored in the account. This also makes it possible to remove obsolete passwords after a certain amount of time.
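One way to realize this, assuming each account record stores a password hash plus the date of the last change, is to accept a change only when the requester proves knowledge of the current password, and to let dates resolve conflicting changes. The hashing scheme below is my own invented sketch, not something specified by the proposal:

```python
# Sketch of self-authenticating password changes: a change is accepted
# only if it proves knowledge of the current password, and date ordering
# resolves conflicting changes. The exact scheme is an assumption.

import hashlib

def h(pw):
    return hashlib.sha256(pw.encode()).hexdigest()

def change_password(account, old_pw, new_pw, date):
    """Accept the change only if `old_pw` matches the current password."""
    if h(old_pw) != account["hash"]:
        return False                        # a hacker without the old password
    if date <= account["date"]:
        return False                        # an older, superseded change
    account.update(hash=h(new_pw), prev=h(old_pw), date=date)
    return True

account = {"hash": h("first"), "prev": None, "date": 0}
print(change_password(account, "wrong", "evil", 1))    # False
print(change_password(account, "first", "second", 1))  # True
```

Because each accepted record carries the hash of its predecessor (`prev`), any replica receiving the change can verify it chains correctly from the password it already knows.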
5 The end result
Finally, it might be useful to look at the results of the proposal: is it innovative, and almost more important, is it useful and user-friendly?
I believe the proposed GUI does indeed break with the current tradition, and does so in a useful way. Doing away with the windowed interface seems like a step back, but it removes something that is rather confusing for new computer users (and that has no advantage over split-screen windowing other than wasting space, because windows don’t fit against each other). Not having too fancy an interface is also a good thing, as it doesn’t distract you from your work and does not scare people away (yes, people fear Windows XP because it is different from 98/Me).
The document format, on the other hand, does not offer much more than Display PDF or something similar. Combined with the linking model, however, it becomes more powerful than what we have today, bringing the pipes famous in the Unix world into a graphical environment, which greatly extends the possibilities while reducing complexity.
The network model, finally, unifies the traditional server-based systems like UNIX and NetWare and the peer-to-peer networks like AppleShare and SMB in one package, allowing one consistent interface for both types of networks – still powerful, but also comprehensible for the average home user.
Though this proposal might never see a working implementation, I still believe it shows there is a lot of room for innovation in current operating systems. So I hope they will not only innovate behind the scenes (SMP support, NPTL, WinFS, …) but that one of them will take the step of breaking with the past to allow new concepts in, so that the end user finally gets improvements as well.