posted by Nicholas Blachford on Mon 26th Jul 2004 19:00 UTC
This series explores the sort of technologies we could use if we were to build a new platform today. The first two parts covered the hardware and core OS. In this third part we look at security, the file system, file management and throw in a couple of other random ideas for good measure.

Security
<rant mode="on"> I intensely dislike the industry's tendency to blame users for security problems; it is a cop-out and a dereliction of responsibility. The security problems are created by the computer industry, and it is up to the industry to fix them. Viruses, Trojans and other nasties have been around for many years; how can users be at fault when the industry has done seemingly nothing to defeat them? A car driver is not expected to know the inner workings of a car in order to drive, so why should a computer user have to know the inner workings of a computer? Security is not simple [Security], but that is no excuse for blaming the user. </rant>

If there are patches to be downloaded, the system should - by default - check for them daily. Of course, if the system were properly designed in the first place you wouldn't need many patches. Microsoft was warned about potential security problems many years ago; did they do anything about it? Whose fault is that? That said, there are more secure OSs than Windows, especially the Unix-based OSs such as Linux, the *BSDs and OS X (despite thinly disguised marketing efforts that say otherwise).

In any new system, security should be built in from the start. The system should assume that every program may want to cause damage, and restrict its potential to do so. I don't believe in building a single wall, as it can be broken through; many walls are a lot harder to get through and present many opportunities for repelling an attack. Security should therefore be considered in all parts of the system, and no one part should be relied upon. Every part of the system should be secure by default.

Virus Scanning
Scanning everything with a virus scanner should be a standard part of the system, not an add-on. Even if a file contains no known virus, the system should scan it on arrival to see whether it, or part of it, is executable.
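As a rough illustration of the kind of check meant here, a few well-known magic numbers are enough to spot most executable content. The signature list below is deliberately tiny and purely illustrative; a real scanner would use a full signature database:

    # Sketch: flag incoming files whose content is executable, based on a
    # few well-known magic numbers. The signature table is illustrative only.
    EXECUTABLE_SIGNATURES = {
        b"\x7fELF": "ELF binary (Linux / *BSD)",
        b"MZ": "Windows / DOS executable",
        b"#!": "script with an interpreter line",
    }

    def looks_executable(path):
        """Return a description if the file starts with a known executable signature."""
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, description in EXECUTABLE_SIGNATURES.items():
            if header.startswith(magic):
                return description
        return None

    if __name__ == "__main__":
        import sys
        for incoming in sys.argv[1:]:
            kind = looks_executable(incoming)
            if kind:
                print(f"{incoming}: executable content ({kind}) - sandbox before running")
            else:
                print(f"{incoming}: no executable signature found")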

Sandbox all new files
All new files, not just e-mail and web downloads; that way they can't do any damage. If an executable is not run immediately, mark it so that it is sandboxed the first time it is run. FreeBSD can already sandbox applications using Jails [Jail].

You could go further and sandbox everything at all times, and proactively look for things to sandbox - for example, if an application attempts to download an executable file and execute it in memory, the system should either prevent this behaviour or sandbox it.
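How might the "sandbox on first run" mark described above be recorded? One possibility, sketched below, is an extended attribute on the file. The attribute name user.sandbox_pending is made up for this example, and the actual confinement (a Jail or similar) is a separate mechanism not shown here:

    # Sketch: mark a newly arrived file so the launcher knows to sandbox it
    # the first time it is run. Uses a Linux-style extended attribute; the
    # attribute name "user.sandbox_pending" is invented for this example.
    import os

    QUARANTINE_ATTR = "user.sandbox_pending"

    def mark_new_file(path):
        """Called for every file that arrives from outside (mail, web, removable media...)."""
        os.setxattr(path, QUARANTINE_ATTR, b"1")

    def needs_sandbox(path):
        """The launcher checks this before executing anything."""
        try:
            return os.getxattr(path, QUARANTINE_ATTR) == b"1"
        except OSError:        # attribute absent: the file was never quarantined
            return False

    def clear_after_first_safe_run(path):
        """Once the file has run cleanly inside the sandbox, lift the mark."""
        os.removexattr(path, QUARANTINE_ATTR)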

Don't allow programs to delete all files
Also, don't allow programs access to the backed-up files (see the File System section). This will prevent a virus or errant program from deleting all the files in your home directory. If a program tries, the files should be moved to backup instead; the system should monitor the file system for this type of behaviour, warn the user when it is detected, and give them the option of restoring the files and either disabling the application or confining its actions to specific files. Deleting backups should be a privilege only the user has - no application should have this ability.

Such a scenario is possible on almost all operating systems, and it has happened to me on Linux: back in 2000 an alpha release of the Opera browser crashed and removed the contents of my home directory. Needless to say, I wasn't exactly happy about it. I expect an alpha to be unstable and missing features; I did not (and do not) expect it to take out my home directory. If an application could do such a thing by accident, think of what a malicious programmer could do deliberately. Currently, applications can by default cause as much damage as they wish in your home directory; the system should prevent this.
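To make the idea concrete, here is a sketch of that policy as it might look in user space (a real system would enforce it at the file-system level, not in a library). The backup path and the ten-deletions-per-minute threshold are arbitrary examples:

    # Sketch: instead of unlinking, move "deleted" files into a backup area the
    # calling program cannot touch, and flag suspicious mass deletions.
    # The paths and the threshold are illustrative only.
    import os
    import shutil
    import time

    BACKUP_ROOT = os.path.expanduser("~/.backup")   # only the user may empty this
    MASS_DELETE_THRESHOLD = 10                      # deletions per minute
    _recent_deletes = []

    def delete_request(app_name, path):
        """What the system would do when an application asks to delete a file."""
        os.makedirs(BACKUP_ROOT, exist_ok=True)
        shutil.move(path, os.path.join(BACKUP_ROOT, os.path.basename(path)))

        now = time.time()
        _recent_deletes.append(now)
        _recent_deletes[:] = [t for t in _recent_deletes if now - t < 60]

        if len(_recent_deletes) >= MASS_DELETE_THRESHOLD:
            warn_user(app_name, len(_recent_deletes))

    def warn_user(app_name, count):
        # A real system would offer to restore the files and to disable or
        # confine the offending application; here we just report it.
        print(f"WARNING: {app_name} deleted {count} files in the last minute.")
        print(f"They have been moved to {BACKUP_ROOT} and can be restored.")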

Automatically identify all files
Warn the user when the identification is incorrect; if there is a text-based identifier, change it (but tell the user). Text-based file identifiers can be and are abused, and should no longer be relied upon.
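A sketch of the extension-versus-content check follows. Again the signature table is tiny and illustrative; a real system would use a full magic-number database like the one behind the Unix file utility:

    # Sketch: compare a file's text-based identifier (its extension) with what
    # the content actually is, and warn when the two disagree.
    MAGIC_TO_TYPE = {
        b"\x89PNG\r\n\x1a\n": "png",
        b"\xff\xd8\xff": "jpg",
        b"%PDF": "pdf",
        b"PK\x03\x04": "zip",
        b"\x7fELF": "executable",
    }

    def identify_by_content(path):
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, filetype in MAGIC_TO_TYPE.items():
            if header.startswith(magic):
                return filetype
        return None

    def check_identity(path):
        claimed = path.rsplit(".", 1)[-1].lower() if "." in path else None
        claimed = {"jpeg": "jpg"}.get(claimed, claimed)   # treat aliases as equal
        actual = identify_by_content(path)
        if actual and claimed and actual != claimed:
            # Warn the user and correct the misleading name, as proposed above.
            print(f"{path}: claims to be .{claimed} but the content is {actual}")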

Don't run services as Root
This is hard to achieve fully on most Unix-based systems due to the design of the kernel. This is one of the advantages of a microkernel: services do not need to run as root [Microkernel], reducing their ability to do damage.
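On a conventional Unix system, the best a user-space service can do is stay privileged only for as long as it strictly needs to and then drop to an unprivileged account. A sketch of that familiar pattern is below; the account name webservd is just an example:

    # Sketch: the usual Unix mitigation when a service cannot avoid starting as
    # root (e.g. to bind port 80): drop privileges immediately afterwards.
    import os
    import pwd
    import socket

    def drop_privileges(username="webservd"):
        entry = pwd.getpwnam(username)
        os.setgroups([])                 # drop supplementary groups
        os.setgid(entry.pw_gid)          # group first, while we still may
        os.setuid(entry.pw_uid)          # after this, root is gone for good

    def serve():
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 80))   # binding a low port is why we started as root
        listener.listen(16)
        drop_privileges()                # everything from here on runs unprivileged
        while True:
            conn, _ = listener.accept()
            conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello\r\n")
            conn.close()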

Limit external access
Despite being aimed at the desktop, we may want some server-like abilities, such as the ability to control the system from outside. This should be possible, but only via an encrypted method, and preferably only with a dedicated application. Some non-encrypted connections would of course still be possible, e.g. web or FTP servers.
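A sketch of what "encrypted only" could mean for such a control service follows. The certificate paths and port number are placeholders, and a real implementation would also authenticate the client and restrict what it may do:

    # Sketch: a remote-control service that is only reachable over an encrypted
    # channel. Certificate paths and the port number are placeholders.
    import socket
    import ssl

    def run_control_service(certfile="/etc/newos/control-cert.pem",
                            keyfile="/etc/newos/control-key.pem",
                            port=7600):
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        context.load_cert_chain(certfile, keyfile)

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", port))
        listener.listen(4)

        with context.wrap_socket(listener, server_side=True) as secure:
            while True:
                try:
                    conn, addr = secure.accept()   # handshake fails for plaintext clients
                except ssl.SSLError:
                    continue
                command = conn.recv(1024).decode(errors="replace").strip()
                conn.sendall(handle(command).encode())
                conn.close()

    def handle(command):
        # A real controller would authenticate the client and whitelist commands.
        return "received: " + command + "\n"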

Indirect program launching
A program that launches other programs should not be able to launch them directly; the launch would go through the interaction engine (described in a later part of this series), which launches the program on its behalf. This sounds restrictive, but making program launches indirect means they won't return to a terminal once done, so an attacker cannot get access to a terminal by externally crashing a program.
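A sketch of what the broker side of indirect launching might look like; the point is simply that the launched program gets its own session and has no controlling terminal to fall back into:

    # Sketch: indirect launching. Programs never exec each other directly; they
    # send a request to a broker (the interaction engine), which starts the
    # target in its own session with no controlling terminal.
    import subprocess

    def broker_launch(program, args=()):
        return subprocess.Popen(
            [program, *args],
            stdin=subprocess.DEVNULL,        # no inherited terminal input
            stdout=subprocess.DEVNULL,       # output would go to the system log
            stderr=subprocess.DEVNULL,       #   in a real implementation
            start_new_session=True,          # setsid(): detach from the caller's
        )                                    # session and controlling terminal

    # An application would ask for a launch rather than performing one, e.g.:
    #   broker_launch("/usr/bin/some-viewer", ["document.pdf"])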

Exported Services
If I run a web server it automatically becomes visible to everyone on the outside unless it has been fire-walled off. I propose a "tunnel" be added to our system, and only services which have been explicitly "exported" along this tunnel can be accessed from the outside world. This would mean the web server would have to be explicitly exported before it could be reached from outside. The tunnel could also be monitored, so that what is going in and out can be tracked and displayed; if something undesirable is running, its export can be disabled (automatic exporting by programs should not be possible).

Quite how a tunnel would be implemented is open to question. One possibility is to have two network stacks: the internal stack is in contact with the programs in the system, while the external stack is in contact with the external network interfaces. The tunnel sits in between, connecting the two stacks. You could run a router, firewall, NAT (Network Address Translation) and other services on the external stack yet have the inner stack completely isolated from the internet. When you do connect, you could go via NAT, which itself adds another layer of security.
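The two stacks themselves would be kernel work, but the tunnel's policy can be pictured as a small relay on the boundary: only ports listed in an explicit export table are forwarded from the external interface to the internal side, and every crossing can be logged. A sketch, with all addresses, ports and the export table as illustrative placeholders:

    # Sketch of the tunnel's policy layer: only explicitly exported services are
    # relayed from the external interface to the internal stack, and every
    # connection crossing the boundary is logged.
    import socket
    import threading

    EXPORTS = {80: ("127.0.0.1", 8080)}   # external port -> internal service
    EXTERNAL_IFACE = "0.0.0.0"            # the external stack's address

    def pipe(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
        dst.close()

    def relay(external_port, internal_addr):
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind((EXTERNAL_IFACE, external_port))
        listener.listen(8)
        while True:
            outside, who = listener.accept()
            print(f"tunnel: {who[0]} -> exported port {external_port}")   # monitoring
            inside = socket.create_connection(internal_addr)
            threading.Thread(target=pipe, args=(outside, inside), daemon=True).start()
            threading.Thread(target=pipe, args=(inside, outside), daemon=True).start()

    if __name__ == "__main__":
        for port, target in EXPORTS.items():
            threading.Thread(target=relay, args=(port, target), daemon=True).start()
        threading.Event().wait()          # anything not in EXPORTS stays unreachable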
