Network Transparency and the Multi-Root Filesystem

This article offers feature suggestions to budding OS developers looking for that neat edge. Wouldn’t it be really nice if FILE *f = fopen("ftp://www.microsoft.com/mission/statement.html", "r"); just worked?
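
To make the idea concrete, here is a minimal sketch of what the calling code could look like, assuming a hypothetical platform whose fopen() accepts URIs and dispatches to a protocol handler behind the scenes (on today's systems this call would simply fail):

    #include <stdio.h>

    int main(void)
    {
        /* In this imagined libc, fopen() recognises URI schemes and
           routes the open to the matching protocol handler. */
        FILE *f = fopen("ftp://www.microsoft.com/mission/statement.html", "r");
        if (f == NULL) {
            perror("fopen");            /* remote opens fail far more often */
            return 1;
        }

        char buf[512];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, f)) > 0)
            fwrite(buf, 1, n, stdout);  /* stream the remote file to stdout */

        fclose(f);
        return 0;
    }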

People have been dreaming of ‘mounting’ remote filesystems on demand for a long time. It seems to be a popular pastime for architecture astronauts. Despite Joel’s warnings to run from network transparency, I vote that you don’t!

Client-side libraries, such as the excellent libferris, already allow a program to access remote resources in the same way as local ones.

A new operating system that integrated such handling at the platform level (rather than in an additional, optional library) would have the advantage that each and every application could access the same resources. The ‘ls’ at the ported bash prompt would be able to list the contents of an FTP directory, and the notepad clone would load your text files whether they were local, on some Windows server, or on the other side of the internet.

Users already work with and are familiar with URIs, so a URI is the natural way of expressing a file’s name and location. The filesystem ought to work with URIs.

Imagine the following snippet of a fictitious command line:

> pwd
file:///home/me/
> cd http://www.microsoft.com
> cd mission
> ls
.	..	statement.html
> rm statement.html
access denied.

The use of URIs introduces what I term the ‘Multi-Root Filesystem’. Each protocol, e.g. “file”, “smb” or “webdav”, is a root. All protocols are peers of each other, and you can’t navigate between them with relative paths (i.e. no “file:///home/me/../../../http://remote”).
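
As a minimal sketch of how a resolver might treat the scheme as the root, the snippet below splits a URI into scheme and path; the function name and layout are purely illustrative, not part of any existing API:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: split "scheme://rest" into its two parts. */
    static int split_uri(const char *uri, char *scheme, size_t slen,
                         const char **rest)
    {
        const char *sep = strstr(uri, "://");
        if (sep == NULL || (size_t)(sep - uri) >= slen)
            return -1;                    /* not a URI we understand */
        memcpy(scheme, uri, sep - uri);
        scheme[sep - uri] = '\0';
        *rest = sep + 3;
        return 0;
    }

    int main(void)
    {
        char scheme[16];
        const char *path;

        if (split_uri("http://www.microsoft.com/mission/", scheme,
                      sizeof scheme, &path) == 0)
            printf("root: %s, path: %s\n", scheme, path);

        /* Each scheme is its own root, so a relative path can never
           climb out of one root and into another. */
        return 0;
    }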

Protocol handlers are global to the user or system. The file structure might be generated on demand, with the contents of “http://” reading like recent internet history; whereas “file://” might contain the local UNIX-style root “/”, which in turn contains “home”, “bin” and so on.

Provide your system with a new protocol handler and suddenly all applications can use files available via that mechanism.
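
One way to picture the plumbing is a table of per-protocol handlers that the VFS consults when it sees a scheme. Everything below (the struct layout, function names and the stub handler) is a hypothetical sketch, not the interface of any existing kernel:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical per-protocol handler: the VFS would call open, read,
       close and so on through these pointers once the scheme matches. */
    struct proto_handler {
        const char *scheme;
        void *(*open)(const char *path, const char *mode);
        /* ... read, write, close, readdir ... */
    };

    static void *ftp_open(const char *path, const char *mode)
    {
        printf("ftp handler: open %s (%s)\n", path, mode);
        return NULL;    /* stub */
    }

    static struct proto_handler handlers[] = {
        { "ftp", ftp_open },
        /* registering "webdav", "smb", ... here would instantly make
           those resources visible to every application */
    };

    static struct proto_handler *lookup(const char *scheme)
    {
        for (size_t i = 0; i < sizeof handlers / sizeof handlers[0]; i++)
            if (strcmp(handlers[i].scheme, scheme) == 0)
                return &handlers[i];
        return NULL;
    }

    int main(void)
    {
        struct proto_handler *h = lookup("ftp");
        if (h)
            h->open("www.microsoft.com/mission/statement.html", "r");
        return 0;
    }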

Obviously, opening a file over FTP can be far more complicated than opening a local file. When the same API is used to access both local and remote file-like resources, the problems that rarely surface in local operations (a call that takes a long time, or fails outright) happen much more often. The average programmer’s habit of never checking whether an operation succeeded, and of always putting I/O on the UI thread, is bad practice for local resources and far worse for remote ones. It would have to be thought about. Massively multi-threaded, message-passing operating systems might have the edge in this respect.
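
As a sketch of the discipline this asks for, a program might push the potentially slow open onto a POSIX worker thread and check the result instead of blocking the UI thread; the URI-aware fopen() remains hypothetical, everything else is plain pthreads:

    #include <pthread.h>
    #include <stdio.h>

    /* Worker: perform the slow, fallible open away from the UI thread. */
    static void *open_worker(void *arg)
    {
        const char *uri = arg;
        FILE *f = fopen(uri, "r");       /* may take seconds, may fail */
        if (f == NULL) {
            perror(uri);
            return NULL;
        }
        /* ... read the file, post a "loaded" message back to the UI ... */
        fclose(f);
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        static char uri[] = "ftp://www.microsoft.com/mission/statement.html";

        if (pthread_create(&worker, NULL, open_worker, uri) != 0) {
            perror("pthread_create");
            return 1;
        }
        /* The UI thread stays responsive here while the worker blocks. */
        pthread_join(worker, NULL);
        return 0;
    }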

Unified, transparent network access for many protocols does not remove the need for dedicated protocol-handling libraries in specialist programs. But it does make the average program suddenly much more powerful and useful to the average user!

It is worth mentioning an additional feature for the interested OS developer to research:
auto-mounting archives and encrypted files transparently, e.g. “sftp://www.mycom.org/mail/archives/2004-07.zip.pgp/get rich quick.msg”.
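
A path resolver for such a feature would have to peel the layers apart component by component. The toy below only reports which components would need an archive or decryption layer mounted; the suffix checks and output are illustrative assumptions, not a real resolver:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Walk the path and flag components that imply an extra layer. */
        char path[] = "mail/archives/2004-07.zip.pgp/get rich quick.msg";
        char *comp = strtok(path, "/");

        while (comp != NULL) {
            if (strstr(comp, ".pgp"))
                printf("decryption layer needed for: %s\n", comp);
            if (strstr(comp, ".zip"))
                printf("archive layer needed for: %s\n", comp);
            comp = strtok(NULL, "/");
        }
        return 0;
    }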

