In the Free and Open Source communities we are proud of our ‘bazaar’ model, where anyone can join in by setting up a project and publishing their programs. Users are free to pick and choose whatever software they want… provided they’re happy to compile from source, resolve dependencies manually and give up automatic security and feature updates. In this essay, I introduce ‘decentralised’ installation systems, such as Autopackage and Zero Install, which aim to provide these missing features.
I am the author of Zero Install, but I hope to focus on concepts rather than particular implementations here. I’ll start by describing our goal of allowing programmers, users, administrators, reviewers and QA teams to collaborate directly, and explain a few of the limitations of traditional centralised systems.
We’ll look at the technical issues involved in distributing software without a central authority; how to prevent conflicts, handle dependencies, provide updates, share software efficiently, compile from source, and provide good security. Finally, I’ll look at the possibilities of converting between different formats and systems.
Stepping back for a moment from implementation issues, let us consider the various people (‘actors’) involved and the things they should be able to do. Here is a simplified Use Case diagram:
Here we have a user who selects and runs a program, which has been made available by a programmer. The user may provide feedback to help with future versions. Patched or improved versions may come from the original programmer, or from a third party represented here by a “QA team” providing timely bug-fixes on the programmer’s behalf. Users must be able to receive these updates easily.
The user’s choice of software will be guided by advice from their system administrator and/or from 3rd-party reviewers (e.g. magazines). In some situations a system administrator may need to lock down the system to prevent unauthorised programs from being run. The administrator may also pre-configure the software for their users, so their users don’t need to do it themselves.
Since users run many different programs, and each program will use many libraries, a real system will involve many different programming teams, QA teams and reviewers.
Language translators and producers of binaries for particular architectures are not shown; they are included in “Programmer” or “QA team” (depending on whether their contributions are distributed separately or bundled in the main release). Providers of hosting and mirroring services are also not shown.
Note that in these high-level use cases I don’t talk about ‘installation’ at all, because this isn’t something anyone actually wants to do; it’s just a step that may be required in order to do something else.
The challenge, then, is to provide a framework in which all of these different people, with their different roles and goals, can easily work together.
The picture above doesn’t quite correspond to the model used by traditional Linux distributions, where users must pick a distribution and then only use software provided by that distribution. This model falls short of the ideals of Free software, because a user is only free to install programs approved by their distribution (of course, it may still be possible for users to install other software; here and in the rest of this essay I am concerned with things being easy and reliable).
As a software author in this system, I must convince one or more major distributions to accept my software before most users will be able to run it. Since distributions are unlikely to accept the maintenance burden of supporting software with a small user-base, this makes it very difficult for new software to be adopted.
The situation is worse if the program has network effects. For example, few people will want to distribute documents in a format that is readable only by users of a particular distribution (because only that distribution has packaged the software required to read them). In this case, the programmer must convince all the major distributions to include their software. This problem also applies to programming languages: who will write programs in a new language, when many users can’t get hold of the required compiler or interpreter?
For example: I want to write a program in D because I’m less likely to introduce security flaws that way, but I actually write it in C because many distributions don’t have a D compiler.
The situation with traditional distributions is also highly inefficient, since it requires multiple QA teams (packagers) doing the same work over and over again. The diagram below shows some of the people involved in running Inkscape. I’ve only shown two distributions in this picture (plus a Linux From Scratch user, who gets the software directly), so you’ll have to imagine the dozen or so other packagers doing the same for other distributions.
Do we need this many people working on essentially the same task? Are they bringing any real value? Without the Fedora packager, Fedora users wouldn’t be able to install Inkscape easily, of course, so in that sense they are being useful. But if the main Inkscape developers were able to provide a package that worked on all distributions, providing all the same upgrade and management features, then we wouldn’t need all these packagers.
Perhaps some of them would join the main Inkscape development team and do the same work there, but benefiting everyone, not just users of one distribution. Perhaps they would add exciting new features to Inkscape. Who knows?
A system in which anyone can contribute must be decentralised. Otherwise, whoever controls the central part will be able to decide who can do what, or it will fragment into multiple centralised systems, isolated from each other (think Linux distributions here).
How can we design such a system? One important aspect is naming. Linux packages typically have a short name, such as gimp or inkscape, and they include binaries with names like convert and html2text, and libraries with names like libssl.so. If anyone can contribute packages into our global system without someone coordinating it all, how can we ensure that there are no conflicts? How can the system know which of several programs named firebird the user is asking to run?
One method is to generate a UUID (essentially a large random number), and use that as the name. This avoids accidental conflicts, but the new names aren’t very friendly. This isn’t necessarily a problem, as the identifier only needs to be used internally. The user might read a review of a program in their web browser and tell their computer “When I type ‘gimp’, run that program”.
Another approach is to calculate the name from the program’s code using a cryptographic hash function. Such names are also unfriendly to humans, but have the advantage that if you know the name of the program you want then you can check whether a program some random stranger gives you is really it, enabling peer-to-peer distribution. However, since each new version of the program will have a different name, this method can only name individual versions of a program.
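The hash-based scheme can be sketched in a few lines. This is an illustration of the idea only, not Zero Install’s actual digest format; the digest prefix and the release bytes are made up:

```python
import hashlib

def content_name(data: bytes) -> str:
    """Derive a name for one version of a program from its own bytes."""
    return "sha256=" + hashlib.sha256(data).hexdigest()

release = b"...the bytes of one released version..."
name = content_name(release)

# Anyone who knows the name can check a copy from an untrusted peer:
assert content_name(release) == name
# Any change, however small, produces a different name - so this scheme
# names individual versions, not the program as a whole:
assert content_name(release + b"!") != name
```

This is what makes peer-to-peer distribution safe: the name itself carries enough information to verify the payload.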
Another popular approach is to include the name of a domain you control in the program’s name. For example, the Autopackage developer guide gives @purity.sourceforge.net/purity as an example. These names are much more friendly for users. This does require you to be given a domain name by someone, but these are rather easy to come by, and a single domain is easily sub-divided further. Zero Install uses a similar scheme, with URLs identifying programs (such as http://www.hayber.us/0install/MusicBox), combined with the use of hashes to identify individual versions, as described above. Using a URL for the name has the additional advantage that the name can tell you where to get more information about the program. Sun’s Java Web Start also uses URLs to identify programs.
Finally, it is possible to combine a URL with a cryptographic hash of a public key (corresponding to the private key used to sign the software). This gives a reasonably friendly name, along with the ability to check that the software is genuine. However, the name will still change when a new key is used.
Whichever naming scheme is used, we cannot expect users to type in these names manually. Rather, these are internal names used by the system to uniquely identify programs, and used by programs to identify their dependencies. Users will set up short-cuts to programs in some way, such as by dragging an object representing a program from a web-page to a launcher.
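The short-cut idea amounts to a small per-user mapping from friendly aliases to internal names. A minimal sketch (the URLs are illustrative):

```python
# Hypothetical short-cut table: friendly names chosen by the user map to
# the globally unique names used internally (URLs here, as in Zero Install).
shortcuts = {}

def add_shortcut(alias, unique_name):
    shortcuts[alias] = unique_name

def resolve(alias):
    """The system resolves the user's alias to an unambiguous program name."""
    return shortcuts[alias]

# Two unrelated programs may both be called 'firebird'; the user's own
# short-cut decides which one that word means on their system.
add_shortcut("firebird", "http://example.org/firebird-database")
assert resolve("firebird") == "http://example.org/firebird-database"
```

Because the table is per-user, name clashes between publishers never need a central registry to resolve them.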
Note that Klik identifies programs using URIs, but with simple short names (e.g. klik://firefox). Therefore, it is not decentralised in the sense used in this essay: I cannot distribute my packages using Klik without having my package registered with the Klik server, and the controllers of the server must agree to my proposed name.
By using globally unique names, as described above, we can unambiguously tell our computer which program we want to run, and the program can unambiguously specify the libraries it requires. However, we must also consider file-level conflicts. If we have two libraries (@example.org/libfoo and @demo.com/libfoo, for example, both providing a file called libfoo.so) then we can tell that they are different libraries, but if we want to run one program using the first and one using the second, then we cannot install both at once! This ability to detect conflicts is an important feature of a packaging system, helping to prevent us from breaking our systems.
Another source of file-level conflicts occurs when different programs require different versions of the same library. A good package manager can detect this problem, as in this example using Debian’s APT:
# apt-get install gnupg
The following packages will be REMOVED
[...]
  plash rootstrap user-mode-linux
The following packages will be upgraded:
  gnupg libreadline5
Here, I was trying to upgrade the gnupg package to fix a security vulnerability. However, the fixed version required a newer version of the libreadline5 package, which was incompatible with all available versions of user-mode-linux, rootstrap and plash (three other security-related programs I use regularly). APT detects this and warns me, preventing me from breaking the other programs. Of course, I still end up with either an insecure gnupg or no user-mode-linux, but at least I’m warned and can make an informed decision.
In a centralised Linux distribution these problems are kept to a minimum by careful central planning. Some leader decides whether the newer or older version of the library will be used in the distribution, and the incompatible packages are updated as soon as possible (or, in extreme cases, dropped from the distribution entirely).
Traditional Linux systems also try to solve this by having ‘stable’ or ‘long term support’ flavours. The problem here is that we are forced to make the same choice for all our software. In fact, we often want to mix and match: a stable office suite, perhaps, with a more recent web browser. At work, I generally want to run the most stable version available that has the features I need.
In a decentralised system these problems become more severe. There is no central authority to resolve naming disputes (and renaming a library has many knock-on effects on programs using that library, so library authors will not be keen to do it). Worse, if updating a program to use a new version of a library prevents it from working with older versions, then it will now be broken for people using the older library. We cannot assume that everyone is on the same upgrade schedule.
Indeed, upgrading a library used by a critical piece of software may require huge amounts of testing to be done first. This isn’t something you want to be rushed into, just to get a security fix for another program. Less actively maintained programs may not be updated so frequently, especially some utility programs developed internally.
Finally, conflicts become much more serious if you allow ordinary users, not just administrators, to install software. If installed packages are shared between users, which is important for efficiency, and packages can conflict, then one user can prevent another user from installing a program just by installing something that conflicts with it. How packages can be shared securely between mutually untrusting users will be covered later.
In the GnuPG example above, the package manager is providing a valuable service, but the situation still isn’t ideal. What I really want is to be able to install all the programs I need at the same time!
The general solution is to avoid having packages place files with short names (such as gimp) into shared directories (such as /usr/bin). If we allow this, then we can never permit users to install software (one user might install a different, or even malicious, binary with the same name).
Also, supporting multiple versions of programs and libraries can only be done with a great deal of effort. For example, we could name our binaries gimp-2.2 and gimp-2.4 to support different major versions of the Gimp, but we still can’t install versions 2.2.1 and 2.2.2 at the same time and we’ll need to upgrade our scripts and other programs every time a new version comes out.
We can simplify the problem a great deal by having a separate directory for each package, and keeping all of the package’s files (those that came in the package, not documents created by the program when run) in that directory. This technique is used by many systems, including the Application directories of ROX and RISC OS, the Bundles of NEXTSTEP, Mac OS X and GNUstep, Klik’s disk images, the Filesystem Hierarchy Standard’s /opt, and Bernstein’s /package. Then we just need a way to name the directories uniquely.
So how do we name these directories? We can let the user decide, if the user explicitly downloads the package from the web and unpacks it to their home directory. This still has a few problems. Sharing packages between users still doesn’t work (unless the users trust each other), and programs can’t find the libraries they need automatically, since they don’t know where the user has put them.
One solution to the library problem is to have each package include all of its dependencies, and not distribute libraries as separate packages at all. This is the technique Klik uses, but it is inefficient since libraries are never shared, even when they could be, and upgrading a library requires upgrading every package that uses it. Security patches cause particular problems here. As the Klik site acknowledges, this is only suitable for a small number of packages. Bundling the libraries with the package is also inflexible; I can’t choose to use a customised version of a library with all my programs.
A solution that works with separate libraries and allows sharing between users is to store each package’s directory in a shared system directory, but using the globally unique name of that version of the package as the directory name. For example:
/opt/gimp.org-gimp-2.2.1/...
/opt/gimp.org-gimp-2.4.3/...
/opt/example.org-libfoo-1.0/...
/opt/demo.com-libfoo-1.0/...
Two further issues need to be solved for this to work. First, we need some way to allow programs to find their libraries, given that there may be several possible versions available at once. Secondly, if we allow users to install software then we need to make sure that a malicious user cannot install one program under another program’s name. That is, if /opt/gimp.org-gimp-2.4.3 exists then it must never be anything except gimp version 2.4.3, as distributed by gimp.org. These issues are addressed in the following sections.
GNU Stow works by creating symlinks with the traditional short names pointing to the package directories (e.g. /usr/bin/gimp to /opt/gimp.org-gimp-2.4.3/bin/gimp). This simplifies package management somewhat, but doesn’t help us directly. However, with a minor modification – storing the symlinks in users’ home directories – we get the ability to share package data without having users’ version choices interfere with each other.
So we might have ~alice/bin/gimp pointing to /opt/gimp.org-gimp-2.2.1/bin/gimp, while ~bob/bin/gimp points to /opt/gimp.org-gimp-2.4.3/bin/gimp. Alice and Bob can use whatever programs and libraries they want without affecting each other, but whenever they independently choose the same version of something they will share it automatically. If both users have a default policy of using the latest available version, sharing should be possible most of the time.
Continuing our example above, Alice can now run the new gnupg, while Bob continues to use user-mode-linux, since each has a different ~/lib/libreadline5.so symlink. That is, Alice can’t break Bob’s setup by upgrading gnupg. We can go further. We can have a different set of symlinks per user per program. Now a single user can use gnupg, user-mode-linux and plash at the same time:
When an updated version of user-mode-linux becomes available, we simply update the symlink, so that user-mode-linux and gnupg share the new version of the library, while plash continues with the older version.
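The per-program symlink scheme is simple enough to sketch directly. Here the store and package names are invented for illustration; upgrading is nothing more than re-pointing one link:

```python
import os
import tempfile

store = tempfile.mkdtemp()      # stands in for the shared package store
env_dir = tempfile.mkdtemp()    # one program's private set of symlinks

# Two installed versions of a hypothetical library package:
for v in ("example.org-libfoo-1.0", "example.org-libfoo-1.1"):
    os.makedirs(os.path.join(store, v))

def select(env_dir, short_name, package_dir):
    """Point this program's symlink at the chosen version. Re-pointing the
    link upgrades this program only; other programs keep their own links."""
    link = os.path.join(env_dir, short_name)
    if os.path.islink(link):
        os.remove(link)
    os.symlink(os.path.join(store, package_dir), link)

select(env_dir, "libfoo", "example.org-libfoo-1.0")
select(env_dir, "libfoo", "example.org-libfoo-1.1")   # the upgrade step
assert os.readlink(os.path.join(env_dir, "libfoo")).endswith("1.1")
```

Nothing in the shared store is ever modified; only the small per-program link set changes, so two programs can always disagree about which version to use.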
In fact, we don’t need to use symlinks at all. Zero Install instead sets environment variables pointing to the selected versions when the program is run, rather than creating huge numbers of symlinks, but the principle is the same.
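A sketch of the environment-variable approach, assuming conventional bin/ and lib/ layouts inside each package directory (the directory names are illustrative, and the real variables to set depend on each dependency):

```python
import os

def launch_env(selected_dirs):
    """Build the environment for one run of one program, pointing it at
    the package directories chosen for this run."""
    env = dict(os.environ)
    env["PATH"] = os.pathsep.join(
        [os.path.join(d, "bin") for d in selected_dirs] + [env.get("PATH", "")])
    env["LD_LIBRARY_PATH"] = os.pathsep.join(
        os.path.join(d, "lib") for d in selected_dirs)
    return env

env = launch_env(["/opt/gimp.org-gimp-2.4.3", "/opt/example.org-libfoo-1.0"])
assert env["PATH"].startswith("/opt/gimp.org-gimp-2.4.3/bin")
```

The environment dies with the process, so different versions of the same library can be in use by different running programs at the same time.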
The simplest way for a programmer to distribute software is as an archive file containing the program’s files. This archive can be placed on a web page, along with instructions on how to download and run it.
This isn’t very convenient. At the very least, we will expect our computer to check for updates periodically and give us the option of installing them. We will also want the system to download and install any missing dependencies we require. Both of these tasks require a machine-readable version of the web page, and there are two similar file formats available for this purpose.
Luau and the Zero Install feed specification both define XML-based formats for describing available versions of software and where to get them. They are rather similar, although Luau feeds don’t provide information about dependencies and don’t have signatures, while Zero Install feeds don’t provide messages about what changed between versions.
A more subtle difference is that each version in a Luau feed contains a cryptographic digest of the package’s compressed archive, while Zero Install feeds give a cryptographic digest of the package’s uncompressed directory tree. The Monotone documentation has a good explanation of how this can be done, although the manifest format it describes is not identical to the Zero Install manifest format.
While either type of digest is sufficient to check that a downloaded package is correct, Zero Install’s digests also allow installed packages to be verified later, and permit peer-to-peer sharing. This doesn’t work if you give the digest of the archive, since the archive is thrown away after installation. Of course, there’s no reason why both can’t be provided, giving an extra layer of security.
Java Web Start’s JNLP is another XML-based format with similar goals, but only works with Java programs. Also, the JNLP file and all the jar files (libraries) must be signed with the same certificate, which isn’t suitable for the distributed OSS development model.
Sharing installed software
We saw above that letting users install software and having it shared between them was possible (and safe) provided that users could put genuine versions of programs in the shared directory, but not tampered ones. There are two ways we can achieve this.
The first is to have users ask a privileged system process to install a program, giving it the program’s globally unique name. The system downloads the named software and unpacks it into a directory with that name. So, Alice can ask the system to install http://gimp.org/gimp or http://evil.com/gimp, and the system will install to /opt/gimp.org-gimp or /opt/evil.com-gimp as appropriate. Alice cannot cause one to appear in the other’s location, so Bob can be confident that /opt/gimp.org-gimp really is the program he wants. This is rather similar to the way a local caching web proxy works:
The second approach is to have users download the software and then ask the privileged system process to copy it to the shared cache. This requires the program’s name to be a cryptographic digest of its contents, as explained above.
Which method is best? An early filesystem-based version of Zero Install used the first method, while the current one uses the second. The main disadvantage of the first method is that the privileged process is rather complicated, and this is not a good thing for a program which runs with higher privileges than the person using it, since it’s easier for a malicious user to find bugs to exploit. After all, the process must download from the network, provide progress feedback to the users downloading the package, allow users to cancel their downloads or select a different mirror (but what if two users are trying to download the same package?), and so on. In particular, it is not possible to share software installed from a CD when using the first method.
Using the second method, the privileged process only needs to be able to check that the digest of a local directory matches the directory’s name. For example: Alice goes to gimp.org and discovers that the version of the Gimp she wants has a SHA256 digest of “4fa…”. She sees that the directory /shared-software/sha256=4fa… already exists on the computer (perhaps Bob added it earlier). Alice runs this copy, knowing that the installer wouldn’t have let Bob put it there under that name unless it had exactly the same contents as the copy Alice wants to run.
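The check the privileged process performs can be sketched as follows. The manifest scheme here (hash each file’s relative path and contents in a fixed order) is illustrative, not Zero Install’s real format:

```python
import hashlib
import os

def directory_digest(root):
    """Digest of an uncompressed directory tree: hash each file's relative
    path and contents, in a fixed traversal order so everyone agrees."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                       # deterministic order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            rel = os.path.relpath(path, root)
            h.update(rel.encode() + b"\0")
            with open(path, "rb") as f:
                h.update(f.read())
            h.update(b"\0")
    return "sha256=" + h.hexdigest()

def safe_to_share(unpacked_dir, claimed_name):
    # The privileged process accepts a directory into the shared cache
    # only if its digest matches the name it will be stored under.
    return directory_digest(unpacked_dir) == claimed_name
```

Note how little the privileged code has to do: no networking, no user interaction, just one deterministic check.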
The basic security model used by Linux and similar systems is to have different users, each in their own security domain. Each user must trust the core system (e.g. the kernel, the login system, etc) but the system is protected from malicious acts by users, and users are protected from each other.
In this essay, I’ve talked about malicious users in several places, but it’s important to realise that this includes otherwise-trustworthy people who are (accidentally) running malicious software, or whose account has become infected with a computer virus, or who have failed to choose a secure password, and so on. So even on a family computer, where the people all trust each other, there is benefit to containing an exploit in a single user’s account.
Many Linux installation systems work by downloading a package and then executing a script within it with root access, and copying files into locations where they can affect the whole system. If you tell your computer to “upgrade all packages” each week, there may be several hundred people in the world who can execute any code they like on your machine, as root, within the next seven days!
For some packages, this is reasonable; you can’t expect to keep your kernel up-to-date without trusting the kernel’s packager. For others (desktop applications, for example) we might hope to limit them to destroying only the accounts of users who actually run them. Games, clipart, and documentation packages should ideally be unable to damage anything of value. This is the Principle of least privilege.
As well as malicious or compromised user accounts, we must also consider the effects of an insecure network, hostile web-sites, compromised servers, and even malicious software authors.
Klik is activated by the browser trying to resolve a URL starting with ‘klik://’. Firefox displays a confirmation box to prevent malicious sites from starting an install without the user’s consent, although the dialog does give the user the option to defeat this protection in future:
Autopackage requires the user to download the package file and then run it from the file manager. Firefox extensions can only be installed from white-listed sites, and a count-down timer prevents users accidentally clicking on Install. Zero Install requires the user to drag the link from the web-browser to some kind of installer or launcher (e.g. onto a Start menu), and requires confirmation that the author’s GPG key is trusted:
Transport Layer Security (e.g. the https protocol) can protect against insecure networks and replay attacks. It allows the client to be sure that it is talking to the host it thinks it is, provided the remote host has a certificate signed by a trusted CA. However, TLS requires the private key to be available to the server providing the software; the server does not require any action from a human to make use of this key. This means that an attacker breaking into the server and modifying a program will go undetected.
An alternative approach is for the author of the software to sign it on their own computer and upload the signature to the server. This should be more secure, since the developer’s signing machine is much less exposed to attackers (it may not even be on a network at all). It also allows mirrors to host the software, without the user having to trust the mirrors.
In fact, rather than signing the software itself, we may prefer to sign the XML file describing it. The XML file contains the digest of each version, as explained above, and the software can be verified from that. The advantage here is that the actual package file doesn’t need to be modified. Also, the signature remains up-to-date, since the author re-signs the whole XML file on each new release (signing keys should be updated from time-to-time to use stronger algorithms and limit the effect of compromised keys).
The downside of these static signatures is that replay attacks are possible, where an attacker (or mirror) provides an old version of a program with known security flaws, but still correctly signed. To protect against this, Zero Install records the time-stamp on the signature and refuses to ‘upgrade’ to a version of the XML feed with an earlier signature. This warning should also make it obvious to those users who did get a more recent version that a break-in has occurred.
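The replay check amounts to remembering the newest signature time-stamp accepted for each feed, and refusing to go backwards. A minimal sketch, with invented URLs and Unix time-stamps:

```python
last_seen = {}   # newest signature time-stamp accepted, per feed URL

def accept_feed(url, signature_time):
    """Refuse to 'upgrade' to a feed signed earlier than one already seen -
    it may be a replay of a version with known security flaws."""
    if signature_time < last_seen.get(url, 0):
        raise ValueError("feed signed earlier than a known version: "
                         "possible replay attack")
    last_seen[url] = signature_time

accept_feed("http://example.org/feed.xml", 1_100_000_000)
accept_feed("http://example.org/feed.xml", 1_200_000_000)   # newer: accepted
```

A later call with a time-stamp before 1_200_000_000 would raise, which is exactly the warning that tells an up-to-date user a mirror or server may have been compromised.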
The signed XML file must also include the globally unique name of the program; it’s no good trusting a correctly-signed version of ‘shred’ when you asked your computer to open the file with a text editor!
As always, users have the problem of deciding whether to trust a particular key in the first place. The hint in the screenshot above is from a simple (centralised) database supplied with Zero Install, and only says that the key is known, not trustworthy. A useful task for a QA team (or distribution) would be to add their signatures to approved versions of a program, or to provide their own database of trusted keys.
A final point is that we may want to give different levels of trust to different programs. If I am evaluating six accounting packages then I will probably want to give them all very limited access to my machine. Once I have chosen one, I may then give that single program more access.
Sun’s Java Web Start is able to use the security mechanisms built into Java to run programs in a restricted environment. Other systems may use more general sandboxing tools, such as Plash.
Freedom to modify programs requires easy access to the source code, and the ability to use your patched version of a library with existing programs. Our installation system should be able to download the source code for any program, along with any compilers, build tools or header files we need, as in this screenshot of Zero Install compiling a Pager applet:
It is often the case that a program compiled against an old version of a library will still run when used with a newer version, but the same program compiled against a newer version will fail to work with the old library. Therefore, if you plan to distribute a binary it is usually desirable to compile against the oldest version of the library you intend to support. Notice how Pager in the screenshot above has asked to be compiled against the GTK 2.4 header files, even though my system is running GTK 2.8. My blog post Easy GTK binary compatibility describes this in more detail, with more screenshots in 0compile GUI.
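The version-selection rule for building is the mirror image of the rule for running: pick the oldest version that still satisfies the requirement, rather than the newest. A sketch, with illustrative GTK header versions:

```python
def oldest_supported(available, minimum):
    """Choose the oldest available library version that still meets the
    program's minimum requirement, maximising binary compatibility of
    the resulting build."""
    candidates = [v for v in available if v >= minimum]
    return min(candidates) if candidates else None

installed_gtk_headers = [(2, 4), (2, 6), (2, 8)]   # versions illustrative
# A program declaring 'needs GTK >= 2.4' is compiled against 2.4, even on
# a system running 2.8, so the binary runs on older systems too:
assert oldest_supported(installed_gtk_headers, (2, 4)) == (2, 4)
```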
The Pager binary package includes an XML file giving the exact versions of the libraries used to compile it. Our ability to install any version of any library we require without disturbing other programs allows us to recreate the previous build environment very closely, reducing the risk that recompiling it will have some other unintended effect.
Converting between formats
Autopackage and Klik packages are supplied as executable files. Running the script installs the software. Zero Install uses no installation instructions, only an XML file describing the software and its requirements. Most systems fall in between these two extremes, often giving dependencies declaratively, but with scripts to perform any extra setup required.
The scripting approach gives the most power to the packager. For example, supporting a new archive format only requires having the Klik server send out suitably modified shell scripts for clients to run, whereas new archive formats can only be used with Zero Install after upgrading the Zero Install software itself.
On the other hand, scripts give very little power to the user; the only thing you can do reliably with a script is execute it. It is possible to trace a particular run of a script and see what it did; CheckInstall monitors a program’s “make install” command and then generates a package from the actions it observes. However, this cannot pick up dependency information or detect alternative actions the script might have taken in other circumstances.
My klik2zero script can convert Klik packages to Zero Install feeds. This works because the result of executing a Klik script is a self-contained archive with all the program’s files. However, you have to host the resulting file yourself because the information about where the script got the files from is lost.
The autopackage2zero program works differently. Many Autopackages are actually shell scripts concatenated with a tar.bz2 archive, and the converter parses the start of the Autopackage script to find out where the archive starts and then creates an XML description that ignores the script entirely. This means that you can create a Zero Install feed for an existing autopackage, using Zero Install to check for updates and providing signature checking, but actually downloading the original .package file. Again, this loses any information about dependencies or other installation actions, but it does work surprisingly often.
Going the other way (converting from a declarative XML description to a script) is much easier. Zero2Bundle creates large self-contained application directories from Zero Install feeds by unpacking each dependency into a subdirectory and creating a script to set up the environment variables appropriately. Note that this is different to the normal (recommended) way to use Zero Install applications from ROX, which is using AddApp to create a tiny ROX application directory with a launcher script.
Even if people continue to get most of their programs from centralised distributions, the process of getting new versions of packages into the distributions in the first place could benefit from some kind of decentralised publishing system. A packager should be able to tell their computer to watch for new releases of programs they have packaged, download each new release, create a package for it, and notify them that it’s ready for testing. Likewise, adding a new package to a distribution should require no more than confirmation from a packager. If it is not this simple, then we should be finding out why and fixing it.
Better integration between decentralised systems and native package managers would also be very useful. Users should be able to mix-and-match packages as they please.
We want to combine the openness and freedom of getting software from upstream developers with the convenience of binary packages, dependency handling and automatic updates. We want to allow users, administrators, programmers, QA teams and reviewers to collaborate directly.
Users of traditional Linux distributions can only easily use software from their own distribution. This is inefficient, because popular software is packaged over and over again, and limiting to users, because less common software is often not available at all. Adoption of new software is hindered by the need to become popular first, so that distributions will carry it, so that users will try it, so that it can become popular.
If we no longer have a distribution to ensure unique naming of packages then we must use a globally unique naming scheme. Options include UUIDs, content-based digests, and URLs.
We must also ensure that packages cannot conflict with each other, especially if we permit mutually untrusting users to install and share programs. We can avoid the possibility of conflicts by keeping each package’s files in a different directory and naming these directories carefully.
Dependencies can be handled by letting go of the idea of having a single, system-wide version of a program installed. Instead, dependencies should be resolved on a per-program basis.
To allow our packaging system to check for updates and fetch dependencies for us we must provide the information about available versions in a machine readable format. Several XML file formats are available for this purpose.
For a decentralised packaging system to be used for more than the occasional add-on package, it must be able to share downloads between users to save bandwidth, disk space and memory. This can be done either by having a trusted process perform the download, or by having a shared directory which will only store directories with names that are cryptographically derived from their contents.
There are many security concerns to be addressed, for both traditional and decentralised software installation. We should avoid running scripts with more privileges than they require, and avoid running them at all if we can. We must provide a user interface that makes it difficult for users to accidentally install malicious software, and allow users to check that software is genuine in some way.
Free software requires the ability to make changes to programs. We should be able to get the source code easily, plus any compilers or build dependencies required. With conflict-free installation, we can get better binary compatibility and more reproducible builds, since we can build against selected versions of dependencies, even if these are not the versions we normally use when running.
Finally, we saw that it is often possible to convert between these different formats, with varying degrees of success. Even if most users don’t start using decentralised packaging right now, but continue with their existing centralised distributions, these techniques are useful to help the process of getting packages into the distributions in the first place.