DISCLAIMER: This site is a mirror of the original, once available at http://iki.fi/~tuomov/b/
1. Linux “distributions” have a practical monopoly on easily installable and discoverable software packages for the distribution in question. This must be removed: it must be easy for authors to provide binary packages of their programs that are easy for users to both install and discover, along with their dependencies. The effort required from authors to provide these packages should be marginal, and the same package should be usable on a wide variety of hardware-compatible distributions/platforms.
ZeroInstall appears to be the best attempt at such a decentralised package system so far. In particular, it implements cryptographic signatures on packages, automatic upgrades, automatic (limited) dependency handling, non-root installation of software, and installation of applications in their own directories, hence multiple-version coexistence and conflict freeness. Furthermore, a rather nice feature for just testing software is the ability of its tools to run software directly from the package's URL (and in a sandbox). On the downside, it uses XML configuration files and exhibits a certain WIMP-centrism. The dependency handling is rather primitive, and there is no search mechanism for packages.
One obvious problem for such decentralised binary packaging systems is an unstable library ABI. To some extent this can be circumvented by supporting the installation of multiple versions of the same library: application directories are essential for a decent decentralised package system. However, source packages could also be made less of a pain to install by replacing autocrap with something more reliable and more fixable. Indeed, there is presently a lot of redundancy between distributions' package descriptions and the utterly horrid autoconf scripts: much of the same information is contained in both, but it is practically impossible to extract it automatically from the autocrap mess. Building and linking a program, and installing a binary package, have much in common: both require certain capabilities from the system. The autocrap mess tries to figure out whether the system provides these capabilities with unreliable tests, implemented as practically unmaintainable scripts replicated in every program that uses autoconf. Binary distributions use a rather rigid package (or file) dependency mechanism that expects to find certain things at certain locations, and therefore cannot deal well with multiple versions of packages, non-root installs, and so on.
2. What I propose is an abstract capability-based dependency handling mechanism, and the integration of the packaging system with a build tool. No more kludgy checks versus manual configuration of a program's build options: the system knows the capabilities it has, or can obtain. Capabilities are the fundamental units of dependency: there are no package dependencies, but each package can provide one or more capabilities, and many packages can provide the same capability.
In a decentralised environment, the claim that a package implements a certain capability carries little weight. Therefore it helps if packages can prove their claim. Each capability thus has a private-public key pair associated with it (e.g. plain old PGP), and a signature, possibly provided separately from the package, can then act as a proof. This works as long as there are just one or a few implementations of the capability, with infrequent releases, so that the key owner can sign the packages manually. However, the key owner could also release a (signed) test suite that can be used to derive automated proofs. In the case of library dependencies, mere compatibility of packages and their descriptions may be able to act as a partial proof.
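As a rough illustration of the automated-proof idea, consider the following Python sketch (all names are hypothetical, and a boolean stands in for real PGP signature verification): a test suite can prove a package's claim only if the suite itself carries a valid signature from the capability's key.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TestSuite:
    signature_valid: bool  # stand-in for verifying the suite's own signature
    tests: List[Callable[[object], bool]] = field(default_factory=list)

def proves_capability(package: object, suite: TestSuite) -> bool:
    """An automated proof: run the capability author's signed test suite."""
    if not suite.signature_valid:
        return False  # an unsigned or tampered suite proves nothing
    return all(test(package) for test in suite.tests)
```

A manually signed package would bypass this route entirely; the signed test suite only matters when releases are too frequent for the key owner to sign each one by hand.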
ZeroInstall depends on the centralised DNS for its pseudo-decentralised package naming: packages are identified by somewhat volatile URLs. We can do better: since each capability has a cryptographic keypair associated with it, we can generate a cryptographically unique identifier for the capability by, for example, signing the public key (plus some additional free-form components, if the same key is shared by different capabilities specified by the same author) with the private key. The capability name then already carries the information needed to verify any proofs found for a package's claim to implement the capability.1
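A minimal sketch of such identifier derivation, in Python: a plain hash stands in for the signing step (a real system would sign the material with the private key), and the key material here is a dummy placeholder.

```python
import hashlib

def capability_id(public_key: bytes, qualifier: str = "") -> str:
    """Derive a stable capability identifier from the author's public key,
    plus an optional free-form component to distinguish capabilities that
    share a key. The hash merely illustrates the uniqueness property."""
    material = public_key + b"\x00" + qualifier.encode("utf-8")
    return hashlib.sha256(material).hexdigest()

dummy_key = b"-----BEGIN PGP PUBLIC KEY BLOCK----- (placeholder)"
print(capability_id(dummy_key, "libfooAPI"))
```

The point is that anyone holding the public key can recompute (or, with real signatures, verify) the identifier, so the name itself authenticates proofs without consulting DNS.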
Packages themselves are to some extent “anonymous” in my scheme: they are merely (tarballed or similar) collections of files, extractable and runnable from any location within the file system (after suitable modifications or environment setup by the package tools), along with a package description listing the capabilities the package claims to provide, the capabilities it requires, and other information about it (description, version, architecture for binary packages, etc.; alternatively, architecture and other hardware requirements could be expressed as required capabilities). One of these capabilities, for which a signature proof is provided, might be designated as identifying this package and its other versions and variants, to facilitate automated updates without referring to a volatile URL. (This does not prohibit other packages with a different designation from being proved to implement this capability: the designation and proof merely mark packages that have commensurable version information.)
As explained, packages do not depend directly on other packages. Rather, they depend on capabilities that could be provided by any number of packages (or by other means). A method is then needed to discover a package that provides the capability. When the user already has a package installed that does (or claims to do) so, that would usually be the preferred alternative. The package being installed could also suggest some particular package (by means of its designation, or location) providing the wanted capability. However, to discover alternatives – for example, variants optimised for the user's system – what is really wanted are search engines specialised in finding packages for the system, based on capability identifiers. When the system cannot decide on a preferred version based on the presence of capability proofs, system characteristics, or previous configuration, the user would be asked to choose.
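The preference order described above might look roughly like this Python sketch, where package records are plain dicts and all field names are invented for illustration:

```python
def choose_provider(capability_id, installed, suggested=None):
    """Pick a package providing the capability.
    Preference: an installed package with a proof, then any installed
    claimant, then the requesting package's own suggestion; None means
    the decision is deferred to a search engine or to the user."""
    candidates = [p for p in installed if capability_id in p.get("provides", [])]
    proven = [p for p in candidates if capability_id in p.get("proofs", [])]
    if proven:
        return proven[0]
    if candidates:
        return candidates[0]
    return suggested
```

In a real resolver the ties within each class would themselves be broken by system characteristics and user configuration, as described above.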
Also, the oft-promoted ‘single-click installation’ process can be rather laborious, with its manual discovery of software from the Web. The ‘single-command fetch-and-install’ provided by many *nix distributions is rather convenient. In a decentralised package system, similar functionality again depends on search engines, this time based on a text search in package descriptions. However, in a decentralised environment these descriptions cannot be entirely trusted. This is the weak point of the decentralised system: before you have a cryptographic capability identifier, you are lost. But then again, centralised distributions also provide significantly modified and outdated software without mentioning the fact. However, if a search engine exploits the Web link graph (or a cryptographic signature graph), perhaps the results can be made good enough. You are, after all, more likely to find the author's home page for a piece of software as the top result from an internet search engine than you are to get an unmodified version from your distribution. Additionally, users could configure certain linking sites to be ranked high, such as a free software directory (or such a directory could function as a preferred search engine). Nevertheless, the technology fundamentally does not solve the problem of deceit when capability identifiers are not yet known – when trusted contacts are not yet established. It can, however, provide authors better channels for distributing their software as they intended it, and therefore lessen the incentive for creating huge monolithic megafrozen distributions. Additionally, it does away with autocrap.
3. The discussion in the previous section has been quite abstract. In particular, how packages obtain access to the capabilities they request has not been specified. This is intentional: a good package system – and good software architecture in general – provides mechanisms, not policy. The specification of capabilities is almost entirely up to the capability author: perhaps the most important limitation is set by the installability of packages in their own directories in almost any location2, to avoid conflicts (which in turn helps avoid a plethora of other policies). The package system merely provides ways for a package providing a capability to pass information specified by the capability author to the package requiring the capability, as well as to the package/build system itself.
Another practical aspect is that the cryptographically unique identifiers for capabilities can be quite unwieldy. Therefore local names are needed: packages would likely define, for each capability they require (or provide), a locally more manageable name.1
Some may ask what the differences are between capabilities and packages (in the sense the distros use the term), and what kinds of capabilities there could be. Library APIs are one case where multiple libraries and packages could implement the same API: consider OpenGL, for example, with various hardware-specific versions of the library. Also, a newer version of a library might provide both a new API version and the old one (API version being distinct from library version), so it would provide both capabilities. Capabilities of this type are obviously already used by some distributions, although based on a centralised naming scheme, with packages being able to “provide” another package.
Things get more interesting once we start to consider the integration with a build system. For example, autocrap scripts frequently perform checks such as whether “the integer data type is at least 32 bits”, wasting a lot of time compiling test programs to do so (without guarantee of success, especially in the case of more complex checks). But the package description could simply state that it requires such a known capability, and the system can check whether it is already “installed”. (Such “standard” capabilities might fall outside the crypto-naming scheme, or the build system could provide ready aliases.) Unlike autocrap, the system should allow easy user overrides of all information provided by the system (or by fall-back autodetection routines), by augmenting the description of the required capability. Also, like autocrap but with less of a mess, the system could support autodetection scripts for capabilities of which the system has no prior information.
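To make the contrast with autoconf-style probing concrete, here is a hedged Python sketch of such a lookup: user overrides win, then the system's own knowledge, then an optional fall-back detection routine. All names are invented for illustration.

```python
def has_capability(name, system_registry, user_overrides=None, detect=None):
    """Answer a capability query without compiling test programs."""
    if user_overrides and name in user_overrides:
        return user_overrides[name]    # the user always wins
    if name in system_registry:
        return system_registry[name]   # already known to the system: no test run
    if detect is not None:
        return detect(name)            # fall-back autodetection script
    return False

# The system's registry would be populated once, not re-probed per package.
system = {"int-at-least-32-bits": True}
```

The crucial difference from autocrap is that the answer is recorded once per system rather than re-derived, unreliably, inside every package's configure script.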
4. My initial designs here are far from complete, and certainly require input and scrutiny to avoid problems I might have overlooked. Developing core software is not a job for just one man with tunnel vision, as often seems to be the case in FOSS. Let us nevertheless consider a practical example: one as simple as linking a library. I use .INI-style syntax, as it is very readable and robust, and I would not use or support any XML-corrupted implementation (or one based on a scripting language, which would make alternate tools difficult – the discussed autoconf mess being a case in point).
So suppose a library that we call ‘libbar’ here provides the same API as a library that is customarily known as ‘libfoo’. Then its package description might list something like:
[provided.capability]
alias = libfooAPI
uuid = zxc09z87df890asasödklfj-w3,j.4324asdf08972342340+89sdfadsf9872etc.
library-file = lib/libbar.so
headers = include/
Here ‘libfooAPI’ gives the mentioned local name, while the uuid is the cryptographic identifier (in practice longer than this). The ‘libfoo’ and ‘libfooAPI’ capability author has specified that the file to link against to get access to the libfoo API is provided in the library-file variable, and the location of C headers in headers. The package/build system adds any prefixes as needed.
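Conveniently, descriptions of this shape can be read with stock INI parsers; for instance, Python's configparser handles the block above directly (the uuid is truncated here):

```python
import configparser

description = """
[provided.capability]
alias = libfooAPI
uuid = zxc09z87df890as(truncated)
library-file = lib/libbar.so
headers = include/
"""

parser = configparser.ConfigParser()
parser.read_string(description)
cap = parser["provided.capability"]
print(cap["alias"], cap["library-file"], cap["headers"])
```

One caveat: the repeated [..how] sub-blocks of the later example would need a slightly smarter reader, since a stock configparser rejects duplicate section names.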
Some program that requires ‘libfooAPI’ might then include the following description for a required capability, and a program to be built:
[required.capability]
alias = libfooAPI
uuid = zxc09z87df890asasödklfj-w3,j.4324asdf08972342340+89sdfadsf9872etc.
[..how]
tool = linker
action = link-against
file = $library-file
[..how]
tool = c-compiler
action = provide-headers
path = $headers
[program]
target = bin/foo
c_sources = src/foo.c
depends = libfooAPI
The contents of the how sub-blocks (required.capability.how) describe to the package/build system how it can give the capability to the requesting package. This part could also be in the ‘libbar’ package description. Having it in the package requiring the capability adds an additional test that the capability is of the requested form, while having it in the package providing the capability allows for more alternative implementations. The system should perhaps use whichever is encountered first, which also provides a means for user overrides.
The link-against and provide-headers actions and their parametrisation are specified by the corresponding linker and C compiler tools/modules of the build/package system. The set of available actions might be extensible depending on the tool in question. Certainly the set of tools would be extensible, but not within the package description files. Trying to do everything with one language and in one file is a common source of problems and complexity: scripting-language-based configuration files, LaTeX, and even modern Makefiles suffer from it. Not to even mention XML extensions.
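As a sketch of what the build system might do with such a sub-block, here is a hypothetical Python translation of one parsed ‘how’ block into concrete tool arguments; the dict fields mirror the description above, and the prefix parameter stands in for “adds any prefixes as needed”:

```python
import os

def how_to_args(how, provided, prefix):
    """Translate one parsed 'how' sub-block into tool arguments, resolving
    $-variables against the providing package's description."""
    var = (how.get("file") or how.get("path")).lstrip("$")
    location = os.path.join(prefix, provided[var])
    if (how["tool"], how["action"]) == ("linker", "link-against"):
        return [location]             # library file handed to the linker
    if (how["tool"], how["action"]) == ("c-compiler", "provide-headers"):
        return ["-I" + location]      # include path for the C compiler
    raise ValueError("unsupported tool/action pair")

# The providing package ('libbar') exports these variables, installed
# under some per-package prefix chosen at install time.
libbar = {"library-file": "lib/libbar.so", "headers": "include/"}
print(how_to_args({"tool": "linker", "action": "link-against",
                   "file": "$library-file"}, libbar, "/opt/libbar"))
```

Note how the per-package prefix is supplied by the system, not hard-coded anywhere: this is what lets the same description work for non-root installs and multiple coexisting versions.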
1 This is a bit similar to a plan for decentralised DNS. Indeed, the whole capability naming system could be part of a more general cryptographic resource location scheme, if it were able to handle multiple alternatives for a resource (such as multiple packages provably implementing a given capability).
2 Configuration files and data regularly modified by the software could and should, however, go in a different location from the package's contents. User configuration files are indeed already separate (though unfortunately not under ~/.etc/). The contents of /etc/, on the other hand, are a huge mess: an amalgamation of the program's original configuration files, the distro's modifications, and local system modifications. I'd like to see programs use a three-tier configuration file scheme: the package's original unmodified configuration files reside with the package's contents, on possibly read-only media; these settings are overridden by possible system configuration files in another location, and finally by user configuration files.
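Such a three-tier scheme falls out almost for free from layered INI reading; in Python's configparser, for example, later reads override earlier values (the section and keys here are invented):

```python
import configparser

config = configparser.ConfigParser()
# Tier 1: the package's pristine defaults, shipped (read-only) with the package.
config.read_string("[ui]\ncolour = blue\nfont = mono\n")
# Tier 2: system-wide overrides, the role /etc would play in this scheme.
config.read_string("[ui]\ncolour = green\n")
# Tier 3: the user's own overrides, winning over both lower tiers.
config.read_string("[ui]\nfont = sans\n")

print(config["ui"]["colour"], config["ui"]["font"])  # green sans
```

Because each tier only states its deltas, the package's originals stay untouched and diffable against upstream, which is exactly what /etc/ currently prevents.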