DISCLAIMER: This site is a mirror of the original, which was once available at http://iki.fi/~tuomov/b/

The megafreeze development model of GNU/Linux distributions, Debian as well as many lesser ones, is broken. It doesn't matter how often you do it: freezing thousands of packages into stasis just doesn't work. Freeze too often, and even the base system is likely to be broken; freeze rarely, and the extra packages, which are often development versions, go out of date very quickly. Even Ubuntu, with its shorter and more regular megafreeze cycles than Debian's, ships far too old packages of programs going through intensive development.

Nobody in their right mind will use the ancient development versions of non-essential packages in a megafrozen “stable” distribution. Unfortunately, there are too many people who do not know enough to steer clear of them. As a consequence, megafrozen distributions cause trouble for upstream developers: you have to deal with countless lusers running outdated, broken, and unsupported versions of your software. I, for one, do not want ancient development versions of my software in megafrozen distributions.1

Furthermore, the concentration on providing a “stable” megafrozen distribution takes resources away from providing a reasonably stable distribution with recent software. It just isn't possible to use “stable” Debian if you need any recent software, and the “unstable” and “testing” distributions are exactly what their names imply: constantly and randomly broken, without warning – “testing”, in fact, more often than “unstable”. (Compiling software yourself, whether from the official tarball or a source-based 31337 ricer distribution, is out of the question when you just want or need to try the software quickly.)

Now, being able to run multiple distributions simultaneously, or efforts like Debian's backports.org (which seemed too much of a hassle to use automatically, and such use certainly isn't well documented), would provide a marginal remedy to the situation – for the user. But as an upstream author you'd still have a zillion lusers complaining about the ancient development versions of your software in the “stable” – more like static – distribution. You simply cannot synchronise the development of thousands of packages well enough to ever provide a truly stable distribution. You have to stick to smaller stable collections.
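For what it's worth, APT pinning is the usual way to make a backports repository track automatically. A minimal sketch, assuming a Debian system of that era; the repository URL, suite name, and priority here are illustrative assumptions, not taken from this article, so check the repository's own instructions:

```
# /etc/apt/sources.list — add the backports repository
# (URL and suite name are examples only)
deb http://www.backports.org/debian etch-backports main

# /etc/apt/preferences — pin the backports suite
Package: *
Pin: release a=etch-backports
Pin-Priority: 200
```

With a priority between 100 and 500, packages you have already installed from backports keep following the backports suite on upgrades, while everything else continues to come from the base distribution.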

So, what's so difficult about providing stable base systems and well-maintained additional package collections? That is what I think would best suit most use cases: a truly stable base system with all the essential system software, and another very stable collection of server software, both updated every few years – when there is a new kernel series out, say – not forgetting fixes, of course. On top of those there would be additional, well-maintained, constantly upgraded collections of packages for use on personal computers. I, at least, would prefer running a stable, even if old, base system that isn't broken by every update of less essential packages, and updating those less essential packages – many of them going through intensive development – more often, from a well-maintained repository.

So, there you have your alternative development model: now cease using the megafreeze model, and make my life easier, both as a user and as an upstream author.

1 If distributions absolutely want to provide unsupported versions, they should take full responsibility for supporting them: making sure that users direct their complaints to the distribution's maintainers, not the upstream author, and warning, with das big red blinkenletters, that what they're using is ancient and probably broken, and therefore not representative of the project's present state.