Bloatware. I don't like it. My computer is not a Turing machine with an infinite tape. And even if it were, it takes time to move along a tape. Yet mountains of rubbish software seem to be written on that assumption. Not only clear and simple bloatware, but also software for which the label is debatable, like the Linux kernel itself. The mass-murderer kernel. The kernel that goes on a random killing spree when it notices that it has run out of its finite tape. The tape that it had previously promised to be infinite.

Fortunately, the kernel can be taught better manners with the wonderful vm.overcommit_memory=2 setting. No more random killings of innocent small programs. But now the bloatware starts dying, or refusing to start. Now they discover that their assumption of an infinite tape didn't hold. They were never written to handle failed memory allocations gracefully. Some expect to be able to pre-allocate an insane amount of memory: they will never need that much, but still fail when the kernel can no longer make that promise.
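
To see the difference, consider a small sketch of mine (not from the original post, nor from the kernel documentation) that allocates memory chunk by chunk and touches every page. With strict accounting enabled, e.g. via sysctl vm.overcommit_memory=2 or by writing 2 to /proc/sys/vm/overcommit_memory, malloc itself eventually returns NULL once the kernel's commit limit is reached, and the program gets to choose its own fate. Under the default heuristic overcommit, the allocations may all "succeed", and the killing spree may begin later, when the pages are actually touched:

    /* alloc_until_fail.c: allocate and touch memory until malloc fails.
     * Under vm.overcommit_memory=2 the failure is reported by malloc
     * returning NULL; under heuristic overcommit the process may instead
     * be shot while touching pages it was promised.
     * (The memory is deliberately never freed: this is a demo.) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK ((size_t)16 * 1024 * 1024)   /* 16 MiB per step */

    int main(void)
    {
        size_t total = 0;
        for (;;) {
            char *p = malloc(CHUNK);
            if (p == NULL) {
                /* The civilised outcome: a reported failure,
                 * not a bullet from the kernel. */
                fprintf(stderr, "malloc failed after %zu MiB\n",
                        total / (1024 * 1024));
                return 0;
            }
            memset(p, 1, CHUNK);   /* force the pages to be backed */
            total += CHUNK;
        }
    }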

You suggest, then, that programs should know how much memory they need? That they should pre-allocate a fixed, ungrowable heap at the beginning of their execution, Java-style? No, absolutely not. That is the worst behaviour I can think of. It is usually impossible to produce near-exact estimates of memory usage in advance. The approach therefore depends on extreme over-estimation of memory demands, and on a mass-murdering kernel willing to promise that much.

No, these are not the ways to write good software. The Turing machine is a good theoretical model to work from. But it is not good enough as a practical model. Computational complexities are not enough: the constant factors are also important in practice. And the infinite tape, it is pure fantasy. Therefore, work initially on the assumption of an unlimited tape. Do not, however, fail catastrophically when that assumption turns out to be false. Fail gracefully. The kernel should not kill programs: a program that tries to allocate more memory should itself handle the failure. An interactive program should not crash. It should return to a point where it can continue execution. A long-running non-interactive task should try to save its state, if at all feasible. Also try to use as little memory as possible. There isn't an unlimited amount of it. You get the idea. The finite state machine is an inadequate model of my computer. But my computer is not a Turing machine either.
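
As a concrete illustration of the interactive case, here is a minimal sketch; the prompt and its handler are hypothetical, invented for this example. On a failed allocation the program reports the error and returns to its prompt, instead of crashing or waiting for the kernel to shoot it:

    /* A toy interactive loop that survives allocation failure.
     * handle_command() is a hypothetical stand-in for real work. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static int handle_command(size_t nbytes)
    {
        char *buf = malloc(nbytes);
        if (buf == NULL)
            return -1;              /* report failure upwards, do not abort */
        memset(buf, 0, nbytes);     /* ... stand-in for actual work ... */
        free(buf);
        return 0;
    }

    int main(void)
    {
        char line[64];

        for (;;) {
            fputs("bytes> ", stdout);
            fflush(stdout);
            if (fgets(line, sizeof line, stdin) == NULL)
                break;              /* EOF: leave the loop normally */

            size_t n = (size_t)strtoull(line, NULL, 10);
            if (handle_command(n) < 0)
                /* Fail gracefully: complain, then return to the prompt. */
                fprintf(stderr, "out of memory; command not executed\n");
        }
        return 0;
    }

In a real editor or shell the failure would unwind further, to the main event loop or to a save-and-exit path. The point is only that the decision belongs to the program, not to the kernel.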