DISCLAIMER: This site is a mirror of the original, which was once available at http://iki.fi/~tuomov/b/
A better name for the operation called font anti-aliasing is font blurring. In fact, the whole term "anti-aliasing" is misleading from the signal processing point of view as well. There, anti-aliasing refers to removing from a signal the high frequencies that a down-sampled version of it cannot represent. For the term to apply, we would have to model the screen as a device that samples continuous two-dimensional signals up to a certain frequency threshold. But we do not usually model the screen as such a device. If we did, we would have a hard time with concepts such as "one pixel wide vertical line": we would instead have to consider suitably chosen continuous signals that somehow end up looking like one-pixel-wide lines after an (imperfect) anti-aliasing filter, which removes frequencies above the Nyquist frequency, and sampling.

Unlike in audio, where we are interested in the frequencies, in user interface graphics and technical drawings the values of individual pixels are often more important. Captures of photographs, on the other hand, are an example where the notion of a sampled two-dimensional signal does apply. This might lead one to suggest that fonts are photographs too. The problem with that notion is that fonts are usually mixed with interface graphics and other technical drawings that are not of that nature. Mixing blurred samples with crisp graphics makes them look out of place, and smooth, cushy photographic interface graphics do not look good either.
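To illustrate the signal-processing sense of anti-aliasing referred to above, here is a minimal sketch (my own example, not from the post): a sine wave whose frequency lies above the new Nyquist limit is down-sampled without any low-pass filtering, and the result is indistinguishable from a lower-frequency sine — the frequency has "aliased".

```python
import math

# Sample a 200 Hz sine at 1000 Hz, then naively keep every 4th sample.
# The new sampling rate is 250 Hz, so the new Nyquist limit is 125 Hz;
# 200 Hz cannot be represented and folds down to |200 - 250| = 50 Hz.
fs = 1000            # original sampling rate (Hz)
factor = 4           # down-sampling factor
f = 200              # above the new Nyquist frequency of 125 Hz

n = 1000
x = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
y = [x[i] for i in range(0, n, factor)]   # no anti-aliasing filter applied

# The decimated signal coincides (up to sign) with a 50 Hz sine
# sampled at the same instants: this is the aliased frequency.
alias = [math.sin(2 * math.pi * 50 * (i * factor) / fs) for i in range(len(y))]
max_err = max(abs(a + b) for a, b in zip(y, alias))
print(max_err)   # essentially zero: the 200 Hz tone now "is" a 50 Hz tone
```

An anti-aliasing filter would remove the 200 Hz component before decimation instead of letting it masquerade as 50 Hz; that is the operation the term properly names.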
Subpixel rendering on TFTs, where the red, green and blue components of each pixel are treated as smaller pixels, is not any better than ordinary blurring, although it is frequently claimed to be. In fact, it is worse. Instead of the fonts simply being out of focus, they are now surrounded by a clearly visible mushroom haze, no matter what subpixel configuration and orientation you use. No wonder the GNOME people are so keen on it.
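The mechanics behind the colour haze can be sketched in a few lines. In this illustrative toy example (mine, and deliberately naive — real implementations additionally filter the channels to damp the fringes), a glyph scanline rasterised at three times the horizontal resolution is folded into the red, green and blue channels of an RGB-striped panel; the coloured fringes appear directly in the output values.

```python
# One scanline of a glyph at 3x horizontal resolution (1 = ink, 0 = background).
coverage = [0, 0, 1, 1, 1, 0, 0, 0, 0]

# Fold each group of three subpixel samples into one pixel's R, G, B channels
# (0 = dark/ink, 255 = white background). No fringe-reduction filter is applied.
pixels = []
for i in range(0, len(coverage), 3):
    r, g, b = coverage[i:i + 3]
    pixels.append((255 * (1 - r), 255 * (1 - g), 255 * (1 - b)))

print(pixels)
# [(255, 255, 0), (0, 0, 255), (255, 255, 255)]
# -> a yellow fringe, a blue-tinged stem, then plain background
```

The one-subpixel-wide "edges" of the stroke light up as saturated yellow and blue rather than grey; the filtering used in practice trades those fringes for yet more blur.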
Serif fonts are more readable than sans-serif fonts on paper. On screen, the situation is reversed: simple sans-serif fonts are more readable at the poor resolutions of current display devices, which cannot subtly render the complex shapes that make serif fonts more readable on paper. It is frequently claimed that anti-aliasing reverses the situation once again. That is not the case. Blurring merely makes the overall shape of a poorly rasterising font designed for printers more distinguishable, but at the same time it renders those subtle hooks and serifs far from subtly, and it decreases readability by making the font harder to focus on, effectively decreasing contrast. Due to the complexity of the shapes, the blurring effect is even greater than with simpler sans-serif fonts.
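The contrast argument can be made concrete with a small worked example of my own: blurring a one-pixel-wide black line on a white background with a simple [1/4, 1/2, 1/4] kernel leaves no black pixel at all, so the contrast between the stroke and its background drops.

```python
# A crisp one-pixel black line (0) on a white (255) background.
row = [255, 255, 255, 0, 255, 255, 255]
kernel = [0.25, 0.5, 0.25]

blurred = []
for i in range(len(row)):
    acc = 0.0
    for k, w in enumerate(kernel):
        j = min(max(i + k - 1, 0), len(row) - 1)   # clamp at the edges
        acc += w * row[j]
    blurred.append(round(acc))

print(row)      # [255, 255, 255, 0, 255, 255, 255]
print(blurred)  # [255, 255, 191, 128, 191, 255, 255]
```

The darkest pixel goes from 0 to 128: the stroke is now a three-pixel-wide grey smear, which is exactly the loss of contrast and focus described above.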
What is more, the font libraries that support blurring on *nix (and their authors?) tend to be hostile towards users who do not want blurring. All kinds of obstacles are in place to prevent, or at least make very difficult, the use of clear, crisp fonts. These include forbidding bitmap fonts, both in general and some in particular, and failing to rasterise TrueType fonts well without blurring, thus promoting the misconception that unblurred fonts are universally ugly (and not merely ugly in the subjective view of some AA zealots). The case is exactly the opposite: blurred fonts are always ugly at current screen resolutions, while well-rasterised unblurred fonts can be beautiful. For the bad rasterisation, the tyranny of the patent system is partly to blame.
These considerations have led me to decide that I will not support blurring of fonts in those of my programs that can control the fonts they use (primarily Ion) until the following have been taken care of: