Optical low pass filters (OLPFs), or anti-alias filters, don't come in different "strengths". The filter works in two passes: each layer splits the light in two, either horizontally or vertically, so by combining them you get both vertical and horizontal filtering. The separation between the two rays of light is governed by the thickness of that layer, so if you want to (or need to, because your photosites aren't square) you can adjust the filter accordingly. You choose the thickness of the filter in relation to the spacing of the photosites on the sensor.
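To picture the effect, here's a deliberately crude sketch (a model, not how any real OLPF is specified): if each layer displaces half the light by one photosite pitch, the combined filter behaves roughly like convolving the image with four equal-weight spots, which in pixel terms is a 2x2 averaging kernel:

    import numpy as np
    from scipy.ndimage import convolve

    # Crude model: two layers, each splitting the beam by one photosite
    # pitch (one horizontally, one vertically), so the four resulting
    # spots land on a 2x2 neighbourhood with equal energy.
    olpf_kernel = np.array([[0.25, 0.25],
                            [0.25, 0.25]])

    def apply_olpf(image):
        # Convolving with the 4-spot kernel attenuates detail near the
        # photosite pitch, i.e. the frequencies that would alias.
        return convolve(image, olpf_kernel, mode='nearest')

In this model a thicker layer means a wider split, hence a wider kernel and stronger filtering.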
Aliasing is what occurs when too high a frequency (too finely detailed information) enters a sampling system. A CMOS or CCD sensor has a regular array of photosites, and is a sampled system. If you put in too much detail, any detail beyond what the sensor can handle is "folded back" as an "alias" into recordable and visible frequencies, producing aliasing artifacts known as moire or "jaggies".
Once aliasing gets into a sampled system, it is very hard to remove. That is because the aliases occur at the same levels of detail as real detail in the image. The more detailed the information that caused the alias, the lower the frequency it folds back to, corrupting more and more real image information. That means you cannot remove aliases without also removing real image data.
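You can see the fold-back with a one-line calculation (a sketch, assuming an ideal point sampler with a hypothetical sample rate fs):

    fs = 100.0  # hypothetical sample rate; Nyquist is fs/2 = 50

    def alias_of(f):
        # Fold an input frequency back into the representable band [0, fs/2].
        return abs((f + fs / 2) % fs - fs / 2)

    for f in [30, 60, 90, 95]:
        print(f, "->", alias_of(f))
    # 30 -> 30 (legal), 60 -> 40, 90 -> 10, 95 -> 5: the higher the input
    # frequency, the lower it lands, right in among real image detail.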
The thickness of the OLPF is usually chosen to split the light so that it matches the pitch of the green photosites on the Bayer array. Green is by far and away the largest component of luma, and it has the closest spacing on the Bayer array, so by setting the thickness for that you're using the least optical low pass filtering you can get away with. If you were to set the thickness for the wider spacing of the red or blue photosites, you'd be reducing the resolution of the final image too much, but you would avoid chroma aliasing. As it stands, the best compromise (and all engineering is a series of educated compromises) is to filter the green correctly and hope that chroma moire doesn't intrude too much.
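As a back-of-envelope (assuming a standard RGGB Bayer layout and a made-up 5 micron pitch): green appears in every row and column, while red and blue repeat only every other row and column, so their Nyquist limits differ by a factor of two:

    pitch_um = 5.0                  # hypothetical photosite pitch
    green_pitch = pitch_um          # green appears in every row and column
    red_blue_pitch = 2 * pitch_um   # red/blue repeat every other row/column

    # Nyquist limit in line pairs per mm: 1 / (2 * sample pitch)
    nyquist = lambda p_um: 1000.0 / (2 * p_um)
    print("green Nyquist:   ", nyquist(green_pitch), "lp/mm")     # 100.0
    print("red/blue Nyquist:", nyquist(red_blue_pitch), "lp/mm")  # 50.0

Filtering for green protects the most bandwidth; filtering for red or blue would halve it.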
One thing that does not remove moire or jaggies, and can actually make things worse, is downsampling. Downsampling is a filter-then-decimate process: all frequencies not allowed in the small image are removed from the large image, then pixels are thrown away to create the small image. If you have aliasing, you have frequencies that will not get removed by the downsampling filter, because they have already folded back into frequencies where detail you wish to keep exists. Often poor image downsampling filters are used, and these actually create more aliasing in the small image, as they're not strong enough to filter out the too-high frequencies, which only makes matters worse.
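A minimal sketch of the two approaches (pure numpy, with a simple 1-2-1 binomial blur standing in for a proper downsampling filter):

    import numpy as np

    def downsample_naive(img):
        # No filter, just decimation: everything above the new Nyquist
        # aliases into the small image.
        return img[::2, ::2]

    def downsample_filtered(img):
        # Filter first (a crude separable binomial blur here; a real
        # pipeline would use something much steeper), then decimate.
        k = np.array([0.25, 0.5, 0.25])
        img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
        img = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)
        return img[::2, ::2]

Note that neither version can undo aliasing already baked into img: the filter runs before decimation, but the folded-back frequencies are already sitting in the band it keeps.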
Now, most of what I've written also applies to three-chip systems (like in video cameras) or single-chip depth-based systems like Foveon. With Bayer pattern sensors there's an extra complexity caused by the Bayer pattern itself and how it works: when you get aliasing artifacts, you don't just get luma aliasing, you get chroma aliasing too, and that appears as funky coloured edges on sharp objects. This is quite objectionable - even more so than the pure luma aliasing you get with the other approaches. Bayer demosaic algorithms that reconstruct the RGB rely on analysis of the surrounding image content to determine the best possible guess of what the colour should be. If there are aliases in there, the algorithm cannot tell which is real detail and which is aliased, causing results that are not as good as they could be.
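To make the demosaic point concrete, here's the simplest possible green reconstruction at a red or blue site (bilinear, a sketch only; real algorithms are edge-adaptive, and that edge analysis is exactly what aliases confuse):

    def interpolate_green(raw, y, x):
        # raw: single-channel Bayer mosaic; (y, x) is a red or blue site.
        # Bilinear guess: average the four green neighbours. Near an
        # aliased edge those neighbours disagree in a way that looks like
        # real structure, so the guess lands on the wrong colour.
        return 0.25 * (raw[y-1, x] + raw[y+1, x] + raw[y, x-1] + raw[y, x+1])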
Aliasing doesn't just add artifacts to an image that should not be there. There are other implications that perhaps are not as critical for stills cameras, but they do affect my area of speciality, which is moving images. Compression systems work on frequencies, and aliases add extra high frequencies that don't correlate to image content, making the image harder to compress. Also, as I deal with very high definition moving images, anything you might want to do to a still, like "paint" out a problem, is not applicable or appropriate.
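A sketch of the compression cost, using the DCT (the transform at the heart of JPEG/MPEG-style codecs): compare a smooth ramp against the same ramp with a folded-back frequency riding on it:

    import numpy as np
    from scipy.fft import dct

    n = 64
    clean = np.linspace(0, 1, n)                                # smooth gradient
    aliased = clean + 0.1 * np.cos(np.pi * 0.9 * np.arange(n))  # plus an alias

    for name, sig in [("clean", clean), ("aliased", aliased)]:
        c = dct(sig, norm='ortho')
        hf = np.sum(c[n//2:]**2) / np.sum(c**2)  # energy fraction in top half
        print(name, "high-frequency energy fraction:", round(hf, 4))
    # The aliased signal pushes energy into high coefficients, which a
    # codec must either spend bits on or visibly throw away.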
The major problem with OLPFs is not that they're in a camera - to me, they're a necessary part of the design to make sampling theory work without producing artifacts - but that, because of their physical nature, they're not a very "steep" filter, so it's hard to remove unwanted frequencies without affecting wanted frequencies. Of course, the lens's MTF acts as a low pass filter, so we don't have vast amounts of high frequency detail entering the system, but we do have enough to be an issue, especially at the photosite sizes we're using to get low noise. A steeper OLPF would obviously solve this, but they don't exist. Similarly, an OLPF that would filter the red and blue more strongly than the green would help, but they don't exist either.
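To see how gentle the filter actually is, take the standard two-spot beam-splitter model, whose frequency response is |cos(pi*f*d)| for a spot separation d (a sketch; real multi-layer filters differ in detail):

    import numpy as np

    d = 1.0  # spot separation, in units of the pixel pitch
    for f in [0.1, 0.25, 0.4, 0.5]:  # cycles/pixel; 0.5 is Nyquist
        print(f, "cyc/px -> MTF", round(abs(np.cos(np.pi * f * d)), 2))
    # 0.1 -> 0.95, 0.25 -> 0.71, 0.4 -> 0.31, 0.5 -> 0.0

There's a null exactly at Nyquist, but wanted detail at 0.25-0.4 cycles/pixel has already lost 30-70% of its contrast: nothing like the "brick wall" you can build digitally.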
The strange thing, in my mind, is that the more resolution you have in terms of pixel count, the more you should be using an OLPF, as you're more likely to be scaling that image down. The best way around the issues of an OLPF is to oversample: capture more pixel resolution than you need, and use a proper downsampling filter (as sketched below) to reduce the resolution, increasing pixel-sharpness in the process. I personally don't think that aliasing helps algorithms to intelligently upscale images. Aliased pixels on edges tend to be less correlated with their surroundings, so there's no local information from which to properly infer edge direction, and you can get into the situation where there is no "good" direction through a pixel, as it becomes like a saddle point.
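On the downsampling-filter point: the appeal of oversampling is that a digital filter can be nearly as steep as you like. Here's a sketch of a small windowed-sinc kernel for a 2:1 reduction (Lanczos-style, with made-up parameters):

    import numpy as np

    def lanczos_kernel(scale=2, a=3):
        # Windowed sinc with its cutoff at the new Nyquist (1/(2*scale)
        # cycles/pixel). Larger 'a' gives a steeper filter; an optical
        # filter has no such knob to turn.
        x = np.arange(-a * scale, a * scale + 1) / scale
        k = np.sinc(x) * np.sinc(x / a)
        return k / k.sum()

    k = lanczos_kernel()
    # Filter rows and columns with k, then keep every 2nd pixel.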
To remove the necessity of an OLPF, you either need to shrink the photosite size (probably leading to noisier images), or go to larger sensors at the same time (and increase your lens cost and size), or go to poorer-MTF lenses. Or put up with aliasing artifacts. I think I'm sensitive to them - I don't like them one little bit, nor what they do to algorithms for working with images.
Graeme