No doubt many readers know this already but I was truly SHOCKED to discover that each supposed pixel in my camera contains only the data for one colour!
That (the approximate fact, not the discovery) is not exactly true, and half (or quarter) truths are dangerous.
Each sensel holds the result of sampling roughly one third of the relevant visual spectrum, and (usually) only after an Optical Low Pass Filter (OLPF, AKA AA-filter) has done its work. The chromatic information is not too much of a chore to reconstruct; reconstructing the luminance information requires a bit more cleverness.
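To make the layout concrete, here is a minimal NumPy sketch of what a Bayer mosaic looks like. The 4x4 size and the RGGB ordering are illustrative assumptions only (different cameras use different variants); the point is that each position records a single filtered intensity, not an RGB triple.

```python
import numpy as np

# Assumed RGGB tile; real sensors may use GRBG, BGGR, etc.
pattern = np.array([["R", "G"],
                    ["G", "B"]])

# A hypothetical 4x4 sensor: tile the 2x2 pattern across the frame.
h, w = 4, 4
mosaic = np.tile(pattern, (h // 2, w // 2))
print(mosaic)
# Each of these positions holds ONE intensity value, measured through
# the corresponding colour filter -- the other two channels are absent.
```

Note that green occupies half the sites: the Bayer layout deliberately oversamples green, since it dominates our perception of luminance.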
This means that the true resolution of my camera is only a QUARTER of its advertised value! My 10MP SLR is really only 2.5MP!
Wrong. I suggest trying to measure the actual resolution after good CFA demosaicing. Sure, the R/G/B resolutions individually will not achieve their theoretical maximum for the pixel density, but the combined result will meet (or even exceed!) what the sampling density suggests.
I have been aware of interpolation and the demosaicing step for years, but I had always assumed that this was to make small realignments to centre the 4 elements of the Bayer mosaic within the pixel.
That would be a very poor way to demosaic images. The Bayer CFA is much more clever in design and implementation.
It never occurred to me that it was being used to invent colour data for three quarters of the image!!!
Wrong again. It doesn't "invent" colour data; it estimates the missing data from clues in its immediate surroundings (and it does so in a very convincing manner, especially for chrominance).
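As a rough illustration of that estimation, here is a toy bilinear demosaic in NumPy, assuming an RGGB layout. This is only a sketch of the simplest possible approach: each missing channel value is a weighted average of the nearest sensels that actually sampled that channel. Real converters (dcraw included) use far more sophisticated, edge-aware algorithms.

```python
import numpy as np

def convolve2d(img, kernel):
    """Tiny 'same'-size 2-D convolution with edge replication."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def bilinear_demosaic(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (illustration only)."""
    h, w = raw.shape
    # Masks marking where each channel was actually sampled.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Weights for averaging horizontal/vertical and diagonal neighbours.
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])

    def fill(mask):
        num = convolve2d(np.where(mask, raw, 0.0), kernel)
        den = convolve2d(mask.astype(float), kernel)
        est = num / den
        return np.where(mask, raw, est)  # keep measured sensels untouched

    return np.dstack([fill(r_mask), fill(g_mask), fill(b_mask)])
```

On a flat grey input every reconstructed channel comes out flat as well; it is at edges that plain bilinear interpolation starts to show colour artefacts, which is exactly why practical demosaicers steer the interpolation along detected edges.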
It was only when I was stepping through the main data structure at runtime in Dave Coffin's DCRAW code and wondering why only one quarter of it was filled in that the penny finally dropped. Shocked I tell you!
Too bad it required studying Dave's code. It's mostly common knowledge, at least in these quarters ...
Cheers,
Bart