There's a bit of a leap there from sensels to pixels.
Hi Luke,
I said sensel positions. It's the sampling positions that determine the sampling density in 2D space (the focal plane). Foveon complicated matters by stacking three 'photosites' at one sampling position, but the positions are what count for resolution. Measuring a signal 3x at the same position (the Foveon claim to fame) delivers the same spatial resolution, although probably with (slightly) different signal levels (at least due to photon shot noise), and potentially higher color resolution (depending on sampling pitch and sampling aperture/microlens size and shape).
When we're talking either about the Bayer sensor or the Foveon, sensels do not equal pixels.
But they do, albeit pixels of different quality (broadband/narrowband color). The ISO standards also make that distinction between the sampling/input position (sensor element) and the output (monochrome, or RGB, or HSL, or ...) pixel. Sensor element -> Picture Element (sensel -> pixel). It's really simple and straightforward.
It was complicated by some convoluted Foveon speak, because they needed a marketing tool to close the gap at that time between the Bayer CFA sensel count and the Foveon sensel count. Instead of explaining what the real differences are between single-color sampling and multiple-color sampling (at the same sampling position), and the differences between OLPF-filtered and unfiltered aliasing (but aliasing in both(!) cases), they created more confusion than is helpful. And the confusion continues, as this thread and many others demonstrate.
Somewhere along the way, it pays to remember that this is not a real Nyquist domain. This is not a discrete time-sampled domain. There is no /true/ wave reconstruction going on in the same way that audio samples can be used to /uniquely/ determine the original waveform.
It is exactly the same; only the time domain is exchanged for a spatial (position) domain. Nyquist is as relevant in the discrete spatial domain as it is in the discrete time domain. It's the basis of digital signal processing (read all about it). Analog signals are converted into digital signals, and digital signals are discrete signals both in amplitude and in sequencing (be it in time/frequency/space or other domains).
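To make that concrete, here is a minimal sketch (hypothetical 5 µm sensel pitch, not any specific camera) showing that spatial sampling obeys the same Nyquist criterion as time sampling: a pattern above the Nyquist frequency produces the exact same samples as its lower-frequency alias.

```python
import numpy as np

# Spatial sampling: cycles/mm instead of Hz, but the Nyquist
# criterion is identical to the discrete time domain.
pitch_mm = 0.005                 # hypothetical 5 µm sensel pitch
nyquist = 1.0 / (2 * pitch_mm)   # 100 cycles/mm

x = np.arange(0.0, 1.0, pitch_mm)  # sensel positions along the sensor

def sampled(freq_cycles_per_mm):
    """Sample a sinusoidal test pattern at the sensel positions."""
    return np.cos(2 * np.pi * freq_cycles_per_mm * x)

# A pattern at 180 cycles/mm (above Nyquist) yields sample values
# indistinguishable from a 20 cycles/mm pattern -- that is aliasing.
above = sampled(180.0)
alias = sampled(2 * nyquist - 180.0)   # 20 cycles/mm
print(np.allclose(above, alias))       # True
```

An OLPF attenuates the above-Nyquist content before sampling precisely because, once sampled, the two patterns can no longer be told apart.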
It might help us to understand just how the first generation of Merrill sensors, at somewhere around 15M sensels, somehow manages to yield so much more /perceivable/ detail than the number of sensels might lead us to naively assume?
Not really necessary, because it is (and always has been) merely caused by the absence of an optical low pass filter (OLPF), and three color samples at the same position.
There is no magic; it is straightforward DSP. It's exactly why some Bayer sensor designs use 4x multi-sampling, and why they eliminate the OLPF. Nothing new under the sun, really, just another set of drawbacks (constant lighting required between the piezo-element-shifted sub-exposures, and more image magnification needed to reduce the aliasing tendency).
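The 4x multi-sampling idea can be sketched as follows. This is a toy model (hypothetical 4x4 scene, RGGB pattern, no demosaicing or noise), not any vendor's actual pixel-shift pipeline: four sub-exposures, each shifted by one sensel pitch, place every CFA color at every sampling position, so each position ends up with a directly measured R, G and B, just as a Foveon stack measures three colors at one position.

```python
import numpy as np

# Hypothetical 4x4 scene with separate R, G, B planes (toy data).
rng = np.random.default_rng(0)
scene = rng.random((3, 4, 4))          # [R, G, B]

# RGGB CFA: which color plane each sensel samples (0=R, 1=G, 2=B).
cfa_full = np.tile(np.array([[0, 1],
                             [1, 2]]), (2, 2))

def exposure(dy, dx):
    """One sub-exposure with the CFA shifted by (dy, dx) sensels."""
    shifted_cfa = np.roll(np.roll(cfa_full, dy, axis=0), dx, axis=1)
    return np.choose(shifted_cfa, scene), shifted_cfa

# Accumulate the four shifted sub-exposures per color and position.
rgb = np.zeros((3, 4, 4))
count = np.zeros((3, 4, 4))
for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    samples, shifted_cfa = exposure(dy, dx)
    for c in range(3):
        mask = shifted_cfa == c
        rgb[c][mask] += samples[mask]
        count[c][mask] += 1

rgb /= count                           # green was sampled twice, R/B once
print(np.allclose(rgb, scene))         # True: full RGB at every position
```

In this idealized model the reconstruction is exact; in reality the constant-lighting requirement between sub-exposures is exactly the drawback mentioned above.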
It's IMHO also time to drop the mumbo jumbo (it only leads to impossible-to-explain assumptions and no real solutions/explanations), and to just appreciate the benefits of the various imperfect but very usable methods of image capture. If only battery technology progressed faster, because that would allow more elaborate in-camera processing with decent battery life.
Cheers,
Bart