It is hard to wrap my brain around the fact that my small camera has a 7.6mm x 5.7mm CCD sensor and my larger camera has a full frame 36mm x 24mm CMOS sensor that is 20 times bigger yet not nearly 20 times better at producing quality images. I believe that the only difference in digital images is the number of pixels and the ability of the camera to deliver the most suitable color for each pixel (post-production aside). Is this true?
Ed(?) - your belief is pretty far off the mark, but you're at least asking the right questions and looking for the truth, so I won't knock that.
Your question is a perfect candidate for a reductio ad absurdum way of answering. (This is a logical technique that you can employ yourself, of course.) If size doesn't matter, then let's take that 7.6mm x 5.7mm sensor and shrink it by a factor of 1000, so now it's a 7.6 micron x 5.7 micron sensor. And let's take that 36mm x 24mm sensor and enlarge it by a factor of 1000, so now it's a 36 metre x 24 metre sensor. Both still have the same number of pixels as before. Do you still think that they will give comparable, or even same-ballpark, image quality?
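If you want to put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The 16 million pixel count and the assumption that light captured per pixel scales with pixel area are mine, for illustration only:

```python
# Back-of-the-envelope: light-gathering area available to each pixel.
# Assumes 16 million pixels in every case and that photons captured
# per pixel scale with pixel area (a deliberate simplification).

PIXELS = 16_000_000  # assumed pixel count for all four sensors

sensors_mm = {
    "compact (7.6 x 5.7 mm)":    (7.6, 5.7),
    "full frame (36 x 24 mm)":   (36.0, 24.0),
    "shrunk x1000 (microns)":    (7.6e-3, 5.7e-3),
    "enlarged x1000 (metres)":   (36e3, 24e3),
}

for name, (w, h) in sensors_mm.items():
    per_pixel_mm2 = (w * h) / PIXELS
    print(f"{name:28s} per-pixel area = {per_pixel_mm2:.2e} mm^2")
```

The per-pixel collecting area spans roughly thirteen orders of magnitude between the shrunk and enlarged versions, so the idea that they would produce equivalent images collapses immediately.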
Well, I am saying that, artistic rendering aside, image quality is a function of the selection of pixel color. The only difference between the output of a $300 sixteen-megapixel digital camera and a sixteen-megapixel flagship DSLR is the instrument's selection of color for each pixel.
That's twice you've used the expression "selection of color". "Selection" is an unfortunate term because it implies that the camera has some choice, some informed decision to make about each pixel. It doesn't. It doesn't select or choose; it simply measures. (It also doesn't work with or measure "color" - it measures greyscale intensity at each pixel. The color comes afterwards, by interpolation across adjacent pixels which captured the light through different built-in coloured filters.)
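To make that last point concrete, here's a minimal sketch of that colour reconstruction in Python (NumPy/SciPy). The RGGB mosaic layout and the plain bilinear averaging are simplifying assumptions on my part - real cameras use much more elaborate demosaicing:

```python
import numpy as np
from scipy.ndimage import convolve

def bayer_masks(h, w):
    """Boolean masks for an RGGB Bayer pattern of size h x w."""
    r = np.zeros((h, w), bool); r[0::2, 0::2] = True
    b = np.zeros((h, w), bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def demosaic_bilinear(raw):
    """Reconstruct RGB from a single-channel (greyscale) Bayer frame
    by averaging each colour's nearest same-colour neighbours."""
    h, w = raw.shape
    kernel = np.ones((3, 3))
    out = np.empty((h, w, 3))
    for c, mask in enumerate(bayer_masks(h, w)):
        vals = convolve(np.where(mask, raw, 0.0), kernel, mode="mirror")
        count = convolve(mask.astype(float), kernel, mode="mirror")
        out[..., c] = vals / count  # mean of contributing neighbours
    return out

# Demo: the sensor records only one greyscale intensity per photosite
# (the scene as seen through that site's coloured filter); full RGB
# is interpolated afterwards.
raw = np.full((6, 8), 0.5)      # flat grey scene, one value per site
rgb = demosaic_bilinear(raw)
print(rgb[2, 2])                # -> [0.5 0.5 0.5]
```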
What distinguishes your two different sixteen-megapixel sensors is how accurately they can measure the underlying rate of light flux falling on each pixel. You can check the references given above for a description of the Poisson nature of photon arrival times. To keep it simple, all I'll say is that the sensors only have a limited sample of that light flux to work with. The more light each pixel captures, the more likely its measurement is to be close to the true underlying rate of light flux. (For Poisson arrivals, a pixel that captures N photons has a signal-to-noise ratio of N/sqrt(N) = sqrt(N), so four times the light halves the relative noise.) Given equal sensor technologies, this means that a larger 16MP sensor gives better IQ than a smaller 16MP sensor, because its larger pixels capture more light in a given exposure time. This scales nicely even when you apply the reductio ad absurdum test. At the microscopic end of the absurdum, you eventually hit the limits of discrete Poisson statistics and quantum electronics.
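If you want to see that sqrt(N) behaviour directly, here's a minimal simulation sketch (Python/NumPy; the photon counts are illustrative assumptions, not real sensor figures):

```python
import numpy as np

# Shot-noise sketch: photon arrivals are Poisson, so a pixel that
# expects N photons measures N with standard deviation sqrt(N).
# Relative error falls as 1/sqrt(N): pixels that capture more
# photons per exposure measure the light flux more accurately.

rng = np.random.default_rng(42)
trials = 100_000  # simulated exposures per photon level

for mean_photons in (10, 100, 1_000, 10_000):
    counts = rng.poisson(mean_photons, trials)
    snr = counts.mean() / counts.std()
    print(f"mean={mean_photons:6d}  measured SNR={snr:7.1f}  "
          f"sqrt(N)={np.sqrt(mean_photons):7.1f}")
```

The measured SNR tracks sqrt(N) closely, which is exactly why the bigger pixels of a larger sensor buy you cleaner measurements at the same exposure.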
Keep asking yourself such questions, keep researching and learning. In this age of depressingly rampant black-boxism ("it just works, I have no idea why"), you'll still find lots of people willing to help.
Ray