Hi, then what do you make of Ming Thein's assumption/theory that "less pixels might actually produce a perceptually sharper/ crisper image for a given reproduction size, providing that this size is reasonable for the amount of resolution you’ve got in the smaller image."
It seems to me that Mr. Thein makes some valid practical points.
http://blog.mingthein.com/2012/11/05/resolution-shot-discipline-image-quality/
BTW, I understand that a camera system's MTF will benefit from more MPs, but I think that Leica chose wisely, given the tech available to them at the moment.
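As a rough illustration of that MTF point (my own sketch, not from any article): the system MTF is roughly the product of the lens MTF and the pixel-aperture MTF, and for a 100% fill-factor pixel the latter is about sinc(f * pitch), so a finer pitch droops less at any given frequency. The frequency, lens figure and pitches below are assumed numbers:

import numpy as np

def pixel_mtf(f_cyc_per_mm, pitch_mm):
    # numpy.sinc(x) = sin(pi*x)/(pi*x): the MTF of an ideal box pixel aperture
    return np.abs(np.sinc(f_cyc_per_mm * pitch_mm))

f = 50.0                       # spatial frequency in cycles/mm (assumed)
lens_mtf = 0.60                # hypothetical lens MTF at that frequency
for mp, pitch in [(24, 0.0060), (36, 0.0049)]:  # rough full-frame pitches in mm
    system = lens_mtf * pixel_mtf(f, pitch)
    print(f"{mp} MP (pitch {pitch * 1000:.1f} um): system MTF ~ {system:.3f}")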
He lost me in the first sentence:
"And I define a good result as one which the image is critically sharp at 100% actual-pixels view". That is not a "valid practical point". That is a theoretical view removing oneself from why (most of us) are using cameras in the first place.
If my old 8MP crop DSLR was "better" according to some theory than a state-of-the-art Nikon D800 due to having (possibly) sharper pixels, would its images be better? If theory and practice contradict each other, then that makes it a bad theory (at least as applied to this particular problem).
"Since 36 doesn’t divide cleanly into 24 – you get 1.5 old pixels per new one – there’s always going to be some guesswork as to precisely how that half pixel is allocated. And depending on the algorithm, any one of the following might happen – blur edges; stairstep artefacts; haloes or abrupt transitions; odd discontinuities in diagonal lines."
I don't think that this is a good description of how image scaling works. Yes, there are trade-offs, but "guesswork" is a poor choice of words. Assuming that the camera is a "Nyquistian sampler" (the blurrier images are at the pixel level, the truer that approximation is) and that there is no camera noise (which is of course only an approximation), there really is no guesswork: the two-dimensional (three, if we include color) continuous "waveform" is uniquely known, and reproducible at any scale.
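To make that concrete, here is a minimal 1-D toy example (my own sketch, not from the article): a noise-free, band-limited signal sampled above Nyquist is uniquely determined, so resampling 36 samples down to 24 is fully deterministic. FFT resampling is exact here because the toy signal is periodic and band-limited below the new Nyquist rate; real images only approximate these assumptions:

import numpy as np
from scipy.signal import resample

N_OLD, N_NEW = 36, 24

def waveform(t):
    # 3 and 7 cycles per window: both below the new Nyquist limit of 12
    return np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

old = waveform(np.arange(N_OLD) / N_OLD)      # the fine ("36 MP") grid
new = resample(old, N_NEW)                    # FFT resampling to the coarse grid
truth = waveform(np.arange(N_NEW) / N_NEW)    # the true waveform at the new grid
print(np.max(np.abs(new - truth)))            # ~1e-15 (float rounding): no guesswork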
I would rather stress that a higher-resolution camera has more scene-related information at its disposal (provided that total noise is not increased by shrinking the sensels). Having more information cannot be a bad thing (as long as it comes at zero cost, which of course it does not), and you can always reduce the amount of information later if need be.
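A small sketch of what "reduce the amount of information later" buys you (again a toy example, with made-up numbers): averaging 2x2 blocks of a noisy high-resolution capture halves the per-pixel noise, so the extra resolution can be traded for SNR after the fact:

import numpy as np

rng = np.random.default_rng(0)
# Flat scene at level 100 with Gaussian noise of sigma 4 (assumed numbers)
hires = rng.normal(loc=100.0, scale=4.0, size=(1024, 1024))
# 2x2 binning: average each block of four neighbouring pixels
binned = hires.reshape(512, 2, 512, 2).mean(axis=(1, 3))
print(hires.std(), binned.std())  # ~4.0 vs ~2.0: noise halved, resolution halved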
There are many _practical_ reasons to avoid excessive resolution. It adds to storage and processing costs. It tends to reduce frame rate. It might force compromises in the sensor/electronics that affect other aspects of image quality negatively. I tend to believe that most of these problems become smaller as the capacity of digital circuits and storage grows over time. There is also the question of what kind of lenses/technique is needed to fill that bandwidth with real information (as opposed to blur), and what print size/viewing distance/eyesight is needed to appreciate the added information.
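Some back-of-the-envelope numbers for the storage/throughput part (uncompressed 14-bit raw, no packing overhead; the megapixel counts and frame rate are just assumed figures):

BYTES_PER_SAMPLE = 14 / 8          # uncompressed 14-bit raw samples
FPS = 5                            # hypothetical burst rate
for mp in (16, 24, 36):
    frame_mb = mp * 1e6 * BYTES_PER_SAMPLE / 1e6
    print(f"{mp} MP: ~{frame_mb:.0f} MB/frame, ~{frame_mb * FPS:.0f} MB/s at {FPS} fps")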
-h