Probably not. But the real issue with your logic is that binning or downsampling isn't really that effective a means to increase DR. It's a gimmicky trade-off that is only acceptable in a small subset of circumstances.
I don't know where you get the idea that I am advocating binning or downsampling. I think images should be left in their original RAW resolution until just before they get forced into a display.
My main point throughout all of this has been that binning and downsampling are BS. They throw away resolution, and gain nothing of value. Ditto for bigger pixels, in the same size sensor.
Perhaps you didn't see the numbers in the crop of Ray's under-exposure and the binned versions of it, which I linked to in that other thread, but you should at least have seen that all the versions looked to have about the same amount of noise, even though the standard deviations varied wildly. My point was that image noise does not equal pixel noise, and image DR does not equal (nor is it solely limited by) pixel DR.
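If it helps, here is a minimal sketch of the same idea with simulated data rather than Ray's file (the signal level and dimensions are arbitrary): bin a shot-noise-limited patch 2x2, and the per-pixel standard deviation is cut in half, yet the noise measured over any fixed area of the frame is unchanged.

import numpy as np

# Illustrative only -- a synthetic flat patch with Poisson shot noise,
# not Ray's actual raw data.
rng = np.random.default_rng(0)
signal = 100.0                                   # mean photons per original pixel (arbitrary)
img = rng.poisson(signal, size=(1024, 1024)).astype(float)

# 2x2 average-binned version: one quarter the pixel count
binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print("per-pixel std, original:", img.std())     # ~10 (sqrt of the signal)
print("per-pixel std, binned:  ", binned.std())  # ~5, so the binned pixels measure "cleaner"

# Noise over a fixed area of the frame: 32x32 original pixels,
# which is the same area as 16x16 binned pixels.
area_orig = img.reshape(32, 32, 32, 32).mean(axis=(1, 3))
area_binned = binned.reshape(32, 16, 32, 16).mean(axis=(1, 3))
print("same-area std, original:", area_orig.std())
print("same-area std, binned:  ", area_binned.std())  # identical -- binning preserves area averages

The per-pixel standard deviations differ by a factor of two, but judged over equal areas the two versions carry exactly the same noise, because binning changes nothing at scales coarser than the bin.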
A Phase One P45+ has less than 4x the pixel count of the G9, 1Ds, and 5D, and thus has less than a stop's worth of DR advantage over any of those cameras on the basis of additional pixels. Where the real difference lies (several stops' worth, if the MFDB shooters are credible, and I'm not going to presume to contradict them without tangible evidence) is in the quality of the MFDB pixels vs the pixels of the smaller-format cameras.
That's what I've been saying all along, but several stops? No way. 4x as many pixels means about 1 stop more DR, AOTBE (same pixel quality).
The you-can-get-extra-DR-from-extra-pixels argument is true in theory, but in practice, it's bulls**t. In most instances, trading away 75% of your pixels for a measly 1-stop DR increase is a waste of resolution that doesn't solve the DR limitation of your camera anyway.
Again, I'm not advocating binning or downsampling, and haven't for a couple of years or so, since I realized that they were false economy (unless the optics are so poor that downsampling incurs no significant loss of detail, in which case it saves you storage without much compromise).
However, if you're going to compare two cameras, one with 4x the pixel density of the other in the same size sensor, then a 2x2 binning or downsampling of the 4xMP camera will, in all likelihood, have the same shot noise as the other, but as little as 50% of the read noise. Bigger pixel counts are just dirtier to read, relative to the number of captured photons, period. The range of read noises, at least at low ISOs, is not that great amongst cameras, and there is no strong correlation with pixel size. Some of the highest read noises in electrons are in DSLRs, like the D2X, not in compact P&S sensors.
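For anyone who wants the read-noise arithmetic spelled out, here is a small sketch; the electron figures are made-up illustrative values, not measurements of any particular camera. Independent read noises add in quadrature, so summing four small pixels doubles the per-pixel read noise, and whether that beats one large pixel of the same total area depends entirely on the per-pixel numbers.

import math

def binned_read_noise(per_pixel_read_noise_e, pixels_binned=4):
    # Independent noise sources add in quadrature.
    return per_pixel_read_noise_e * math.sqrt(pixels_binned)

# Assumed, illustrative values (electrons):
r_small = 3.0    # one small pixel
r_big = 12.0     # one big pixel covering 4x the area

r_binned = binned_read_noise(r_small)            # 3.0 * 2 = 6.0 e-
print(f"2x2-binned read noise: {r_binned:.1f} e-")
print(f"single big pixel:      {r_big:.1f} e-")
print(f"ratio: {r_binned / r_big:.0%}")          # 50% in this favorable case

The shot noise is identical either way, since the same photons fall on the same sensor area; only the read noise differs, and it differs in whichever direction the per-pixel read noises dictate.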
And it relegates the file to web-JPEG and small-print usage, which may well defeat the purpose of buying a camera with more pixels in the first place. Buying a camera with better pixels is going to make a bigger difference in image quality than just buying a camera with more pixels.
Not at all. I keep hearing you and others say this, but no one has ever done a thing to prove it, except to prove something else instead (like Roger Clark's S60 vs 1Dmk2 comparison, which proves that bigger sensors can capture more light, and tells us absolutely nothing about pixel density).
Yet, when I cut to the heart of the matter with my binnings of Ray's under-exposure, or my equal-sized crop comparisons of DSLR vs FZ50, everyone just shrugs their shoulders, doesn't bother to think about what they mean, and then a few days later repeats the myths which my demos should have destroyed.
Comparing cameras on a pixel-quality basis makes the most sense,
It makes sense, but it is meaningless unless you also include the quantity as part of the specs, and unless the reader of the specs understands that pixel quality in and of itself is worthless. You need a significant number of quality pixels for them to mean anything at all, and more lower-quality pixels can result in a higher-quality image.
I've never implied that pixel quality should not be measured; what I have stressed is that it does not describe or limit the image quality. Therefore, I would not recommend just testing the pixels as you suggest; such a test should also be accompanied by image-level testing.
because when you add additional pixels, you expect to get more resolution and image quality, and in real life, the effects of lens quality, AA filter strength, and sensor quality on individual pixel quality are far more significant than the effect of throwing another megapixel or two at the image to brute-force another 1/4-stop or so of DR.
Lens quality puts a limit on image MTF, but you can oversample the optics by a good margin before further sampling yields no remaining benefit. Oversampled images are the easiest to correct for CA, perspective and geometric distortion, rotation, etc., and they show finer CFA artifacts and suffer less from the negative aspects of AA filtering.
What you call "brute force" is really just better design. Your negative connotations are an illusion. More pixels on the same sensor size do not come at the expense of any of the qualities you mention, only at the expense of pixel quality, which does not have to come at the expense of image quality.
Once you meaningfully and consistently evaluate per-pixel quality, comparing total image quality is a simple matter of multiplying pixel quality by pixel quantity.
No, it is a matter of multiplying pixel quality by the square root of pixel quantity. IOW, if you double the number of pixels, you can take a half-stop increase in pixel noise and still maintain the same image noise and the same DR; or, with the same pixel quality, you get about 1/2 stop more DR or less noise.
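As a back-of-the-envelope relation (assuming the pixel noise is independent from pixel to pixel, so it averages down as the square root of the number of samples):

\[
\mathrm{SNR}_{\text{image}} \;\propto\; \sqrt{N}\cdot\mathrm{SNR}_{\text{pixel}}
\qquad\Longrightarrow\qquad
\Delta\mathrm{DR} \;=\; \tfrac{1}{2}\log_2\frac{N_2}{N_1}\ \text{stops}
\]

So doubling the pixel count buys half a stop at equal pixel quality, and quadrupling it buys one stop, which is the same arithmetic as the P45+ comparison above.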
You seem able to comprehend getting better image quality by adding pixels, yet you think that with lower pixel quality you can never add enough pixels to equal or exceed the image quality of a camera with even slightly better pixels? That is mathematically and physically preposterous. You seem to be stuck on the idea that a pixel's quality puts an absolute limit on what it can do for an image. That's just a pure falsehood.

We look at things all day long with individual photons striking surfaces in random locations, and we don't consider that to be noise, nor do we consider it to limit DR. It *is* what is really there. Our retinas bin photons out of convenience, but we don't really need them to be "pre-binned" with a loss of information (which is what is really happening with big pixels with low pixel noise and high pixel DR); our cameras do that out of necessity, because of storage/transfer limitations and physical obstacles, and out of a lack of vision on the part of designers as well, but we won't know how much is necessity and how much is lack of vision until the physical obstacles are no longer such an issue.