I agree that the Color Sensitivity benchmark should be very suitable for art reproduction work. But that is a niche for most of us; you might call it "studio". In the text I questioned whether people can actually see chroma noise under these conditions. I have no proof, other than that people can hardly see the luminance noise of a medium gray patch at ISO 100, and chroma noise at the pixel level should be even harder to see. Anybody feel like generating some test images (of gray patches) in MATLAB to simulate what the latest cameras can achieve?
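In case anyone wants a head start, here is a rough sketch in Python/NumPy rather than MATLAB. It works in a simplified linear space with Gaussian noise, and the 38 dB pixel-level SNR is just a placeholder; substitute whatever DxOMark reports for the camera you want to simulate:

```python
# Sketch: mid-gray test patches with separately controlled luminance
# and chroma noise, for eyeballing which one is easier to see.
import numpy as np
from PIL import Image

def gray_patch(snr_db, chroma=False, size=512, gray=0.5, seed=0):
    """Mid-gray patch with Gaussian noise at the given SNR (in dB).

    chroma=False: pure luminance noise (identical in R, G and B).
    chroma=True:  pure chroma noise (per-channel noise with the common
                  luminance component subtracted out per pixel).
    """
    rng = np.random.default_rng(seed)
    sigma = gray / (10 ** (snr_db / 20))       # SNR = 20*log10(signal/noise)
    if chroma:
        n = rng.normal(0, sigma, (size, size, 3))
        n -= n.mean(axis=2, keepdims=True)     # remove the luminance component
    else:
        n = np.repeat(rng.normal(0, sigma, (size, size, 1)), 3, axis=2)
    img = np.clip(gray + n, 0, 1)
    return Image.fromarray((img * 255).astype(np.uint8))

gray_patch(38, chroma=False).save("luma_noise.png")   # 38 dB is a placeholder
gray_patch(38, chroma=True).save("chroma_noise.png")
```

Viewing the two PNGs at 100% should give a first impression of whether chroma noise of the same magnitude really is harder to see than luminance noise.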
The print-mode data is just a way to normalize pixel-level noise to a common resolution. I agree that if you print large enough, you can do some serious pixel peeping - although that is easier and cheaper to do by viewing at 100% on screen. I guess DxOMark considered pixel peeping more common on screens - even by non-photographers. Only some photographers (the nerdier ones?) inspect a print with their nose touching the paper.
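For what it's worth, my understanding of that normalization is plain noise averaging: combining k pixels into one improves SNR by sqrt(k). A sketch under that assumption, in Python (the ~8 MP reference approximates DxO's 8"x12" print at 300 dpi; the example figures are made up):

```python
import math

def print_snr_db(pixel_snr_db, sensor_mp, print_mp=8.0):
    # Averaging k = sensor_mp/print_mp pixels into one output pixel
    # reduces noise by sqrt(k), i.e. adds 10*log10(k) dB of SNR.
    return pixel_snr_db + 10 * math.log10(sensor_mp / print_mp)

def print_dr_ev(pixel_dr_ev, sensor_mp, print_mp=8.0):
    # The same averaging adds 0.5*log2(k) EV of dynamic range.
    return pixel_dr_ev + 0.5 * math.log2(sensor_mp / print_mp)

# Example (made-up figures): a 24 MP sensor with 38 dB pixel-level SNR
# and 11 EV pixel-level DR gains ~4.8 dB and ~0.8 EV when normalized.
print(print_snr_db(38.0, 24.0))   # -> ~42.8 dB
print(print_dr_ev(11.0, 24.0))    # -> ~11.8 EV
```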
You probably already know this, but (to be on the safe side) this particular DxOMark benchmark data does not cover the contribution of sensor resolution to print quality. So if you pixel peep, this benchmark tells you what you will see when you are admiring the creamy richness of noise-free bokeh gradients.
Peter,
There are a few issues here which could do with more clarification.
Graphs are designed to highlight differences; that is their purpose. If one camera has just a 0.1 EV difference in DR compared with another, a graph is designed to show it clearly. That's fine; I have no objection.
However, it is also important to know the practical significance of such differences on a print or on a monitor at specific degrees of enlargement.
For example, the normalised print size of 8"x12" that DxO use is rather small, no doubt for good reasons, so no interpolation is required for the smallest-resolution cameras that have been tested, such as the Canon 10D.
Now, we all know that different RAW converters produce slightly different results. One converter may apply greater default noise reduction that is beyond the user's control, while another may produce slightly sharper default results but with greater noise.
Likewise, one particular interpolation algorithm may produce more detailed images than another.
The question that I have is this: just how reliable are these 'normalised' results at 8"x12" when images are interpolated for the purpose of making much larger prints, using the same RAW converter?
I assume that differences due to the same converter handling different brands of RAW files differently will exist, but they may be negligible in practice. Would you agree?
Another issue which I think requires more clarification is the practical significance of the value differences shown on the graphs, whether they be dB, bits or EV.
I gather from DxO's articles that a difference of less than 1 bit in Color Sensitivity, and a difference of less than 0.5 EV in DR, may not be noticeable. We need to elaborate on such issues. How does print size affect such assessments of what is noticeable and/or significant?
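To make the print-size part of my question concrete, here is a back-of-the-envelope sketch assuming the same simple noise-averaging model behind the print normalization (the 11 EV pixel-level DR and 24 MP are made-up example figures, and the model ignores whatever the interpolation algorithm does):

```python
import math

def dr_at_print_size(pixel_dr_ev, sensor_mp, print_area_sq_in, dpi=300):
    # Normalized DR for an arbitrary print size under the averaging model:
    # output pixel count grows with print area, and each doubling of the
    # linear print dimensions (4x area) costs 0.5*log2(4) = 1 EV.
    print_mp = print_area_sq_in * dpi**2 / 1e6
    return pixel_dr_ev + 0.5 * math.log2(sensor_mp / print_mp)

for area, label in [(8 * 12, '8"x12"'), (16 * 24, '16"x24"')]:
    print(label, round(dr_at_print_size(11.0, 24.0, area), 2))
# -> 8"x12" ~11.74 EV, 16"x24" ~10.74 EV
```

By this model, doubling the linear print dimensions costs every camera 1 EV of normalized DR alike, so a 0.5 EV gap between two cameras stays 0.5 EV at any size; what changes with print size is how visible the noise of both becomes. Whether that matches what actually happens once a real interpolation algorithm is involved is exactly what I would like to see clarified.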