Hi,
In a way, that is the reason that curves like Bill Claff's data are usable. Camera histograms, and the histograms in LR or Capture One, distort the information. Just as an example:
- Capture One's film curve shows images too bright. That tricks the photographer into exposing lower, which of course leaves more headroom for highlight recovery.
- LR often applies some highlight recovery, so LR may encourage exposing high enough to cause some clipping.
Bill's data on the other hand shows the real world data. Another great tool is RawDigger, which presents the actual data on screen. RD is a great tool to understand how the camera exposes, but also to choose the best exposure from bracketed shots.
As far as I understand, the XF has some advanced functions actually showing raw histogram and selectable warning for clipped exposure.
We have two sets of data. Engineering DR means DR measured down to SNR = 1; DxO Mark shows that when the data is viewed in 'screen mode'. The same data is available on sensor spec sheets, and where such data can be found it is generally in good agreement with DxO Mark.
The 'print mode' display they have is normalized. That is a reasonable way to compare sensors with different numbers of pixels.
Bill Claff's numbers are also normalized. Bill considers them more accurate than the DxO Mark data.
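For what it's worth, the normalization is simple arithmetic: downsampling to a fixed output size averages pixels, which lowers noise and raises per-pixel DR by half a stop per doubling of pixel count. A minimal sketch, assuming DxO's 8 MP print reference (the megapixel counts used below are the cameras' published sensor resolutions, not values from this thread):

```python
import math

def engineering_dr(full_well_e, read_noise_e):
    """Engineering DR in stops: the range between saturation (full well)
    and the signal level where SNR = 1 (signal equals read noise)."""
    return math.log2(full_well_e / read_noise_e)

def normalize_dr(pixel_dr, megapixels, ref_megapixels=8.0):
    """Normalize per-pixel DR to a fixed output size.
    Averaging N pixels cuts noise by sqrt(N), adding 0.5*log2(N) stops."""
    return pixel_dr + 0.5 * math.log2(megapixels / ref_megapixels)

# Per-pixel DR from the table above, megapixels from the published specs:
print(round(normalize_dr(11.75, 39.0), 1))   # P45+  (39.0 MP) -> 12.9, matches the table
print(round(normalize_dr(11.5, 24.6), 1))    # A900  (24.6 MP) -> 12.3, matches the table
print(round(normalize_dr(12.68, 42.4), 2))   # A7rII (42.4 MP) -> 13.88, table says 13.89
```

So the "print mode" column really is just the pixel-mode column plus a pixel-count correction.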
The good thing is that the data is available and that is a good guide to how the systems perform.
Let's show a real world example:
Camera | Year introduced | DxO Mark, pixel mode (EV) | DxO Mark, normalized (EV)
Phase One P45+ | 2007 | 11.75 | 12.9
Sony A900 | 2008 | 11.5 | 12.3
Sony A7rII | 2015 | 12.68 | 13.89
Now, what do those figures look like in a real-world test? I made one, and did it with some care.
For full size use this link:
http://echophoto.dnsalias.net/ekr/Articles/TMP/Darkside.png
It is quite obvious that the A900 and the P45+ are quite close, with some advantage to the P45+; that is a very good match with the DxO results.
The 'Bright Side' of the same high luminance ratio image is here:
http://echophoto.dnsalias.net/ekr/Articles/TMP/BrightSide.png
Full size:
http://echophoto.dnsalias.net/ekr/Articles/TMP/BrightSide.png
My take from this test is that the DxO Mark data is quite relevant, at least for the three devices tested. Doing the test was not easy: building a scene with a high luminance range took some work, and I also needed to adjust the highlight exposures accurately. I would say that my accuracy here was about +/- 0.1 EV.
Best regards
Erik
Graphs and numbers are definitely helpful. But I find them most useful for comparing two models from the same camera maker, as measured by the same tester.
That is, if Jim or DxO or whoever tests two camera generations from Brand X using the same test criteria, then it's usually the case that the RELATIVE values of those two tests will be informative.
But as soon as you start relying on an absolute value, or comparing between brands, or worse, between testers, you really can't place much confidence in the number or graph.
If someone is considering a P1 and asks me how many stops of DR it has, I'll answer: "P1 says 15 stops, but I'd rather you come and compare your current camera and this camera, and see how the highlights and shadows hold up when shooting the same tricky scene. I suspect you'll be very pleasantly surprised, but the whole point is to find out if that's the case!" That answers the question they are really asking (or at least should be asking).