Dynamic range is after all a fairly specific quantitative measure --- we are not talking about more visceral, experiential attributes like "pop" or "3D effect" or the way that different lenses "draw", or how a camera "falls in the hand".
...BJL?? I don't expect to read something like this from you.
Of course dynamic range is visceral.
Most quantitative measures of DR only loosely correlate with useful photographic dynamic range.
For a shadow or highlight detail to be photographically useful in a high-quality workflow, I propose it needs to show:
- accurate color, especially when comparing shadows and quarter tones of the same object. If half of a banana is in deep shadow and half is in bright light, then I only consider the shadow to be part of the useful dynamic range of the camera if the shadow half is a correct-in-saturation-and-hue yellow. If it's a muddy brown then I don't care that I can see the banana; it's not useful dynamic range. Likewise with skin tone and clouds/sky in highlight to three-quarter tone transitions: if the skin turns a weird pink before a blown-out highlight, then that pink is at best questionably part of the useful DR.
- smooth, visually appealing transitions to the tones around it (free of posterization, other visually awkward transitions, and chunky rather than evenly distributed noise)
- tactile, natural-looking detail/texture

And numerical representations of the DR of a particular sensor almost always exclude vital parts of producing actual photographic images, like the lens (lens characteristics, namely flaring, but also micro contrast, total resolution, and chromatic aberration, can all affect the usefulness of sensor-recorded data in highlights and shadows) and the software used to process the file (e.g. Capture One uses some proprietary data in a Phase One raw file to reduce noise inherent in the sensor based on the temperature of the sensor at the time of capture; processing in other software ignores that information).
Highlight rolloff, tonal transition smoothness, noise grain structure, shadow color accuracy, loss of texture in shadows: some of these things can be represented numerically, but given the number of variables, the best approach is to look at actual pictures. If you can dodge out an important subject in such a way that the subject is beautifully rendered, then it was inside the useful DR of the camera. If you can see it and differentiate it from its surroundings, but it's artificial-looking and generally aesthetically crappy, then it was not.
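To be clear about what the lab number actually is: "engineering" dynamic range is usually just the ratio of a sensor's full-well capacity to its read noise, expressed in stops. A minimal sketch (the sensor figures below are made up for illustration) shows how two quite different sensors can land on the exact same number, which says nothing about shadow color, rolloff, or grain structure:

```python
import math

def engineering_dr_stops(full_well_e: float, read_noise_e: float) -> float:
    """Lab-style dynamic range: full-well capacity (electrons) divided by
    read noise (electrons), converted to stops (log base 2)."""
    return math.log2(full_well_e / read_noise_e)

# Two hypothetical sensors with identical measured DR (~13.3 stops),
# even though their real-world shadow rendering could be worlds apart.
sensor_a = engineering_dr_stops(full_well_e=60000, read_noise_e=6.0)
sensor_b = engineering_dr_stops(full_well_e=30000, read_noise_e=3.0)
print(round(sensor_a, 1), round(sensor_b, 1))  # both 13.3
```

The single scalar collapses everything that matters perceptually, which is exactly why looking at actual pictures beats comparing spec sheets.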
In practice it's perfectly possible for two cameras to have identical "measured" DR yet have their highlights and shadows be light-years apart in real-world photographic usefulness. Much like you can have three noises that are equally "loud" on average (as measured in dB), but one is a beautiful trumpet jazz quartet, the second a dozen screaming babies, and the third the loud but generally tolerable rumble of a jet engine heard from the exit-row seat of an international jetliner.
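The loudness analogy is easy to verify numerically: two signals with utterly different character can have exactly the same average level in dB. A small sketch (a pure sine tone versus white noise scaled to match its RMS level; the sample count and frequency are arbitrary):

```python
import math
import random

def rms_db(samples):
    """Average level in dB relative to full scale, from the RMS amplitude."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

n = 48000
# A pure 440 Hz tone at unit amplitude: RMS = 1/sqrt(2), about -3.01 dBFS.
sine = [math.sin(2 * math.pi * 440 * t / n) for t in range(n)]
target = rms_db(sine)

# White noise, rescaled so its RMS level matches the sine exactly.
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(n)]
scale = 10 ** ((target - rms_db(noise)) / 20)
noise = [s * scale for s in noise]

# Identical average loudness, completely different listening experience.
print(round(rms_db(sine), 2), round(rms_db(noise), 2))
```

Swap "dB" for "stops of measured DR" and the same collapse happens: the scalar matches, the experience doesn't.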