The first image is an HDR merge by SNS-HDR. The second image is my best attempt at processing only the middle exposure in ACR and Photoshop. The images are from a Canon EOS R. Different sources rate the dynamic range of that camera between 13 and 14.1. It seems to me that there is a big difference between theory and practice.
Not really,
the difference is between SNS-HDR's processing and your processing. As long as you capture the scene's entire dynamic range, any difference you see in the result from using a different number of shots comes from the software processing, not from the number of camera shots itself.
Some software (SNS-HDR, Photomatix, ...) will produce a different result depending on how many differently exposed images you feed it, not on the number of captures actually taken. In your cat scene you could build 5 replicas at 1EV intervals from the single shot containing all the DR, and the result would have been the same if you managed to fool SNS-HDR into processing them (your cat would probably look a bit noisier, but that is expected and has nothing to do with the strong differences in local contrast, tones and especially shadow lifting you are showing here).
I ran the following experiment: 7 shots at 1EV intervals (1-2-3-4-5-6-7), from which I built two
HDR linear merges: one using all seven shots, and one using only shots 1-3-5-7. From the two resulting merges I generated 4 results with Photomatix (please ignore the technical quality or scene interest), each time using a different number of replicas of the merge:
- A: 7 shots merge 1-2-3-4-5-6-7, 7 image replicas at 1EV intervals
- B: 4 shots merge 1-3-5-7, 7 image replicas at 1EV intervals
- C: 4 shots merge 1-3-5-7, 4 image replicas at 2EV intervals
- D: 7 shots merge 1-2-3-4-5-6-7, 4 image replicas at 2EV intervals
Result A = B, and both differ from C = D. In other words, the results where Photomatix was given the same number of differently exposed images (either 4 or 7) are identical, no matter whether they were produced from a different number of original shots. And where a different number of images was provided, the results differ, no matter whether they came from the same number of RAW files.
(tomas = camera shots; imágenes [copias] = images loaded into Photomatix at different exposures)
100% crop on dark area:
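The A = B equality can also be checked with synthetic data: in a noiseless simulation where both the 7-shot and the 4-shot sets cover the scene's full DR, the linear merges come out identical, so the replica stacks fed to the tone mapper must too. A sketch (the scene values, exposure scales and merge rule are my assumptions for illustration, not Photomatix internals):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical linear scene radiance; values stay below 1 so even the
# least-exposed shot leaves every pixel unclipped somewhere.
scene = rng.uniform(0.001, 0.9, size=(64, 64))

def shoot(radiance, ev):
    """Simulate a noiseless linear capture at +ev stops; sensor clips at 1.0."""
    return np.clip(radiance * 2.0 ** ev, 0.0, 1.0)

def merge(shots):
    """Linear HDR merge: each pixel taken from its brightest unclipped shot."""
    est = np.zeros_like(next(iter(shots.values())))
    for ev in sorted(shots):           # low to high exposure
        img = shots[ev]
        ok = img < 1.0                 # unclipped pixels only
        est[ok] = img[ok] / 2.0 ** ev  # undo the exposure scale
    return est

merge7 = merge({ev: shoot(scene, ev) for ev in range(7)})      # shots 1..7
merge4 = merge({ev: shoot(scene, ev) for ev in (0, 2, 4, 6)})  # shots 1,3,5,7

# Both shot sets cover the scene's DR, so the merges are identical...
print(np.allclose(merge7, merge4))  # True

# ...and so are the 7-replica stacks handed to the tone mapper (A = B).
replicas_A = [shoot(merge7, ev) for ev in range(7)]
replicas_B = [shoot(merge4, ev) for ev in range(7)]
print(all(np.allclose(a, b) for a, b in zip(replicas_A, replicas_B)))  # True
```

In a real camera the 4-shot merge would be slightly noisier in the shadows, but the tonal rendering the tone mapper produces from identical inputs is the same.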
Conclusion:
the key variable here is the number of images (at different exposures) provided, not the number of shots (RAW files) taken with your camera. The number of captures needed depends on the scene, while the effect of the number of images provided to your tone mapping software depends on how its tone mapping algorithm works. Neither the sensor's behaviour nor the number of shots is to blame.
I would add, because I know this is possible, that a good tone mapping program should produce the same result regardless of the number of images it is given, as long as the scene's whole DR is captured. Unfortunately this is not the case for Photomatix, and probably not for SNS-HDR either (I didn't try it).
Disadvantages of more shots:
- Loss of detail from micro-misalignments between shots in overlapping areas (don't assume software alignment will work perfectly, because misalignments will never fall on exact 1-pixel multiples)
- Prone to ghosting artifacts with moving subjects (water, trees, clouds passing in front of the Sun casting moving shadows, ...)
- Waste of extra storage (card, hard drive)
Final practical suggestion: those who use automated tone mapping software are probably better served by just one or a couple of shots (3-4 EV apart), feeding their software evenly spaced replicas of them, rather than using the camera as a machine gun. Finding the optimum number and exposure spacing of the images is a matter of experimentation.

Regards