For didactic purposes (both to practise with R and to try some new strategies), I have written some
R code which implements an optimal HDR fusion of two captures shot 4 EV apart over a high dynamic range scene (12 stops).
IMAGE HDR FUSION WITH R (Google translation at top right)
This was the scene:
Ignoring the RAW metadata (nominally 4 EV apart), we calculate a much more precise relative exposure ratio, which turns out to be 15.69 rather than the nominal 16 (4 EV).
I have also plotted the RGB levels participating in the calculation, all belonging to the midtones as expected. Colours indicate that for some pixels not all three RGB values were used, only those that fell between certain min/max thresholds. Over 20% of the entire image information was used for the calculation.
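The ratio estimation described above can be sketched roughly like this in R. This is a minimal illustration, not my actual code: the function name, the array layout (two linear RGB arrays scaled to [0, 1]) and the threshold values are assumptions.

```r
# Sketch of the relative-exposure estimation. 'low' and 'high' are assumed to
# be linear RGB arrays of the same scene (the 'high' shot nominally +4 EV),
# with values normalised to [0, 1]. Thresholds are illustrative.
estimate_exposure_ratio <- function(low, high, min_t = 0.05, max_t = 0.9) {
  # Keep only channel values that are well exposed (midtones) in BOTH shots;
  # this is a per-channel test, so a pixel may contribute 1, 2 or 3 values:
  valid <- low > min_t & low < max_t & high > min_t & high < max_t
  # Ratio of sums over all valid values, robust to per-value noise:
  sum(high[valid]) / sum(low[valid])
}
```

Because the test is applied channel by channel, some pixels contribute only one or two of their RGB values, which is what the coloured areas in the plot represent.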
Accurately calculating the relative exposure is necessary to perform an exposure correction down on the +4 EV shot to match its overall exposure, so that a seamless composite can be built taking some RGB values from one shot or the other, without any progressive blending area. Even within a given pixel, one RGB channel may come from one shot while the other two come from the other, and it worked; I had never tried such fine per-channel selection.
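A minimal sketch of that per-channel composite, under the same assumptions as before (linear [0, 1] arrays); the clipping threshold and names are hypothetical, and the selection rule shown (prefer the matched high-exposure data unless it was clipped) is my reading of the approach, not the exact code:

```r
# Sketch of the per-channel HDR composite. 'ratio' is the relative exposure
# estimated above (~15.69 here rather than the nominal 16).
fuse_hdr <- function(low, high, ratio, clip_t = 0.95) {
  # Bring the +4 EV shot down to match the exposure of the low shot:
  high_matched <- high / ratio
  # Per channel value: use the cleaner, exposure-matched high shot wherever it
  # was not clipped; fall back to the low-exposure shot where it was. ifelse()
  # works element-wise, so each channel of a pixel is decided independently:
  ifelse(high < clip_t, high_matched, low)
}
```

Since `ifelse()` operates element by element, a single pixel can genuinely mix sources across its three channels, with no blending zone anywhere.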
This was the fusion map: black pixels mean information is taken from the most exposed shot, white pixels are taken from the low exposure shot, and again colours mean areas with a mixed information source:
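The map itself follows directly from the same per-channel selection: encoding each channel's source as 0 or 1 yields black where all three channels came from the most exposed shot, white where all three came from the low shot, and colours where they mixed. A sketch, reusing the hypothetical clipping test from above:

```r
# Sketch of the fusion map: per channel, 0 (black) = taken from the most
# exposed shot, 1 (white) = taken from the low-exposure shot. Pixels whose
# three channels disagree come out coloured. Threshold is illustrative.
fusion_map <- function(high, clip_t = 0.95) {
  ifelse(high < clip_t, 0, 1)
}
```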
I encourage you to download the fusionmapeotonos.tif file to check which information comes from which source RAW file. A couple of curves are included for final tone mapping, although perfect tone mapping was not the main goal of the exercise.
Regards