I’ve been doing some macro photography, and I’m unsatisfied with the sharpness that I’m getting. I’ve been trying to develop a testing regime aimed at determining the amount of blurring caused by camera vibration, unflat field, aberrations, diffraction, and the like. To that end, I’ve created a target which has a fairly broad band of high-spatial-frequency energy. I am photographing the target, processing the images in Lightroom, exporting them as TIFFs, reading them into Matlab, converting them to a linear (gamma = 1) representation, and performing analysis, the critical part of which is measuring the standard deviation (or, if you prefer, root-mean-square noise) of the image.
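In Matlab, the core of that measurement looks something like the sketch below. The file name and the 2.2 exponent are placeholders; the exponent only approximates the Adobe RGB tone curve I'm undoing.

img = double(imread('target_export.tif')) / 65535;   % 16-bit TIFF scaled to [0,1]
linImg = img .^ 2.2;                                  % rough linearization (gamma = 1)
g = linImg(:,:,2);                                    % work on the green channel
rmsNoise = std(g(:))                                  % rms noise = standard deviation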
In order to make the rms noise measurement meaningful with real-world images, I need to compensate for both global and local exposure differences. Since the standard deviation of an image in a linear color space should, all else being equal and ignoring photon noise, be proportional to exposure, I compensate by filtering the input image with a large (400x400 pixel) constant-value kernel to form a correcting image, then dividing the input image by the correcting image to produce the corrected image.
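A rough sketch of that compensation step, again in Matlab (the variable names are mine; g is the linearized green channel from the sketch above):

kernelSize = 400;
boxKernel = ones(kernelSize) / kernelSize^2;       % 400x400 constant-value (box) kernel
correcting = imfilter(g, boxKernel, 'replicate');  % local-mean correcting image
corrected = g ./ correcting;                       % exposure-compensated image
rmsNoise = std(corrected(:))                       % rms noise after compensation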
There’s a problem: the technique overcorrects the images exported from Lightroom. I tried turning off everything I could find in the Develop module, including camera calibration. It made a difference, but didn’t fix things.
So I created two images that differed in exposure by a stop. I measured the ratio of the mean values of the G channels of the two raw images with RawDigger: it was 2.015. In Lightroom, using PV 2012, the ratio of the linearized Adobe RGB green channels was 1.688; with PV 2010 and PV 2003, it was 1.681. Using all three channels and converting to monochrome in Matlab produces similar results. There seems to be a tone curve applied by Lightroom that keeps ratios in the raw file from being preserved in linear representations of the converted image.
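For what it's worth, the ratio check itself is just this sort of thing (file names are placeholders, and again 2.2 only approximates the Adobe RGB gamma):

a = double(imread('plus_one_stop.tif')) / 65535;
b = double(imread('base_exposure.tif')) / 65535;
ga = a(:,:,2) .^ 2.2;                 % linearized green channels
gb = b(:,:,2) .^ 2.2;
ratio = mean(ga(:)) / mean(gb(:))     % ~2 expected; I get ~1.69 from Lightroom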
I tried Iridient Developer, and got different, but still incorrect (in the photometric sense), ratios of about 1.65; the ratio varied with the processing options. I thought the raw channel mixer set to green only might produce the right ratio, but no joy.
Using dcraw, the command-line incantation “dcraw -v -4 -a -w -j -T -o1 _D437349.NEF” produces TIFFs (sRGB primaries, linear gamma courtesy of -4) with a green-channel mean ratio of 2.015, the same ratio as that of the raw files. Actually, it’s not quite the same ratio (it differs in the fifth decimal place), but I attribute that to the change of color space from camera native to sRGB. dcraw users will note that I’m white balancing to average; leaving that out makes little difference.
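Checking the dcraw output in Matlab is the same drill, except no linearization is needed since -4 already gives gamma = 1 (file names are placeholders):

a = double(imread('plus_one_stop_dcraw.tiff')) / 65535;
b = double(imread('base_exposure_dcraw.tiff')) / 65535;
ratio = mean(reshape(a(:,:,2), [], 1)) / mean(reshape(b(:,:,2), [], 1))   % ~2.015, matching the raw-file ratio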
So, while I have a raw-conversion solution that I can use, it’s much less convenient than doing the conversions in Lightroom.
So, my question is: How do I set up Lightroom so that the tone ratios of the original raw file are preserved in a linear representation of the output file?
Thanks for any help on this.
Jim
PS. For a little background, my first fumbling steps in this project are covered in this and subsequent posts.