The Sony raw compression controversy seems to be heating up again, prompting me to do something that I've been resisting for months: simulating it.
There are two pieces to the Sony compression algorithm. The first is the tone curve, which leaves out progressively more possible values as the pixel gets brighter, roughly mimicking the 1/3-power law that characterizes human luminance response. The questions: are there enough buckets at all parts of the tone curve, and, after an image is manipulated in editing, do artifacts that were formerly invisible become intrusive?
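For the curious, here's the kind of thing I mean, in Matlab. The knees and step sizes below are made up for illustration, not Sony's actual curve; the point is just that the output codes get coarser as the input gets brighter, and the inverse curve can only hand back the center of each bucket.

```matlab
% Illustrative tone-curve quantization: 14-bit linear codes in, fewer codes out.
% The knees and divisors are assumptions for illustration, not Sony's numbers.
knees   = [0 1024 4096 8192 16384];   % segment boundaries in 14-bit input codes
divisor = [1 2 4 8];                  % input codes folded into one output code

x = 0:16383;                          % all possible 14-bit linear values
y = zeros(size(x));                   % forward curve (lookup table)
acc = 0;
for s = 1:numel(divisor)
    seg = x >= knees(s) & x < knees(s+1);
    y(seg) = acc + floor((x(seg) - knees(s)) / divisor(s));
    acc = acc + ceil((knees(s+1) - knees(s)) / divisor(s));
end

% Inverse curve: each output code maps back to the center of its input bucket.
inv = zeros(1, max(y) + 1);
for c = 0:max(y)
    inv(c + 1) = round(mean(x(y == c)));
end

roundTrip = @(v) inv(y(v + 1) + 1);   % linear code -> curve code -> linear code
```

With something like that in hand, it's easy to count the buckets in each part of the curve and to see how large the round-trip error gets at the bright end.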
The second place where visible errors could be introduced is the delta modulation/demodulation. If the maximum and minimum values in a 16-pixel row chunk are more than 128 apart, information is lost. Is that a source of visible errors?
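Here's a sketch of that stage for one chunk, again in Matlab. The shift rule is my reconstruction of the general idea, not something verified against Sony's firmware; the essential behavior is that the max and min survive intact while the other samples are coarsened whenever the chunk's span won't fit in the 7-bit offsets.

```matlab
% Delta mod/demod round trip for one 16-sample chunk of curve codes.
% Assumed scheme: the chunk's max and min are stored exactly; the rest are
% stored as 7-bit offsets from the min, right-shifted just enough to fit.
function out = deltaRoundTrip(chunk)
    lo   = min(chunk);
    hi   = max(chunk);
    span = hi - lo;
    shift = max(0, ceil(log2((span + 1) / 128)));   % smallest shift so offsets fit in 7 bits
    delta = floor((chunk - lo) / 2^shift);          % quantized offsets (0..127)
    out   = lo + delta * 2^shift;                   % what the decoder reconstructs
    [~, iMin] = min(chunk);                         % min and max are stored exactly,
    [~, iMax] = max(chunk);                         % so restore them losslessly
    out(iMin) = lo;
    out(iMax) = hi;
end
```

In this sketch the round trip is exact whenever the span fits in the 7-bit offsets; beyond that, everything in the chunk except the brightest and darkest pixels loses low-order bits.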
And the last question: even if the above errors could be visible in synthetic images, are they swamped by photon noise in real camera images?
Drawing on work by Lloyd Chambers and LuLa'er Alex Tutubalin, I wrote Matlab code to look at the effects of the Sony compression/decompression algorithm on real or synthetic images. The input image is sampled onto a simulated RGGB Bayer array, compressed, decompressed, demosaiced with bilinear interpolation, and compared with an image that's simply sampled and demosaiced. I just got it working this morning. So far, I've run one synthetic image (the ISO 12233 target) through the code.
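For those who want to see the shape of it, here's a stripped-down skeleton of that pipeline. It isn't my actual code: compressFn and decompressFn are placeholders for the tone-curve and delta-modulation stages sketched above, 'target.png' is a stand-in file name, and the demosaic is a plain bilinear one done with convolution masks.

```matlab
% Skeleton of the test harness: sample onto an RGGB Bayer array, run one copy
% through compress/decompress, bilinear-demosaic both copies, and compare.
% compressFn, decompressFn, and the file name are placeholders.
img = double(imread('target.png'));
if size(img, 3) == 3, img = mean(img, 3); end     % gray target keeps the sampling simple

[h, w] = size(img);
[cc, rr] = meshgrid(1:w, 1:h);
Rm = mod(rr, 2) == 1 & mod(cc, 2) == 1;           % RGGB site masks
Bm = mod(rr, 2) == 0 & mod(cc, 2) == 0;
Gm = ~Rm & ~Bm;

raw = img;   % for a gray target, each Bayer site records the same value it would see

% Bilinear demosaic via convolution of the masked channels.
kRB = [1 2 1; 2 4 2; 1 2 1] / 4;
kG  = [0 1 0; 1 4 1; 0 1 0] / 4;
demo = @(x) cat(3, conv2(x .* Rm, kRB, 'same'), ...
                   conv2(x .* Gm, kG,  'same'), ...
                   conv2(x .* Bm, kRB, 'same'));

ref  = demo(raw);                                 % sampled and demosaiced only
test = demo(decompressFn(compressFn(raw)));       % with the compression round trip

err = test - ref;
fprintf('max abs error %.1f, RMS error %.3f\n', max(abs(err(:))), sqrt(mean(err(:).^2)));
```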
Here's a link to the result. For those of you who just want me to cut to the chase: the differences between the two images are very small.
I invite anyone to critique the algorithm I implemented, as described on the page linked above.
I invite anyone who'd like to see what happens when one of their images is run through the simulator to PM me, and I'll do what I can to make it happen.
Jim