Sorry to revive such an old post, but today I came across a Sony A7 II RAW file and there is something I don't understand. The compression seems evident in the RAW histogram obtained with dcraw -D:
Just counting the used levels, we find 4 zones in the range of decoded values:
- From 128 to 801, all levels used -> 674 levels
- From 802 to 1424, half the levels used -> (1424-802)/2 = 311 levels
- From 1427 to 2023, one in every 4 used -> (2023-1427)/4 = 149 levels
- From 2029 to 4101, one in every 8 used -> (4101-2029)/8 = 259 levels
Total number of used levels: 674+311+149+259 = 1393 levels -> log2(1393) = 10.44 bits

What I don't fully understand is this: if one now develops this RAW file with DCRAW (gamma 1.0 output), the levels spread through the final image in the same way as in the decoded RAW file, with no compression curve applied. This surprises me because, given the high DR of this sensor, if the decoded values are already linear in a 12-bit range (of which only 10.44 bits are actually used), how does it avoid shadow posterization when lifting the shadows? And if they are not linear, how can the output from DCRAW not linearize them?
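As a sanity check on the counting, here is a small Python sketch (NumPy assumed; the zone endpoints and step sizes are taken straight from the list above) that rebuilds the four zones and computes the effective bit depth. The total comes out a few levels higher than the hand count, because each zone's formula above drops one endpoint (a fencepost difference), but the conclusion is the same: roughly 10.4-10.5 effective bits.

```python
import numpy as np

def effective_bits(values):
    """Return (number of distinct levels, equivalent bit depth)."""
    n = len(np.unique(values))
    return n, np.log2(n)

# The four zones observed in the dcraw -D histogram:
levels = np.concatenate([
    np.arange(128, 802),       # 128..801, every level used
    np.arange(802, 1425, 2),   # 802..1424, every 2nd level
    np.arange(1427, 2024, 4),  # 1427..2023, every 4th level
    np.arange(2029, 4102, 8),  # 2029..4101, every 8th level
])
n, bits = effective_bits(levels)
print(n, round(bits, 2))  # prints: 1396 10.45
```

On a real file one would pass the pixel values of the 16-bit PGM written by dcraw -D -4 to effective_bits() instead of this synthetic reconstruction.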
Unless I am missing something or DCRAW is simply ignoring Sony's compression, I don't understand what's going on.
Regards