Luminous Landscape Forum

Equipment & Techniques => Mirrorless Cameras => Topic started by: Jim Kasson on February 14, 2014, 03:45:56 pm

Title: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 03:45:56 pm
The Sony raw compression controversy seems to be heating up again, prompting me to do something that I've been resisting for months: simulating it.

There are two pieces to the Sony compression algorithm. The tone curve leaves out progressively more possible values as the pixel gets brighter, roughly mimicking the 1/3 power law that characterizes human luminance response. Is there a sufficient number of buckets at all parts of the tone curve, and, after an image is manipulated in editing, do artifacts that were formerly invisible become intrusive?
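
For concreteness, here's a minimal Matlab sketch of that kind of bucket-skipping (a 1/3-power stand-in of my own, not necessarily Sony's actual curve): 14-bit linear codes companded into 11 bits, so the surviving levels spread further apart toward the highlights:

% Hypothetical 14-bit -> 11-bit companding (a 1/3-power stand-in, NOT Sony's curve)
x    = 0:16383;                          % 14-bit linear input codes
enc  = round(2047 * (x/16383).^(1/3));   % 11-bit compressed codes
dec  = 16383 * (enc/2047).^3;            % decoded (linear) values
gaps = diff(unique(dec));                % spacing between surviving levels
% gaps run from well under one count in the shadows to roughly 24 counts
% near clipping, i.e. more and more possible values are left out as
% pixels brighten.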

The other possible place where visible errors could be introduced is in the delta modulation/demodulation. If the maximum and minimum values in a 16-pixel row chunk are further apart than 128, information will be lost. Is that a source of visible errors?
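
A sketch of one plausible reading of that scheme (based on the public reverse-engineering; I have not verified it against Sony's firmware): each 16-pixel chunk stores its max and min exactly and the rest as 7-bit offsets from the min, coarsened whenever the span exceeds what 7 bits can hold:

% Toy model of one 16-pixel delta chunk (assumed layout; the published
% reverse-engineering stores the max/min positions and only 14 deltas,
% which this sketch ignores for clarity)
chunk = randi([0 2047], 1, 16);     % 16 same-color pixels, 11-bit tone-compressed
mx = max(chunk);  mn = min(chunk);  % endpoints stored exactly
shift = max(0, ceil(log2((mx - mn + 1) / 128)));   % coarsen when span > 127
delta = bitshift(chunk - mn, -shift);              % 7-bit offsets from the min
recon = mn + bitshift(delta, shift);               % decoded values
% When mx - mn <= 127 the round trip is exact; otherwise the offsets are
% quantized in steps of 2^shift counts and information is lost.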

And the last question: even if the above errors could be visible with synthetic images, are they swamped out by photon noise in real camera images?

Drawing on work by Lloyd Chambers and LuLa'er Alex Tutubalin, I wrote Matlab code to look at the effects of the Sony compression/decompression algorithm on real or synthetic images. The input image is sampled onto a simulated RGGB Bayer array, compressed, decompressed, demosaiced with bilinear interpolation, and compared with an image that's simply sampled and demosaiced. I just got it working this morning. So far, I've run one synthetic image (the ISO 12233 target) through the code.
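
In outline, the harness looks something like this (a sketch of the structure under my own assumptions, not the actual script; the compress/decompress round trip from the sketches above goes between sampling and demosaicking):

% Pipeline skeleton: sample -> (compress/decompress) -> bilinear demosaic
img = double(imread('target.tif')) / 65535;   % hypothetical 16-bit linear RGB image
[h, w, ~] = size(img);
[R, G, B] = deal(zeros(h, w));                % RGGB Bayer sampling
R(1:2:end, 1:2:end) = img(1:2:end, 1:2:end, 1);
G(1:2:end, 2:2:end) = img(1:2:end, 2:2:end, 2);
G(2:2:end, 1:2:end) = img(2:2:end, 1:2:end, 2);
B(2:2:end, 2:2:end) = img(2:2:end, 2:2:end, 3);
% (the Sony-style round trip would be applied here, to the mosaic R + G + B)
kG  = [0 1 0; 1 4 1; 0 1 0] / 4;              % standard bilinear kernels
kRB = [1 2 1; 2 4 2; 1 2 1] / 4;
out = cat(3, conv2(R, kRB, 'same'), conv2(G, kG, 'same'), conv2(B, kRB, 'same'));
% 'out' is then compared, pixel by pixel, with the same pipeline minus the
% compression step.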

Here's a link to the result. (http://blog.kasson.com/?p=4823)

For those of you who just want me to cut to the chase: the difference between the two images is very small.

I invite anyone to critique the algorithm I implemented as described in the page linked to above.

I invite anyone who'd like to see what happens when one of their images is run through the simulator to PM me, and I'll do what I can to make it happen.

Jim

Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 14, 2014, 03:55:20 pm
And the last question: even if the above errors could be visible with synthetic images, are they swamped out by photon noise in real camera images?
you apply the curve to reduce gradations to data that is already signal + noise... so how is the noise relevant at all?
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 04:05:15 pm
you apply the curve to reduce gradations to data that is already signal + noise... so how is the noise relevant at all?

Visually, it might be. Noise may provide a visually confusing stimulus that covers up some artifacts, the same way that noise added to an 8-bit-per-color-plane image or an indexed-color image can reduce visible contouring.  I'm not a proponent of this theory; I really don't have a position on it. But I've seen it proposed by others, and I don't think it's unreasonable.

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 14, 2014, 04:07:37 pm
Visually, it might be. Noise may provide a visually confusing stimulus that covers up some artifacts

but you reduce gradations (and severely) __after__ the noise did what it does (dither)... so unless your raw converter/PP software dithers during operations, you can consider that there is no noise in the data at all... so only the number of gradations and what you do with them matters... assume you have a 16-bit raw where 15 bits are noise and 1 bit is data... a nicely dithered image... now if you reduce the gradations to only 2, it really does not matter whether there were 15 bits of noise or no noise at all...
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 14, 2014, 05:05:20 pm
Hi,

Jim, thanks for your effort!

I am pretty sure that photon statistics dominate over compression artefacts. The compression scheme seems reasonable to me.

BTW, I would be much more concerned about the moiré.

Best regards
Erik


Title: Re: Sony a7 and a7R Raw Compression
Post by: jrsforums on February 14, 2014, 05:09:33 pm
Is all RAW compression, such as CR2, lossy?
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 14, 2014, 05:26:59 pm
I am pretty sure that photon statistics dominate over compression artefacts.
and I am sure that having just enough gradations is what helps... noise is irrelevant... reduce the number of gradations and, no matter how much noise is in your data, you will see banding
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 14, 2014, 05:53:55 pm
How?

Can you show an example of any case where there is a significant number of discrete levels within, say, 2 sigma of the noise?


Best regards
Erik


Title: Re: Sony a7 and a7R Raw Compression
Post by: dds on February 14, 2014, 06:16:44 pm
Would these artifacts be relevant?

http://diglloyd.com/blog/2014/20140214_1-SonyA7-artifacts-star-trails.html
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 14, 2014, 06:24:54 pm
How?

Can you show an example?



I think you are mixing two different things - one is how many bits are enough to encode a signal sans photon/shot noise (let it be X bits) vs how many gradations (let it be Y, where log2(Y) < X) after you decode back (w/o dithering the data after decoding) are enough not to have posterization if you start heavy image processing...  how big a difference between log2(Y) and X do you think is tolerable for heavy postprocessing? and if you do not see the banding, it is not because there was a lot of noise before the encoding - but because there are either enough gradations present after decoding or you don't push hard in processing for "99%" of shots... reduce the number of gradations sufficiently and all the shot noise in the world is not going to be enough to prevent banding (unless you dither in software after decoding)... now certainly Sony engineers were not so stupid as to leave too few gradations, so this is a non-issue for "99.99%" of users
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 14, 2014, 06:46:00 pm
Hi,

I guess that could come from the delta compression. It would be sensitive to steep gradients.

Best regards
Erik

Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 07:36:23 pm
Simulated raw gradient quantized by a simulated 4-bit ADC, then demosaiced with bilinear interpolation:

(http://www.kasson.com/ll/Gradient4.jpg)

Simulated raw gradient with 20% noise added, quantized by a simulated 4-bit ADC, then demosaiced with bilinear interpolation:

(http://www.kasson.com/ll/Gradient4N.jpg)
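
In Matlab terms, the experiment boils down to something like this (a sketch, not the exact script behind the images above; the noise level is chosen for illustration):

ramp  = repmat(linspace(0, 1, 1024), 256, 1);      % horizontal gradient
q     = @(x) round(15 * min(max(x, 0), 1)) / 15;   % 4-bit quantizer
plain = q(ramp);                                   % hard contouring
dith  = q(ramp + 0.05 * randn(size(ramp)));        % noise before the quantizer
% imshow([plain; dith]): the noisy version trades the bands for grain.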

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 08:11:38 pm
I was surprised at how small the errors were on my first test. I spent some time looking for a bug. Then it hit me. I computed the difference image by subtracting the two images using the Photoshop "difference" blending mode. I believe that works in the working color space, which was Adobe RGB. Because of the gamma of 2.2 in that space, taking the difference the way I did attenuates highlight deltas relative to shadow deltas, compared to computing the difference in a linear RGB space.
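
In code the distinction is just where the exponent sits (a sketch, using 2.2 as a stand-in for the Adobe RGB tone curve):

a = rand(64);                                % two gamma-2.2-encoded images
b = min(max(a + 0.01 * randn(64), 0), 1);
diff_gamma  = abs(a - b);                    % what the blend mode computes
diff_linear = abs(a.^2.2 - b.^2.2);          % the same deltas in linear light
% The gamma-space difference compresses highlight deltas relative to shadow
% deltas; the linear-space difference reports the raw numerical errors.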

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 08:18:21 pm
BTW, I would be much more concerned about the moiré.

Erik, remember that this is a simulation, and I simulated a perfect lens. No aberrations, no diffraction, no field curvature, no mis-focusing, etc. A real lens would blur the moiré somewhat. I did simulate capture averaging with a 100% fill factor. If you simulate point capture, the false colors are something else. Also, bilinear interpolation is far from the best demosaicing method. It has the advantage that it is readily available, non-proprietary, and reproducible by others.

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: madmanchan on February 14, 2014, 08:55:19 pm
I was surprised at how small the errors were on my first test. I spent some time looking for a bug. Then it hit me. I computed the difference image by subtracting the two images using the Photoshop "difference" blending mode. I believe that works in the working color space, which was Adobe RGB. Because of the gamma of 2.2 in that space, taking the difference the way I did attenuates highlight deltas relative to shadow deltas, compared to computing the difference in a linear RGB space.

Right, Jim.  If you're looking for the actual numerical deltas/errors, then using the linear space for the diffs makes sense.  But what you did the first time (in a 2.2 encoding) makes more sense for a visual evaluation.
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 09:23:16 pm
Right, Jim.  If you're looking for the actual numerical deltas/errors, then using the linear space for the diffs makes sense.  But what you did the first time (in a 2.2 encoding) makes more sense for a visual evaluation.


Eric, I totally agree. From a visual perspective, it would have been better to use a gamma of three, which would have compressed the highlights even more. OTOH, I wanted to make sure that what I did was clear, and to relate it to comparisons that are out there using linear spaces.


Thanks,

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 14, 2014, 10:07:33 pm
Simulated raw gradient with 20% noise added, quantized by a simulated 4-bit ADC, then demosaiced with bilinear interpolation:
so that will be 2 bits of signal and 2 bits of noise in a 4-bit raw, before the nonlinear encoding of the post-ADC data, right?
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 10:43:38 pm
so that will be 2 bits of signal and 2 bits of noise in a 4-bit raw, before the nonlinear encoding of the post-ADC data, right?

I added the noise in Photoshop in Adobe RGB, using the add noise filter. I think, but I don't know, that when the filter is set to 20%, Photoshop adds noise in the working color space with either the peak, the peak-to-peak, or the rms value equal to one-fifth of the full-scale signal or one-fifth of the current value. If it's important to figure that out, I can do some research, or just add the noise in Matlab, where I can know exactly what's going on. The gamma-encoded file was linearized before the simulated sampling.
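
(In Matlab the noise statistics are explicit; a minimal sketch:)

sigma = 0.2;                             % rms noise as a fraction of full scale
x     = linspace(0, 1, 1024);            % any linear test signal
noisy = x + sigma * randn(size(x));      % zero-mean Gaussian with known rms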

You're right about that all being before the nonlinear encoding of the post-ADC data.

So, in the gamma 2.2 space, I'd say 4 bits signal and 1.678 bits (log2(16*0.2)) noise.  But there's a lot of guessing in that calculation.

I was just trying to get a feel for whether noise pre-quantization could reduce contouring. It looks like it can.

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: madmanchan on February 14, 2014, 10:47:08 pm
I was just trying to get a feel for whether noise pre-quantization could reduce contouring. It looks like it can.

Absolutely -- that's exactly what dither is (randomizing the quantization error).
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 14, 2014, 11:28:45 pm
Absolutely -- that's exactly what dither is (randomizing the quantization error).

Eric, I tried to send you a PM, but it looks like it didn't work. Anyway, to your point, we're in violent agreement.

http://patents.com/us-3999129.html

https://www.google.com/patents/US4187466

Thanks,

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 15, 2014, 03:36:13 am
Hi Jim,

I am quite aware of that. But it struck me that aliasing causes real artefacts. In Swedish we have a proverb, "sila mygg och svälja elefanter"; according to Google Translate it would be "straining at gnats and swallowing elephants". I was thinking about that.

The way I think of it, at high photon counts, where the tone curve is compressed, the photon noise will be large in numerical values. So I think that we won't see a lot of quantisation effect, because the data is much less exact than the quantisation step.

My guess is Sony does this compression so they can use a 12-bit pipeline to process data with 14 bits of range; seems reasonable to me. I don't understand the reason for the delta compression, and I think it may cause some issues.

Thanks for good info!

Best regards
Erik

Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 15, 2014, 02:04:33 pm
I came up with a really tough synthetic image to throw at the Sony raw compression algorithm. It didn't do as well as with the ISO 12233 target. Some of the artifacts look like those in the star trails image that Lloyd Chambers posted.

The difference image:

(http://www.kasson.com/ll/Sony%20tough%20test%20diff.jpg)

http://blog.kasson.com/?p=4838

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 15, 2014, 06:12:40 pm
The way I think of it, at high photon counts, where the tone curve is compressed, the photon noise will be large in numerical values. So I think that we won't see a lot of quantisation effect, because the data is much less exact than the quantisation step.

Erik, it looks like simulating the photon noise (assuming the camera is set to its unity-gain ISO) obscures the background moiré on my tough test, and obscures the detail in the larger, more prominent artifacts, but not the artifacts themselves.

http://blog.kasson.com/?p=4847

(http://www.kasson.com/ll/sony%20tough%20diff%20photon%20noise.jpg)

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 15, 2014, 06:34:13 pm
So, in the gamma 2.2 space, I'd say 4 bits signal and 1.678 bits (log2(16*0.2)) noise.  But there's a lot of guessing in that calculation.

I was just trying to get a feel for whether noise pre-quantization could reduce contouring. It looks like it can.

right, but applying a Sony-style curve will reduce your 4 bits to how many, exactly, in the highlights? from 4 to 1 or 2? so the demosaicking will be dealing with quite a bit less resolution.
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 15, 2014, 06:42:26 pm
My guess is Sony does this compression so they can use a 12-bit pipeline to process data with 14 bits of range; seems reasonable to me.
it depends on when they do it... if the firmware does that just to write the data to a raw file, then what kind of pipeline are we talking about... only to save space/time when writing to SD cards.
Title: Re: Sony a7 and a7R Raw Compression
Post by: Bart_van_der_Wolf on February 15, 2014, 08:05:25 pm
Is all RAW compression, such as CR2, lossy?

Hi John,

No. So far, Canon CR2s, for example, are compressed losslessly (unless one uses lower-than-full-size Raws). The compression used seems to be run-length encoding, which means that multiple small differences in a sequence are compressed to a single difference and a multiplier for the number of occurrences.

As a basic principle, for slowly changing adjacent values it is often more efficient to encode only the differences in a value sequence (e.g. a slope of one in a gradient) than the absolute values (which may be a sequence of large multi-bit numbers).

This is not what the Sony compression does: it skips progressively more histogram bins (values are not exactly encoded, but skipped and accumulated) as the intensities increase.
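
To illustrate the delta-plus-run-length principle (a toy sketch in Matlab, not the actual CR2 codec, whose details are more involved): the deltas of a slowly changing sequence are small, and runs of equal deltas collapse to a count plus a value:

v = [100 101 102 103 103 103 103 104];   % slowly changing sample values
d = [v(1) diff(v)];                      % delta encoding: [100 1 1 1 0 0 0 1]
runs = [];  vals = [];                   % run-length encode the deltas
i = 1;
while i <= numel(d)
    j = i;
    while j < numel(d) && d(j+1) == d(i), j = j + 1; end
    runs(end+1) = j - i + 1;             %#ok<AGROW>
    vals(end+1) = d(i);                  %#ok<AGROW>
    i = j + 1;
end
% runs = [1 3 3 1], vals = [100 1 0 1]: one (count, value) pair per run.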

Cheers,
Bart
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 15, 2014, 10:19:10 pm
Hi Jim,

That is what I would expect. I don't see effects of the tonal compression, but I do see artefacts caused by the delta compression.

My guess is that Sony may release a firmware change that doesn't use the delta compression.

My expectation was that adding photon noise would essentially eliminate all visible banding effects due to the tonal compression. Would be interesting if you could switch either compression on and off, now that you have invested so much effort.

Best regards
Erik

Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 15, 2014, 10:42:19 pm
My expectation was that adding photon noise would essentially eliminate all visible banding effects due to the tonal compression. Would be interesting if you could switch either compression on and off, now that you have invested so much effort.

That's not hard, Erik. I'll take a look at it tomorrow.

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: Hans van Driest on February 16, 2014, 05:08:35 am
I think that the compression of the tone curve is mostly irrelevant, since the part of the resolution that is lost is completely swamped in shot noise anyway. The thing is that this compression is almost certainly done in the ADC itself, and doing so is a very smart move.
Sony uses column conversion, meaning they use a lot of ADCs in parallel. This, in combination with some other tricks, seems to get rid of most, if not all, pattern noise and leaves just the low read noise. All this results in the great dynamic range of Sony sensors. A problem with having so many ADCs is that they have to be simple. And simple they are. Sony uses the most basic of ADCs, where a voltage that is ramping up is compared with the analog voltage out of the sensor: so-called slope analog-to-digital converters. A problem with these is speed. For 14 bits, such an ADC needs 2^14 clock cycles for each conversion. When using, say, a 400 MHz clock, this means 41 µs per conversion. Sounds fast, but they must perform over 6,000 such conversions for each image, stretching the conversion time to a bit more than 0.25 sec. This is a bit slow for high frame rates, but also for live view.
The slope, or voltage ramp, going into the comparator is generated by a digital-to-analog converter, meaning that the shape can be made as desired. Sony uses a variation of an exponential slope (the compression). This cuts the conversion time down by a factor of eight (2^11 instead of 2^14). Now the total conversion time is slightly over 0.03 sec. Great for live view.
It might very well be that this explains why Nikon live view is as poor as it is (line skipping to reduce conversion time), compared to that of Sony.
And the elegant thing is that this compression is not really costing much, if anything. 14 bits are needed for the dynamic range. But the signal-to-noise ratio, when light is hitting the sensor, is not only determined by the ADC and read noise, but also by a property of the light itself: shot noise. Say that the a7R sensor has a full well capacity of 60,000 (optimistic). Such a full well capacity means that at maximum illumination the SNR is sqrt(60000), approximately 245, which can easily be resolved with an 8-bit ADC. So with 11 bits there is room to spare. That is the thing with shot noise: its level goes up with the signal, not just as fast, but it goes up. So the 14 bits are needed for the deep shadows, but once there is enough light on a pixel, you do not need them anymore.
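
(The arithmetic, spelled out with the same assumed numbers as above; a sketch, not measured figures:)

clk  = 400e6;              % assumed column-ADC clock
rows = 6000;               % roughly one conversion per sensor row
t14  = 2^14 / clk * rows   % linear 14-bit ramp:     ~0.25 s per frame
t11  = 2^11 / clk * rows   % compressed 11-bit ramp: ~0.03 s per frame
snr  = sqrt(60000)         % shot-noise-limited SNR at a 60k e- full well: ~245
bits = log2(snr)           % ~7.9 bits suffice to resolve it at full scale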

It would indeed be nice if Sony would also allow you to use all 11 (compressed) bits, without the second part of the conversion.

Sorry if all this is a bit technical; I do not know how to express the above otherwise.
Title: Re: Sony a7 and a7R Raw Compression
Post by: Jim Kasson on February 16, 2014, 01:52:55 pm
My expectation was that adding photon noise would essentially eliminate all visible banding effects due to the tonal compression. Would be interesting if you could switch either compression on and off, now that you have invested so much effort.

I can't switch just the tone compression off, since the delta modulation scheme expects the image in 11-bit tone-compressed form. But I can, and did, switch the delta modulation off, and I ran the gradient image through just the tone compression/decompression algorithm.

http://blog.kasson.com/?p=4854

Your expectation is correct, Erik, at least for the test image.

Jim
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 16, 2014, 04:41:20 pm
Hi,

Thanks for making the test!

Best regards
Erik




Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 16, 2014, 04:50:08 pm
Hi,

The ADC you describe is called a Wilkinson converter; indeed it is simple, but it is also very accurate.

I agree with regard to conversion times, but I don't share your conclusions. On the D3X there was a choice between 14 bit and 12 bit, but 14 bit was limited to 2 FPS. The Alpha 99 has 14 bits in single-shot but 12 bits in continuous modes.

I don't think they implement the tonal compression before the ADC; it would be too complex, I think. The reason I think they do it is that the Bionz is probably only 12 bits wide. With tonal compression they can push 14-bit-wide data through a 12-bit ASIC. So they save an expensive redesign of the ASIC and all its algorithms.

Best regards
Erik

Title: Re: Sony a7 and a7R Raw Compression
Post by: Hans van Driest on February 17, 2014, 02:42:42 am
Well, I do not know about the number of bits used in the signal processor, but I do think that making a non-linear slope is easy, since the slope is generated with a digital-to-analog converter and so can have any shape you like. It is not doing compression before the ADC, but as part of the ADC.
See, for example,
"http://www.google.nl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDYQFjAA&url=http%3A%2F%2Frepository.tudelft.nl%2Fassets%2Fuuid%3Ab6587681-1d1b-4bed-83e8-e7d0e6c55add%2FMaster_thesis_Jia_Guo.pdf&ei=kvoAU6P0BrTb7AbPkICACg&usg=AFQjCNGokRo-KRek6UfKfKd-0t5Ubv2nRw&bvm=bv.61535280,d.ZGU"

As for the precision of a slope-type ADC: it shares some of the problems of other designs. The only thing it is really better at is monotonicity, which is good in this application, since linearity (precision) in itself is hardly a requirement, given the imperfect linearity of the sensor itself.

It is indeed strange that the a99 takes longer for 14 bits. It could be a lot of things, but a longer conversion time could indeed also be an explanation. That the D3X used to be so slow does not mean much: they used a 12-bit sensor for that (assuming it was the same one as used in the a900), and it can be that they sampled the sensor four times to get two extra bits, or indeed the conversion was slower. This would mean that the Nikon version was a special version with larger counters and a higher-resolution ramp generator (not very likely).
We are not likely ever to know for sure how Sony does things, but hardware compression seems very logical to me. And it would be a nice explanation for the better live view. But there are other ways, like using fewer bits during live view (which might explain the poor dynamic range of the live-view image).
Title: Re: Sony a7 and a7R Raw Compression
Post by: ErikKaffehr on February 17, 2014, 05:20:33 am
Hi,

Thanks for the link. Very interesting!

The Sony DSLRs used to write 12-bit files. As I recall, the Alpha 99, or possibly the Alpha 77, was the first one having 14 bits.

Best regards
Erik



Title: Re: Sony a7 and a7R Raw Compression
Post by: CptZar on February 19, 2014, 02:05:52 am
Thank you, Hans, for your very interesting posts. I remember you explained the Sony lossy compression in another thread as well.

Besides the fact that the Sony compression obviously is required for the excellent live view, I welcome a new technology which keeps files smaller while delivering the same level of quality as a much older technology whose limits are approaching. With resolution quickly approaching 50 MP, there is definitely a need for smaller files. The approach of deleting superfluous data from the file is quite intelligent. If there are still flaws, that technology needs improvement, not a step back to yesterday's solutions.

Maybe photographers are very conservative. EVFs, mirrorless, and now file compression seem to be quite suspect, some of it discussed from a rather emotional point of view. But the fact is, technology advances.
Title: Re: Sony a7 and a7R Raw Compression
Post by: Rudi Venter on February 22, 2014, 11:11:05 am
Is all RAW compression, such as CR2, lossy?

No, CR2 is lossless, similar to ZIP. Not sure why some camera manufacturers refuse to adopt lossless compression; the technology is there...
Title: Re: Sony a7 and a7R Raw Compression
Post by: jrsforums on February 22, 2014, 12:13:15 pm
No, CR2 is lossless, similar to ZIP. Not sure why some camera manufacturers refuse to adopt lossless compression; the technology is there...

Thank you for your response.  That is what I thought.

I really did not understand the other experts analysing whether artifacts could be seen, rather than just complaining that Sony was using lossy compression. With the cost of storage dropping faster than sensor sizes are increasing, I see no reason to do this.

Title: Re: Sony a7 and a7R Raw Compression
Post by: CptZar on February 23, 2014, 02:56:58 am
Storage is not the real problem. What about transferring files between devices? A notebook and a desktop via LAN? What about working with the files, importing them into PS as TIFF files, which will be much larger? And then TIFF files to work on: merging, blending, stitching. Then file size definitely becomes an issue. A 2 GB file on a Mac Pro is no problem; it's a different picture on a MacBook Air, where storage might be a factor too, due to limited SSD sizes.

But of course one may discuss how to reduce file size.
Title: Re: Sony a7 and a7R Raw Compression
Post by: robdickinson on February 23, 2014, 02:39:01 pm
With the amount of computing power we have now, lossless compression should be used wherever possible. TIFF is very wasteful.

Title: Re: Sony a7 and a7R Raw Compression
Post by: jgcox on February 23, 2014, 06:52:03 pm

IMO the push to enable WiFi transfer and high frame rates is the reason for lossy compression.

With the amount of computing power we have now, lossless compression should be used wherever possible. TIFF is very wasteful.


Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 23, 2014, 07:55:56 pm
Besides the fact that the Sony compression obviously is required for the excellent live view
how does it? if it is applied only when actually writing a raw file?
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 23, 2014, 08:02:10 pm
Well, I do not know about the number of bits used in the signal processor, but I do think that making a non-linear slope is easy, since the slope is generated with a digital-to-analog converter and so can have any shape you like. It is not doing compression before the ADC, but as part of the ADC.
Sony Semi sells those sensors to a lot of companies, not only to Sony Imaging... some of them were designed for customers who got them before Sony Imaging did, and overburdening the simple on-die ADCs with compression is not wise when only Sony Imaging is using that feature... it is a purely digital compression done post-ADC, off-sensor, by the Bionz, before writing the data to the raw file.
Title: Re: Sony a7 and a7R Raw Compression
Post by: CptZar on February 24, 2014, 01:17:33 am
Vladimirovich, please see this post from page 3 (quoted below); there is one more on the same page. I am referring to those.

Cheers

Jan
I think that the compression of the tone curve is mostly irrelevant, since the part of the resolution that is lost is completely swamped in shot noise anyway. The thing is that this compression is almost certainly done in the ADC itself, and doing so is a very smart move. [...]
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 24, 2014, 02:41:15 am
It might very well be that this explains why Nikon live view is as poor as it is (line skipping to reduce conversion time), compared to that of Sony.

Sony does the same line skipping... you are not claiming that Sony does a whole-frame readout to feed an XGA viewfinder, are you?

for example, the E-M1 and GH4 are using this sensor: http://www.semicon.panasonic.co.jp/ds8/c3/IS00006AE.pdf

it does 10-bit readout and 12-bit readout... why do you need to complicate a simple ADC with curve compression when you can just do a 10-bit linear readout for speed (to feed the EVF/LCD and probably CDAF) when necessary? It seems it can feed that at 120 fps with line skipping for the EVF and at the same time (with an uninterrupted EVF/LCD feed) feed CDAF from the focusing-point subarea at 120/240/480 readouts per second... I have both an E-M1 and an A7, and I'd not say that the A7 has a better EVF/LCD feed at all  ;), on the contrary...
Title: Re: Sony a7 and a7R Raw Compression
Post by: Hans van Driest on February 24, 2014, 03:35:03 am
I was referring to 100% live view. There one can see a clear difference between Nikon and Sony. It is indeed also possible to do this with a reduced word length (10 bits, for example), but then one could ask why live view at 100% on the D800 is so (relatively) poor.
Title: Re: Sony a7 and a7R Raw Compression
Post by: Vladimirovich on February 24, 2014, 11:08:49 am
I was referring to 100% live view.

100% LV does not require all rows to be read from the sensor - you are reading a subset of rows, so it is a non-issue either way...

There one can see a clear difference between Nikon and Sony.

yes, that is because the Nikon is a DSLR and the Sony is a DSLM, with different priorities in design...
Title: Is this possible?
Post by: Guillermo Luijk on May 11, 2015, 03:53:42 pm
Sorry to revive such an old thread, but today I came across a Sony A7 II RAW file and there is something I don't understand. The compression seems evident in the RAW histogram obtained with dcraw -D:

(http://www.guillermoluijk.com/misc/sonya7histogram.gif)

Just counting used levels, we find 4 zones in the range of decoded values:
Total number of used levels: 674+311+149+259 = 1,393 levels -> log2(1393) ≈ 10.44 bits
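
(The counting can be done directly on the dcraw -D output; a sketch, with a hypothetical filename:)

raw    = imread('a7ii_dcraw_D.pgm');     % dcraw -D -4 writes a 16-bit PGM
levels = unique(raw(:));                 % distinct values actually used
fprintf('%d used levels -> %.2f bits\n', numel(levels), log2(numel(levels)));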

What I do not fully understand is that if one now develops this RAW file with DCRAW (gamma 1.0 output), the levels spread in the final image in the same way as in the decoded RAW file, without any compression curve applied. This shocks me because, given the high DR of this sensor, if the decoded values are already linear in a 12-bit range (of which only 10.44 bits are actually used), how can it avoid shadow posterization when lifting the shadows? And if they are not linear, how can the output from DCRAW not linearize them?

Unless I am missing something, or DCRAW is simply ignoring Sony's compression, I don't understand what's going on.

Regards
Title: Re: Sony a7 and a7R Raw Compression
Post by: NancyP on May 15, 2015, 06:21:47 pm
This is all quite interesting to a non-engineer like myself. Thank you, thread contributors.