
Author Topic: Sony a7 and a7R Raw Compression  (Read 33264 times)

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Sony a7 and a7R Raw Compression
« Reply #20 on: February 15, 2014, 03:36:13 am »

Hi Jim,

I am quite aware of that. But it struck me that aliasing causes real artefacts. In Swedish we have a proverb, "sila mygg och svälja elefanter"; according to Google Translate it would be "straining at gnats and swallowing elephants". I was thinking about that.

The way I think of it: at high photon counts, where the tone curve is compressed, the photon noise will be large in numerical terms. So I think we won't see much quantisation effect, because the data is much less exact than the quantisation step.
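Erik's argument can be put in rough numbers. The sketch below compares photon noise with the quantization step of a hypothetical square-root companding curve mapping a 14-bit range onto 11 bits; the curve, the unity-gain assumption, and the constants are illustrative choices of mine, not Sony's actual encoding.

```python
import math

FULL_SCALE = 16383          # 14-bit raw range, assuming 1 DN ~ 1 electron
OUT_LEVELS = 2 ** 11        # hypothetical 11-bit compressed output

def step_at(x):
    """Quantization step (in input units) of a hypothetical square-root
    companding curve y = OUT_LEVELS * sqrt(x / FULL_SCALE).  Sony's real
    curve is different; this is only illustrative."""
    return 2 * math.sqrt(x * FULL_SCALE) / OUT_LEVELS

for electrons in (100, 1000, 10000, 16000):
    shot = math.sqrt(electrons)          # photon (shot) noise, electrons
    print(f"{electrons:6d} e-: shot noise {shot:6.1f}, "
          f"quantization step {step_at(electrons):5.1f}")
```

At every level the shot noise comes out well above the step of the compressed encoding, which is the point Erik is making.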

My guess is that Sony does this compression so they can use a 12-bit pipeline to process data having 14 bits of bandwidth, which seems reasonable to me. I don't understand the reason for the delta compression, and I think it may cause some issues.

Thanks for good info!

Best regards
Erik

Erik, remember that this is a simulation, and I simulated a perfect lens. No aberrations, no diffraction, no field curvature, no mis-focusing, etc. A real lens would blur the moire somewhat. I did simulate capture averaging with a 100% fill factor. If you simulate point capture, the false colors are something else. Also, bilinear interpolation is far from the best demosaicing method. It has the advantage that it is readily available, non-proprietary, and reproducible by others.
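As an aside for readers, bilinear demosaicing just averages the nearest samples of each missing colour. A toy sketch for the green channel, assuming an RGGB layout; the function and layout are illustrative, not Jim's simulation code:

```python
def bilinear_green(mosaic):
    """Estimate the green plane of an RGGB Bayer mosaic by averaging
    the four neighbouring green samples at each red/blue site
    (borders left untouched for brevity).  mosaic: list of rows."""
    h, w = len(mosaic), len(mosaic[0])
    g = [row[:] for row in mosaic]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (x + y) % 2 == 0:                    # red or blue site
                g[y][x] = (mosaic[y - 1][x] + mosaic[y + 1][x] +
                           mosaic[y][x - 1] + mosaic[y][x + 1]) / 4.0
    return g

# a flat field should survive demosaicing unchanged
flat = [[100.0] * 6 for _ in range(6)]
print(bilinear_green(flat) == flat)  # True
```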

Jim
Logged
Erik Kaffehr
 

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Sony a7 and a7R Raw Compression
« Reply #21 on: February 15, 2014, 02:04:33 pm »

I came up with a really tough synthetic image to throw at the Sony raw compression algorithm. It didn't do as well as with the ISO 12233 target. Some of the artifacts look like those in the star trails image that Lloyd Chambers posted.

The difference image:



http://blog.kasson.com/?p=4838

Jim
« Last Edit: February 15, 2014, 02:10:14 pm by Jim Kasson »
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Sony a7 and a7R Raw Compression
« Reply #22 on: February 15, 2014, 06:12:40 pm »

The way I think of it: at high photon counts, where the tone curve is compressed, the photon noise will be large in numerical terms. So I think we won't see much quantisation effect, because the data is much less exact than the quantisation step.

Erik, it looks like simulating the photon noise (assuming the camera is set to unity gain ISO) obscures the background moire on my tough test, and obscures the detail in the larger, more prominent artifacts, but not the artifacts themselves.

http://blog.kasson.com/?p=4847



Jim

Vladimirovich

  • Sr. Member
  • ****
  • Offline
  • Posts: 1311
Re: Sony a7 and a7R Raw Compression
« Reply #23 on: February 15, 2014, 06:34:13 pm »

I added the noise in Photoshop in Adobe RGB, using the add noise filter. I think, but I don't know, that when the filter is set to 20%, Photoshop adds noise in the working color space with either the peak, or the peak-to-peak, or the rms value equal to one-fifth of the full-scale signal or one fifth of the current value. If it's important to figure that out, I can do some research, or just add the noise in Matlab, where I can know exactly what's going on. The gamma-encoded file was linearized before the simulated sampling.

You're right about that all being before the nonlinear encoding of the post-ADC data.

So, in the gamma 2.2 space, I'd say 4 bits signal and 1.678 bits (log2(16*0.2)) noise.  But there's a lot of guessing in that calculation.

I was just trying to get a feel for whether noise pre-quantization could reduce contouring. It looks like it can.

Jim

Right, but applying a Sony-style curve will reduce your 4 bits to how many, exactly, in the highlights? From 4 to 1 or 2? So demosaicking will be dealing with considerably less resolution.
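Jim's observation above, that noise added before quantization can reduce contouring, is the classic dithering effect. A small self-contained sketch with arbitrary parameters of my own, not tied to the Sony encoding:

```python
import random

def quantize(v, step):
    """Round v to the nearest multiple of step."""
    return round(v / step) * step

random.seed(0)
step = 16
ramp = [i / 10 for i in range(2560)]                  # smooth 0..255.9 ramp
hard = [quantize(v, step) for v in ramp]              # coarse staircase
dith = [quantize(v + random.gauss(0, step / 2), step) for v in ramp]

def local_mean(seq, i, w=50):
    lo, hi = max(0, i - w), min(len(seq), i + w)
    return sum(seq[lo:hi]) / (hi - lo)

# with dither, local averages track the true ramp much more closely
err_hard = sum(abs(local_mean(hard, i) - ramp[i]) for i in range(2560)) / 2560
err_dith = sum(abs(local_mean(dith, i) - ramp[i]) for i in range(2560)) / 2560
print(f"mean |error|, plain quantization: {err_hard:.2f}")
print(f"mean |error|, dithered:           {err_dith:.2f}")
```

The staircase plateaus of the undithered signal survive local averaging, while the dithered version averages back toward the true ramp.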
« Last Edit: February 15, 2014, 06:37:05 pm by Vladimirovich »
Logged

Vladimirovich

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1311
Re: Sony a7 and a7R Raw Compression
« Reply #24 on: February 15, 2014, 06:42:26 pm »

My guess is that Sony does this compression so they can use a 12-bit pipeline to process data having 14 bits of bandwidth, which seems reasonable to me.
It depends on when they do it... if the firmware does that just to write the data to a raw file, then what kind of pipeline are we talking about... only to save space/time when writing to SD cards.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Sony a7 and a7R Raw Compression
« Reply #25 on: February 15, 2014, 08:05:25 pm »

Is all RAW compression, such as cr2, lossy?

Hi John,

No. So far, Canon CR2s, for example, are compressed losslessly (unless one uses lower-than-full-size Raws). The compression used seems to be Run-Length Encoding, which means that multiple small differences in a sequence are compressed to a single difference and a multiplier for the number of occurrences.

As a basic principle, for slowly changing adjacent values it is often more efficient to encode only the differences in a value sequence (e.g. a slope of one in a gradient) than the absolute values (which may be a sequence of large multi-bit numbers).

This is not what the Sony compression does: it skips progressively more histogram bins (values are not exactly encoded, but skipped and accumulated) as the intensities increase.
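The difference coding Bart describes can be sketched in a few lines; this illustrates the general principle only, not the actual CR2 or Sony bitstream:

```python
def delta_encode(values):
    """Store the first value plus successive differences; small,
    slowly varying differences compress well afterwards."""
    out = [values[0]]
    for a, b in zip(values, values[1:]):
        out.append(b - a)
    return out

def delta_decode(deltas):
    """Invert delta_encode by accumulating the differences."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [4000, 4002, 4001, 4005, 4004]
enc = delta_encode(row)
print(enc)                        # [4000, 2, -1, 4, -1]
assert delta_decode(enc) == row   # lossless round trip
```

Note the encoded differences fit in far fewer bits than the absolute 12- to 14-bit sample values.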

Cheers,
Bart
« Last Edit: February 15, 2014, 08:08:04 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Sony a7 and a7R Raw Compression
« Reply #26 on: February 15, 2014, 10:19:10 pm »

Hi Jim,

That is what I would expect. I don't see effects of the tonal compression, but I do see artefacts caused by the delta compression.

My guess is that Sony may release a firmware change that doesn't use the delta compression.

My expectation was that adding photon noise would essentially eliminate all visible banding effects due to the tonal compression. It would be interesting if you could switch either compression on and off, now that you have invested so much effort.

Best regards
Erik

Erik, it looks like simulating the photon noise (assuming the camera is set to unity gain ISO) obscures the background moire on my tough test, and obscures the detail in the larger, more prominent artifacts, but not the artifacts themselves.

http://blog.kasson.com/?p=4847



Jim
« Last Edit: February 15, 2014, 10:25:11 pm by ErikKaffehr »
Logged
Erik Kaffehr
 

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Sony a7 and a7R Raw Compression
« Reply #27 on: February 15, 2014, 10:42:19 pm »

My expectation was that adding photon noise would essentially eliminate all visible banding effects due to the tonal compression. It would be interesting if you could switch either compression on and off, now that you have invested so much effort.

That's not hard, Erik. I'll take a look at it tomorrow.

Jim

Hans van Driest

  • Newbie
  • *
  • Offline
  • Posts: 25
Re: Sony a7 and a7R Raw Compression
« Reply #28 on: February 16, 2014, 05:08:35 am »

I think that the compression of the tone curve is mostly irrelevant, since the part of the resolution that is lost is completely swamped in shot noise anyway. The thing is that this compression is almost certainly done in the ADC itself, and doing so is a very smart move.
Sony uses column conversion, meaning they use a lot of ADCs in parallel. This, in combination with some other tricks, seems to get rid of most, if not all, pattern noise and leaves the low read noise. All of this results in the great dynamic range of Sony sensors. A problem with having so many ADCs is that they have to be simple. And simple they are. Sony uses the most basic of ADCs, where a voltage that is ramping up is compared with the analog voltage out of the sensor: so-called slope analog-to-digital converters. A problem with these is speed. For 14 bits, such an ADC needs 2^14 clock cycles for each conversion. When using, say, a 400 MHz clock, this means 41 µs per conversion. Sounds fast, but they must perform over 6,000 such conversions for each image, stretching the conversion time to a bit more than 0.25 s. This is a bit slow for high frame rates, but also for live view.
The slope, or voltage ramp, going into the comparator is generated by a digital-to-analog converter, meaning that the shape can be made as desired. Sony uses a variation of an exponential slope (the compression). This cuts the conversion time down by a factor of eight (2^11 instead of 2^14). Now the total conversion time is slightly over 0.03 s. Great for live view.
It might very well be that this explains why Nikon live view is as poor as it is (line skipping to reduce conversion time), compared to that of Sony.
And the elegant thing is that this compression does not really cost much, if anything. 14 bits are needed for the dynamic range. But the signal-to-noise ratio, when light is hitting the sensor, is not only determined by the ADC and read noise, but also by a property of the light itself: shot noise. Say that the a7R sensor has a full well capacity of 60,000 electrons (optimistic). Such a full well capacity means that at maximum illumination the SNR is sqrt(60k), approximately 245, which can easily be resolved with an 8-bit ADC. So with 11 bits there is room to spare. That is the thing with shot noise: its level goes up with the signal (not quite as fast, but it goes up). So the 14 bits are needed for the deep shadows, but once there is enough light on a pixel, you do not need them anymore.

It would indeed be nice if Sony would also allow you to use all 11 (compressed) bits, without the second part of the conversion.

Sorry if all this is a bit technical; I do not know how to express the above otherwise.
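Hans's timing and SNR arithmetic can be checked directly. The constants below are his stated assumptions from the post (400 MHz clock, just over 6,000 conversions per frame, 60,000 e- full well), not measured values:

```python
import math

CLOCK_HZ = 400e6   # assumed column-ADC clock
ROWS = 6000        # conversions per column ADC per frame (Hans's figure)

def frame_time(bits):
    """A single-slope ADC needs 2**bits clock cycles per conversion."""
    return ROWS * (2 ** bits) / CLOCK_HZ

print(f"14-bit linear ramp:    {frame_time(14):.3f} s per frame")
print(f"11-bit companded ramp: {frame_time(11):.3f} s per frame")

# the shot-noise side: SNR at full well needs far fewer than 14 bits
full_well = 60000                      # electrons, optimistic assumption
snr = math.sqrt(full_well)             # photon-noise-limited SNR
print(f"SNR at full well: {snr:.0f} (about {math.log2(snr):.1f} bits)")
```

This reproduces his roughly 0.25 s vs 0.03 s per frame, and an SNR of about 245 (just under 8 bits) at full well.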
« Last Edit: February 16, 2014, 01:10:26 pm by Hans van Driest »
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Sony a7 and a7R Raw Compression
« Reply #29 on: February 16, 2014, 01:52:55 pm »

My expectation was that adding photon noise would essentially eliminate all visible banding effects due to the tonal compression. It would be interesting if you could switch either compression on and off, now that you have invested so much effort.

I can't switch just the tone compression off, since the delta modulation scheme expects the image in 11-bit tone-compressed form. But I can switch the delta modulation off, and I did, running the gradient image through just the tone compression/decompression algorithm.

http://blog.kasson.com/?p=4854

Your expectation is correct, Erik, at least for the test image.

Jim
« Last Edit: February 16, 2014, 01:59:48 pm by Jim Kasson »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Sony a7 and a7R Raw Compression
« Reply #30 on: February 16, 2014, 04:41:20 pm »

Hi,

Thanks for making the test!

Best regards
Erik




I can't switch just the tone compression off, since the delta modulation scheme expects the image in 11-bit tone-compressed form. But I can switch the delta modulation off, and I did, running the gradient image through just the tone compression/decompression algorithm.

http://blog.kasson.com/?p=4854

Your expectation is correct, Erik, at least for the test image.

Jim
Logged
Erik Kaffehr
 

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Sony a7 and a7R Raw Compression
« Reply #31 on: February 16, 2014, 04:50:08 pm »

Hi,

The ADC you describe is called a Wilkinson converter; it is indeed simple, but it is also very accurate.

I agree with regard to conversion times, but I don't share your conclusions. On the D3X there was a choice between 14 bit and 12 bit, but 14 bit was limited to 2 FPS. On the Alpha 99 the camera has 14 bits in single shot but 12 bits in continuous modes.

I don't think they implement the tonal compression before the ADC; it would be too complex, I think. The reason I think they do it is that the Bionz is probably only 12 bits wide. With tonal compression they can push 14-bit data through a 12-bit ASIC, so they save an expensive redesign of the ASIC and all the algorithms.

Best regards
Erik

I think that the compression of the tone curve is mostly irrelevant, since the part of the resolution that is lost is completely swamped in shot noise anyway. The thing is that this compression is almost certainly done in the ADC itself, and doing so is a very smart move.
Sony uses column conversion, meaning they use a lot of ADCs in parallel. This, in combination with some other tricks, seems to get rid of most, if not all, pattern noise and leaves the low read noise. All of this results in the great dynamic range of Sony sensors. A problem with having so many ADCs is that they have to be simple. And simple they are. Sony uses the most basic of ADCs, where a voltage that is ramping up is compared with the analog voltage out of the sensor: so-called slope analog-to-digital converters. A problem with these is speed. For 14 bits, such an ADC needs 2^14 clock cycles for each conversion. When using, say, a 400 MHz clock, this means 41 µs per conversion. Sounds fast, but they must perform over 6,000 such conversions for each image, stretching the conversion time to a bit more than 0.25 s. This is a bit slow for high frame rates, but also for live view.
The slope, or voltage ramp, going into the comparator is generated by a digital-to-analog converter, meaning that the shape can be made as desired. Sony uses a variation of an exponential slope (the compression). This cuts the conversion time down by a factor of eight (2^11 instead of 2^14). Now the total conversion time is slightly over 0.03 s. Great for live view.
It might very well be that this explains why Nikon live view is as poor as it is (line skipping to reduce conversion time), compared to that of Sony.
And the elegant thing is that this compression does not really cost much, if anything. 14 bits are needed for the dynamic range. But the signal-to-noise ratio, when light is hitting the sensor, is not only determined by the ADC and read noise, but also by a property of the light itself: shot noise. Say that the a7R sensor has a full well capacity of 60,000 electrons (optimistic). Such a full well capacity means that at maximum illumination the SNR is sqrt(60k), approximately 245, which can easily be resolved with an 8-bit ADC. So with 11 bits there is room to spare. That is the thing with shot noise: its level goes up with the signal (not quite as fast, but it goes up). So the 14 bits are needed for the deep shadows, but once there is enough light on a pixel, you do not need them anymore.

It would indeed be nice if Sony would also allow you to use all 11 (compressed) bits, without the second part of the conversion.

Sorry if all this is a bit technical; I do not know how to express the above otherwise.
Logged
Erik Kaffehr
 

Hans van Driest

  • Newbie
  • *
  • Offline Offline
  • Posts: 25
Re: Sony a7 and a7R Raw Compression
« Reply #32 on: February 17, 2014, 02:42:42 am »

Well, I do not know about the number of bits used in the signal processor, but I do think that making a non-linear slope is easy, since the slope is generated with a digital-to-analog converter and so can have any shape you like. It is not doing compression before the ADC, but as part of the ADC.
See, for example,
"http://www.google.nl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDYQFjAA&url=http%3A%2F%2Frepository.tudelft.nl%2Fassets%2Fuuid%3Ab6587681-1d1b-4bed-83e8-e7d0e6c55add%2FMaster_thesis_Jia_Guo.pdf&ei=kvoAU6P0BrTb7AbPkICACg&usg=AFQjCNGokRo-KRek6UfKfKd-0t5Ubv2nRw&bvm=bv.61535280,d.ZGU"

As for the precision of a slope-type ADC: it shares some of the problems of other designs. The only thing it is really better at is monotonicity, which is good in this application, since linearity (precision) in itself is hardly a requirement, given the imperfect linearity of the sensor itself.

It is indeed strange that the a99 takes longer for 14 bits. It could be a lot of things, but a longer conversion time could indeed be an explanation. That the D3X used to be so slow does not mean much: they used a 12-bit sensor for that (assuming it was the same one as used in the a900), and it could be that they sampled the sensor four times to get two extra bits, or indeed that the conversion was slower. This would mean that the Nikon version used a special variant with larger counters and a higher-resolution ramp generator (not very likely).
We are not likely ever to know for sure how Sony does things, but hardware compression seems very logical to me. And it would be a nice explanation for the better live view. But there are other ways, like using fewer bits during live view (which might explain the poor dynamic range of the live view image).
« Last Edit: February 17, 2014, 04:23:06 am by Hans van Driest »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Sony a7 and a7R Raw Compression
« Reply #33 on: February 17, 2014, 05:20:33 am »

Hi,

Thanks for the link. Very interesting!

The Sony DSLRs used to have 12-bit files. As I recall, the Alpha 99, or possibly the Alpha 77, was the first one with 14 bits.

Best regards
Erik



Well, I do not know about the number of bits used in the signal processor, but I do think that making a non-linear slope is easy, since the slope is generated with a digital-to-analog converter and so can have any shape you like. It is not doing compression before the ADC, but as part of the ADC.
See, for example,
"http://www.google.nl/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&ved=0CDYQFjAA&url=http%3A%2F%2Frepository.tudelft.nl%2Fassets%2Fuuid%3Ab6587681-1d1b-4bed-83e8-e7d0e6c55add%2FMaster_thesis_Jia_Guo.pdf&ei=kvoAU6P0BrTb7AbPkICACg&usg=AFQjCNGokRo-KRek6UfKfKd-0t5Ubv2nRw&bvm=bv.61535280,d.ZGU"

As for the precision of a slope-type ADC: it shares some of the problems of other designs. The only thing it is really better at is monotonicity, which is good in this application, since linearity (precision) in itself is hardly a requirement, given the imperfect linearity of the sensor itself.

It is indeed strange that the a99 takes longer for 14 bits. It could be a lot of things, but a longer conversion time could indeed be an explanation. That the D3X used to be so slow does not mean much: they used a 12-bit sensor for that (assuming it was the same one as used in the a900), and it could be that they sampled the sensor four times to get two extra bits, or indeed that the conversion was slower. This would mean that the Nikon version used a special variant with larger counters and a higher-resolution ramp generator (not very likely).
We are not likely ever to know for sure how Sony does things, but hardware compression seems very logical to me. And it would be a nice explanation for the better live view. But there are other ways, like using fewer bits during live view (which might explain the poor dynamic range of the live view image).
Logged
Erik Kaffehr
 

CptZar

  • Full Member
  • ***
  • Offline
  • Posts: 157
Re: Sony a7 and a7R Raw Compression
« Reply #34 on: February 19, 2014, 02:05:52 am »

Thank you, Hans, for your very interesting posts. I remember you explained the Sony lossy compression in another thread as well.

Besides the fact that the Sony compression obviously is required for the excellent live view, I welcome a new technology which keeps files smaller while delivering the same level of quality as a much older technology whose limits are being reached. With resolution quickly approaching 50 MP, there is definitely a need for smaller files. The approach of deleting obsolete data from the file is quite intelligent. If there are still flaws, that technology needs improvement, not a step back to yesterday's solutions.

Maybe photographers are very conservative. EVF, mirrorless, and now file compression all seem to be quite suspect, and some of this is discussed from a rather emotional point of view. But the fact is, technology advances.

Rudi Venter

  • Newbie
  • *
  • Offline
  • Posts: 3
Re: Sony a7 and a7R Raw Compression
« Reply #35 on: February 22, 2014, 11:11:05 am »

Is all RAW compression, such as cr2, lossy?

No, CR2 is lossless, similar to ZIP. I'm not sure why some camera manufacturers refuse to adopt lossless compression; the technology is there...
Logged

jrsforums

  • Sr. Member
  • ****
  • Offline
  • Posts: 1288
Re: Sony a7 and a7R Raw Compression
« Reply #36 on: February 22, 2014, 12:13:15 pm »

No, CR2 is lossless, similar to ZIP. I'm not sure why some camera manufacturers refuse to adopt lossless compression; the technology is there...

Thank you for your response. That is what I thought.

I really did not understand the other experts analysing whether artifacts could be seen, rather than just complaining that Sony was using a lossy compression. With the cost of storage dropping faster than sensor sizes are increasing, I see no reason to do this.

« Last Edit: February 22, 2014, 12:16:31 pm by jrsforums »
Logged
John

CptZar

  • Full Member
  • ***
  • Offline
  • Posts: 157
Re: Sony a7 and a7R Raw Compression
« Reply #37 on: February 23, 2014, 02:56:58 am »

Storage is not the real problem. What about transferring files between devices? A notebook and a desktop via LAN? What about working with files, importing into PS as TIFF files, which will be much larger? And then TIFF files to work on: merging, blending, stitching. Then file size definitely becomes an issue. A 2 GB file on a Mac Pro is no problem. It is a different picture on a MacBook Air, where storage might be a factor too, due to limited SSD sizes.

But of course one may discuss how to reduce file size.

robdickinson

  • Full Member
  • ***
  • Offline
  • Posts: 239
Re: Sony a7 and a7R Raw Compression
« Reply #38 on: February 23, 2014, 02:39:01 pm »

With the amount of computing power we have now lossless compression should be used wherever possible. TIFF is very wasteful.

Logged

jgcox

  • Newbie
  • *
  • Offline
  • Posts: 19
    • The World Exposed
Re: Sony a7 and a7R Raw Compression
« Reply #39 on: February 23, 2014, 06:52:03 pm »


IMO the push to enable WiFi transfer and high frame rates is the reason for lossy compression.

With the amount of computing power we have now lossless compression should be used wherever possible. TIFF is very wasteful.


Logged
My work www.theworldexposed.com
Photographic search tool www.gearsearch.info