
Author Topic: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II  (Read 10695 times)

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com

Just wanted to share a quick test to demonstrate the advantage of the 14-bit uncompressed RAW option (available on the Mark II A7 models, not on the first generation) when strongly lifting the shadows:

Scene with neutral RAW development (plain DCRAW):



I shot it three times. Here are 100% crops with an S curve applied to clearly show the deep shadows (the images need to be viewed on a PC monitor; on mobile devices the posterization cannot be seen).

ISO100, compressed 12-bit, 5 stops underexposure, +5EV in pp: -> CLEAR POSTERIZATION


ISO100, uncompressed 14-bit, 5 stops underexposure, +5EV in pp:


ISO3200, uncompressed 14-bit, correct exposure:


- ISO3200 14-bit has the lowest noise (expected, since the A7 II is not 100% ISO-invariant from ISO100 to ISO3200)
- ISO100 12-bit shows clear posterization
- ISO100 14-bit is a bit noisier than ISO3200 and about as noisy as ISO100 12-bit, but shows no posterization


RAW histograms:
- ISO100 compressed 12-bit has barely 32 filled levels
- ISO100 uncompressed 14-bit has 128 filled levels. Not many, but enough to avoid posterization




Conclusion: 14-bit uncompressed is recommended if strong shadow processing is going to be applied. It seems the posterization is not caused by the compressed vs uncompressed variable, but simply by the number of available RAW levels (12 vs 14 bits).

Probably at higher ISOs the 14 vs 12-bit advantage vanishes, as noise will prevail over the lack of levels.
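For anyone wondering what the "+5EV in pp" push actually does to the linear RAW data, here is a minimal sketch (hypothetical values, not the code of any real converter): each stop is a doubling, so a 5-stop push multiplies every linear value by 2^5 = 32 and then clips to the output range. With so few distinct shadow levels in the file, that x32 stretch is exactly what makes the gaps between levels visible.

    #include <stdio.h>

    int main(void)
    {
        unsigned short raw = 37;                       /* a deep-shadow RAW level (made up) */
        unsigned int pushed = (unsigned int)raw << 5;  /* +5 EV on linear data = x32 */
        if (pushed > 65535) pushed = 65535;            /* clip to a 16-bit output range */
        printf("raw %u -> pushed %u\n", raw, pushed);
        return 0;
    }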

Regards
« Last Edit: May 20, 2016, 09:10:55 pm by Guillermo Luijk »
Logged

Pete Berry

  • Sr. Member
  • ****
  • Offline
  • Posts: 445
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #1 on: May 21, 2016, 02:05:30 pm »

Just wanted to share a quick test to demonstrate the advantage of the 14-bit uncompressed RAW option (available on the Mark II A7 models, not on the first generation) when strongly lifting the shadows (...) Conclusion: 14-bit uncompressed is recommended if strong shadow processing is going to be applied. It seems the posterization is not caused by the compressed vs uncompressed variable, but simply by the number of available RAW levels (12 vs 14 bits).

I'm at a loss trying to see the "posterization" in the shadows you are seeing. What I see is the big difference in chroma noise between the 12-bit 5-stop pushed ISO 100 shot and the other two - not the compressed gradation that produces posterization. At 12 bits, with its 4096 levels, you have 16x the gradation levels of 8-bit JPEG.

Taking your JPG images into PS, the histograms for all of them show a full 256 levels. And after applying chroma NR to all, still no posterization to my eyes. My 10" 1080HD tablet screen shows images identical to my 24" sRGB 8-bit 1080HD desktop, which will definitely show posterization on screen in some difficult colors that 16-bit ProPhoto RGB prints eliminate.

Pete
« Last Edit: May 21, 2016, 02:11:07 pm by Pete Berry »
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #2 on: May 21, 2016, 03:54:12 pm »

I'm at a loss trying to see the "posterization" in the shadows you are seeing. (...) At 12 bits, with its 4096 levels, you have 16x the gradation levels of 8-bit JPEG.

Maybe a 400% zoom helps, but the posterization is clearly visible at 100% as well.

ISO100 12 bits + 5EV: this is posterization (insufficient levels to create a gradient, including contiguous pixels clipped to black and other contiguous pixels sharing the same values):




ISO100 14 bits + 5EV: this is not posterization (few levels but still enough to make noise dither any visible posterization):




A 12-bit RAW file has 4096 levels only when all of them are filled with information. That is not the case here: 5 stops of underexposure means that at least the top 5 stops of the range are left empty. In this particular case, as the RAW histogram shows, the ISO100 +5EV shot has only 32 levels containing information for the entire scene, which is far from 4096.
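To put rough numbers on it (a back-of-the-envelope calculation, ignoring the black-level offset): each stop of underexposure halves the highest RAW value reached, so after 5 stops at most 4096 / 2^5 = 128 of the 4096 levels remain available for the whole scene, and the deep shadows shown in the crops occupy only a fraction of those, which is consistent with the 32 filled levels in the histogram.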

Regards





« Last Edit: May 21, 2016, 03:58:35 pm by Guillermo Luijk »
Logged

digitaldog

  • Sr. Member
  • ****
  • Online
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #3 on: May 21, 2016, 03:56:14 pm »

Wow, fascinating, thanks for sharing.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Zorki5

  • Sr. Member
  • ****
  • Offline
  • Posts: 486
    • AOLib
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #4 on: May 21, 2016, 09:23:06 pm »

My take from this:

One more proof that 14 bits are indeed 14 bits of information, not the same 12 bits saved in a 14-bit-format file.

Other than that, I can't draw any conclusions; I don't see compression artifacts; and yes, the 12-bit image does look noisier (for whatever reason!)
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline
  • Posts: 1995
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #5 on: May 21, 2016, 10:31:27 pm »

One more proof that 14 bits are indeed 14 bits of information, not the same 12 bits saved in a 14-bit-format file.
and now switch the Sony to continuous shooting mode  ;D
Logged

Zorki5

  • Sr. Member
  • ****
  • Offline
  • Posts: 486
    • AOLib
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #6 on: May 21, 2016, 10:51:02 pm »

and now switch Sony to continuous shooting mode  ;D

Oh, I don't even have to do that to enjoy... err, the compactness of 12 bits -- I shoot an a6000.  :)
Logged

Pete Berry

  • Sr. Member
  • ****
  • Offline
  • Posts: 445
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #7 on: May 21, 2016, 10:56:01 pm »

Yes, 400% does show it up rather well! Looks a lot like JPG compression. What's the file size difference between the 12-bit compressed and standard 14-bit RAWs?

I did a similar quick test with my GH4, comparing 10-bit electronic-shutter RAWs with the standard shutter's 12-bit depth. There's a huge difference in the -5/+5 EV pushed files: the 12-bit shows mild banding and moderate noise; the 10-bit shows much more chroma and luminance noise obscuring detail, but no banding or micro-posterization that I could see.

At uncompressed ISO 3200, though, there's much less difference, with the 10-bit showing somewhat more difficult chroma noise to remove and slightly diminished detail at 200%, but no shadow banding.

So it seems pretty clear to me that it's the compression in the A7 II 12-bit RAWs rather than the diminished bit depth causing the posterization.

Pete
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #8 on: May 22, 2016, 04:04:48 am »

We are talking about the A7 II here. I have two reasons to think this posterization has nothing to do with the compression algorithm (which just removes redundant information from the highlights), but rather with the fact that we are dealing with 12-bit linear data:


1. NUMBER OF FILLED LEVELS IN THE 12-BIT RAW FILES

RAW histogram (portion of the whole range):



Just counting used levels in the whole 12-bit range, we find 4 zones in the decoded values:
  • From 128 to 801, all levels used -> 674 levels (128 is the black level for this camera in 12-bit mode; it becomes 512 in 14-bit mode)
  • From 802 to 1424, half the levels used -> (1424-802)/2 = 311 levels
  • From 1427 to 2023, one out of every 4 used -> (2023-1427)/4 = 149 levels
  • From 2029 to 4101, one out of every 8 used -> (4101-2029)/8 = 259 levels
Total number of used levels: 674+311+149+259 = 1,393 levels (far from the maximum 4,096)
In terms of encoding efficiency, the A7 II in 12-bit compressed mode is a log2(1393) = 10.44-bit camera.

In the shadows there seems to be no loss from the compression, but we're still restricted to 12-bit linear density. This answers a question I asked in the forum some months ago:

"What I do not fully understand is that if one now develops this RAW file with DCRAW (gamma 1.0 output), the levels spread in the final image in the same way as in the decoded RAW file, without any compression curve applied. This shocks me because, given the high DR of this sensor, if the decoded values are already linear in a 12-bit range (of which only 10.44 bits are actually used), how can it avoid shadow posterization when lifting the shadows? And if they are not linear, how can the output from DCRAW not linearize them?"

The answer is clear to me now that I have the camera: indeed there is posterization (call it a visible lack of levels if you wish), something I never expected the clever engineers at Sony to let happen.

Sony's 12-bit compression algorithm could be much more efficient (at a CPU cost) by becoming non-linear: put extra levels in the deep shadows and take them away from the highlights, where there is still room for a reduction with no IQ penalty. The very efficient 8-bit compression in the Leica M8 does this kind of clever non-linear encoding, managing to keep all the information from its sensor in 8-bit non-linear RAW files (just 256 different levels).
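The general idea behind that kind of encoding is plain companding: encode through a curve that keeps shadow levels nearly 1:1 and merges highlight levels where photon noise hides the loss, then decode through the inverse curve. A minimal square-root-style sketch (illustrative only, not Sony's or Leica's actual tables):

    #include <math.h>

    /* Encode a 14-bit linear value (0..16383) into 8 bits with a square-root
       curve: shadow levels stay nearly distinct, highlight levels get merged. */
    static unsigned char encode_sqrt(unsigned short lin14)
    {
        return (unsigned char)(255.0 * sqrt(lin14 / 16383.0) + 0.5);
    }

    /* Decode back to an approximate 14-bit linear value. */
    static unsigned short decode_sqrt(unsigned char code8)
    {
        double v = code8 / 255.0;
        return (unsigned short)(16383.0 * v * v + 0.5);
    }

Any monotonic curve with finer steps in the shadows would do; the square root is just the simplest one to write down.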


2. 12-BIT RAW DEVELOPMENT FROM 14-BIT UNCOMPRESSED RAW FILES

To explain the fallacy of 14-bit RAW files in some cameras (Canon 40D), and the real need for them in some others (Pentax K5), some time ago I pushed a RAW file from the Pentax K5 (Sony sensor) by 5 stops after decimating its 14 bits to 12 prior to demosaicing. This is the same as having a 12-bit uncompressed RAW file. The result was exactly the same kind of posterization:

14-bits regular RAW development:



12-bit RAW development:


The article is here: DO RAW BITS MATTER?.


---


So my conclusion is that Sony's compression doesn't affect image quality; it is simply that 12 linear bits have not been enough for the high dynamic range of these sensors since their debut. The compression just makes the files smaller, so maybe a compressed 14-bit format would be the best trade-off:
- ISO100 12-bit compressed: 24MB
- ISO100 14-bit uncompressed: 48MB
- ISO3200 14-bit uncompressed: 48MB
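Those sizes are consistent with the formats (rough arithmetic, and my recollection of the ARW container, so take it as an assumption): 24.3 MP stored as 16-bit words is about 24.3e6 x 2 bytes ≈ 48.6 MB uncompressed, while the lossy compressed format, if I remember it right, packs each group of 16 same-color pixels into 128 bits, i.e. a fixed 8 bits per pixel ≈ 24 MB.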

If someone wants to play with the RAW files:

http://www.guillermoluijk.com/davidcoffin/

Regards
« Last Edit: May 22, 2016, 08:29:24 am by Guillermo Luijk »
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #9 on: May 22, 2016, 04:39:54 am »

Other than that, can't draw any conclusions; do not see compression artifacts; and yes, 12-bit image does look more noisy (for whatever reason!)

Both the ISO100 compressed 12-bit and the ISO100 uncompressed 14-bit had the same noise (SNR) at capture time because they shared exposure and ISO gain. But after the RAW encoding, the ISO100 compressed 12-bit becomes noisier because of the quantization rounding. Posterization is just the visible result of quantization errors, and quantization is a form of noise.
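A quick way to see quantization as noise (a toy simulation, nothing from the camera): quantize a smooth ramp with a coarse step q and measure the RMS error, which for rounding to the nearest step comes out close to the textbook q/sqrt(12):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double q = 4.0;        /* quantization step, e.g. dropping 2 bits */
        double sum_sq = 0.0;
        int n = 100000, i;

        for (i = 0; i < n; i++) {
            double x  = i * 0.01;                  /* smooth input ramp */
            double xq = q * floor(x / q + 0.5);    /* round to the nearest step */
            double e  = xq - x;
            sum_sq += e * e;
        }
        printf("RMS quantization error: %.3f (q/sqrt(12) = %.3f)\n",
               sqrt(sum_sq / n), q / sqrt(12.0));
        return 0;
    }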

Regards
« Last Edit: May 22, 2016, 05:02:00 am by Guillermo Luijk »
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #10 on: May 22, 2016, 08:30:06 am »

Maybe a 400% zoom helps, but the posterization is clearly visible at 100% as well. (...) In this particular case, as the RAW histogram shows, the ISO100 +5EV shot has only 32 levels containing information for the entire scene, which is far from 4096.

Hi Guillermo,

Thanks for doing this.  I am having a little difficulty seeing posterization in those images.  A couple of comments:

1) Can you repeat the test at 12 and 14 bits both uncompressed to get rid of Sony's funny compression variable?  I wouldn't be surprised if some of what we see is delta encoding of the noise.
2) The threshold for visibility of posterization seems to be around 1 ADU of random noise dithering, independently of where the noise comes from. In that case one could expect blocking and posterization in the deepest shadows of the ISO100 12-bit image, because according to Bill Claff's site it would in theory have a minimum (read) noise of 0.35 12-bit ADU; I would not expect it in the ISO100 14-bit image, because its minimum random noise would be 1.4 14-bit ADU (the same read noise, just expressed in units four times finer). Nor in the ISO3200 12-bit image, because its minimum random noise would be 2.6 12-bit ADU. I would not expect to see it anywhere there is enough signal for photon noise to provide the dithering.
3) Empty stops are perfectly ok in the presence of enough dithering.  Just because there are more levels does not mean that there is more information in the data.  See here for a demonstration.
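In the same spirit as that demonstration (a toy sketch of points 2 and 3, not Jack's linked example): quantize a slow ramp with a coarse step, once as it is and once with roughly one step of random noise added first. Without the noise the output is a hard staircase; with the noise the quantized values fluctuate around the true ramp, so on average the gradient survives.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Crude zero-mean Gaussian-ish noise (central limit of 12 uniforms). */
    static double noise(double sigma)
    {
        double s = 0.0;
        int k;
        for (k = 0; k < 12; k++) s += rand() / (double)RAND_MAX;
        return (s - 6.0) * sigma;
    }

    int main(void)
    {
        const double q = 8.0;                 /* quantization step (ADU) */
        int i;
        for (i = 0; i <= 40; i++) {
            double x = 32.0 + i * 0.5;                             /* slow ramp, 32..52 ADU */
            double hard     = q * floor(x / q + 0.5);              /* no dither: a staircase */
            double dithered = q * floor((x + noise(q)) / q + 0.5); /* ~1 step of noise first */
            printf("%6.1f  %6.1f  %6.1f\n", x, hard, dithered);
        }
        return 0;
    }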

Jack
« Last Edit: May 22, 2016, 08:33:30 am by Jack Hogan »
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #11 on: May 22, 2016, 08:51:40 am »

1) Can you repeat the test at 12 and 14 bits both uncompressed to get rid of Sony's funny compression variable?  I wouldn't be surprised if some of what we see is delta encoding of the noise.

Hi Jack, unfortunately the funny Sony firmware 3.0 only allows two options: 'Compressed' (which is 12-bit compressed RAW) and 'Uncompressed' (which is 14-bit uncompressed RAW). There is no uncompressed 12-bit nor compressed 14-bit.

If you (or anyone in the thread) can compile DCRAW, we can reproduce 12-bit uncompressed by developing a 14-bit uncompressed RAW file after adding the marked line to DCRAW's scale_colors() function, which decimates the RAW data by 2 bits:

    for (i=0; i < size*4; i++) {
        val = image[0][i];
        if (!val) continue;
        val = (val >> 2) << 2;   /* added line: zero the two LSBs, i.e. decimate the 14-bit data to 12 bits */
        val -= black;
        val *= scale_mul[i & 3];
        image[0][i] = CLIP(val);
    }

Unfortunately I don't have my PC ready now to compile DCRAW.
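(For anyone who wants to try: if I remember right, DCRAW builds with something like "gcc -o dcraw -O4 dcraw.c -lm -DNODEPS", and the neutral linear development used in this test corresponds to "dcraw -v -4 -T yourfile.ARW", with yourfile.ARW standing in for one of the RAWs linked above.)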

Regards
« Last Edit: May 22, 2016, 08:55:16 am by Guillermo Luijk »
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #12 on: May 22, 2016, 01:48:13 pm »

I am afraid compilation is not one of my skills, Guillermo :(
Logged

Zorki5

  • Sr. Member
  • ****
  • Offline
  • Posts: 486
    • AOLib
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #13 on: May 22, 2016, 02:25:16 pm »

But after the RAW encoding, the ISO100 compressed 12-bit becomes noisier because of the quantization rounding. Posterization is just the visible result of quantization errors, and quantization is a form of noise.

The more rounding, the more posterization and the less noise. Always.

Do this mental experiment: start decreasing the number of colors in your image (in post) until you have only one. At that point, posterization will be 100% and noise will be 0%, so to speak. Question: at which point did the noise stop increasing and start decreasing? The correct answer is "at no point"; it was decreasing all along.

Even if, while dropping bits, at some point you observe an effect that might look to you like an increase in noise, that would not be noise; that would be something else.

As a side note: when you blow your image up to more than 100%, try using "nearest neighbor" or some other scaling algorithm that does no (or less) smoothing (i.e. not bicubic or Lanczos); this way, what's really going on becomes more apparent.
Logged

Zorki5

  • Sr. Member
  • ****
  • Offline
  • Posts: 486
    • AOLib
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #14 on: May 22, 2016, 02:31:59 pm »

    val = (val >> 2) << 2;

Or:

    val &= ~3;

 ;)
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #15 on: May 22, 2016, 02:37:25 pm »

I am having a little difficulty seeing posterization in those images.

Hi Jack,

I agree, but that is probably because a more accurate term to describe the issue would be "Quantization errors".

One could call it local posterization, but there is already enough confusion between "posterization" and "banding". I'd stick to "quantization errors" for Guillermo's clear examples, especially because they mostly show the effect of the lack of levels to represent the lowest bits.

Of course, RAW processors also play a role in how they convert those truncated values in the shadows to mid-tones. Mid-tones and highlights have enough shot noise to dither the gradients, so that the lack of a few bits of precision doesn't become too obvious.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Zorki5

  • Sr. Member
  • ****
  • Offline
  • Posts: 486
    • AOLib
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #16 on: May 22, 2016, 02:42:53 pm »

I agree, but that is probably because a more accurate term to describe the issue would be "Quantization errors".

+1

Agree with the rest of the message as well, but what's important for the topic of this discussion is in the above quote.
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #17 on: May 22, 2016, 02:45:03 pm »

Or:

    val &= ~3;

 ;)
C guys always wanting to obfuscate code

www.guillermoluijk.com

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #18 on: May 22, 2016, 02:47:49 pm »

OK, let's call the issue "quantization errors" rather than "posterization"; quite a semantic discussion IMO.

The point is that Sony is providing truly lossy RAW files, with no alternative option in the Mark I series of the A7.

Regards

www.guillermoluijk.com

Zorki5

  • Sr. Member
  • ****
  • Offline
  • Posts: 486
    • AOLib
Re: Compressed 12-bit vs uncompressed 14-bit shadows on the A7 II
« Reply #19 on: May 22, 2016, 02:48:55 pm »

C guys always wanting to obfuscate code

Yep, Duff's device has been my inspiration for decades  :D
Logged