
Author Topic: Is IIQ 16bit pointless at higher ISO?  (Read 6201 times)

landscapephoto

  • Sr. Member
  • ****
  • Offline
  • Posts: 623
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #20 on: October 15, 2016, 01:58:52 am »

When you shoot IIQ L 16 bit, IIQ L, and IIQ S, there are color differences. In the few tests I've done, the difference is very subtle, but it is there. Whenever I do this type of test, I feel like I can see it, and it feels to me like the 14 bit captures are splashing the color around while the 16 bit capture is differentiating more effectively and accurately. Today I took a few quick shots at ISO 1600. The color differences are very subtle, but I can see them.

It is more likely that the effect you observe comes from inaccuracies in handling the differences between 14 and 16 bits in colour profiles.
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #21 on: October 15, 2016, 02:42:24 am »

Hi Steve,

I can see the differences but it is very hard to find an explanation for their existence.

Landscapephoto's suggestion that it is a difference in the handling of colour profiles is interesting. But little is known about Capture One internals. Bart may have some interesting input.

Best regards
Erik


Ok, thank you and everyone for pitching in on that question. But I am aware of the relationship between bit depth and dynamic range. My question was specifically: why is dynamic range the only thing being discussed with regard to bit depth? For years I've heard from clients who shoot both 35mm DSLR and medium format and complain about the color from the 35mm DSLR, about the "global color response". Some have attributed this to CMOS vs CCD. I have not felt this to be the case. Instead, I attribute it to 14 bit vs 16 bit (if not completely, then at least to a significant degree).

When you shoot IIQ L 16 bit, IIQ L, and IIQ S, there are color differences. In the few tests I've done, the difference is very subtle, but it is there. Whenever I do this type of test, I feel like I can see it, and it feels to me like the 14 bit captures are splashing the color around while the 16 bit capture is differentiating more effectively and accurately. Today I took a few quick shots at ISO 1600. The color differences are very subtle, but I can see them. Inevitably, these differences can be hidden or magnified as one shoots many different scenes. Screenshots are below.

Keep in mind, for me, it's a moot point. I don't have any need to try and optimize storage or increase capture rate by shooting 14 bit files. I want the full 16 bits, no matter how subtle a difference there is.


Steve Hendrix/CI
Logged
Erik Kaffehr
 

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #22 on: October 15, 2016, 06:05:52 am »

What if I'm not increasing ISO ... a low dynamic range scene which I would "overexpose" to get ETTR, so I end up with a histogram empty of any data on the left 1/4-1/3. Perhaps even if it isn't exposed to the right (example attached), there still isn't any data on the left and in this case on the right side of the histogram.

In that case, it helps to shoot at 16-bit precision. When exposure is high, or even ETTR, the Accuracy of the signal capture is relatively high (a high S/N ratio). Then increasing the precision at which the levels are recorded helps to create more robust data for Raw conversion (similar to 8-b/ch versus 16-b/ch postprocessing, but instead here we compare 14-bit to 16-bit encoding).

The bits are not only used to cover the DR, but also to encode the more Accurate signal levels more precisely. A 1-bit ADC can record a huge DR, a 0 for deepest black and a 1 for the lightest white, but without any precision for intermediate tones. With 14 bits we can encode luminance differences as small as 1/2^14 = 1/16384, and with 16 bits differences as small as 1/2^16 = 1/65536, i.e. 4 times as fine.
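To put a number on that step-size difference, here is a minimal Python sketch (illustrative only: the normalised test tones are arbitrary, and this models just the encoding grid, not any camera's actual pipeline):

Code:
# Worst-case rounding error when encoding a normalised (0..1) tone at
# 14-bit vs 16-bit precision.
def quantise(x, bits):
    levels = 2 ** bits - 1          # number of steps above zero
    return round(x * levels) / levels

tones = [i / 99999 for i in range(100000)]   # a fine sweep of test tones

for bits in (14, 16):
    worst = max(abs(quantise(t, bits) - t) for t in tones)
    print(f"{bits}-bit: step = 1/{2 ** bits - 1}, worst rounding error = {worst:.2e}")
# The 16-bit grid is 4x finer, so its worst-case error is about 4x smaller.

The worst-case rounding error comes out roughly four times smaller on the 16-bit grid, which is the factor quoted above.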

Quote
Does the 16bit file offer anything at all in flexibility in post processing?

Yes, but it already helps earlier, by making the demosaicing and Raw conversion more precise.

Quote
Does the process of making a 16bit tiff from a 14bit raw produce an identical result or is there perhaps some small  gains to be had from making a 16bit tiff from a 16bit raw?

This is post-capture. It doesn't help the demosaicing and Raw conversion much, since no higher-precision data is added; the range is only stretched and divided more finely. The 16 bits do help when the demosaiced data is manipulated, e.g. gamma precompensated or contrast adjusted, because intermediate tones have up to 4x higher precision (which helps during cascaded processing steps, but is overkill for output). Raw converters like Capture One already do most of their calculations at higher internal bit depths for extra precision.
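A rough way to see why intermediate precision matters in cascaded edits (a hedged sketch: the gamma and contrast numbers are made up, and this is not how Capture One actually works internally):

Code:
# Apply a gamma curve, round the intermediate result to a given bit depth,
# then apply a mild contrast adjustment; compare with a full-float reference.
def quantise(x, bits):
    levels = 2 ** bits - 1
    return round(x * levels) / levels

def edit(x, bits=None):
    y = x ** (1 / 2.2)                                 # gamma pre-compensation
    if bits is not None:
        y = quantise(y, bits)                          # intermediate rounding
    return min(max((y - 0.5) * 1.2 + 0.5, 0.0), 1.0)   # mild contrast tweak

xs = [i / 9999 for i in range(10000)]
reference = [edit(x) for x in xs]                      # no intermediate rounding
for bits in (14, 16):
    worst = max(abs(edit(x, bits) - r) for x, r in zip(xs, reference))
    print(f"intermediate rounding at {bits} bits -> worst-case error {worst:.2e}")

With only two steps the errors are tiny either way; the point is that they scale with the intermediate step size, so longer chains of adjustments at lower precision drift further from the full-precision result.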

Quote
My unscientific "logic" based on this discussion tells me there is no point in shooting 16bit on scenes with lower dynamic range, or probably when pushing the ISO past 400 or so, since all of the available dynamic range can be encoded precisely/accurately enough with 14bits.

DR is not the sole reason to use 16-bit encoding. The old analogy of a ladder applies. The height between the first and last rung is the DR (and it can be a long or a short ladder), while the bit depth sets the number of rungs from the lowest to the highest and so determines how precisely intermediate levels can be reached and how easy it is to go from one level to the next.

It would require more in-depth analysis to establish whether the IQ3 100's Analog to Digital Converters (ADCs) achieve a taller ladder, or only add more rungs to it, but its circuitry does warrant the use of 16-bit encoding, since it effectively expands the recordable DR beyond 14 bits. More importantly, it also increases precision, which helps in the demosaicing of the ETTR signal levels and produces more solid files for postprocessing.

Cheers,
Bart
« Last Edit: October 15, 2016, 06:19:15 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #23 on: October 15, 2016, 06:13:47 am »

It is more likely that the effect you observe comes from inaccuracies in handling the differences between 14 and 16 bits in colour profiles.

Yes, that and Demosaicing (they are linked). The ADC settings may also play a role, but that would need further analysis.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: ETTR: SNR, Not #levels
« Reply #24 on: October 15, 2016, 09:09:22 am »

In that case, it helps to shoot at 16-bit precision. When exposure is high, or even ETTR, the Accuracy of the signal capture is relatively high (a high S/N ratio). Then increasing the precision at which the levels are recorded helps to create more robust data for Raw conversion (similar to 8-b/ch versus 16-b/ch postprocessing, but instead here we compare 14-bit to 16-bit encoding).

Bart,

This is a rare instance in which I do not entirely agree with you. With a low-DR, high-key image, using 16 bits does allow capturing high-bit data with ETTR. In this case high DR is not needed, since the DR of the scene is low; however, it is the ETTR exposure that collects more photo-electrons and improves the SNR. ETTR has more to do with SNR than with the number of levels in the high-bit data.

However, I'm not sure the increased precision is all that helpful, since the high-bit data are dithered by shot noise. It does not make sense to quantize the data much finer than the noise in the signal, and shot noise is highest in the highlights, as Emil Martinec explains here. As he explains, the Nikon lossy NEF compression discards superfluous highlight data with no visual loss of IQ.

Bill 
« Last Edit: October 15, 2016, 09:37:29 am by bjanes »
Logged

Steve Hendrix

  • Sr. Member
  • ****
  • Offline
  • Posts: 1662
    • http://www.captureintegration.com/
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #25 on: October 15, 2016, 10:18:29 am »

Hi Steve,

I can see the differences but it is very hard to find an explanation for their existence.

Landscapephoto's suggestion that it is a difference in the handling of colour profiles is interesting. But little is known about Capture One internals. Bart may have some interesting input.

Best regards
Erik


This perhaps is an example of my priorities in terms of the subject matter. I am not so much concerned with the why (I am interested, but not as interested) as I am with the what. And regardless of the reason, the color from the 16 bit files - at any ISO setting - appears superior. Whatever the reason, the 16 bit capability yields the best possible result. So unless I need faster capture rates or have a storage limitation, I see no reason not to shoot 16 bit; in fact, for anyone who wants the best possible quality, I see every reason to prefer it.


Steve Hendrix/CI
« Last Edit: October 15, 2016, 10:53:14 am by Steve Hendrix »
Logged
Steve Hendrix • 404-543-8475 www.captureintegration.com (e-mail Me)
Phase One | Leaf | Leica | Alpa | Cambo | Sinar | Arca Swiss

Paul2660

  • Sr. Member
  • ****
  • Offline
  • Posts: 4067
    • Photos of Arkansas
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #26 on: October 15, 2016, 11:03:52 am »

Lr sees the files on the card and shows the image, unlike the 16bit files. I assume it will import them, though I didn't actually follow through. But then I opened them in ACR and was able to manipulate them fine.

Wayne,

I did a few tests this morning. You are right: LR sees the 14 bit files and will import them; however, it uses a "matrix" color profile which is way off from the real colors. I was also not able to select a different profile, as I wanted to try a different CMOS profile. I may be doing something wrong in selecting a different profile, as I don't do that very often. But the "matrix" is really off on the yellows and greens.

Back to the waiting game.

Paul C
Logged
Paul Caldwell
Little Rock, Arkansas U.S.
www.photosofarkansas.com

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: ETTR: SNR, Not #levels
« Reply #27 on: October 15, 2016, 11:13:40 am »

Hi Bill,

It seems that Steve can demonstrate a clear difference, and that is what really matters.

I also don't buy Bart's explanation.

1) Highlight and midtone data are extremely noisy in absolute numbers. Say that FWC is 64000 e-. If midtones are exposed 3 EV below FWC, they hold 8000 e-. Those 8000 e- carry shot noise with a sigma of about 89 e-, so an absolutely uniform grey would vary between roughly 7911 and 8089; to be precise, about 68% of the pixels would fall in that range. The quantisation step of a 16-bit representation is about 1 e-, a tiny fraction of that spread, absolutely negligible. Even the 12-bit step of about 16 e- is still well below the shot noise.

Going six stops below FWC, we would have 1000 e- and sigma would be about 32 e-, so that near-black area would vary between 968 and 1032. The 12-bit quantisation step of about 16 e- is still only about half the shot noise sigma.

So shot noise would mask any quantisation error by a large margin. (There is a quick simulation of this below, after point 2.)

2) Once the data are read in, in all probability they are automatically converted to 16 or 32 bits. Computers normally use data widths that are multiples of 8, so 8, 16 and 32 bits are natural for computers.
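A quick Monte Carlo check of the numbers in point 1 (a minimal sketch assuming the 64000 e- FWC above, pure shot noise and an ideal ADC; read noise is ignored):

Code:
import random

FWC = 64000  # assumed full-well capacity in electrons (the figure used above)

def patch_noise(mean_e, bits, n=100_000):
    """Std. dev. of a uniform patch after shot noise plus quantisation."""
    step = FWC / (2 ** bits)                      # electrons per raw level
    vals = []
    for _ in range(n):
        e = random.gauss(mean_e, mean_e ** 0.5)   # shot noise ~ sqrt(signal)
        vals.append(round(e / step) * step)       # quantise, convert back to e-
    m = sum(vals) / n
    return (sum((v - m) ** 2 for v in vals) / n) ** 0.5

for mean_e in (8000, 1000):                       # 3 EV and 6 EV below FWC
    print(f"signal {mean_e} e-, pure shot noise {mean_e ** 0.5:.0f} e-")
    for bits in (12, 14, 16):
        print(f"  {bits}-bit encoding: total noise {patch_noise(mean_e, bits):.1f} e-")

The measured noise barely changes between 12, 14 and 16 bits at these signal levels, which is the masking effect described above.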

To find out why the colour depends on whether the data is coded in 14 or 16 bits, we would need to analyse C1 internals. It would be interesting to see whether the difference is unique to C1 or whether it would also show up in an open-source raw converter like RawTherapee.

But, it is reality that matters. Whatever the explanation, it seems that using 16 bit is the safest choice.

Or, put another way: if someone would give me an IQ3-100MP, I would gladly invest in a bunch of 8 TB hard disks.

Best regards
Erik

Bart,

This is a rare instance in which I do not entirely agree with you. With a low-DR, high-key image, using 16 bits does allow capturing high-bit data with ETTR. In this case high DR is not needed, since the DR of the scene is low; however, it is the ETTR exposure that collects more photo-electrons and improves the SNR. ETTR has more to do with SNR than with the number of levels in the high-bit data.

However, I'm not sure the increased precision is all that helpful, since the high-bit data are dithered by shot noise. It does not make sense to quantize the data much finer than the noise in the signal, and shot noise is highest in the highlights, as Emil Martinec explains here. As he explains, the Nikon lossy NEF compression discards superfluous highlight data with no visual loss of IQ.

Bill
Logged
Erik Kaffehr
 

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #28 on: October 15, 2016, 04:11:30 pm »

Hi Paul,

If you really need LR you could make your own profiles. Both DNG Profile Editor and Color Checker Passport generate usable profiles. Or, you could invest some time in Anders Torger's DCamProf.

I don't know what Phase One gives up in 14 bit files. Steve's samples show a visible difference for which I have no good explanation.

I looked into a couple of IIQ L and IIQ S files from my P45+. "Here be dragons!" is all I can say…

Best regards
Erik

Wayne,

I did a few tests this morning. You are right: LR sees the 14 bit files and will import them; however, it uses a "matrix" color profile which is way off from the real colors. I was also not able to select a different profile, as I wanted to try a different CMOS profile. I may be doing something wrong in selecting a different profile, as I don't do that very often. But the "matrix" is really off on the yellows and greens.

Back to the waiting game.

Paul C
Logged
Erik Kaffehr
 

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #29 on: October 16, 2016, 03:48:52 am »

Hi,

Steve presents very good empirical evidence that 16 bits vs. 14 bits affects colour rendition. It is quite obvious it matters.

IMHO it doesn't sit well with theory, but a good experiment can prove a theory wrong. You can never prove a theory with an experiment, though; theories are very hard to prove…

Still I find it interesting to discuss this.

Below is a screen dump showing a dark grey patch from a 16-bit IQ3100MP file, taken from DigitalTransitions' testing:

The nice gaussians represent a single tone of grey. In the absence of shot noise this would be just a spike, but the nature of light gives these variations.

I would love to have an IQ3100MP, but I don't have one. I can, however, show what happens going from 14 bit to 12 bit on my Sony A7rII. The image below is the 14 bit version.


While this one is 12 bits. Note that the shapes are the same, but the sampling is sparser. Keep in mind that all these shapes correspond to a single colour.


It would be very interesting to check out a good ColorChecker shot with the IQ3100MP saved in different raw formats.
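For anyone without the camera, the same kind of histogram experiment can be sketched in a few lines (the patch mean and noise level are made-up numbers; only the bit depths matter):

Code:
import random

def occupied_levels(mean, sigma, bits, n=100_000):
    """Count how many distinct raw levels a single noisy grey patch lands on."""
    full_scale = 2 ** bits - 1
    hit = set()
    for _ in range(n):
        v = random.gauss(mean * full_scale, sigma * full_scale)
        hit.add(max(0, min(full_scale, round(v))))
    return len(hit)

mean, sigma = 0.25, 0.002      # one grey tone with a little noise, normalised
for bits in (12, 14, 16):
    print(f"{bits}-bit: patch occupies {occupied_levels(mean, sigma, bits)} raw levels")
# Same gaussian in every case; the lower bit depths just sample it on a
# coarser grid, which shows up as the sparser histogram described above.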

Best regards
Erik


« Last Edit: October 16, 2016, 04:01:08 am by ErikKaffehr »
Logged
Erik Kaffehr
 

Wayne Fox

  • Sr. Member
  • ****
  • Offline
  • Posts: 4237
    • waynefox.com
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #30 on: October 17, 2016, 02:16:11 pm »


DR is not the sole reason to use 16-bit encoding. The old analogy of a ladder applies. The height between the first and last rung is the DR (and it can be a long or a short ladder), while the bit depth sets the number of rungs from the lowest to the highest and so determines how precisely intermediate levels can be reached and how easy it is to go from one level to the next.


I would agree, but it does seem the new format benefits dynamic range pretty dramatically, which implies it has made the ladder taller rather than just placing the rungs more closely together. But certainly both might be possible.
Logged

Wayne Fox

  • Sr. Member
  • ****
  • Offline
  • Posts: 4237
    • waynefox.com
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #31 on: October 17, 2016, 02:24:00 pm »


This perhaps is an example of my priorities in terms of the subject matter. I am not so much concerned with the why (I am interested, but not as interested) as I am with the what. And regardless of the reason, the color from the 16 bit files - at any ISO setting - appears superior. Whatever the reason, the 16 bit capability yields the best possible result. So unless I need faster capture rates or have a storage limitation, I see no reason not to shoot 16 bit; in fact, for anyone who wants the best possible quality, I see every reason to prefer it.


Steve Hendrix/CI
I certainly feel similarly, although I would point out that your examples just showed a difference, not necessarily a better capture, and I would never shoot anything at ISO 1600. I'm just frustrated by C1's cataloging tools, as well as some of the limitations of its local adjustments, and so was curious whether shooting 14 bit would lose anything in circumstances where the dynamic range is manageable. But at this point I also don't think I want to expend the energy and effort to create profiles for the 14bit files so LR can take advantage of them, since most of my conversions start from C1 anyway; only a few are done inside of Lr.

I've managed to kludge together a decent workflow at this point. It doesn't really seem to be an answerable question, which leaves me in the same position: better to stick with 16bit, which I know won't compromise anything.
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #32 on: October 17, 2016, 02:25:04 pm »

Hi Wayne,

The IQ3-100MP has a dynamic range very close to 15 EV. Sometimes it will be beneficial.

Phase One has always claimed to deliver 16 bits of data, so I cannot really understand why they would need a new raw format now that they actually have a sensor capable of delivering 15 bits of data.

But it is a good thing they have 15 bits of DR and a new format that actually supports it.

Does it matter? I guess so, in a few very rare cases… There may be some other improvements on the sensor side that matter a lot more.
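The back-of-the-envelope relation between DR in stops and container size (a small sketch that treats DR simply as log2(full scale / smallest step), glossing over read-noise subtleties):

Code:
import math

# A linear B-bit container spans about B stops between its smallest step and
# full scale (log2(2^B / 1) = B), so roughly 15 EV of DR does not fit in 14 bits.
for bits in (14, 16):
    stops = math.log2(2 ** bits)              # equals B; shown for clarity
    verdict = "covers" if stops >= 15 else "cannot fully cover"
    print(f"{bits}-bit linear encoding spans ~{stops:.0f} EV -> {verdict} a ~15 EV sensor")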

Best regards
Erik

I would agree, but it does seem the new format benefits dynamic range pretty dramatically, which implies it has made the ladder taller rather than just placing the rungs more closely together. But certainly both might be possible.
Logged
Erik Kaffehr
 

Steve Hendrix

  • Sr. Member
  • ****
  • Offline
  • Posts: 1662
    • http://www.captureintegration.com/
Re: Is IIQ 16bit pointless at higher ISO?
« Reply #33 on: October 17, 2016, 05:20:26 pm »

I certainly feel similarly, although I would point out that your examples just showed a difference, not necessarily a better capture, and I would never shoot anything at ISO 1600. I'm just frustrated by C1's cataloging tools, as well as some of the limitations of its local adjustments, and so was curious whether shooting 14 bit would lose anything in circumstances where the dynamic range is manageable. But at this point I also don't think I want to expend the energy and effort to create profiles for the 14bit files so LR can take advantage of them, since most of my conversions start from C1 anyway; only a few are done inside of Lr.

I've managed to kludge together a decent workflow at this point. It doesn't really seem to be an answerable question, which leaves me in the same position: better to stick with 16bit, which I know won't compromise anything.


In my experience, and to my eyes (perfect score on the Xrite Color IQ Challenge, does that count?), yes, the captures do show a difference, but what I see is that the 16 bit capture is the better capture, if color is an important element.


Steve Hendrix/CI

Logged
Steve Hendrix • 404-543-8475 www.captureintegration.com (e-mail Me)
Phase One | Leaf | Leica | Alpa | Cambo | Sinar | Arca Swiss