
Author Topic: 12-bit analog/digital converter vs. 14-bit  (Read 15605 times)

The View

  • Sr. Member
  • ****
  • Offline
  • Posts: 1284
12-bit analog/digital converter vs. 14-bit
« on: April 30, 2008, 12:04:25 am »

Canon and Nikon now have 14-bit A/D converters, while Sony, Pentax (the K20D is actually a downgrade from the higher-bit converter of the K10D), and Olympus still have 12-bit.

Theoretically, 14-bit gives many more colors.

But do you get to see those smoother gradients, or is this all for the tech/spec sheet?
Logged
The View of deserts, forests, mountains. Not the TV show that I have never watched.

SeanFS

  • Full Member
  • ***
  • Offline
  • Posts: 114
    • http://www.seanshadbolt.co.nz
12-bit analog/digital converter vs. 14-bit
« Reply #1 on: April 30, 2008, 12:55:47 am »

Quote
Canon and Nikon now have 14-bit A/D converters, while Sony, Pentax (the K20D is actually a downgrade from the higher-bit converter of the K10D), and Olympus still have 12-bit.

Theoretically, 14-bit gives many more colors.

But do you get to see those smoother gradients, or is this all for the tech/spec sheet?
[a href=\"index.php?act=findpost&pid=192598\"][{POST_SNAPBACK}][/a]


I can't see a huge difference between the 1Ds2 and 3; the colour does look smoother and fuller, but that could be the new DIGIC 3 processor. The difference does seem to be in recovery of highlights (maybe shadows too, but I haven't really looked there), where there are more numbers to play with in a 14-bit file. The first thing I did was pull back an overexposed sky, and unlike the 1Ds2, there was much more in the highlights to bring back.
Certainly the 1Ds2 is no less of a camera for being only 12-bit and has some ability in those areas as well. Canon does say there isn't much of a difference between 100 and 400 ASA under normal conditions.
Logged

Tony Beach

  • Sr. Member
  • ****
  • Offline
  • Posts: 452
    • http://imageevent.com/tonybeach/twelveimages
12-bit analog/digital converter vs. 14-bit
« Reply #2 on: April 30, 2008, 06:03:02 am »

Quote
The first thing I did was pull back an overexposed sky, and unlike the 1Ds2, there was much more in the highlights to bring back.
Certainly the 1Ds2 is no less of a camera for being only 12-bit and has some ability in those areas as well. Canon does say there isn't much of a difference between 100 and 400 ASA under normal conditions.

I also see a big difference between my D200 and D300 when it comes to dynamic range, but I see no discernible difference between the D300's 12-bit and 14-bit files in this regard.  While there are some negligible differences between 12-bit and 14-bit on the D300, it could be that this is more the result of how the data from the sensor is read than of the bit depth it is stored at: http://luminous-landscape.com/forum/index.php?showtopic=24877
Logged

01af

  • Sr. Member
  • ****
  • Offline
  • Posts: 296
12-bit analog/digital converter vs. 14-bit
« Reply #3 on: April 30, 2008, 08:35:58 am »

Human vision can distinguish approximately 100 shades of tone in each colour on average: a bit less (60-80) in the violet-blue and deep-red ranges (i.e. at the ends of the visible spectrum), and a bit more (120-130) in the yellowish-green range (i.e. at the middle of the visible spectrum). So under ideal viewing conditions we may just barely see a difference between 6 bits and 7 bits. The usual 8 bits already provide us with considerable headroom, by a factor of 2-3.

12 bits provide up to 4,096 shades of tone per colour, which is about 40× more than we can ever hope to see with our eyes. More bits mean more native dynamic range in the A-to-D converter, which is basically a good thing ... however, since we already have so many shades, we may just as well spread the range by multiplying. Not-too-small photosites, cleverly designed photosite read-out, A-to-D conversion, noise reduction, raw conversion, and image processing are far more significant to the final image quality than whether the A-to-D converter's width is 10, 12, or 14 bits.
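
A minimal sketch of the arithmetic behind those figures (Python; the ~100-shade figure is the per-colour average from above):

Code:
# Tonal levels per channel at various bit depths, compared with the
# roughly 100 shades per colour that human vision can distinguish.
for bits in (6, 7, 8, 12, 14):
    levels = 2 ** bits
    print(f"{bits:2d} bits -> {levels:5d} levels "
          f"(~{levels / 100:.1f}x the ~100 distinguishable shades)")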

-- Olaf
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20614
  • Andrew Rodney
    • http://www.digitaldog.net/
12-bit analog/digital converter vs. 14-bit
« Reply #4 on: April 30, 2008, 09:24:49 am »

Quote
More bits mean more native dynamic range in the A-to-D converter, which is basically a good thing ...

You sure about that?
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

sojournerphoto

  • Sr. Member
  • ****
  • Offline
  • Posts: 473
12-bit analog/digital converter vs. 14-bit
« Reply #5 on: April 30, 2008, 09:35:52 am »

Quote
You sure about that?
[a href=\"index.php?act=findpost&pid=192662\"][{POST_SNAPBACK}][/a]


More bits do give the potential to encode more dynamic range, as stated. However, the sensor may not be able to provide a clean signal over the full range available.

Is it better to have more? Not sure.
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20614
  • Andrew Rodney
    • http://www.digitaldog.net/
12-bit analog/digital converter vs. 14-bit
« Reply #6 on: April 30, 2008, 09:39:10 am »

Quote
More bits do give the potential to encode more dynamic range, as stated. However, the sensor may not be able to provide a clean signal over the full range available.

Is it better to have more? Not sure.
[a href=\"index.php?act=findpost&pid=192666\"][{POST_SNAPBACK}][/a]

Yes, it has the potential, IF the range is present. But the two are not interrelated (more bit depth doesn't mean more dynamic range).
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
12-bit analog/digital converter vs. 14-bit
« Reply #7 on: April 30, 2008, 10:58:36 am »

Quote
Human vision can distinguish approximately 100 shades of tone in each colour on average: a bit less (60-80) in the violet-blue and deep-red ranges (i.e. at the ends of the visible spectrum), and a bit more (120-130) in the yellowish-green range (i.e. at the middle of the visible spectrum). So under ideal viewing conditions we may just barely see a difference between 6 bits and 7 bits. The usual 8 bits already provide us with considerable headroom, by a factor of 2-3.

12 bits provide up to 4,096 shades of tone per colour, which is about 40× more than we can ever hope to see with our eyes. More bits mean more native dynamic range in the A-to-D converter, which is basically a good thing ... however, since we already have so many shades, we may just as well spread the range by multiplying. Not-too-small photosites, cleverly designed photosite read-out, A-to-D conversion, noise reduction, raw conversion, and image processing are far more significant to the final image quality than whether the A-to-D converter's width is 10, 12, or 14 bits.

-- Olaf


As Olaf's analysis demonstrates, the increased number of levels provided by going from 12 to 14 bits is of limited or no value, since the human eye cannot distinguish these additional levels. Moreover, the number of actual levels with current cameras is limited by noise. Even with perfect sensors and analog-to-digital converters, photon sampling noise (shot noise) will limit the number of discrete levels that can be distinguished in the image. Emil Martinec gives a good analysis in a thread in the DPReview Nikon D3 forum: http://forums.dpreview.com/forums/read.asp?forum=1021&message=27646854

Bit depth does impose an absolute limit on dynamic range, which is limited to 1 f/stop per bit, as explained by Sean McHugh on his web site. However, this maximal dynamic range is not achieved with current cameras because of noise. Also, this maximal DR has only one level in the darkest f/stop; in practice, you would most likely want more levels.
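
A small illustration of that limit (Python; assuming ideal, noiseless linear integer encoding):

Code:
# Levels falling in each f/stop under ideal linear encoding.  The
# brightest stop holds half of all 2**N levels; the darkest holds
# just one -- hence the cap of roughly one stop of DR per bit.
def levels_per_stop(bits):
    # the k-th stop below full scale spans codes 2**(bits-k-1)..2**(bits-k)
    return [2 ** n for n in range(bits - 1, -1, -1)]

for bits in (12, 14):
    print(f"{bits} bits: {levels_per_stop(bits)}")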

The ideal SNR of an ADC equals 6.02N + 1.76 dB, where N is the number of bits. A 12-bit ADC has an ideal SNR of 74 dB and a 14-bit ADC has an ideal SNR of 86 dB (6 dB represents a doubling of SNR). As Roger Clark explains in his digital sensor analysis, current 14-bit ADCs do not attain the ideal, since it is difficult to design an ADC approaching the ideal at the high conversion rates required for cameras like the Nikon D3 or Canon EOS-1D Mark III.
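
The same formula, worked through for the two bit depths in question:

Code:
# Ideal ADC signal-to-noise ratio: SNR = 6.02 * N + 1.76 dB.
def ideal_snr_db(n_bits):
    return 6.02 * n_bits + 1.76

for n in (12, 14):
    print(f"{n}-bit ADC: ideal SNR = {ideal_snr_db(n):.1f} dB")
# -> 12-bit: 74.0 dB, 14-bit: 86.0 dB (a 12 dB, i.e. two-doubling, gap)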

In summary, while a 14-bit ADC has theoretical advantages, these are not attained in practice with current ADCs.

Bill
Logged

EricV

  • Sr. Member
  • ****
  • Offline
  • Posts: 270
12-bit analog/digital converter vs. 14-bit
« Reply #8 on: April 30, 2008, 03:05:22 pm »

Added bits can in principle be used for two distinct purposes: extending the brightness range or providing finer gradation within a given brightness range.  

Nearly all digital cameras have sensors and electronics with inherently linear response, so the number of photons or electrons in a pixel, and the voltage to which this is converted, is directly proportional to the amount of light hitting the pixel.  Pixels have limited charge storage capacity, which limits the brightness range before digitization.  Digitization should set the maximum digitized signal close to the pixel charge storage capacity to avoid wasting dynamic range.

The camera manufacturer has the option of using extra bits to extend the maximum light level which can be digitized, provided he also increases the pixel charge storage capacity, or to cover the same light level range with finer granularity.  In either case, noise will likely limit the improvement made possible by the extra bits.  
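
A sketch of those two options (Python; the 1 V full-scale voltage is hypothetical and purely illustrative, not any manufacturer's figure):

Code:
# Two ways to spend 2 extra bits in a linear ADC.
full_scale_v = 1.0                      # hypothetical saturation voltage

step_12 = full_scale_v / 2 ** 12        # 12-bit step size
step_14_fine = full_scale_v / 2 ** 14   # (a) same range, 4x finer steps
range_14_wide = step_12 * 2 ** 14       # (b) same step, 4x wider range
                                        #     (needs 4x the well capacity)
print(f"12-bit:     step {step_12 * 1e6:5.1f} uV over {full_scale_v:.1f} V")
print(f"14-bit (a): step {step_14_fine * 1e6:5.1f} uV over {full_scale_v:.1f} V")
print(f"14-bit (b): step {step_12 * 1e6:5.1f} uV over {range_14_wide:.1f} V")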

It would be interesting to find out which of these options is taken by various manufacturers in their new 14-bit cameras.  Does anyone actually know?
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
12-bit analog/digital converter vs. 14-bit
« Reply #9 on: April 30, 2008, 03:29:33 pm »

Quote
Added bits can in principle be used for two distinct purposes: extending the brightness range or providing finer gradation within a given brightness range. 

Nearly all digital cameras have sensors and electronics with inherently linear response, so the number of photons or electrons in a pixel, and the voltage to which this is converted, is directly proportional to the amount of light hitting the pixel.  Pixels have limited charge storage capacity, which limits the brightness range before digitization.  Digitization should set the maximum digitized signal close to the pixel charge storage capacity to avoid wasting dynamic range.

The camera manufacturer has the option of using extra bits to extend the maximum light level which can be digitized, provided he also increases the pixel charge storage capacity, or to cover the same light level range with finer granularity.  In either case, noise will likely limit the improvement made possible by the extra bits. 

It would be interesting to find out which of these options is taken by various manufacturers in their new 14-bit cameras.  Does anyone actually know?

With linear integer encoding, the scale and granularity of the encoding at a given bit depth are fixed, and I do not think these options are available. This has been discussed in previous threads.

The staircase analogy is often used, where the height of the staircase is likened to the dynamic range and the bit depth to the granularity of the measurement and the size of the individual steps. With linear integer encoding the height of the steps is determined by binary arithmetic and is fixed: relative to the signal, the steps are large at small data numbers and small at big data numbers.

See this link for a graphic of the step size: http://www.anyhere.com/gward/hdrenc/hdr_encodings.html  With log encoding the step size can be equal for all values. With floating-point encoding and a given number of bits, one can encode a large dynamic range with coarser steps or a smaller dynamic range with finer steps.
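
A rough comparison of relative step size under the two schemes (Python; the 4,096 codes and 12-stop range are illustrative assumptions):

Code:
# Relative step size (one code step as a fraction of the signal value)
# for 12-bit linear integer encoding vs. a log encoding that spreads
# the same 4096 codes evenly, in stops, over a 12-stop range.
CODES = 4096
STOPS = 12

def linear_rel_step(code):
    return 1.0 / code                  # step grows relatively in shadows

def log_rel_step():
    return 2 ** (STOPS / CODES) - 1    # constant ratio between codes

for code in (16, 256, 4095):
    print(f"linear, code {code:4d}: step = {linear_rel_step(code):.3%} of value")
print(f"log, any code:      step = {log_rel_step():.3%} of value")
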
Logged

The View

  • Sr. Member
  • ****
  • Offline
  • Posts: 1284
12-bit analog/digital converter vs. 14-bit
« Reply #10 on: May 01, 2008, 12:43:04 am »

What does this mean for the current equipment?

Pentax, for example, had a much higher-bit converter in the K10D than the 12-bit one now in the K20D.

I wonder why? Maybe it has to do with the change from Sony sensors to Samsung's?
Logged
The View of deserts, forests, mountains. Not the TV show that I have never watched.

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
12-bit analog/digital converter vs. 14-bit
« Reply #11 on: May 01, 2008, 01:03:15 am »

Quote
Pentax, for example, had a much higher-bit converter in the K10D than the 12-bit one now in the K20D.
I don't know if the bit depth of the K10D is selectable, like on the Nikon D3/D300, but all the K10D raw files I have encountered are 12-bit depth.
Logged
Gabor

Tony Beach

  • Sr. Member
  • ****
  • Offline
  • Posts: 452
    • http://imageevent.com/tonybeach/twelveimages
12-bit analog/digital converter vs. 14-bit
« Reply #12 on: May 01, 2008, 01:22:25 am »

Quote
What does this mean for the current equipment?

Pentax, for example, had a much higher-bit converter in the K10D than the 12-bit one now in the K20D.

I wonder why? Maybe it has to do with the change from Sony sensors to Samsung's?

FWIW, Nikon's D300 and D3 have 16-bit processing of the 12- or 14-bit A/D conversions, which are then saved as 12- or 14-bit RAW data.  The K10D, on the other hand, did a 22-bit A/D conversion but then at some point saved that data as 12 bits.  It soon became clear that Pentax's 22-bit A/D conversion was a marketing gimmick, since the camera not only didn't perform better than the competition, it performed worse: http://www.dpreview.com/reviews/PentaxK10D/page15.asp  The entire episode and Pentax's retreat illustrate that A/D conversion is only one link in the chain and means little or nothing isolated from the rest of the image pipeline.
Logged

Robin Balas

  • Jr. Member
  • **
  • Offline
  • Posts: 51
12-bit analog/digital converter vs. 14-bit
« Reply #13 on: May 01, 2008, 03:48:23 am »

Quote
Human vision can distinguish approximately 100 shades of tone in each colour on average: a bit less (60-80) in the violet-blue and deep-red ranges (i.e. at the ends of the visible spectrum), and a bit more (120-130) in the yellowish-green range (i.e. at the middle of the visible spectrum). So under ideal viewing conditions we may just barely see a difference between 6 bits and 7 bits. The usual 8 bits already provide us with considerable headroom, by a factor of 2-3.
This would be true if everything were encoded logarithmically, the way our eyes see and the way f-stops work, but it isn't so. To get a clean and nice rendition in the shadows, you need to go over the top in the bright end to achieve sufficient resolution in the shadows. So 12 bits/channel is not enough to render deep shadows as well as the best chips measure light. If you look in the highlights for improvements between 12 and 14 bits, you are really looking at the wrong end: improvements there are really sensor-saturation improvements, which are not related to A/D conversion. Look in the deep end and compare the smoothness of gradations in the darkest usable stops. If there is a difference, it will be there.
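
A sketch of the arithmetic behind that shadow-resolution point (Python; assumes an idealized linear sensor whose saturation maps to ADC full scale, and an arbitrary 12-stop scene range):

Code:
# Codes available in the darkest four stops of a 12-stop scene range
# under linear encoding; the bright end hogs the codes either way.
SCENE_STOPS = 12
for bits in (12, 14):
    full_scale = 2 ** bits
    shadow_codes = full_scale // 2 ** (SCENE_STOPS - 4)  # darkest 4 stops
    top_codes = full_scale // 2                          # brightest stop
    print(f"{bits}-bit: {shadow_codes} codes in the darkest 4 stops, "
          f"{top_codes} in the brightest stop alone")
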
Quote
12 bits provide up to 4,096 shades of tone per colour, which is about 40× more than we can ever hope to see with our eyes. More bits mean more native dynamic range in the A-to-D converter, which is basically a good thing ... however, since we already have so many shades, we may just as well spread the range by multiplying. Not-too-small photosites, cleverly designed photosite read-out, A-to-D conversion, noise reduction, raw conversion, and image processing are far more significant to the final image quality than whether the A-to-D converter's width is 10, 12, or 14 bits.

-- Olaf
[a href=\"index.php?act=findpost&pid=192657\"][{POST_SNAPBACK}][/a]
Comparing my 1Ds2 with my Aptus 65 MFDB, which have 12-bit and 16-bit A/D respectively, the two really don't compare at all in shadow rendition. This could be the slightly larger photosites in the Aptus, but it is something else as well, and it could be the sampling resolution in the darkest stops, but I have no way of confirming that, so that is pure speculation.
MHO.
Logged

The View

  • Sr. Member
  • ****
  • Offline
  • Posts: 1284
12-bit analog/digital converter vs. 14-bit
« Reply #14 on: May 01, 2008, 05:44:33 pm »

Quote
I don't know if the bit depth of the K10D is selectable, like on the Nikon D3/D300, but all the K10D raw files I have encountered are 12-bit depth.
[a href=\"index.php?act=findpost&pid=192806\"][{POST_SNAPBACK}][/a]

This is a quote from Pentax's website:

"PENTAX also incorporated a new high performance 22 bit A/D converter to quickly transfer images with accurate color tones and richer gradation from the CCD to the imaging engine."

From what I gather from the replies here, there is one stage of A/D conversion and then another, the conversion to RAW data, and that's where the 14 bits weigh in. The Pentax 22-bit A/D converter doesn't address the RAW bit depth, and is more of a marketing gimmick.
« Last Edit: May 01, 2008, 05:50:51 pm by The View »
Logged
The View of deserts, forests, mountains. Not the TV show that I have never watched.

The View

  • Sr. Member
  • ****
  • Offline
  • Posts: 1284
12-bit analog/digital converter vs. 14-bit
« Reply #15 on: May 01, 2008, 05:48:14 pm »

Quote
This would be true if everything were encoded logarithmically, the way our eyes see and the way f-stops work, but it isn't so. To get a clean and nice rendition in the shadows, you need to go over the top in the bright end to achieve sufficient resolution in the shadows. So 12 bits/channel is not enough to render deep shadows as well as the best chips measure light. If you look in the highlights for improvements between 12 and 14 bits, you are really looking at the wrong end: improvements there are really sensor-saturation improvements, which are not related to A/D conversion. Look in the deep end and compare the smoothness of gradations in the darkest usable stops. If there is a difference, it will be there.

Comparing my 1Ds2 with my Aptus 65 MFDB, which have 12-bit and 16-bit A/D respectively, the two really don't compare at all in shadow rendition. This could be the slightly larger photosites in the Aptus, but it is something else as well, and it could be the sampling resolution in the darkest stops, but I have no way of confirming that, so that is pure speculation.
MHO.
[a href=\"index.php?act=findpost&pid=192833\"][{POST_SNAPBACK}][/a]

Thank you.

Looks like my next camera will be the Nikon D300.
« Last Edit: May 01, 2008, 05:48:37 pm by The View »
Logged
The View of deserts, forests, mountains. Not the TV show that I have never watched.

DiaAzul

  • Sr. Member
  • ****
  • Offline
  • Posts: 777
    • http://photo.tanzo.org/
12-bit analog/digital converter vs. 14-bit
« Reply #16 on: May 01, 2008, 07:16:39 pm »

Quote
Comparing my 1Ds2 with my Aptus 65 MFDB, which have 12-bit and 16-bit A/D respectively, the two really don't compare at all in shadow rendition. This could be the slightly larger photosites in the Aptus, but it is something else as well, and it could be the sampling resolution in the darkest stops, but I have no way of confirming that, so that is pure speculation.
MHO.
[a href=\"index.php?act=findpost&pid=192833\"][{POST_SNAPBACK}][/a]

Another aspect of this debate which is not often brought out is the ability to recover signal from noise. This is widely used within CDMA-based mobile phone networks (most 2nd and 3rd generation networks at the moment) to recover signals in a noisy environment. If you bear with me, I will explain the relevance to photography...

In a CDMA signal, multiple symbols are transmitted from the mobile handset to represent a single bit of information. By applying statistical analysis and having some a priori knowledge about the transmitted information, it is possible to determine what information was transmitted even though many symbols may be lost in the noise; e.g. if 16 symbols are transmitted to represent one bit of information and 5 symbols are corrupted by the noise, it is still possible to determine the value of the bit that was actually transmitted. High levels of redundancy and some a priori knowledge make information recovery in a noisy environment both possible and highly effective.
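
This is not real CDMA spreading, but the redundancy idea can be sketched with a simple repetition code and a majority vote, using the 16-symbols/5-corrupted example above (Python):

Code:
import random

random.seed(1)

def send_bit(bit, reps=16, flips=5):
    # transmit one bit as 16 repeated symbols, then corrupt 5 of them
    symbols = [bit] * reps
    for i in random.sample(range(reps), flips):
        symbols[i] ^= 1                           # noise flips this symbol
    return symbols

def recover(symbols):
    return int(sum(symbols) > len(symbols) / 2)   # majority vote

message = [1, 0, 1, 1, 0, 0, 1, 0]
decoded = [recover(send_bit(b)) for b in message]
print("sent:   ", message)
print("decoded:", decoded)   # identical: 11 intact symbols outvote 5 flipped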

When you look at a noisy photograph, it is possible to pick out an image even though parts of that image are lost in the noise. The brain's ability to guess the image and then correlate that guess with information in the picture enables detail to be seen even in the presence of significant background noise (i.e. making a good guess at what should be there and then filling in the missing information allows more detail to be seen). A 12-bit conversion will suppress both signal and noise, leaving a solid black image; 14 bits or more will show both signal and noise, enabling the brain to recover detail even if basic mathematical analysis based upon signal-to-noise ratios indicates it shouldn't be visible.

Most of the debate on 12-bit vs. higher bit depth comes across as very simplistic in its treatment of the subject and misses a lot of the work that has been done on recovering information in noisy environments and on the brain's ability to perceive information. This is a much more subjective debate than the 'you don't need more than 12 bits because of noise' statements imply.
Logged
David Plummer    http://photo.tanzo.org/

lovell

  • Full Member
  • ***
  • Offline
  • Posts: 131
12-bit analog/digital converter vs. 14-bit
« Reply #17 on: May 07, 2008, 12:06:12 pm »

The bigger an image is enlarged, the more one will see differences between 12- and 14-bit depths.  On small prints those differences are usually not noticeable.

More bit depth does not mean wider DR.  The DR is a function of what the sensor can produce, and not the A/D converter.
Logged
After composition, everything else is secondary--Alfred Stieglitz, NYC, 1927.

I'm not afraid of death.  I just don't want to be there when it happens--Woody Allen, Annie Hall, '70s

The View

  • Sr. Member
  • ****
  • Offline
  • Posts: 1284
12-bit analog/digital converter vs. 14-bit
« Reply #18 on: May 07, 2008, 03:23:50 pm »

Quote
This would be true if everything were encoded logarithmically, the way our eyes see and the way f-stops work, but it isn't so. To get a clean and nice rendition in the shadows, you need to go over the top in the bright end to achieve sufficient resolution in the shadows. So 12 bits/channel is not enough to render deep shadows as well as the best chips measure light. If you look in the highlights for improvements between 12 and 14 bits, you are really looking at the wrong end.
[a href=\"index.php?act=findpost&pid=192833\"][{POST_SNAPBACK}][/a]

Thanks.
Logged
The View of deserts, forests, mountains. Not the TV show that I have never watched.