Luminous Landscape Forum

Equipment & Techniques => Medium Format / Film / Digital Backs – and Large Sensor Photography => Topic started by: ErikKaffehr on February 18, 2011, 12:42:29 am

Title: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 18, 2011, 12:42:29 am
Hi,

This discussion is limited to the number of bits in the image-processing pipeline; it is not a statement on the image quality of different sensors, which depends on many more parameters than the number of bits.

The first image below clearly demonstrates the number of bits actually used by the different sensor systems. It shows the maximum signal (at saturation) divided by the readout noise. Each EV corresponds to one bit, so 13 EV is 13 bits. This figure does not take into account the fact that the larger sensors have more pixels and a larger physical size; those factors will result in better image quality.

The second figure shows the latest generation of CMOS sensors. Nikon and Pentax are close to utilizing 14 bits, while Canon is lagging behind. This is due to different readout technology.

The third image takes the number of pixels into account. The advantage of the Nikon gets much smaller (around half a stop). This does not include the resolution and MTF advantages of the larger sensor.

The intention of this posting is to illustrate the significance of bits; it is not intended to discuss other aspects of image quality. I'd also add that the data from DxOMark is very reliable and entirely relevant in this context, that is, how many bits are actually needed.

The finding is that no camera sensor today seems to need 14 bits, and MFDBs would do fine with 12 bits.
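To make that arithmetic concrete, here is a small Python sketch of the calculation behind the figures (the full-well and read-noise values are placeholders I made up, not measurements of any particular camera):

```python
# Engineering DR in EV equals the number of bits that actually carry signal:
# DR = log2(saturation signal / readout noise), using the SNR = 1 criterion.
import math

def usable_bits(full_well_e, read_noise_e):
    """Per-pixel dynamic range in EV (= useful bits)."""
    return math.log2(full_well_e / read_noise_e)

print(round(usable_bits(60000, 5), 1))   # ~13.6 EV: a 14-bit pipeline is enough
print(round(usable_bits(40000, 13), 1))  # ~11.6 EV: 12 bits already cover it
```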

If there is someone who can explain the advantage of having a 16-bit pipeline on MFDBs, please step forward and inform us!

Best regards
Erik
Title: Re: 12, 14 or 16 bits?
Post by: NikoJorj on February 18, 2011, 04:07:33 am
An example has already been posted here by Guillermo : http://www.luminous-landscape.com/forum/index.php?topic=49200.msg409770#msg409770
I personally conclude that with its higher DR, the latest K5/D7000 Sony sensor is the first to give a vague hint of usefulness to the 13th and 14th bits.
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 18, 2011, 06:30:28 am
Hi,

This discussion is limited to the number of bits in the image-processing pipeline; it is not a statement on the image quality of different sensors, which depends on many more parameters than the number of bits.

If there is someone who can explain the advantage of having a 16-bit pipeline on MFDBs, please step forward and inform us!

Best regards
Erik
Are you trying to prove:

We have all wasted 10s of Ks on equipment that is no better than prosumer kit?

DXO data is biased, inaccurate, nonsense...?

It is difficult to separate bit depth from IQ, as they tend to be related.

DXO have carefully chosen colours which are indistinguishable on my calibrated monitor, so I do not know which line refers to which camera.

All the lines are pretty linear for most of the range, and none of the lines deviates much from linearity at either end, as would be the case if the graph indicated the limits of dynamic range of any of the sensors.

If you wanted to do an experiment to determine the latitude or Dynamic range of a sensor, you would have to have a subject with a wider range of luminosity than can be accommodated by the sensor... so 20 levels would be about right.

Even if you never photograph subjects with light areas 2^16 times brighter than the dark areas, higher dynamic range gives smoother gradation between colours, reducing banding, especially in files where the contrast has been enhanced.



Title: Re: 12, 14 or 16 bits?
Post by: ejmartin on February 18, 2011, 08:20:41 am
Are you trying to prove:

We have all wasted 10s of Ks on equipment that is no better than prosumer kit?

No, I don't think that was the point; at least I hope not.

Quote
DXO data is biased, inaccurate, nonsense...?

I suspect DxO data was being used to bolster a contention that 16 bits is superfluous for MFDB files whose pixel values have no more than 12 bits of data.  The last four bits are swamped by noise.

Quote
It is difficult to separate bit depth from IQ, as they tend to be related.

Between 8 bits and 12 bits, most certainly.  Between 12 bits and 16 bits, hardly.  The last four bits are largely random due to noise.

Quote
DXO have carefully chosen colours which are indistinguishable on my calibrated monitor, so I do not know which line refers to which camera.


Agreed.  Nevertheless the D3x is the outlier.

Quote
All the lines are pretty linear for most of the range, and none of the lines deviates much from linearity both ends, as would be the case if the graph indicated the limits of dynamic range of any of the sensors.

It's not the linearity of the curve, it's the value at base ISO that matters.

Quote
Even if you never photograph subjects with light areas 2^16 times brighter than the dark areas, higher dynamic range gives smoother gradation between colours, reducing banding, especially in files where the contrast has been enhanced.

Noise dithers tonal transitions, so there is little benefit to extra bits once the noise exceeds the quantization step of that bit depth.  When the noise exceeds the gradation steps in tonality, the noise properly dithers transitions; when it doesn't, you get banding.  The tonal steps of 8-bit data are too coarse for the noise to dither.  Anything beyond 12 bits for a camera with 12 bits of DR, the noise is sufficient to dither.  16 bits is overkill; the data is being oversampled by 4 bits.
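A quick numerical sketch of that dithering argument (the ramp, step size and noise level are invented for the demo, not taken from any camera):

```python
# Quantize a smooth ramp with a coarse step: without noise you get a
# staircase (banding); with noise of roughly one step, averaging many
# quantized samples recovers the smooth ramp - the noise dithers the steps.
import numpy as np

rng = np.random.default_rng(0)
signal = np.linspace(0, 100, 1001)   # smooth tonal ramp, arbitrary units
step = 4.0                           # coarse quantization step

def quantize(x):
    return np.round(x / step) * step

banded = quantize(signal)            # deterministic staircase, no noise
trials = np.stack([quantize(signal + rng.normal(0, step, signal.size))
                   for _ in range(2000)])
dithered = trials.mean(axis=0)       # average of noisy, quantized samples

print("max staircase error:", np.abs(banded - signal).max())          # ~ step/2
print("max error after dithering:", np.abs(dithered - signal).max())  # much smaller
```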
Title: Re: 12, 14 or 16 bits?
Post by: cunim on February 18, 2011, 09:24:01 am
When working with ultra-low light images one quickly appreciates the benefits of higher bit densities.  The low-res monochrome images make random noise very evident at the pixel level.  You engineer the camera package to cool enough and read out slowly enough, making trade-offs to get a particular precision (12, 14, 16 bits, whatever).  We accept these trade-offs because we want the SNR to be adequate for our application.

Photography, in contrast, is much less demanding.  Photographic quality is perceptual (this is 5 on a scale of 1-10) as opposed to quantitative (this is the SNR) and I am not sure how one quality metric relates to the other.  Certainly, photographs have a whole lot of spatially convolved things going on, both in hardware and in our heads.  For example, the visual system generates its own perceptual SNR by making adjacent pixel operations that decrease the perceived noise.

What does this mean to required bit density for good photographs?  God knows.  I don't think the pixel SNR of a high end color sensor means much, to tell the truth.  Once you get to the level of quality available from the top DSLR or MFD systems the obvious (to me) image quality differences become global and perceptual as opposed to quantitative.  They are sort of like an MTF chain, in which the final result is a product of all the input factors - sensor, lens, raw decode, camera electronics, etc. 

I expect we will have a relevant metric one day.  Great topic for a psychophysics PhD thesis.
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 18, 2011, 11:43:12 am
Hi,

The very simple reason for posting this information was twofold:

- Someone on the forum was asking about the usefulness of bit depth
- In my view it is good information and therefore worth sharing

My opinion is that if you are paying 10K for a 16-bit pipeline instead of a 14-bit pipeline, you are wasting money. If you are paying 10K to get better images, that is a different issue.

Best regards
Erik

Are you trying to prove:

We have all wasted 10s of Ks on equipment that is no better than prosumer kit?

DXO data is biased, inaccurate, nonsense...?

It is difficult to separate bit depth from IQ, as they tend to be related.

DXO have carefully chosen colours which are indistinguishable on my calibrated monitor, so I do not know which line refers to which camera.

All the lines are pretty linear for most of the range, and none of the lines deviates much from linearity both ends, as would be the case if the graph indicated the limits of dynamic range of any of the sensors.

If you wanted to do an experiment to determine the latitude or Dynamic range of a sensor, you would have to have a subject with a wider range of luminosity than can be accommodated by the sensor... so 20 levels would be about right.

Even if you never photograph subjects with light areas 2^16 times brighter than the dark areas, higher dynamic range gives smoother gradation between colours, reducing banding, especially in files where the contrast has been enhanced.




Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 18, 2011, 01:41:22 pm
Hi,

I'm of the opinion that dynamic range is not normally the factor determining or limiting image quality. There are certainly situations where it matters a lot, however; in many of those cases HDR may be an option.

The idea was pretty much to shed some light on the importance of having 16 bits.

As a small comment, much criticism has been directed against DxO's definition of DR, claiming that the SNR for a photographic DR needs to be much higher than one. Applying such a criterion would reduce the number of useful bits even more.

So, why are MFDB images perceived as better? My answers are:

- I don't know
- A larger sensor collects more photons so it would have less noise, but this effect would be quite small
- An MFDB can have significantly higher MTF for a detail of given size, this may matter a lot!
- MFDBs can have better individual calibration that may be used optimally in vendor specific raw converters

In general, having a larger sensor has many benefits, and those benefits should not be ignored. It's easy to invent situations where sensor size is of no advantage, but if we assume that MFDBs are used where they function best, the sensor size is a definite benefit.

Best regards
Erik


When working with ultra-low light images one quickly appreciates the benefits of higher bit densities.  The low res monochrome images make random noise very evident at the pixel level.  You engineer the camera package to cool enough and read out slowly enough, making trade offs to get a particular precision (12, 14 16 bits whatever).  We accept these trade offs because we want the SNR to be adequate to our application.

Photography, in contrast, is much less demanding.  Photographic quality is perceptual (this is 5 on a scale of 1-10) as opposed to quantitative (this is the SNR) and I am not sure how one quality metric relates to the other.  Certainly, photographs have a whole lot of spatially convolved things going on, both in hardware and in our heads.  For example, the visual system generates its own perceptual SNR by making adjacent pixel operations that decrease the perceived noise.

What does this mean to required bit density for good photographs?  God knows.  I don't think the pixel SNR of a high end color sensor means much, to tell the truth.  Once you get to the level of quality available from the top DSLR or MFD systems the obvious (to me) image quality differences become global and perceptual as opposed to quantitative.  They are sort of like an MTF chain, in which the final result is a product of all the input factors - sensor, lens, raw decode, camera electronics, etc.  

I expect we will have a relevant metric one day.  Great topic for a psychophysics PhD thesis.
Title: Re: 12, 14 or 16 bits?
Post by: douglasf13 on February 18, 2011, 03:34:18 pm
  Joakim, "theSuede," who is on various forums and works in the industry, claims that MFDB isn't true 16bit in the first place, but, rather, it their 16bits are interpolated up from 12bits.
Title: Re: 12, 14 or 16 bits?
Post by: deejjjaaaa on February 18, 2011, 11:51:19 pm
  Joakim, "theSuede," who is on various forums and works in the industry.

he says about himself - "Has worked with the press since 1992, pre-press process engineer since 1999."
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 19, 2011, 03:21:22 am
Noise dithers tonal transitions, so that there is little benefit to extra bits when the noise level exceeds the bit depth.  When the noise ratio exceeds the gradation steps in tonality, then the noise properly dithers transitions; when it doesn't you get banding.  The tonal steps of 8 bit data are too coarse for the noise to dither.  Anything beyond 12 for a camera with 12 bit DR, the noise is sufficient to dither.  16 bits is overkill, the data is being oversampled by 4 bits.
Beyond the dynamic range, there is no tone to dither - it is all black or white.

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the center of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour, that too is a benefit: a trade-off between res and noise.
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 19, 2011, 03:36:24 am
Hi,

The problem is that each pixel will see 10-15 fake photons on average. So even if a photon were detected, you couldn't tell real photons from fake ones. The technical definition of DR says that the number of fake photons (noise) equals the number of real photons (signal).

The simplest way to measure readout noise is to make an exposure with the lens cap on and measure the signal on each channel (red, green, blue, green2) in linear space, that is, before gamma correction has been applied. You need a tool like "Rawanalyser" or Guillermo's program for that.

The other end of DR is full well capacity. As far as I understand it, you need to photograph a surface near saturation and calculate the noise. The standard deviation is the square root of the number of photons detected (more correctly, electrons collected). So there is no magic involved, just simple math.

MTF has nothing to do with bits or DR. It's the amount of contrast (called modulation) the lens can transfer from subject to sensor. Having a high MTF would thus increase signal. MTF is by definition 1 at zero frequency and drops almost linearly with increasing frequency, at least for an ideal, diffraction-limited lens. So, very good lenses assumed, a sensor with 12-micron pitch would have about twice the MTF at pixel resolution compared with a 6-micron pitch sensor. The 12-micron sensor could also hold four times the electrons, so the signal would be 8 times larger. That's an advantage of large pixels. Today's MFDB sensors don't have 12-micron pixels; 6 microns is more typical. If you bin four 6-micron pixels into a 12-micron pixel in software, the shot noise (photon statistics) will behave as for a big pixel, but the read noise will not be reduced, so DR is improved less. The Phase Pxx+ series has binning in hardware, called Sensor+, which is said to also reduce readout noise.
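A rough sketch of that binning argument, with made-up per-pixel numbers (only the scaling matters here):

```python
# Software binning sums four separate reads, so read noise adds in
# quadrature (x2); hardware (charge) binning pays the read noise only once.
import math

full_well = 40000.0   # electrons per small pixel (assumed)
read_noise = 12.0     # electrons per read (assumed)
n = 4                 # 2x2 bin

single_dr = math.log2(full_well / read_noise)
soft_dr = math.log2(n * full_well / (read_noise * math.sqrt(n)))   # software bin
hard_dr = math.log2(n * full_well / read_noise)                    # hardware bin

print(f"single pixel:  {single_dr:.1f} EV")
print(f"software bin:  {soft_dr:.1f} EV (+1 EV)")
print(f"hardware bin:  {hard_dr:.1f} EV (+2 EV)")
```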

The enclosed figures show the effect of binning at actual pixels and normalized for print.


Best regards
Erik

PS. I'm not too happy about DxO's coloring choices, but I can easily tell them apart. I'm also using a calibrated monitor, BTW.

Beyond the dynamic range, there is no tone to dither - it is all black or white.

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the center of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour, that too is a benefit: a trade-off between res and noise.
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 19, 2011, 03:43:30 am
Hi,

The problem is that each pixel will see 10-15 fake photons on average. So even if a photon were detected, you couldn't tell real photons from fake ones. The technical definition of DR says that the number of fake photons (noise) equals the number of real photons (signal).
So there we have it - Dynamic Range, as defined, is meaningless.
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 19, 2011, 04:20:21 am
Yeah,

On the other hand, DR and bit depth are the same thing. So if DR is irrelevant, then 16, 14, 12 or 10 bits are also irrelevant.

But you can simply reduce it by subtracting 1, 2, 3 stops, whatever, depending on the criterion you want.

So if you need an SNR of 8 you subtract 3 EV from the measured range.

On the other hand, read noise will always only be noticeable in the very darkest part of the image. So DR essentially says how much you can increase "exposure" in post-processing before noise in the darks becomes obvious:

The images from Marc McCalmont taken with the Pentax K5 and the P45+ illustrate the issue. The Pentax K5 image shows more shadow detail than the P45+. That of course says nothing about the other end (highlights). Being able to do exact comparisons is one of the advantages of lab tests.

There is a proverb in engineering: measure with a micrometer, mark with chalk, and cut with an axe. 16 bits is the micrometer.

Best regards
Erik



So there we have it - Dynamic Range, as defined, is meaningless.
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 19, 2011, 05:50:39 am
Dynamic Range, as defined, is meaningless.

Yeah,

On the other hand, DR and bit depth are the same thing. So if DR is irrelevant, then 16, 14, 12 or 10 bits are also irrelevant.
Dynamic Range is relevant, and would be a (more) useful quantitative yardstick for comparing cameras if the definition was more "real world".
Quote

But you can simply reduce it by subtracting 1, 2, 3 stops whatever depending on the criteria you want.

So if you need an SNR of 8 you subtract 3 EV from the measured range.
Yes - but you would add to the measured DR rather than subtract.
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 19, 2011, 06:29:34 am
Hi,

Nope. Say that the SNR for your needs is 8, that is, the signal is eight times the noise. This would be three steps, as 2^3 = 8. So now you can take the DR measured by DxO and subtract three steps.

So:
A Hasselblad H3DII-50 would give you 12.7 - 3 = 9.7 EV
A Canon EOS 5DII would give you 11.9 - 3 = 8.9 EV
A Nikon D3X would give you 13.7 - 3 = 10.7 EV

These figures are already normalized for megapixels. If we looked at actual pixels, the Nikon would have a larger advantage.

The figures are measurements of the amount of noise but don't describe the look of the noise. Some noise is uglier than other noise.

Best regards
Erik


Dynamic Range, as defined, is meaningless.
Dynamic Range is relevant, and would be a (more) useful quantitative yardstick for comparing cameras if the definition was more "real world".
Yes - but you would add to the measured DR rather than subtract.

Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 19, 2011, 07:06:28 am
Hi,

Nope. Say that the SNR for your needs is 8, that is, the signal is eight times the noise. This would be three steps, as 2^3 = 8. So now you can take the DR measured by DxO and subtract three steps.

So:
A Hasselblad H3DII-50 would give you 12.7 - 3 = 9.7 EV
A Canon EOS 5DII would give you 11.9 - 3 = 8.9 EV
A Nikon D3X would give you 13.7 - 3 = 10.7 EV

These figures are already normalized for megapixels. If we looked at actual pixels, the Nikon would have a larger advantage.
I give up... subtracting stops would be valid if the measured DR was for a high SNR, and you wanted to know what the DR was for 1:1 SNR... there is a trade-off between DR and SNR, and the  lower you set the SNR yardstick, the higher the DR.

So, with DR measured @1:1 SNR=12.7, even my lowly Hasselblad H3D2-50 (which I have upgraded to an H4D-60) had a "real world" Dynamic range of 15.7 stops - thank you for the information.

Perhaps I will not waste any more of my time on this topic.
Quote
The figures are measurement of the noise but doesn't describe the look of the noise. Some noise is more ugly.

Best regards
Erik
Now you are talking about real-world IQ... but you said that "fake" pixels were counted as noise? What you call fake pixels are interpolation, and there is a difference between interpolated data (signal) and noise.

Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 19, 2011, 09:28:37 am
Hi,

In my math book 12.7 - 3.0 is 9.7.

Anyway, the textbook definition of DR is based on SNR = 1. That is what DxO uses. So if you want an SNR of, say, 8, you make the requirement more stringent. Therefore you would reduce the DR, that is, subtract. That said, it is not a given that readout noise would dominate when you specify an SNR of 8; shot noise may come into play.

Please keep in mind that the original posting was about the usefulness of bits. And the technical definition of DR is essentially the same as the number of useful bits.

Best regards
Erik

I give up... subtracting stops would be valid if the measured DR was for a high SNR, and you wanted to know what the DR was for 1:1 SNR... there is a trade-off between DR and SNR, and the  lower you set the SNR yardstick, the higher the DR.

So, with DR measured @1:1 SNR=12.7, even my lowly Hasselblad H3D2-50 (which I have upgraded to an H4D-60) had a "real world" Dynamic range of 15.7 stops - thank you for the information.

Perhaps I will not waste any more of my time on this topic. Now you are talking about real-world IQ... but you said that "fake" pixels were counted as noise? What you call fake pixels are interpolation, and there is a difference between interpolated data (signal) and noise.


Title: Re: 12, 14 or 16 bits?
Post by: ejmartin on February 19, 2011, 09:52:47 am
Beyond the dynamic range, there is no tone to dither - it is all black or white.

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the center of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour, that too is a benefit: a trade-off between res and noise.

I think you are misunderstanding the nature of DR.   At base ISO, a typical FF DSLR or MFDB is capturing 40,000 to 80,000 photons (depending on pixel size and efficiency).  Now, a 16-bit capture records data to a part in 2^16 = 65536, so let's take a figure in the middle, 60,000 photons, and so one digital level would naively seem like a change in illumination by one photon's worth.  But it doesn't work that way; the camera electronics has noise in it, and the voltage fluctuations from the noise are indistinguishable from the voltage change due to an increased or decreased signal.  So the noise causes random fluctuations up or down on top of the signal, and therefore throws off the count in the raw data so that it doesn't completely accurately reflect the actual photon count that the camera recorded.

One can translate the camera's electronic noise into an 'equivalent photons' count.  For the D3x at base ISO, say, it is a tad over 6 photons' worth of noise, with a saturation capacity of a little under 50,000 photons.  For the P65+ it seems (estimating from DxO data) that the saturation capacity is also a tad under 50,000 photons, and the electronic noise at base ISO is about 16 photons' worth.

Now let's ask if all those bits are worthwhile.  For the D3x, with 14-bit data recording, the precision of the recording is one part in 2^14; the full scale range of 0-50,000 photons is divided up into 2^14=16,384 steps, and so each step is about 3 photons' worth.   In a perfect world, an extra two bits would help and the counts would distinguish individual photons, but since the camera's electronic noise amounts to +/- 6 photons' worth of inaccuracy, 14 is ample (in fact, 13 would do).  For the P65+, with 16 photons' worth of inaccuracy, 16/50,000 is more than a part in 2^12, so 12 bits would have been sufficient.
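Redoing that arithmetic as a tiny check (the photon figures are the approximate ones quoted above):

```python
# Size of one digital level in photons, and the electronic noise expressed
# in levels; once the noise spans several levels, the extra bits are random.
def step_vs_noise(full_well, bits, noise_e):
    step = full_well / 2 ** bits     # photons per digital level
    return step, noise_e / step      # noise in digital levels

print(step_vs_noise(50000, 14, 6))   # D3x-like: ~3 photons/level, noise ~2 levels
print(step_vs_noise(50000, 16, 16))  # P65+-like: ~0.76 photons/level, noise ~21 levels
```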

Bit depth is not the same thing as DR; rather, DR bounds the number of bits needed to accurately specify the count delivered by the camera, given the inaccuracy in the count inherent in the camera electronics.

Finally, the pixel DR is but one of a whole host of measures of data quality, so it's not worth obsessing about.
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 19, 2011, 10:18:31 am
Hi Emil,

Thanks for chiming in and explaining much better than I could. The intention with the initial posting was to shed some light on the real utilization of bits.

Best regards
Erik


I think you are misunderstanding the nature of DR.   At base ISO, a typical FF DSLR or MFDB is capturing 40,000 to 80,000 photons (depending on pixel size and efficiency).  Now, a 16-bit capture records data to a part in 2^16 = 65536, so let's take a figure in the middle, 60,000 photons, and so one digital level would naively seem like a change in illumination by one photon's worth.  But it doesn't work that way; the camera electronics has noise in it, and the voltage fluctuations from the noise are indistinguishable from the voltage change due to an increased or decreased signal.  So the noise causes random fluctuations up or down on top of the signal, and therefore throws off the count in the raw data so that it doesn't completely accurately reflect the actual photon count that the camera recorded.

One can translate the camera's electronic noise into an 'equivalent photons' count.  For the D3x at base ISO, say, it is a tad over 6 photons' worth of noise, with a saturation capacity of a little under 50,000 photons.  For the P65+ it seems (estimating from DxO data) that the saturation capacity is also a tad under 50,000 photons, and the electronic noise at base ISO is about 16 photons' worth.

Now let's ask if all those bits are worthwhile.  For the D3x, with 14-bit data recording, the precision of the recording is one part in 2^14; the full scale range of 0-50,000 photons is divided up into 2^14=16,384 steps, and so each step is about 3 photons' worth.   In a perfect world, an extra two bits would help and the counts would distinguish individual photons, but since the camera's electronic noise amounts to +/- 6 photons' worth of inaccuracy, 14 is ample (in fact, 13 would do).  For the P65+, with 16 photons' worth of inaccuracy, 16/50,000 is more than a part in 2^12, so 12 bits would have been sufficient.

Bit depth is not the same thing as DR; rather, DR bounds the number of bits needed to accurately specify the count delivered by the camera, given the inaccuracy in the count inherent in the camera electronics.

Finally, the pixel DR is but one of a whole host of measures of data quality, so it's not worth obsessing about.
Title: Re: 12, 14 or 16 bits?
Post by: bjanes on February 19, 2011, 04:51:30 pm
The simplest way to measure readout noise is to make an exposure with the lens cap on and measure the signal on each channel (red, green, blue, green2) in linear space, that is, before gamma correction has been applied. You need a tool like "Rawanalyser" or Guillermo's program for that.

That method will work for Canon and some other cameras that add a black offset to prevent clipping at the black point, but Nikon uses no offset and clips the black point. As shown below, the left half of the bell shaped curve has been clipped. A better way to determine the read noise is to plot points near the black point short of clipping and extend the regression line from the region where clipping isn't yet present as shown in the second figure.


The other end of DR is full well capacity. As far as I understand it, you need to photograph a surface near saturation and calculate the noise. The standard deviation is the square root of the number of photons detected (more correctly, electrons collected). So there is no magic involved, just simple math.

If you want to get full capacity in electrons, you need to isolate shot noise by subtracting out fixed pattern noise as Roger Clark shows in this (http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html) post. If you want to convert from data numbers to electrons, you need to determine the camera gain as Roger explains. For the highlight you have to be careful to not clip the right tail of the bell shaped curve as shot noise will decrease when clipping starts and will fall to zero when the sensor is saturated or the ADC overflows.
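For illustration, a minimal sketch of that paired-flat idea in Python. The function name and the synthetic frames are mine, and it assumes the shot-noise-limited case; it is not Roger Clark's actual procedure.

```python
# Subtracting two identical flat-field exposures cancels fixed-pattern noise;
# the remaining variance is shot (plus read) noise, from which the gain in
# electrons per DN follows, and then full well = gain * (saturation DN - bias).
import numpy as np

def gain_from_flat_pair(flat1, flat2, bias=0.0):
    signal_dn = 0.5 * (flat1.mean() + flat2.mean()) - bias          # mean signal, DN
    var_dn = np.var(flat1.astype(float) - flat2.astype(float)) / 2  # per-frame variance, DN^2
    return signal_dn / var_dn                                       # gain, e-/DN

# Synthetic test: 1.5 e-/DN gain, ~45,000 e- signal, 1% fixed pattern, 3 DN read noise.
rng = np.random.default_rng(0)
prnu = 1 + 0.01 * rng.normal(size=(400, 400))   # pixel response non-uniformity, common to both frames
make_flat = lambda: rng.poisson(45000 * prnu) / 1.5 + rng.normal(0, 3, prnu.shape)
print(gain_from_flat_pair(make_flat(), make_flat()))   # ~1.5
```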

Elimination of fixed pattern noise may give overly optimistic results, as banding and other fixed pattern noises will be eliminated and can be quite distracting in the image.

Regards,

Bill
Title: Re: 12, 14 or 16 bits?
Post by: cunim on February 20, 2011, 09:33:34 am
Bill, my hat is off to someone who understands DR.  I have an amusing story about regression lines that .....  Nah, what am I thinking.  Nothing to do with photography.

Peter
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 20, 2011, 10:23:45 am
Hi,

I'm most thankful to Emil Martinec and Bill for chiming in and putting some things right. The topic was not really intended to discuss DR but to shed some light on the usefulness of bits. DxO measures DR, so I felt it was a good starting point on the usefulness of bits. My suggestions on measuring DR were intended to illustrate that measuring it is no wizardry. I was not really aware of the issues that Bill mentioned.

Best regards
Erik

Bill, my hat is off to someone who understands DR.  I have an amusing story about regression lines that .....  Nah, what am I thinking.  Nothing to do with photography.

Peter
Title: Re: 12, 14 or 16 bits?
Post by: bjanes on February 20, 2011, 12:16:50 pm

I'm most thankful to Emil Martinec and Bill for chiming in and putting some things right. The topic was not really intended to discuss DR but to shed some light on the usefulness of bits. DxO measures DR, so I felt it was a good starting point on the usefulness of bits. My suggestions on measuring DR were intended to illustrate that measuring it is no wizardry. I was not really aware of the issues that Bill mentioned.

Yes, but the number of bits needed to encode a raw image depends on the DR. DXO rates the DR of the new Pentax 645D at 11.37 EV (screen). According to the Kodak spec sheet, the full well for the KAF-40000 used in that camera is 42,000 e- and the read noise is 13 e-. This would give an engineering DR of 11.7 EV, consistent with the DR of 70.2 dB reported on the spec sheet (1 EV is approximately equal to 6 dB). Thus, 12 bits should be sufficient for this camera and 14 bits are not really needed. The useful photographic DR would be less than the 11.37 EV reported by DXO, since an S:N of 1 would give poor shadow detail, and one could probably get by with fewer bits.
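Writing out that spec-sheet arithmetic (figures as quoted above):

```python
import math

full_well, read_noise = 42000, 13           # KAF-40000 figures from the spec sheet
ratio = full_well / read_noise
print(f"{math.log2(ratio):.2f} EV")         # ~11.7 EV of engineering DR
print(f"{20 * math.log10(ratio):.1f} dB")   # ~70.2 dB (1 EV is about 6.02 dB)
```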

For a daylight exposure, the red and blue channels would be considerably below saturation and the demosaiced DR would be worse. Some photographers place a magenta filter over the lens to hold back some of the green and obtain a better exposure balance for the channels, but with modern cameras I don't think this is worthwhile. Usually, DR is calculated from the green channels.

DXO rates the Nikon D7000 DR (screen) at 13.35 EV. The DPR review gives a DR of 9.2 EV. They are using a Stouffer wedge and are apparently using a demosaiced RGB image with a "defined 'black point' (about 2% luminance) or the signal-to-noise ratio drops below a predefined value (where shadow detail would be swamped by noise), whichever comes first". They are using the camera with a picture control which is not linear and has a black point well above 0. 2% luminance in a linear raw file is 5.64 EV below clipping, so they are obviously working with a gamma encoded file. What a joke :) :)
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 20, 2011, 02:42:24 pm
Hi,

The starting point on this was really that someone stated that Leica DMR was 16 bits, and I wanted to demonstrate that 16 bit is not really useful with todays technology. I learned a lot from this discussion.

Best regards
Erik


Yes, but the number of bits needed to encode a raw image depends on the DR. DXO rates the DR of the new Pentax 645 at 11.37 EV (screen). According to the Kodak spec sheet the full well for the KAF 40000 used in that camera is 42,000 e- and the read noise is 13 e-. This would give an engineering DR of 11.7 EV, consistent with DR of 70.2 dB reported on the spec sheet (1 EV is approximately equal to 6 dB). Thus, 12 bits should be sufficient for this camera and 14 bits are not really needed. The useful photographic DR would be less than 11.37 EV reported by DXO, since a S:N of 1 would give poor shadow detail, and one could probably get by with fewer bits.

For a daylight exposure, the red and blue channels would be considerably below saturation and the demosiaced DR would be worse. Some photographers place a magenta filter over the lens to hold back some of the green and obtain a better exposure balance for the channels, but with modern cameras I don't think this is worthwhile. Usually, DR is calculated from green channels.

DXO rates the Nikon D7000 DR (screen) at 13.35 EV. The DPR review gives a DR of 9.2 EV. They are using a Stouffer wedge and are apparently using a demosaiced RGB image with a "defined 'black point' (about 2% luminance) or the signal-to-noise ratio drops below a predefined value (where shadow detail would be swamped by noise), whichever comes first". They are using the camera with a picture control which is not linear and has a black point well above 0. 2% luminance in a linear raw file is 5.64 EV below clipping, so they are obviously working with a gamma encoded file. What a joke :) :)
Title: Re: 12, 14 or 16 bits?
Post by: NikoJorj on February 20, 2011, 03:51:27 pm
[...] The useful photographic DR would be less than 11.37 EV reported by DXO, since a S:N of 1 would give poor shadow detail, and one could probably get by with fewer bits.
Yes, but... wouldn't reducing the number of bits too far actually add more noise, due to quantization noise kicking in?

I say that mostly because when truncating bits of an image (as with Guillermo's example I linked at the beginning of this discussion, or see here (http://www.chassimages.com/forum/index.php/topic,18686.msg300478.html#msg300478) if you can read French for some more), the first effect is on the noise; the noise gets a bit uglier (more colorful, maybe?) well before any posterization occurs in the tonalities of the image itself.
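For what it's worth, the textbook figure for ideal quantization noise is step/sqrt(12); a toy comparison against an assumed read noise (all numbers invented) suggests why dropping a bit or two mostly just roughens the noise that is already there:

```python
import math

read_noise_dn = 3.0                    # assumed read noise, in 14-bit counts
for bits_dropped in range(4):
    step = 2 ** bits_dropped           # quantization step after truncation
    q_noise = step / math.sqrt(12)     # ideal quantizer noise
    total = math.hypot(read_noise_dn, q_noise)   # independent sources add in quadrature
    print(bits_dropped, round(q_noise, 2), round(total, 2))
# Total noise barely grows until the step approaches the read noise itself.
```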
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 24, 2011, 02:10:58 pm
Erik…
I know you were trying to be helpful, and you were trying to persuade me that you were right, while not realizing that you were barking up the wrong tree... but I now realize why you were confused.

In the ideal world, in a studio, when you have control over the lighting, 9.7 stops is adequate, and you expose for the highlight and avoid shadow noise.

In the real world, for the landscape photographer 15.7 stops DR can be useful for shadow detail… as I said above…

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the centre of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour... that too is a benefit: a trade-off between res and noise.

…"fake" pixels were counted as noise? What you call fake pixels are interpolation, and there is a difference between interpolated data (signal) and noise.

the  lower you set the SNR yardstick, the higher the DR.

Note I am talking about pixels per photon, not photons per pixel… this is where you got confused about subtracting 3 stops instead of adding them.

You could interpolate where the light level is one photon per 500 pixels… but that would be low res or no res, and not very useful.

You said:
The figures are measurements of the amount of noise but don't describe the look of the noise. Some noise is uglier than other noise.

In the shadows it is better to have noise/data that looks OK, even if it is low-res.
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 24, 2011, 02:58:29 pm
Hi,

Sorry for barking...

The issues are very well explained in Emil's (ejmartin) postings and also the postings of Bill (bjanes).

This posting by Emil is especially recommended: http://www.luminous-landscape.com/forum/index.php?topic=51510.msg424143#msg424143

Regarding the interpolation issue I don't agree. But, there is an advantage to having more pixels. This is discussed at some depth here:
http://www.dxomark.com/index.php/en/Our-publications/DxOMark-Insights/More-pixels-offset-noise!
or in this article on Luminous Landscape:
http://peter.vdhamer.com/2010/12/05/dxomarksensor/

To me it seems that we have had some interesting developments in CMOS sensors recently, with the Pentax K5 and Nikon D7000 making the best use of Sony's latest technology. That improvement is probably mostly achieved by reducing readout noise. Shot noise, which is caused by photon statistics, is said to be less disturbing.

Regarding MFDBs, I have nothing against them. I'm even considering buying one now and then. The problem is expense and weight. I have difficulty carrying my DSLR gear on flights; adding MFDB stuff would not make my gear lighter. Also, I'm the kind of person who uses many lenses.

Best regards
Erik
Erik…
I know you were trying to be helpful, and you were trying to persuade me that you were right, while not realizing that you were barking up the wrong tree... but I now realize why you were confused.

In the ideal world, in a studio, when you have control over the lighting, 9.7 stops is adequate, and you expose for the highlight and avoid shadow noise.

In the real world, for the landscape photographer 15.7 stops DR can be useful for shadow detail… as I said above…

If 16 bits gives you shadow and highlight detail and colour, then that is a real benefit - even if the MTF res is not so high. To deny this would be like insisting that lens manufacturers specify image circle diameters that give the same res as the centre of the lens.

If, in the shadows, 1 pixel in 4 or 10 captures a photon, and the software interpolates that to a smooth shade of some colour... that too is a benefit: a trade-off between res and noise.

…"fake" pixels were counted as noise? What you call fake pixels are interpolation, and there is a difference between interpolated data (signal) and noise.

the  lower you set the SNR yardstick, the higher the DR.

Note I am talking about pixels per photon, not photons per pixel… this is where you got confused about subtracting 3 stops instead of adding them.

You could interpolate where the light level is one photon per 500 pixels… but that would be low res or no res, and not very useful.

You said:
The figures are measurements of the amount of noise but don't describe the look of the noise. Some noise is uglier than other noise.

In the shadows it is better to have noise/data that looks OK, even if it is low-res.

Title: how many _significant_ bits?
Post by: BJL on February 24, 2011, 05:16:05 pm
In the real world, for the landscape photographer 15.7 stops DR can be useful for shadow detail… as I said above…
Dick, what you say might well be true about the virtues of an imagined digital camera delivering a signal with 16 significant bits, but the point of Erik and others in this thread is that there is, as of now, no such camera: only some that provide sixteen bits, but with at most the first 12 or 13 being significant, the rest being noise from the sensor and other electronic components. A simple software hack involving a random number generator could convert camera phone output to 16 bits, or even 64, but ...
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on February 25, 2011, 10:20:15 am
even my lowly Hasselblad H3D2-50 (which I have upgraded to an H4D-60) had a "real world" Dynamic range of 15.7 stops - thank you for the information.

Dick,

You've used this 15.7 stops figure a couple of times in this thread - the first time I saw it I assumed it must be a typo, but then you repeated it separately.

Your H3D2-50 uses a Kodak KAF-50100 sensor. Kodak publishes a very clear data sheet on it, containing the following information:
Read noise = 12.5 electrons
Full well depth = 40,300 electrons
Dynamic Range = 70.2 dB

Now from the read noise and full well depth figures I calculate DR = 70.2 dB (exactly verifying Kodak's own DR figure) and DR = 11.7 stops.

Now you claim that Hasselblad turned an 11.7 stop sensor into a 15.7 stop digital back!?   ::)
Hasselblad are good; but they're not that good!

Let's play with that notion for a moment. The only way that DR=15.7 stops could be done, with that full well depth, is if the read noise is reduced to 0.75 electrons. At present, sub-electron readout noise is far, far beyond any standard CCD technology. L3-CCDs with very high electron-multiplication gain and pipeline thresholding can achieve it (see Andor Technologies for example; I built a high speed astronomical photometer called GUFI around one of their iXon units) - but these are small, very specialist sensors, and their DR is still only around 12 stops when operated at such high E-M gain.
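(The inversion behind that 0.75 e- figure, as a one-line check of the arithmetic above:)

```python
full_well, claimed_stops = 40300, 15.7
print(full_well / 2 ** claimed_stops)   # ~0.75 electrons of read noise would be required
```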

I think that if a regular Kodak CCD had achieved sub-electron readout noise, we'd have heard all about it, very loudly and proudly, from Kodak!

Needless to say, I am intrigued. How did you arrive at the 15.7 stop figure in the first place?

Ray


Title: Re: 12, 14 or 16 bits?
Post by: cunim on February 25, 2011, 10:55:13 am
The other problem is that at some point lens flare, blooming, reflections across the CCD, etc. become limiting.  You can have 16 bits of SNR from the camera electronics package, but that is measured using localized detector illumination generated by an ideal optic.  In actual practice, the intra-scene DR is limited by optical flare from brighter regions or by electronic contamination from detector areas that contain bright data.  In my experience, the electronic contamination is much more pronounced with CMOS and the like than with CCDs.

In other words, you can achieve true 16-bit precision only by using an ideal optic and only in images that contain a relatively narrow brightness range.

Fortunately, I don't think this limitation matters very much in photography.  After all, the cameras are not actually precise enough to resolve 16-bit data, so why obsess?  In contrast, a technical imager deals with these issues on a routine basis.  A 16-bit liquid-nitrogen-cooled camera reading out at 1 kHz does you no good if your lens has any flare at all - and all lenses have some.  In fact, it is usually only possible to achieve such high precision levels with intrinsically luminous targets (fluorescent, bioluminescent, astronomical) where we can use very aggressive filters to remove everything except the bandwidth of interest.

I would love to hear experiences from the science users on this board but the discussion is probably not very relevant to making photographs.

Peter
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on February 25, 2011, 11:44:27 am
Ray,

The DxO figures are calculated for SNR = 1, which I think is a pretty conventional figure. Also, DxO normalizes DR to 8 MPixels, assuming a constant print size. A suggestion I made was that a criterion of SNR = 8 could be used instead, and DR would be reduced by about three steps. So if the DxO figure for DR were 12.7 EV, a DR value for SNR = 8 would probably be around 12.7 - 3 = 9.7 steps. I know it's an oversimplification, but in my view not a terrible one. For some reason Dick interpreted my writing as meaning 15.7 stops. So that figure is coming from me, although I stated it to be approximately 9.7 stops.

Thank you very much for elaborating on the issue!

Best regards
Erik


Dick,

You've used this 15.7 stops figure a couple of times in this thread - the first time I saw it I assumed it must be a typo, but then you repeated it separately.

Your H3D2-50 uses a Kodak KAF-50100 sensor. Kodak publishes a very clear data sheet on it, containing the following information:
Read noise = 12.5 electrons
Full well depth = 40,300 electrons
Dynamic Range = 70.2 dB

Now from the readnoise and full well depth figures I calculate DR = 70.2 dB (exactly verifying Kodak's own DR figure) and DR = 11.7 stops.

Now you claim that Hasselblad turned an 11.7 stop sensor into a 15.7 stop digital back!?   ::)
Hasselblad are good; but they're not that good!

Let's play with that notion for a moment. The only way that DR=15.7 stops could be done, with that full well depth, is if the read noise is reduced to 0.75 electrons. At present, sub-electron readout noise is far, far beyond any standard CCD technology. L3-CCDs with very high electron-multiplication gain and pipeline thresholding can achieve it (see Andor Technologies for example; I built a high speed astronomical photometer called GUFI around one of their iXon units) - but these are small, very specialist sensors, and their DR is still only around 12 stops when operated at such high E-M gain.

I think that if a regular Kodak CCD had achieved sub-electron readout noise, we'd have heard all about it, very loudly and proudly, from Kodak!

Needless to say, I am intrigued. How did you arrive at the 15.7 stop figure in the first place?

Ray



Title: Re: 12, 14 or 16 bits?
Post by: bjanes on February 26, 2011, 12:36:12 pm
The DxO figures are calculated for SNR = 1, which I think is a pretty conventional figure. Also, DxO normalizes DR to 8 MPixels, assuming a constant print size. A suggestion I made was that a criterion of SNR = 8 could be used instead, and DR would be reduced by about three steps. So if the DxO figure for DR were 12.7 EV, a DR value for SNR = 8 would probably be around 12.7 - 3 = 9.7 steps. I know it's an oversimplification, but in my view not a terrible one. For some reason Dick interpreted my writing as meaning 15.7 stops. So that figure is coming from me, although I stated it to be approximately 9.7 stops.


As Emil Martinec has pointed out in an earlier post, one can calculate the DR for any S:N by interpolating the DXO graphs. The difficult part is interpolation on a log scale, which is explained in an article (http://en.wikipedia.org/wiki/Logarithmic_scale) on Wikipedia. Using the DXO SNR plot of the Pentax 645D as an example, to determine the DR at base ISO one has to determine the gray scale value at which the S:N is 0 dB. It is somewhere between 0.01% and 0.1%. One can do a screen capture of the plot, measure the distances using the ruler in Photoshop, and perform the log interpolation.

(http://bjanes.smugmug.com/Photography/DXO/DXOPentax645D/1199093386_UWD5p-O.png)

I get a value of 0.0391%, as shown on an Excel spreadsheet. The DR at an S:N of 0 dB is then 100%/0.0391%, or 11.32 stops, which is in agreement with the reported screen DR of 11.37 stops. For an S:N of 6 dB, one performs a similar interpolation; I get a gray scale value of 0.0779%, which gives a DR of 10.3 stops.
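A sketch of that log interpolation in Python; the two bracketing points here are invented so that the result matches the quoted value, they are not read off DxO's actual curve:

```python
import math

def gray_at_snr(p_low, p_high, target_db):
    """Interpolate log10(gray %) linearly against SNR in dB between two points."""
    (g1, s1), (g2, s2) = p_low, p_high          # (gray %, SNR in dB)
    t = (target_db - s1) / (s2 - s1)
    return 10 ** (math.log10(g1) + t * (math.log10(g2) - math.log10(g1)))

g0 = gray_at_snr((0.01, -11.8), (0.1, 8.2), 0.0)        # gray level where SNR = 0 dB
print(round(g0, 4), round(math.log2(100.0 / g0), 2))    # ~0.039 %, ~11.3 stops
```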

(http://bjanes.smugmug.com/Photography/DXO/645interpolation/1199115169_B8Syk-O.gif)

Regards,

Bill
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on February 26, 2011, 01:19:05 pm
Ray,

The DxO figures are calculated for SNR = 1, which I think is a pretty conventional figure.

Correct to first order, but one nuance is that DxO is based on a total empirical noise (they do not attempt to disentangle the various contributions: read noise, signal poisson noise, dark poisson noise, quantization noise, fixed pattern noise, and pixel response non-uniformity); whereas conventional DR only considers read noise, which is normally the dominant type of noise at extremely low signal levels.
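Since those contributions are independent, the total empirical noise adds in quadrature; a toy illustration with placeholder numbers:

```python
import math

def total_noise(read=12.0, shot=20.0, dark=2.0, quant=0.3, fpn=5.0):
    """Combine independent noise sources (all in electrons) in quadrature."""
    return math.sqrt(read**2 + shot**2 + dark**2 + quant**2 + fpn**2)

print(round(total_noise(), 1))          # dominated by the largest term
print(round(total_noise(shot=0.0), 1))  # near black, read noise dominates
```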


Also, DxO normalizes DR to 8 MPixels, assuming a constant print size. A suggestion I made was that a criterion of SNR = 8 could be used instead, and DR would be reduced by about three steps. So if the DxO figure for DR were 12.7 EV, a DR value for SNR = 8 would probably be around 12.7 - 3 = 9.7 steps. I know it's an oversimplification, but in my view not a terrible one.


That is a slight oversimplification alright (again, because you're thinking only of read noise). But your suggestion made sense to me.


For some reason Dick interpreted my writing as meaning 15.7 stops. So that figure is coming from me, although I stated it to be approximately 9.7 stops.


I still don't see how Dick got 15.7 stops. What he said was this:
So, with DR measured @1:1 SNR=12.7, even my lowly Hasselblad H3D2-50 (which I have upgraded to an H4D-60) had a "real world" Dynamic range of 15.7 stops - thank you for the information.

So he wasn't using your approach - your figures were going in the opposite direction (and yours also passed the "sanity check" that DR can only decrease as the SNR threshold is increased). I've no idea what calculation or measurement he was using, and hope he chips in again to explain.

Ray

Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 26, 2011, 05:47:54 pm
Also, DxO normalizes DR to 8 MPixels, assuming a constant print size. A suggestion I made was that a criterion of SNR = 8 could be used instead, and DR would be reduced by about three steps. So if the DxO figure for DR were 12.7 EV, a DR value for SNR = 8 would probably be around 12.7 - 3 = 9.7 steps. I know it's an oversimplification, but in my view not a terrible one.

That is a slight oversimplification alright (again, because you're thinking only of read noise). But your suggestion made sense to me.

I still don't see how Dick got 15.7 stops. What he said was this:
So, with DR measured @1:1 SNR=12.7, even my lowly Hasselblad H3D2-50 (which I have upgraded to an H4D-60) had a "real world" Dynamic range of 15.7 stops - thank you for the information.

So he wasn't using your approach - your figures were going in the opposite direction (and yours also passed the "sanity check" that DR can only decrease as the SNR threshold is increased). I've no idea what calculation or measurement he was using, and hope he chips in again to explain.

Ray
Thank you for your input.

I was using Erik's approach, but in an inverse way, calculating an increase in DR with a loss in res and an increase in "noise" or interpolation.

In the real world we do not always achieve a disc of confusion (DOC) or resolution of 10 microns all the time in all parts of the image (particularly if using small apertures and/or hand-holding), and if we use an SNR of 1/8, using his logic, we add 3 instead of subtracting 3 and get 15.7 stops of DR... even if it only gives us a DOC of 80 microns in deep shadows.

Good interpolation and smoothing can make the shadows less noisy, whatever the DR count.


Title: Re: 12, 14 or 16 bits?
Post by: Ray on February 26, 2011, 08:43:44 pm
Other problem is that at some point lens flare, blooming, reflections across the CCD etc. become limiting.  You can have 16 bits of SNR from the camera electronics package, but that is measured using localized detector illumination generated by an ideal optic.  In actual practice, the intrascene DR is limited by optical flare from brighter regions or electronic contamination from detector areas that contain bright data.  In my experience, the electronic contamination is much more pronounced with CMOS and the like than with CCDs.


Peter,
I've heard this DR limitation due to lens flare mentioned before, but I can find no specifics, examples, or comparisons.

We're all familiar with the obvious examples of lens flare when the camera is pointed in the direction of the sun or some very bright reflection. However, the proposition that lens flare, even when the sun is behind the camera and when a lens hood is in place, will still limit dynamic range, needs investigating.

The questions that spring to my mind are:

(1) What is the DR limit of lens flare, in terms of EV or F/stops, in lenses considered to have the best flare protection?

(2) How does such a limit vary amongst different models of lenses?

(3) Is there any benefit, in terms of shadow quality, to be gained when the DR of the sensor exceeds the DR limit of the lens used?

Before I bought the D7000, I attempted to find out just how significant in practice were the DXO claims of such exceptional DR for the D7000.

I came across a lot of negative comments along the lines of, "Roger Clark has demonstrated that shot noise is the predominant noise in DSLRs at extremely low 'pixel saturation' and therefore claims of 13 stops of DR are meaningless in practice."

Yet, oddly enough, the standard studio scenes used by Dpreview in their reviews of cameras demonstrated clearly, cleaner shadows in the D7000 images compared with, for example, the Canon 60D, so this claim of  'shot noise' limitation on DR seemed bogus to me.

Shortly after taking delivery of my D7000, I retrieved from my archives a Dynamic Range Test Chart created by Jonathan Wienke for the purpose of assessing the subjective significance of DR limitations.

But such a method completely bypasses the problem of lens flare since it involves progressively reducing exposure of a fixed target under constant lighting conditions, then examining the quality of the image which has been underexposed by a specific number of EVs.

Below are two exposures of this test chart which differ by 13EV, the first one is a reasonable ETTR at 4 seconds' exposure, and the second exposure at 1/2000th of a second demonstrates the extreme degree of image degradation in the 14th stop.
Needless to say, image quality 3 stops up from the 14th stop, the 11th stop, is much, much better, and quite acceptable for shadows.

Can anyone comment on flaws in such methodology?

Title: Re: 12, 14 or 16 bits?
Post by: PierreVandevenne on February 26, 2011, 10:24:25 pm
You can't reliably measure dynamic range with something printed on a sheet of paper. It seems to me you are just moving a 6- or 7-stop target over the range of a +/- 13-stop capture device. See it as two rulers sliding side by side. Of course, when the light coming from the target is insufficient, the bottom 3 stops (for example) become irrelevant. Stretching the 4 remaining stops to match the appearance of the 7 stops properly recorded will, of course, lead to problems. Look at your histogram - it's a very coarse sampling of the signal.

These links show how it can be done. Arri's method (2nd link) is said to correlate well with standards, and I found it interesting how they defined the lower threshold for signal validity:

"a signal is valid when it is able to transport the spatial content"

That definition takes into account, to some extent (one can always argue that some setups would be better for some frequencies, etc, ad infinitum), the lens flare, blooming, reflections across the CCD, MTF etc... issues.

http://www.dxomark.com/index.php/Learn-more/DxOMark-database/DxOMark-testing-protocols/Noise-dynamic-range

http://www.arri.com/fileadmin/media/arri.com/downloads/Camera/Camera_Technologies/2009_09-08_DRTC_Brochure.pdf

Title: Re: 12, 14 or 16 bits?
Post by: Ray on February 27, 2011, 12:54:36 am
You can't reliably measure dynamic range with something printed on a sheet of paper. It seems to me you are just moving a 6- or 7-stop target over the range of a +/- 13-stop capture device. See it as two rulers sliding side by side.
http://www.arri.com/fileadmin/media/arri.com/downloads/Camera/Camera_Technologies/2009_09-08_DRTC_Brochure.pdf

Pierre,
That printed sheet of paper is a real-world object. I could have photographed something more beautiful, such as a vase of flowers or the head of David. The reason I didn't is that subtle differences in detail, with increasing underexposure, would not have been so readily identifiable.

This chart has been designed so that the smallest variation in the degree of meaningful detail is readily apparent with changes in DR, expressed as changes in exposure.

Quote
Of course, when the light coming from the target is insufficient, the bottom 3 stops (for example) become irrelevant. Stretching the 4 remaining stops to match the appearance of the 7 stops properly recorded will, of course, lead to problems. Look at your histogram - its a very coarse sampling of the signal

Don't be fooled by the histogram. Look at the image. Below is a screen grab of a 50D image of the same real-world object, at the same exposure representing the 14th stop, under the same lighting conditions.

The histogram of the 50D image looks much more beautiful than the D7000 histogram, which is much coarser by comparison.
What I see is a coarse D7000 histogram of a degraded image with some meaningful detail, as opposed to a less coarse 50D histogram of a totally degraded image with virtually no detail at all.

I also include the 50D, with histogram, at 4 seconds' exposure to demonstrate that the initial 50D ETTR shot was not underexposed. In fact, it looks more exposed than the initial D7000 ETTR shot to me.
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 27, 2011, 06:02:17 am
Peter,

Below are two exposures of this test chart which differ by 13EV, the first one is a reasonable ETTR at 4 seconds' exposure, and the second exposure at 1/2000th of a second demonstrates the extreme degree of image degradation in the 14th stop.
What is of interest to me is that the colour is almost completely lost in the 14th stop as well as the detail... so there is little or no useful data there at any res.
Title: Re: 12, 14 or 16 bits?
Post by: cunim on February 27, 2011, 10:17:55 am
Peter,
I've heard of this DR limitation due to lens flare mentioned before but I can find no specifics or examples or comparisons.


Ray, I wish I could provide useful answers.  I can do that for my discipline (low light imaging) but not for my hobby (photography).  The discipline is just that - a highly optimized and standardized procedure for sensitive and precise (different things) detection of targets at the very limits of the envelope.  Biology, physics, astronomy - all similar.  We can start with a true 16 bit detector, for example, and apply techniques that bring optical noise down to a small enough proportion of total flux that we achieve adequate precision from the entire system.  That won't be 16 bits but might be pretty close.  Doing this creates some absolutely beautiful images but that is not usually the point.  The point is that we can relate a given intensity level in the image to a target characteristic in the real world.

See what I mean?  Very little to do with photography, which is an interpretative process.  Photography is inherently confounded with all sorts of known and unknown signal and noise factors.  An example of an unknown noise factor is internal flare within your favorite lens.  Sometimes you actually like that lens because of its flare characteristics, so what benefit arises from specifying a system SNR with it in place?  Even if you could, manufacturing variability from lens to lens would swamp the precision of any high-bit system.

I agree that manufacturers need to specify the SNR of pro-grade cameras using standardized testing.  That gives us an idea of how well the hardware is implemented.  However, the SNR value of any of the top cameras is poorly predictive of how well the camera will generate pleasing photographs.  It will have much more to do with how well the camera handles narrow parts of the dynamic range - shadows, for example - in which electronic noise contributions can be very evident.  At that point we start to get into the relative benefits of amplification (high ISO) vs integration (longer exposures) and I really don't want to go there.

No worries.  Make pictures with equipment that gives you results that you like. For me, it's MFD and film.

Peter





Title: Re: 12, 14 or 16 bits? (Lens flare)
Post by: ErikKaffehr on February 27, 2011, 10:32:36 am
Hi,

I have the impression that lens flare is difficult to measure. It's definitely an issue in certain situations, like shooting portraits with an evenly illuminated window as background.

The reason it's hard to measure is that the appearance of flare depends much on how light falls on the lens.

I enclose a sample where lens flare caused serious problems but the image could be saved with some work with graduated filters in Lightroom.

Best regards
Erik



Title: Re: 12, 14 or 16 bits?
Post by: bjanes on February 27, 2011, 07:48:43 pm
I've heard of this DR limitation due to lens flare mentioned before but I can find no specifics or examples or comparisons.

We're all familiar with the obvious examples of lens flare when the camera is pointed in the direction of the sun or some very bright reflection. However, the proposition that lens flare, even when the sun is behind the camera and when a lens hood is in place, will still limit dynamic range, needs investigating.

The questions that spring to my mind are:

(1) What is the DR limit of lens flare, in terms of EV or F/stops, in lenses considered to have the best flare protection?

(2) How does such a limit vary amongst different models of lenses?

(3) Is there any benefit, in terms of shadow quality, to be gained when the DR of the sensor exceeds the DR limit of the lens used?

Ray,

I can't answer all of your questions, but an experiment with the Nikon 60 mm AFS Micro-Nikkor and the Nikon D3 may be of some help. This lens is a prime and has 12 elements in 9 groups with multicoating and the Nano coating of some of the elements, and should have reasonably low flare. I shot a Stouffer wedge without masking off the target and another with the target masked to lessen flare.

(http://bjanes.smugmug.com/Photography/Stouffer/smallcomp/1200591705_B7bvj-O.png)

I then rendered the images into 16 bit sRGB with ACR using a linear tone curve; a composite shot with eyedropper readings is shown. The flare light lightens the darker wedges and limits DR.

(http://bjanes.smugmug.com/Photography/Stouffer/SCRComp/1200591833_hcBw7-O.png)

I then measured the DR with Imatest and the results are shown. The DR of the unmasked wedge is less and the flare is indicated by the upward bowing of the density curve in the darker wedges. There is still some flare in the masked image.

(http://bjanes.smugmug.com/Photography/Stouffer/DRComp/1200591611_Bn2J7-O.png)
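
The lifting of the dark wedges by flare is easy to reproduce numerically. A minimal sketch, assuming a 13-step wedge in 1-stop increments and a veiling glare of 0.1% of the brightest patch (both figures are illustrative, not measured):

import numpy as np

# Assumed figures: a 13-step wedge (1 stop per step) and veiling glare equal to
# 0.1% of the brightest patch.
wedge = 2.0 ** -np.arange(13)         # patch luminances, brightest normalised to 1.0
flare = 0.001 * wedge.max()

true_stops = -np.log2(wedge)          # how far below white each patch really is
seen_stops = -np.log2(wedge + flare)  # the same patches with the flare pedestal added

for step, (t, s) in enumerate(zip(true_stops, seen_stops)):
    print(f"step {step:2d}: true {t:5.2f} stops down, with flare {s:5.2f}")

The darkest steps all get pulled up towards the roughly 10-stop level set by the flare pedestal, which is the upward bowing of the density curve seen in the Imatest plots.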

Shortly after taking delivery of my D7000, I retrieved from my archives a Dynamic Range Test Chart created by Jonathan Wienke for the purpose of assessing the subjective significance of DR limitations.

But such a method completely bypasses the problem of lens flare since it involves progressively reducing exposure of a fixed target under constant lighting conditions, then examining the quality of the image which has been underexposed by a specific number of EVs.

Below are two exposures of this test chart which differ by 13EV, the first one is a reasonable ETTR at 4 seconds' exposure, and the second exposure at 1/2000th of a second demonstrates the extreme degree of image degradation in the 14th stop.
Needless to say, image quality 3 stops up from the 14th stop, the 11th stop, is much, much better, and quite acceptable for shadows.

Can anyone comment on flaws in such methodology?

I think the methodology is valid. In Roger Clark's methodology, he takes multiple flat fields of a white wall at various shutter speeds to determine dynamic range and full well capacity of the sensor. He eliminates fixed pattern noise by subtracting two identical exposures. One-shot methods include noise from PRNU (pixel response nonuniformity), uneven illumination and defects in the target. Because of the problems getting good exposures of the Stouffer transmission wedge, Norman Koren (Imatest) also offers a method where one takes exposures of a Kodak Q12 reflection wedge at various exposures and the program reconstructs the luminances mathematically. Obviously, the darker images have less flare light, analogous to Jonathan's method.

Regards,

Bill
Title: Re: 12, 14 or 16 bits?
Post by: Ray on February 27, 2011, 09:35:15 pm
Ray,

I can't answer all of your questions, but an experiment with the Nikon 60 mm AFS Micro-Nikkor and the Nikon D3 may be of some help. This lens is a prime and has 12 elements in 9 groups with multicoating and the Nano coating of some of the elements, and should have reasonably low flare. I shot a Stouffer wedge without masking off the target and another with the target masked to lessen flare.


Thanks for your response, Bill. I think I might have to reshoot Jonathan's DR Test Target positioned in a dark corner of a room as I shoot the scene out of the window on a bright day.

The concept that DR is limited by unavoidable lens flare implies that in practice the 'engineering' DR values quoted by DXOMark may not be an accurate guide.

If we use Erik's suggestion that a usable DR would be about 3 stops down from the DXO figure, say 11EV for the D7000, and 8.5EV for the Canon 50D, then one wonders if the DR differences between these two cameras would be narrowed, and by how much, as a result of a lens-flare limitation to DR.
Title: Re: 12, 14 or 16 bits?
Post by: Dick Roadnight on February 28, 2011, 04:28:09 am
Other problem is that at some point lens flare...
Peter
Yes...

To get the best out of a decent MF digiback you need non-retro-focus prime lenses and a good, pro shift-able lens shade, like my Sinar bellows mask2, which has 4 independently adjustable roller-blinds.

One of the problems with zoom lenses is that the lens shades do not zoom.

Title: Re: 12, 14 or 16 bits?
Post by: Bart_van_der_Wolf on February 28, 2011, 04:34:46 am
The concept that DR is limited by unavoidable lens flare implies that in practice the 'engineering' DR values quoted by DXOMark may not be an accurate guide.

Yet they are, for the sensor response. As usual, a single number doesn't tell the whole story though. Lens characteristics, the use of (clean) filters, a properly dimensioned lens hood, they all matter for the DR end result of the whole system.

Quote
If we use Erik's suggestion that a usable DR would be about 3 stops down from the DXO figure, say 11EV for the D7000, and 8.5EV for the Canon 50D, then one wonders if the DR differences between these two cameras would be narrowed, and by how much, as a result of a lens-flare limitation to DR.

Veiling glare has the most effect on the shadow areas of an image. So its effect also depends on the image content. Raw conversion can also make a difference. What ultimately matters is whether the sensor can offer low-noise detail at low exposure levels. When it does, then postprocessing can attempt to retrieve it.

Cheers,
Bart
Title: Re: 12, 14 or 16 bits?
Post by: Ray on February 28, 2011, 08:47:52 am
Yet they are, for the sensor response. As usual, a single number doesn't tell the whole story though. Lens characteristics, the use of (clean) filters, a properly dimensioned lens hood, they all matter for the DR end result of the whole system.

Veiling glare has the most effect on the shadow areas of an image. So its effect also depends on the image content. Raw conversion can also make a difference. What ultimately matters is whether the sensor can offer low-noise detail at low exposure levels. When it does, then postprocessing can attempt to retrieve it.

Cheers,
Bart

Bart,
I always look at the graphs. At base ISO, DXOMark claim the D7000 has approximately 2.5 stops better DR than the Canon 50D. My shots of Jonathan's DR Test Chart confirm that this is the case, that is, a 50D shot at 1/350th sec exposure shows about the same detail, and the same degree of image degradation, as a D7000 shot at 1/2000th sec exposure, at ISO 100.

So for me, there's no doubt that DXOMark's test results for the sensor are valid and accurate.
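
As a quick sanity check, the gap between those two shutter speeds works out at about 2.5 stops:

import math
# Quick check: the gap between 1/350 s and 1/2000 s, in stops (values quoted above).
print(f"{math.log2(2000 / 350):.2f} stops")   # approx. 2.51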

However, we usually need a lens to take a photograph. Now, it's clear that lens flare and veiling glare can impact on the DR of the processed image, and one tries one's best to reduce such effects. I always use a lens hood. I never have a fixed, protective UV filter on any of my lenses because of the risk of reflections between the filter and the front element of the lens, and I often use a book or a card, or just my hand to block any direct sunlight when shooting in the broad direction of the sun.

The question I'm asking is, after taking all precautions against veiling glare and the obvious examples of lens flare, and after choosing a subject with the sun or major light source behind the camera rather than in front of the camera, is DR still limited by lens flare, and if so, how does that affect the relative DR performance amongst camera sensors, as described by DXOMark?

For example, if the DR difference between the 50D and the D7000 sensors is 2.5EV at ISO 100, is that difference reduced in practice, even under ideal circumstances, to maybe 1EV or 1.5EV?
Title: Re: 12, 14 or 16 bits?
Post by: Bart_van_der_Wolf on February 28, 2011, 09:57:53 am
The question I'm asking is, after taking all precautions against veiling glare and the obvious examples of lens flare, and after choosing a subject with the sun or major light source behind the camera rather than in front of the camera, is DR still limited by lens flare, and if so, how does that affect the relative DR performance amongst camera sensors, as described by DXOMark?

Ray,

System DR is negatively impacted by veiling glare, because total contrast is negatively impacted. The veil adds a certain uniform amount of signal, so that means that as a percentage the shadows will lose more of their density than the highlights gain brightness. Theoretically that could be simply solved by increasing contrast, if only the loss of contrast were uniform across the tonescale (which it isn't). So we are confronted with a loss of contrast mainly in the low exposure areas of the image which happens to be where the S/N ratio is worst already. Therefore, when we boost the contrast of the shadows more, we will also amplify the noise more.

As a general conclusion we can therefore say that a sensor with a better DR allows one to compensate better for the contrast loss due to veiling glare. It is not a coincidence that an application like HDR Expose (http://www.unifiedcolor.com/hdr-expose) has a control to reduce glare. An HDR image has a relatively high S/N ratio in the shadows because it is more shot-noise than read-noise limited.

IOW, the reduction of system DR due to veiling glare can be compensated better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.
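
A minimal numeric sketch of that argument (the full well, veil level and the two read-noise figures are assumed for illustration only): the veil offset can be subtracted in post, but its shot noise stays behind, and the sensor with the lower read noise keeps more usable shadow SNR.

import numpy as np

# Assumed figures for illustration only.
full_well = 40000.0              # e-
veil = 0.005 * full_well         # veiling glare: 0.5% of saturation, added by the lens

for read_noise in (3.0, 25.0):   # a "high DR" and a "low DR" sensor
    shadow = full_well / 2**11   # a patch 11 stops below saturation
    captured = shadow + veil
    noise = np.sqrt(captured + read_noise**2)   # shot noise of signal + veil, plus read noise
    snr = shadow / noise         # the veil offset is subtracted in post; its noise remains
    print(f"read noise {read_noise:4.1f} e-: shadow SNR after glare approx. {snr:.2f}")

With these assumed numbers the low-read-noise sensor keeps the 11th-stop patch above SNR = 1 despite the veil, while the noisier sensor does not.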

Cheers,
Bart
Title: Re: 12, 14 or 16 bits?
Post by: cunim on February 28, 2011, 11:55:32 am
Ray,

IOW, the reduction of system DR due to veiling glare can be compensated better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.

Cheers,
Bart

Bart, thanks for an excellent summary.  As you point out, SNR predicts dynamic range if the intrascene properties permit.  I am not familiar with how DXO makes its measurements but I am sure they are replicable.  Speaking to Ray's question, the fact that they are replicable is what makes them useful.  I suspect that, in amplified imaging of low contrast scenes - in other words when lens flare and other optical factors do not dominate - the DXO values are relevant within the precision limits they specify. 

If that sounds like a waffle, it is.  SNR measurements in area detectors are critical to defining detector parameters but have limited external validity.  In other words, what we are measuring may not model the real world very well.  For example, make an SNR measurement on one side of the detector.  Now shine a moderately bright beam onto 10% of the other side of the detector.  At anything more than 10 bit precision, I'll bet you will see a change in the first measured value.  How would you provide specs for that sort of thing?  I expect the payload guys at NASA have models that describe their customized imagers pretty well, but our cameras are wildly divergent devices – and use proprietary demosaic processing to boot.  Models do not exist as far as I know.  Could they be developed?  Possibly, but it would be very hard, and expensive, and would the target market really care?

Fortunately, we don’t need engineering backgrounds to pick a camera.  We can use DXO or manufacturer’s specs to ensure that a particular detector is not grossly deficient.  Then we can use our eyes.  Photographic image quality is an emergent property of a cloud of variables - hence the endless debates about DSLR, MFD, film, and so forth.  In the end, each of us winds up with a system that suits his mission, personal style, and wallet. 

Peter
Title: Re: 12, 14 or 16 bits?
Post by: bjanes on February 28, 2011, 02:20:49 pm
Ray,

System DR is negatively impacted by veiling glare, because total contrast is negatively impacted. The veil adds a certain uniform amount of signal, so that means that as a percentage the shadows will lose more of their density than the highlights gain brightness. Theoretically that could be simply solved by increasing contrast, if only the loss of contrast were uniform across the tonescale (which it isn't). So we are confronted with a loss of contrast mainly in the low exposure areas of the image which happens to be where the S/N ratio is worst already. Therefore, when we boost the contrast of the shadows more, we will also amplify the noise more.

As a general conclusion we can therefore say that a sensor with a better DR allows one to compensate better for the contrast loss due to veiling glare. It is not a coincidence that an application like HDR Expose (http://www.unifiedcolor.com/hdr-expose) has a control to reduce glare. An HDR image has a relatively high S/N ratio in the shadows because it is more shot-noise than read-noise limited.

IOW, the reduction of system DR due to veiling glare can be compensated better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.

Bart,

It is interesting to compare the DXO results with data published by Kodak for the KAF-40000 CCD which is used in that camera. Kodak lists the DR as 70.2 dB and an estimated linear DR of 69.3 dB. I would assume that the signal departs from linear when the right tail of the Gaussian curve for the shot noise reaches saturation. Kodak lists the full well at 42K e- and the read noise at 13 e-, giving an engineering DR of 42000/13 ≈ 3230:1 = 70.2 dB = 11.66 EV.

DXO lists the DR at 11.37 EV = 68.5 dB, in good agreement with the Kodak figures. I assume that DXO are eliminating pattern noise and lens flare by subtracting two flat frames as Roger Clark does in his tests. Because white balance limits the DR of the blue and red channels when the green channel clips with daylight exposure, they are probably using the green channel DR, but they could let that channel clip and attain saturation for the other channels and calculate a DR for each channel. Also they calculate DR for SNR = 1, whereas in the engineering definition the signal is zero (lens cap on, no exposure).

For a real world photographic DR, one would take only one exposure and compare the highlight and shadow areas, which would include lens flare, pixel response nonuniformity, and other sources of noise after demosaicing and white balance.

DXO lists the SNR at 100% signal at 40.6 dB (107.2:1), well below the 68.5 dB one would expect from the DR, yet the interval between 100% gray value and SNR = 1 (0 dB) is 11.3 EV according to my previous post (http://www.luminous-landscape.com/forum/index.php?topic=51510.20). The number of photons collected can be calculated by squaring the SNR, and this would give a value of 11,400 e- at 100% gray value rather than 42K e-. I don't understand what is going on here and hope you can clarify the situation.
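
For reference, a short script that simply reproduces the arithmetic above (the full well, read noise and DxO SNR figures are taken from the text; nothing new is assumed):

import math

# Figures copied from the text above.
full_well = 42000.0   # e-
read_noise = 13.0     # e-

ratio = full_well / read_noise
print(f"engineering DR: {20 * math.log10(ratio):.1f} dB = {math.log2(ratio):.2f} EV")

# At saturation the signal is shot-noise limited, so SNR = sqrt(N) and N = SNR^2.
snr = 10 ** (40.6 / 20)           # DxO's 40.6 dB at 100% signal
print(f"SNR {snr:.1f}:1 implies roughly {snr**2:,.0f} electrons collected")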

Regards,

Bill
Title: Re: 12, 14 or 16 bits?
Post by: Ray on February 28, 2011, 09:17:15 pm
Ray,

System DR is negatively impacted by veiling glare, because total contrast is negatively impacted. The veil adds a certain uniform amount of signal, so that means that as a percentage the shadows will lose more of their density than the highlights gain brightness. Theoretically that could be simply solved by increasing contrast, if only the loss of contrast were uniform across the tonescale (which it isn't). So we are confronted with a loss of contrast mainly in the low exposure areas of the image which happens to be where the S/N ratio is worst already. Therefore, when we boost the contrast of the shadows more, we will also amplify the noise more.

As a general conclusion we can therefore say that a sensor with a better DR allows one to compensate better for the contrast loss due to veiling glare. It is not a coincidence that an application like HDR Expose (http://www.unifiedcolor.com/hdr-expose) has a control to reduce glare. An HDR image has a relatively high S/N ratio in the shadows because it is more shot-noise than read-noise limited.

IOW, the reduction of system DR due to veiling glare can be compensated better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.

Cheers,
Bart

Okay, Bart. Thanks. That makes sense to me and is what I imagined the situation to be.

Putting it another way, whatever scene the camera captures, without exception, is a scene that has been modified by the lens in all sorts of ways, including loss of resolution, introduction of various distortions, and a loss of contrast in the shadows due to lens flare, depending on the nature of the scene, the type of lighting and the quality of the lens.

The 'Brightness Range' in the scene, after it's been modified by the lens, is separate from the capacity of the sensor to capture such brightness range, just as the resolution and detail content of the scene is separate from the sensor's capacity to capture such detail.

The reason I've been asking this question is because such statements that lens flare puts a limit on DR imply that there's a quantifiable limit to the range of brightness levels that a particular lens can transmit, just as there's a quantifiable limit to the resolution of a lens at a particular f/stop and contrast.

Searching the internet for such a quantifiable limit, I've come across claims that such a limit may be in the order of 10 stops, but no explanation as to how such a figure was derived.

However, it is understood, even if such a figure could be justified, that the D7000 sensor is capable of capturing a better quality image in the 10th stop than the 50D in the 10th stop.
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on March 01, 2011, 05:39:08 am
I can't answer all of your questions, but an experiment with the Nikon 60 mm AFS Micro-Nikkor and the Nikon D3 may be of some help. This lens is a prime and has 12 elements in 9 groups with multicoating and the Nano coating of some of the elements, and should have reasonably low flare. I shot a Stouffer wedge without masking off the target and another with the target masked to lessen flare.

I then rendered the images into 16 bit sRGB with ACR using a linear tone curve; a composite shot with eyedropper readings is shown. The flare light lightens the darker wedges and limits DR.

I then measured the DR with Imatest and the results are shown. The DR of the unmasked wedge is less and the flare is indicated by the upward bowing of the density curve in the darker wedges. There is still some flare in the masked image.

(http://bjanes.smugmug.com/Photography/Stouffer/DRComp/1200591611_Bn2J7-O.png)

I think the methodology is valid. In Roger Clark's methodology, he takes multiple flat fields of a white wall at various shutter speeds to determine dynamic range and full well capacity of the sensor. He eliminates fixed pattern noise by subtracting two identical exposures. One-shot methods include noise from PRNU (pixel response nonuniformity), uneven illumination and defects in the target. Because of the problems getting good exposures of the Stouffer transmission wedge, Norman Koren (Imatest) also offers a method where one takes exposures of a Kodak Q12 reflection wedge at various exposures and the program reconstructs the luminances mathematically. Obviously, the darker images have less flare light, analogous to Jonathan's method.

Regards,

Bill

Bill,

What your tests demonstrate is that a little lens flare actually DOES NOT REDUCE the DR captured. People seem to have got the wrong idea about the impact of (slight) lens flare. It compresses the output tonal range, but it does not lessen the amount of input tonal range which is successfully transferred to the sensor. For example, every step in the Stouffer wedge in your non-flared shot is also reproduced in the flared shot. The output range is merely compressed (by becoming non-linear) from 10.6 stops to 8.86 stops per your measurements. This does not mean that (10.6 - 8.86 = ) 1.74 stops have been lost/clipped. Those wedge steps/stops are still there, clearly distinguished in the graphs. What you should be measuring is DR captured from the input side rather than the output side. That's what matters to photography.

Edit: oops, before anyone tells me - I just realised that I got this quite wrong: I didn't notice the re-scaling of the x-axis between the two graphs. And that the DR numbers from Imatest were indeed coming from the input side. 

If we did the same test with colour negative film like Ektar 100, we'd measure totally crap output DR (around 6 stops of negative density), but we all know that c-neg film like Ektar can distinguish at least 11 stops of input levels, because of its characteristic curve, which is both nonlinear and has a non-unity slope in the linear portion (see its datasheet).

Too much lens flare will hurt though, at the pixel level; the Poisson noise of the flare component in the deep shadows will overwhelm the read noise which would otherwise have been the dominant and limiting noise. But if one is averaging over blocks of pixels of similar input tones (as can be done with the Stouffer wedge steps), the faint end will still be detectable, at a lower net S/N, even in the presence of enhanced background noise from flare.
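
A small sketch of that trade-off, with assumed numbers (the signal, flare and read-noise values are illustrative only): the flare's shot noise dominates the per-pixel noise, yet averaging a block of like-toned pixels still pulls the faint tone clear of it.

import math

# Assumed figures: a deep-shadow patch sitting on a heavy flare pedestal.
signal = 5.0        # e- per pixel of genuine shadow detail
flare = 400.0       # e- per pixel of veiling flare
read_noise = 12.0   # e- per pixel

pixel_noise = math.sqrt(signal + flare + read_noise**2)   # flare shot noise dominates here
print(f"per-pixel S/N: {signal / pixel_noise:.3f}")

# Averaging an n-pixel block of the same tone improves S/N by sqrt(n).
for n in (16, 256):
    print(f"averaged over {n:3d} pixels: S/N approx. {math.sqrt(n) * signal / pixel_noise:.2f}")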

(The other) Ray
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on March 01, 2011, 05:45:28 am
One other thing, Bill:

Also they calculate DR for SNR = 1, whereas in the engineering definition the signal is zero (lens cap on, no exposure).


No, you're thinking of how the bias level is measured. The engineering definition of DR takes the lower limit as the point where SNR=1 (equal levels of signal and read noise) - just as they calculate it.

Ray
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on March 01, 2011, 06:46:39 am
Also, DxO normalizes DR to 8 MPixels assuming a constant print size. A suggestion I made was that a criterion of SNR = 8 could be used instead and DR would be reduced by about three steps. So if the DxO figure for DR would be 12.7 EV, a DR value for SNR = 8 would probably be around 12.7 - 3 = 9.7 stops. I know it's an oversimplification but in my view not a terrible one.
Thank you for your input.

I was using Erik's approach, but in an inverse way, calculating an increase in DR with a loss in res and an increase in "noise" or interpolation.

In the real world we do not always achieve a Disc Of Confusion (DOC) or resolution of 10 microns all the time in all parts of the image (particularly if using small apertures and/or hand-holding), and if we use an SNR of 1/8, using his logic, we add 3 instead of subtracting 3 and get 15.7 stops of DR... even if it only gives us a DOC of 80 microns in deep shadows.

Good interpolation and smoothing can make the shadows less noisy, whatever the DR count.



Dick,

Thanks for coming back with your explanation. I see what you mean now...

It's very unorthodox to be considering an SNR which is a fraction of 1 in any context; not least with DR, which has a crisp engineering definition, setting the lower limit to SNR=1 for a good reason.

But your general idea that smoothing or averaging the shadows out to larger spatial extents increases their net S/N, at the cost of spatial resolution, is valid. This is normal practice in photon-starved X-ray astronomy, for example: look at the background in any image from the Chandra X-ray satellite. It looks oddly, artificially, smooth; any features are clumpy; and this is simply because they literally do distribute 1 or 2 photons over several pixels by entropy-based smoothing.

But this works in X-ray astronomy, because true photon-counting detectors are normally used, i.e. detectors without any readout noise. The problem with doing this with optical CCD and CMOS detectors is that 1 photo-electron of signal is so lost in the many electrons of read noise in its own pixel, not to mention all the read noise in the zero-signal pixels surrounding it, that the gains from smoothing are far less. Earlier in this thread you remarked:

            Note I am talking about pixels per photon, not photons per pixel… this is where you got confused about subtracting 3 stops instead of adding them.
            You could interpolate where the light level is one photon per 500 pixels… but that would be low res or no res, and not very useful.


In a best-case MFD sensor with 12 electrons read noise, we would have:
- noise of sqrt(500)  * 12 = 268 electrons
- signal of 1 electron
- SNR of 0.0037 (!!)

...completely indistinguishable from pure noise. No smoothing or interpolation in the world is going to help here.
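
For anyone who wants to verify the arithmetic:

import math

# The figures from the example above: 1 photo-electron spread over 500 pixels,
# each pixel contributing 12 e- of read noise.
noise = math.sqrt(500) * 12.0
print(f"block noise approx. {noise:.0f} e-, S/N = {1.0 / noise:.4f}")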

Ray
Title: Re: 12, 14 or 16 bits?
Post by: bjanes on March 01, 2011, 01:52:56 pm
One other thing, Bill:

No, you're thinking of how the bias level is measured. The engineering definition of DR takes the lower limit as the point where SNR=1 (equal levels of signal and read noise) - just as they calculate it.

Ray

How can that be when read noise is typically measured with the lens cap on in a dark room and with a high shutter speed? In this case there is no signal unless you are counting the noise as signal.

Regards,

Bill
Title: Re: 12, 14 or 16 bits?
Post by: joofa on March 01, 2011, 02:04:52 pm
How can that be when read noise is typically measured with the lens cap on in a dark room and with a high shutter speed? In this case there is no signal unless you are counting the noise as signal.

Regards,

Bill

The intent is to find a measure of noise above which any measurement under ordinary operation is considered to have more signal than noise. Since this measure marks the point where signal and noise are equal under normal operation, SNR = 1.

Sincerely,

Joofa
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on March 02, 2011, 05:25:36 am
How can that be when read noise is typically measured with the lens cap on in a dark room and with a high shutter speed? In this case there is no signal unless you are counting the noise as signal.

Regards,

Bill

Ah, I see where the confusion arises. Yes, you can also estimate readout noise in that way (using subtracted pairs of frames, because the histogram of a single one is prone to being skewed by pattern noise). But that sort of frame has no signal, so it has no DR; it only yields one half of the DR equation, the read noise denominator. Once enough light impinges on the sensor, with the lens cap off, to push the signal up to the level of the readout noise, then we are at the onset of DR.

Ray
Title: Re: 12, 14 or 16 bits?
Post by: bjanes on March 02, 2011, 11:07:35 am
Ah, I see where the confusion arises. Yes, you can also estimate readout noise in that way (using subtracted pairs of frames, because the histogram of a single one is prone to being skewed by pattern noise). But that sort of frame has no signal, so it has no DR; it only yields one half of the DR equation, the read noise denominator. Once enough light impinges on the sensor, with the lens cap off, to push the signal up to the level of the readout noise, then we are at the onset of DR.

Ray

Ray,

It is a minor point, but I do not agree. SNR and DR are related, but the engineering definition of DR is full well/read noise and read noise is measured when the signal = 0. SNR would be undefined. However, signal:noise of < 1 is readily obtained as shown from this interactive plot on the Nikon Microscopy site (http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html). The text also has a formula for calculating how stray light affects the SNR and this would also apply to veiling flare.

(http://bjanes.smugmug.com/Photography/DXO/NikonMicro/1203626809_BKnED-O.png)

The DXO SNR plots stop at a log value of zero, or 1:1. However one can extend the plot below 1:1 as shown in Emil's (http://bjanes.smugmug.com/Photography/DXO/EmilSNR/1203627044_upXe4-O.png) plot for the Canon 1D3. Using the plot, one could calculate DRs with SNR below zero.

(http://bjanes.smugmug.com/Photography/DXO/EmilSNR/1203627044_upXe4-O.png)

With high contrast images, a SNR of 1.05 can provide some useful data as shown on the Hamamatsu (http://learn.hamamatsu.com/articles/ccdsnr.html) web site.

(http://bjanes.smugmug.com/Photography/DXO/SNRHamamatsu/1203626949_HtbPa-O.png)

Regards,

Bill

Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on March 03, 2011, 05:29:58 am
Ray,

It is a minor point, but I do not agree. SNR and DR are related, but the engineering definition of DR is full well/read noise and read noise is measured when the signal = 0. SNR would be undefined.

Bill,

That's exactly what I said. What are you not agreeing with? I get the impression that you are saying tomayto, I am saying tomahto, and we are both referring to the same red fruit!

However, signal:noise of < 1 is readily obtained as shown from this interactive plot on the Nikon Microscopy site (http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html).

The DXO SNR plots stop at a log value of zero, or 1:1. However one can extend the plot below 1:1 as shown in Emil's (http://bjanes.smugmug.com/Photography/DXO/EmilSNR/1203627044_upXe4-O.png) plot for the Canon 1D3. Using the plot, one could calculate DRs with SNR below zero.

I'm sure that's a typo and you meant "with SNR below 1.0".
Of course it is possible; just remove a few more electrons of signal and S/N drops below 1.0. But is it useful data? Not unless you either spatially average like crazy; or else you stack n images to improve S/N by sqrt(n) and that gets you back to S/N >= 1.0.

I have countless astro images, both research and hobby, where I've had to do this to make certain stars detectable. In fact the conventional threshold in astronomy is S/N = 3 ("three sigma"), but that is measured by evaluating over most or all of the PSF; individual pixels within the PSF will have S/N well below 3.0.
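
A trivial sketch of that stacking gain (the per-frame S/N is just an assumed example):

import math

# Stacking n frames of a faint source improves S/N by sqrt(n):
# signal adds linearly, noise adds in quadrature.
snr_single = 0.8                   # assumed per-frame S/N of a faint star
for n in (1, 4, 16, 64):
    print(f"{n:3d} frames stacked: S/N approx. {snr_single * math.sqrt(n):.1f}")

With these assumed numbers, somewhere around 16 frames the detection crosses the conventional 3-sigma threshold.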

With high contrast images, a SNR of 1.05 can provide some useful data as shown on the Hamamatsu (http://learn.hamamatsu.com/articles/ccdsnr.html) web site.

Absolutely, because (a) SNR of 1.05 falls within the definition for DR, and (b) in the sample image of the cells, adjacent pixels with the same source intensity reinforce each other in our perceptual system; it averages them spatially. But take any one of those brighter pixels in isolation, and set it against the same background, and the eye/brain would not be able to distinguish it from noise.

It muddies the waters when we allow spatial averaging to enter discussions like this, departing from per-pixel considerations; but I realise that we can't stop our brains from doing it! It has some fascinating consequences. For example, we cannot resolve point sources which are closer together than the Rayleigh limit (or Dawes limit in special circumstances); but we can resolve tighter angles than that in extended features. People routinely perceive the major divisions in Saturn's rings in telescopes which, according to Rayleigh and Dawes, cannot resolve such structures. This is due to spatial perceptual reinforcement, of detail which at any given point is sub-threshold, but as a connected ensemble is super-threshold. 
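
For concreteness, the Rayleigh limit mentioned here is simply 1.22 λ/D; a quick worked example with an assumed 200 mm aperture at 550 nm:

import math

# Rayleigh criterion: theta = 1.22 * lambda / D (radians).
# Assumed example: a 200 mm aperture telescope at 550 nm.
wavelength = 550e-9   # m
aperture = 0.200      # m
theta = 1.22 * wavelength / aperture
print(f"Rayleigh limit approx. {math.degrees(theta) * 3600:.2f} arcsec")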

Ray
Title: Re: 12, 14 or 16 bits?
Post by: cunim on March 03, 2011, 11:18:27 am
I believe this thread started with a limited question: “How many bits of precision do I need in digital photography?”    Some of us (thanks Ray) have experience with making those judgments in applied imaging, but that may not be very relevant here.  I can look at a liquid scintillation count and relate disintegrations to photon flux to required detector characteristics.  Put a camera in my hands, though, and I’m pretty much helpless.  Actually, that’s true in a general sense but never mind.  The specific problem is that I cannot control the target characteristics so I do not know the relevant specifications for photography.  Hence my interest in topics such as this. 

I think the photographers here are struggling because they believe there must be a way to judge camera image quality using one or two simple variables.  Easy enough when comparing point and shoots to pro gear, but not easy within different models of pro gear.  To make that sort of comparison we need a psychophysics of digital photography and I am unaware of any such thing.  I guess the masters do it with craft as opposed to science.

Is this a fair summary of what posters are saying?  Our cameras are not providing anything like the 10-12 bits available from detector hardware – unless we work within a very narrow range of flux.  I think the postings here are suggesting about 8 stops with typical intra-scene flux levels and fair linearity.  Once we get beyond the linear response range the effects of degraded SNR are readily visible but poorly specified.  To minimize shadow detail problems, the DSLR crowd look for good high ISO performance while MFD users tend to look for low noise integration at base ISO or near it. My own impression is that these strategies probably reflect the different response properties of CMOS vs CCD but that is speculation.
Title: Re: 12, 14 or 16 bits?
Post by: ondebanks on March 03, 2011, 12:26:36 pm
I believe this thread started with a limited question: “How many bits of precision do I need in digital photography?”    Some of us (thanks Ray) have experience with making those judgments in applied imaging, but that may not be very relevant here. 

Indeed, we swung off on many interesting tangents, and mine may not have been very relevant to Erik's original question. I think that Erik's original contention was correct, that the DxO data backed it up, and that Emil Martinec provided the best explanation as to "why" in the thread.

I think the photographers here are struggling because they believe there must be a way to judge camera image quality using one or two simple variables.  Easy enough when comparing point and shoots to pro gear, but not easy within different models of pro gear.  To make that sort of comparison we need a psychophysics of digital photography and I am unaware of any such thing.  I guess the masters do it with craft as opposed to science.

I have a suspicion that in this context, good craft works because it is based on good science, even if the craftsman or woman doesn't realise this.

Is this a fair summary of what posters are saying?  Our cameras are not providing anything like the 10-12 bits available from detector hardware – unless we work within a very narrow range of flux.  I think the postings here are suggesting about 8 stops with typical intra-scene flux levels and fair linearity.  Once we get beyond the linear response range the effects of degraded SNR are readily visible but poorly specified.  To minimize shadow detail problems, the DSLR crowd look for good high ISO performance while MFD users tend to look for low noise integration at base ISO or near it. My own impression is that these strategies probably reflect the different response properties of CMOS vs CCD but that is speculation.


Yes I think that's a fair summary. I would add one further point which is this: because the signal is quantized as discrete electrons, even in a noiseless sensor there is no benefit to 16 bits once the maximum electron count can be accommodated in 15 bits (32768), or fewer. As pixels shrink to 6 microns and smaller, reducing their full well depth, we are already close to this point in MFD and probably beyond it in many DSLRs. Factor in the "bit redundancy" due to real-world noise, and we are way beyond it in all such sensors.
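
A rough sketch of that argument, with assumed full-well and read-noise figures (illustrative values, not measurements of any particular sensor):

import math

# Assumed example sensors (full well in e-, read noise in e-).
sensors = {"CCD, MFD-like": (40000, 13.0), "CMOS, DSLR-like": (30000, 3.0)}

for name, (full_well, read_noise) in sensors.items():
    ceiling = math.log2(full_well)              # hard limit set by counting discrete electrons
    useful = math.log2(full_well / read_noise)  # engineering DR, i.e. noise-limited bits
    print(f"{name}: electron-count ceiling {ceiling:.1f} bits, noise-limited {useful:.1f} bits")

The noise-limited figure is the one that matters in practice, and for both assumed examples it sits well under 14 bits.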

Ray
Title: Re: 12, 14 or 16 bits?
Post by: ErikKaffehr on March 03, 2011, 01:55:24 pm
Hi,

In my view it was nice to get some insight on SNR outside the scope of photography. Also, anyone who has followed all the postings in this thread should by now have a good understanding of the significance of bit depth, so I'd suggest the original intention of the posting has been achieved.

Best regards
Erik

Indeed, we swung off on many interesting tangents, and mine may not have been very relevant to Erik's original question. I think that Erik's original contention was correct, that the DxO data backed it up, and that Emil Martinec provided the best explanation as to "why" in the thread.

I have a suspicion that in this context, good craft works because it is based on good science, even if the craftsman or woman doesn't realise this.

Yes I think that's a fair summary. I would add one further point which is this: because the signal is quantized as discrete electrons, even in a noiseless sensor there is no benefit to 16 bits once the maximum electron count can be accommodated in 15 bits (32768), or fewer. As pixels shrink to 6 microns and smaller, reducing their full well depth, we are already close to this point in MFD and probably beyond it in many DSLRs. Factor in the "bit redundancy" due to real-world noise, and we are way beyond it in all such sensors.

Ray