
Author Topic: 12, 14 or 16 bits?  (Read 23813 times)

bjanes

Re: 12, 14 or 16 bits?
« Reply #40 on: February 27, 2011, 07:48:43 pm »

I've heard this DR limitation due to lens flare mentioned before, but I can find no specifics, examples or comparisons.

We're all familiar with the obvious examples of lens flare when the camera is pointed in the direction of the sun or some very bright reflection. However, the proposition that lens flare, even when the sun is behind the camera and when a lens hood is in place, will still limit dynamic range, needs investigating.

The questions that spring to my mind are:

(1) What is the DR limit of lens flare, in terms of EV or F/stops, in lenses considered to have the best flare protection?

(2) How does such a limit vary amongst different models of lenses?

(3) Is there any benefit, in terms of shadow quality, to be gained when the DR of the sensor exceeds the DR limit of the lens used?

Ray,

I can't answer all of your questions, but an experiment with the Nikon 60 mm AFS Micro-Nikkor and the Nikon D3 may be of some help. This lens is a prime with 12 elements in 9 groups, multicoated and with Nano coating on some of the elements, so it should have reasonably low flare. I shot a Stouffer wedge without masking off the target and another with the target masked to lessen flare.



I then rendered the images into 16-bit sRGB with ACR using a linear tone curve; a composite shot with eyedropper readings is shown. The flare light lightens the darker wedges and limits DR.



I then measured the DR with Imatest and the results are shown. The DR of the unmasked wedge is less and the flare is indicated by the upward bowing of the density curve in the darker wedges. There is still some flare in the masked image.



Shortly after taking delivery of my D7000, I retrieved from my archives a Dynamic Range Test Chart created by Jonathan Wienke for the purpose of assessing the subjective significance of DR limitations.

But such a method completely bypasses the problem of lens flare since it involves progressively reducing exposure of a fixed target under constant lighting conditions, then examining the quality of the image which has been underexposed by a specific number of EVs.

Below are two exposures of this test chart which differ by 13 EV. The first is a reasonable ETTR at 4 seconds' exposure; the second, at 1/2000th of a second, demonstrates the extreme degree of image degradation in the 14th stop. Needless to say, image quality 3 stops up from the 14th stop, in the 11th stop, is much, much better, and quite acceptable for shadows.

Can anyone comment on flaws in such methodology?

I think the methodology is valid. In Roger Clark's methodology, he takes multiple flat fields of a white wall at various shutter speeds to determine the dynamic range and full well capacity of the sensor. He eliminates fixed pattern noise by subtracting two identical exposures. One-shot methods include the effects of PRNU (pixel response nonuniformity), uneven illumination, and defects in the target. Because of the difficulty of getting good exposures of the Stouffer transmission wedge, Norman Koren (Imatest) also offers a method where one takes a series of exposures of a Kodak Q12 reflection wedge and the program reconstructs the luminances mathematically. Obviously, the darker images have less flare light, analogous to Jonathan's method.
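For anyone who wants to try the pair-subtraction idea on their own raw files, here is a rough numpy sketch of the photon-transfer arithmetic (my own illustration, not Clark's actual code; the frame names and loading are placeholders):

    import numpy as np

    # flat1, flat2: two identical exposures of an evenly lit wall (raw levels,
    # black level subtracted); dark1, dark2: two lens-cap frames at the same ISO.

    def pair_stats(a, b):
        # Subtracting identical exposures cancels fixed-pattern noise;
        # halving the difference variance gives the per-frame random variance.
        mean = 0.5 * (a.mean() + b.mean())
        var = np.var(a - b) / 2.0
        return mean, var

    def sensor_metrics(flat1, flat2, dark1, dark2):
        sig_mean, sig_var = pair_stats(flat1, flat2)
        _, read_var = pair_stats(dark1, dark2)
        gain = sig_mean / (sig_var - read_var)   # e- per raw level (shot-noise slope)
        read_noise_e = gain * np.sqrt(read_var)  # read noise in electrons
        return gain, read_noise_e

Repeating this at increasing exposures until the variance stops growing locates full well, and full well divided by the read noise gives the engineering DR.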

Regards,

Bill
Logged

Ray

Re: 12, 14 or 16 bits?
« Reply #41 on: February 27, 2011, 09:35:15 pm »

Ray,

I can't answer all of your questions, but an experiment with the Nikon 60 mm AFS Micro-Nikkor and the Nikon D3 may be of some help. This lens is a prime with 12 elements in 9 groups, multicoated and with Nano coating on some of the elements, so it should have reasonably low flare. I shot a Stouffer wedge without masking off the target and another with the target masked to lessen flare.


Thanks for your response, Bill. I think I might have to reshoot Jonathan's DR Test Target positioned in a dark corner of a room as I shoot the scene out of the window on a bright day.

The concept that DR is limited by unavoidable lens flare implies that in practice the 'engineering' DR values quoted by DXOMark may not be an accurate guide.

If we use Erik's suggestion that a usable DR would be about 3 stops down from the DXO figure, say 11EV for the D7000, and 8.5EV for the Canon 50D, then one wonders if the DR differences between these two cameras would be narrowed, and by how much, as a result of a lens-flare limitation to DR.
Logged

Dick Roadnight

Re: 12, 14 or 16 bits?
« Reply #42 on: February 28, 2011, 04:28:09 am »

Other problem is that at some point lens flare...
Peter
Yes...

To get the best out of a decent MF digiback you need non-retro-focus prime lenses and a good, pro shift-able lens shade, like my Sinar bellows mask2, which has 4 independently adjustable roller-blinds.

One of the problems with zoom lenses is that the lens shades do not zoom.

Logged
Hasselblad H4, Sinar P3 monorail view camera, Schneider Apo-digitar lenses

Bart_van_der_Wolf

Re: 12, 14 or 16 bits?
« Reply #43 on: February 28, 2011, 04:34:46 am »

The concept that DR is limited by unavoidable lens flare implies that in practice the 'engineering' DR values quoted by DXOMark may not be an accurate guide.

Yet they are, for the sensor response. As usual, though, a single number doesn't tell the whole story. Lens characteristics, the use of (clean) filters, a properly dimensioned lens hood: they all matter for the DR end result of the whole system.

Quote
If we use Erik's suggestion that a usable DR would be about 3 stops down from the DXO figure, say 11EV for the D7000, and 8.5EV for the Canon 50D, then one wonders if the DR differences between these two cameras would be narrowed, and by how much, as a result of a lens-flare limitation to DR.

Veiling glare has the most effect on the shadow areas of an image, so its effect also depends on the image content. Raw conversion can also make a difference. What ultimately matters is whether the sensor can offer low-noise detail at low exposure levels. When it does, postprocessing can attempt to retrieve it.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Ray

Re: 12, 14 or 16 bits?
« Reply #44 on: February 28, 2011, 08:47:52 am »

Yet they are, for the sensor response. As usual, though, a single number doesn't tell the whole story. Lens characteristics, the use of (clean) filters, a properly dimensioned lens hood: they all matter for the DR end result of the whole system.

Veiling glare has the most effect on the shadow areas of an image, so its effect also depends on the image content. Raw conversion can also make a difference. What ultimately matters is whether the sensor can offer low-noise detail at low exposure levels. When it does, postprocessing can attempt to retrieve it.

Cheers,
Bart

Bart,
I always look at the graphs. At base ISO, DXOMark claim the D7000 has approximately 2.5 stops better DR than the Canon 50D. My shots of Jonathan's DR Test Chart confirm that this is the case, that is, a 50D shot at a 350th sec exposure shows about the same detail, and same degree of image degradation, as a D7000 shot at a 2000th sec exposure, at ISO 100.

So for me, there's no doubt that DXOMark's test results for the sensor are valid and accurate.

However, we usually need a lens to take a photograph. Now, it's clear that lens flare and veiling glare can affect the DR of the processed image, and one tries one's best to reduce such effects. I always use a lens hood. I never leave a fixed, protective UV filter on any of my lenses, because of the risk of reflections between the filter and the front element of the lens, and I often use a book, a card, or just my hand to block any direct sunlight when shooting in the broad direction of the sun.

The question I'm asking is, after taking all precautions against veiling glare and the obvious examples of lens flare, and after choosing a subject with the sun or major light source behind the camera rather than in front of the camera, is DR still limited by lens flare, and if so, how does that affect the relative DR performance amongst camera sensors, as described by DXOMark?

For example, if the DR difference between the 50D and the D7000 sensors is 2.5EV at ISO 100, is that difference reduced in practice, even under ideal circumstances, to maybe 1EV or 1.5EV?
Logged

Bart_van_der_Wolf

Re: 12, 14 or 16 bits?
« Reply #45 on: February 28, 2011, 09:57:53 am »

The question I'm asking is, after taking all precautions against veiling glare and the obvious examples of lens flare, and after choosing a subject with the sun or major light source behind the camera rather than in front of the camera, is DR still limited by lens flare, and if so, how does that affect the relative DR performance amongst camera sensors, as described by DXOMark?

Ray,

System DR is negatively impacted by veiling glare, because total contrast is negatively impacted. The veil adds a certain uniform amount of signal, so that means that as a percentage the shadows will lose more of their density than the highlights gain brightness. Theoretically that could be simply solved by increasing contrast, if only the loss of contrast were uniform across the tonescale (which it isn't). So we are confronted with a loss of contrast mainly in the low exposure areas of the image which happens to be where the S/N ratio is worst already. Therefore, when we boost the contrast of the shadows more, we will also amplify the noise more.

As a general conclusion we can therefore say that a sensor with a better DR allows us to compensate better for the contrast loss due to veiling glare. It is not a coincidence that an application like HDR Expose has a control to reduce glare. An HDR image has a relatively high S/N ratio in the shadows because it is more shot-noise than read-noise limited.

IOW, the reduction of system DR due to veiling glare can be compensated for better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.
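A toy calculation (made-up numbers, just to put the point in figures) shows how a uniform glare pedestal costs far more S/N in the shadows than in the highlights, even after the veil itself is subtracted back out:

    import numpy as np

    full_well  = 40000.0              # saturation signal, electrons
    read_noise = 5.0                  # electrons RMS
    glare      = 0.01 * full_well     # uniform veiling glare, 1% of full scale

    for ev_down in range(0, 14):      # tones from saturation down to the 13th stop
        signal   = full_well / 2**ev_down
        recorded = signal + glare     # the veil lifts every tone by the same amount
        # Subtracting the pedestal restores the tone, but its shot noise stays behind:
        snr_clean = signal / np.sqrt(signal + read_noise**2)
        snr_glare = signal / np.sqrt(recorded + read_noise**2)
        print(f"{ev_down:2d} EV down: SNR {snr_clean:7.1f} without glare, {snr_glare:7.1f} with")

The highlights barely notice the veil, while the deepest stops lose most of their S/N, which is exactly why a sensor with more headroom at the bottom end survives the correction better.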

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

cunim

Re: 12, 14 or 16 bits?
« Reply #46 on: February 28, 2011, 11:55:32 am »

Ray,

IOW, the reduction of system DR due to veiling glare can be compensated for better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.

Cheers,
Bart

Bart, thanks for an excellent summary.  As you point out, SNR predicts dynamic range if the intrascene properties permit.  I am not familiar with how DXO makes its measurements but I am sure they are replicable.  Speaking to Ray's question, the fact that they are replicable is what makes them useful.  I suspect that, in amplified imaging of low contrast scenes - in other words when lens flare and other optical factors do not dominate - the DXO values are relevant within the precision limits they specify. 

If that sounds like a waffle, it is.  SNR measurements in area detectors are critical to defining detector parameters but have limited external validity.  In other words, what we are measuring may not model the real world very well.  For example, make an SNR measurement on one side of the detector.  Now shine a moderately bright beam onto 10% of the other side of the detector.  At anything more than 10 bit precision, I'll bet you will see a change in the first measured value.  How would you provide specs for that sort of thing?  I expect the payload guys at NASA have models that describe their customized imagers pretty well, but our cameras are wildly divergent devices – and use proprietary demosaic processing to boot.  Models do not exist as far as I know.  Could they be developed?  Possibly, but it would be very hard, and expensive, and would the target market really care?

Fortunately, we don’t need engineering backgrounds to pick a camera.  We can use DXO or manufacturer’s specs to ensure that a particular detector is not grossly deficient.  Then we can use our eyes.  Photographic image quality is an emergent property of a cloud of variables - hence the endless debates about DSLR, MFD, film, and so forth.  In the end, each of us winds up with a system that suits his mission, personal style, and wallet. 

Peter
Logged

bjanes

Re: 12, 14 or 16 bits?
« Reply #47 on: February 28, 2011, 02:20:49 pm »

Ray,

System DR is negatively impacted by veiling glare, because total contrast is negatively impacted. The veil adds a certain uniform amount of signal, so that means that as a percentage the shadows will lose more of their density than the highlights gain brightness. Theoretically that could be simply solved by increasing contrast, if only the loss of contrast were uniform across the tonescale (which it isn't). So we are confronted with a loss of contrast mainly in the low exposure areas of the image which happens to be where the S/N ratio is worst already. Therefore, when we boost the contrast of the shadows more, we will also amplify the noise more.

As a general conclusion we can therefore say that a sensor with a better DR allows us to compensate better for the contrast loss due to veiling glare. It is not a coincidence that an application like HDR Expose has a control to reduce glare. An HDR image has a relatively high S/N ratio in the shadows because it is more shot-noise than read-noise limited.

IOW, the reduction of system DR due to veiling glare can be compensated for better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.

Bart,

It is interesting to compare the DXO results with the data published by Kodak for the KAF-40000 CCD used in that camera. Kodak lists the DR as 70.2 dB and an estimated linear DR of 69.3 dB. I would assume that the signal departs from linear when the right tail of the Gaussian curve for the shot noise reaches saturation. Kodak lists the full well at 42K e- and the read noise at 13 e-, giving an engineering DR of 42000/13 = 70.2 dB = 11.66 EV.
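Spelled out (the same arithmetic as above, nothing new):

    import math

    full_well  = 42000    # electrons, Kodak KAF-40000 data sheet
    read_noise = 13       # electrons

    dr = full_well / read_noise
    print(f"DR = {dr:.0f}:1 = {20 * math.log10(dr):.1f} dB = {math.log2(dr):.2f} EV")
    # prints: DR = 3231:1 = 70.2 dB = 11.66 EV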

DXO lists the DR at 11.37 EV = 68.5 dB, in good agreement with the Kodak figures. I assume that DXO eliminate pattern noise and lens flare by subtracting two flat frames, as Roger Clark does in his tests. Because white balance limits the DR of the blue and red channels when the green channel clips with a daylight exposure, they are probably using the green channel DR, but they could let that channel clip and attain saturation for the other channels and calculate a DR for each channel. Also they calculate DR for SNR = 1, whereas in the engineering definition the signal is zero (lens cap on, no exposure).

For a real world photographic DR, one would take only one exposure and compare the highlight and shadow areas, which would include lens flare, pixel response nonuniformity, and other sources of noise after demosaicing and white balance.

DXO lists the SNR at 100% signal as 40.6 dB (107.2:1), well below the 68.5 dB one would expect from the DR, yet the interval between the 100% gray value and SNR = 1 (0 dB) is 11.3 EV according to my previous post. The number of photons collected can be calculated by squaring the SNR, and this would give a value of about 11,400 e- at the 100% gray value rather than 42K e-. I don't understand what is going on here and hope you can clarify the situation.
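For reference, the dB-to-electrons conversion I am using is just this (it assumes the signal at that level is purely shot-noise limited):

    import math

    snr_db    = 40.6                  # DxO SNR at 100% signal
    snr       = 10 ** (snr_db / 20)   # about 107:1
    electrons = snr ** 2              # shot-noise limited: SNR = sqrt(N), so N = SNR^2
    print(f"SNR {snr:.1f}:1 -> about {electrons:.0f} electrons")
    # prints roughly 11,500 e-, far short of the 42K e- full well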

Regards,

Bill
Logged

Ray

Re: 12, 14 or 16 bits?
« Reply #48 on: February 28, 2011, 09:17:15 pm »

Ray,

System DR is negatively impacted by veiling glare, because total contrast is negatively impacted. The veil adds a certain uniform amount of signal, so that means that as a percentage the shadows will lose more of their density than the highlights gain brightness. Theoretically that could be simply solved by increasing contrast, if only the loss of contrast were uniform across the tonescale (which it isn't). So we are confronted with a loss of contrast mainly in the low exposure areas of the image which happens to be where the S/N ratio is worst already. Therefore, when we boost the contrast of the shadows more, we will also amplify the noise more.

As a general conclusion we can therefore say that a sensor with a better DR allows us to compensate better for the contrast loss due to veiling glare. It is not a coincidence that an application like HDR Expose has a control to reduce glare. An HDR image has a relatively high S/N ratio in the shadows because it is more shot-noise than read-noise limited.

IOW, the reduction of system DR due to veiling glare can be compensated for better if the sensor DR is higher. Therefore the (engineering definition of) DR is still a good predictor of potential image quality.

Cheers,
Bart

Okay, Bart. Thanks. That makes sense to me and is what I imagined the situation to be.

Putting it another way, whatever scene the camera captures, without exception, is a scene that has been modified by the lens in all sorts of ways, including loss of resolution, introduction of various distortions, and a loss of contrast in the shadows due to lens flare, depending on the nature of the scene, the type of lighting and the quality of the lens.

The 'Brightness Range' in the scene, after it's been modified by the lens, is separate from the capacity of the sensor to capture such brightness range, just as the resolution and detail content of the scene is separate from the sensor's capacity to capture such detail.

The reason I've been asking this question is that statements that lens flare puts a limit on DR imply that there's a quantifiable limit to the range of brightness levels that a particular lens can transmit, just as there's a quantifiable limit to the resolution of a lens at a particular f/stop and contrast.

Searching the internet for such a quantifiable limit, I've come across claims that it may be on the order of 10 stops, but no explanation as to how such a figure was derived.

However, it is understood, even if such a figure could be justified, that the D7000 sensor is capable of capturing a better quality image in the 10th stop than the 50D in the 10th stop.
Logged

ondebanks

Re: 12, 14 or 16 bits?
« Reply #49 on: March 01, 2011, 05:39:08 am »

I can't answer all of your questions, but an experiment with the Nikon 60 mm AFS Micro-Nikkor and the Nikon D3 may be of some help. This lens is a prime with 12 elements in 9 groups, multicoated and with Nano coating on some of the elements, so it should have reasonably low flare. I shot a Stouffer wedge without masking off the target and another with the target masked to lessen flare.

I then rendered the images into 16-bit sRGB with ACR using a linear tone curve; a composite shot with eyedropper readings is shown. The flare light lightens the darker wedges and limits DR.

I then measured the DR with Imatest and the results are shown. The DR of the unmasked wedge is less and the flare is indicated by the upward bowing of the density curve in the darker wedges. There is still some flare in the masked image.



I think the methodology is valid. In Roger Clark's methodology, he takes multiple flat fields of a white wall at various shutter speeds to determine the dynamic range and full well capacity of the sensor. He eliminates fixed pattern noise by subtracting two identical exposures. One-shot methods include the effects of PRNU (pixel response nonuniformity), uneven illumination, and defects in the target. Because of the difficulty of getting good exposures of the Stouffer transmission wedge, Norman Koren (Imatest) also offers a method where one takes a series of exposures of a Kodak Q12 reflection wedge and the program reconstructs the luminances mathematically. Obviously, the darker images have less flare light, analogous to Jonathan's method.

Regards,

Bill

Bill,

What your tests demonstrate is that a little lens flare actually DOES NOT REDUCE the DR captured. People seem to have got the wrong idea about the impact of (slight) lens flare. It compresses the output tonal range, but it does not lessen the amount of input tonal range which is successfully transferred to the sensor. For example, every step in the Stouffer wedge in your non-flared shot is also reproduced in the flared shot. The output range is merely compressed (by becoming non-linear) from 10.6 stops to 8.86 stops per your measurements. This does not mean that (10.6 - 8.86 = ) 1.74 stops have been lost/clipped. Those wedge steps/stops are still there, clearly distinguished in the graphs. What you should be measuring is DR captured from the input side rather than the output side. That's what matters to photography.

Edit: oops, before anyone tells me - I just realised that I got this quite wrong: I didn't notice the re-scaling of the x-axis between the two graphs. And that the DR numbers from Imatest were indeed coming from the input side. 

If we did the same test with colour negative film like Ektar 100, we'd measure totally crap output DR (around 6 stops of negative density), but we all know that c-neg film like Ektar can distinguish at least 11 stops of input levels, because of its characteristic curve, which is both nonlinear and has a non-unity slope in the linear portion (see its datasheet).

Too much lens flare will hurt though, at the pixel level; the Poisson noise of the flare component in the deep shadows will overwhelm the read noise which would otherwise have been the dominant and limiting noise. But if one is averaging over blocks of pixels of similar input tones (as can be done with the Stouffer wedge steps), the faint end will still be detectable, at a lower net S/N, even in the presence of enhanced background noise from flare.
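Putting rough numbers on that (my own toy figures, not measured data): a deep-shadow tone can drop below SNR = 1 per pixel once a flare pedestal adds its own shot noise, yet still be recoverable when averaged over a block of like-toned pixels:

    import math

    signal     = 5.0         # e- per pixel, a deep-shadow tone
    flare      = 200.0       # e- per pixel of uniform veiling glare
    read_noise = 3.0         # e- RMS
    n_block    = 32 * 32     # pixels of similar tone averaged together

    def snr(sig, background, rn, n=1):
        # Shot noise of signal + background plus read noise, per pixel;
        # averaging n like pixels improves S/N by sqrt(n).
        return sig / math.sqrt(sig + background + rn**2) * math.sqrt(n)

    print(snr(signal, 0.0,   read_noise), snr(signal, 0.0,   read_noise, n_block))
    print(snr(signal, flare, read_noise), snr(signal, flare, read_noise, n_block))
    # without flare: ~1.3 per pixel, ~43 per block
    # with flare:    ~0.34 per pixel, ~11 per block - noisier, but still detectable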

(The other) Ray
« Last Edit: March 01, 2011, 08:52:38 am by ondebanks »
Logged

ondebanks

Re: 12, 14 or 16 bits?
« Reply #50 on: March 01, 2011, 05:45:28 am »

One other thing, Bill:

Also they calculate DR for SNR = 1, whereas in the engineering definition the signal is zero (lens cap on, no exposure).


No, you're thinking of how the bias level is measured. The engineering definition of DR takes the lower limit as the point where SNR=1 (equal levels of signal and read noise) - just as they calculate it.

Ray
Logged

ondebanks

Re: 12, 14 or 16 bits?
« Reply #51 on: March 01, 2011, 06:46:39 am »

Also, DxO normalizes DR to 8 MPixels assuming a constant print size. A suggestion I made was that a criterion of SNR = 8 could be used instead, and DR would be reduced by about three stops. So if the DxO figure for DR were 12.7 EV, a DR value for SNR = 8 would probably be around 12.7 - 3 = 9.7 stops. I know it's an oversimplification but in my view not a terrible one.
Thank you for your input.

I was using Erik's approach, but in an inverse way, calculating an increase in DR with a loss in resolution and an increase in "noise" or interpolation.

In the real world we do not always achieve a disc of confusion (DOC) or resolution of 10 microns in all parts of the image (particularly if using small apertures and/or hand-holding), and if we use an SNR of 1/8, using his logic, we add 3 instead of subtracting 3 and get 15.7 stops of DR... even if it only gives us a DOC of 80 microns in the deep shadows.

Good interpolation and smoothing can make the shadows less noisy, whatever the DR count.



Dick,

Thanks for coming back with your explanation. I see what you mean now...

It's very unorthodox to consider an SNR that is a fraction of 1 in any context, not least with DR, which has a crisp engineering definition setting the lower limit at SNR = 1 for a good reason.

But your general idea that smoothing or averaging the shadows out to larger spatial extents increases their net S/N, at the cost of spatial resolution, is valid. This is normal practice in photon-starved X-ray astronomy, for example: look at the background in any image from the Chandra X-ray satellite. It looks oddly, artificially, smooth; any features are clumpy; and this is simply because they literally do distribute 1 or 2 photons over several pixels by entropy-based smoothing.

But this works in X-ray astronomy because true photon-counting detectors are normally used, i.e. detectors without any readout noise. The problem with doing this with optical CCD and CMOS detectors is that 1 photo-electron of signal is so lost in the many electrons of read noise in its own pixel, not to mention all the read noise in the zero-signal pixels surrounding it, that the gains from smoothing are far less. Earlier in this thread you remarked:

            Note I am talking about pixels per photon, not photons per pixel… this is where you got confused about subtracting 3 stops instead of adding them.
            You could interpolate where the light level is one photon per 500 pixels… but that would be low res or no res, and not very useful.


In a best-case MFD sensor with 12 electrons read noise, we would have:
- noise of sqrt(500)  * 12 = 268 electrons
- signal of 1 electron
- SNR of 0.0037 (!!)

...completely indistinguishable from pure noise. No smoothing or interpolation in the world is going to help here.
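In code, for anyone who wants to play with the numbers:

    import math

    read_noise = 12.0    # e- per pixel, best-case MFD CCD
    n_pixels   = 500     # pixels binned to catch one photo-electron
    signal     = 1.0     # e- total

    noise = math.sqrt(n_pixels) * read_noise   # read noise adds in quadrature
    print(f"noise = {noise:.0f} e-, SNR = {signal / noise:.4f}")
    # prints: noise = 268 e-, SNR = 0.0037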

Ray
Logged

bjanes

Re: 12, 14 or 16 bits?
« Reply #52 on: March 01, 2011, 01:52:56 pm »

One other thing, Bill:

No, you're thinking of how the bias level is measured. The engineering definition of DR takes the lower limit as the point where SNR=1 (equal levels of signal and read noise) - just as they calculate it.

Ray

How can that be, when read noise is typically measured with the lens cap on in a dark room and with a high shutter speed? In this case there is no signal, unless you are counting the noise as signal.

Regards,

Bill
Logged

joofa

Re: 12, 14 or 16 bits?
« Reply #53 on: March 01, 2011, 02:04:52 pm »

How can that be, when read noise is typically measured with the lens cap on in a dark room and with a high shutter speed? In this case there is no signal, unless you are counting the noise as signal.

Regards,

Bill

The intent is to find a measure of noise above which any measurement under ordinary operation is considered to have more signal than noise. Since this measure defines the point at which signal and noise are equal under normal operation, SNR = 1.

Sincerely,

Joofa
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ondebanks

Re: 12, 14 or 16 bits?
« Reply #54 on: March 02, 2011, 05:25:36 am »

How can that be, when read noise is typically measured with the lens cap on in a dark room and with a high shutter speed? In this case there is no signal, unless you are counting the noise as signal.

Regards,

Bill

Ah, I see where the confusion arises. Yes, you can also estimate readout noise in that way (using subtracted pairs of frames, because the histogram of a single one is prone to being skewed by pattern noise). But that sort of frame has no signal, so it has no DR; it only yields one half of the DR equation, the read noise denominator. Once enough light impinges on the sensor, with the lens cap off, to push the signal up to the level of the readout noise, then we are at the onset of DR.

Ray
Logged

bjanes

Re: 12, 14 or 16 bits?
« Reply #55 on: March 02, 2011, 11:07:35 am »

Ah, I see where the confusion arises. Yes, you can also estimate readout noise in that way (using subtracted pairs of frames, because the histogram of a single one is prone to being skewed by pattern noise). But that sort of frame has no signal, so it has no DR; it only yields one half of the DR equation, the read noise denominator. Once enough light impinges on the sensor, with the lens cap off, to push the signal up to the level of the readout noise, then we are at the onset of DR.

Ray

Ray,

It is a minor point, but I do not agree. SNR and DR are related, but the engineering definition of DR is full well/read noise and read noise is measured when the signal = 0. SNR would be undefined. However, a signal:noise ratio of < 1 is readily obtained, as shown in this interactive plot on the Nikon Microscopy site. The text also has a formula for calculating how stray light affects the SNR, and this would also apply to veiling flare.
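The usual form of that stray-light relation (my paraphrase of the standard CCD noise equation, not a quote from the Nikon page) simply adds the stray light's shot noise to the denominator:

    import math

    def snr_with_stray(signal_e, stray_e, read_noise_e):
        # Shot noise from the wanted signal and from the stray light,
        # combined in quadrature with the read noise.
        return signal_e / math.sqrt(signal_e + stray_e + read_noise_e**2)

    print(snr_with_stray(100, 0, 13))     # ~6.1 with no stray light
    print(snr_with_stray(100, 400, 13))   # ~3.9 once a stray-light pedestal is added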



The DXO SNR plots stop at a log value of zero, or 1:1. However one can extend the plot below 1:1 as shown in Emil's plot for the Canon 1D3. Using the plot, one could calculate DRs with SNR below zero.



With high contrast images, an SNR of 1.05 can provide some useful data, as shown on the Hamamatsu web site.



Regards,

Bill

Logged

ondebanks

Re: 12, 14 or 16 bits?
« Reply #56 on: March 03, 2011, 05:29:58 am »

Ray,

It is a minor point, but I do not agree. SNR and DR are related, but the engineering definition of DR is full well/read noise and read noise is measured when the signal = 0. SNR would be undefined.

Bill,

That's exactly what I said. What are you not agreeing with? I get the impression that you are saying tomayto, I am saying tomahto, and we are both referring to the same red fruit!

However, a signal:noise ratio of < 1 is readily obtained, as shown in this interactive plot on the Nikon Microscopy site.

The DXO SNR plots stop at a log value of zero, or 1:1. However one can extend the plot below 1:1 as shown in Emil's plot for the Canon 1D3. Using the plot, one could calculate DRs with SNR below zero.

I'm sure that's a typo and you meant "with SNR below 1.0".
Of course it is possible; just remove a few more electrons of signal and S/N drops below 1.0. But is it useful data? Not unless you either spatially average like crazy; or else you stack n images to improve S/N by sqrt(n) and that gets you back to S/N >= 1.0.
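The sqrt(n) gain from stacking, in toy numbers (a faint signal well below the per-frame read noise):

    import math

    signal, read_noise = 0.5, 3.0     # electrons per exposure
    per_frame = signal / math.sqrt(signal + read_noise**2)

    for n in (1, 4, 16, 64):
        # Averaging n registered frames keeps the signal and shrinks the noise by sqrt(n)
        print(f"{n:3d} frames: S/N = {per_frame * math.sqrt(n):.2f}")
    # 1 frame ~0.16, 64 frames ~1.3 - back above the SNR = 1 threshold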

I have countless astro images, both research and hobby, where I've had to do this to make certain stars detectable. In fact the conventional threshold in astronomy is S/N = 3 ("three sigma"), but that is measured by evaluating over most or all of the PSF; individual pixels within the PSF will have S/N well below 3.0.

With high contrast images, an SNR of 1.05 can provide some useful data, as shown on the Hamamatsu web site.

Absolutely, because (a) an SNR of 1.05 falls within the definition of DR, and (b) in the sample image of the cells, adjacent pixels with the same source intensity reinforce each other in our perceptual system; it averages them spatially. But take any one of those brighter pixels in isolation, set it against the same background, and the eye/brain would not be able to distinguish it from noise.

It muddies the waters when we allow spatial averaging to enter discussions like this, departing from per-pixel considerations; but I realise that we can't stop our brains from doing it! It has some fascinating consequences. For example, we cannot resolve point sources which are closer together than the Rayleigh limit (or Dawes limit in special circumstances); but we can resolve tighter angles than that in extended features. People routinely perceive the major divisions in Saturn's rings in telescopes which, according to Rayleigh and Dawes, cannot resolve such structures. This is due to spatial perceptual reinforcement, of detail which at any given point is sub-threshold, but as a connected ensemble is super-threshold. 

Ray
Logged

cunim

Re: 12, 14 or 16 bits?
« Reply #57 on: March 03, 2011, 11:18:27 am »

I believe this thread started with a limited question: “How many bits of precision do I need in digital photography?”    Some of us (thanks Ray) have experience with making those judgments in applied imaging, but that may not be very relevant here.  I can look at a liquid scintillation count and relate disintegrations to photon flux to required detector characteristics.  Put a camera in my hands, though, and I’m pretty much helpless.  Actually, that’s true in a general sense but never mind.  The specific problem is that I cannot control the target characteristics so I do not know the relevant specifications for photography.  Hence my interest in topics such as this. 

I think the photographers here are struggling because they believe there must be a way to judge camera image quality using one or two simple variables.  Easy enough when comparing point and shoots to pro gear, but not easy within different models of pro gear.  To make that sort of comparison we need a psychophysics of digital photography and I am unaware of any such thing.  I guess the masters do it with craft as opposed to science.

Is this a fair summary of what posters are saying?  Our cameras are not providing anything like the 10-12 bits available from detector hardware – unless we work within a very narrow range of flux.  I think the postings here are suggesting about 8 stops with typical intra-scene flux levels and fair linearity.  Once we get beyond the linear response range the effects of degraded SNR are readily visible but poorly specified.  To minimize shadow detail problems, the DSLR crowd look for good high ISO performance while MFD users tend to look for low noise integration at base ISO or near it. My own impression is that these strategies probably reflect the different response properties of CMOS vs CCD but that is speculation.
Logged

ondebanks

Re: 12, 14 or 16 bits?
« Reply #58 on: March 03, 2011, 12:26:36 pm »

I believe this thread started with a limited question: “How many bits of precision do I need in digital photography?”    Some of us (thanks Ray) have experience with making those judgments in applied imaging, but that may not be very relevant here. 

Indeed, we swung off on many interesting tangents, and mine may not have been very relevant to Erik's original question. I think that Erik's original contention was correct, that the DxO data backed it up, and that Emil Martinec provided the best explanation as to "why" in the thread.

I think the photographers here are struggling because they believe there must be a way to judge camera image quality using one or two simple variables.  Easy enough when comparing point and shoots to pro gear, but not easy within different models of pro gear.  To make that sort of comparison we need a psychophysics of digital photography and I am unaware of any such thing.  I guess the masters do it with craft as opposed to science.

I have a suspicion that in this context, good craft works because it is based on good science, even if the craftsman or woman doesn't realise this.

Is this a fair summary of what posters are saying?  Our cameras are not providing anything like the 10-12 bits available from detector hardware – unless we work within a very narrow range of flux.  I think the postings here are suggesting about 8 stops with typical intra-scene flux levels and fair linearity.  Once we get beyond the linear response range the effects of degraded SNR are readily visible but poorly specified.  To minimize shadow detail problems, the DSLR crowd look for good high ISO performance while MFD users tend to look for low noise integration at base ISO or near it. My own impression is that these strategies probably reflect the different response properties of CMOS vs CCD but that is speculation.


Yes I think that's a fair summary. I would add one further point, which is this: because the signal is quantized as discrete electrons, then even in a noiseless sensor there is no benefit to 16 bits once the maximum electron count can be accommodated in 15 bits (32768) or fewer. As pixels shrink to 6 microns and smaller, reducing their full well depth, we are already close to this point in MFD and probably beyond it in many DSLRs. Factor in the "bit redundancy" due to real-world noise, and we are way beyond it in all such sensors.
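As a back-of-the-envelope check (rough, made-up full-well and noise figures, not any particular camera):

    import math

    def bits(full_well_e, read_noise_e):
        # Bits needed to count every electron, versus the number of steps
        # that still mean anything once read noise dithers the bottom bits.
        counting  = math.ceil(math.log2(full_well_e + 1))
        effective = math.log2(full_well_e / read_noise_e)   # ~ engineering DR in stops
        return counting, effective

    print(bits(32000, 12))   # MFD-CCD-ish: (15, ~11.4) - 16 bits buys nothing
    print(bits(25000, 3))    # modern-DSLR-ish: (15, ~13.0)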

Ray
Logged

ErikKaffehr

Re: 12, 14 or 16 bits?
« Reply #59 on: March 03, 2011, 01:55:24 pm »

Hi,

In my view it was nice to get some insight into SNR outside the scope of photography. Also, I'd suggest that anyone who has followed all the postings in this thread now has a good understanding of the significance of bit depth, so the original intention of the posting has been achieved.

Best regards
Erik

Indeed, we swung off on many interesting tangents, and mine may not have been very relevant to Erik's original question. I think that Erik's original contention was correct, that the DxO data backed it up, and that Emil Martinec provided the best explanation as to "why" in the thread.

I have a suspicion that in this context, good craft works because it is based on good science, even if the craftsman or woman doesn't realise this.

Yes I think that's a fair summary. I would add one further point, which is this: because the signal is quantized as discrete electrons, then even in a noiseless sensor there is no benefit to 16 bits once the maximum electron count can be accommodated in 15 bits (32768) or fewer. As pixels shrink to 6 microns and smaller, reducing their full well depth, we are already close to this point in MFD and probably beyond it in many DSLRs. Factor in the "bit redundancy" due to real-world noise, and we are way beyond it in all such sensors.

Ray
Logged
Erik Kaffehr
 