Luminous Landscape Forum

Equipment & Techniques => Cameras, Lenses and Shooting gear => Topic started by: ErikKaffehr on March 26, 2015, 02:03:28 pm

Title: Putting DR into perspective..
Post by: ErikKaffehr on March 26, 2015, 02:03:28 pm
Hi,

Lots of Dynamic Range (DR) oriented discussions recently. This posting is intended to put DR in perspective.

My experience is that my cameras have mostly had decent DR for my needs. One of my observations is that I very seldom need to resort to HDR to get good images. I also feel that the need for DR is often overrated.

Let's start looking at this image:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/CanoeStadium/20141109-_DSC6262_photographic1.jpg)

Now, let's look at an area of deep shadow, brightened up in Lightroom:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/CanoeStadium/20141109-_DSC6262_crop1.jpg)

The disc of the sun can be recovered in Lightroom pretty well:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/CanoeStadium/20141109-_DSC6262_crop2_small.jpg)

Below is the raw histogram of the full image, something like 10EV of dynamic range in this image:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/CanoeStadium/overview_small.jpg)

This is one of the very dark areas. Check the histogram: the pixels have a nice Gaussian distribution. Full well capacity on this sensor is around 60000 e-/pixel and readout noise perhaps 2-3 electron charges. The raw data is 14 bits wide, so each digital number (DN) corresponds to about 4 photons. The red channel is centered around 20 counts, corresponding to about 80 photons. So the shot noise should be SQRT(80) ≈ 8.9 photons, corresponding to about ±2 counts, and the histogram should be something like 2 × 2σ ≈ 9 counts wide. Well, it looks a bit wider than that, but we still see very little evidence of readout noise.
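
This back-of-the-envelope arithmetic can be checked with a few lines of Python. The 60000 e- full well, 14-bit raw data and 20-count red channel mean are the numbers quoted above; the rest is just Poisson statistics:

```python
import math

# Numbers quoted in the post (approximate)
full_well = 60000                # electrons per pixel
raw_bits = 14
gain = full_well / 2**raw_bits   # photons per digital number (DN), ~3.7

mean_dn = 20                     # red channel centered around 20 counts
mean_photons = mean_dn * 4       # ~80 photons, using the rounded 4 photons/DN

shot_noise = math.sqrt(mean_photons)   # Poisson shot noise, in photons
noise_dn = shot_noise / 4              # back to counts

print(f"gain ~ {gain:.1f} photons/DN")
print(f"shot noise ~ {shot_noise:.1f} photons ~ +/-{noise_dn:.1f} DN")
```

So a dark patch centered at 20 counts should show a shot-noise spread of roughly ±2 counts, before any readout noise is added.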

I guess that most modern cameras would be able to handle this scene pretty well.

Best regards
Erik
Title: Re: Putting DR into perspective.. (part 2)
Post by: ErikKaffehr on March 26, 2015, 03:10:15 pm
Hi,

This is another image, with wider dynamic range. First, let's look at an HDR exposure (P45+ exposures from 1s to 30s), fused in Lumariver HDR.

(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/20140617_lumariver.jpg)

The whole luminance range is impressive, perhaps 14 stops:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/LumariverOverview.jpg)

Now, let's look at a small detail of the piano:

(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/20140617_lumariver_piano_small.jpg)

And also check a small area on the piano cover in RawDigger. The peaks are nice Gaussians.
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/LumaRiverDetalj.jpg)

The image below is from a 2.5 s exposure on the Sony Alpha 99:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/20140617-_DSC4758_piano.jpg)

The histograms on the tiny part of the piano cover still look good, although every second raw level is empty (due to Sony's "lossless" compression?)
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/SLT99Detail.jpg)

Here is the same part of the piano on the P45+, note 1s exposure compared to 2.5s on the SLT99:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/20140617-CF045290_piano.jpg)
The P45+ had about 1 stop less exposure, and here the piano cover got noisy; I guess we can see the effects of readout noise:
(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/ScreenShot1.jpg)

Raw images are here:

http://echophoto.dnsalias.net/ekr/Articles/DRArticle/NativeRaws/CF045286.IIQ
http://echophoto.dnsalias.net/ekr/Articles/DRArticle/NativeRaws/20140617_lumariver.dng
http://echophoto.dnsalias.net/ekr/Articles/DRArticle/NativeRaws/_DSC4758.ARW

Just a comment: these images were from a "real world" shoot. The P45+ was exposed at 1s while the SLT99 had a 2.5s exposure; both exposures were based on the camera histogram. The idea is not to demonstrate the difference between the two camera/sensor combinations. Lab conditions are far more appropriate for that kind of comparison.

Also, I have been told that my P45+ is a decent sample.
Best regards
Erik
Title: Re: Putting DR into perspective..
Post by: BernardLanguillier on March 26, 2015, 05:15:12 pm
Nice building!

Cheers,
Bernard
Title: Re: Putting DR into perspective..
Post by: NancyP on March 26, 2015, 05:20:07 pm
That's pretty interesting. RawDigger seems to be a learning tool; I will have to look into it. I suppose a lot of the DR talk is centered around the sorts of images where one can't get a bracket set without some movement: backlit runners or birds, waving grain and deep shadow, etc.

Do you ever get posterization with the Sony lossless compression data?
Title: Re: Putting DR into perspective..
Post by: NancyP on March 26, 2015, 05:38:23 pm
Consulting Dr. Google, Dr. Google in the house?  ::)

The answer to my posterization question is yes, says the RawDigger blog:
http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection
Title: Re: Putting DR into perspective..
Post by: ErikKaffehr on March 26, 2015, 05:48:13 pm
Hi,

Posterisation? I would say not.

Now, Sony employs two kinds of compression. The first is in essence similar to a gamma curve: at high data numbers the steps are larger. That is basically sound. It may be that they have overdone it, but I don't think so, and I have never seen posterisation in my images.

In addition they have a kind of "delta coding" that can induce artefacts. I can't say I have clearly observed it in my own images. I have seen some artefacts, in the very same image posted in this thread, but I think they go in the wrong direction.

This image, dug up by Diglloyd, is the best illustration of the artefacts I have seen:
(http://www.rawdigger.com/sites/www.rawdigger.com/files/Posterization/image01.png)

And here is a long article from the "rawdigger site": http://www.rawdigger.com/howtouse/sony-craw-arw2-posterization-detection

My take on the issue is that they apply the tone curve so they can push more than 12 bits of data through the "Bionz" processor, which may be just 12 bits wide. I would think it is absolutely OK.

The delta compression can clearly yield artefacts. On the Sony Alpha 99, which I mostly use, there is RAW and short RAW. I always use the larger file format, so it may be that I don't get the "delta" compression; I don't know.

Jim Kasson, a real scientist, has done a lot of research on this issue, perhaps a year ago. It was presented on his blog: http://blog.kasson.com/?p=4838

Best regards
Erik

Ps. I see that you googled the issue while I was posting my response... :-)



Title: Re: Putting DR into perspective..
Post by: MarkL on March 26, 2015, 07:01:43 pm
Maybe 10 stops is enough for most scenes, but that means absolutely nailing the exposure, with zero latitude either way, and the shadows will be right near the noise floor. Often the raw file has some detail, but torturing it out in software often leads to the tonal and colour information falling apart (though this has slowly improved over the years).

You'd have to drag me back from my D800E to my D700's DR kicking and screaming.
Title: Re: Putting DR into perspective..
Post by: dwswager on March 26, 2015, 08:48:47 pm
Hi,

Lots of Dynamic Range (DR) oriented discussions recently. This posting is intended to put DR in perspective.

My experience is that my cameras have mostly had decent DR for my needs. One of my observations is that I very seldom need to resort to HDR to get good images. I also feel that the need for DR is often overrated.

Best regards
Erik

While I wholeheartedly agree that most shooting circumstances do not require extreme DR to execute the shot properly, if I have to choose between two $3000 cameras I'll take the one with more DR and less shadow noise.  While most situations don't call for extended DR, some do, and all situations may benefit from the extra DR if proper exposure is not selected.

DR is just another performance characteristic to be evaluated in the selection process along with other measures of performance and functionality.
Title: Re: Putting DR into perspective..
Post by: Jack Hogan on March 27, 2015, 03:56:54 am
Good examples, Erik.
Title: Re: Putting DR into perspective..
Post by: spidermike on March 27, 2015, 05:19:21 am
While I wholeheartedly agree that most shooting circumstances do not require extreme DR to execute the shot properly, if I have to choose between two $3000 cameras I'll take the one with more DR and less shadow noise.  While most situations don't call for extended DR, some do, and all situations may benefit from the extra DR if proper exposure is not selected.

DR is just another performance characteristic to be evaluated in the selection process along with other measures of performance and functionality.

I think Erik's comments point to the fact that nowadays the need to spend $3,000 in the first place is greatly reduced.
Title: Re: Putting DR into perspective..
Post by: Paulo Bizarro on March 27, 2015, 05:32:19 am
Good examples, thanks for posting them.

It is always better to have "larger DR", to cover for any "if" situations that might arise. But for the majority of photographers, "normal DR" is more than enough.
Title: Re: Putting DR into perspective..
Post by: Bart_van_der_Wolf on March 27, 2015, 05:40:35 am
It is always better to have "larger DR", to cover for any "if" situations that might arise. But for the majority of photographers, "normal DR" is more than enough.

Hi,

Yes, that sums it up nicely.

The use of proper technique can help in that process, and sometimes it can be quite simple to handle a problematic situation. Then there can be tools to assist us, and some do a better job than others.

Cheers,
Bart
Title: Re: Putting DR into perspective..
Post by: Jimbo57 on March 27, 2015, 05:58:17 am
This discussion is really just one (of many) examples of how the Law of Diminishing Returns applies to camera features.

I normally try to demonstrate this to my advanced students (who are often toying with the idea of moving up the camera ladder towards more "professional" models) by showing how each doubling of camera price leads to geometrically reduced increases in features and capability.

My usual example goes along the lines of:

£250 will buy an entry-level DSLR and kit lens.

That will do 95% of what any enthusiast-level photographer is likely to want to do.

Spend £1000 and you will get a camera that will do 97%

Spend £2000 and you will get a camera that will do 98%

Spend £5000 and you will get a camera that will do 98.5%

.....and so on.

But the caveat that I have to place upon that is that each increment of price/capability takes the photographer farther into the extremes of performance.

And so it is with dynamic range of sensors. The difference between a 12 EV DR sensor and a 14 EV DR sensor might only make a significant difference to the image quality in, say, 5% of the shots that the average enthusiast photographer will take. An increase from 14 to 15 might only add 1% to that.

Exactly the same consideration applies to other advances in camera technology such as AF speed, High-ISO performance, etc.

The good news is that, progressively with each new "generation", the performance of entry-level cameras improves along all of those dimensions, so that the whole equation shifts laterally.

Few photographers "need" to spend silly money keeping at the forefront of those advances in performance - but, of course, most of us do!
Title: Re: Putting DR into perspective..
Post by: shadowblade on March 27, 2015, 09:05:43 am
Hi,

Yes, that sums it up nicely.

The use of proper technique can help in that process, and sometimes it can be quite simple to handle a problematic situation. Then there can be tools to assist us, and some do a better job than others.

Cheers,
Bart

Technique only helps you if the scene fits within the technical limits of the camera (be it DR, ISO or resolution). If a scene has 10 stops of DR and you can't capture it with a 12-stop camera, then it's just poor technique. If the scene has 13 stops, it's not poor technique - just that the scene is beyond the technical capabilities of the camera.

Once the scene you are trying to capture falls outside those limits, no amount of technique will help you.
Title: Re: Putting DR into perspective..
Post by: Bart_van_der_Wolf on March 27, 2015, 10:29:45 am
Technique only helps you if the scene fits within the technical limits of the camera (be it DR, ISO or resolution). If a scene has 10 stops of DR and you can't capture it with a 12-stop camera, then it's just poor technique. If the scene has 13 stops, it's not poor technique - just that the scene is beyond the technical capabilities of the camera.

Hi,

Just to mention one item that most people rarely consider testing: how well is your lens hood dimensioned?

Many people shoot with zoom lenses (image quality can be excellent), which may already be sensitive to veiling glare due to their many lens groups/elements. The lens hood that comes with such a lens is a compromise, since it also has to accommodate the wider-angle short focal length settings. Not all barrel designs change the depth of the front lens element with focal length. Without proper shading from non-image-forming light, you're lucky if you get 9 stops of dynamic range out of the lens into the camera ...

Quote
Once the scene you are trying to capture falls outside those limits, no amount of technique will help you.

My camera can automatically shoot 2-bracketed exposures (or 3, or 5, or 7). In the 2-bracket mode it's simple to shoot one ETTR shot exposed for the highlights and a much longer (e.g. 8x) shot for the shadows (which adds almost 3 stops of DR). Two shots are simple to blend together; it's often possible even hand-held.

One can also shoot multiple frames with the same exposure and use (median) averaging to reduce the noise. Of course, multiple exposures are not the most usable technique for moving subjects, but even that's not the end of our possibilities.

There are lots of techniques possible, both at shooting time and in post-production. As DxO has shown, a lot is possible when it comes to high-ISO noise reduction, and new techniques are being developed all the time. We can also use dark frame subtraction to get rid of some of the pattern noise in the shadows. Some post-processing techniques even trickle down to photo-editing software for the masses.
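
As an illustrative sketch of the frame-averaging idea mentioned above (simulated numbers, not any particular camera or workflow):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 8 identical exposures of a flat patch: Poisson shot noise
# around 400 e- of signal, plus Gaussian read noise (sigma = 3 e-).
n, h, w = 8, 256, 256
frames = rng.poisson(400, (n, h, w)) + rng.normal(0, 3, (n, h, w))

single = frames[0]
mean_stack = frames.mean(axis=0)          # noise drops by ~sqrt(8)
median_stack = np.median(frames, axis=0)  # robust to outliers (hot pixels etc.)

print(round(single.std(), 1))        # ~20 (sqrt of 400 + 3^2 = 409)
print(round(mean_stack.std(), 1))    # ~7, reduced by sqrt(8)
print(round(median_stack.std(), 1))  # a bit higher than the mean stack
```

The median stack trades a little extra noise for robustness against outliers, which is why it is popular for rejecting transient detail in stacks.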

Cheers,
Bart
Title: Re: Putting DR into perspective..
Post by: shadowblade on March 27, 2015, 11:09:20 am
Hi,

Just to mention one item that most people rarely consider testing: how well is your lens hood dimensioned?

Many people shoot with zoom lenses (image quality can be excellent), which may already be sensitive to veiling glare due to their many lens groups/elements. The lens hood that comes with such a lens is a compromise, since it also has to accommodate the wider-angle short focal length settings. Not all barrel designs change the depth of the front lens element with focal length. Without proper shading from non-image-forming light, you're lucky if you get 9 stops of dynamic range out of the lens into the camera ...

I wish that were the case! If glare were actually applied evenly, essentially adding a fixed amount of light to every point in the image (which could equal four or five stops in the shadows, but a fraction of a stop in the highlights), it would essentially work as a giant fill flash, reducing the dynamic range of the scene and making it easier to capture. Unfortunately it is not, and it doesn't really reduce the DR across the whole frame, merely where the lens flare is. Fortunately, it's usually easy to completely shield it with a well-placed hand forward of the lens but outside the field of view.

Quote
My camera can automatically shoot 2-bracketed exposures (or 3, or 5, or 7). In the 2-bracket mode it's simple to shoot one ETTR shot exposed for the highlights and a much longer (e.g. 8x) shot for the shadows (which adds almost 3 stops of DR). Two shots are simple to blend together; it's often possible even hand-held.

Doesn't work when things are moving. In landscape photography, wind is the usual culprit.

Quote
One can also shoot multiple frames with the same exposure and use (median) averaging to reduce the noise. Of course, multiple exposures are not the most usable technique for moving subjects, but even that's not the end of our possibilities.

I often do that. Functionally, it's the same as halving the ISO - you're collecting twice as many photons by exposing for twice as long, so each photon counts for half as much. It certainly minimises photon shot noise. I'm not sure that it actually increases DR, though, since the read noise is also counted twice.
Title: Re: Putting DR into perspective..
Post by: NancyP on March 27, 2015, 11:39:42 am
Most people who use multiples specifically to reduce shot noise also take multiple dark frames to be able to subtract the read noise. It is only worthwhile if you are dealing with very low numbers of photons in the first place - astrophotography. Shot noise reduction is proportional to the square root of the number of multiples taken. Major PITA. You need it to image faint objects, but I can't imagine any non-astro / non-scientific situation where you would go to that degree of trouble in shooting and processing.
Title: Re: Putting DR into perspective..
Post by: shadowblade on March 27, 2015, 11:42:23 am
Most people who use multiples specifically to reduce shot noise also take multiple dark frames to be able to subtract the read noise. It is only worthwhile if you are dealing with very low numbers of photons in the first place - astrophotography. Shot noise reduction is proportional to the square root of the number of multiples taken. Major PITA. You need it to image faint objects, but I can't imagine any non-astro / non-scientific situation where you would go to that degree of trouble in shooting and processing.

Dark frames only work for removing fixed read noise (including fixed pattern noise), not random read noise.
Title: Re: Putting DR into perspective..
Post by: Bart_van_der_Wolf on March 27, 2015, 11:45:14 am
I wish that were the case! If glare were actually applied evenly, essentially adding a fixed amount of light to every point in the image (which could equal four or five stops in the shadows, but a fraction of a stop in the highlights), it would essentially work as a giant fill flash, reducing the dynamic range of the scene and making it easier to capture. Unfortunately it is not, and it doesn't really reduce the DR across the whole frame, merely where the lens flare is.

Veiling glare contributes/adds mostly to the shadows, where signal levels are low. Since the glare is a product of intra- and inter-lens-element/group reflections (aggravated by dust and atmospheric deposits), it is not confined to the regions where the light is (besides, the lens receives all scene light everywhere on its surface before it is finally focused on the sensor).

Quote
Fortunately, it's usually easy to completely shield it with a well-placed hand forward of the lens but outside the field of view.

Some of it, yes, but it would take unwieldy, deep, petal-shaped lens hoods to really do a good job. Hence the on-average mediocre shielding people use, if the issue is even given proper attention to begin with. I use a different lens hood on my TS-E 24mm II when not using it shifted, or only shifted a little: the EW-88C, to which I added flocking material, does a better job even though it was designed for a different lens. I use a separate (Lee) bellows hood if I want something deeper, and have a petal-shaped design ready for 3D printing if that makes enough of an additional difference.

Quote
Doesn't work when things are moving. In landscape photography, wind is the usual culprit.

On the contrary, it works fine in most cases. It's often not the horizon line or other moving features that are contrasted with the brightest parts of the image. Most of the info is in a single shadow exposure shot, and only parts are in the ETTR highlight shot.

Quote
I often do that. Functionally, it's the same as halving the ISO - you're collecting twice as many photons by exposing for twice as long, so each photon counts for half as much. It certainly minimises photon shot noise. I'm not sure that it actually increases DR, though, since the read noise is also counted twice.

Yes, photon shot noise gets reduced, but averaging also averages the read noise. It does this so well that pattern noise becomes more visible. That's where improved sensors (and/or black frame subtraction) will shine, namely by the absence of pattern noise. The patterns become more noticeable because we humans are good at pattern recognition; we see details even where there are none (like shapes in clouds, or faces in moon rocks).

Cheers,
Bart
Title: Re: Putting DR into perspective..
Post by: Bart_van_der_Wolf on March 27, 2015, 11:54:59 am
Most people who use multiples specifically to reduce shot noise also take multiple dark frames to be able to subtract the read noise. It is only worthwhile if you are dealing with very low numbers of photons in the first place - astrophotography. Shot noise reduction is proportional to the square root of the number of multiples taken. Major PITA. You need it to image faint objects, but I can't imagine any non-astro / non-scientific situation where you would go to that degree of trouble in shooting and processing.

Hi,

A raw converter like RawTherapee makes it easy. Just point it to a sub-directory with a number of dark frames and it will select and average them if multiples are present, and subtract their average from the raw lights before demosaicing. Its implementation is not as sophisticated as those in dedicated astrophotography applications, but then the average photographer has relatively many more photons available, although exposure times are much shorter.
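
The master-dark idea can be sketched in NumPy. This is the general technique with toy arrays, not RawTherapee's actual code; the gradient "amp glow" pattern and noise levels are invented for illustration:

```python
import numpy as np

def subtract_master_dark(lights, darks):
    """Average the dark frames into a master dark, then subtract it from
    each light frame to remove the fixed component of the dark signal."""
    master_dark = np.mean(darks, axis=0)  # averaging suppresses the random part
    return [light - master_dark for light in lights]

# Toy data: a fixed gradient pattern (e.g. amp glow) plus random read noise
rng = np.random.default_rng(0)
pattern = np.tile(np.linspace(0, 10, 64), (64, 1))
darks = [pattern + rng.normal(0, 2, (64, 64)) for _ in range(16)]
lights = [100 + pattern + rng.normal(0, 2, (64, 64)) for _ in range(4)]

corrected = subtract_master_dark(lights, darks)
print(round(float(np.mean(corrected[0]))))  # 100: pattern removed, signal kept
```

Note that subtraction only removes the fixed pattern; the random part of the read noise in each light frame stays, which is shadowblade's point below.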

Cheers,
Bart
Title: Re: Putting DR into perspective..
Post by: shadowblade on March 27, 2015, 12:09:17 pm
Veiling glare contributes/adds mostly to the shadows, where signal levels are low. Since the glare is a product of intra- and inter-lens-element/group reflections (aggravated by dust and atmospheric deposits), it is not confined to the regions where the light is (besides, the lens receives all scene light everywhere on its surface before it is finally focused on the sensor).

If it were even, it wouldn't be a problem. If it added 200 photons to every photosite, it would greatly reduce the dynamic range of the scene and make it much easier to capture, and you could get it all back by adjusting levels/contrast in postprocessing.

Unfortunately it's not even across the frame and usually manifests itself as specific areas of lens flare. Fortunately, that can usually be blocked out with a hand.

Quote
On the contrary, it works fine in most cases. It's often not the horizon line or other moving features that are contrasted with the brightest parts of the image. Most of the info is in a single shadow exposure shot, and only parts are in the ETTR highlight shot.

The usual culprit is branches/leaves on a tree sticking up above the horizon into the sky. When the camera is down low, long grass can also do it.

Quote
Yes, photon shot noise gets reduced, but averaging also averages the read noise. It does this so well that pattern noise becomes more visible. That's where improved sensors (and/or black frame subtraction) will shine, namely by the absence of pattern noise. The patterns become more noticeable because we humans are good at pattern recognition; we see details even where there are none (like shapes in clouds, or faces in moon rocks).

It averages the read noise and makes it smoother, but does it actually improve the DR?

For argument's sake, let's say the noise floor is 4 and the full well capacity is 16000. That's a 1:4000 contrast ratio. If you take 2 shots and add them together, you have a total of 8 noise and 32000 maximum signal. That's still a 1:4000 ratio. Averaging it out will mean that the noise is much smoother and less noticeable (there will be more points close to 8 noise in the combined file than there are points close to 4 noise in the single file), but it's still the same min-to-max ratio and, thus, the same DR.
Title: Re: Putting DR into perspective..
Post by: Bart_van_der_Wolf on March 27, 2015, 01:16:31 pm
Unfortunately it's not even across the frame and usually manifests itself as specific areas of lens flare. Fortunately, that can usually be blocked out with a hand.

Just to make sure: there is a difference between flare (often colorful, local, reflected hotspots) and veiling glare. It's not just semantics. The veil is omnipresent; it is not as strong everywhere, but some of it is everywhere. The same happens as our eyes age and we develop some level of glaucoma. Where the image is diffused it is not fully formed yet, so the veil acts as a contrast reduction (worse where the lens is directly illuminated by a bright light source).

Quote
The usual culprit is branches/leaves on a tree sticking up above the horizon into the sky. When the camera is down low, long grass can also do it.

Really, the method is much more robust than you give it credit for. Hans Kruse has also discovered that method and posted results in a number of threads. It is usually only small patches of the lightest areas that need to be blended in, and they only rarely coincide with moving detail. It can happen, but it's rarer than you suggest; the exception rather than the rule.

Quote
It averages the read noise and makes it smoother, but does it actually improve the DR?

For argument's sake, let's say the noise floor is 4 and the full well capacity is 16000. That's a 1:4000 contrast ratio. If you take 2 shots and add them together, you have a total of 8 noise and 32000 maximum signal. That's still a 1:4000 ratio.

That's (fortunately) not how it works. DR is defined as the number of photons at the saturation point, divided by the noise level at a low (or even zero) exposure level, i.e. just the read noise. What may seem like a full well capacity of 16000 actually took 4x as many photons if we shoot at base ISO (after all, we want to avoid noise; we're not shooting action). Canon cameras can benefit from relatively lower read noise by boosting the ISO a bit, but for the lowest noise they too should use base ISO if shutter speed is not an issue.

So that's 64000 photons for each shot we want to average, which stays 64000 on average. The read noise of e.g. 8 (no photons, just the standard deviation of the noise) is reduced as we average more and more shots. Two shots have 1/SQRT(2) of the noise, so 8/SQRT(2) = 5.66; 8 shots would have 8/SQRT(8) = 2.83. So that would be log(64000/2.83)/log(2) = 14.5 stops of DR, if we want to go through the trouble of averaging instead of blending (the best parts of) images.
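
This arithmetic in code form, using the same assumed numbers (64000 e- saturation, 8 e- read noise per frame):

```python
import math

saturation = 64000   # photoelectrons at clipping, base ISO (example above)
read_noise = 8       # e-, standard deviation for a single frame

def dr_stops(n_frames):
    # Averaging n frames divides the read noise by sqrt(n),
    # while the saturation level stays the same on average.
    return math.log2(saturation / (read_noise / math.sqrt(n_frames)))

print(round(dr_stops(1), 1))  # 13.0
print(round(dr_stops(2), 1))  # 13.5
print(round(dr_stops(8), 1))  # 14.5
```

Each quadrupling of the frame count buys one extra stop of engineering DR, which is why averaging has steeply diminishing returns.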

Cheers,
Bart
Title: Re: Putting DR into perspective..
Post by: shadowblade on March 27, 2015, 02:07:10 pm
Just to make sure: there is a difference between flare (often colorful, local, reflected hotspots) and veiling glare. It's not just semantics. The veil is omnipresent; it is not as strong everywhere, but some of it is everywhere. The same happens as our eyes age and we develop some level of glaucoma. Where the image is diffused it is not fully formed yet, so the veil acts as a contrast reduction (worse where the lens is directly illuminated by a bright light source).

From a mathematical point of view, evenly distributed, sensor-wide glare isn't a problem and can even help you deal with limited dynamic range, so long as the sampling (i.e. bit depth) is great enough that you don't run into problems with posterisation.

Consider this, for argument's sake. You have a scene that, at 1s exposure, gives you 16384 photons in its brightest pixel and 2 photons in its darkest, for a scene DR of 13 stops. Your sensor has a full well capacity of 18000 photons and a noise floor of 8 photons, for a sensor DR of 11-and-a-bit stops. Naturally, you can't capture the entire scene in one shot.

Let's say that you have glare that adds 200 photons to each photosite. Your brightest pixel now receives 16584 photons and your darkest one 202 photons. The dynamic range of the scene, as seen by the sensor, is now around 6.5 stops: easily capturable by the sensor. Since your sensor has 14-bit output, the output is now distributed over around 16336 luminosity levels instead of 16384, hardly a significant decrease in levels and unlikely to cause posterisation. This is because the brightest stop contains half the luminosity levels, the next brightest half of the remainder, and so on. The top six stops, therefore, contain 98.44% of the total levels available; the rest of the levels, the shadows, are all crammed into the remaining 1.56%.

Of course, glare isn't completely even across the frame, which is the problem. But the hypothetical, perfectly even glare considered here wouldn't actually be a problem.
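
Both halves of this argument are easy to check numerically, using the hypothetical photon counts above:

```python
import math

# Hypothetical scene from the post: glare adds 200 photons everywhere
bright, dark, glare = 16384, 2, 200
print(round(math.log2(bright / dark), 1))                      # 13.0 stops, no glare
print(round(math.log2((bright + glare) / (dark + glare)), 1))  # 6.4 stops with glare

# Levels per stop in a 14-bit raw file: each stop holds half
# of the levels remaining above it.
levels = 2**14
top_six = sum(levels / 2**(k + 1) for k in range(6))
print(top_six / levels)      # 0.984375: top six stops hold ~98.44% of the levels
print(1 - top_six / levels)  # 0.015625: ~1.56% left for all the darker stops
```

So perfectly uniform glare would compress a 13-stop scene into well under 7 stops, at the cost of squeezing the shadows into a small fraction of the raw levels.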

Also, I think you mean cataracts rather than glaucoma.

Quote
Really, the method is much more robust than you give it credit for. Hans Kruse has also discovered that method and posted results in a number of threads. It is usually only small patches of the lightest areas that need to be blended in, and they only rarely coincide with moving detail. It can happen, but it's rarer than you suggest; the exception rather than the rule.

It certainly doesn't happen in every frame. But when it does happen (which, while not in the majority of shots, is certainly common enough to cause problems), it's one of the most annoying things to deal with.

Quote
That's (fortunately) not how it works. DR is defined as the number of photons at the saturation point, divided by the noise level at a low (or even zero) exposure level, i.e. just the read noise. What may seem like a full well capacity of 16000 actually took 4x as many photons if we shoot at base ISO (after all, we want to avoid noise; we're not shooting action). Canon cameras can benefit from relatively lower read noise by boosting the ISO a bit, but for the lowest noise they too should use base ISO if shutter speed is not an issue.

So that's 64000 photons for each shot we want to average, which stays 64000 on average. The read noise of e.g. 8 (no photons, just the standard deviation of the noise) is reduced as we average more and more shots. Two shots have 1/SQRT(2) of the noise, so 8/SQRT(2) = 5.66; 8 shots would have 8/SQRT(8) = 2.83. So that would be log(64000/2.83)/log(2) = 14.5 stops of DR, if we want to go through the trouble of averaging instead of blending (the best parts of) images.

Let's say one shot has a maximum of 16384 photons per photosite, with an average of 8 photons added by electronic noise, with a distribution of 8 (i.e. the equivalent of 0-16 photons added per pixel). This puts the saturation point (16384) 11 stops above the noise floor (8). Let's just say that the distribution of noise is equal within that range; that is, the same number of pixels receive 1 'photon' of read noise as receive 6, 8 or 16 (in reality it would approximate a normal distribution curve, but that would just complicate the mathematics, and this will serve just as well for the argument).

Now, let's say you averaged out 4 frames. You now have a maximum of 65536 photons per photosite. But you've also added an average of 32 photons of noise per photosite, with a distribution of 32 (although the actual distribution curve would be much tighter: there would be far more pixels close to 32 noise in the combined image than there would be pixels close to 8 noise in the single image, and you'd have a bell-shaped curve rather than the equal distribution of the single frame; in an actual situation, where the distribution of read noise in the single frame is also a bell curve, you'd have a much tighter bell). Your ceiling is still only 11 stops above the average noise floor.

Of course, this all changes if you set the black point at the average noise floor, i.e.  produce the image based on 'white' being full well capacity, and 'black' being the noise floor. This would mean subtracting 8 from each image, or 32 from the four combined images. In other words, your scale would go from 0 to 16376 for a single image (with noise present from 0-8, with 50% of pixels receiving 0 and the rest evenly distributed between 1-8), or 0 to 65528 in the combined image (with noise present from 0-32, with 50% of pixels receiving 0 and the vast majority receiving just 1-8, with occasional pixels receiving more, due to the tighter bell curve). Therefore, the saturation point in the single image would be around 11 stops above the noise floor, while the saturation point in the combined image would be almost 13 stops above the noise floor, due to the tighter bell curve.

OK, I just shot that part of my own argument. But I was merely speculating whether there would actually be an improvement in DR - hadn't actually done the calculations to prove or disprove it, until forced to! Looks like it comes down to the fact that the 'zero' point is set at the average noise floor rather than an absolute 'zero' signal - when done that way, there is indeed an improvement in DR.
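For comparison, under the more common model - where the per-frame read noise is an independent Gaussian sigma that adds in quadrature when frames are summed - the improvement can be computed directly. A minimal Python sketch, using the hypothetical 16384/8 numbers from the example above:

```python
import math

# Hypothetical numbers from the example above: 16384 e- saturation,
# read noise with standard deviation 8 e- per frame.
SAT = 16384.0
SIGMA = 8.0

def dr_after_combining(n_frames: int) -> float:
    """DR in stops when n_frames are summed: the signal ceiling scales
    with n, while uncorrelated read noise grows only as sqrt(n)."""
    return math.log2((SAT * n_frames) / (SIGMA * math.sqrt(n_frames)))

print(dr_after_combining(1))  # 11.0 stops for a single frame
print(dr_after_combining(4))  # 12.0 stops for four frames summed
```

Under that assumption the gain works out to half a stop per doubling of the frame count, i.e. one extra stop for four frames.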
Title: Re: Putting DR into perspective..
Post by: Jack Hogan on March 27, 2015, 07:06:08 pm
Consider this, for argument's sake. You have a scene that, at 1s exposure, gives you 16384 photons in its brightest pixel, and 2 photons in its darkest, for a scene DR of 13 stops. Your sensor has a full well capacity of 18000 photons and a noise floor of 8 photons, for a sensor DR of 11-and-a-bit stops. Naturally, you can't capture the entire scene in one shot.

Not that it matters to the substance of the argument, but in the last couple of posts it would be more physically correct to talk about photoelectrons rather than photons.  If we want to talk about photons we need to take into consideration effective QE (http://www.strollswithmydog.com/effective-quantum-efficiency-of-sensor/), which these days tends to be around 15-30%.

Jack
Title: Re: Putting DR into perspective..
Post by: Iliah on March 27, 2015, 07:49:20 pm
The flare and glare tend to be non-uniform, and even with uniform field the response linearity is sacrificed and thus white balance becomes problematic. But yes, adding flare artificially can be used as one of last resorts. I use such filters in front of the lens (similar to Tiffen Ultra Contrast, but custom-made in Germany).
Title: Re: Putting DR into perspective..
Post by: Phil Indeblanc on March 27, 2015, 08:09:56 pm
Photons shmotons...where is that building?!

(nice Erik)
Title: Re: Putting DR into perspective..
Post by: ErikKaffehr on March 27, 2015, 10:54:29 pm
:-)

47°24'8" N 16°25'30" E

:-) Erik :-)

Photons shmotons...where is that building?!

(nice Erik)
Title: Re: Putting DR into perspective..
Post by: Iliah on March 27, 2015, 11:12:21 pm
I was expecting something like http://www.amazon.com/Vintage-Poster-Aufnahmeblatt-Unterrabnitz-Lockenhaus/dp/B00DHLUT8W - not Google ;)
Title: Re: Putting DR into perspective..
Post by: shadowblade on March 28, 2015, 10:42:41 am
Not that it matters to the substance of the argument, but in the last couple of posts it would be more physically correct to talk about photoelectrons rather than photons.  If we want to talk about photons we need to take into consideration effective QE (http://www.strollswithmydog.com/effective-quantum-efficiency-of-sensor/), which these days tends to be around 15-30%.

Jack

That's the term I was looking for - had a mind blank while I was writing it.

A recent development in solar cell technology, which allows one photon to produce two photoelectrons instead of just one, has the potential to increase this to 60%, making for better high-ISO performance.

Personally, I'd be more interested in increased well capacity, though.
Title: Re: Putting DR into perspective..
Post by: Bart_van_der_Wolf on March 28, 2015, 11:06:54 am
Not that it matters to the substance of the argument, but in the last couple of posts it would be more physically correct to talk about photoelectrons rather than photons.

Yes, that would be more accurate, however ...

Quote
If we want to talk about photons we need to take into consideration effective QE (http://www.strollswithmydog.com/effective-quantum-efficiency-of-sensor/), which these days tends to be around 15-30%.

However, only photons that get converted into electrons are relevant to DR discussions. QE as such has no bearing on DR performance; it just gives an idea of how exposure times may differ in order to collect a given average number of photons for conversion.

Cheers,
Bart
Title: Re: Putting DR into perspective..
Post by: Jack Hogan on March 28, 2015, 05:55:27 pm
Yes, that would be more accurate, however ...

However, only photons that got converted into electrons are relevant for DR discussions. The QE has no bearing on DR performance as such, it just gives an idea about how the exposure times may differ to collect an average number of photons for conversion.

Right.  When camera DR is involved what counts is the ratio of the maximum to the minimum recordable signal as defined, both expressed in units of either output-referred raw values or input-referred physical units.  Physical units are photoelectrons, and this is what my comment was aimed at.

If one wants to relate DR to a specific Exposure, one needs effective Quantum Efficiency to be able to reach back into photons and photometric units.  For instance, what's the captured DR when we want the maximum signal to be 1 lux-second?  Unless your sensor saturates at 1 lx-s or higher it will be less than expected.  A bit more on this here (http://www.strollswithmydog.com/comparing-sensor-snr/).
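To make the point concrete, here is a toy Python sketch. The 60000 e- full well, 3 e- read noise and 20% effective QE are assumed numbers for illustration, not measurements of any particular sensor: DR is computed from photoelectrons, while QE only determines how many incident photons are needed to reach a given signal.

```python
import math

# Assumed toy numbers, for illustration only.
FULL_WELL_E = 60000.0   # saturation, in photoelectrons
READ_NOISE_E = 3.0      # read noise, in photoelectrons
EFFECTIVE_QE = 0.20     # effective quantum efficiency (~15-30% is typical)

# DR is defined on the recorded signal, i.e. in photoelectrons:
dr_stops = math.log2(FULL_WELL_E / READ_NOISE_E)
print(round(dr_stops, 2))  # 14.29

# QE only affects exposure: how many incident photons it takes
# to generate a given number of photoelectrons.
photons_to_saturate = FULL_WELL_E / EFFECTIVE_QE
print(int(photons_to_saturate))  # 300000
```

Halving the QE in this sketch doubles the photon count (and hence the exposure) needed to saturate, but leaves the 14.29-stop DR figure untouched.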

Jack
Title: Re: Putting DR into perspective..
Post by: Ajoy Roy on March 31, 2015, 09:43:09 am
Higher DR helps a lot if you cannot nail the exposure perfectly (or are a bit lazy like me). In my case, with a Nikon D3300 which has a DR of about 12 stops, I can recover 1EV of highlights and at least 3EV of shadows before noise rears its head. For most of my daylight shots a DR of 12 stops is sufficient to recover all but the deepest shadows, and a bit of blown highlights. This is in contrast with older DSLRs, where the DR was much lower and shadow noise was worse, and you had to be more careful with exposure.

With full-frame DR approaching 15 stops, things become even easier. You can expose for the setting sun and recover the rest of the scene perfectly.
Title: Re: Putting DR into perspective.. (part 2)
Post by: dwswager on March 31, 2015, 12:44:57 pm
Hi,

This is another image, with wider dynamic range, first let's look at an HDR exposure (P45+ exposures from 1s to 30s), fused in Lumariver HDR.

(http://echophoto.dnsalias.net/ekr/Articles/DRArticle/Lockenhous/20140617_lumariver.jpg)

The whole luminance range is impressive, perhaps 14 stops

Best regards
Erik

What this whole post shows, though, is that it is not only the DR that matters, but the ability to recover highlights and shadows, and how well.  Unfortunately, the cameras I have seen that have limited DR also present the shadow recovery problem.  It is amazing to me, as an amateur with limited opportunities for photography, how hard it can be to tell when the DR is needed and when not.  Some situations are obvious; others, again, are not.

All I know is that I got up at 5:30am to take sunrise photos while with my kid at the coast for a softball tournament.  A cloud wall blocked the sun and there was no sun and no color at all, just a hazy grey.  So I spent my time doing some long exposures.  I was using 10 stops of ND and the exposure times were around 14 minutes.  I forgot to take into account the rapidly increasing light level.  At about 11 minutes it dawned on me and I closed the shutter.  Ended up with an image exposed to the right, but the D810 hadn't reached saturation yet!  Whew!
Title: Re: Putting DR into perspective..
Post by: barryfitzgerald on April 01, 2015, 06:20:57 pm
I don't disagree with the OP; DR has improved massively over the last decade. On the other hand, if I were shooting Canon I'd be pretty unhappy