Just to make sure: there is a difference between flare (often colourful, localised reflected hotspots) and veiling glare. It's not just semantics. The veil is omnipresent - not equally strong everywhere, but some of it is everywhere. The same happens as our eyes age and we develop some level of glaucoma. Where the light is diffused, the image is not fully formed yet, so it acts as contrast reduction (worse where the lens is directly illuminated by a bright light source).

From a mathematical point of view, uniform, sensor-wide glare isn't a problem and can even help you deal with limited dynamic range, so long as the sampling (i.e. bit depth) is fine enough that you don't run into problems with posterisation.

Consider this, for argument's sake. You have a scene that, at a 1 s exposure, gives you 16384 photons in its brightest pixel and 2 photons in its darkest, for a scene DR of 13 stops. Your sensor has a full well capacity of 18000 photons and a noise floor of 8 photons, for a sensor DR of 11-and-a-bit stops. Naturally, you can't capture the entire scene in one shot.

Let's say that you have glare that adds 200 photons to each photosite. Your brightest pixel now receives 16584 photons and your darkest one 202 photons. The dynamic range of the scene, as seen by the sensor, is now around 6.4 stops - easily capturable by the sensor. Since your sensor has 14-bit output, the signal is now spread over roughly 16180 of the 16384 available luminosity levels - hardly a significant decrease and unlikely to cause posterisation. This is because the brightest stop contains half the luminosity levels, the next brightest half of the remainder, and so on. The top six stops therefore contain 98.44% of the total levels available; the rest - the shadows - are all crammed into the remaining 1.56%.
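The arithmetic above can be sketched in a few lines of Python; all the figures are the hypothetical example numbers from this post, not measurements:

```python
import math

# Hypothetical example values from the post:
bright, dark = 16384, 2   # photons in brightest / darkest pixel at 1 s
glare = 200               # uniform veiling glare, photons per photosite

dr_scene = math.log2(bright / dark)                     # scene DR, ~13 stops
dr_seen = math.log2((bright + glare) / (dark + glare))  # ~6.4 stops after glare

# With an N-bit scale, the top k stops hold 1 - 2**-k of all levels:
top6 = 1 - 2**-6   # share of 14-bit levels in the brightest six stops

print(f"scene DR: {dr_scene:.1f} stops, seen by sensor: {dr_seen:.1f} stops")
print(f"levels in top 6 stops: {top6:.2%}")
```

The same two `log2` calls work for any full-well/noise-floor pair, which makes it easy to replay the argument with your own camera's numbers.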

Of course, real glare isn't completely even across the frame - that's the actual problem. But for the hypothetical case of perfectly even glare considered here, it wouldn't be a problem at all.

Also, I think you mean cataracts rather than glaucoma.

Really, the method is much more robust than you give it credit for. Hans Kruse has also discovered that method and posted results in a number of threads. It is usually only small patches of the lightest areas that need to be blended in, and they only rarely coincide with moving detail. It can happen, but it's rarer than you suggest - the exception rather than the rule.

It certainly doesn't happen in every frame. But, when it does happen (which, while not the majority of shots, is certainly common enough to cause problems), it's one of the most annoying things to try to deal with.

That's (fortunately) not how it works. DR is defined as the number of photons at the saturation point divided by the noise level at a low - or even zero - exposure: just the read noise. What may seem like a full well capacity of 16000 actually took 4x as many photons if we shoot at base ISO (after all, we want to avoid noise; we're not shooting action). Canon cameras can benefit from *relatively* lower read noise by boosting ISO a bit, but for the lowest noise they too should use base ISO if shutter speed is not an issue.

So that's 64000 photons for each shot we want to average, which stays 64000 on average. The read noise of e.g. 8 (no photons, just the standard deviation of the noise) is reduced as we average more and more shots. Two shots have 1/Sqrt(2) of the noise, so 8/Sqrt(2)=5.66; 8 shots would have 8/Sqrt(8)=2.83. So that would be log(64000/2.83)/log(2)=14.5 stops of DR, if we want to go through the trouble of averaging instead of blending (the best parts of) images.
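The 1/Sqrt(N) averaging above is easy to verify; again, 64000 photons and a read noise of 8 are the example values from this post, not measured data:

```python
import math

full_well = 64000   # photons per shot at base ISO (example figure)
read_noise = 8.0    # standard deviation of read noise, photon-equivalents

for n_shots in (1, 2, 8):
    # Uncorrelated read noise averages down as 1/sqrt(N)
    noise = read_noise / math.sqrt(n_shots)
    dr = math.log2(full_well / noise)
    print(f"{n_shots} shot(s): noise {noise:.2f}, DR {dr:.1f} stops")
```

The loop reproduces the figures in the post: about 13 stops for a single shot, 13.5 for two, and 14.5 for eight.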

Let's say one shot has a maximum of 16384 photons per photosite, with an average of 8 photons added by electronic noise, with a distribution of 8 (i.e. the equivalent of 0-16 photons added per pixel). This puts the saturation point (16384) 11 stops above the noise floor (8). Let's just say that the distribution of noise is equal within that range - that is, the same number of pixels receive 1 'photon' of read noise as receive 6, 8 or 16 (in reality, it would approximate a normal distribution curve, but that would just complicate the mathematics and this will serve just as well for argument's sake).

Now, let's say you summed 4 frames. You now have a maximum of 65536 photons per photosite. But you've also added an average of 32 photons of noise per photosite, with a distribution of 32 (although the actual distribution curve would be much tighter - there would be far more pixels close to 32 noise in the combined image than there would be pixels close to 8 noise in a single image, and you'd have a bell-shaped curve rather than the equal distribution of the single frame; in an actual situation, where the distribution of read noise in the single frame is also a bell curve, you'd have a much tighter bell). Your ceiling is still only 11 stops above the average noise floor.

Of course, this all changes if you set the black point at the average noise floor, i.e. produce the image based on 'white' being full well capacity and 'black' being the noise floor. This would mean subtracting 8 from each single image, or 32 from the four combined images. In other words, your scale would go from 0 to 16376 for a single image (with noise present from 0-8, with 50% of pixels receiving 0 and the rest evenly distributed between 1-8), or 0 to 65504 in the combined image (with noise present from 0-32, with 50% of pixels receiving 0 and the vast majority receiving just 1-8, and only occasional pixels receiving more). Therefore, the saturation point in the single image would be around 11 stops above the residual noise, while the saturation point in the combined image would be almost 13 stops above it, due to the tighter bell curve.
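As a sanity check on the 'tighter bell' claim, here is a small Monte Carlo of the toy model above (read noise uniform over 0-16 photon-equivalents per frame, four frames summed); the figures are illustrative only:

```python
import random
import statistics

random.seed(0)
N = 100_000

# Toy model from the post: read noise uniform over 0-16 per frame
single = [random.uniform(0, 16) for _ in range(N)]
summed = [sum(random.uniform(0, 16) for _ in range(4)) for _ in range(N)]

# Spread around the average floor (8 for one frame, 32 for four):
sd1 = statistics.pstdev(single)   # ~4.62 (uniform: 16/sqrt(12))
sd4 = statistics.pstdev(summed)   # ~9.24: noise only doubles while the
                                  # signal ceiling quadruples

print(sd1, sd4, sd4 / sd1)
```

Signal grows 4x but the noise spread only ~2x, so the floor does tighten relative to full scale - roughly one extra stop of DR under this simplified model.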

OK, I just shot down that part of my own argument. But I was merely speculating whether there would actually be an improvement in DR - I hadn't actually done the calculations to prove or disprove it, until forced to! It looks like it comes down to the fact that the 'zero' point is set at the average noise floor rather than at an absolute 'zero' signal - when done that way, there is indeed an improvement in DR.