Hello again Jack,

We may have a misunderstanding. I am only talking about the definition of Unity Gain, per the topic. Since the definition itself excludes photons, they and their distributions (Poisson, etc.) are not relevant to the definition. The definition starts at the sensor output and knows nothing of the shot noise and thermal noise generated therein. Thus I am able to answer as below:

Hi Ted,

Ok, good point about energy levels: so you say one photon = one electron? Does it make a difference as to what the energy of the photon in question is (i.e. 380nm, vs 760nm light) as far as the number of electrons generated?

To be pedantic, one photon = one or no electron. 380nm vs. 760nm makes no difference, other than 380nm has more energy and is more likely to whack an electron. However, there are devices that can produce more electrons than photons, such as photo-multipliers and those night-vision thingies (not IR) much beloved by the military.

I am all for keeping things simple, however noise is a pretty key element of this discussion, as I hope to show below. I am thinking about the noise that is always inherently present in light, sometimes referred to as shot noise because its distribution is similar to the arrival statistics of shot from a shotgun, which was characterized by a gentleman by the name of Poisson.

Yes, shot noise and sensor thermal noise exist, of course, but they are earlier in the signal chain than the sensor output and are not, therefore, part of the definition.

So now we can address the integer nature of electrons. Let's assume that each photosite is a perfect electron counter with Unity Gain: 10 electrons generated by the photosite, and the ADC stores a count of 10 in the Raw data with no errors. Example 1: the sensor is exposed at a known luminous exposure and the output of the photosite in question is found to result in a Raw value of 2. What is the signal?
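As a minimal sketch of the perfect electron counter described above (hypothetical function name, assuming a completely noiseless chain from sensor output to ADC output):

```python
# Hypothetical ideal Unity Gain chain: the ADC count equals the electron
# count exactly -- a gain of 1 with no errors introduced anywhere.
def ideal_unity_gain_adc(electrons: int) -> int:
    """Return the Raw value for a photosite that collected `electrons`."""
    return electrons  # gain of exactly 1x, no added noise or rounding

print(ideal_unity_gain_adc(10))  # a photosite with 10 electrons -> Raw value 10
```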

By including signal processing downstream of the ADC you are going outside the bounds of the definition. By including illuminance at the sensor face, you are again going outside the bounds of the definition. Therefore, the question is not relevant to "unity gain" and - with respect - by not defining "the signal", you are also rendering the question moot.

We cannot tell by looking at just one photosite. The signal could easily be 1, 2, 3, 4, 5, 6... For instance, if it were 4, the shot noise would be 2 (the square root of 4), and a value of 2 is only a standard deviation away. To know what the signal is, we need to take a uniform sample of neighbouring photosites, say a 4x4 matrix*. We gather statistics from the Raw values from each photosite and compute a mean (the signal) and a standard deviation (the noise). In this example it turns out that the signal was 1 electron with a standard deviation/noise of 1. Interestingly, the human visual system works in more or less the same way.
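The estimation procedure just described can be sketched as a simulation (a hypothetical illustration, using NumPy's Poisson sampler to stand in for random photon arrival):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a uniformly lit 4x4 patch whose true signal is 1 electron per
# photosite; photon arrival is Poisson, so Raw values scatter around it.
true_signal = 1
patch = rng.poisson(true_signal, size=(4, 4))

signal_estimate = patch.mean()       # the signal: mean of the patch
noise_estimate = patch.std(ddof=1)   # the noise: sample standard deviation
print(signal_estimate, noise_estimate)
```

A single Raw value of 2 is within roughly one standard deviation of several candidate means, which is why one photosite alone cannot pin down the signal.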

Quite so, but I'm sure that the definition is meant to be about the camera ISO setting and not the captured scene. That is to say that the gain of the amplification from the sensor output to the ADC output, i.e. the ISO setting itself, knows nothing of the scene nor anything of subsequent processing. And you will remember that the sensels are sampled *one at a time* and know nothing of their neighbours' values. Equally, the ADC input only receives sensel values one at a time, and I believe that to be a very important point for you to consider. Remember again that the definition does not include or imply any signal processing after the ADC output. Therefore "the signal", as far as the definition is concerned, is the output of one sensel and one sensel only, not a number thereof.

Example 2: a new exposure resulting in a signal of 7 electrons for each photosite in the 4x4 matrix on our sensor. Of course each does not get exactly 7 electrons, because photons arrive randomly; in fact we know, thanks to M. Poisson, that the mean of the values in our 4x4 matrix should indeed be 7, but with a standard deviation of 2.646 (the square root of 7) - so some photosites will generate a value of 7, but many will also generate ...2, 3, 4, 5, 6, 8, 9, 10, 11, 12.... The signal is the mean of these values.
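The 2.646 figure can be checked numerically (a sketch assuming NumPy; the 4x4 matrix is enlarged to many simulated photosites so the sample statistics converge on the Poisson theory):

```python
import math
import numpy as np

rng = np.random.default_rng(7)

mean_electrons = 7
theoretical_sd = math.sqrt(mean_electrons)  # sqrt(7) ~= 2.646

# Many simulated photosites, each exposed to a true mean of 7 electrons.
samples = rng.poisson(mean_electrons, size=100_000)

print(round(samples.mean(), 2), round(samples.std(), 2), round(theoretical_sd, 3))
```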

No, "the signal" can not be anything but the discrete value of one sensel output. No algebra, statistical or otherwise, is done until after the ADC output and therefore is not part of the definition.

Example 3: Different exposure. Say we look at our 4x4 matrix of Raw values and end up with a mean of 12.30 and a standard deviation of 3.50.

We can't look at 16 different sensel values and do any calculations on them - the definition only includes a gain term, taken one sensel at a time. Of course, the camera can do what it likes with as many sensel values as it likes, and the consequent Raw file can show distributions of all kinds, BUT, being outside of the signal path from the sensor output to the ADC output, that has nothing to do with the definition.

Using the Effective Absolute QE for the D800e above (15.5%) and ignoring, for the sake of simplicity, Joofa's most excellent point above, could we say that this outcome resulted from exposing each photosite to a mean of 12.3/0.155 ≈ 79.35 photons? After all, this number of photons is a mean itself.

No, we could not. We can calculate a mean and SD for discrete items, such as the number of electrons in a number of photosites, and we can indeed assign fractional values to the said mean and SD. But, sorry, we can not turn the equation around and come up with something like 79.35 photons. All that figure tells you is that it is perhaps more likely that there were 79 photons than 80 photons. You can not have a fractional number of photons. Physically impossible.
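The arithmetic in question, and the distinction being drawn, can both be shown in a few lines (values taken from the example above; the 15.5% effective QE figure is the one quoted in the discussion):

```python
mean_electrons = 12.30  # mean of the 4x4 matrix of Raw values
effective_qe = 0.155    # Effective Absolute QE quoted for the D800e

# The division is legitimate arithmetic on a mean...
implied_mean_photons = mean_electrons / effective_qe
print(round(implied_mean_photons, 2))  # 79.35

# ...but no single exposure can receive 79.35 photons: actual photon
# counts are integers, so 79.35 can only describe the mean of many trials.
assert implied_mean_photons != int(implied_mean_photons)
```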

I find it disturbing that fractional photons are still being mentioned. There can be no such thing. If this basic fact about the nature of light is not understood, then nothing else can be accepted or understood and, with all due respect, our discussion would be at an end.

So, time for a question of my own:

Is it the opinion of your goodself, or indeed of this forum, that fractional photons can exist?