eronald


« Reply #40 on: March 24, 2013, 03:14:28 PM » 

Deleted original content, sorry.
BTW, all current sensors seem to have tunable anti-blooming, which bends their response curve near saturation.
Edmund


« Last Edit: March 24, 2013, 03:19:07 PM by eronald »





Jim Kasson


« Reply #41 on: March 24, 2013, 03:30:26 PM » 

BTW, all current sensors seem to have tunable anti-blooming, which bends their response curve near saturation.
If you can get me details, I can throw it into the model. So far, I haven't modeled what happens when a well overflows; in the present model, it just clips. Jim







Jim Kasson


« Reply #42 on: March 24, 2013, 04:40:36 PM » 

Bill, I've added read/dark noise to the model. I've assumed it's Gaussian, although it seems to have longer tails than that. I got the values from some Nikon D4 testing I've done with exposures at 1/30 of a second (therefore not much dark current). I made the exposures using ISOs from 100 to 6400, and, to first order, the noise all occurs before the pre-ADC amplifiers. The mean value of the noise with an amplifier gain of unity is 1.8 least-significant bits (LSBs) and the standard deviation is 2.7 LSBs. Since I did my testing with the lens cap on, values below zero got clipped, and therefore the real standard deviation is probably higher. I note that observation, but have not tried to correct for it. Because the real read-noise tail appears longer than Gaussian and the standard deviation is probably understated, the curve that follows could be optimistic at the higher test ISOs and the darker targets. Here's the result: You can see that the best compromise exposure is Zone IV (14-bit ADC count of 1000), exactly as you predicted. Jim
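Jim's clipping caveat can be sketched numerically. The sketch below assumes his reported figures (mean 1.8 LSB, sigma 2.7 LSB) and a plain Gaussian read-noise model; it shows how clamping lens-cap frames at zero understates the measured standard deviation:

```python
import random
import statistics

random.seed(42)

# Illustrative parameters from the post: read noise ~ N(1.8, 2.7) in LSBs
true_mu, true_sigma = 1.8, 2.7

samples = [random.gauss(true_mu, true_sigma) for _ in range(200_000)]
clipped = [max(0.0, s) for s in samples]   # lens-cap frames clip at zero

measured_sigma = statistics.pstdev(clipped)
print(f"true sigma    = {true_sigma:.2f} LSB")
print(f"clipped sigma = {measured_sigma:.2f} LSB")  # noticeably smaller
```

With these numbers roughly a quarter of the samples fall below zero, so the clipped standard deviation comes out around 2.1 LSB, supporting the "probably higher in reality" caveat.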







Jack Hogan


« Reply #43 on: March 25, 2013, 01:59:05 PM » 

I've added read/dark noise to the model. I've assumed it's Gaussian, although it seems to have longer tails than that. I got the values from some Nikon D4 testing I've done with exposures at 1/30 of a second (therefore not much dark current). I made the exposures using ISOs from 100 to 6400, and, to first order, the noise all occurs before the pre-ADC amplifiers. The mean value of the noise with an amplifier gain of unity is 1.8 least-significant bits (LSBs) and the standard deviation is 2.7 LSBs. Since I did my testing with the lens cap on, values below zero got clipped, and therefore the real standard deviation is probably higher. I note that observation, but have not tried to correct for it. Because the real read-noise tail appears longer than Gaussian and the standard deviation is probably understated, the curve that follows could be optimistic at the higher test ISOs and the darker targets. You can see that the best compromise exposure is Zone IV (14-bit ADC count of 1000), exactly as you predicted.
Hi Jim, very nice. I trust you know that the source of empirical knowledge about noise and the like in DSCs is this excellent treatise by Emil Martinec. For fun, I applied the theory there to the full SNR curves that can be found at DxO. Here is an example based on the D5200: It's amazing what information one can pull out of those little graphs. Here is an example on the D800e, which I don't think has been characterized by Sensorgen yet. Blue data is read off the chart; green values are the analysis that comes out of it. My results are typically very, very similar to Sensorgen's, as well they should be, since we are using the same data. Values up to ISO 6400 should be fairly accurate. Above that I am not sure we get a clear spot on which to rest the shot-noise-only/signal tangent. Cheers, Jack
PS Just out of curiosity, why do you refer to Zones when talking about Raw value ranges? If you want to start an interesting debate, try asking where Zone 0 should sit out of full scale and watch what happens


« Last Edit: March 25, 2013, 02:14:59 PM by Jack Hogan »





Jack Hogan


« Reply #44 on: March 25, 2013, 02:35:55 PM » 

Anyway, here goes. On the RawDigger site, there's a technique for computing Unity Gain ISO. It is basically a search, over several exposures made at different camera ISO settings, for the setting that, with a flat, relatively bright (but not saturated) compact target rectangle, produces a standard deviation in the raw values of a chosen channel equal to the square root of the mean raw value in that channel. I have often wondered what 'unity' means exactly. I understand it in principle (input 100 apples, count 100), but does it really apply to the statistical nature of light, where integers are replaced by means and standard deviations with many decimal places? So what's the input: photons or electrons? Most of the relevant posts in this thread assume electrons. Why? From my D800e chart above it takes about 6.4 photons on top of the sensor to generate an electron. That's not an integer to start with. And what happens if I only get 6 photons? Do I get an electron? I believe I in fact get 0.94 electrons, which with a little dithering I can store pretty accurately in the Raw data. Any thoughts? Jack
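The RawDigger-style search can be sketched as follows. Everything camera-specific here is invented for illustration (a gain of 1 ADU per electron is assumed to land at ISO 1600, with gain doubling per stop); the point is only the test itself: at unity gain the standard deviation of the raw values equals the square root of their mean.

```python
import math
import random
import statistics

random.seed(1)

def poisson(lam):
    # Knuth's method; fine for the modest means used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

def raw_patch(electrons_mean, gain, n=5000):
    # raw value = gain (ADU per electron) * Poisson electron count
    return [gain * poisson(electrons_mean) for _ in range(n)]

# Hypothetical camera: gain doubles per stop and hits 1 ADU/e- at ISO 1600
best_iso, best_err = None, float("inf")
for iso in (100, 200, 400, 800, 1600, 3200):
    gain = iso / 1600.0
    vals = raw_patch(electrons_mean=40, gain=gain)
    m = statistics.fmean(vals)
    s = statistics.pstdev(vals)
    err = abs(s - math.sqrt(m))   # unity gain <=> std == sqrt(mean)
    if err < best_err:
        best_iso, best_err = iso, err

print("estimated unity-gain ISO:", best_iso)
```

With raw = g * Poisson(N), the std is g*sqrt(N) while sqrt(mean) is sqrt(g*N); the two agree only at g = 1, which is why the search converges on the assumed ISO 1600.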







Jim Kasson


« Reply #45 on: March 25, 2013, 03:09:16 PM » 

I have often wondered what 'unity' means exactly. Jack, It's sloppy notation, but I use it because of its wide currency. The denominator is electrons, the way that I think of it. That way the shot noise varies as the square root of the signal. If 1000 photons fall on a photosite and 500 electrons are generated, the shot noise is the same as if 500 photons fell on the site of a different, ideal, camera and 500 electrons were generated. I call the shot noise photon noise, although it is really electron noise, because that's the notation I see so often. I think it's sufficiently clear in context. In electrical engineering, gain is usually a unitless quantity. Volts out over volts in. Amps out over amps in. However, in this usage, it has units: counts out over electrons in. That makes me cringe, but it seems to be standard usage, so I go along. I am more comfortable dealing with statistical uncertainty in digital data streams, and in the physical events upstream. I worked for Hewlett-Packard designing data acquisition and process control systems, and for Rolm and IBM designing voice and data communication systems, and noisy data is something that nearly always is what you have to work with. In fact, one of the patents that made the first Rolm CBX possible involved injecting spectrally shaped noise into the ADC to, in effect, gain a thirteenth bit in a 12-bit ADC. I came up with the idea in a moment of desperation after we decided to go after the telco market, which had more stringent noise and crosstalk standards. It was later picked up by the audio industry in general, and used in the Compact Disc. But I digress. I'm fine with data being stochastic, just like analog signals. Jim
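Jim's "thirteenth bit" trick, noise injected ahead of a quantizer so that averaging recovers sub-LSB resolution, can be demonstrated in a few lines (plain Gaussian dither here, not the spectrally shaped noise of the patent):

```python
import random
import statistics

random.seed(7)

signal = 5.3  # a level that falls between two ADC codes (in LSB units)

# Without dither, an ideal quantizer returns the same code every time:
undithered = [round(signal) for _ in range(10_000)]

# With ~1 LSB of Gaussian noise added before quantization, the codes
# straddle the true value, and averaging recovers sub-LSB resolution:
dithered = [round(signal + random.gauss(0, 1.0)) for _ in range(10_000)]

print("undithered mean:", statistics.fmean(undithered))          # 5.0
print("dithered mean:  ", round(statistics.fmean(dithered), 2))  # ~5.3
```

The undithered average is stuck at 5.0 no matter how many samples are taken; the dithered average converges on 5.3, which is the same mechanism that makes noisy raw data carry fractional-electron information.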







Jim Kasson


« Reply #46 on: March 25, 2013, 03:19:05 PM » 

Just out of curiosity, why do you refer to Zones when talking about Raw value ranges? I started using the notation when working with local (Monterey Peninsula) photographers. Out here, it seems like the Zone System is in the water, and everyone seems to know what I'm talking about. It doesn't bear close scrutiny, because the Zones in the Zone System are in the manipulated output, and I mean to refer to light ratios in the signal that falls on the sensor. What I wanted when I fastened on the Zone nomenclature was a term that indicated the digitized value in a raw file independent of the number of bits in the ADC and the gain of the amplifier. I also wanted it to be logarithmic, with a base of two, because we photographers think in terms of stops. Another possibility would be mean digitized value in stops below full scale, but around here I get some glazed eyes when I say that. I am open to suggestions. Jim
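Jim's intended meaning, a base-2 logarithmic measure of raw level independent of bit depth, reduces to a one-liner (the helper name is mine):

```python
import math

def stops_below_full_scale(adc_count, bits=14):
    # "Mean digitized value in stops below full scale": logarithmic,
    # base 2, parameterized by bit depth so 12- and 14-bit counts
    # can be compared directly.
    full_scale = 2 ** bits - 1
    return math.log2(full_scale / adc_count)

# The 14-bit count of 1000 called "Zone IV" earlier in the thread:
print(round(stops_below_full_scale(1000), 2))  # -> 4.03
```

A count of 1000 out of 16383 sits almost exactly four stops down, which is where the "Zone IV" label in the earlier posts comes from.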


« Last Edit: March 25, 2013, 03:21:04 PM by Jim Kasson »





Jim Kasson


« Reply #47 on: March 25, 2013, 03:37:22 PM » 

Jack, I stand on the shoulders of others, and Emil has big shoulders. I like what you've done. Looking at your curves and some of my own data makes me want to extend my model to have two sources of read noise.
The first one is before the amplifier, measured in electrons. That noise would be specified as the characteristics of some probability density function. If I continue to use a Gaussian function, as I will for at least a while, the parameters will be mean and standard deviation. I kind of like the idea of quantizing the noise to integral numbers of electrons just for elegance, but I can't imagine that it will make any difference when averaged over a hundred exposures, which is my standard now (when I start running batch jobs, I'll probably raise that to a thousand). [Edit: now that I think about it, the noise added shouldn't be quantized, since it's electrical, but not necessarily electrons in the well. Maybe the electrons in the well should be rounded to the nearest integer, though.]
The second one would be after the amplifier, measured in LSBs, probably represented by the mean and standard deviation of a Gaussian generator, and probably not quantized to integers, because I see it as an analog signal injected into the ADC input.
One way to get these values would be to measure the noise at a bunch of ISOs and fit a straight line to the data. For some reason, I like that better than just having a different mean and standard deviation for each ISO, although the cruder technique is probably more accurate, since it allows for unmodeled mechanisms. Jim
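The proposed two-source model can be sketched as below. All numbers are placeholders, not measurements: pre-amp noise is specified in electrons and scales with the ISO-dependent gain, post-amp noise in LSBs does not, and the two combine in quadrature.

```python
import math

def total_read_noise_lsb(iso, sigma_pre_e, sigma_post_lsb,
                         base_iso=100, base_gain_lsb_per_e=0.2):
    # Two-stage sketch of the proposed model. sigma_pre_e is pre-amp
    # noise in electrons; sigma_post_lsb is post-amp noise in LSBs.
    # The gain (LSB per electron) is assumed proportional to ISO.
    gain = base_gain_lsb_per_e * iso / base_iso
    pre_referred = sigma_pre_e * gain      # pre-amp noise seen at the ADC
    return math.hypot(pre_referred, sigma_post_lsb)

for iso in (100, 400, 1600, 6400):
    noise = total_read_noise_lsb(iso, sigma_pre_e=3.0, sigma_post_lsb=1.5)
    print(iso, round(noise, 2))
```

This also shows why the straight-line fit works: total variance is sigma_post^2 + sigma_pre^2 * gain^2, which is exactly linear in gain squared, so two measured points in principle determine both sigmas.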


« Last Edit: March 25, 2013, 08:35:35 PM by Jim Kasson »





bjanes


« Reply #48 on: March 25, 2013, 09:19:32 PM » 

Bill, I've added read/dark noise to the model. I've assumed it's Gaussian, although it seems to have longer tails than that. I got the values from some Nikon D4 testing I've done with exposures at 1/30 of a second (therefore not much dark current). I made the exposures using ISOs from 100 to 6400, and, to first order, the noise all occurs before the pre-ADC amplifiers. The mean value of the noise with an amplifier gain of unity is 1.8 least-significant bits (LSBs) and the standard deviation is 2.7 LSBs. Since I did my testing with the lens cap on, values below zero got clipped, and therefore the real standard deviation is probably higher. I note that observation, but have not tried to correct for it. Because the real read-noise tail appears longer than Gaussian and the standard deviation is probably understated, the curve that follows could be optimistic at the higher test ISOs and the darker targets. Here's the result: You can see that the best compromise exposure is Zone IV (14-bit ADC count of 1000), exactly as you predicted. Jim
Jim, Thank you! Your data are quite interesting and your model seems quite sophisticated. Matlab is above my pay grade. I'm not sure how you took PRNU into account. In my testing with the D800e it represents about 0.38% of the signal output. Is that consistent with your model? Regards, Bill







bjanes


« Reply #49 on: March 25, 2013, 09:23:52 PM » 

I have often wondered what 'unity' means exactly. I understand it in principle (input 100 apples, count 100), but does it really apply to the statistical nature of light, where integers are replaced by means and standard deviations with many decimal points? So what's the input: photons or electrons? Most of the relevant posts in this thread assume electrons. Why? From my D800e chart above it takes about 6.4 photons on top of the sensor to generate an electron. That's not an integer to start with. And what happens if I only get 6 photons? Do I get an electron? I believe I in fact get 0.94 electrons, which with a little dithering I can store pretty accurately in the Raw data. Any thoughts? Jack Here is a link to what Roger Clark thinks about unity gain. Regards, Bill







bjanes


« Reply #50 on: March 25, 2013, 09:38:54 PM » 

Jack, I stand on the shoulders of others, and Emil has big shoulders.
Indeed. Another source worth looking at is modeling work done by Marianne Oelund. I have not used her program but it looks interesting and might give you some ideas. Regards, Bill







BJL


« Reply #51 on: March 25, 2013, 10:43:14 PM » 

It turns out that you can take the same data and compute full-well capacity, if you assume that the well fills as the ADC output approaches full scale at base ISO. The full-well capacity should be proportional to photosite area, all else being equal. With the CCD-based Leica M9 in the mix, all else is not equal: Jim
Something seems wrong here: the M9 sensor should have roughly the same 60,000-electron full-well capacity as other Kodak full-frame-type sensors with 6.8 micron pixels, like the KAF-31600: http://www.truesenseimaging.com/all/download/file?fid=11.62 so that 30,000 seems too low. Also, the other numbers seem too high, and it would be puzzling for full-frame-type CCDs to have lower rather than higher full-well capacity, since the main virtue of full-frame-type sensors is using almost all of the sensor area for storing photoelectrons, whereas CMOS sensors use some space for the three or more processing transistors per photosite. Also, sensors like these with microlenses are known to have quantum efficiency of around 40% or better with color filter arrays, and higher without: about 80%. That is, about 2.5 photons per photoelectron with a CFA, under 2 photons/electron without. So 6.2 photons per photoelectron is way too high. Also, understand that it is not a matter of the sensor counting up to some number of photons and then scoring one electron; it is instead a probabilistic thing. For example, when a sensor has 80% quantum efficiency with no CFA in place, it means that each photon has an 80% chance of causing an electron to be deposited in the well, and a 20% chance of going undetected.
P.S. The name "unity gain" is a bit unfortunate, as it perpetuates the myth that there is "no amplification" at some natural exposure index level, and "amplification" at all higher EI settings. Instead, with various dimensional conversions from photoelectron counts (charges) to currents to voltages to numerical ADC levels, the idea of "unamplified" or "gain of one" is physically meaningless. I suppose the idea of "one ADC level per detected photon" can be useful, as an upper limit on the level of amplification that can help with image quality, SNR and such.
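BJL's per-photon-probability picture is binomial thinning. A sketch (the QE and photon count are invented for illustration):

```python
import random
import statistics

random.seed(3)

def detect(photons, qe):
    # Each photon independently frees an electron with probability qe:
    # QE as a per-photon probability, not a photon-counting threshold.
    return sum(1 for _ in range(photons) if random.random() < qe)

qe = 0.80  # BJL's ~80% figure for a CFA-less sensor with microlenses
trials = [detect(100, qe) for _ in range(20_000)]

m = statistics.fmean(trials)
print("mean electrons from 100 photons:", round(m, 1))
```

The mean settles near 80 electrons, but individual trials scatter binomially around it; there is no fixed "n photons in, one electron out" bookkeeping anywhere in the process.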


« Last Edit: March 25, 2013, 10:45:06 PM by BJL »





joofa


« Reply #52 on: March 25, 2013, 11:03:29 PM » 

So what's the input: photons or electrons? Most of the relevant posts in this thread assume electrons. Why?
Electrons, because in most such experiments one does not have access to photons; ADUs/electrons are all one has, so one just speculates about photons. And, unfortunately, the calculation from photons to electrons and from electrons back to photons is not symmetric for the "usual analysis". More below.
From my D800e chart above it takes about 6.4 photons on top of the sensor to generate an electron. That's not an integer to start with. And what happens if I only get 6 photons? Do I get an electron?
Yes, you may get an electron with 6 photons in this case. However, here is that asymmetry: using the numbers in your chart above, if 20 photons fall on a pixel and the QE is 15.7%, then a good guess of the number of electrons generated is 20 * 0.157 = 3 electrons (rounded). However, if you see 3 electrons in your analysis, then the number of photons impinged, obtained as 3 / 0.157 = 19 photons, is a relatively poor guess (about 6 times farther off, on average, than the "best" guess). Please see these links:
http://forums.dpreview.com/forums/post/50345602
http://forums.dpreview.com/forums/post/50352414
http://forums.dpreview.com/forums/post/50345857
Joofa
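The arithmetic in Joofa's example, showing that the naive point-estimate inversion does not round-trip:

```python
qe = 0.157  # effective QE read off the D800e chart discussed above

photons = 20
electrons = round(photons * qe)   # forward direction: 20 photons -> 3 e-
back = round(electrons / qe)      # naive inversion: 3 e- -> 19 photons

print(electrons, back)  # -> 3 19
```

The naive division gives only a point estimate; as the linked posts argue, inferring the photon count that *caused* an observed electron count is a statistical inversion, not simple division, which is where the asymmetry comes from.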


« Last Edit: March 25, 2013, 11:06:44 PM by joofa »





xpatUSA


« Reply #53 on: March 26, 2013, 03:41:14 AM » 

A little off-topic but, thanks to this thread, I've finally figured out how Clark calculates his Unity Gain ISO value (he doesn't actually give a formula in the link below, just a hint or two). http://www.clarkvision.com/articles/digital.sensor.performance.summary/#unity_gain
He divides the sensor full-well capacity by the ADC 'capacity', giving, say, a value of 'n'. Then log10(n)/log10(2) gives an exponent, say 'y'. Then the Unity Gain ISO is given by (base ISO) * 2^y. So, for my SD10 with a well capacity of 77,000 e-, a 12-bit converter and a base ISO of 100, that is 77,000/4095 = n = 18.8. Then log10(18.8)/log10(2) = y = 4.233. Then Unity Gain ISO = 100 * 2^4.233 = ISO 1880, voila! By this method, it becomes unnecessary to know the sensor output characteristic in uV/e, the amplifier gain and the ADC input voltage range. Of course, Sigma muddies the water a bit by recommending a lower figure than 77,000 e- for good linearity, just as ISO themselves muddy the water with their inclusion of 'headroom' in the saturation-based ISO calculation. Ted
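Ted's recipe collapses to a single multiplication, since (base ISO) * 2^(log2(n)) is just (base ISO) * n. A sketch with his SD10 numbers:

```python
def unity_gain_iso(full_well_e, adc_bits, base_iso):
    # Ted's reading of Clark's method: electrons per ADU at base ISO,
    # used as an ISO multiplier. The log2/2^y round trip in the post
    # cancels out, leaving base_iso * n.
    n = full_well_e / (2 ** adc_bits - 1)
    return base_iso * n

# Sigma SD10 numbers from the post: 77,000 e-, 12-bit ADC, base ISO 100
print(round(unity_gain_iso(77_000, 12, 100)))  # -> 1880
```

Note this only needs the full-well capacity, the ADC bit depth, and the base ISO, which is exactly Ted's point about not needing the uV/e characteristic or the amplifier gain.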




best regards,
Ted



Jack Hogan


« Reply #54 on: March 26, 2013, 04:20:49 AM » 

Here is a link to what Roger Clark thinks about unity gain.
Ok Jim, Bill, Edmund, BJL, Ted, Joofa and everybody else: what Roger Clark says offers a good example of what I am asking. "Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, there is little point in increasing ISO above the Unity Gain ISO (small gains may be realized due to quantization effects, but as ISO is increased, dynamic range decreases)."
1. One electron is not one converted photon...
2. Even if it were, isn't he assuming that the electron is either there or not there when he says that it is "the smallest quantum that makes sense to digitize"? Doesn't quantum mechanics work in double-precision floating point internally?
3. I can think of many ways to determine when to stop increasing ISO, but all of them include some measure of read noise, which I do not see here.
Please correct me if I am reading this wrong but, intuitively, an electron is just the equivalent of a convenient SI unit. However, it means nothing by itself: we cannot tell if that one electron is the signal (a mean) or noise. In order to decode the signal we need a larger sample. Because there will be (random) noise, we also get dithering. Which, depending on sample and noise size, may allow us to determine the 'signal' to many significant digits, for example 0.15 electrons. So does it make sense to speak of 1 electron (or was it a photon?) as "the smallest quantum that it makes sense to digitize"? Jack
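Jack's 0.15-electron point can be demonstrated: each pixel reports an integer electron count, yet the patch mean resolves a fractional signal. (The Poisson sampler is Knuth's method; the 0.15 e- signal level is Jack's hypothetical.)

```python
import math
import random
import statistics

random.seed(11)

def poisson(lam):
    # Knuth's method; fine for small means like 0.15
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Every pixel stores an integer count (mostly 0, sometimes 1), but the
# signal -- the mean over a patch -- resolves fractions of an electron.
patch = [poisson(0.15) for _ in range(100_000)]
print(round(statistics.fmean(patch), 2))
```

Averaged over a large enough patch, the recovered mean converges on 0.15 electrons, which is the sense in which "1 electron" is not a hard floor for what can usefully be digitized.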







Jack Hogan


Reply

I'm not sure how you took PRNU into account. In my testing with the D800e it represents about 0.38% of the signal output.
It's consistent with the data that comes off the DxO curves: I get 0.36% and 0.39% at ISO 200 and 400 respectively in the table above. I am not sure why ISO 100 is inconsistent at 0.28%; in theory this should give the most accurate reading, so perhaps a slight measurement error on my part. For your reference, the D800's PRNU readings in the 100 to 400 ISO range were virtually identical. Jack
[EDIT] Rereading this message it is unclear: identical to each other, not to the D800e's. PRNU on the D800 came out to 0.51%, 0.53%, 0.52% at ISO 100, 200, and 400 resp.
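How PRNU shows up in a flat-field measurement can be sketched as follows, using Bill's 0.38% figure and an invented signal level; shot noise is approximated as Gaussian, which is reasonable at tens of thousands of electrons:

```python
import math
import random
import statistics

random.seed(5)

signal_e = 40_000   # invented bright flat-field level, electrons/pixel
prnu = 0.0038       # Bill's D800e figure: PRNU ~ 0.38% of signal

# Each pixel has a slightly different sensitivity (the PRNU term);
# shot noise is added on top as Gaussian with sigma = sqrt(signal).
vals = [random.gauss(1.0, prnu) * signal_e
        + random.gauss(0.0, math.sqrt(signal_e))
        for _ in range(50_000)]

total_sd = statistics.pstdev(vals)
predicted = math.hypot(math.sqrt(signal_e), prnu * signal_e)
print(round(total_sd), round(predicted))
```

Because PRNU grows linearly with signal while shot noise grows as its square root, at this level the two contributions are comparable (about 200 e- shot vs 152 e- PRNU here), which is why PRNU only becomes visible near the top of the exposure range.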


« Last Edit: March 26, 2013, 12:43:22 PM by Jack Hogan »





Jack Hogan


« Reply #56 on: March 26, 2013, 04:53:20 AM » 

Looking at your curves and some of my own data makes me want to extend my model to have two sources of read noise.
The first one is before the amplifier, measured in electrons. That noise would be specified as the characteristics of some probability density function. If I continue to use a Gaussian function, as I will for at least a while, the parameters will be mean and standard deviation... The second one would be after the amplifier, measured in LSBs, probably represented by the mean and standard deviation of a Gaussian generator, and probably not quantized to integers, because I see it as an analog signal injected into the ADC input.
One way to get these values would be to measure the noise at a bunch of ISOs and fit a straight line to the data. For some reason, I like that better than just having a different mean and standard deviation for each ISO, although the cruder technique is probably more accurate, since it allows for unmodeled mechanisms.
Great, I'm curious to see whether it makes a material difference, or whether the simpler model we've been using suffices for our purposes. You may want to take a look at the links that Joofa provided to help you choose the appropriate functions. Jack







Jack Hogan


« Reply #57 on: March 26, 2013, 05:19:15 AM » 

What I wanted when I fastened on the Zone nomenclature was a term that indicated the digitized value in a raw file independent of the number of bits in the ADC and the gain of the amplifier. I also wanted it to be logarithmic, with a base of two, because we photographers think in terms of stops. Another possibility would be mean digitized value in stops below full scale, but around here I get some glazed eyes when I say that.
I am open to suggestions.
My question about Zones refers to the fact that no two cameras/vendors meter to roughly the same spot in the Raw data, therefore when Zone IV is mentioned one is not sure what to refer it to, hence the need to refer to a specific Raw value (i.e. 1000). Metering as set up by different vendors seems to fall in the range of 3 to 4.5 stops from saturation (12.5% to about 4.4% of Raw full scale). Ansel Adams would have been happier with the latter, absent bracketing. So one way to specify the signal in log2 fashion familiar to photographers would be as a function of Stops from Clipping. Another way I've seen it (as in RawDigger) is to assign 0 EV to 12.5% (as per the relative ISO Saturation Speed standard). This way Saturation is +3 EV and everything else falls into place. A third way, apparently intuitive but potentially misleading in the long run, is for the x axis to refer to the camera's Raw values in log2 notation: saturation is about 14, one stop lower is 13, etc. But this approach is bit-depth dependent and, worse, tempts readers to tie together bit depth and dynamic range. Jack
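The RawDigger-style convention Jack describes (0 EV pinned at 12.5% of full scale, saturation at +3 EV) is a one-line conversion:

```python
import math

def ev_rawdigger(raw_fraction):
    # Jack's second convention: 0 EV is pinned to 12.5% of raw full
    # scale (per the saturation-based ISO speed), so clipping is +3 EV.
    return math.log2(raw_fraction / 0.125)

print(round(ev_rawdigger(1.0), 1))    # saturation -> 3.0
print(round(ev_rawdigger(0.125), 1))  # reference point -> 0.0
```

Because the scale is anchored to a fraction of full scale rather than to a raw count, it is independent of ADC bit depth, which addresses the objection to the "saturation is about 14" axis.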


« Last Edit: March 26, 2013, 05:23:56 AM by Jack Hogan »





xpatUSA


« Reply #58 on: March 26, 2013, 05:30:22 AM » 

Ok Jim, Bill, Edmund, BJL, Ted, Joofa and everybody else: what Roger Clark says offers a good example of what I am asking. "Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, there is little point in increasing ISO above the Unity Gain ISO (small gains may be realized due to quantization effects, but as ISO is increased, dynamic range decreases)." 1. One electron is not one converted photon... 2. Even if it were, isn't he assuming that the electron is either there or not there when he says that it is "the smallest quantum that makes sense to digitize"? Doesn't quantum mechanics work in double-precision floating point internally? 3. I can think of many ways to determine when to stop increasing ISO, but all of them include some measure of read noise, which I do not see here. Please correct me if I am reading this wrong but, intuitively, an electron is just the equivalent of a convenient SI unit. However, it means nothing by itself: we cannot tell if that one electron is the signal (a mean) or noise. In order to decode the signal we need a larger sample. Because there will be (random) noise, we also get dithering. Which, depending on sample and noise size, may allow us to determine the 'signal' to many significant digits, for example 0.15 electrons. So does it make sense to speak of 1 electron (or was it a photon?) as "the smallest quantum that it makes sense to digitize"? Jack
Hello Jack, I'm having some difficulty in following your comments re: electrons. The photon is a particle of light energy. It is not possible to have fractional particles, only integer quantities thereof. At the energy levels of visible light wavelengths, it is generally accepted that there is only enough energy per photon to move one electron from one energy level to another. The change in the energy level of that one electron causes a charge to go into a capacitance, and that charge raises the voltage across the capacitance a smidgeon (7.14 microvolts in my camera). Thus the output voltage from the sensor changes by discrete steps as far as electron 'counting' is concerned. Physicists will dislike the foregoing oversimplified explanation. If the foregoing is acceptable, then we can see that the unity 'gain' condition occurs when, during an exposure, the ISO is such that a perfect pixel accumulates exactly 4095 electrons (12-bit ADC assumed), all of which produce a voltage output which in turn, by definition, causes the ADC output to be binary 111111111111. I think that the point Clark is making goes something like this: an ISO setting higher than the so-called unity gain value would be needed if there were a count of less than 4095 electrons. For example, 2048 would cause me to use an 'extended' setting if my camera had one, doubling the amplifier gain from whatever it was, such that a change of one electron now causes a change of 2 ADUs or whatever we call them here. Our 12-bit ADC has just become effectively an 11-bit ADC. Which effectively halves both the dynamic range and the signal resolution, eh? The 'Unity Gain' concept is quite artificial and therefore, with due respect, I think that the introduction of noise as a consideration is both confusing and unnecessary. Hope this helps,


« Last Edit: March 26, 2013, 05:49:03 AM by xpatUSA »





Jack Hogan


« Reply #59 on: March 26, 2013, 10:57:35 AM » 

Hi Ted, Ok, good point about energy levels: so you say one photon = one electron? Does it make a difference what the energy of the photon in question is (i.e. 380nm vs 760nm light) as far as the number of electrons generated?
The 'Unity Gain' concept is quite artificial and therefore, with due respect, I think that the introduction of noise as a consideration is both confusing and unnecessary.
I am all for keeping things simple, however noise is a pretty key element of this discussion, as I hope to show below. I am thinking about the noise that is always inherently present in light, sometimes referred to as shot noise because its distribution is similar to the arrival statistics of shot from a shotgun, which was characterized by a gentleman by the name of Poisson. So now we can address the integer nature of electrons. Let's assume that each photosite is a perfect electron counter with Unity Gain: 10 electrons generated by the photosite, and the ADC stores a count of 10 in the Raw data with no errors.
Example 1: the sensor is exposed at a known luminous exposure and the output of the photosite in question is found to result in a Raw value of 2. What is the signal? We cannot tell by looking at just one photosite. The signal could easily be 1, 2, 3, 4, 5, 6... For instance, if it were 4, shot noise would be 2, and a value of 2 is only a standard deviation away. To know what the signal is, we need to take a uniform sample of neighbouring photosites, say a 4x4 matrix*. We gather statistics from the Raw values from each photosite and compute a mean (the signal) and a standard deviation (the noise). In this example it turns out that the signal was 1 electron with a standard deviation/noise of 1. Interestingly, the human visual system works more or less the same way.
Example 2: a new exposure resulting in a signal of 7 electrons for each photosite in the 4x4 matrix on our sensor. Of course each does not get exactly 7 electrons, because photons arrive randomly, and in fact we know thanks to M. Poisson that the mean of the values in our 4x4 matrix should indeed be 7 but with a standard deviation of 2.646, so some photosites will generate a value of 7 but many will also generate ...2, 3, 4, 5, 6, 8, 9, 10, 11, 12.... The signal is the mean of these values.
Example 3: different exposure. Say we look at our 4x4 matrix of Raw values and end up with a mean of 12.30 and a standard deviation of 3.50. Using the Effective Absolute QE for the D800e above (15.5%), and ignoring for the sake of simplicity Joofa's most excellent point above, could we say that this outcome resulted from exposing each photosite to a mean of 12.30/0.155 = 79.35 photons? After all, this number of photons is a mean itself. What does this mean for Unity Gain? Jack
*The area within the circle of confusion on an 8x12" image viewed by a person with 20/20 vision at arm's length corresponds to the area of about 16 sensels on a typical modern FF DSLR.
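Jack's Example 2 is easy to check by simulation (Knuth's Poisson sampler; many more than 16 photosites are drawn so the statistics settle):

```python
import math
import random
import statistics

random.seed(2)

def poisson(lam):
    # Knuth's method; fine for a mean of 7 electrons
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

# Example 2: a flat exposure of 7 electrons per photosite. The ensemble
# mean approaches 7 and the per-pixel std approaches sqrt(7) ~= 2.646.
pixels = [poisson(7) for _ in range(160_000)]
print(round(statistics.fmean(pixels), 2),
      round(statistics.pstdev(pixels), 2))
```

A single 4x4 patch would of course estimate both numbers much more coarsely, which is Jack's point: the signal is a mean over a sample, and the sample size sets how precisely it can be known.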


« Last Edit: March 26, 2013, 01:33:02 PM by Jack Hogan »





