What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have?
Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?
There is little to be gained in increasing the camera ISO setting much beyond the Unity Gain ISO. All you're doing is losing headroom, and you're not reducing the noise in the raw file. You're better off letting the histogram slide towards the left and cranking up the Exposure control in Lightroom or ACR.
That's assuming you can see the playback image in the camera LCD (derived from the raw preview JPEG, and therefore affected by the camera ISO setting) well enough to do all the chimping you want to do.
Jim
The full-well capacity is a pretty darned good indicator of the dynamic range of the camera.
It's a nice thing to know when you're trying to decide what camera to buy, or what camera to use for a particular job.
Once you've purchased the camera and are using it, you might use the dynamic range of the camera to determine when you need to use HDR, averaging, or similar techniques to get more shadow detail. You can't do that directly from the full-well capacity, but you could take the log base 2 of the full-well capacity, and subtract 4 to 7 stops (some people say you need 100 electrons for photographic quality, and that's a tad under two to the seventh) to account for the signal-to-noise ratio (SNR) you want in the shadows, and what's left would be the approximate difference, in stops, between the highlights and the shadows-with-detail (Zone II or III).
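The back-of-the-envelope calculation above is easy to script. A minimal sketch in Python; the 60,000-electron full well and the 100-electron shadow floor are illustrative assumptions, not measurements of any particular camera:

```python
import math

def shadow_limited_dr_stops(full_well_e, shadow_floor_e=100):
    """Approximate highlight-to-shadow range in stops.

    log2(full well) is the range down to a single electron; subtracting
    log2(shadow floor) accounts for the SNR wanted in the shadows
    (100 e- is a tad under two to the seventh, i.e. roughly 7 stops).
    """
    return math.log2(full_well_e) - math.log2(shadow_floor_e)

# A hypothetical sensor with a 60,000 e- full well and a 100 e- floor:
print(round(shadow_limited_dr_stops(60000), 1))
```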
Here's the graph with a log base 2 vertical axis to make it easy for you to do the math in your head:
I thought there ought to be a way to do the same thing without a search. I applied some algebra to the problem, and came up with the following algorithm: Set your camera to some middling ISO; call that value ISOtest. Point your camera at a featureless target. Defocus a bit to make sure you don't have any detail. Expose so that the target is about Zone VI, or a count of about 4000 for a 14-bit ADC. If you have a 12-bit ADC in your camera, try for a count of 1000. Bring the resultant image into RawDigger, select a 200x200 pixel area, and read the mean and standard deviation for each color plane. For each plane, call the mean Sadc and the standard deviation Nadc. The unity gain ISO is ISOtest*Sadc/(Nadc^2). Average all three color channels for the Unity Gain ISO of the camera.
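The final step of the algorithm is simple arithmetic, sketched below in Python with hypothetical RawDigger readings standing in for real measurements. The reasoning: at ISOtest the gain g in DN per electron makes the shot-noise variance equal g times the mean, so g = Nadc^2/Sadc, and the ISO at which g reaches 1 is ISOtest*Sadc/Nadc^2:

```python
def unity_gain_iso(iso_test, mean_dn, std_dn):
    """Unity Gain ISO from a flat-field patch shot at iso_test.

    With pure shot noise, variance = gain * mean (both in DN), so the
    gain in DN/e- is std^2 / mean, and the ISO at which that gain
    reaches 1 is iso_test * mean / std^2.
    """
    return iso_test * mean_dn / std_dn ** 2

# Hypothetical (mean, std dev) readings for three colour planes at ISO 400:
planes = [(4012.0, 63.1), (3987.0, 62.5), (4040.0, 63.8)]
ugis = [unity_gain_iso(400, m, s) for m, s in planes]
print(round(sum(ugis) / len(ugis)))  # averaged Unity Gain ISO
```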
I tried the algorithm out on a Nikon D4 over a range of ISOtest values, making 16 exposures for each ISOtest value and plotting the mean Unity Gain ISOs, the mean plus two standard deviations, and the mean minus two standard deviations.
I welcome discussion on what might be the source of the systematic variations, which indicate that the simple model I used is incomplete. In the case of the Sony cameras, the raw files are compressed in a way that reduces the resolution in the lighter values. That might be a possible source. I think the value of understanding the systematic variation is to better understand the internal makeup of the cameras, since the test appears to be sufficiently accurate even with this variation.
http://www.rawdigger.com/howtouse/pixel-capacity-and-amplifier-gain
Why bother with all of that? Why can't sites like DxO or other 'credible' review sites be used for that information? DxO, for example, suggests the D800 has a drange of just over 13 stops at ISO 100 when the noise floor is SNR=1. Knowing that SNR=1 isn't a practical limit, why can't I simply subtract 2 or 3 stops from the DxO number and consider that the practical brightness range of the sensor? All that aside, one still needs to know the brightness range of the scene/subject being shot or all the math is moot.
I would suggest using a somewhat lower DN (data number) than a 14 bit value of 4000. At high DNs, PRNU (pixel response non-uniformity) is increasingly prominent. And at low DNs, read noise becomes significant.
Bob,
There are many ways to approach the technical side of photography. Some people just ignore it, and are perfectly happy with their iPhones and P&S cameras with auto-everything and tiny sensors. Other people believe that the more you understand about your tools the better you can use them, and that deep understanding comes through experimentation. I'm mostly in that camp. Those are the extremes, and it sounds like you are somewhere in between. That's great. If it works for you, keep at it. I'll applaud.
Have you ever taught a workshop or course on a subject you know well? I have, and every time I'm surprised at how much I learn about the subject that I thought I knew cold. Trying to come up with simple explanations for complicated things makes me understand the complicated things more deeply. Student questions sometimes come out of left field; approaching the subject from a direction I'd never considered makes me dig deep and come up with a new way to think about it.
Testing's like that for me. Developing the test makes me think harder about what I'm testing for than just reading about it on the DxO site. When I go to do the testing, things never go exactly the way I thought they'd go, and I learn something from that. One of the things that I often do is perform tests many times and collect statistics on the results. I don't see that on test web sites very often. Having access to the statistics lets me figure out the accuracy, and even the statistical importance, of a result.
Sure, there are people who spend all their time testing and never make good pictures. They're not new; some of the Zone System acolytes did the same thing. I call it the sharpening pencils syndrome (http://blog.kasson.com/?p=9). But there are others who use their deep knowledge of the technology to make better pictures.
Bob, I'm not trying to convince you to go with my approach. You've got something that works the way you want it to work, and that's perfect for you; I'm just trying to help you understand where I'm coming from.
Jim
One question. Why only the G channel? Why not the entire signal of the entire sensor?

What is the entire signal of the entire sensor? The fact is that you have 4 channels (typically). Sometimes the difference is just because of the CFA, but sometimes the manufacturer can indeed do different things for different sensels based on where under the CFA they are located, etc. So you can average the 4 channels, or use the strongest one (like G1/G2) for typical daylight. The RawDigger website has a forum where you can ask the authors directly; they are quite prompt if a good question is asked.
Why only the G channel? Why not the entire signal of the entire sensor?
Thinking about this testing a bit more, I'm wondering how it relates to the concept of the 'ISO-less' sensor that people talked about, mostly, when the D7000/K5D came out. The thought there was that it made no sense to increase ISO because of the essentially equal drop in drange for each 1 stop increase in ISO and that simply underexposing and pushing in the RAW converter would serve the same purpose. The D800 sensor behaves much the same way yet it appears that it does make sense to increase ISO up to a point. Is there conflict between the two schools of thought?
Thanks. Nice article. Takes a technical concept and explains it such that an engineering degree isn't needed to decipher it. Too rare an occurrence. See the response directly above for the typical bit of bafflegarb. ::)
One question. Why only the G channel? Why not the entire signal of the entire sensor?
What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have? Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?
I don't think so. Above the Unity Gain ISO (or maybe a stop above that to make allowances for imperfect analog-to-digital converters), the camera is effectively ISO-less, and the ISO dial serves mainly for amusement of the photographer and getting the preview image to be bright enough to use for chimping.
Jim
The plot for all 3 (or 4) colour channels is interesting too. I think illuminant type could play a part there. Perhaps, even if the illuminant is rated for a certain colour temperature, if it isn't full-spectrum that could make a difference.
The "Real World" card has been played ;)
The well-respected and knowledgeable Roger Clark seems to feel it has a practical purpose, at least in the comparison of camera performance:
http://www.clarkvision.com/articles/digital.sensor.performance.summary/#unity_gain
Ted the Noob
By the way, figuring out the details of this has dramatically changed the way I work in dim light. I use manual exposure almost all the time where I usually used aperture-preferred with adjustments to the exposure compensation dial. I use the histogram mainly to figure out how much noise I'm going to have to deal with in the file. I leave the exposure the same for many pictures when before I would be constantly changing it. It feels very different to me.
Jim
And the problem with that is.....? I wasn't saying that the concept of unity gain ISO isn't important. But there are a lot of things that get studied in a lab that have little to no practical application (at least at this time). Most of us aren't lab rats, however. Most of us are 'in the field' photographers, so it makes complete sense to understand how a given lab or theoretical test plays out in practical use. As has been laid out over the course of this discussion, there are definitely practical implications. And the findings that Jim has laid out and explained are consistent with other sources considered reliable, which simply lends increased credibility to both. There's also something to be said for being able to explain a technical or theoretical construct and make it more widely understood. Some are unable to do that because they don't really understand the underlying technicalities themselves. Others won't do that because they think it gives them some measure of superiority over others; it makes them feel elite or special.
What is the purpose of knowing the Unity Gain ISO? What practical purpose does it have? Similarly what practical purpose does knowing the full well capacity have? Are we able to use the information to determine exposure on the fly, in the field?
Since all the underlying sensor cells are statistically the same no matter what color filter in the CFA is over them
My response was to your earlier post:
The problem with that, if you must know, is:
A gentleman puts a lot of research and work into a post, which he publishes on what I thought was a forum where there is at least some slight interest in matters technical. Your quote immediately above decries the usefulness of the OP with a series of demeaning rhetorical questions. Your post offered nothing other than the negative implication that the OP is of no practical use. It thereby effectively dismissed the OP as useless.
The "Real World" card was played in the last line " . . to determine exposure on the fly, in the field . . "
Perhaps the problem wasn't what was said, it was how it was said. And, I must confess to a certain sensitivity in the area of this topic. I once calculated the saturation-based "native" ISO of a Sigma DSLR sensor, posted it with formulae and quotes from the ISO standard on a Sigma forum, thinking it might be of interest, and was promptly beaten up for being too technical.
Ted
but the processing by the camera's hardware (including on-chip/off-chip ADCs) and firmware might not be the same
As far as playing the 'real world' card, you still haven't answered my question of why that is unimportant. Do you disagree with my reasons why it is important?
I guess not. I just didn't like those non-rhetorical, non-demeaning questions, when it comes right down to it. Should have kept my opinion to myself, and I'll say no more about it if you won't.
I'll ask you the same question: What practical application does your analysis of the native ISO of a Sigma sensor have? How can photographers use the information in the field to advantage?
I'd never considered that possibility. Do cameras really have differential processing of the three (well, four) color planes of a raw file? If so, how can you find out if that's the case in a given camera? There's one case I have heard of, and that's multiplicative scaling of the values in a given color plane after the ADC, with the evidence being missing codes in that channel. Compression like Sony uses could, I suppose, cover up that evidence, or at least muddy the waters. I don't think such scaling would have much effect on the test results of the algorithm I'm proposing, unless the exposure gets really low, but I haven't done any testing, since I don't have a camera that I know does that kind of thing.
Jim
I can answer without really going OT. I will assume that we are familiar with ISO 12232:2006. It gives several legal methods of determining a value to be shown or selected on a camera. Some of the methods allow the camera considerable latitude in these values, provided the word "equivalent" appears somewhere in the product, perhaps buried deep in the manual. And some manufacturers are said to take liberties even with that, according to fora elsewhere. On the camera itself, of course, we will only see 'ISO', unless anyone knows of a camera which says 'SOS' or 'REI' right there on the LCD or knob or whatever. Thus we are lulled into thinking that a setting of 100 gives exactly the same exposure for any camera (all other things being equal).
It is possible to calculate or, for that matter, test as per the OP to determine a sensor's saturation-based ISO value (Ssat), independently of what the camera says. If you then know (up front) how much over or under your camera is, you can dial in exposure compensation such that the camera 'tells the truth', which would be of practical benefit to those seeking, for example, to ETTR. Or those whose images come out less exposed or more exposed than they would like. Or those who want to push the envelope a bit, hoping to claw back some highlight detail in post.
I would say that it is both useful and practical to know that a camera has an Ssat of, say, 130 when the LCD 'ISO' says 100. Feel free to disagree.
What you're effectively doing, it seems, is verifying the accuracy of the metering system. Or are you using a hand held meter to determine exposure? If the latter, then back to the earlier question of how you're separating ISO from shutter speed and aperture.
How do your figures compare with DxO's? And how would you account for any differences between your numbers and theirs? How are you determining that 'saturation-based ISO'? What is the methodology? While I understand the practical relevance of what you're doing, it still raises many questions. And dressing it up in pseudo-buzzwords like Ssat, or even 'saturation-based ISO', as opposed to something like 'the point at which overexposure occurs' is going to turn some people off. This is back to the point I made yesterday. There is something to be said for being able to explain technical constructs in a way that they are more widely understood. The more people who understand, the better it is for everyone, no? The more people who understand, the more it can foster discussion. Can you explain your methodology in such a way?
No, in most cases you are verifying the rating of the sensor. In assigning an ISO to the sensor, manufacturers allow widely differing amounts of headroom for the highlights, and this is the major variable. Light meters are fairly well standardized according to ISO 2720:1974. My personal experience is with Nikon, and their meters are usually spot on. Otherwise, the use of a hand held meter would give different results from the built in meter. These considerations are discussed in depth in articles by Doug Kerr on his web site.
The DXO saturation standard (http://www.dxomark.com/index.php/About/In-depth-measurements/Measurements/ISO-sensitivity) is straight forward:
Ssat = 78·A²/(q·Lsat·t),
where A is the aperture, q is a constant, Lsat is the luminance in cd/m2 required for saturation, and t is the integration time (shutter speed). On modern digital cameras the apertures and shutter speeds are quite accurate.
Most photographers do not have a photometer to measure the luminance, but as mentioned above, the camera meters are usually within spec and the meter reading can be used with confidence. An exposure according to the saturation standard should yield approximately 12.7% sensor saturation, and this is easy to verify using RawDigger or a similar tool. Why complicate things and throw up all your disclaimers?
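Plugging numbers into that formula is straightforward. A sketch in Python; q = 0.65 is the conventional constant from the standard, and the f-number, luminance, and shutter time here are made up for illustration:

```python
def saturation_iso(n, lsat, t, q=0.65):
    """Saturation-based ISO from the formula above:
    Ssat = 78 * N^2 / (q * Lsat * t)
    with N the f-number, Lsat the saturating luminance in cd/m2,
    t the shutter time in seconds, and q ~ 0.65 (assumed).
    """
    return 78 * n ** 2 / (q * lsat * t)

# Hypothetical: f/8, saturation at 2000 cd/m2, 1/125 s:
print(round(saturation_iso(8.0, 2000.0, 1 / 125)))
```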
Regards,
Bill
Actually not as accurate as you might think. 'Aperture flicker' is a big annoyance to timelapse shooters and it can make a significant difference from shot to shot in a clip.
And yes, I can go and read the articles on the DxO site. But from his explanation I have no idea whether Ted is using the same methodology or not. So I asked. 'q' is a 'constant'? What is it? What constant? What's the figure? It really isn't a true 'constant' though, is it? It's going to vary from lens to lens, right? Where do the T and v values come from in determining q? If I ask why '78'? Or where that number comes from you'll decry that I'm throwing up roadblocks. When in actual fact that's not the case at all. Understanding how a formula is derived or where the inputs come from is as important as just plugging numbers into it.
Do you have any data on this? I do have data for the Nikon D800e using various shutter speeds and a constant aperture of f/8. The coefficient of variation is about 1.0%, indicating a high degree of reproducibility for both the aperture and the shutter speed.
It would be good if you did a bit of reading before asking questions about data that are readily available and understood by photographers with a technical bent. The derivation of 78 is addressed along with other issues in an excellent post (http://en.wikipedia.org/wiki/Film_speed#Saturation-based_speed) on Wikipedia. The factors in determining q are also discussed and they do include an assumed T value for the lens. With TTL metering, the T factor is taken into account. For practical photography, one is really interested in the total system response that includes the lens, sensor, light meter, exposure mechanism of the camera, and the rendering software.
Regards,
Bill
Those "pseudo-buzzwords" come from the ISO standard.
I'm through talking to you, Bob.
At the 16 bit DN of about 16000, the SD is contaminated by PRNU. The observed SD is 156 and the corrected SD is 144. At a 16 bit DN of around 4000, the observed SD is 73.8 and the corrected SD is 72.3.
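Since shot noise and PRNU are independent, they add in quadrature, so the PRNU term (a fixed fraction of the signal) can be subtracted out. A sketch assuming a PRNU of 0.38% of the signal, a figure mentioned elsewhere in this thread for the D800e; it reproduces the corrections quoted above to within rounding:

```python
def remove_prnu(observed_sd, signal_dn, prnu_fraction=0.0038):
    """Subtract the PRNU contribution (assumed 0.38% of signal) in
    quadrature: corrected^2 = observed^2 - (fraction * signal)^2."""
    return (observed_sd ** 2 - (prnu_fraction * signal_dn) ** 2) ** 0.5

# The numbers quoted above: observed SD 156 at a 16-bit DN of ~16000,
# and observed SD 73.8 at a DN of ~4000:
print(round(remove_prnu(156.0, 16000)))   # about 144
print(round(remove_prnu(73.8, 4000), 1))  # about 72.2
```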
Bill,
I started down the road of creating a model in Excel to help me sort this out. It got messy fast, because a few hundred pixels in a simulated test image is not enough to have stable expected values (I'm using the term in the mathematical sense). I've decided that what I should do is create a camera model using a real programming language. I've decided on Matlab. If I do it right, I should be able to extend it to aliasing and demosaicing studies. This is not going to be an afternoon job, so be patient.
One thing I could use is information that would help me model pixel response non-uniformity. I'd be grateful for any pointers you might give me.
The full-well capacity is a pretty darned good indicator of the dynamic range of the camera. It's a nice thing to know when you're trying to decide what camera to buy, or what camera to use for a particular job.
Once you've purchased the camera and are using it, you might use the dynamic range of the camera to determine when you need to use HDR, averaging, or similar techniques to get more shadow detail. You can't do that directly from the full-well capacity, but you could take the log base 2 of the full-well capacity, and subtract 4 to 7 stops (some people say you need 100 electrons for photographic quality, and that's a tad under two to the seventh) to account for the signal-to-noise ratio (SNR) you want in the shadows, and what's left would be the approximate difference, in stops, between the highlights and the shadows-with-detail (Zone II or III).
Here's the graph with a log base 2 vertical axis to make it easy for you to do the math in your head:
(http://www.kasson.com/ll/FWstops.PNG)
This ignores dark noise, read noise, and other things that affect the shadows but not the light tones. It also ignores resolution, and you can decrease noise in an image by rezzing it down. In practice, I've found the D4 and the D800 to give similar noise performance at similar resolutions. If we compute the dynamic range by averaging the photosites to get to 12 megapixels for each camera, we see that, except for the M9, the size of the sensor pretty much determines the dynamic range:
(http://www.kasson.com/ll/FWstops-rescor.PNG)
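The rescaling step is just the square-root law for averaging: each doubling of the number of photosites averaged buys half a stop of SNR. A sketch with hypothetical numbers:

```python
import math

def dr_at_common_resolution(dr_stops, native_mp, target_mp=12):
    """Averaging photosites down to target_mp raises SNR by sqrt(n),
    i.e. half a stop of dynamic range per doubling of averaged pixels."""
    return dr_stops + 0.5 * math.log2(native_mp / target_mp)

# Hypothetical 36 MP sensor with 11 stops of per-pixel DR, rescaled to 12 MP:
print(round(dr_at_common_resolution(11.0, 36), 2))
```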
Jim
The best answer I can give for determining PRNU is that PRNU is proportional to the signal, so it increases along with shot noise as exposure increases.
Have you seen the excellent treatise (http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html) on noise by Emil Martinec? If not, it is worth reading.
BTW, all current sensors seem to have tunable anti-blooming, curving their response inward near saturation.
Anyway, here goes. On the RawDigger site, there's a technique for computing Unity Gain ISO (http://www.rawdigger.com/howtouse/pixel-capacity-and-amplifier-gain). It is basically a search over several exposures made with the camera ISO setting at different places for the ISO setting that, with a flat, relatively bright (but not saturated) compact target rectangle, produces a standard deviation in the (pick a channel) raw values that's the square root of the mean raw value in that channel.
I have often wondered what 'unity' means exactly.
Just out of curiosity, why do you refer to Zones when talking about Raw value ranges?
Hi Jim, very nice. I trust you know that the source of empirical knowledge about noise and the like in DSCs is this excellent treatise by Emil Martinec (http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p2.html#readandshot).
For fun, I applied the theory there to the full SNR curves that can be found at DxO (http://www.dxomark.com). Here is an example based on the D5200:
Bill,
I've added read/dark noise to the model. I've assumed it's Gaussian, although it seems to have longer tails than that. I got the values from some Nikon D4 testing I've done with exposures at 1/30 of a second (therefore not much dark current). I made the exposures using ISOs from 100 to 6400, and, to the first order, the noise all occurs before the pre-ADC amplifiers. The mean value of the noise with an amplifier gain of unity is 1.8 least-significant bits (LSBs) and the standard deviation is 2.7 LSBs. Since I did my testing with the lens cap on, values below zero got clipped, and therefore the real standard deviation is probably higher. I note that observation, but have not tried to correct for it. Because the real read noise tail appears longer than Gaussian and the standard deviation is probably understated, the curve that follows could be optimistic in the higher test ISOs and the darker targets.
Here's the result:
(http://www.kasson.com/ll/D4SimReadNoise.PNG)
You can see that the best compromise exposure is Zone IV (14-bit ADC count of 1000), exactly as you predicted.
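A Monte Carlo sketch of that model, using the numbers from the post (1.8 LSB mean, 2.7 LSB standard deviation, clipping at zero); the Gaussian stand-in for Poisson shot noise is a good approximation at these electron counts:

```python
import math, random, statistics

random.seed(1)

def simulate_patch(mean_e, gain, read_mu=1.8, read_sd=2.7, n=20000):
    """Flat patch: Gaussian-approximated shot noise in electrons,
    amplified by gain (DN/e-), plus Gaussian read noise in LSBs,
    clipped at zero like the real ADC output."""
    dn = [max(0.0, gain * random.gauss(mean_e, math.sqrt(mean_e))
              + random.gauss(read_mu, read_sd)) for _ in range(n)]
    return statistics.mean(dn), statistics.stdev(dn)

# At 1000 e- and unity gain the ~31.6 DN shot noise swamps the
# 2.7 DN read noise (they add in quadrature):
m, s = simulate_patch(1000, 1.0)
print(round(m), round(s, 1))
```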
Jim
I have often wondered what 'unity' means exactly. I understand it in principle (input 100 apples, count 100), but does it really apply to the statistical nature of light, where integers are replaced by means and standard deviations with many decimal points?
So what's the input: photons or electrons? Most of the relevant posts in this thread assume electrons. Why? From my D800e chart above it takes about 6.4 photons on top of the sensor to generate an electron. That's not an integer to start with. And what happens if I only get 6 photons? Do I get an electron? I believe I in fact get 0.94 electrons, which with a little dithering I can store pretty accurately in the Raw data.
Any thoughts?
Jack
Here is a link (http://www.clarkvision.com/articles/digital.sensor.performance.summary/#unity_gain) to what Roger Clark thinks about unity gain.
Regards,
Bill
Jack, I stand on the shoulders of others, and Emil has big shoulders.
It turns out that you can take the same data and compute full-well capacity, if you assume that the well fills as the ADC output approaches full-scale at base ISO. The full-well capacity should be proportional to photosite area, all else being equal. With the CCD-based Leica M9 in the mix, all else is not equal:
(http://www.kasson.com/ll/Fullwell.PNG)
Jim
So what's the input: photons or electrons? Most of the relevant posts in this thread assume electrons. Why?
From my D800e chart above it takes about 6.4 photons on top of the sensor to generate an electron. That's not an integer to start with. And what happens if I only get 6 photons? Do I get an electron?
Here is a link to what Roger Clark thinks about unity gain.
I'm not sure how you took PRNU into account. In my testing with the D800e it represents about 0.38% of the signal output.
Looking at your curves and some of my own data makes me want to extend my model to have two sources of read noise.
The first one is before the amplifier, measured in electrons. That noise would be specified as the characteristics of some probability density function. If I continue to use a Gaussian function, as I will for at least a while, the parameters will be mean and standard deviation...
The second one would be after the amplifier, measured in LSBs, probably represented by the mean and standard deviation of a Gaussian generator, and probably not quantized to integers, because I see it as an analog signal injected into the ADC input.
One way to get these values would be to measure the noise at a bunch of ISOs and fit a straight line to the data. For some reason, I like that better than just having a different mean and standard deviation for each ISO, although the cruder technique is probably more accurate, since it allows for unmodeled mechanisms.
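The straight-line fit works because the two noise sources add in quadrature: sigma_DN^2 = g^2 * sigma_pre^2 + sigma_post^2, which is linear in g^2. A sketch with noise-free synthetic data (real measurements would scatter about the line):

```python
def split_read_noise(gains, sigmas_dn):
    """Fit sigma^2 = slope * g^2 + intercept by least squares; the
    pre-amp noise (e-) is sqrt(slope) and the post-amp noise (LSB)
    is sqrt(intercept)."""
    xs = [g * g for g in gains]
    ys = [s * s for s in sigmas_dn]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope ** 0.5, (my - slope * mx) ** 0.5

# Synthetic camera: 3 e- pre-amp noise, 2 LSB post-amp noise, gains
# spanning a 4-stop ISO range (all numbers hypothetical):
gains = [0.25, 0.5, 1.0, 2.0, 4.0]
sigmas = [((g * 3) ** 2 + 2 ** 2) ** 0.5 for g in gains]
pre, post = split_read_noise(gains, sigmas)
print(round(pre, 2), round(post, 2))  # recovers 3.0 and 2.0
```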
What I wanted when I fastened on the Zone nomenclature was a term that indicated the digitized value in a raw file independent of the number of bits in the ADC and the gain of the amplifier. I also wanted it to be logarithmic, with a base of two, because we photographers think in terms of stops. Another possibility would be mean digitized value in stops below full scale, but around here I get some glazed eyes when I say that.
I am open to suggestions.
Ok Jim, Bill, Edmund, BJL, Ted, Joofa and everybody else - what Roger Clark says offers a good example of what I am asking:
"Since 1 electron (1 converted photon) is the smallest quantum that makes sense to digitize, there is little point in increasing ISO above the Unity Gain ISO (small gains may be realized due to quantization effects, but as ISO is increased, dynamic range decreases)"
1. One electron is not one converted photon...
2. Even if it were, isn't he assuming that the electron is either there or not there when he says that it is "the smallest quantum that makes sense to digitize"? Doesn't quantum mechanics work in double precision floating point internally ;)?
3. I can think of many ways to determine when to stop increasing ISO, but all of them include some measure of read noise, which I do not see here.
Please correct me if I am reading this wrong but, intuitively, an electron is just the equivalent of a convenient SI unit. However it means nothing by itself: we cannot tell if that one electron is the signal (a mean) or noise. In order to decode the signal we need a larger sample. Because there will be (random) noise, we also get dithering. Which, depending on sample and noise size, may allow us to determine the 'signal' to many significant digits, for example 0.15 electrons. So does it make sense to speak of 1 electron (or was it photon?) as "the smallest quantum that it makes sense to digitize"?
Jack
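Jack's dithering point is easy to demonstrate: individual photosites report integer counts, yet the mean over a uniform patch recovers a fractional signal. A sketch using Knuth's Poisson sampler, with an assumed true signal of 0.15 e-:

```python
import math, random, statistics

random.seed(3)

def poisson(lam):
    """Knuth's Poisson sampler; fine for small means like this one."""
    l, k, p = math.exp(-lam), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

# 100,000 photosites, each reporting an integer electron count:
counts = [poisson(0.15) for _ in range(100000)]
print(round(statistics.mean(counts), 2))  # recovers ~0.15
```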
The 'Unity Gain' concept is quite artificial and therefore, with due respect, I think that the introduction of noise as a consideration is both confusing and unnecessary.
For fun, I applied the theory there to the full SNR curves that can be found at DxO (http://www.dxomark.com).
So one way to specify the signal in log base 2 fashion familiar to photographers would be as a function of stops from clipping. Another way I've seen it (as in RawDigger) is to assign 0 EV to 12.5% (as per the relative ISO Saturation Speed standard). This way saturation is +3 EV and everything else falls into place.
Something seems wrong here: the M9 sensor should have roughly the same 60,000 electron full well capacity as other Kodak full frame type sensors with 6.8 micron pixels, like the KAF-31600: http://www.truesenseimaging.com/all/download/file?fid=11.62
so that 30,000 seems too low...
Jack, one thing you might do with your curves to make the departures from ideality (Is that a word? My spell-checker doesn't think so.) more obvious is subtract out the 3 dB/octave (or half a stop per stop, if you use log base two axes in both directions like I do) contribution of the photon noise. I wish DxO did this; it would make it easier to see what's going on in the camera, rather than reproducing the physics of photon noise over and over.
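In code, the correction is a one-liner: divide the measured SNR by the sqrt(signal) SNR of an ideal photon-noise-limited sensor and express the ratio in stops. The numbers below are made up for illustration:

```python
import math

def departure_from_ideal_stops(signal_e, measured_snr):
    """log2 of measured SNR over the ideal sqrt(signal) SNR; this
    flattens the half-stop-per-stop shot-noise slope so only the
    camera's own departures remain."""
    return math.log2(measured_snr / math.sqrt(signal_e))

# Hypothetical: 10,000 e- signal, measured SNR 95 instead of the ideal 100:
print(round(departure_from_ideal_stops(10000, 95.0), 3))
```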
Here's an example of a similar, but not identical, set of curves, this one showing noise as a function of ISO with the ADC count held constant.
(http://www.kasson.com/ll/D800ESNRvsISO.PNG)
This shows small departures from ideal performance that would be invisible without subtracting out the effects of the photon noise.
This shows nicely the effect of read noise at a constant raw value (of 1000, I assume) as the ISO is increased. Is this the objective of the graph?
However, if you are using DxO's curves as the source of data you may have difficulty showing deviation from 'normal' because I understand that they do some curve fittin'
Jim, did I understand correctly from a previous post of yours that you are the inventor of Subtractive Dithering? That is one bloody brilliant idea!
Hi Ted,

To be pedantic, one photon = one or no electron. 380nm vs. 760nm makes no difference, other than 380nm has more energy and is more likely to whack an electron. However, there are devices that can produce more electrons than photons, such as photo-multipliers and those night-vision thingies (not IR) much beloved by the military.
Ok, good point about energy levels: so you say one photon = one electron? Does it make a difference as to what the energy of the photon in question is (i.e. 380nm, vs 760nm light) as far as the number of electrons generated?
I am all for keeping things simple, however noise is a pretty key element of this discussion, as I hope to show below. I am thinking about the noise that is always inherently present in light, sometimes referred to as shot noise because its distribution is similar to the arrival statistics of shot from a shotgun, which was characterized by a gentleman by the name of Poisson.
So now we can address the integer nature of electrons. Let's assume that each photosite is a perfect electron counter with Unity Gain: if 10 electrons are generated by the photosite, the ADC stores a count of 10 in the Raw data with no errors. Example 1: the sensor is exposed at a known luminous exposure and the output of the photosite in question is found to result in a Raw value of 2. What is the signal?
We cannot tell by looking at just one photosite. The signal could easily be 1, 2, 3, 4, 5, 6... For instance, if it were 4, shot noise would be 2, and a value of 2 is only a standard deviation away. To know what the signal is, we need to take a uniform sample of neighbouring photosites, say a 4x4 matrix*. We gather statistics from the Raw values from each photosite and compute a mean (the signal) and a standard deviation (the noise). In this example it turns out that the signal was 1 electron with a standard deviation/noise of 1. Interestingly, the human visual system works more or less the same way.
Example 2: a new exposure resulting in a signal of 7 electrons for each photosite in the 4x4 matrix on our sensor. Of course each does not get exactly 7 electrons because photons arrive randomly, and in fact we know thanks to M. Poisson that the mean of the values in our 4x4 matrix should indeed be 7 but with a standard deviation of 2.646 - so some photosites will generate a value of 7 but many will also generate ...2,3,4,5,6,8,9,10,11,12.... The signal is the mean of these values.
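Both examples are easy to reproduce numerically. Here's a minimal simulation (Python with NumPy; the seed and patch sizes are arbitrary) of a uniformly lit patch of perfect Unity Gain electron counters:

```python
import numpy as np

rng = np.random.default_rng(42)
mean_signal_e = 7   # the true signal in electrons, as in Example 2

# A 4x4 patch of perfect Unity Gain photosites: raw value == electron count.
# With only 16 samples the estimates are rough.
patch = rng.poisson(mean_signal_e, size=(4, 4))
print("4x4 mean:", patch.mean(), "std:", round(patch.std(ddof=1), 3))

# A 200x200 selection (the patch size used with RawDigger elsewhere in
# this thread) converges on the true values: mean -> 7, std -> sqrt(7) = 2.646.
big = rng.poisson(mean_signal_e, size=(200, 200))
print("200x200 mean:", round(big.mean(), 2), "std:", round(big.std(ddof=1), 3))
```

The 4x4 run shows why a tiny sample gives a noisy estimate of both signal and noise; the larger selection is what makes the Poisson statistics visible.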
Example 3: Different exposure. Say we look at our 4x4 matrix of Raw values and end up with a mean of 12.30 and a standard deviation of 3.50.
Using the Effective Absolute QE for the D800e above (15.5%) and ignoring for the sake of simplicity Joofa's most excellent point above, could we say that this outcome resulted from exposing each photosite to a mean of 12.3/0.155 ≈ 79.35 photons? After all, this number of photons is a mean itself.
Yes, shot noise and sensor thermal noise exist, of course, but they are earlier in the signal chain than the sensor output and are not, therefore, part of the definition.
...what I am trying to understand is whether Unity is just a figment of our imagination because integers are warm and fuzzy and allow us to calculate certain things easily, or whether it is grounded in something deeper...
Adding noise messes things up. You could say that electrical noise smaller than one electron's worth of voltage referred to the input of the amplifier creates a dither signal that allows resolution of non-integer mean electron counts as long as the noise is at least comparable to 1 LSB, and you'd be right. So I take this Unity Gain thing with a grain of salt. I figure when the ISO knob on the camera gets a stop or two past what is necessary to get Unity Gain, then I should stop twisting it unless I have a good reason.
Also, dithering is not without cost, if we can't remove it later, and it's not clear that we can in this case. The noise removal tools in Lightroom and fancy plugins seem magical sometimes, but they do trade off resolution for noise reduction.
When we get to cameras that oversample the highest frequencies the lens can produce and we're using many camera pixels for every pixel we send the printer, then maybe Unity Gain ISO will have outlived its usefulness.
Does that help?
I hear you, but that's a pretty narrow definition, as is the definition of signal as the output of a single photosite, or charge collection efficiency as wavelength independent (http://en.wikipedia.org/wiki/Responsivity) - that's not the way that things work in the real world and that definition doesn't help to answer the common place that I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says? Or is there more of an articulated answer once Information Science is brought to bear? I have no answers, I am just curious.
To help us converse, let me define the signal as the mean Raw value from a 4x4 sensel matrix on our sensor illuminated uniformly by an exposure in lx-s that would not break down into an integer number of photons. The sort of mean signal that you would read off of a 4x4 sampling area in RawDigger (http://www.rawdigger.com) and trace back to electrons and photons.
I am open to suggestions but my gut says that it is more complex than Mr Clark suggests, because light, electrons and the human visual system are stochastic systems, based on statistics. Noise is an integral part of them at every level. In the light itself, in the sensor, in the ADC in our visual and processing system. If it's not there it gets injected (Jim inserted noise in the ADC so that he would have dithering and then figured out how to take it back out later - brilliant stuff, check the text around the bottom figure of this page (http://www.dspguide.com/ch3/1.htm)).
As to your question, nobody is challenging the fact that light comes in quanta. Physics settled that a century ago and we all agree on it. I may be completely off base but what I am trying to understand is whether Unity is just a figment of our imagination because integers are warm and fuzzy and allow us to calculate certain things easily, or whether it is grounded in something deeper.
With appropriately sized noise dithering we are able to record integer values that allow us to represent electrons and fractions thereof in a stochastic process. What does unity mean in this context? If instead of 1:1 it is 0.5:1, what's the real difference when the signal is a noise dithered 7.2 ADU/electrons/photons? Isn't this the same as going from ISO 400 to 200 with an ISOless sensor while keeping exposure the same? Half the Raw values but same information captured?
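The dithering claim can be checked with a toy simulation (a sketch in Python/NumPy; the noise level and sample count are arbitrary). Integer quantization alone destroys the fractional part of the mean, but with about 1 LSB of noise acting as dither it is recoverable from the integer codes:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 7.2    # a non-integer signal, in ADU-equivalents
read_noise = 1.0   # Gaussian noise of ~1 LSB acting as dither

# Without noise every sample quantizes to 7: the .2 is simply gone.
no_dither = np.round(np.full(10, true_mean)).mean()
print(no_dither)                      # 7.0

# With ~1 LSB of dither, the mean of the integer codes recovers it.
samples = np.round(true_mean + rng.normal(0.0, read_noise, 100_000))
print(round(samples.mean(), 2))       # ≈ 7.2
```

This is the same mechanism whether the "7.2" is ADUs, electrons, or photons, which is why a gain of 0.5:1 instead of 1:1 need not lose information when the noise is adequate.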
Perhaps Jim could give us a hand?
. . . I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says?
No, we could not. We can calculate a mean and SD for discrete items such as the number of electrons in a number of photosites, and we can indeed assign fractional values to the said mean and SD. But, sorry, we cannot turn the equation around and come up with something like 79.35 photons. All that figure tells you is that it is perhaps more likely that there were 79 photons than 80 photons. You cannot have a fractional number of photons. Physically impossible.
I find it disturbing that fractional photons are still being mentioned. There can be no such thing. If this basic fact about the nature of light is not understood, then nothing else can be accepted or understood and, with all due respect, our discussion would be at an end.
So, time for a question of my own:
Is it the opinion of your goodself, or indeed of this forum, that fractional photons can exist?
I hear you, but that's a pretty narrow definition, as is the definition of signal as the output of a single photosite, or charge collection efficiency as wavelength independent (http://en.wikipedia.org/wiki/Responsivity) - that's not the way that things work in the real world and that definition doesn't help to answer the common place that I am trying to get to the bottom of: is "one electron the smallest quantum that makes sense to digitize" as Mr. Clark says? Or is there more of an articulated answer once Information Science is brought to bear? I have no answers, I am just curious.
Hi Jack,
Each single (one) electron is the only thing that matters once collected (on a per sensel collection area). It helps to quantize it before we can do something useful with it. When the gain is such that 1 or 2 (or more) electrons will produce the same ADU, it will not be unity gain. When each single captured electron produces a different ADU, unity gain is in effect, and the bit depth of the ADC is optimally utilized. The fact that there may be more electrons from other processes (electronic noise, dark current, and what have you) doesn't change the importance of being able to quantize each and every EXPOSURE related electron with enough accuracy. The electronic noise and such can be mostly eliminated by taking multiple samples and averaging them, which leaves the exposure signal itself (and its Poisson noise distribution), which is what interests photographers most.
Cheers,
Bart
* You can tell what next question this portends, right ;-)?
Bart, Ted and Jim, very helpful, thank you for indulging me with a less than intuitive subject - I am sure Mr. Clark is a very smart man and I may be completely off base here but I've been thinking about this for a while, meandering a bit without being able to phrase the question properly. In a nutshell, are we not making the same mistake as those who think that engineering DR is equal to bit depth, relating a bit to a doubling of signal instead of to a basic unit of information? As I said, my information science is weak :)
The intuitive answer is, yes you are degrading IQ - because now you have compressed your original target signal range, using only a quarter of the bits as before to record it, so you are losing resolution in the gradations. But the real world answer is no, we are not degrading IQ perceptibly, because dithering occurs as a result of noise unavoidably present in the system (http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#bitdepth).
We are able to record virtually the same information as in the first case plus some more.
[Addition: if I knew the internal organization of the amps and ADCs in the D800E, I could craft a selection that only encompassed one of each. Anyone know enough to tell me how to do that?]
I can help with that. Hope that helps.
Thus, analysis will be difficult because it's a 'black box' thingy with all those amps, ADCs and more inside.
To paraphrase Slick Willy "it all depends what the meaning of 'up' is . ."
I don't know if I've gotten to a single amp and ADC in either case, but if I have, the following is a reasonable interpretation. There are peaks separated by about 20 counts, as you would expect from an ISO about 20 times the unity gain ISO. There appears to be noise after the amp and before the ADC with a standard deviation of about 6 LSBs. However, it also looks like the ADC is functioning effectively as a 12-bit device. What's up with that?
You can see that there are a lot of missing codes. In fact, there are typically three empty codes between each occupied one.
If the gears in your head are turning and you're thinking, "Yes, but with a unity gain ISO of around 300, at ISO 6400 the 'gain' is about 20, and there should be more missing codes than that," well, you're right, but consider the limitations of the experiment. With this test, I am probably using all the amplifiers and maybe all the ADCs, all of which have slightly different characteristics. If I could test one amp and one ADC, I'd probably get bigger gaps in the histogram.
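For what it's worth, the gap-hunting can be automated. Here's a sketch (Python/NumPy; the simulated data are made up, not from a real camera) that estimates the step between occupied codes as the GCD of the differences between them:

```python
from functools import reduce
from math import gcd

import numpy as np

def code_step(raw_values):
    """Estimate the spacing between occupied raw codes. A step of 4
    means two low bits are effectively unused (e.g. a 12-bit ADC's
    output written into a 14-bit file, or a digital gain of 4x)."""
    codes = np.unique(np.asarray(raw_values, dtype=np.int64))
    return reduce(gcd, np.diff(codes).tolist())

# Simulated data: 14-bit codes that were really digitized at 12 bits
# and shifted left two places, so only every 4th code is occupied.
rng = np.random.default_rng(0)
fake_raw = rng.integers(200, 300, 10_000) << 2
print(code_step(fake_raw))   # → 4
```

Run on a flat 200x200 crop per channel, this would flag the combing without eyeballing histograms, though it can't separate multiple amps/ADCs any better than the histogram can.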
We are able to record virtually the same information as in the first case plus some more.
I would phrase it, "We are able to record virtually the same information, plus some more noise, and not as much of either as it looks like we are recording from the broad-brush histogram (because of the missing codes we can't see when we're looking at the whole histogram)."
Jim
I don't know if I've gotten to a single amp and ADC in either case, but if I have, the following is a reasonable interpretation. There are peaks separated by about 20 counts, as you would expect from an ISO about 20 times the unity gain ISO. There appears to be noise after the amp and before the ADC with a standard deviation of about 6 LSBs. However, it also looks like the ADC is functioning effectively as a 12-bit device. What's up with that?

Still exactly the same values show (and don't show) up with the same interval. Looks like about 4x digital amplification after the ADC, which would make it function like a 12-bit one.
The second reason is to have enough range in your raw processor to boost the "Exposure" in post. Lightroom only offers 5 stops.
Earlier, out of interest, I calculated the D800 Unity Gain ISO using Clark's method. For a 12-bit in-camera setting it comes to 666 (!). For a 14-bit it comes to 108 which is quite interesting. So, back to 666: 20 times brings us to ISO 13,320, not 6,400. I'm probably obfuscating the difference between theory and practice, so feel free to ignore this post.
PS I am sure that you've seen this post about NEF information compression (http://www.openphotographyforums.com/forums/showthread.php?t=5499)
Ted, I get what Jim suggested, about 320 ISO at 14 bits. I get that by taking a FWC of roughly 52,600, dividing by the number of ADUs at full scale (16,382 for the 'e') and multiplying by 100. The method you outlined does exactly the same but with a few redundant operations: reread your post, the answer is always n*100 :)

Thanks, I didn't know the FWC so I used a value from a similar camera, which I shouldn't have, tsk.
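That n*100 arithmetic, written out as a throwaway helper (the FWC and full-scale figures below are the ones quoted in this thread; the linear-gain-with-ISO assumption is the usual simplification):

```python
def unity_gain_iso(full_well_e, full_scale_dn, base_iso=100):
    """ISO at which one electron maps to one DN, assuming the gain
    in e-/DN scales inversely with the ISO setting from base ISO,
    where the full well just reaches full scale."""
    return base_iso * full_well_e / full_scale_dn

# D800E figures quoted above: FWC ~52,600 e-, 14-bit full scale 16,382
print(round(unity_gain_iso(52600, 16382)))   # → 321
```

Same answer as Jim's ISO 320 estimate from the photon-transfer graph, which is reassuring.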
I think that a definitive answer will need some more sophisticated analysis than histograms. Does anyone know a good way to turn raw files into 4 monochromatic TIFFs, preferably in batches? I'm not excited about using DCRAW. ImagesPlus?

ImageJ does lots of things that I've never used and don't understand.
Jack, even if I screwed up and turned lossy compression on, it shouldn't cause this effect, since the signal level is about 5 stops below full scale.

Of course. I meant that you only need 600 or so codes to represent fully information from a 12-bit ADC with 4095 at full scale. Lots of free levels there and nobody's noticing. Apparently Nikon also does WB preconditioning, that's why you get gaps in the Raw data even in the green channel at ISO 100.
Still exactly the same values show (and don't show) up with the same interval. Looks like about 4x digital amplification after the ADC, which would make it function like a 12 bit one.
I got interested in the missing codes, and evaluated some exposures made with five different cameras under the same conditions. The subject was the back of a lens cap. The exposure was 1/4 sec. The aperture was all the way stopped down, just to make sure. The ISO was 6400. The exposures were the first after the camera had "rested" a bit, although I'd seen not much in the way of thermal effects previously. I selected a central area 200x200 pixels, and ran a histogram of the region between a count of 4 and one of 100.
I used dark noise as a way to make sure that the always-on-no-matter-what-you-do tone curve compression of the Sony cameras wouldn't come into play.
First, let's look at the two cameras that behaved as expected. The first of those is the Nikon D4.
A few missing codes in the red and blue channels, which could well be due to ADC defects, or also digital gain applied to the red and blue channels. I'm thinking it's digital gain, because the noise appears to be higher in the red and blue, and I can't think of a reason why that would happen.
The Sony RX-1 shows an interesting result:
...I'm at a loss here.
Next up, the Sony NEX-7:
Wow! We're seeing steps of about 16 LSBs between occupied buckets, but it's not exactly 16 LSBs, and it's not just as if the lower four bits have been lopped off.
Finally, the Nikon D800E:
This is interesting, because, although the channels are mostly looking like they'd been digitized with a 12-bit ADC, there are many places where adjacent 14-bit codes are occupied, indicating that the comblike nature of the histogram is not the result of any planned processing.
Maybe this is an invalid test because dark noise may be patterned, but I think there's something worth exploring here. The results for the M9 and D4 indicate that some cameras respond to this test in a way that you'd expect them to.
I'm not sure where to go from here, but I'm going to pursue this. It could have serious implications for what Unity Gain ISO should mean to photographers using the D800E and the two Sonys.

What about for the D4 and the M9? Weren't you surprised that there were no systematic gaps proportional to gain beyond Unity Gain there? For instance, according to sensorgen, there are 9 times as many linear Raw values as electrons in the D4 at ISO 6400, suggesting a unity gain of around ISO 700. Where are the gaps?
Jim
Making the column 1000 pixels high shows no periodicity in the histogram. I think that a definitive answer will need some more sophisticated analysis than histograms. Does anyone know a good way to turn raw files into 4 monochromatic TIFFs, preferably in batches? I'm not excited about using DCRAW. ImagesPlus?
Jim
Iris (http://www.astrosurf.com/buil/us/iris/iris.htm) is a freeware astronomical program that can do the job... ImagesPlus does have batch modes. It splits the CFA into three monochrome images, with the green channels combined...
What about for the D4 and the M9? Weren't you surprised that there were no systematic gaps proportional to gain beyond Unity Gain there? For instance, according to sensorgen, there are 9 times as many linear Raw values as electrons in the D4 at ISO 6400, suggesting a unity gain of around ISO 700. Where are the gaps?
I think that a definitive answer will need some more sophisticated analysis than histograms. Does anyone know a good way to turn raw files into 4 monochromatic TIFFs, preferably in batches? I'm not excited about using DCRAW. ImagesPlus?
There are several software packages that can be of use, but one could use a combination of DCRaw to produce a non-demosaiced output file, and use ImageJ (http://imagej.nih.gov/ij/index.html) to do some image calculations (it also allows subtracting image pairs).
And absent the gaps or with gaps that have nothing to do with it, what is the usefulness of Unity Gain?
Jack, there aren't any photon-created electrons in the test image; the inside of the camera was as near to a photon-free zone as I could make it. I think we're just looking at electrical noise, exacerbated by turning up the gain with the ISO control. You could consider the signal to be zero, and that bucket is so full that I had to cut it off the histogram so that you could see the other buckets. I could have used the log scale for the y axis, but sometimes that's confusing to people.
So, I'm not surprised. I'll be working on some images today with real photons, and not very many of them. Maybe we'll see gaps in the D4, but I'm betting the M9 noise will prevent that.
Also, I don't think we need to have SNRs greater than one in the deep shadows (a necessary condition for gaps) for Unity Gain ISO to make sense. But let's wait on that until I figure out what's going on with the D800E and the two Sonys.
Thanks for getting me started on this.
Jim
I looked at DCRAW, and went to a couple of the websites that offered compiled code, and was put off by the extra stuff that seemed to come with it. I don't want to buy a C compiler, and I'm -- probably unjustifiably so -- scared of using a free compiler that I don't know for sure is without side effects.
Call me chicken.
Jack, no matter what you think about the dithering effect of noise in the system, we do agree on one thing: there's a point past which you don't improve the SNR by turning up the ISO knob. If you think that noise dithering swamps out the theoretical gain in SNR of Unity Gain ISO, then you think that point comes sooner (at a lower ISO) than it would if you bought the whole Unity Gain ISO worldview. But either way, you stop twisting the knob.
Right?
Thanks, Bart, but I think I'm going to use Matlab for the image processing once I get the raw image planes. Not that it will be better than a specialized image processing program, but it's a program that I know (reasonably) well. I can't create those incredibly dense APL-like constructions, so real Matlab experts probably would look down on me, but the trade-off is that anyone with C++/Java experience can make sense of my code.
I looked at DCRAW, and went to a couple of the websites that offered compiled code, and was put off by the extra stuff that seemed to come with it. I don't want to buy a C compiler, and I'm -- probably unjustifiably so -- scared of using a free compiler that I don't know for sure is without side effects.
Call me chicken.
Jim
Still doing that, but while Googling I found this link, which is highly relevant to high ISO; in particular, the answer by jrista addresses the subject of unity gain very well and is quite relevant to this discussion. IMHO.
http://photo.stackexchange.com/questions/29675/what-is-so-special-about-iso-1600
Gain is the conversion ratio of electrons (e-) to digital units (DU). A camera that converts exactly one e- to one DU has "unity gain".
Most cameras achieve unity gain at some exact (but possibly non-selectable) ISO setting.
More frequently, gain is fractional, such as 5.7 e- to every DU. For every stop increase in ISO, gain drops by the same factor.
If you have a gain of 5.7 e-/DU at ISO 100, you would have 2.85 e-/DU at ISO 200, 1.425 e-/DU at ISO 400, .7125 e-/DU at ISO 800, and 0.35625 e-/DU at ISO 1600.
As you increase ISO, you lose signal to noise ratio (S/N). A lower S/N is never really a good thing...
Hi Ted, I believe this is the paragraph that you are referring to:

Quote: Gain is the conversion ratio of electrons (e-) to digital units (DU). A camera that converts exactly one e- to one DU has "unity gain".
Most cameras achieve unity gain at some exact (but possibly non-selectable) ISO setting.
More frequently, gain is fractional, such as 5.7 e- to every DU. For every stop increase in ISO, gain drops by the same factor.
If you have a gain of 5.7 e-/DU at ISO 100, you would have 2.85 e-/DU at ISO 200, 1.425 e-/DU at ISO 400, .7125 e-/DU at ISO 800, and 0.35625 e-/DU at ISO 1600.
As you increase ISO, you lose signal to noise ratio (S/N). A lower S/N is never really a good thing...
Alright, here is an example. We are at base ISO in manual mode and Exposure is already as big as [we care to set] according to dof and blur constraints. We take a test shot, look at our (fictitious) Raw histogram and realize that the brightest highlight we would like to keep is still two stops below clipping. Do we increase ISO?
If one were to follow that quote above blindly one would not. And they'd be leaving IQ at the scene. Because for a fixed exposure, when you raise ISO the input referred read noise usually goes down (except in ISOless cameras) improving the shadows' SNR and overall DR - but only up to a point. The key is determining the ISO past which the IQ no longer improves, because if you increase it past that point then all you are doing is waste space and possibly blow more highlights.
I believe that point is determined mainly by the physical characteristics of the analog 'amplification' in the conversion chain.
Unity Gain exponents introduce an additional element unrelated to noise and suggest that, in any case, one should stop raising ISO when the chain produces 1 ADU for each electron from the sensor. I don't quite understand why that should be a limit, probably because I am missing a piece of the puzzle.
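To make that trade-off concrete, here is a toy model (a sketch; all figures are illustrative, not measurements of any real camera) of shadow SNR for a fixed exposure as ISO rises. The output-referred read noise stays roughly constant in DN, so referred back to electrons it shrinks as the gain drops, and SNR improves - but the improvement flattens once shot noise dominates:

```python
import math

def shadow_snr(signal_e, read_noise_dn, gain_e_per_dn):
    """SNR of a fixed shadow exposure: shot noise in quadrature with
    the read noise referred back to the input (DN times gain)."""
    read_e = read_noise_dn * gain_e_per_dn
    return signal_e / math.sqrt(signal_e + read_e ** 2)

# Hypothetical camera: ~3 DN of output-referred read noise at every ISO,
# gain of 5.7 e-/DN at ISO 100, halving per stop. All figures made up.
signal_e = 40   # electrons collected in a deep shadow; exposure is fixed
for iso in (100, 200, 400, 800, 1600, 3200):
    gain = 5.7 * 100 / iso
    print(iso, round(shadow_snr(signal_e, 3.0, gain), 2))
```

The printed column shows big gains over the first few stops and nearly nothing at the top - which is the "point past which the IQ no longer improves" in numbers, and it falls out of the read-noise behaviour, not out of where the gain crosses 1 e-/DN.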
I would say we increase ISO from base - simply because the scene is currently under-exposed by the said two stops.
I believe that point is determined mainly by the physical characteristics of the analog 'amplification' in the conversion chain*. Unity Gain exponents introduce an additional element unrelated to noise and suggest that in any case one should stop raising ISO when the chain produces 1 ADU for each electron from the sensor. I don't quite understand why that should be a limit, probably because I am missing a piece of the puzzle.
Jack
*PS Bill Claff has developed a marvellous chart http://home.comcast.net/~NikonD70/Charts/PDR_Shadow.htm#D800E,EOS 5D Mark III (http://home.comcast.net/~NikonD70/Charts/PDR_Shadow.htm#D800E,EOS 5D Mark III) that shows graphically the improvement that one can expect from raising ISO in such situations for various cameras.
I am going to stop you right here: we are not underexposed, because Exposure (http://en.wikipedia.org/wiki/Exposure_value#Formal_definition) (determined solely by shutter speed and f/number) is as good as it gets and fixed.
Some in this thread have suggested that the unity gain is useful in determining this point of diminishing returns where increasing the ISO no longer decreases the read noise. However, unity gain has little to do with read noise. Many of the newer cameras using the Sony Exmor sensors are ISO less in that read noise does not vary with ISO and one does not need to bother with increasing the ISO on the camera, but can merely increase exposure in the raw converter. Unity gain has little utility with these cameras.
I determined unity gain for my D800e by a brute force, using Roger Clark's method (http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html) for sensor analysis. For details, see Roger's post, but in brief, I took duplicate images at various ISOs of a uniform target illuminated by daylight coming in from a south facing window on a clear cloudless day where the illumination did not vary. I exposed at 1 EV over the meter reading get data where read noise and dark noise do not contribute significantly to the total noise, using a 300 mm lens at f/8 to reduce light falloff. I used ImagesPlus to isolate the 200x200 central area of the image and extract the green channel, and then subtracted duplicate images to obtain the photon noise. The number of captured photoelectrons equals the square of the signal:noise and one can determine the gain by dividing the number of electrons by the 14 bit data number. The results are shown. The unity gain can be estimated by the graph and occurs at about ISO 320.
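That procedure can be sketched in a few lines (Python/NumPy; the synthetic frames below just verify the arithmetic, using a made-up gain of 3.2 e-/DN rather than real camera data):

```python
import numpy as np

def gain_from_pair(frame_a, frame_b):
    """Photon-transfer gain estimate from two identical exposures.
    Subtracting the frames cancels fixed pattern; the difference's
    std is sqrt(2) times the per-frame temporal (photon) noise.
    electrons = (S/N)^2, so gain in e-/DN = electrons / mean DN."""
    a, b = frame_a.astype(float), frame_b.astype(float)
    signal_dn = (a.mean() + b.mean()) / 2
    noise_dn = (a - b).std(ddof=1) / np.sqrt(2)
    return (signal_dn / noise_dn) ** 2 / signal_dn

# Synthetic sanity check with a made-up gain of 3.2 e-/DN and a
# 4000 e- mean signal; the estimate should land close to 3.2.
rng = np.random.default_rng(7)
photons = rng.poisson(4000, size=(200, 200, 2))
frames = photons / 3.2   # convert electrons to DN
print(round(gain_from_pair(frames[..., 0], frames[..., 1]), 2))
```

Repeat per ISO on real duplicate captures (green channel, central 200x200 crop, as described) and the unity gain ISO is wherever the estimated gain crosses 1 e-/DN.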
I determined unity gain for my D800e by a brute force, using Roger Clark's method (http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html) for sensor analysis. For details, see Roger's post, but in brief, I took duplicate images at various ISOs of a uniform target illuminated by daylight coming in from a south facing window on a clear cloudless day where the illumination did not vary. I exposed at 1 EV over the meter reading get data where read noise and dark noise do not contribute significantly to the total noise, using a 300 mm lens at f/8 to reduce light falloff. I used ImagesPlus to isolate the 200x200 central area of the image and extract the green channel, and then subtracted duplicate images to obtain the photon noise. The number of captured photoelectrons equals the square of the signal:noise and one can determine the gain by dividing the number of electrons by the 14 bit data number. The results are shown. The unity gain can be estimated by the graph and occurs at about ISO 320.
Nevertheless I am sure that there is some merit to the unity-gain argument, and I would like to learn more to understand what that is.
Any comment, Jim?
The group of people who work with unity gain most are probably those involved with Astrophotography. They are faced with two major challenges (besides light-pollution).
Ted,
This isn't working very well for you, is it?
Bart,
Astrophotographers often cool their sensors, don't they?
As the read noise level drops, Unity Gain ISO assumes greater importance.
As the noise goes up and up, at some point one or two photons either way need a lot of processing to be detected. I'm trying to sort out the visual effects, but I'll have to devise some new test to see if single electron changes in the sensor well can translate to visible differences in normal photographic images. I'm learning a lot with this exploration.
Each camera was measured with the camera ISO setting at all the whole stops from ISO 100 through ISO 6400. Shutter speeds were kept at 1/60 and faster to avoid any in-camera processing that might take place at slow shutter speeds. The central 200x200 pixels (10,000 pixels in each color plane) were used for the histogram. These tests have demonstrated to me that the histogram combing observed at high ISO settings in the previous post is due to two things:
ADCs that, although they are specified as 14-bit devices, are not delivering 14 bits of resolution
Digital gain applied by the camera manufacturers to the real raw data before it is written to the raw file.
The digital gain seems to be applied at very high ISOs, where there is so much noise that the loss in resolution of taking a 14-bit unsigned integer and multiplying it by a number less than 16 to yield another 14-bit unsigned integer probably does not adversely impact image quality.
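A quick illustration (a sketch; the 3.7x factor is made up) of why digital gain after the ADC produces combing, and why the gaps need not be uniform when the factor isn't a power of two:

```python
import numpy as np

# Digital gain applied after the ADC: each code is multiplied by a
# non-integer factor (3.7 here, made up) and rounded back to an integer.
codes_in = np.arange(0, 100)
codes_out = np.unique(np.round(codes_in * 3.7).astype(int))

# Most output codes are missing, but the gaps are not uniform: the
# steps alternate between 3 and 4, so some occupied codes sit closer
# together than the average gain alone would suggest.
print(np.unique(np.diff(codes_out)))   # → [3 4]
```

A non-power-of-two factor like this is one way to get histograms that look mostly 12-bit yet occasionally have closely spaced occupied codes.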
One thing that surprised me is the Gaussian look to the noise in all cases. I had thought that the noise had longer tails than that from my dark noise tests.
Summary for the four cameras:
The Nikon D4 uses all 14 of its bits all the time except for those lost to digital white balance. There is no evidence of histogram periodicities that would indicate the elusive electron quanta, but with a sample that big, we've probably got several ADCs and several analog amplifiers involved. Details here (http://blog.kasson.com/?p=2926).
The Nikon D800E is a 14-bit device with all codes present, except for those lost to digital white balance, until the ISO knob gets to 3200. Then the gain of the analog amplifier stops increasing and the output of the ADC is shifted one bit to the left, giving thirteen bits of resolution. There is another one-bit leftward shift and concomitant loss of resolution to 12 bits at ISO 6400. Details here (http://blog.kasson.com/?p=2932).
The Sony RX-1 is never a 14-bit instrument. It starts at 13 bits at ISO 100, and loses another bit in two of the channels at ISO 6400. Details here (http://blog.kasson.com/?p=2934).
The Sony NEX-7 starts out a 12-bit camera at ISO 100, is 11 bits at ISO 3200, and 10 bits at ISO 6400. Details here (http://blog.kasson.com/?p=2937).
Hi Jack,
The group of people who work with unity gain most are probably those involved with Astrophotography. They are faced with two major challenges (besides light-pollution).
The first is a lack of photons, so they would like to expose longer (collect more photons) to get the faint stars to get significantly brighter than the (background) noise floor. That brings us to the second issue, exposure time must be kept as short as possible to avoid atmospheric turbulence and perhaps residual tracking errors despite their special equatorial mounts. They also don't want to overexpose the brighter stars.
So they are caught in the middle, between exposure times that must be as long as possible but short enough to avoid motion issues. That's where ISO, or rather Gain, makes a difference. They take multiple exposures to reduce the noise by (amongst others) averaging, and boost the gain to amplify the weak signals and allow shorter exposure times. However, there is not much to be gained once those weak signals are sufficiently above the (lowered) noise floor; raising the gain further will increasingly add more amplifier and thermal noise, which will make them less visible again.
In other words, they have little use for the situation where it takes several converted photons to change the ADU by only a single unit because that would require an unnecessarily long exposure time and an exponential growth of the dark current. There is also not much use for too much gain that separates single photon conversions by more than a single ADU, because it only risks getting more amplifier noise and temperature which won't improve accuracy one bit (pun intended).
That's why many of them aim for a setting of approximately unity gain or slightly above, because it's as accurate as useful, yet as low noise as possible.
Otherwise, just increase the number of Photons that get converted and the S/N ratio will offer a better image quality, or shorten the exposure time and get the shot to begin with. In the latter case, it may help to use a unity gain ISO setting, and underexpose if one can boost 'exposure' in post-processing (instead of cranking up the ISO setting).
I am trying to wrap my head around that statement and the situation where it would apply. Here is an example to talk around: We are shooting hockey with a Canon 5DMIII with its lens fully open in an indoor arena. The brightest highlight we are interested in is the white helmet of the players and in order to freeze their motion satisfactorily we need a shutter speed of 1/800s or shorter. We take a test shot at ISO 100, look at our (fictitious, vendors!) Raw histogram and see that the helmets show up four stops below clipping.
In order to maximize IQ given our constraints we quickly evaluate noise in PP at various ISOs (better yet we look at this great chart by Bill Claff (http://home.comcast.net/~NikonD70/Charts/PDR_Shadow.htm#EOS 5D Mark III,D800E)) and decide to bump ISO up to ISO 1600 and shoot away.
Unity Gain for the 5DMIII is around ISO 500. How would we use that knowledge to improve IQ? Or would it apply more to a camera with the read noise profile of the D800e?
Good stuff. I assume that just above you are referring to demosaicing and rendering...
Any clues as to what those tiny entries next to the main codes could be in the earlier histograms?
The NEX-7 is surprising to me because it came out shortly after the D7k and K5 which used on-board 14-bit ADCs for the first time, so I thought it would use the same technology. However its Raw data reaches full scale at only around DN4000 (it is spec'd as 12 bits). Since at ISO 100 3/4 of the codes are missing, could we say that it is instead a 10-bit device?
For a shot at ISO 100, I got a result of x-bar (mean) and sigma (standard deviation) of 978 and 15.7 respectively which, if I understand the method presented in this topic correctly, gives a factor of about 4 thereby making a Unity Gain ISO of 400.
Am I correct to convert (in dcraw) to linear 16-bit TIFF for analysis? Is there a penalty for converting to 8-bit?
Had the Unity Gain ISO for my camera been truly 400 (ignoring Clark's method for the moment), should I have expected higher ISOs than 400 to have values of less than 1 for mean/SD^2 ?
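The arithmetic from the method described earlier in the thread is easy to script. A minimal sketch in pure Python (the helper name `unity_gain_iso` is my own, not from any library), using the ISO 100 measurement quoted above:

```python
# Estimate unity gain ISO from a defocused flat-field test shot.
# For shot-noise-limited data, gain (ADU per electron) = variance / mean,
# so unity gain is reached at ISO_test * mean / variance.
def unity_gain_iso(iso_test, mean_adu, sd_adu):
    gain_adu_per_electron = sd_adu ** 2 / mean_adu  # shot noise: var = gain * mean
    return iso_test / gain_adu_per_electron

# The measurement quoted above: ISO 100, mean 978, sigma 15.7
ug = unity_gain_iso(100, 978, 15.7)
print(round(ug))  # 397, i.e. a Unity Gain ISO of about 400
```

The same number falls out of the head-post formula ISOtest*Sadc/(Nadc^2); the two forms are algebraically identical.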
[Am I correct to convert (in dcraw) to linear 16-bit TIFF for analysis? Is there a penalty for converting to 8-bit?]
Yes to both questions. Your standard deviations won't be accurate with the size of the histogram bucket increased from one count to 16. It's 16 since you have a 12-bit camera. If you had a 14-bit camera, the 8-bit bucket would be 2 ^ ( 14 - 8 ), or 64 counts.
One thing you have to make sure of, if you don't look directly at the raw color planes in a program like RawDigger, is that whatever program does the conversion does not change the color space from the original camera color space. If you go to, say, Adobe RGB, you'll have contributions from various camera color planes mixed into the Adobe RGB color planes.
Next I'm considering shooting my monitor per Claff.
Jack, it's early days here, but I have the first results looking for the elusive single electron. Here's an image from the D4 with ISO set to 640, which is pretty close to the unity gain ISO. It's underexposed. In fact, it's so underexposed that the average difference between the dark and the light areas is 1.25 electrons in the green channel, about one electron in the blue channel, and less than half an electron in the red channel. I brought it into ACR and gave it 5 stops of plus exposure, then hit it with an aggressive curve in Photoshop. The curve just changed the white point. In neither this one nor the one that follows did I touch the black point.
"OK", you say, "But that's the power of averaging. What would happen with the ISO set to 100, so each step in the histogram would be more than 6 electrons?" I'm glad you asked. Here is an image with a little more light -- 2 green electrons average difference, with the other channels scaled. Note that the green channel difference is now less than one-third of a count.
We've lost some contrast, but we're also missing that really ugly red artifact.
By boosting the gain by a factor of 16, we might have barely avoided highlight clipping, although the tail of the highlight shot noise might have clipped. However, the question becomes: what looks better, ISO 1600, or ISO 800 and a stop of exposure correction in Raw conversion (plus a stop more highlight headroom for tweaking/recovery)? It may even be interesting to test ISO 400 and a 2-stop Raw conversion exposure push. Raw converters may also handle the conversion differently based on ISO metadata.
Because of that reason, I once did a quick test of 3 scenarios on my 1Ds3. Image quality was only evaluated on the amount of noise after Raw conversion. The steps 1 through 6 mentioned were the Grayscale ColorChecker patches 19 - 24. The chart shows the noise standard deviation of a 50x49 pixel area of each patch:
(http://www.xs4all.nl/~bvdwolf/temp/OPF/HighISO+Push.png)
As can be seen, the ISO 400 group has lower noise than the ISO 800 group, and the ISO 800 group has lower noise than the ISO 1600 group. Within the ISO 800 group, there is little difference between the straight capture and the settings that were underexposed and pushed in post, but the pushed settings will have more highlight clipping latitude (a stop of headroom) at capture time. Within the ISO 1600 group there is also little difference, although ISO 800 pushed 1 stop is slightly better than the rest, and again it has 1 stop of overexposure headroom. Even ISO 400 pushed 2 stops is a bit better than the straight ISO 1600 gain setting.
The differences within each 'ISO group' are actually very difficult to see, but they are measurable. The differences between the groups are more distinct, visually. For me that made it clear that ISO 400 was good enough (for the rare occasion that I need higher ISOs) with still the potential to boost 1 or 2 stops in post processing with little loss compared to a higher ISO setting to begin with, because the Unity gain of the 1Ds3 is approx. reached at ISO 400.
That's how I see a practical implementation of what we can do with the knowledge about 'Unity Gain' for a specific camera.
Sony cameras after the Alpha 900/850 use lossy compression in RAW. It's pretty tricky but efficient. I tested my a900 with cRAW and RAW and found virtually no difference. But the 14 bits claimed for the a99 work only in single-shot mode. And there is definitely a difference in usable range in the deep shadows compared to 12 bits.
Interesting. I thought that this type of 'gamma' compression did not kick in but well after the deep shadows (http://www.openphotographyforums.com/forums/showthread.php?t=5499).
[That's how I see a practical implementation of what we can do with the knowledge about 'Unity Gain' for a specific camera.]
Bart, I have a similar noise-based way of deciding when to stop turning up the ISO that has nothing to do with unity gain, although my targets are much simpler; just one D65 patch. I'll give you an example for the RX-1, which yields results similar to what unity gain would say.
I first do a series of exposures of the target at various ISOs, using only the central 200x200 pixels, and holding the digital values constant, in this case 5 stops below clipping for the green channel. That means every time I increase the ISO by a stop, I have to stop down a stop (because read noise varies with shutter speed, I hold the shutter speed constant and vary the aperture instead). I get curves like this:
Then I subtract out the half-a-stop-per-stop slope that's occurring because, for each stop increase in ISO, there are half as many photons. That gives me this:
(http://www.kasson.com/ll/RX1ISOseries5stopsfromclippingImprovement.PNG)
The highest point on the curve gives me the place where I should think about not turning my ISO up any further. You can see in this case that there's very broad latitude. The peak, if you want to call such a broad top a peak, is near the unity gain ISO of about 800. (I measured 400 and change, but now I think my RX-1 is for all intents and purposes a 13-bit camera.)
I do have one concern with this test, however. I've noticed that SNR holds up remarkably well as resolution decreases, if the noise is big enough. Thus, the SNR at low ISOs might look good, but there could be some posterization. It remains a strictly theoretical worry, though; I've never seen any evidence of it. Here's a five-stop series from ISO 100 to ISO 3200 with the D800 that doesn't show any that I can see. (http://blog.kasson.com/?p=2739) [Edit: I just noticed that the whole image from which the crops were made isn't on that page. It's here (http://blog.kasson.com/?p=2729).]
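For what it's worth, the slope-removal step can be sketched numerically. This is my own illustration with made-up photon counts, not Jim's measured data; with pure shot noise the corrected curve comes out exactly flat, so any droop in real data is the read-noise (and PRNU) penalty:

```python
import math

# Holding the raw level constant while raising ISO halves the photon
# count per stop, so a purely shot-noise-limited SNR falls half a stop
# per ISO stop. Adding that slope back isolates everything else.
isos = [100, 200, 400, 800, 1600]
photons = [8000 / (iso / 100) for iso in isos]          # halved per ISO stop
snr_stops = [math.log2(math.sqrt(n)) for n in photons]  # shot noise only

corrected = [s + 0.5 * math.log2(iso / 100) for s, iso in zip(snr_stops, isos)]
print([round(c, 3) for c in corrected])  # [6.483, 6.483, 6.483, 6.483, 6.483]
```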
Here are the D800 curves that made me think I could get away with that:
Jim, what I believe you have done is replicated the 18% SNR curves at DxO - in your case at around 3.2% (-5 stops) of full scale.
The highest point on the curve represents the cleanest spot of just photon noise, as unpolluted as possible by read noise or PRNU, which can be seen here as the red and green patches at the tip and tail end of the (unstraightened) curves, where they drop just like in yours.
Bart, upon further thought I now believe that my question about the 5DIII or in fact most Canon DSLRs is not useful for the issue at hand, since I understand that they employ a two stage analog amplifier design, complicating things unnecessarily.
Let's stick to single amplifier designs for simplicity. What do you think about my comment to Jim above?
Does the area cleanest of camera-induced noise on an SNR curve correspond to unity gain?
After taking many shots by various means, I have given up on the Grail of measuring the ISOug of a Foveon-based camera.
The only improvement I can currently think of is an elimination (or at least determination) of the influence of non-uniformity of the light-source and lens vignetting, and sensels (PRNU / sensor dust / dead- or hot pixels), in the central crop area used for analysis. The only way would be to do a check with subtracted image pairs (and Stdev/Sqrt(2)), or (slightly less accurate but still informative) a comparison between the 2 Green filtered sensel sub-images. It may reveal that your current results do not vary much from an even more normalized data-set, but dust and PRNU do keep lurking around the corner, waiting to strike.
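The subtracted-pair trick is quick to demonstrate with synthetic numbers. A sketch with made-up PRNU and read noise values, not from any real camera: the fixed pattern is identical in both frames and cancels in the difference, while the random noise adds in quadrature, so stdev(difference)/sqrt(2) recovers the per-frame random noise alone.

```python
import math
import random
import statistics

random.seed(1)
n = 20000
prnu = [random.gauss(1.0, 0.01) for _ in range(n)]  # fixed per-pixel gain, 1% spread
signal, read_noise = 1000.0, 5.0                    # illustration values

def frame():
    # Same PRNU pattern every frame; fresh random noise each time.
    return [p * signal + random.gauss(0, read_noise) for p in prnu]

a, b = frame(), frame()
single = statistics.pstdev(a)                                   # inflated by PRNU
paired = statistics.pstdev([x - y for x, y in zip(a, b)]) / math.sqrt(2)
print(round(single, 1), round(paired, 1))  # single ~11, paired close to 5.0
```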
There is a slight (nitpicking) concern here. By varying the aperture you do avoid effects from dark current and potentially Raw converters that are not really giving the same type of Raw data at all exposure times. Unfortunately at the same time you introduce a variable in light uniformity over the selected area due to vignetting. You can minimize that effect by taking only a small crop at the center of the image (assuming the light fall-off is symmetrical). But the smaller the crop becomes, the larger a few outliers will influence the statistics. Another potential source of non-uniformity is that there may be a slight asymmetry in the aperture that the (partially sticky) blades of the iris leave open, and there may be a less than perfectly linear progression in the amount of light that's let through (narrower aperture also require longer to close, hopefully to a repeatable final position).
Another issue is that at apertures wider than approx. f/3.5, the analog gain of some cameras seems to be increased.
Oops! I didn't know that. At least it won't affect the M9 results, since the camera can't figure out where the aperture is set.
Jack,
For the RX-1 it happens to, at least with the input five stops down from clipping, but for the NEX-7, it does not:
By the way, you wondered what would happen if the stimulus was a little darker. You can get a sense of that by looking at the blue curve which averages 312/467 = 2/3 of a stop down from the green, and the red curve, which averages 183/457 = 1 1/3 stop down from the green. Not as far down as you wanted to go, but it's some more information.
Also by the way, I believe the reason the ISO 6400 points jump up like they do is because there is some noise reduction that the camera does at that ISO that you can't turn off.
Jim, could you post somewhere the NEF with the 1-2 electron image? I'd be interested to play with it a bit if at all possible.
Yes, but with an arrangement that's more useful for the topic at hand, and with a few additional pieces of information.
Nevertheless, it's probably more important to know the real optimal photon noise versus camera noise trade-off point, as far as it's controlled by the ISO setting, although it may be different at various levels of exposure.
However, do note that the overall noise level at a certain ISO setting may be objectionable from an aesthetic point of view, even though the objective SNR is optimal. It's still a trade-off, and collecting more photons will always give better quality.
Yes, the RX-1 does appear to have a read noise sweetspot around ISO 800, and I calculate its Unity-Gain at around that based on a 13 bit ADC.
Something seems wrong here: the M9 sensor should have roughly the same 60,000 electron full well capacity as other Kodak full frame type sensors with 6.8 micron pixels, like the KAF-31600: http://www.truesenseimaging.com/all/download/file?fid=11.62
so that 30,000 seems too low. Also, the other numbers seem too high, and it would be puzzling for full frame type CCDs to have lower rather than higher full well capacity, since the main virtue of full frame type sensors is using almost all of the sensor area for storing photo-electrons, whereas CMOS sensors use some space for the three or more processing transistors per photo-site.
Also, sensors like these with microlenses are known to have quantum efficiency of around 40% or better with color filter arrays and higher without: about 80%. That is, about 2.5 photons per photo-electron with CFA, under 2 photons/electron without. So 6.2 photons per photo-electron is way too high.
Also, understand that it is not a matter of the sensor counting up to some number of photons and then scoring one electron; it is instead a probabilistic thing. For example, when a sensor has 80% quantum efficiency with no CFA in place, it means that each photon has an 80% chance of causing an electron to be deposited in the well, a 20% chance of going undetected.
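That probabilistic behavior is easy to illustrate. A quick Monte Carlo sketch, assuming the 80% no-CFA figure quoted above (the numbers are illustrative, not a measurement):

```python
import random

# Each photon independently has a QE chance of yielding a photo-electron;
# there is no threshold photon count below which nothing is detected.
random.seed(0)
qe, photons = 0.80, 100_000
electrons = sum(1 for _ in range(photons) if random.random() < qe)
print(electrons / photons)  # close to 0.80
```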
P. S. the name "unity gain" is a bit unfortunate, as it perpetuates the myth that there is "no amplification" at some natural exposure index level, and "amplification" at all higher EI settings. Instead with various dimensional conversions from photo-electron counts (charges) to currents to voltages to numerical ADC levels, the idea of "unamplified" or "gain of one" is physically meaningless. I suppose the idea of "one ADC level per detected photon" can be useful, as an upper limit on the level of amplification that can help with image quality, SNR and such.
Note how in this case light of wavelength 550nm generates more than twice the electrons (current) than light at 400nm. If this is incorrect, I would be grateful if someone could explain why.
Jack
The D800e for instance averages out at an Effective Absolute Quantum Efficiency of just shy of 16% - requiring 6+ photons to hit the sensor before the energy necessary to release one electron is achieved.
It's almost correct except for the units, Jack. An electron has units of charge (well, energy really). Current has units of charge per unit time.
Ted, I'm not sure that the camera engineers look at it this way, but we can get the dimensions correct if we think of the electrons forming in the well over a period of time as a kind of current, although it's not flowing past a point. If we have a 100,000-electron FWC, and the well gets 62,400 electrons during a 1/100 second exposure, that's one picoamp.
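For anyone who wants to check the arithmetic, a couple of lines will do it:

```python
# 62,400 electrons collected in 1/100 s, expressed as an average current.
e_charge = 1.602e-19            # coulombs per electron
electrons, t = 62_400, 1 / 100  # exposure of 1/100 second
current = electrons * e_charge / t
print(current)  # about 1e-12 A, i.e. one picoamp
```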
Jim
It is not correct to assign 1 pA to the well example above. Yes, it can be argued of course that 1 pA is the average current that would flow in order to charge the capacitance by some value, but the true capacitor charge is a simple count of electrons collected during the exposure period (ignoring leakage, etc). The example value of 1 pA becomes invalid if, for example during an exposure in low light of some seconds, there were some flashes of lightning!
Ted,
Well, it was a thought. If current is not a useful concept in photography, why all the talk about "dark current"?
Jim
In many ways, the passage of electrons can be regarded as "current", with the complication that current has time^-1 in its units. The analogy in mechanical engineering is the difference between "work done" (lbf-ft, Joules) and power (HP, Watts).
Are we really sure that, for a given QE, there is some threshold value of photon count below which free electrons can not be produced?
I would have thought the QE factor would be applicable to any number of photons such that there is probability that even one photon could produce an output. By this I mean that, for many succeeding tries (say 100 tries with a QE of 16%) one electron would be produced in 16% of those tries (at some confidence level, depending on the number of tries). Putting that another way, if 1 photon arrives at the D800e sensor, the probability of it producing an electron is 0.16 (16%).
Would be interested to know the difference between "Effective Absolute Quantum Efficiency" and "Absolute Quantum Efficiency"?
Hi Ted,
Jim did a good job of outlining the way I've seen it in the literature (http://www.amazon.com/Sensors-Processing-Digital-Cameras-Engineering/dp/0849335450/ref=sr_1_1?s=books&ie=UTF8&qid=1364720821&sr=1-1), even for camera sensors. The confusion arises when we mix radiometric (flux, power etc.) and photometric quantities (Illuminance, exposure etc.) (http://en.wikipedia.org/wiki/Exposure_(photography)). The units are different but they describe exactly the same physical processes. For instance I find it useful to think of Exposure as a certain number of photons incident on an area during exposure time. But to calculate that number one has to do a few backward somersaults with both sets of units (maybe worth a separate thread).
Illuminance (in lux) provides a certain number of photons per second which get converted into e-/second by the photodiode. So while a photographic sensor is exposed to a certain illuminance, a current of e-/s is indeed generated within the integrating photodiode, which holds the resulting total charge so that it can be read by the downstream circuitry. The responsivity diagrams of a solar vs a photographic photodiode look very similar. What changes are the slopes, which are related to the material used and charge collection efficiency.
(http://www.fiberoptics4sale.com/wordpress/wp-content/uploads/2011/02/Spectral-dependence-of-responsivity-and-quantum-efficiency.gif)
Jack
PS Happy Easter
Bart, Ted, Jack (and anybody else who cares),
But this is a post about PRNU. It occurs to me that there's another explanation for it, and if that explanation explains most of it, then PRNU should be considered after the electrons in the well are converted from probabilities to actual electron counts. I don't know a lot about how these imaging chips work, but here's the model I have in my brain. The photodiode converts photons to electrons, the electrons charge a capacitor, the voltage on that cap is buffered by an amp in the cell, and eventually the voltage gets amplified and converted to digital.
If I've got that right, a potential source of PRNU is variation in the sensel capacitance. The voltage, V, on a capacitor as a function of the charge, Q, and capacitance, C, is V = Q/C. You can see that, with the charge held constant, changes in C will cause changes in V, and there's no reason I know of to suspect that the changes in C occur in quanta. The same is true of amplifier gain variation, in both the ISO-programmable amplifiers and the source followers, of which there is presumably one per sensel.
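A quick numerical sketch of that idea (the ~1 fF sensel capacitance and the 1% spread are made-up illustration values, not measurements): hold the collected charge Q fixed, let C vary, and the read-out voltage V = Q/C picks up a proportional, gain-like error that is not quantized to whole electrons.

```python
import random
import statistics

random.seed(2)
q = 40_000 * 1.602e-19                                           # 40k electrons, in coulombs
caps = [random.gauss(1.0e-15, 0.01e-15) for _ in range(10_000)]  # ~1 fF sensels, 1% spread
volts = [q / c for c in caps]                                    # V = Q / C per sensel
cv = statistics.pstdev(volts) / statistics.fmean(volts)
print(round(cv, 3))  # about 0.01, i.e. ~1% voltage non-uniformity
```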
Anybody have any ideas on what the underlying sources of PRNU are?
The low ISO images are somewhat clumpier than the Unity Gain ISO one, but the effect is subtle, even though the histograms look dramatically different. This test bends over backwards to give the Unity Gain ISO image a chance to shine, and to me, it's barely glowing. I am coming around to the school of thought that what's really important for ordinary photographers is the SNR, although I can understand the appeal of UGiso if you're going to be doing computations or mathematical processing on your images.
One more set of simulation images. The camera this time is a D4 with read noise and PRNU (before electron quantizing). ...
Pretty close to the real thing. I'd give the nod to the Unity Gain ISO image at ISO 640, but it's not head and shoulders above the others.
If you want to see the parameters of the D4 simulation, here's the code for the camera constructor. It's in Matlab, but the code should look pretty familiar to any C++ or Java programmer.
I suppose that I could do another set of simulations with probabilistic electrons coming from a known photon flux, but I get the feeling that I'm beginning to overdo this.
Jim
My 2e's worth:
I brought the images into Photoshop as Adobe RGB with an identity conversion matrix (diagonal ones, else zeros), gave them a bit over 1 EV exposure bump, and (over) corrected the greenish color cast. Here's the histogram of the ISO 640 image:
(http://www.kasson.com/ll/PerfectD4ISOSimulation640histo.PNG)
Here's the histogram of the ISO 80 image:
(http://www.kasson.com/ll/PerfectD4ISOSimulation80histo.PNG)
Note the extreme depopulation.
I agree that the ISO 640 image is preferable, as it would be in real life also because read noise for the D4 decreases rapidly from ISO 100 (18.6e-) to ISO 800 (3.4e-). I believe that read noise of 18.6e- (4+ ADUs) at ISO 100 would visually swamp any quantization effects. Did you model this change or was read noise kept constant?
The question that comes to mind is whether you used QE of 53% for your broad spectrum illuminant. I understand that the Sensorgen value assumes a green laser as the illuminant :)
Another improvement to the model, if you haven't done so already, is to separate the read noise into two components: a sensor component and an analog 'gain' component. They are apparently not correlated. Here is a thread by Dosdan (http://www.dpreview.com/forums/thread/3453568) on the subject. The reason for doing so is that, from a noise perspective, the performance of a particular sensor can intuitively be understood as, all other things being equal, the DR of the incoming light, having to pass through the DR of the sensor, having to pass through the DR of the analog amplification chain as a function of the gain (ISO) applied.
Non-uniformity between pixels is no different to any other mass-produced item, and no production process is perfect. There are many things that cause variance in pixel characteristics, especially if filters and microlenses are included. All variances are significant, including those of the semiconductor doping and lithographic processes, thickness of the materials, the depletion layer, fill, metalization accuracy, CFA consistency, microlens MTF; the list is almost endless . . .
...the histogram of the bottom image needs to be refreshed, as indicated by the little triangle with the exclamation point. See the Adobe help (http://help.adobe.com/en_US/photoshop/cs/using/WSfd1234e1c4b69f30ea53e41001031ab64-768da.html#WSfd1234e1c4b69f30ea53e41001031ab64-7684a) for details.
[the list is almost endless . .] Yep, so it is. I'm just trying to figure out whether most of the PRNU occurs before quantizing to electrons or afterward. If the weight of the two components is similar, then I'll never get data to put into a two-stage model.
Well capacity is approximately 77,000 electrons per photodiode, but the usual operating point (for restricted nonlinearity) corresponds to about 45,000 electrons. Photo response non-uniformity (PRNU) is less than ±1%. Several fixed-pattern and random noise reduction techniques have been incorporated into the F7 design to realize very good noise performance for the CMOS technology. The total fixed-pattern noise from all sources is less than ±1%. The primary contributor to dark noise is kTC noise from diode reset. This noise is approximately 70 electrons. It is possible to reduce this to about 40 electrons by implementing a reset-read-expose-read cycle for the frame and then subtracting the first frame from the second.
OT - have you tried ImageJ yet?
Hi Ted, and others. Thanks for the post, Bart.
On that note, I've made an improved ImageJ macro available for download (http://bvdwolf.home.xs4all.nl/main/downloads/SplitBayerCFA.ijm) (save with right mouse button click) which will split a Bayer CFA image into its individual R/G1/G2/B channels.
That looks like it would be pretty easy to fit the two-stage read noise model to. But now look at the RX-1:
There are two big problems here. The first is that, with the exception of the ISO 100 point, the read noise component on the output side of the amplifier is darned close to zero. The second is that the read noise goes down as the ISO is increased from 100 to 200. With the two-stage model, that shouldn't happen.
I'm scratching my head right now. I'm also having trouble figuring out how to estimate the standard deviations of the portion of the read noise referred to the output of the amplifier in the cases where the base gain is only a stop or two away from unity gain.
I built a two-stage read noise model for the D4. Here's a plot of the mean and mean+standard deviation for the measured and the modeled values:
(http://www.kasson.com/ll/D4readnoisevsModel.PNG)
That's more noise at the lower ISOs than I'm seeing on the real camera. Some possibilities are: a) one-electron test shots were made at 1/8000 sec exposure, where read noise test shots were made at 1/30, b) there's some ACR processing going on that's making a difference, c) there were some artifacts in the read noise test shots like the read smear in the ISO 640 one-electron shot.
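For reference, here is the general form of a two-stage read noise model of the kind being fitted here: a pre-amp component in electrons plus a post-amp component in ADU, combined in quadrature and referred to the input. The component sizes and the unity-gain ISO in this sketch are placeholders, not the fitted D4 values:

```python
import math

# Referred to the input, the post-amp component shrinks as the gain
# (ISO) rises, which is why total input-referred read noise falls with
# ISO until the pre-amp floor dominates.
pre_e, post_adu = 3.0, 2.5  # assumed component sizes, not fitted values

def input_referred_read_noise(iso, unity_gain_iso=640):
    gain = iso / unity_gain_iso  # ADU per electron
    return math.sqrt(pre_e**2 + (post_adu / gain)**2)

for iso in (100, 200, 400, 800, 1600, 3200):
    print(iso, round(input_referred_read_noise(iso), 2))
```

On a log-log plot the two components are straight lines, and the combined curve approaches each asymptotically, producing the knee discussed below.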
I assume that you used Poisson distribution for photoelectrons from the sensors in quadrature with a gaussian read noise for the pre/sensor, followed in quadrature again by gaussian read noise in the analog amplifier, followed by an ideal noiseless ADC?
Where does ACR come into the picture?
The vertical axis is ADUs, correct?
Or perhaps the relative read noises are incorrect. How did you calculate them?
I think I can see in the real camera M+S data the two slopes: the amplifier limited one at low ISOs and that of the amplified sensor once it becomes dominant at 800 and above. I don't see the same knee in the modelled curve.
When you're simulating on a per-pixel level, this quadrature stuff doesn't apply.
I take the signal, multiply that by the PRNU image (with the sensitivity of each sensel, populated by a Gaussian distribution centered on one), use Poisson for the shot noise to form an image and add that in, quantize to integer electrons, add the preamp read noise image (generated with Gaussian statistics), multiply by the gain, add the postamp read noise image, and quantize to the number of bits in a perfect ADC.
I'm considering splitting the PRNU multiply into two components, one pre quantizing to electrons, and one post quantizing to electrons. I'm not sure if the shot noise belongs before or after the pre-quantizing PRNU multiplication.
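That pipeline is short enough to sketch in full. My code is Matlab; this is a pure-Python stand-in with placeholder parameters (not the D4 ones), and with the Poisson shot noise approximated by a Gaussian of sigma sqrt(n), which is fine at this signal level:

```python
import math
import random
import statistics

random.seed(3)
n_pix = 10_000
mean_signal_e = 500.0                                    # mean exposure in electrons
prnu = [random.gauss(1.0, 0.005) for _ in range(n_pix)]  # per-sensel sensitivity
pre_rn_e, post_rn_adu = 3.0, 2.0                         # read noise components
gain_adu_per_e, adc_bits = 0.5, 14

adu = []
for p in prnu:
    s = mean_signal_e * p                  # signal times PRNU
    e = random.gauss(s, math.sqrt(s))      # shot noise (Gaussian approx. of Poisson)
    e = round(e)                           # quantize to whole electrons
    e += random.gauss(0.0, pre_rn_e)       # pre-amp read noise
    x = e * gain_adu_per_e                 # programmable analog gain
    x += random.gauss(0.0, post_rn_adu)    # post-amp read noise
    adu.append(min(max(round(x), 0), 2**adc_bits - 1))  # ideal ADC

print(round(statistics.fmean(adu)), round(statistics.pstdev(adu), 1))
```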
Jim
Based on these comments, if we consider PRNU as all signal dependent 'noises' proportional to the signal, I think it should make no difference whether it is handled before or after conversion - including CFA non-uniformity - so might as well do everything at the same time. What do you think?
This shows that Unity Gain ISO produced the best results, but the low-ISO images (even the ones that have real-world counterparts) seem a bit too noisy. I think it's clear that, in the presence of a post-gain read noise component, Unity Gain ISO can offer some advantages. Whether they're hugely significant over ISOs a stop or two lower still needs some investigation.
Jim
And here's how they look plotted against the new data:
(http://www.kasson.com/ll/D4ReadNoiseModel.PNG)
When fitting the curves, I decided that the ISO 6400 values weren't very important, since, based on what we've learned so far, we'll never use those ISOs.
Jim
Jack,
If all the PRNU is before quantizing to electrons, as it currently is in the model, then we'll see PRNU jump two counts at a time at twice Unity Gain ISO, and four counts at a time at four times ISOug. There won't be any intermediate values. If it's after quantizing to electrons, then there will be intermediate values, but, of course, none smaller than 1 LSB of the ADC. So it makes at least a theoretical difference. The power of noise dither being what it is, it probably makes no practical difference.
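The two-counts-at-a-time behavior is trivial to verify in a couple of lines (the electron counts here are made up for illustration):

```python
import random

# If all variation happens before quantizing to whole electrons, then at
# twice unity gain (2 ADU per electron) the raw values land only on even
# counts: no intermediate values are possible.
random.seed(4)
gain = 2  # ADU per electron, i.e. twice unity gain
values = [round(random.gauss(100, 3)) * gain for _ in range(1000)]  # e-, then gain
print(all(v % 2 == 0 for v in values))  # True: no odd raw values
```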
But when it comes to dither from noise, I'm speaking to the choir, right?
Jim
Now that you have the sensor/gain read noise model, I almost feel that a more useful interpretation comes from plotting the two and seeing when one becomes dominant over the other, as Dosdan did in the previously mentioned DPR thread (http://www.dpreview.com/forums/thread/3453568?page=4).
(http://www.dpreview.com/files/t/E~a5b3193a3e204f75b11339b5b3940319)
Here are the new read noise model values:
(http://www.kasson.com/ll/d4rnnew.PNG)
Jack, here's the graph with the two components added in. I don't think the change adds much, though. You can see the effect of each of the components well enough by looking at the general shape of the previous curve. The two components will always be straight lines on a log-log graph, and the combined curve will approach each one asymptotically at both ends.
[Thank you, you are right. It just makes it a bit easier to see intuitively where increasing ISO no longer has a beneficial effect on noise.]
(http://www.kasson.com/ll/D4ReadNoiseModelParts.PNG)
If it helps you, then enjoy.
Jim