
Author Topic: larger sensors  (Read 191412 times)

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #160 on: January 12, 2007, 09:48:15 am »

Quote
I don't see how software 2x2 binning would be any different than hardware 2x2 binning, other than the potential 1-stop decrease in blackframe read noise.
Indeed, less shadow noise is the main image quality benefit: that is, I believe, why binning is so often used in situations like astronomy. The Kodak KAI-10100 color binning sensor has so far been officially announced only in a special astro-photography camera.
However, maybe for everyday photography as opposed to technical work, the realm where read-noise is significant is, or soon will be, at such low light levels that the S/N ratio will always be unacceptable anyway due to photon shot noise. Then the best solution is probably setting the black point higher than this signal level, eliminating the noise entirely, and never mind binning.

Another benefit of binning over down-sampling is faster frame rate: the Dalsa source I mention above says three times faster readout for 4:1 binning. This is because a major speed bottleneck of CCD readout is reading out the line of pixels along the edge of the sensor, and with binning, this is done with only the reduced number of super-pixels. By the way, reducing the read rate in pixels per second can reduce read noise, so binning while keeping the same frame rate could further reduce shadow noise.

Putting these two together, a sensor binning from, say, 16MP to 8MP (2:1) or 4MP (4:1) gains almost all the advantages of using a sensor of lower pixel count to start with: less shadow noise at a given exposure level and higher frame rates. This could eliminate the last main arguments against pushing pixel counts up to the maximum resolution level set by lenses or the needs of the user (so long as pixels stay big enough for good performance at lower ISO).

Even with on-sensor binning, down-sampling could still have its place too, for example to get intermediate pixel-count reductions of less than a factor of 2 or 4.
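
To make the hardware/software distinction concrete, here is a minimal numpy sketch of the read-noise argument; the 25 e- signal and 4 e- read noise are illustrative assumptions, not Dalsa's figures.

Code:
# Toy model of 2x2 binning: Poisson shot noise, Gaussian read noise
# added once per analog read. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
signal_e, read_e, n = 25.0, 4.0, 1_000_000

# "Hardware" binning: charge from 4 pixels is summed BEFORE the one read.
hw = rng.poisson(4 * signal_e, n) + rng.normal(0, read_e, n)

# "Software" binning: each pixel is read (noise added), THEN summed.
sw = sum(rng.poisson(signal_e, n) + rng.normal(0, read_e, n)
         for _ in range(4))

print(hw.std())  # ~10.8 e-: sqrt(100 + 4^2), read noise added once
print(sw.std())  # ~12.8 e-: sqrt(100 + 4*4^2), read noise added four times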
Logged

bjanes

  • Sr. Member
  • Offline
  • Posts: 3387
larger sensors
« Reply #161 on: January 12, 2007, 10:11:31 am »

Quote
Another benefit of binning over down-sampling is faster frame rate: the Dalsa source I mention above says three times faster readout for 4:1 binning. This is because a major speed bottleneck of CCD readout is reading out the line of pixels along the edge of the sensor, and with binning, this is done with only the reduced number of super-pixels. By the way, reducing the read rate in pixels per second can reduce read noise, so binning while keeping the same frame rate could further reduce shadow noise.

Putting these two together, a sensor binning from, say, 16MP to 8MP (2:1) or 4MP (4:1) gains almost all the advantages of using a sensor of lower pixel count to start with: less shadow noise at a given exposure level and higher frame rates. This could eliminate the last main arguments against pushing pixel counts up to the maximum resolution level set by lenses or the needs of the user (so long as pixels stay big enough for good performance at lower ISO).

Even with on-sensor binning, down-sampling could still have its place too, for example to get intermediate pixel-count reductions of less than a factor of 2 or 4.

As BJL has pointed out before, one should not assume that CCDs and CMOS sensors work alike. From a theoretical standpoint, I'm not sure that hardware binning would be of much use with CMOS (http://www.dalsa.com/markets/ccd_vs_cmos.asp), where the output from the pixel is already in the form of a voltage rather than an electron packet as in the case of a CCD. Since John works mainly with CMOS, perhaps he is right after all for his camera.

Bill
Logged

Ray

  • Sr. Member
  • Offline
  • Posts: 10365
larger sensors
« Reply #162 on: January 12, 2007, 05:32:01 pm »

I shall certainly be glad when my all-digital system hits the market. Won't have to worry about these issues. With any signal above the noise floor (including shot noise) we'll get a perfect, pixel-sharp rendition; perfect within the resolution limits of the system, that is.

In fact, I imagine the first production models will receive a lot of criticism, just as the first audio CDs did. The defects in existing lenses will become much more apparent and the experts will have to explain that previously such defects were masked by read noise, shot noise, AA filters and so on, but some people will insist that they prefer the old mushy results they'd been accustomed to.
« Last Edit: January 12, 2007, 05:50:19 pm by Ray »
Logged

Ray

  • Sr. Member
  • Offline
  • Posts: 10365
larger sensors
« Reply #163 on: January 12, 2007, 07:08:43 pm »

One of the things that has always worried me about our current analog/digital cameras is the sheer waste of lens resolution that occurs. For a camera such as the Canon 5D, for example, a lens ideally needs to have strong MTF performance up to 50 lp/mm. Beyond that resolution, MTF can be crap as far as the sensor is concerned.

In fact, if it were possible to design a lens with a steep MTF fall-off beyond 50 lp/mm, results would be better with a camera like the 5D because it could dispense with its AA filter.

This factor has provided much fuel to the debate of film versus digital. We know that B&W films such as T-Max 100 have an MTF response as high as 60% at 100 lp/mm. We know, with the appropriate sturdy tripod and MLU and with a bit of luck with film flatness, that we can capture 100 lp/mm with a good 35mm lens and a contrasty scene. Not even the next generation of FF 35mm DSLRs will be able to achieve this. However, don't think for one moment I am recommending a return to film. I am merely pointing out that there is more resolving power in 35mm lenses than can be exploited with our current analog sensors.

The reason for this, I believe, is noise. A pixel from a current digital camera, whatever the strength of the signal and however good the lighting, will always contain a portion of noise. The higher the signal strength, the smaller the noise becomes as a proportion of the signal. At some point the noise becomes insignificant and of no practical concern, but it's still there, embedded within the signal (at least the portion of it that hasn't been removed with black-frame subtraction, etc.).

If a pixel is small and unable to collect many photons, the noise will be quite significant even in good lighting. Even when the well is full, the embedded noise will likely be noticeable. If we were to pack 2 micron photon detectors on a 35mm sensor, the resolving power of the sensor would be enormous (about 250 lp/mm).

Unfortunately, even the best lenses, like the discontinued Canon 200/1.8 at f4, would deliver a pretty weak signal at 250 lp/mm, so let's be realistic and not set our sights above 100 lp/mm. At 100 lp/mm the signal is still going to be pretty low. Our 2 micron analog photon detectors would pick it up, but in many cases (too many) the signal would be hardly greater than the noise. Who would be interested in lots of pixels that consisted of, say, 45% noise?
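
As a quick sanity check on those figures, the sampling limit and shot-noise arithmetic work out as follows (the 2,000 e- well for a 2 micron pixel is a hypothetical number, purely for illustration):

Code:
import math

def nyquist_lp_per_mm(pitch_um):
    # One line pair needs two pixels, so the sampling limit is
    # 1000 um/mm divided by twice the pixel pitch.
    return 1000.0 / (2.0 * pitch_um)

def shot_limited_snr(photons):
    # Poisson statistics: S/N = N / sqrt(N) = sqrt(N).
    return math.sqrt(photons)

print(nyquist_lp_per_mm(2.0))    # 250 lp/mm, the figure above
print(nyquist_lp_per_mm(5.0))    # 100 lp/mm for a 5 micron pitch
print(shot_limited_snr(2000.0))  # ~45:1 even at full well; worse in shadows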

Now back to my all-digital system. 45% noise? No problem. Switch the pixel on for perfect clarity.
« Last Edit: January 12, 2007, 07:36:55 pm by Ray »
Logged

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #164 on: January 15, 2007, 12:53:16 pm »

Quote
From a theoretical standpoint, I'm not sure that hardware binning would be of much use with CMOS, where the output from the pixel is already in the form of a voltage rather than an electron packet as in the case of a CCD.
So long as the signal is still analog, it is susceptible to additional read-noise from analog processes like charge-to-voltage conversion, pre-amplification and A/D conversion, so true (hardware) binning could still be useful on a CMOS sensor. However, if CMOS sensors can amplify the signal significantly right at the photo-site, the effect of subsequent noise could be reduced to insignificant levels.

A more extreme possibility is A/D conversion right at the photo-site. This is apparently used in some special sensors for surveillance cameras, or at least proposed for that use. (These are the same ones that eliminate the DR limitation of highlight headroom completely, by reading out highlight pixels earlier.)
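
A toy model of why early amplification helps; the gain and noise values here are invented purely for illustration:

Code:
import math

def input_referred_noise(pixel_noise_e, gain, downstream_noise_e):
    # Independent sources add in quadrature; noise injected after a gain
    # stage shrinks by that gain when referred back to the photo-site.
    return math.sqrt(pixel_noise_e**2 + (downstream_noise_e / gain)**2)

print(input_referred_noise(3.0, 1.0, 12.0))  # ~12.4 e-: downstream dominates
print(input_referred_noise(3.0, 8.0, 12.0))  # ~3.4 e-: early gain hides it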
Logged

bjanes

  • Sr. Member
  • Offline
  • Posts: 3387
larger sensors
« Reply #165 on: January 15, 2007, 03:15:55 pm »

Quote
So long as the signal is still analog, it is susceptible to additional read-noise from analog processes like charge-to-voltage conversion, pre-amplification and A/D conversion, so true (hardware) binning could still be useful on a CMOS sensor. However, if CMOS sensors can amplify the signal significantly right at the photo-site, the effect of subsequent noise could be reduced to insignificant levels.

A more extreme possibility is A/D conversion right at the photo-site. This is apparently used in some special sensors for surveillance cameras, or at least proposed for that use. (These are the same ones that eliminate the DR limitation of highlight headroom completely, by reading out highlight pixels earlier.)

As I understand CMOS as explained in the Dalsa reference, the output of the CMOS pixel is already in the form of an analog voltage, the pre-amplification and charge-to-voltage conversion having been done by the circuitry on each pixel site. The A/D conversion involves converting the voltage to a pixel value. Read noise would not be involved at this stage, the conversion having been done on the pixel site.

Bill
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #166 on: January 15, 2007, 05:14:15 pm »

Quote
Since John works mainly with CMOS, perhaps he is right after all for his camera.

Right about what?

Definition of gain?

Image noise vs pixel noise?

Hardware and software binning being pretty much the same for everything but read noise on the chip (there are other read noises)?

I'm not even sure we're in the same conversation sometimes.  You seem to read far too much innuendo into what I write.

If I say something like "hardware binning for reduced read noise over software binning is not without compromise", it doesn't mean I'm shooting the idea down, with my thumbs pointing at the floor and giving a raspberry. It means that there is a compromise. I'd certainly want the hardware binning if I were recording a movie, or if the image was going to be reduced anyway.
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #167 on: January 15, 2007, 05:21:26 pm »

Quote
However, maybe for everyday photography as opposed to technical work, the realm where read-noise is significant is, or soon will be, at such low light levels that the S/N ratio will always be unacceptable anyway due to photon shot noise. Then the best solution is probably setting the black point higher than this signal level, eliminating the noise entirely, and never mind binning.

Raising the blackpoint will only reduce the visibility of noise if the new blackpoint falls in a range that has no signal below it with noise peaks surpassing the new blackpoint, and there is no signal in the range immediately above the new blackpoint.
If you have an image of a dark gradient, darkest on the left and brightest on the right, raising the blackpoint will blacken the left edge, hiding noise, but the range just above the new blackpoint gets even noisier. Raising the blackpoint helps mainly when there is a black or almost-black area, and there is a huge gap in the histogram above it.
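
A small numpy demo of that gradient example (the 3.75 e- read-noise figure and the 25 e- blackpoint are illustrative):

Code:
# A dark gradient with shot noise plus Gaussian read noise, then a
# blackpoint raised to 25 e- by clipping. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
mean = np.tile(np.linspace(0.0, 50.0, 512), (256, 1))  # dark -> brighter
img = rng.poisson(mean) + rng.normal(0.0, 3.75, mean.shape)
clipped = np.clip(img - 25.0, 0.0, None)

print(clipped[:, :100].std())                 # ~0: left edge now solid black
print(img[:, 410].std(), clipped[:, 410].std())
# The column near 40 e- keeps roughly the same absolute noise after the
# shift; it is just rendered darker, so its S/N looks worse, not better.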
« Last Edit: January 15, 2007, 05:53:58 pm by John Sheehy »
Logged

bjanes

  • Sr. Member
  • Offline
  • Posts: 3387
larger sensors
« Reply #168 on: January 15, 2007, 05:46:46 pm »

Quote
Bill, you are really being obnoxious now.  It is pretty obvious that you're trying to make me look stupid.  I've been polite up to now.

John, you are far from stupid, and in previous posts I have acknowledged that when I have disagreed with you in the past, I have usually been wrong. However, none of us is correct all the time, and sometimes the teacher can learn from the student.

Bill
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #169 on: January 15, 2007, 05:52:33 pm »

Quote
However, none of us is correct all the time, and sometimes the teacher can learn from the student.

I agree; in fact, you made a statement a few weeks ago in which you said that whenever we disagree, I turn out to be right, and I almost replied to say that you should never believe something to be 100% true just because I wrote it.  My contributions, though very confident at times, are meant to be food for thought, not dogma.

However, I really don't know what it is that I'd be wrong about for CCDs (but not CMOS). It's only fair to say what you think someone is wrong about when you imply that they could be wrong.
Logged

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #170 on: January 16, 2007, 05:01:53 pm »

Quote
As I understand CMOS as explained in the Dalsa reference, the output of the CMOS pixel is already in the form of an analog voltage, the pre-amplification and charge-to-voltage conversion having been done by the circuitry on each pixel site.
That seems reasonable. But pre-amplification still happens, just in a different place, and that still opens the possibility that binning the electrons from several pixels before pre-amplification could be useful in reducing the effect of pre-amplifier noise. But the binning would be done "closer to home", before moving the electrons to the edge of the sensor as is done with CCD binning.

By the way, it seems that most read noise with CCDs occurs during the process of moving the electrons from each photo-site to the edges and then corners of the sensor at high rates, with lower read rates being one way of substantially reducing total dark noise in scientific sensors. (Why does no CCD DSLR have a lower-noise, very-low-frame-rate mode?) So if CMOS sensors can pre-amplify before this moving, they can avoid that major source of dark noise, allowing well-implemented CMOS sensors to have far lower total dark noise.
Logged

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #171 on: January 16, 2007, 05:16:29 pm »

Quote
If you have an image of a dark gradient, darkest on the left and brightest on the right, raising the blackpoint will blacken the left edge, hiding noise, but the range just above the new blackpoint gets even noisier.
Can you explain why? I am imagining, for example, setting the black point at 25 electrons (below which the S/N ratio is at best a miserable 5:1), so pixels with signals less than 25 are set to level zero (pure black), with all "better lit" pixels keeping the same signal and the same noise, and so the same S/N ratio. Why would the visible noise level increase in those unchanged brighter-than-black-point pixels?

Perhaps I am misusing the words "black point". Or perhaps there are problems with an abrupt cut-off, even though it is only turning very dark gray to black. Maybe a roll-off at low pixel levels would work better.
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #172 on: January 16, 2007, 06:37:08 pm »

Quote
Can you explain why? I am imagining, for example, setting the black point at 25 electrons (below which the S/N ratio is at best a miserable 5:1), so pixels with signals less than 25 are set to level zero (pure black), with all "better lit" pixels keeping the same signal and the same noise, and so the same S/N ratio. Why would the visible noise level increase in those unchanged brighter-than-black-point pixels?

Because the signal-to-noise ratio is no longer the same when you raise the blackpoint. The signal is now closer to zero, just above the blackpoint. Whatever subject matter you have in that range will replace your old near-black, with more shot noise (and scalar line noise to a smaller degree), since the new black was originally captured as signal. Pushing low signals towards black by raising the blackpoint does not lower their absolute noise. With a Canon 20D at ISO 1600, blackframe read noise is 4.7 ADU, which is about 3.75 electron units. The total noise is (5^2+3.75^2)^0.5 ≈ 6.25, about 25% greater than at the true blackpoint. Then, in order to maintain the same DR in the output, the contrast of this noise is increased slightly for small increases in blackpoint, and more visibly for larger ones.

I am assuming that the adjusted RAW DR will represent the same original DR in the output medium.
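
For anyone checking the arithmetic, the quadrature sum behind those figures works out as:

Code:
import math
shot = math.sqrt(25.0)  # shot noise of a 25 e- signal: 5 e-
read = 3.75             # 20D blackframe read noise from above, in e-
print(math.sqrt(shot**2 + read**2))  # ~6.25 e-, vs 3.75 e- at true black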

Quote
Perhaps I am misusing the words "black point". Or perhaps there are problems with an abrupt cut-off, even though it is only turning very dark gray to black. Maybe a roll-off at low pixel levels would work better.

Perhaps what you want to do is clip to a lower limit at a greypoint; IOW, everything at 25 electrons and less becomes 25, rather than 0. That would keep the visible S/N up in that range. In fact, you can just add some value to true black and get a similar effect, without losing anything. This is effectively what the LCD on the camera does; it doesn't have true black, so the noise is harder to see.

Abrupt cut-offs are a problem in many ways. Even blackpointing at real black can cause inferior results with further processing; any kind of non-sensor binning or downsampling works out better with data that hasn't been blackpointed yet, as the noise about black is almost equally positive and negative. Clipping first before downsizing (or stacking, as well) results in non-linear deep shadows, since all the contributions are 0 or positive, and more of the deep signal has been clipped away as well.
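
A quick simulation of that last point, with invented numbers: averaging unclipped data is unbiased, while clipping at black first lifts the deep-shadow mean, i.e. the non-linearity described above.

Code:
import numpy as np

rng = np.random.default_rng(2)
# A 2 e- signal buried in 3.75 e- of zero-mean read noise (illustrative).
x = 2.0 + rng.normal(0.0, 3.75, 1_000_000)

print(x.mean())                       # ~2.0: averaging first is unbiased
print(np.clip(x, 0.0, None).mean())   # ~2.7: clipping first lifts the shadows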
« Last Edit: January 16, 2007, 06:38:40 pm by John Sheehy »
Logged

Ray

  • Sr. Member
  • Offline
  • Posts: 10365
larger sensors
« Reply #173 on: January 16, 2007, 10:58:24 pm »

This is all very interesting and illuminating. John Sheehy really seems to know his stuff.

However, I'd really like some feedback on my all-digital idea. You're all very quiet on this issue; perhaps because none of you want to make me appear like a complete chump (very nice of you).

I'm struggling to find any major flaw in the idea, apart from the manufacturing difficulties of producing a Foveon type chip with so many pixels and the slow speed of processing such large RAW files with current processors.

My first objection to this idea was the likelihood there wouldn't be sufficient tonality. If there's no distinction to be made between a 'sub-pixel element' being switched on by a strong signal or by a weak signal, then possibly tonality goes down the drain.

However, 2 micron photon detectors simply would not receive strong signals from 35mm lenses. The whole idea of the Olympus 4/3rds system was that 35mm lenses could not resolve anything below 5 microns, but Zuiko lenses could. The 5 micron limit for current analog DSLRs and MFDBs is due to the fact that analog systems have poor S/N ratios below this size. The pixels would be too noisy for a quality system.

All that's required (conceptually) in my all-digital system, is that the camera be aware of its own noise. Any signal, for any color, that results in an increase above that noise floor, results in the sub-pixel element being switched on.

Where's the flaw, please?
Logged

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #174 on: January 18, 2007, 03:59:12 pm »

Quote
Because the signal-to-noise ratio is no longer the same when you raise the blackpoint. The signal is now closer to zero, just above the blackpoint. Whatever subject matter you have in that range will replace your old near-black, with more shot noise (and scalar line noise to a smaller degree), since the new black was originally captured as signal.
It seems I was misusing black-point, or using it differently than you. I was thinking of something often done in post-processing, where one declares that pixels at and below a certain level are transformed to level zero, with, I suppose, some scaling down of levels a bit above that to avoid a sudden drop from dark gray to pure black. This is a purely digital process, so the pixels not blacked out retain their S/N ratio, at least as far as the effects of noise from the analog stages are concerned. Hopefully discretization noise (from rounding the new smaller levels to integer levels) is not too noticeable.
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #175 on: January 18, 2007, 09:29:53 pm »

Quote
It seems I was misusing black-point, or using it differently than you. I was thinking of something often done in post-processing, where one declares that pixels at and below a certain level are transformed to level zero, with, I suppose, some scaling down of levels a bit above that to avoid a sudden drop from dark gray to pure black. This is a purely digital process, so the pixels not blacked out retain their S/N ratio, at least as far as the effects of noise from the analog stages are concerned. Hopefully discretization noise (from rounding the new smaller levels to integer levels) is not too noticeable.

What you explain above still sounds like clipping. If a level of 25 electrons has a S/N of 4, and you reduce 25 electrons to 0 electrons, you still see the image as if it were a signal of 0 electrons, but with 6.25 electrons of noise instead of 5. You don't see the S/N that it is supposed to have.

What I think something like ACR's "shadows" control does is apply a curve, so that the contrast of both the signal and the noise is reduced in these shadow regions (and increased in midtones).
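
One curve with that general shape, purely as a sketch and certainly not ACR's actual formula:

Code:
import numpy as np

def shadow_compress(x, power=1.5):
    # Applied to linear data normalized 0..1: slope < 1 near black
    # (shadow signal AND shadow noise compressed), slope > 1 higher up
    # (midtone contrast increased). Not ACR's actual formula.
    return np.clip(x, 0.0, 1.0) ** power

print(shadow_compress(np.array([0.0, 0.01, 0.1, 0.5, 1.0])))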
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #176 on: January 21, 2007, 07:34:09 am »

Quote
What you explain above still sounds like clipping. If a level of 25 electrons has a S/N of 4, and you reduce 25 electrons to 0 electrons, you still see the image as if it were a signal of 0 electrons, but with 6.25 electrons of noise instead of 5.

5 was not the figure I intended. 5 is the shot component of the 25 electron signal. What used to be at zero before the clipping was the blackframe read noise; 3.75 in the 20D example. So, instead of just having 3.75 electron units of noise at the black of the resulting image, you have 6.25 ((25+3.75^2)^0.5); almost twice as much. The statistics will actually scale to about 60% of the pre-clipping/pre-blackpointed values, as the curve is sliced in half and the mean moves above the clipping point, but this happens to both the 0-electron clip and the 25-electron clip.
« Last Edit: January 21, 2007, 07:35:29 am by John Sheehy »
Logged

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #177 on: January 22, 2007, 11:58:51 am »

Quote
5 is the shot component of the 25 electron signal. What used to be at zero before the clipping was the blackframe read noise; 3.75 in the 20D example. So, instead of just having 3.75 electron units of noise at the black of the resulting image, you have 6.25 ((25+3.75^2)^0.5); almost twice as much.
You have lost me, so let me describe more explicitly a modified proposal, based in part on your ideas. I will describe it in terms of electron counts as indicated by A/D converter output levels. For concreteness, I consider a sensor with a well capacity of 25,000 (as in the 5.4 micron photo-sites of the Olympus E-500, so probably typical of current SLRs).

A/D output corresponding to 25 electrons or less: set to the same level as if no electrons were detected.
A/D output 25 to 25,000: scale linearly to the range 0-25,000, so 25 -> 0, 25,000 -> 25,000, and the slope in between is 1000/999. So roughly:
25 -> 0
50 -> 25.025
75 -> 50.050

It seems to me that noise induced fluctuations between nearby pixels are amplified by the factor 1000/999, so essentially unchanged. Is that your point?

This raises a perceptual question: when looking at very dark parts of an image, with eyes adapted to the overall luminosity of the image, does the detectability of the noise fluctuations depend on the ratio of fluctuation size to the luminosity of that dark part of the image, to the overall luminosity, or to something in between?

If there really is a problem here, my next idea is simply increasing the amount of spatial averaging (noise reduction processing) done at low levels, so that at 25 electrons or less a lot of resolution is sacrificed to avoid visible noise. All this is based on the idea that below about 100 photo-electrons, S/N is less than Kodak's "minimum acceptable" guideline of 10:1, and this should only ever be the case in deep shadows below the level of significant detail. (25 electrons is about 10 stops below that maximum signal of 25,000, so roughly seven stops below mid-tones at a base ISO speed of, say, 100, and still a good three stops below mid-tones even at sixteen times base ISO speed, say 1600.)
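
For concreteness, a few lines of Python for the proposed mapping (my own sketch of it, using the numbers above):

Code:
def raise_blackpoint(e, black=25.0, full=25000.0):
    # Clip at `black`, then rescale the surviving range back to 0..full;
    # the slope is full / (full - black) = 1000/999 for these numbers.
    if e <= black:
        return 0.0
    return (e - black) * full / (full - black)

print(raise_blackpoint(25.0))  # 0
print(raise_blackpoint(50.0))  # ~25.025
print(raise_blackpoint(75.0))  # ~50.050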
Logged

John Sheehy

  • Sr. Member
  • Offline
  • Posts: 838
larger sensors
« Reply #178 on: January 23, 2007, 09:46:50 am »

Quote
You have lost me, so let me describe more explicitly a modified proposal, based in part on your ideas. I will describe it in terms of electron counts as indicated by A/D converter output levels. For concreteness, I consider a sensor with a well capacity of 25,000 (as in the 5.4 micron photo-sites of the Olympus E-500, so probably typical of current SLRs).

A/D output corresponding to 25 electrons or less: set to the same level as if no electrons were detected.
A/D output 25 to 25,000: scale linearly to the range 0-25,000, so 25 -> 0, 25,000 -> 25,000, and the slope in between is 1000/999. So roughly:
25 -> 0
50 -> 25.025
75 -> 50.050
That's actually quite unnecessary.  Losing 25 out of 25000 electron counts reduces highlight headroom by only .0014 stops.

Quote
It seems to me that noise induced fluctuations between nearby pixels are amplified by the factor 1000/999, so essentially unchanged. Is that your point?
Not exactly; my basic point is that raising the blackpoint is just like moving a pile of dirt from one location to another, adding more dirt to it, and erasing the original location. The noise isn't "down there", per se, to be clipped away. The noise is everywhere, at every tonal level. In an absolute measurement, noise increases at higher tonal levels, but it generally decreases relative to signal. When you take a signal that used to be above black and make it black by clipping to 0, you have a new black with a higher level of noise than what used to be the noise level at the old black. The more you raise the blackpoint, the more sharply the S/N ratio drops in the range immediately above the clipping point. The only time that simply raising the blackpoint works is when you raise it to a tonal level where no signal will be directly above it; then there is no signal to experience the great loss in S/N.

Quote
This raises a perceptual question: when looking at very dark parts of an image, but with eyes adapted to the overall luminosity level of the image, does the detectability of the noise fluctuations depend on the ratio of fluctuation size to the luminosity in that dark part of the image, or relative to the overall luminosity, or something in between?
It seems that it exists, to an extent, relative to the scene, but things outside the frame have an effect, too. You're going to see more shadow tones and noise in a dark room, in an image lacking highlights, viewed full-screen, than you will with the same levels next to bright highlight areas, or in a window on a white desktop.

For the levels in your example, 25 electrons at ISO 100, I doubt you will see much of any change when you move the blackpoint, if you are not pushing the exposure index in your render. You have to use the Shadow/Highlight tool or something like it, aggressively, to see such a change. 25 electrons is only 2 to 4 ADU at ISO 100 for DSLRs.

Quote
If there really is a problem here, my next idea is simply increasing the amount of spatial averaging (noise reduction processing) done at low levels, so that at 25 electrons or less a lot of resolution is sacrificed to avoid visible noise.
That should work; I don't know why more converters don't do something like that; it seems that they generally soften their highlights, too, when you apply aggressive NR. While waiting for the feature, you can render one conversion with sharp detail, and one with lots of NR, and use a luminance mask to apply one over the other.
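
A rough sketch of that workaround; the thresholds are invented, and a real mask would be feathered and tuned by eye:

Code:
import numpy as np

def blend_by_luminance(sharp, denoised, lo=0.02, hi=0.10):
    # Take the heavy-NR render in the shadows and the sharp render in
    # the highlights, feathered between `lo` and `hi`. Both images are
    # assumed linear, 0..1, same shape; thresholds are illustrative.
    luma = sharp.mean(axis=-1, keepdims=True)       # crude luminance proxy
    w = np.clip((luma - lo) / (hi - lo), 0.0, 1.0)  # 0 in shadows, 1 above
    return w * sharp + (1.0 - w) * denoised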

Quote
All this is based on the idea that below about 100 photo-electrons, S/N is less than Kodak's "minimum acceptable" guideline of 10:1, and this should only ever be the case in deep shadows below the level of significant detail. (25 electrons is about 10 stops below that maximum signal of 25,000, so roughly seven stops below mid-tones at a base ISO speed of, say, 100, and still a good three stops below mid-tones even at sixteen times base ISO speed, say 1600.)
Well, with most current DSLRs, 25 electrons of signal at ISO 100 is only 2 to 4 ADU above black. Your 25,000-electron ISO 100 sensor would put it at about 4, so let's use that. The read noise is much stronger than the shot noise there; read noise is about 2 ADU for a typical DSLR at ISO 100. That's about 12 electrons, so you have a signal of 25 electrons, 5 electrons of shot noise, and 12 electrons of read noise. That's a total noise of about 13 electrons, only 1 electron stronger than the read noise itself; the shot noise is almost totally irrelevant, as the read noise predominates by a wide margin at this level. The lowest ISOs on DSLRs are mostly crippled by read noise, not shot noise, in the shadows.
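
The arithmetic, assuming the 12-bit converter that those 2 to 4 ADU figures imply:

Code:
import math

full_well, adc_levels = 25000.0, 4096.0  # 12-bit A/D assumed
gain = full_well / adc_levels            # ~6.1 e- per ADU
print(25.0 / gain)                       # the 25 e- signal is only ~4 ADU
read = 2.0 * gain                        # 2 ADU of read noise -> ~12 e-
shot = math.sqrt(25.0)                   # 5 e- of shot noise
print(math.sqrt(shot**2 + read**2))      # ~13 e- total: read-noise dominated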
« Last Edit: January 23, 2007, 09:47:51 am by John Sheehy »
Logged

BJL

  • Sr. Member
  • Offline
  • Posts: 6600
larger sensors
« Reply #179 on: January 23, 2007, 11:32:29 am »

Quote
That's actually quite unnecessary.  Losing 25 out of 25000 electron counts reduces highlight headroom by only .0014 stops.
Indeed, I was being pedantic: alright, I could have just said subtract about 25 e-, which is apparently about 2 to 4 ADU, and zero out if the result is less than 0 ADU. Noise-induced variations in the levels of nearby pixels stay the same but the luminance decreases; another way of thinking about the problem you are talking about, I think.

Quote
That [my new suggestion of more smoothing at low levels] should work; I don't know why more converters don't do something like that; it seems that they generally soften their highlights, too, when you apply aggressive NR. While waiting for the feature, you can render one conversion with sharp detail, and one with lots of NR, and use a luminance mask to apply one over the other.
I think we agree on a plan then! Maybe at least good NR tools do something like this.

Quote
The lowest ISOs on DSLRs are mostly crippled by read noise, not shot noise, in the shadows.
That seems to be the case, at least for now. It suggests perhaps that one rule for the choice of exposure index (ISO speed) is rather film-like: use a high enough EI to get the levels of the shadow regions up to where you want them (within the constraints of highlight headroom), to protect the signal from read noise introduced after pre-amplification, or part way through pre-amplification. And if all else fails, bin pixels!
Logged