Pages: 1 ... 6 7 [8] 9 10   Go Down

Author Topic: larger sensors  (Read 191405 times)

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #140 on: January 08, 2007, 07:41:46 pm »

Just for fun, I sometimes like to speculate on what might be possible as processing chips become faster and more powerful and buffer sizes grow.  

The point has often been made, by Michael as well as by BJL a few pages ago in this thread, that the behaviour of silver halide particles in B&W film photography is pretty close to the concept of a true, all-digital sensor.

These noise issues, which have become really contentious in this thread, are largely due to the fact that sensors are fundamentally analog devices with a lot of digital processing attached.

Would it ever be possible, I wonder, to build a truly digital sensor? In such a sensor, we'd only be concerned with whether or not a photon collector had received sufficient light to be 'turned on'. For color photography, such a sensor would probably be a Foveon type, and the resulting image would consist of 'real' pixels, each consisting of red, green and blue elements that are either switched 'on' or 'off'.

In such a design, it might even be possible to deal with photonic shot noise. For example, suppose a stray 'red' pixel element, way smaller than the resolution limits of the lens, is not switched 'on' within a cluster of red pixels that are switched on, and that cluster is within the resolution limits of the lens. An analyzing algorithm could work out that such a pixel did not receive enough photons to be switched on (due to photonic shot noise), and switch that pixel on, thus reducing the effects of photonic shot noise. The reverse would also take place, ie. a cluster of photodiodes all switched off, bar one or two 'red' pixel elements that received a slightly greater number of photons than their neighbours, would be switched off.
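The repair rule described above can be sketched in a few lines of Python. This is purely illustrative: the function name, the 3x3 neighbourhood, and the "all eight neighbours disagree" criterion are my assumptions, not part of any real sensor pipeline.

```python
import numpy as np

def fix_isolated(bits):
    """Flip any element whose 8 neighbours all disagree with it
    (the 'stray pixel' repair described above). bits: 2-D array of 0/1."""
    out = bits.copy()
    h, w = bits.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            neigh = bits[i-1:i+2, j-1:j+2].sum() - bits[i, j]
            if bits[i, j] == 0 and neigh == 8:    # lone 'off' in an 'on' cluster
                out[i, j] = 1
            elif bits[i, j] == 1 and neigh == 0:  # lone 'on' in an 'off' cluster
                out[i, j] = 0
    return out

field = np.ones((5, 5), dtype=np.uint8)
field[2, 2] = 0               # a stray dark element inside a lit cluster
print(fix_isolated(field))    # all ones: the hole has been filled
```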

Needless to say, the numbers involved in such a design would be astronomical and the processing power required would be enormous. Initially, we might have to return to the tethered system.

By my calculations, a full frame 6cm x 4.5cm sensor (which even BJL thinks might become a reality) would hold around 2 gigapixels at a 2 micron pixel pitch (that is, using the sloppy definition of the term whereby a 3.3mp Foveon sensor is often described as having 10mp).
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #141 on: January 08, 2007, 08:40:00 pm »

This might well be a completely 'screwy' idea. It's partly tongue-in-cheek. Considering that one analog pixel can have 16.7 million meaningful (?) values, 2 billion different values for the entire image seems woefully inadequate.

However, I vaguely recall reading about research that analyzed a number of real-world images and found that the actual number of different pixel values, even in a hi-res image, is nowhere near the 16.7m mark. We're talking numbers in the thousands rather than millions.
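That claim is easy to test on any image. A toy check in Python, with a synthetic smooth gradient standing in for a real photograph (an assumption, since the research Ray recalls isn't to hand):

```python
import numpy as np

# Synthetic smooth "photo" standing in for a real image; 8 bits per channel.
h, w = 1000, 1000
y, x = np.mgrid[0:h, 0:w]
img = np.stack([
    x * 255 // w,
    y * 255 // h,
    (x + y) * 255 // (h + w),
], axis=-1).astype(np.uint8)

# Pack each RGB triple into one 24-bit integer and count distinct colours.
packed = (img[..., 0].astype(np.int64) << 16) \
       | (img[..., 1].astype(np.int64) << 8) \
       |  img[..., 2].astype(np.int64)
n_colours = len(np.unique(packed))
print(n_colours)   # tens of thousands at most, nowhere near the 16.7m ceiling
```

A real photograph has noise and texture a gradient lacks, so its count is higher, but published analyses of typical images still find far fewer distinct colours than the 24-bit gamut allows.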
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #142 on: January 10, 2007, 07:46:25 pm »

Well, I didn't intend to kill off the entire thread. I'm surprised that none of you 'techies' have attempted to shoot down the idea in flames.

Whilst taking my evening exercise yesterday, to stave off boredom and keep my mind active, I tried to work out how many combinations of on/off states there are in a pixel of 3 primary colors. I realised with some dismay that my maths is so poor I had difficulty working this out whilst slowly jogging along the road. Is it 6, or possibly 9?

When I returned from my exercise, I got out pen and paper and arrived at a figure of 8. (Is this correct?)

Of course, our full frame 6x4.5cm sensor with 2 gigapixel elements (667m real pixels) is way beyond the MTF50 resolution limit of MF lenses. The processing, in-camera or out, would group such pixels into clusters of, say, 9, which would give us around 74 megapixels at a 6 micron pixel pitch, each one pixel-sharp.

According to my maths, the number of possible values of a cluster of 9 'real' Foveon-type pixels, with each individual pixel having 8 possible values, is 8 to the power of 9, ie 134 million, somewhat better than the 16.7 million we currently use when printing.

Apart from the number-crunching difficulty, I can't see any major flaws in such a design. We've already got 2 micron pixels in P&S cameras. To make the processing task more realistic, we could consider the number of 2 micron detectors that would fit on a full frame Foveon type 35mm sensor. That's around 630m, which equates to 210m real pixels, each with red, green and blue elements. Group 9 of those into a cluster and we get a 23mp FF 35mm sensor, which is close to the next generation of 35mm DSLRs.
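Ray's figures check out to within rounding. A quick back-of-the-envelope in Python, using the frame sizes and the triple-counting convention from the posts above (the exact integer division is my choice; Ray rounded a little differently):

```python
# Checking the arithmetic: 2 µm pitch, Foveon-style triple counting.
pitch_um = 2
per_pixel_states = 2 ** 3            # R, G, B each on/off
cluster_values = per_pixel_states ** 9

sites_645 = (60_000 // pitch_um) * (45_000 // pitch_um)   # 6 x 4.5 cm frame
sites_135 = (36_000 // pitch_um) * (24_000 // pitch_um)   # 36 x 24 mm frame

print(per_pixel_states)              # 8, as Ray worked out on paper
print(cluster_values)                # 134217728, Ray's "134 million"
print(sites_645 * 3)                 # 2025000000: the "2 gigapixels", triple-counted
print(sites_135 * 3, sites_135 // 9) # ~648m detectors; 24m clustered ("23mp")
```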

However, such a truly digital sensor, from the ground up, would be far better than a 23mp analog Foveon sensor, which struggles with noise, dynamic range, cross-talk and aliasing etc. In such an all-digital design, all that's required for total pixel sharpness is that the light falling on any individual detector should be greater than the noise. Noise 49%, signal 51% results in a perfect, noise-free rendition.

Should I be making my way to the patents office?  
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #143 on: January 11, 2007, 09:04:36 am »

Quote
In such a design, it might even be possible to deal with photonic shot noise.

Capturing more photons is the only way to "deal with" shot noise.  Read noise is what needs to be dealt with.

Quote
For example, if a stray 'red' pixel element, way smaller than the resolution limits of the lens, is not switched 'on' in a cluster of red pixels that are switched on and that cluster is within the resolution limits of the lens, some analyzing algorithm could work out that such a pixel did not receive enough photons to be switched on (due to photonic shot noise), and switch that pixel on, thus reducing the effects of photonic shot noise. The reverse would also take place, ie. a cluster of photodiodes all switched off, bar one or two 'red' pixel elements that received a slightly greater number of photons than their neighbours, would be switched off.


The result would be a deterioration of the original capture. There's nothing wrong with the original photonic capture that needs to be fixed. Any isolated photon, or lack thereof, is statistically more likely to be accurate than a "fixed" pixel is. Also, if you think about it, any regular pattern that is slightly broken would ask for fixing too. Your scenario only occurs at the clipping point, and at black, in fact. You need a mixture of photons and holes in a great variety to have any tonality. Shot noise is not noise in the way noise is normally thought of; it is not some unwanted outside invasion; it is a symptom of the limited nature of the signal itself.
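John's point, that "repairing" isolated outliers deteriorates the capture, can be demonstrated with a toy simulation using only the standard library. The 90% on-probability and the 1-D two-neighbour rule are arbitrary assumptions made for illustration:

```python
import random

random.seed(0)
p_true = 0.9            # true fraction of detectors that should read 'on' (assumed)
n = 100_000
bits = [1 if random.random() < p_true else 0 for _ in range(n)]

# Unbiased estimate of the tone: just average the raw binary capture.
raw_mean = sum(bits) / n

# "Repair" rule: flip any isolated 0 whose two neighbours are both 1.
fixed = bits[:]
for i in range(1, n - 1):
    if bits[i] == 0 and bits[i - 1] == 1 and bits[i + 1] == 1:
        fixed[i] = 1
fixed_mean = sum(fixed) / n

print(raw_mean)    # close to 0.9, the true tone
print(fixed_mean)  # pushed well above 0.9: the "repair" has biased the tone
```

The raw average is within sampling error of the true value; the "fixed" version is systematically too bright, exactly the deterioration John predicts.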
« Last Edit: January 11, 2007, 09:06:11 am by John Sheehy »
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #144 on: January 11, 2007, 09:36:13 am »

Quote
Your scenario only occurs at the clipping point, and black, in fact.  You need a mixture of photons and holes in a great variety to have any tonality.  Shot noise is not a noise in the way noise is normally thought of; it is not some unwanted outside invasion; it is a symptom of the limited nature of the signal itself.

But that's 'analog think', John   . In my all-digital system, there will be lots of 'cliff edges'; instances where noise is equal to the signal. There are no half measures. A pixel element is either switched on for perfect, noise-free sharpness, or it's black. You don't get a microscopic black speck on a red flower petal that an ordinary camera lens can pick up. An algorithm should be able to work out, 'Hey, that black speck shouldn't be there', and turn the pixel on.

On the other hand, if there was a cluster of black specks, the algorithm would let them be.
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #145 on: January 11, 2007, 10:17:16 am »

Quote
Any isolated photon or lack thereof is statistically more likely to be accurate than a "fixed" pixel is.  Also, if you think about it, any regular pattern that is slightly broken would ask for fixing, too.  Your scenario only occurs at the clipping point, and black, in fact.  You need a mixture of photons and holes in a great variety to have any tonality.  Shot noise is not a noise in the way noise is normally thought of; it is not some unwanted outside invasion; it is a symptom of the limited nature of the signal itself.

I'm not referring to the finished, processed pixel that appears on the monitor after downloading the RAW image, but to the 27 (or so) pixel elements (each 2 microns) that make up one 6 micron (or so) pixel. The tonality of each processed (finished) pixel is achieved through the combination of those 27 on/off values. Any isolated, single pixel element that's switched off in a group of 'ons' is clearly due to noise and could be fixed.

Patterns would be treated similarly if they consisted of single pixel elements.

I'm sure there's a huge flaw in my reasoning but I just can't see it yet   .
Logged

BJL

  • Sr. Member
  • ****
  • Offline
  • Posts: 6600
larger sensors
« Reply #146 on: January 11, 2007, 10:27:20 am »

Quote
If BJL has some additional information then I would like to see it and his references.
Here is one fairly recent reference, from December 2005, on a Dalsa 28MP color sensor with binning: http://www.dalsa.com/pi/documents/2005_DALSA_IEDM_Presentation.pdf
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling. The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.

What Dalsa indicates on page 9 is that with its 4:1 binning, the S/N ratio is improved at low light levels (where read noise is the dominant noise source) by almost a factor of four, with the improvement declining, as the light level increases (so that photon shot noise becomes the main noise source), to a bit above 2.

[Warning about the summary on page 38: when electronic engineers talk about S/N ratios in sensors, a factor of two is 6dB, not 3dB.]

These numbers correspond to what one gets if read-noise (in electrons RMS) is not increased, since 4:1 binning will quadruple signal in electrons, which in turn will double photon shot noise, as it is proportional to the square root of signal.
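BJL's factors of "almost four" and "a bit above 2" fall out of a simple two-term noise model: shot noise is sqrt(signal), read noise is a constant per readout, and 4:1 binning quadruples the signal while keeping a single read. The 10 e- read noise and the two signal levels below are illustrative assumptions, not Dalsa's figures:

```python
import math

def snr(signal_e, read_e):
    # shot noise sqrt(S) and read noise added in quadrature
    return signal_e / math.sqrt(signal_e + read_e ** 2)

read_e = 10.0   # electrons RMS per readout, assumed unchanged by binning

# 4:1 binning quadruples the collected signal but still costs one read.
low, high = 5.0, 100_000.0   # electrons per photosite (assumed)
gain_low  = snr(4 * low,  read_e) / snr(low,  read_e)
gain_high = snr(4 * high, read_e) / snr(high, read_e)
print(round(gain_low, 2))    # close to 4: read-noise limited
print(round(gain_high, 2))   # close to 2: shot-noise limited
```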

This Dalsa claim also implies that dark current noise is rather low compared to read-noise, as otherwise binning would surely about double dark current noise (assuming independence of the dark noise at the four binned photosites, which remember are not even quite adjacent to each other). If dark current noise dominates at low light levels, S/N ratio would only about double at low light levels with 4:1 binning.

This fits with what I have heard lately: dark current noise is only significant in long exposures, of order of a second or longer, not at "hand-holdable" shutter speeds. At least for CCD sensors; maybe good CMOS sensors have read-noise low enough to be comparable to dark current noise.
« Last Edit: January 11, 2007, 10:38:17 am by BJL »
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #147 on: January 11, 2007, 01:42:41 pm »

Quote
But that's 'analog think', John   . In my all-digital system, there will be lots of 'cliff edges'; instances where noise is equal to the signal. There are no half measures. A pixel element is either switched on for perfect, noise-free sharpness, or it's black. You don't get a microscopic black speck on a red flower petal that an ordinary camera lens can pick up. An algorithm should be able to work out, 'Hey, that black speck shouldn't be there', and turn the pixel on.

If your pixels are so small or insensitive that some don't get a photon in a red flower petal that is illuminated, there will be no black speck; it will just "not" contribute at all to local luminance, as it probably shouldn't.  Tiny pixels like that won't be intended mainly for 100% view; they will contribute to the overall local luminance.  An area that is all red, except for one black pixel, is clipped.  The highlights should only have a majority of pixels turned on in any given area, never all.  THAT is clipping.

Quote
On the other hand, if there was a cluster of black specks, the algorithm would let them be.

If you give it enough thought, I think you will realize that there is no value in trying to outsmart shot noise.  It will only lead to more noise.  Shot noise is actually the very fabric of light.  You can't figure out a better truth than what it is telling you; if you want less shot noise, relative to signal, get more signal.  Don't fabricate it.
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
larger sensors
« Reply #148 on: January 11, 2007, 05:19:05 pm »

Quote
Here is one fairly recent reference, from December 2005 on a Dalsa 28MP color sensor with binning
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling. The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.
What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.

[Warning about the summary on page 38: when electronic engineers talk about S/N ratios in sensors, a factor of two is 6dB, not 3dB.]

These numbers correspond to what one gets if read-noise (in electrons RMS) is not increased, since 4:1 binning will quadruple signal in electrons, which in turn will double photon shot noise, as it is proportional to the square root of signal.



Thanks for the info, BJL. The reference does show that John's software binning is not as effective as hardware binning at low levels of illumination. Since the S:N is improved 4x with 4:1 binning, the output of the superpixel is quadrupled as expected, but the read noise for the superpixel is hardly more than that of a single pixel. However, at higher levels of illumination, the S:N advantage drops to 2:1 as shot noise predominates; in that instance the effective read noise of the superpixel increases. This is what one might expect from the known effects of ISO on read noise: when more electrons are read at a lower ISO, the read noise increases. Therefore, at higher levels of illumination, hardware binning has no advantage.

Since a 7 MP image has enough image detail for an excellent 8 by 10 inch print, the Dalsa chip has a very nice feature there.

The reference is also interesting, since it shows how binning can be accomplished with a Bayer array. That is novel.

Bill
« Last Edit: January 11, 2007, 05:24:28 pm by bjanes »
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
larger sensors
« Reply #149 on: January 11, 2007, 05:27:30 pm »

Quote
Capturing more photons is the only way to "deal with" shot noise.  Read noise is what needs to be dealt with.


Yes, indeed. Even though it may seem counterintuitive, the higher the (absolute) shot noise the better, since the S:N improves. An ISO 100 capture has more shot noise than an ISO 3200 capture.

Bill
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #150 on: January 11, 2007, 05:55:20 pm »

Quote
Yes, indeed, even though it may seem counterintuitive, the higher the shot noise the better since the S:N improves. An ISO 100 capture has more shot noise than an ISO 3200 capture.

This is why I often use the qualifiers "absolute" and "relative".  Shot noise is higher, in an absolute sense, with a stronger signal.  However, it is smaller, *relative* to the signal.

If you use the camera's metered exposure, the ISO 3200 image will have less absolute shot noise than the ISO 100 image, but the noise will be more visible at 3200, especially in the highlights, because it is stronger relative to the signal.
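The absolute/relative distinction is easy to put in numbers. The 32,000 e- full well and the "photons scale with 1/ISO at metered exposure" assumption below are mine, purely for illustration:

```python
import math

full_well = 32_000   # electrons collected at an ISO 100 metered exposure (assumed)

for iso in (100, 3200):
    electrons = full_well * 100 // iso   # metered exposure: photons scale with 1/ISO
    shot = math.sqrt(electrons)          # absolute shot noise, in electrons
    # ISO 3200 shows LOWER absolute shot noise but a LOWER S/N (noise more visible)
    print(iso, round(shot, 1), round(electrons / shot, 1))
```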
Logged

BJL

  • Sr. Member
  • ****
  • Offline
  • Posts: 6600
larger sensors
« Reply #151 on: January 11, 2007, 07:35:02 pm »

Quote
The reference is also interesting, since it shows how binning can be accomplished with a Bayer array. That is novel.
Yes, Bayer array binning seems to be the latest trend: Kodak also does it in a new 10MP 4/3" format interline CCD sensor, the KAI-10100. That one also does 2:1 binning, giving 5MP. (This is probably the sensor in the Olympus E-400).
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #152 on: January 11, 2007, 09:23:26 pm »

Quote
Here is one fairly recent reference, from December 2005 on a Dalsa 28MP color sensor with binning
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling.

It's different from downsampling in some ways.  It takes the image to a deeper bit depth (which would take an extra processing step with downsampling), and has no filtering, other than the loss of the original's high frequencies in the process.  I don't see how software 2x2 binning would be any different from hardware 2x2 binning, other than the potential 1-stop decrease in blackframe read noise.  That would be the single benefit, AFAICT (other than write speed, storage concerns, etc).  It would seem to me that enabling this mode is something you'd only want to do in special circumstances, and I'd certainly hope that binning is not the only way to provide the higher ISOs; you will still get more detailed highlights and midtones without the binning.

I wonder how close to 0.25x the read noise really gets.
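That one-stop gap follows directly from summing four independent reads versus making one: four reads add in quadrature, sqrt(4) = 2x the noise. A stdlib Monte Carlo sketch (the 5 e- per-read noise is an arbitrary assumption):

```python
import math
import random

random.seed(1)
sigma = 5.0        # read noise per readout, electrons RMS (assumed)
n = 50_000

def std(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Software 2x2 binning: four separate reads, summed after the fact.
soft = [sum(random.gauss(0, sigma) for _ in range(4)) for _ in range(n)]
# Hardware binning: charge merged on-sensor first, then a single read.
hard = [random.gauss(0, sigma) for _ in range(n)]

ratio = std(soft) / std(hard)
print(round(ratio, 2))   # close to 2.0, i.e. one stop worse for software binning
```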

Quote
The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.

What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.

About the same as software binning.

Quote
This Dalsa claim also implies that dark current noise is rather low compared to read-noise, as otherwise binning would surely about double dark current noise (assuming independence of the dark noise at the four binned photosites, which remember are not even quite adjacent to each other). If dark current noise dominates at low light levels, S/N ratio would only about double at low light levels with 4:1 binning.

This fits with what I have heard lately: dark current noise is only significant in long exposures, of order of a second or longer, not at "hand-holdable" shutter speeds. At least for CCD sensors; maybe good CMOS sensors have read-noise low enough to be comparable to dark current noise.

Here's my 20D with 100% crops of the RAW greyscale at 6 different shutter speeds.  Black frames, windowed at 128 to 160 ADUs, with 2.2 gamma applied (effectively ISO 1600 pushed to ISO 102,400):

[image: black-frame crops at the six shutter speeds]
Clearly, only the 30s is significantly more noisy than the 1/1000.  I did 1/8000 too, but there were 7, and 6 are easier to format.  The 1/8000 was like the 1/1000.  The Std dev is 9.0 at 30s, 5.2 at 4 seconds, and 4.7 at 1/2 through 1/8000.  The max value is 4095 at 30s, 3179 at 4s, 653 at 1/2, and 263 at 1/15, with no significant reduction with shorter "exposures".
« Last Edit: January 11, 2007, 09:50:31 pm by John Sheehy »
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #153 on: January 11, 2007, 09:45:42 pm »

Quote
If your pixels are so small or insensitive that some don't get a photon in a red flower petal that is illuminated, there will be no black speck; it will just "not" contribute at all to local luminance, as it probably shouldn't. 

John,
That's quite right! An actual, totally 'black speck', as seen on the monitor, could only occur if all 27 sub-pixel elements were switched off. (Remember, I'm talking about a 6 micron Foveon-type pixel consisting of nine 2 micron Foveon sub-pixels: 27 photon collectors in total for each pixel seen on the monitor.)

The luminance range of the 6 micron Foveon pixel stretches from 'all 27 sub-pixels off' (black) to 'all 27 sub-pixels on' (white). The possible number of values in between these two extremes is given by 8 to the power of 9, ie 134m.

Clearly, it doesn't make any visible difference if a single sub-pixel is switched off when it should be switched on. The change in tonality of the 6 micron pixel would be altered so slightly, one wouldn't notice. But a few random sub-pixels, within the group of 27, that are in the wrong state, could make a visual difference.

Quote
If you give it enough thought, I think you will realize that there is no value in trying to outsmart shot noise.  It will only lead to more noise.  Shot noise is actually the very fabric of light.  You can't figure out a better truth than what it is telling you; if you want less shot noise, relative to signal, get more signal.  Don't fabricate it.

Maybe you are right. However, I haven't stated that shot noise will be distinguished from other types of noise in such a system. The purpose is to get as accurate a signal as possible. It makes no difference to the final result what the source of the noise is. Noise is noise whatever the source, ie. inaccuracy.

In these examples of isolated sub-pixels which are in the wrong state, it seems to me, if I've understood the nature of shot noise, that shot noise will often be a contributing factor. Let's look at what I imagine happens to a sub-pixel in the 'cliff edge' situation. Noise (from all sources) is 50.1%; signal is 49.9%. The sub-pixel is switched off because the signal threshold for switching it on has not been reached. We don't actually know that the signal is 49.9%. It doesn't really matter. The reality is, the sub-pixel is in a state of 'off' when it should be 'on'. How do we know it should be 'on'? Because there's no reason for it to be 'off' (except noise) if it's surrounded by a cluster of sub-pixels which are on.

Anyway, maybe this is just a red herring and there's no need for an algorithm to make such decisions. My imaginary system is not founded on such a procedure. It simply occurred to me that maybe this could be a method of tackling photonic noise. If there are too many errors due to insufficient light, then maybe nothing can be done except increase exposure.

So let's ignore this imaginary noise reduction system, which would take an enormous amount of processing power anyway, and concentrate on the fundamental principle of a 6 micron Foveon type pixel that gets its tonality from the on/off states of 27 sub-pixel photon collectors.

Any flaws in that idea?  
« Last Edit: January 11, 2007, 09:51:41 pm by Ray »
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
larger sensors
« Reply #154 on: January 11, 2007, 11:23:15 pm »

Continuing with the shot noise concept, let's try and flesh this out a bit more.

We have a photon detector that has received 50% of its signal from noise and 50% from the photographed target, through the lens. The color is red. The total signal strength is just below the threshold for switching the photon detector 'on'.

The neighbouring 'red' photon detector has received the same amount of other-than-photonic noise but a higher degree of photonic noise, so the signal through the lens is, say, 52% and total (non-photonic) noise 48%. The total signal strength, however, is greater by a factor that pushes it beyond the 'switch on' threshold.

We have 2 adjacent photon detectors that have received a borderline signal strength. Whatever the signal strength in our all-digital system, at the most fundamental level there's only right or wrong, on or off.

It doesn't matter if a particular 'borderline' detector has been switched on due to a random increase in non-photonic noise, or a random increase in photonic noise. The question is, 'which state is more accurate, on or off?"
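Ray's 'cliff edge' can be simulated directly: if the mean photon arrival rate sits right at the switching threshold, whether a given detector reads 'on' or 'off' is close to a coin flip, which is why neither state is more "accurate" than the other. A stdlib sketch (the threshold of 10 photons is an arbitrary assumption):

```python
import random

random.seed(2)
threshold = 10        # photons needed to switch a detector 'on' (assumed)
mean = 10.0           # borderline illumination: mean arrivals equal the threshold

def poisson(lam):
    # Knuth's algorithm for Poisson-distributed photon counts, stdlib only
    L, k, p = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= L:
            return k - 1

trials = 20_000
frac_on = sum(poisson(mean) >= threshold for _ in range(trials)) / trials
print(frac_on)    # roughly a coin flip: at the cliff edge, 'on' carries no more truth than 'off'
```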
« Last Edit: January 11, 2007, 11:26:32 pm by Ray »
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
larger sensors
« Reply #155 on: January 12, 2007, 07:42:06 am »

Quote
Dalsa, and I, are referring to real binning on the sensor, not "software binning", which sounds like another name for downsampling. The electrons from groups of nearby photosites are merged as soon as they get to the edge of the sensor, before the fast read-out of a line along the edge of the sensor.

What Dalsa indicates on page 9 is that with its 4:1 binning the S/N ratio is improved at low light levels (where read noise is the the dominant noise source) by almost a factor of four, with the improvement declining as light level increases (so that photon shot noise is the main noise source) to a bit above 2.
Quote
About the same as software binning.

That is not how I understand hardware binning. In the case of a small electron count, the four pixels are binned into one superpixel, but the read noise is the same as for one of the smaller unbinned pixels. In the case of software binning you have four reads, with their accompanying noise, combined in the resulting downsized pixel.

Bill
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #156 on: January 12, 2007, 08:06:21 am »

Quote
That is not how I understand hardware binning. In the case with a small electron count, the four pixels are binned into one superpixel, but the read noise is the same as for one of the smaller unbinned pixels. In the case of software binning you have four reads with their accompaning noise combined in the resulting downsized pixel.


Yes, but I replied to the shot noise figure.
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
larger sensors
« Reply #157 on: January 12, 2007, 08:47:45 am »

Quote
Yes, but I replied to the shot noise figure.
Quote
Capturing more photons is the only way to "deal with" shot noise.  Read noise is what needs to be dealt with.

John,

At times you stress read noise, as shown above, but when it is convenient you ignore it. If you are in a low light situation where read noise is predominant, it is not wise to ignore it. Hardware binning is one solution and it is widely used in scientific applications: Nikon on Binning: http://www.microscopyu.com/tutorials/java/digitalimaging/signaltonoise/index.html

Bill
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #158 on: January 12, 2007, 09:21:03 am »

Quote
At times you stress read noise as shown above, but when it is convenient you ignore it. If you are in a low light situation where read noise is predominant, it is not wise to ignore it. Hardware binning is one solution and it is widely used in scientific applications: Nikon on Binning

Bill, you are really being obnoxious now.  It is pretty obvious that you're trying to make me look stupid.  I've been polite up to now.

I didn't ignore the read noise issue; I didn't comment on it IN THAT SENTENCE.  My point is that other than read noise potential, software binning is just as good as hardware binning.

Hardware binning is not without compromise.  You lose detail.  Hardware binning is only without compromise when you don't want the detail.
« Last Edit: January 12, 2007, 09:29:10 am by John Sheehy »
Logged

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
larger sensors
« Reply #159 on: January 12, 2007, 09:27:53 am »

Quote
At times you stress read noise as shown above, but when it is convenient you ignore it. If you are in a low light situation where read noise is predominant, it is not wise to ignore it. Hardware binning is one solution and it is widely used in scientific applications: Nikon on Binning

Yes, that's a nice applet, but remember, it is theoretical.  It would be interesting to see some real-world data.  It doesn't include all noises.  All cameras have read noise that is directly proportional to signal strength, and never achieve the theoretical S/N for the extreme highlights.  My XTi never goes above 100:1, for instance.
« Last Edit: January 12, 2007, 09:30:14 am by John Sheehy »
Logged