
Author Topic: 1DS3 vs 5D CoC shootout in MFDB forum  (Read 55229 times)

Mark D Segal

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #100 on: July 14, 2008, 10:17:35 am »

Quote
Nor do I, yet it does seem intuitive to me that a larger sensor requires more light than a smaller sensor requires, in order to record the same scene. If you stitch together two 1Ds3 sensors, you need double the amount of light to fully expose that doubled sensor area, and as a consequence DR is increased by approximately one stop.

In fact Mark, it seems to me that this increase in DR would apply whatever the initial sensor size. Double the area of a 5D sensor and presumably you'd still get approximately one stop increase in DR. Double the area of a G9 or Olympus E3 sensor and you'd expect to get the same increase in DR despite the fact that the pixels are a different size in each case.

Ray, I'm skeptical about this. If you need twice the amount of light to cover twice the sensor area, it would seem to me that each photosite is receiving the same amount of light it would have received whether the sensor was half the size or twice the size as long as the relationship between total light and total sensor area remains the same. So how does the DR change? I think we need one of our Forum physicists to step in here!
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

BJL

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #101 on: July 14, 2008, 11:58:57 am »

Quote
If you need twice the amount of light to cover twice the sensor area, it would seem to me that each photosite is receiving the same amount of light it would have received whether the sensor was half the size
Read Emil's excellent explanation above. In brief, at equal exposure index (ISO speed)
- Doubling sensor area with equal pixel count doubles the light per pixel, and so can improve S/N ratio (by about 1/2 stop).
- Doubling sensor area with equal pixel size can give equal per-pixel S/N ratio but twice as many pixels: downsampling to the lower pixel count can then increase the S/N ratio over what the smaller sensor gives (again) by about 1/2 stop.
Either way one can probably buy an extra half stop of D/R and one stop of usable ISO speed for each doubling of sensor area.


Aside: this gathering of twice as much light requires either (1) a longer exposure time and/or (2) a larger aperture diameter and less DOF, as with equal f-stop and longer focal length. Larger sensors of "equal technology" offer no free lunch in this IQ comparison.
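For concreteness, a minimal sketch of that arithmetic in Python, assuming photon shot noise is the only noise source (so S/N scales as the square root of the photons collected); the function name and numbers are purely illustrative:

```python
import math

def sn_gain_stops(area_ratio):
    """Stops of S/N improvement when total collected light scales with sensor area."""
    sn_ratio = math.sqrt(area_ratio)   # S/N ~ sqrt(photon count) under pure shot noise
    return math.log2(sn_ratio)         # express the ratio in stops (powers of two)

print(sn_gain_stops(2))   # doubling the area    -> 0.5 stop
print(sn_gain_stops(4))   # quadrupling the area -> 1.0 stop
```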
« Last Edit: July 14, 2008, 11:59:51 am by BJL »
Logged

Ray

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #102 on: July 14, 2008, 10:48:25 pm »

Quote
Read Emil's excellent explanation above. In brief, at equal exposure index (ISO speed)
- Doubling sensor area with equal pixel count doubles the light per pixel, and so can improve S/N ratio (by about 1/2 stop).
- Doubling sensor area with equal pixel size can give equal per-pixel S/N ratio but twice as many pixels: downsampling to the lower pixel count can then increase the S/N ratio over what the smaller sensor gives (again) by about 1/2 stop.
Either way one can probably buy an extra half stop of D/R and one stop of usable ISO speed for each doubling of sensor area.
Aside: this gathering of twice as much light requires either (1) a longer exposure time and/or (2) a larger aperture diameter and less DOF, as with equal f-stop and longer focal length. Larger sensors of "equal technology" offer no free lunch in this IQ comparison.

BJL,
Is it as little as 1/2 a stop of improved S/N resulting from a doubling of light gathering capacity? If so, why?

If one were to stitch together four 1Ds3 sensors, one would get an 84 MP sensor of dimensions 72mm x 48mm. Are you saying that such a huge sensor would have merely a one stop S/N advantage over the 1Ds3?

I'm not arguing you are wrong. It just seems too conservative a figure, intuitively.
Logged

ejmartin

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #103 on: July 15, 2008, 08:16:36 am »

Quote
BJL,
Is it as little as 1/2 a stop of improved S/N resulting from a doubling of light gathering capacity? If so, why?

If one were to stitch together four 1Ds3 sensors, one would get an 84 MP sensor of dimensions 72mm x 48mm. Are you saying that such a huge sensor would have merely a one stop S/N advantage over the 1Ds3?

I'm not arguing you are wrong. It just seems too conservative a figure, intuitively.


Two typical noise sources are present: Photon shot noise and the noise in the electronics that processes the data (so-called read noise).  Photon noise rises as the square root of the number of photons collected, so if you quadruple the number of photons collected by stitching together four sensors, you double the amount of photon noise.  S/N goes up by 4/2=2 when area goes up by four.  It's the same for read noise.  Four pixels were doing the same job that one was before; the read noise per pixel is constant, and noise adds as RMS as I mentioned above, and so doubles for the combined four pixels.

So indeed doubling the area (one stop more light) is only a half stop more S/N (since stops are powers of two, and 1/2 stop means square root two).
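A quick numerical check of this argument (a sketch with made-up photon and read-noise figures, not data from any particular camera):

```python
import math

def snr(signal_e, read_noise_e):
    """S/N with photon shot noise (variance = signal) and read noise added in quadrature."""
    return signal_e / math.sqrt(signal_e + read_noise_e**2)

N, r = 10000.0, 5.0                 # photons per pixel and read noise, hypothetical
one_pixel   = snr(N, r)             # a single pixel of the original sensor
four_pixels = snr(4 * N, 2 * r)     # 4x the photons; four read noises add as RMS -> 2r

print(four_pixels / one_pixel)              # ~2: quadrupling the area doubles S/N
print(math.log2(four_pixels / one_pixel))   # ~1 stop, i.e. 1/2 stop per doubling
```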
Logged
emil

Mark D Segal

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #104 on: July 15, 2008, 08:55:10 am »

Quote
Read Emil's excellent explanation above. In brief, at equal exposure index (ISO speed)
- Doubling sensor area with equal pixel count doubles the light per pixel, and so can improve S/N ratio (by about 1/2 stop).
- Doubling sensor area with equal pixel size can give equal per-pixel S/N ratio but twice as many pixels: downsampling to the lower pixel count can then increase the S/N ratio over what the smaller sensor gives (again) by about 1/2 stop.
Either way one can probably buy an extra half stop of D/R and one stop of usable ISO speed for each doubling of sensor area.
Aside: this gathering of twice as much light requires either (1) a longer exposure time and/or (2) a larger aperture diameter and less DOF, as with equal f-stop and longer focal length. Larger sensors of "equal technology" offer no free lunch in this IQ comparison.

Re your first bullet - OK, but I don't think Ray is talking about doubling the sensor area leaving the pixel count unchanged.

Second bullet OK, but I don't see how this contributes to greater D/R if by D/R you mean Dynamic Range.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

bjanes

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #105 on: July 15, 2008, 09:02:23 am »

Quote
Two typical noise sources are present: Photon shot noise and the noise in the electronics that processes the data (so-called read noise).  Photon noise rises as the square root of the number of photons collected, so if you quadruple the number of photons collected by stitching together four sensors, you double the amount of photon noise.  S/N goes up by 4/2=2 when area goes up by four.  It's the same for read noise.  Four pixels were doing the same job that one was before; the read noise per pixel is constant, and noise adds as RMS as I mentioned above, and so doubles for the combined four pixels.

So indeed doubling the area (one stop more light) is only a half stop more S/N (since stops are powers of two, and 1/2 stop means square root two).

Emil's analysis applies to current dSLRs, but not to scientific sensors where pixel binning can be performed in hardware before digitization. Consider a CCD with 2 by 2 pixel binning as described here (http://www.photomet.com/pm_solutions/library_encyclopedia/index.php).

Without pixel binning, each pixel has a read noise and if 4:1 downsizing is done in software subsequent to digitization, one still has 4 read noises. However, with 2 by 2 pixel binning, the superpixel can be read with only one read noise, so the S:N is 4:1 as compared to the single pixels.
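The read-noise part of that argument can be put into a few lines (a sketch using the 10-electron figures from the Photometrics example and ignoring photon shot noise, as that example does):

```python
import math

signal_per_pixel = 10.0   # electrons collected in each pixel (illustrative)
read_noise = 10.0         # electrons added per readout event (illustrative)

# 4:1 downsizing in software: four separate readouts, read noises add in quadrature
sw_signal = 4 * signal_per_pixel
sw_noise  = math.sqrt(4) * read_noise
print("software sum :", sw_signal / sw_noise)   # 40/20 = 2:1

# 2x2 hardware binning: charge combined on-chip, then a single readout
hw_signal = 4 * signal_per_pixel
hw_noise  = read_noise
print("hardware bin :", hw_signal / hw_noise)   # 40/10 = 4:1 (a single pixel gives 1:1)
```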

CCDs typically have a high fill factor, but this factor is considerably lower in CMOS sensors such as those now used in the best 35 mm style DSLRs, as explained here. This is due to the extra circuitry added to each pixel for signal processing. If the overall sensor size is held constant and the pixel count is quadrupled, this circuitry is also quadrupled, so the fill factor must decrease compared with that of the larger pixel before the increase in pixel count. Also, there is a certain amount of dead space between pixels, which further decreases the fill factor.

I see that Michael has previewed a new Phase One digital back with variable resolution. I would presume this is a CCD with variable pixel binning.

Bill
« Last Edit: July 15, 2008, 09:05:28 am by bjanes »
Logged

Ray

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #106 on: July 15, 2008, 09:39:09 am »

Quote
Two typical noise sources are present: Photon shot noise and the noise in the electronics that processes the data (so-called read noise).  Photon noise rises as the square root of the number of photons collected, so if you quadruple the number of photons collected by stitching together four sensors, you double the amount of photon noise.  S/N goes up by 4/2=2 when area goes up by four.  It's the same for read noise.  Four pixels were doing the same job that one was before; the read noise per pixel is constant, and noise adds as RMS as I mentioned above, and so doubles for the combined four pixels.

So indeed doubling the area (one stop more light) is only a half stop more S/N (since stops are powers of two, and 1/2 stop means square root two).

Hmm! Doesn't seem much, does it! I wonder if I should take the trouble to test this experimentally with the 40D. I could bracket a few exposures at F8 using a 50mm lens, then bracket a few more exposures at F13 using an 80mm lens (same scene, same position), then examine details in the shadows and match the appropriate overexposure at F8 and 50mm that visually has the same amount of shadow noise as the correct exposure at F13 and 80mm (or what would have been the correct exposure if the sensor had been full frame).
Logged

Ray

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #107 on: July 15, 2008, 09:52:56 am »

Quote
Emil's analysis applies to current dSLRs, but not to scientific sensors where pixel binning can be performed in hardware before digitization. Consider a CCD with 2 by 2 pixel binning as described here.

Without pixel binning, each pixel has a read noise and if 4:1 downsizing is done in software subsequent to digitization, one still has 4 read noises. However, with 2 by 2 pixel binning, the superpixel can be read with only one read noise, so the S:N is 4:1 as compared to the single pixels.


I've always assumed it's meaningful to compare equal size files or equal size prints when comparing noise. Is there necessarily much difference in the final noise outcome between binning 4 pixels in hardware as opposed to reducing the image file size to a quarter, or increasing the smaller image file by a factor of 4?

I've been of the opinion that downsampling results in a reduction of noise and upsampling results in an increase in noise.
Logged

ejmartin

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #108 on: July 15, 2008, 11:39:43 am »

Quote
Hmm! Doesn't seem much, does it! I wonder if I should take the trouble to test this experimentally with the 40D. I could bracket a few exposures at F8 using a 50mm lens, then bracket a few more exposures at F13 using an 80mm lens (same scene, same position), then examine details in the shadows and match the appropriate overexposure at F8 and 50mm that visually has the same amount of shadow noise as the correct exposure at F13 and 80mm (or what would have been the correct exposure if the sensor had been full frame).

I don't see why one needs to go to all this trouble.  Just take a single exposure and see what happens to the noise when you reduce the resolution by binning 2x2 bunches of pixels.  What's being considered is local on the sensor.  If you want the reduction in noise, you have to give up some resolution; if you then make the sensor bigger, you get the same resolution relative to frame height with the binned pixels.
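A sketch of that test on synthetic data (a uniform patch with shot and read noise; with a real raw file you would substitute a crop of it for the patch, and the noise figures here are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
signal, read_noise = 1000.0, 10.0      # electrons, illustrative values
shape = (512, 512)
patch = rng.poisson(signal, shape) + rng.normal(0.0, read_noise, shape)

# bin 2x2 bunches of pixels by averaging each block of four
binned = patch.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print("full res S/N:", patch.mean() / patch.std())    # ~ signal / sqrt(signal + r^2)
print("binned  S/N :", binned.mean() / binned.std())  # roughly 2x better, i.e. one stop
```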



Quote
I've always assumed it's meaningful to compare equal size files or equal size prints when comparing noise. Is there necessarily much difference in the final noise outcome between binning 4 pixels in hardware as opposed to reducing the image file size to a quarter, or increasing the smaller image file by a factor of 4?

I've been of the opinion that downsampling results in a reduction of noise and upsampling results in an increase in noise.

The business about pixel binning only refers to read noise; it has no effect on photon noise, which is dominant in midtones and highlights for low to moderate ISO.  I think that suggestion is a bit of a tangent to the topic under discussion.
Logged
emil

BJL

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #109 on: July 15, 2008, 11:41:10 am »

Some replies to Mark, Ray and Bill.

To MarkDS:
Quote
Second bullet OK, but I don't see how this contributes to greater D/R

Because averaging the signal from several photosites (by binning or downsampling) reduces the RMS noise level while not reducing the signal level, and that increases the ratio of maximum signal to noise floor, which is the definition of dynamic range.

In other words, at any given signal level, S/N is better, and so you can go to a lower signal level (darker parts of the scene) before the per-pixel S/N level gets down to the same threshold of acceptable S/N, such as the 10:1 suggested by Kodak as the minimum acceptable.
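Put as a worked example (assumed full-well and noise-floor figures, purely to show the mechanism):

```python
import math

full_well   = 60000.0   # electrons at clipping, hypothetical
noise_floor = 15.0      # electrons RMS per photosite, hypothetical

for n in (1, 2, 4):     # number of photosites averaged together
    dr = full_well / (noise_floor / math.sqrt(n))   # averaging divides RMS noise by sqrt(n)
    print(f"{n} photosites averaged: {math.log2(dr):.2f} stops of DR")
```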


To Ray: I should have said a minimum of 1/2 stop, and perhaps more but less than a full stop.
A half stop is what one would get if photon shot noise (and perhaps dark current noise) are the main noise sources, because these sources follow a square root law: noise increases in proportion to the square root of signal (photon or electron count).

Some other noise sources might not increase as fast with pixel and sensor size, though all the data I have seen on sensor read noise show some noticeable increase in electrons of noise with photosite area. Thus total noise increases, but at most by the square root trend, and so S/N ratio and DR increase by a factor of between sqrt(2) and 2 for a doubling of sensor area, which is an improvement of between a half stop and one stop at equal exposure level.

The better sensor technology gets, the more that noise is dominated by "square root law" sources like photon shot noise, and so the closer one gets to the 1/2 stop I mentioned.


To Bill Janes: some CCD read noise comes from the photosites themselves (dark current noise) and this part combines by the square root law. So I would expect 2x2 binning to improve DR and S/N by a factor of between 2 and 4. One indication of this is that Kodak CCDs of a similar era show a clear trend of increasing read noise with increasing photosite area, fitting a square-root-of-area trend fairly well, so that S/N ratio and DR grow roughly as the square root of photosite area.
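Illustrating that last point with made-up numbers (full well assumed to scale with photosite area, read noise with its square root):

```python
import math

base_full_well, base_read_noise = 20000.0, 4.0    # arbitrary reference photosite

for area in (1, 2, 4):                            # relative photosite area
    full_well  = base_full_well * area
    read_noise = base_read_noise * math.sqrt(area)
    print(f"{area}x area: {math.log2(full_well / read_noise):.2f} stops of per-pixel DR")
```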
Logged

Mark D Segal

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #110 on: July 15, 2008, 01:15:32 pm »

BJL - thanks - that's helpful.

Mark
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

bjanes

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #111 on: July 15, 2008, 05:33:18 pm »

Quote
To Bill Janes: some CCD read noise comes from the photosites themselves (dark current noise) and this part combines by the square root law. So I would expect 2x2 binning to improve DR and S/N by a factor of between 2 and 4. One indication of this is the Kodak CCD's of similar era show a clear trend of increasing read noise with increasing photosite area, fitting a square root of area trend fairly well so that S/N ratio and DR grows roughly as the square root of photosite area.

BJL,

Dark current noise (thermal noise) becomes significant only at exposures of seconds and does not contribute significantly in most normal photographic situations. Perhaps you did not read the Photometrics link about read noise. In the real world, 2 by 2 binning may yield an SNR improvement of less than 4:1. Dark current noise is not helped by binning.

Read noise expressed in electrons does not correlate well with sensor pixel size (see Roger Clark: http://www.clarkvision.com/imagedetail/digital.sensor.performance.summary/). However, when the electron count is converted to a data number (raw pixel value), the larger sensor has an advantage because of the increased gain (electrons per DN).

Photometrics

"The primary benefit of binning is higher SNR due to reduced read noise contributions. CCD read noise is added during each readout event and in normal operation, read noise will be added to each pixel. However, in binning mode, read noise is added to each superpixel, which has the combined signal from multiple pixels. In the ideal case, this produces SNR improvement equal to the binning factors (4x in the above example). The figure below shows the effect of 2x2 binning for a four-pixel region. This example assumes that 10 photoelectrons have been collected in each pixel and the read noise is 10 electrons. If this region is read out in normal mode the SNR will be 1:1 and the signal will be lost in the noise. However, with 2x2 binning, the SNR becomes 4:1, which is sufficient to observe this weak signal."

Bill
« Last Edit: July 15, 2008, 05:40:19 pm by bjanes »
Logged

Ray

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #112 on: July 15, 2008, 07:07:42 pm »

Quote
I don't see why one needs to go to all this trouble.  Just take a single exposure and see what happens to the noise when you reduce the resolution by binning 2x2 bunches of pixels.  What's being considered is local on the sensor.  If you want the reduction in noise, you have to give up some resolution; if you then make the sensor bigger, you get the same resolution relative to frame height with the binned pixels.
The business about pixel binning only refers to read noise; it has no effect on photon noise, which is dominant in midtones and highlights for low to moderate ISO.  I think that suggestion is a bit of a tangent to the topic under discussion.

Emil,
The experimental procedure I outlined above was not for the purpose of checking noise reduction effects resulting only from downsampling. I've seen clear examples of this effect comparing equal FoV images from the P30 and 5D. Same size crops (but at different magnification) look about equally noisy, but the P30 image displays more detail because it's comprised of more pixels.

After downsampling the P30 crop of the shadows to the same file size as the 5D crop of the same shadows, the resolution in both images is more or less equalised, but the P30 crop suddenly appears a lot cleaner, viewed side by side on the monitor.

Perhaps first we should define what constitutes a stop of dynamic range before I do this experiment.

My understanding is as follows. I compare two images of equal FoV and file size that have been correctly exposed according to the same ETTR standards. Call them images A and B.

If image A shows more noise in the shadows than image B, then image A clearly has less DR than image B.

If I have to overexpose image A by (for example) one whole stop in order that the shadows in image A look as clean as the shadows in image B (a process which also results in image A having blown highlights), then it is true to say that the camera that produced image B has one stop more dynamic range than the camera that produced image A. Is this correct?
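One way that shadow comparison could be quantified, as a sketch only, assuming the two crops have been converted linearly and loaded as arrays (image_a and image_b are hypothetical names):

```python
import numpy as np

def patch_snr(img, y, x, size=64):
    """Mean/std of a uniform shadow patch in a linear image (a crude local S/N estimate)."""
    p = np.asarray(img[y:y+size, x:x+size], dtype=float)
    return p.mean() / p.std()

# image_a, image_b = ...  # equal-FoV, equal-size linear conversions of the two files
# advantage_stops = np.log2(patch_snr(image_b, 100, 100) / patch_snr(image_a, 100, 100))
# print(f"image B's shadow S/N advantage: {advantage_stops:.2f} stops")
```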
Logged

joofa

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #113 on: July 15, 2008, 07:12:33 pm »

Quote
BJL,

Dark current noise (thermal noise) becomes significant only at exposures of seconds and does not contribute significantly to most normal photographic situations.  Perhaps you did not read the Photometrics link about read noise.

Actually, dark current is significant at small intervals of time.  In standard models, output referred quantities (such as shot noise) may be referred back to their input referred values to be compared with input quantities (such as dark current). For example, under certain assumptions it can be shown that the output-referred noise is reduced by the square of the integration time when referred to the input.

Dark current calibration is a very significant part of any sensor device and may not be ignored for even smaller intervals of time than in seconds.

Furthermore, DR and SNR are not the same things -- sensors typically quote two different numbers for these. They do correlate in the sense that if we consider SNR to be a good measure of image quality, then high DR can be equally regarded as a good measure of image quality.
« Last Edit: July 16, 2008, 12:41:42 pm by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

ejmartin

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #114 on: July 15, 2008, 08:48:39 pm »

Quote
Emil,
The experimental procedure I outlined above was not for the purpose of checking noise reduction effects resulting only from downsampling. I've seen clear examples of this effect comparing equal FoV images from the P30 and 5D. Same size crops (but at different magnification) look about equally noisy, but the P30 image displays more detail because it's comprised of more pixels.

After downsampling the P30 crop of the shadows to the same file size as the 5D crop of the same shadows, the resolution in both images is more or less equalised, but the P30 crop suddenly appears a lot cleaner, viewed side by side on the monitor.

Perhaps first we should define what constitutes a stop of dynamic range before I do this experiment.

My understanding is as follows. I compare two images of equal FoV and file size that have been correctly exposed according to the same ETTR standards. Call them images A and B.

If image A shows more noise in the shadows than image B, then image A clearly has less DR than image B.

If I have to overexpose image A by (for example) one whole stop in order that the shadows in image A look as clean as the shadows in image B (a process which also result in image A having blown highlights), then it is true to say that the camera that produced image B has one stop more dynamic range than the camera that produced image A. Is this correct?

OK, you've got me thoroughly confused as to what you want to do.  Yes, it's true that if two images with equivalent absolute exposure have different noise in the shadows (when converted with the same tone curve), then the one with less noise at a given tonal value in deep shadows will be exhibiting more DR.  If you overexpose image C and convert it with the same tone curve as A and B, the noise at the same tonal value as before will not have changed.  What you will have done is to move the parts of the image that used to be at that tonal value up to a higher tonal value, where there is less noise.  In ETTR, one then typically changes the tone curve to restore that part of the image to its original tonal "intent".  That part of the image then has less noise than it would have had with no ETTR and the original tone curve.  But for the purposes of testing, one should always use the same tone curve (preferably linear) to compare apples to apples.

If you're not going to resample one or the other of your 40D images, you're not testing any property of the sensor.  The sensor doesn't care what lens you put in front of it, at whatever f-stop, it always responds the same way to photons in a given absolute exposure.
Logged
emil

ejmartin

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #115 on: July 15, 2008, 08:58:49 pm »

Quote
Actually, dark current is significant at small intervals of time, and additionally, in standard models, output referred quantities (such as shot noise) should be referred back to their input referred values to be compared with input quantities (such as dark current). For example, under certain assumptions it can be shown that the output-referred noise is reduced by the square of the integration time when referred to the input.

Dark current calibration is a very significant part of any sensor device and may not be ignored for even smaller intervals of time than in seconds.

Furthermore, DR and SNR are not the same things -- sensors typically quote two different numbers for these. They do correlate in the sense that if we consider SNR to be a good measure of image quality, then high DR can be equally regarded as a good measure of image quality.

You seem to be disagreeing with the standard noise model as laid out, for instance, in http://learn.hamamatsu.com/articles/ccdsnr.html
Or perhaps I'm not understanding what you mean by output/input referred values (do you just mean noise in ADU vs noise in electrons?).  I would have thought that, regardless of how you normalize it, dark current noise variance is linear in integration time, not decreasing with it.  And certainly any black frame noise I've ever measured for exposure times of, say, a tenth of a second or less has had negligible contributions from dark current (i.e., it is reasonably independent of exposure time).

I do agree that DR and SNR are two different things.  I think people are too hung up on DR; it's the level of S/N over the range that is more important, and that's where MFDB's have the edge -- gathering more photons as a percentage of frame area means higher S/N throughout the range, even if the range itself is the same as FF DSLR's.
« Last Edit: July 15, 2008, 09:52:28 pm by ejmartin »
Logged
emil

Ray

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #116 on: July 16, 2008, 12:59:31 am »

Quote
But for the purposes of testing, one should always use the same tone curve (preferably linear) to compare apples to apples.

Emil,
This I shall do, as well as zero everything, including sharpening.

Quote
If you're not going to resample one or the other of your 40D images, you're not testing any property of the sensor.  The sensor doesn't care what lens you put in front of it, at whatever f-stop, it always responds the same way to photons in a given absolute exposure.

I am going to resample one or the other. Comparing equal size images is the only meaningful comparison for the photographer. I shall compare both procedures; downsampling the larger file, and upsampling the smaller file.
Logged

joofa

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #117 on: July 16, 2008, 01:15:09 am »

Quote
You seem to be disagreeing with the standard noise model as for instance laid out in
http://learn.hamamatsu.com/articles/ccdsnr.html
or perhaps I'm not understanding what you mean by output/input referred values (do you just mean noise in ADU vs noise in electrons?).  I would have thought, regardless how you normalize it, dark current noise variance is linear in integration time, not decreasing with it. 

Mapping of output parameters in an electrical/electronics circuit or network is frequently done to relate a parameter observed at the output of a circuit/model to the equivalent quantity that would have been observed at the input. (Not to be confused with ADU -- do you mean ADC by ADU?)

Using standard symbols (and some standard assumptions), the noise power may be represented as q(i_ph + i_d) * t_int + sigma_r^2, where t_int is the integration time and i_d is the dark current. Suppose f is the functional that represents the accumulated charge obtained from the current i over the integration time t_int. To model the noise as input referred, first note that, under the assumption that the input-referred noise is small compared to the signal, it may be estimated by the first-order approximation

f(i + N_i) ~= f(i) + N_i * f'(i),

where f' is the derivative. The average power of the equivalent input-referred noise may then be seen as:

(q(i_ph + i_d) * t_int + sigma_r^2) / f'^2 = (q(i_ph + i_d) * t_int + sigma_r^2) / t_int^2,

where the final division by t_int^2 is the square of the integration time I mentioned above for the input-referred noise.
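A direct evaluation of that expression (the symbols follow the definitions above; the numerical values are placeholders, not measurements):

```python
q = 1.602e-19              # electron charge in coulombs
i_ph, i_d = 1e-15, 1e-17   # photocurrent and dark current in amps (hypothetical)
sigma_r = 5.0 * q          # read noise expressed as charge, ~5 electrons (hypothetical)

for t_int in (0.01, 0.1, 1.0):                             # integration time in seconds
    output_power = q * (i_ph + i_d) * t_int + sigma_r**2   # output-referred noise power
    input_power  = output_power / t_int**2                 # divide by f'(i)^2 = t_int^2
    print(f"t_int={t_int}: output {output_power:.3e}, input-referred {input_power:.3e}")
```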

Quote
And certainly any black frame noise I've ever measured for exposure times of say a tenth or less have had negligible contributions from dark current (ie they are reasonably independent of exposure time).

What we refer to as "raw" signal is normally not raw. Typically extensive calibration is applied to properly offset the signal inside the hardware. I don't know if the equipment that you are using is applying these calibrations (it should), and assuming that it does, then perhaps dark current degradation may have already been corrected.

We have measured the dark current degradations right off the hardware without calibration, and some correction is always needed to make the image quality better.
« Last Edit: July 16, 2008, 01:28:28 am by joofa »
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

bjanes

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #118 on: July 16, 2008, 08:05:45 am »

Quote
Actually, dark current is significant at small intervals of time, and additionally, in standard models, output referred quantities (such as shot noise) should be referred back to their input referred values to be compared with input quantities (such as dark current). For example, under certain assumptions it can be shown that the output-referred noise is reduced by the square of the integration time when referred to the input.

Dark current calibration is a very significant part of any sensor device and may not be ignored for even smaller intervals of time than in seconds.

Rather than discussing the matter in the abstract and obscuring the issues with needless jargon and mathematics (as in your reply to Emil), you might want to refer to actual examples. For example, the Photometrics web site (http://www.photomet.com/pm_solutions/library_encyclopedia/library_enc_dark.php) has a good section on dark current and noise. In their example, they conclude:

"Thus, the dark current noise generated in a 4-second exposure has virtually no effect on the total camera system noise. Similarly, for a 30-second exposure we find that the total system noise equals 14.1 electrons. Again, even at a 30-second exposure, dark current noise barely contributes to the total camera system noise."

In another example, Roger Clark examines the noise characteristics of the Canon 1D Mark II. The sensor has a read noise of 3.8 electrons. He found that average dark currents per pixel were 0.013 to 0.02 electrons/second, but that some pixels had dark currents as high as about 0.25 electrons/second. For an exposure of one second, the dark noise is negligible compared to the read noise.
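Checking that arithmetic (assuming, as is standard, that the dark-current noise is the square root of the accumulated dark electrons and adds in quadrature with the read noise):

```python
import math

read_noise = 3.8                    # electrons, the 1D Mark II figure cited above
for dark_rate in (0.02, 0.25):      # e-/s: a typical pixel and a "hot" pixel
    for t in (1, 30):               # exposure time in seconds
        dark_noise = math.sqrt(dark_rate * t)
        total = math.sqrt(read_noise**2 + dark_noise**2)
        print(f"{dark_rate} e-/s, {t:>2} s exposure: total {total:.2f} e- vs read {read_noise} e-")
```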

Quote
Furthermore, DR and SNR are not the same things -- sensors typically quote two different numbers for these. They do correlate in the sense that if we consider SNR to be a good measure of image quality, then high DR can be equally regarded as a good measure of image quality.

That is true, but I don't know why you bring up the matter in your reply to me. I did not mention DR. DR is defined as the full well in electrons divided by the read noise, also expressed in electrons. Shot noise, the most important source of noise over most of the range of exposures, does not enter into the equation.

SNR includes all sources of noise and may be calculated for various levels of exposure, as described by Norman Koren (http://www.imatest.com/docs/noise.html) on his Imatest web site.
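The two quantities can be computed side by side from a pair of sensor numbers (assumed values, just to show the definitions):

```python
import math

full_well, read_noise = 50000.0, 8.0          # electrons, hypothetical sensor

print(f"engineering DR: {math.log2(full_well / read_noise):.1f} stops")

for frac in (1.0, 0.1, 0.01, 0.001):          # exposure level as a fraction of full well
    s = full_well * frac
    snr = s / math.sqrt(s + read_noise**2)    # shot noise plus read noise in quadrature
    print(f"signal at {frac:>5} of full well: SNR {snr:.0f}:1")
```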

Bill
Logged

ejmartin

1DS3 vs 5D CoC shootout in MFDB forum
« Reply #119 on: July 16, 2008, 10:20:20 am »

Quote
Mapping of output parameters in an electrical/electronics circuit/network is frequently done to relate a parameter as observed on the output of an electronics circuit/model as the equivalent quantity as what would have been observed inside. (Not to be confused with ADU -- do you mean ADC by ADU?).

Using standard symbols (and some standard assumptions), the noise power may be represented as q(i_ph + i_d) * t_int + sigma_r^2, where t_int is the integration time and i_d is the dark current. Suppose f is the functional that represents the accumulated charge obtained from the current i over integration time t_int. Then, if I have to model the noise as input referred; first realizing that under the assumption that input referred noise is small compared to signal and may be estimated by the first order approximation

f(i + N_i) ~= f(i) + N_i * f'(i),

where, f' is the derivative, the average power of the equivalent input referred noise may be seen as:

(q(i_ph + i_d) * t_int + sigma_r^2) / f'^2 = (q(i_ph + i_d) * t_int + sigma_r^2) / t_int^2,

where the last parameter t_int^2 is what I mentioned as the square in the input referred noise.
What we refer to as "raw" signal is normally not raw. Typically extensive calibration is applied to properly offset the signal inside the hardware. I don't know if the equipment that you are using is applying these calibrations (it should), and assuming that it does, then perhaps dark current degradation may have already been corrected.

We have measured the dark current degradations right off the hardware without calibration, and some correction is always needed to make the image quality better.

OK, now I understand where you're coming from (and BTW, ADU is a common abbreviation for quantization step, ie raw level).

What is the utility of using input-referred quantities?  I think the thrust of bjanes' reply and my previous post is that dark current is a negligible component of the output noise in all but very long time exposures (several seconds or more).  I'm also puzzled why one would want to extrapolate this back to a hypothetical input noise which, as far as I can tell, is not any actual noise of any actual component of the capture process.
Logged
emil