
Author Topic: The Physics of Digital Cameras  (Read 62146 times)

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
The Physics of Digital Cameras
« Reply #20 on: December 21, 2009, 11:21:17 am »

Quote from: Jonathan Wienke
Here's one blatant error:

Your discussion of standard deviations per level is completely nonsensical. You completely fail to recognize that as the photon count increases, you are increasing the sample population, and that decreases the standard deviation

If you were correct, the lighter tones in a digital image would be just as noisy or even noisier than the deep shadows. But this is the opposite of the behavior of every digital camera ever made. You need to go back and get the Statistics 101 stuff right before getting fancy with Poisson aliasing and stuff like that.

Jonathan, you are incorrect here. In a Poisson distribution, the standard deviation of the count is equal to the square root of the count. Thus as the count increases, the standard deviation also increases, but more slowly than the count itself. The noise in digital images is greatest in the highlights. However, the signal to noise ratio is highest in the highlights and it is the S:N that correlates with perceived noise.

If N photons are collected, the standard deviation will be sqrt(N). The signal to noise will be N/sqrt(N) = sqrt(N). If you double the exposure, the S:N increases by a factor of sqrt(2), about 1.4.
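A minimal numpy sketch of both statements, assuming pure shot noise and nothing else (the photon counts are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Many pixels at two mean photon counts, one double the other (pure shot noise).
for mean_photons in (1000, 2000):
    counts = rng.poisson(mean_photons, size=100_000)
    sd = counts.std()
    snr = counts.mean() / sd
    print(f"mean={mean_photons:5d}  SD={sd:6.1f}  sqrt(mean)={mean_photons**0.5:6.1f}  S:N={snr:6.1f}")

# SD tracks sqrt(mean); doubling the exposure raises the S:N by ~sqrt(2), about 1.4x.
```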

For a detailed discussion see Emil Martinec.
« Last Edit: December 21, 2009, 11:23:11 am by bjanes »
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
The Physics of Digital Cameras
« Reply #21 on: December 21, 2009, 02:28:09 pm »

Quote from: WarrenMars
For those who think there are errors in my analysis: "Put up or shut up". Quoting some unknown friend who says: "He gets some things right but also quite a few wrong." is of no value.
Your analysis focuses entirely on Poisson noise, or shot noise, and you state that the problems are in the highlights, where shot noise is highest, and you introduce the neologism of "Poisson aliasing". However, you fail to take into account that what we perceive as noise is really a low signal-to-noise ratio. Anyone who does digital photography knows that the apparent noise is highest in the shadows, not the highlights. You totally neglect read noise, which predominates in the shadows. If you put the lens cap on the camera, the signal is zero and the noise is entirely read noise; this is one way to measure the read noise.

Look at Table 2 in Roger Clark's analysis of the Canon 1D Mark II. Noise is highest in the highlights, but the signal to noise is also highest in this region, and if you look at actual images you will see that the perceived noise is smallest there. The low noise characteristics of the latest sensors, such as those used in the Nikon D3x, have been achieved largely by reducing the read noise, although there have also been some improvements in quantum efficiency.
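The same point in rough numbers, adding shot and read noise in quadrature; the 5 electron read noise here is an assumed, illustrative figure, not taken from Clark's table:

```python
import numpy as np

read_noise_e = 5.0                                      # assumed, illustrative read noise in electrons
signal_e = np.array([10, 100, 1_000, 10_000, 50_000])   # mean signal levels in electrons

for s in signal_e:
    shot = np.sqrt(s)                          # Poisson (shot) noise
    total = np.sqrt(s + read_noise_e ** 2)     # shot and read noise add in quadrature
    print(f"signal {s:6d} e-  shot {shot:6.1f}  total {total:6.1f}  S:N {s / total:7.1f}")

# Absolute noise is largest in the highlights, but so is S:N; in the deep
# shadows the fixed read noise dominates and S:N collapses.
```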
« Last Edit: December 21, 2009, 02:29:22 pm by bjanes »
Logged

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
The Physics of Digital Cameras
« Reply #22 on: December 21, 2009, 02:59:54 pm »

Quote from: bjanes
Jonathan, you are incorrect here. In a Poisson distribution, the standard deviation of the count is equal to the square root of the count. Thus as the count increases, the standard deviation also increases, but more slowly than the count itself.

When expressed as a percentage of the sample population count, SD decreases as the sample population increases. In absolute terms, yes, SD increases, but more slowly than the population count. I specified the percentage bit in the second part of my post but neglected to do so in the first.
Logged

joofa

  • Sr. Member
  • ****
  • Offline
  • Posts: 544
The Physics of Digital Cameras
« Reply #23 on: December 21, 2009, 03:14:30 pm »

Quote from: Jonathan Wienke
When expressed as a percentage of the sample population count, SD decreases as the sample population increases. In absolute terms, yes, SD increases, but more slowly than the population count. I specified the percentage bit in the second part of my post but neglected to do so in the first.

Jonathan, it appears to me that you are mixing up a few issues here. BJanes is right. In sampling, every member of the population is an estimator, with some variance, of the quantity of interest, and the sample is considered collectively in order to reduce that estimation variance. For example, pick a random adult in the US and take his or her height as representative of average US adult height. That is not a good estimator, so you increase the sample size with the goal of reducing the variance, and perhaps the bias as well. In the photon example, however, each single photon is not an estimator of the actual true number of photons.
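A small sketch of the distinction, with made-up height figures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical height distribution: mean 170 cm, SD 10 cm (made-up numbers).
for n in (1, 10, 100, 1000):
    sample_means = rng.normal(170, 10, size=(20_000, n)).mean(axis=1)
    print(f"sample size {n:4d}: SD of the estimate = {sample_means.std():5.2f} cm")

# The estimator's SD shrinks like 1/sqrt(n). A photon count is different: it is
# a single Poisson draw whose SD is sqrt(N) however large N gets; only the
# relative fluctuation sqrt(N)/N shrinks.
```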
Logged
Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins

Hywel

  • Sr. Member
  • ****
  • Offline
  • Posts: 294
    • http://www.restrainedelegance.com
The Physics of Digital Cameras
« Reply #24 on: December 21, 2009, 03:14:41 pm »

A much fuller treatment of the effects of noise in digital photography can be found in any decent textbook on astro imaging, such as Steve Howell's very good "Handbook of CCD Astronomy" ISBN 0-521-64834-3

http://www.amazon.co.uk/Astronomy-Cambridg...5075&sr=8-1

or the more practical-amateur-photography orientated "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, ISBN 0-943396-82-4

http://www.amazon.co.uk/Handbook-Astronomi...5385&sr=1-2

None of this is even faintly new. Astronomers have been dealing with noise issues and digital sensors for decades and their signal to noise ratios are vastly less favourable than ours. That actually means there are a hell of a lot of tricks in the arsenal for controlling the noise that daylight photographers rarely if ever use.

For example, anyone intending to make serious astro images would certainly cool their sensor significantly below ambient temperature (reduces the thermal noise), capture dark frames (which allows an averaged noise subtraction), bias frames (zero exposure dark frames which allow you to separate readout noise from thermal noise) and flat fields to correct for uneven illumination by the optics, sensor contamination and channel to channel variations in the gain.
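In outline, that calibration chain looks something like the sketch below; this is a simplification under stated assumptions, not any particular package's API:

```python
import numpy as np

def calibrate(light, dark_frames, bias_frames, flat_frames):
    """Very simplified CCD reduction: (light - master dark) / normalised master flat.

    A sketch only: real pipelines typically median-combine many frames, scale
    darks for exposure time and temperature, and handle overscan regions.
    """
    master_bias = np.median(bias_frames, axis=0)    # zero-exposure frames: readout signature only
    master_dark = np.median(dark_frames, axis=0)    # thermal signal + bias, matched exposure
    master_flat = np.median(flat_frames, axis=0) - master_bias
    master_flat /= master_flat.mean()               # unit-gain map of optics/sensor response
    return (light - master_dark) / master_flat
```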

Sure, there will be a sqrt(N)/N behaviour from photon statistics, but there are an awful lot of other sources of noise which are also important, and they kick in in the shadows, not the bright areas.

Suppose that you DO get the case where one pixel randomly gets more photons and ends up one bit higher than it should be. So what? The pixel will read out as (for 14 bits) 16380 instead of 16379. No human being is sensitive to such a small change, and unless you are trying to resolve a pattern painted in stripy greys differing by 1 part in 16384 (i.e. identical to the human eye) at a resolution equal to the pixel pitch (i.e. far smaller than the human eye could resolve), the impact on the perceived final image will be zero. Averaged over more than a few pixels, the noise in the bright areas soon gets beaten down to utterly imperceptible levels. Then add in the human eye's logarithmic response, which makes us even less sensitive to small differences in the bright areas, and this is, frankly, a non-issue.

As lots of others have pointed out, the noise problems are in the shadows, not in the bright areas. N is small there, sqrt(N)/N is therefore large, and the expected deviation of a given pixel from the "true" value is much larger, made even worse by the Poisson tails which are appreciable for small N but utterly, utterly irrelevant for large N where the behaviour is so close to Gaussian that your spreadsheet couldn't even calculate it, as you found out.
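To put rough numbers on the small-N versus large-N behaviour (textbook Poisson properties only, no camera data):

```python
import numpy as np

# Poisson skewness is 1/sqrt(mean); relative scatter is sqrt(mean)/mean.
for mean in (5, 50, 5_000, 500_000):
    print(f"mean={mean:7d}  relative SD={np.sqrt(mean) / mean:8.5f}  skewness={1 / np.sqrt(mean):7.4f}")

# Small N: visibly asymmetric, long upper tail, large relative scatter.
# Large N: essentially Gaussian, and the relative scatter is negligible.
```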

You've also omitted quantisation noise, which is the noise introduced by digitising the signal. Controlling this noise is one reason why cameras have gone from a 12-bit internal readout to a 14-bit: you don't want to bodge up your nice clean signal by crudely slicing it into too coarse bins. You always want to digitise at a level comparable with or better than the noise, or the quantisation noise would overwhelm the shot noise, which would be really dumb.
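For scale, the ideal quantisation noise is the step size divided by sqrt(12); a rough comparison using assumed, illustrative full-well and read-noise figures:

```python
import numpy as np

full_well_e = 60_000          # assumed full-well capacity, electrons
read_noise_e = 8.0            # assumed read noise, electrons

for bits in (12, 14, 16):
    step_e = full_well_e / 2 ** bits      # one ADU step in electrons
    q_noise_e = step_e / np.sqrt(12)      # ideal quantisation noise
    print(f"{bits}-bit ADC: step {step_e:6.2f} e-, quantisation noise {q_noise_e:5.2f} e- "
          f"(vs read noise {read_noise_e} e-)")

# Quantisation noise only matters when it is comparable to the other noise
# sources; adding bits pushes it further below the read-noise floor.
```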

The other noise sources also become relevant in the shadows, or with low light levels, which is why all astro imaging camera sensors are cooled but most dSLR or MFDB chips are not. (My Hasselblad has a hefty heat sink in it, though, and my Canon D30 did automatic dark frame subtraction for long exposures - but these features are not widespread these days).

Cheers, Hywel Phillips
« Last Edit: December 21, 2009, 03:24:42 pm by Hywel »
Logged

PierreVandevenne

  • Sr. Member
  • ****
  • Offline
  • Posts: 512
    • http://www.datarescue.com/life
The Physics of Digital Cameras
« Reply #25 on: December 21, 2009, 05:16:49 pm »

Quote from: Hywel
A much fuller treatment of the effects of noise in digital photography can be found in any decent textbook on astro imaging, such as Steve Howell's very good "Handbook of CCD Astronomy" ISBN 0-521-64834-3

http://www.amazon.co.uk/Astronomy-Cambridg...5075&sr=8-1

or the more practical-amateur-photography orientated "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, ISBN 0-943396-82-4

http://www.amazon.co.uk/Handbook-Astronomi...5385&sr=1-2

Fully agree - it becomes really tiring to see so many "final analysis" articles written by people who haven't even looked at the fundamentals, which are clearly laid out in those books and, no doubt, in many other serious references. The article that started the thread is, in some ways, hilarious. The guy begins by confusing pixel pitch and pixel surface... and a lot of very basic counts are just plain wrong: 64 million photons per pixel, yeah, sure, how do we count them? Even assuming an utterly crappy QE of 1%, we have 640,000 electrons to deal with in each well...
 
Logged

Guillermo Luijk

  • Sr. Member
  • ****
  • Offline
  • Posts: 2005
    • http://www.guillermoluijk.com
The Physics of Digital Cameras
« Reply #26 on: December 21, 2009, 06:58:23 pm »

Quote from: Hywel
You've also omitted quantisation noise, which is the noise introduced by digitising the signal. Controlling this noise is one reason why cameras have gone from a 12-bit internal readout to a 14-bit: you don't want to bodge up your nice clean signal by crudely slicing it into too coarse bins. You always want to digitise at a level comparable with or better than the noise, or the quantisation noise would overwhelm the shot noise, which would be really dumb.
Not exactly. Quantisation noise is not a problem in present digital cameras because the other sources of noise (basically read noise) are far greater than the quantisation step.

Moving from 12 to 14 bits seems closer to a marketing move than anything that really provides a quality advantage, because as long as read noise remains larger than a 1-bit step on a 12-bit scale, increasing bit depth is unnecessary. It was said somewhere that the Nikon D3X is probably the first camera to really take advantage of its 14-bit encoding because of its very low read and pattern noise. It would be nice to test that by doing a 12-bit RAW development of a D3X NEF and comparing it with the regular 14-bit development.
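That comparison can at least be rehearsed on synthetic data; no real NEF is involved and the read-noise level below is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for a 14-bit RAW channel: smooth ramp plus read noise.
read_noise_dn14 = 12.0        # assumed read noise in 14-bit DN (= 3 DN on a 12-bit scale)
ramp = np.linspace(0, 16383, 1_000_000)
raw14 = np.clip(np.round(ramp + rng.normal(0, read_noise_dn14, ramp.size)), 0, 16383).astype(np.int64)

raw12_as_14 = (raw14 >> 2) << 2    # throw away the bottom two bits, i.e. a 12-bit development

# The 12-bit step is 4 DN on the 14-bit scale; the truncation error (~1.9 DN RMS)
# is buried under the assumed 12 DN of read noise, so the two versions barely differ.
print("RMS difference:", np.sqrt(np.mean((raw14 - raw12_as_14) ** 2)), "14-bit DN")
```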

Regards
« Last Edit: December 21, 2009, 07:01:17 pm by GLuijk »
Logged

WarrenMars

  • Guest
The Physics of Digital Cameras
« Reply #27 on: December 22, 2009, 02:08:56 am »

Quote from: Jonathan Wienke
Here's one blatant error:

Your discussion of standard deviations per level is completely nonsensical. You completely fail to recognize that as the photon count increases, you are increasing the sample population, and that decreases the standard deviation and increases the overall accuracy of the sampling (or confidence factor) by approximately the square root of the photon count. This is why digital images are noisiest in the shadows--the photon sample population per pixel is smallest in the darkest tones, and largest in the lightest tones. Your "Poisson Aliasing" chart seems to be assuming that the standard deviation increases (or at least stays constant) in proportion to average photon count as one increases the photon count, but this is completely backwards. As the photon count increases, the standard deviation decreases (at least when expressed as a percentage of the photon count) because you have a larger sample population of photons to work with. It's the same statistical principle used for calculating the margin of error for opinion surveys--the larger the sample population, the smaller the standard deviation becomes as a percentage of the sample population, and the greater the accuracy of the survey results becomes as a result.

If you were correct, the lighter tones in a digital image would be just as noisy or even noisier than the deep shadows. But this is the opposite of the behavior of every digital camera ever made. You need to go back and get the Statistics 101 stuff right before getting fancy with Poisson aliasing and stuff like that.

Actually it is you that have got it back-to-front.
Like you I thought the same, until (unlike you), I did the maths.
You think those graphs are a made up joke?
I assure you they are generated from the spreadsheet I set up using the Poison maths to PROVE the point.

I have posted this stuff BECAUSE it is counter-intuitive! The reality of camera physics is NOT what you think!
If you had taken the time to UNDERSTAND what I went to a great deal of trouble to show, instead of just skimming over it you would now be singing a different tune.

I put this stuff up to HELP you people understand what you are actually dealing with on a daily basis. The least you can do in return is to make the effort to fully understand what I have said.
Logged

Hywel

  • Sr. Member
  • ****
  • Offline
  • Posts: 294
    • http://www.restrainedelegance.com
The Physics of Digital Cameras
« Reply #28 on: December 22, 2009, 03:42:58 am »

Quote from: GLuijk
Not exactly. Quantisation noise is not a problem in present digital cameras because the other sources of noise (read noise basically) is so far greater than the quantisation step.

Moving from 12 to 14 bits seems closer to a marketing move than anything that can really provide a quality advantage, because as long as read noise remains larger than a 1 bit step in a 12-bit scale, increasing bitdepth is unnecesary. It was said somewhere that the Nikon D3X is probably the first camera in really taking an advantage from its 14-bit encoding because of its very low read and pattern noise. Would be nice to test that doing a 12-bit RAW development on a D3X's NEF and compare it with the result of the 14-bit regular development.

Regards


Which is my point exactly. One wants to keep the digitisation noise (which is something entirely under your control, as you can choose the quantisation, the only impact is a slight increase in electronics cost) waaaay below the statistical noise. So if there's even the faintest chance of the quantisation noise coming into play, you might as well go for an extra couple of bits, because they are a lot cheaper than trying to redesign the sensor for lower noise.

The example posted earlier in the thread of the Canon 1D Mk2 analysis showed that there are some effects of quantisation noise: it basically provides a noise floor (along with the readout noise).

  Cheers, Hywel.
« Last Edit: December 22, 2009, 03:45:58 am by Hywel »
Logged

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
The Physics of Digital Cameras
« Reply #29 on: December 23, 2009, 11:34:21 am »

Quote from: WarrenMars
Actually it is you that have got it back-to-front.
Like you I thought the same, until (unlike you), I did the maths.
You think those graphs are a made up joke?
I assure you they are generated from the spreadsheet I set up using the Poison maths to PROVE the point.

It's Poisson, not Poison...

The problem with your mathematical analysis is that it takes a flawed look at only one type of noise, when real-world cameras have a combination of several types of noise. I understand what you are saying just fine, and unlike you, I have done some mathematical analysis of actual RAW image data. I also understand that your theory does not fit the observed behavior of any digital camera ever made. Conduct the following experiment, and post your results:

  • Set up a camera on a tripod, and shoot a static scene with a fairly wide dynamic range, say a Color Checker.
  • With all camera settings on manual, shoot 100 identical RAW exposures of the Color Checker.
  • Select one pixel each from the darkest and lightest patches of the Color Checker
  • Make lists of the RAW data values for the pixels you chose, and calculate the average and standard deviation of the white patch pixel data and the black patch pixel data. An Excel spreadsheet would be preferred, to verify that the average and standard deviation values of the data are being calculated correctly.
  • Divide the standard deviation of each pixel's data by the average of that pixel's data.

If you are correct, the standard deviation of the white patch data divided by the average of the white patch data will be greater than or equal to the standard deviation of the black patch data, divided by the average of the black patch data. If I'm correct, then the standard deviation of the black patch data divided by the average of the black patch data will be greater.
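For the last two steps, a short sketch of the arithmetic; the arrays here are synthetic placeholders standing in for the values pulled out of the 100 files:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder data: replace with the RAW values actually extracted for the chosen
# white-patch and black-patch pixel across the 100 exposures.
white_patch = rng.poisson(40_000, 100) + rng.normal(0, 8, 100)
black_patch = rng.poisson(200, 100) + rng.normal(0, 8, 100)

def relative_noise(values):
    """Standard deviation divided by the average: noise as a fraction of signal."""
    return values.std(ddof=1) / values.mean()

print("white patch SD/mean:", relative_noise(white_patch))
print("black patch SD/mean:", relative_noise(black_patch))
# The prediction above: the black-patch ratio comes out the larger of the two.
```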

The sensitivity and reliability of photodetectors to detect incoming photons is one source of noise. The analog amplification of the resulting voltage is a second source of noise. The analog-to-digital converter is a third noise source. The quantum fluctuation in incoming photons (which is what you analyzed) is a fourth, and is the least significant of the four. The first three noise sources combine to form what is called read noise, which is present even when no incoming photons are present. Read noise is what limits dynamic range and high-ISO performance, and since it is present regardless of whether any photons are detected or not, it is most predominant in the darkest tones and shadows.
Logged

brianc1959

  • Jr. Member
  • **
  • Offline
  • Posts: 53
The Physics of Digital Cameras
« Reply #30 on: December 23, 2009, 11:53:20 am »

Quote from: WarrenMars
Actually it is you that have got it back-to-front.
Like you I thought the same, until (unlike you), I did the maths.
You think those graphs are a made up joke?
I assure you they are generated from the spreadsheet I set up using the Poison maths to PROVE the point.

I have posted this stuff BECAUSE it is counter-intuitive! The reality of camera physics is NOT what you think!
If you had taken the time to UNDERSTAND what I went to a great deal of trouble to show, instead of just skimming over it you would now be singing a different tune.

I put this stuff up to HELP you people understand what you are actually dealing with on a daily basis. The least you can do in return is to make the effort to fully understand what I have said.

Warren:
Your understanding of physics seems to be astonishingly poor, as evidenced by the absurd statement you made in your article regarding magnifying glasses.  What are your credentials?  Can you point us to some peer-reviewed journal articles you've written in this field?
Logged

WarrenMars

  • Guest
The Physics of Digital Cameras
« Reply #31 on: December 29, 2009, 05:19:59 am »

Quote from: brianc1959
Warren:
Your understanding of physics seems to be astonishingly poor, as evidenced by the absurd statement you made in your article regarding magnifying glasses.  What are your credentials?  Can you point us to some peer-reviewed journal articles you've written in this field?

Think the sun is in focus when you can burn paper with a magnifying glass do you? You've obviously NEVER done any solar observing using a telescope!
There's not much point submitting one's results for peer analysis when people with your understanding are doing the reviewing.  
Logged

WarrenMars

  • Guest
The Physics of Digital Cameras
« Reply #32 on: December 29, 2009, 05:42:37 am »

Quote from: Jonathan Wienke
It's Poisson, not Poison...

Fair enough, my typo; fortunately I didn't make that error on my site.

Quote
If you are correct, the standard deviation of the white patch data divided by the average of the white patch data will be greater than or equal to the standard deviation of the black patch data, divided by the average of the black patch data. If I'm correct, then the standard deviation of the black patch data divided by the average of the black patch data will be greater.

What you are telling me is that noise is most visible in the shadows. Hey! You'll get no argument from me on that one! In fact I think I said exactly that on my site. What you and everyone else who has posted in this thread has failed to grasp is that the great problem is NOT in Poisson noise (which is apparent in the shadows) but in Poisson aliasing (which is most apparent in the hot spots)!

Poisson Aliasing does not manifest as coloured spots so you don't realise it's there. It is such a large problem that it dwarfs ALL other noise problems like an ocean. It is so large a problem that its solution dictates half the ENTIRE implementation of camera design! The camera you use is MASSIVELY compromised in dynamic range purely to solve the issue of Poisson Aliasing!

You don't know this stuff, and it's not available elsewhere on the net, so you choose to ignore it or disbelieve it. It's the Elephant in the room gentlemen! Too big to deal with so we'll just pretend it isn't there! Very much like over-population actually...

Anyway. You can continue to misunderstand what I have set down in clear and concise English, it doesn't bother me. I have made this analysis available for those who want to know, not for those who don't.

Go ahead and dismiss me but you can't change the laws of physics! The camera manufacturers have chosen to keep quiet about this stuff but it doesn't mean they don't know. Of course THEY know about it! Your camera won't work without it! They just don't want you to know the truth; it's bad for business. Cameras are like religion: the less people know the more they can sell.
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
The Physics of Digital Cameras
« Reply #33 on: December 29, 2009, 08:33:16 am »

Quote from: WarrenMars
Poisson Aliasing does not manifest as coloured spots so you don't realise it's there. It is such a large problem that it dwarfs ALL other noise problems like an ocean. It is so large a problem that its solution dictates half the ENTIRE implementation of camera design! The camera you use is MASSIVELY compromised in dynamic range purely to solve the issue of Poisson Aliasing!

Go ahead and dismiss me but you can't change the laws of physics! The camera manufacturers have chosen to keep quiet about this stuff but it doesn't mean they don't know. Of course THEY know about it! Your camera won't work without it! They just don't want you to know the truth; it's bad for business. Cameras are like religion: the less people know the more they can sell.

I was previously unaware of Poisson aliasing, so I did a Google search and came up with this article which I don't understand: Poisson summation. Perhaps you can explain how it applies to digital imaging. I still do not understand how this aliasing affects the image. Can you post an example actually demonstrating the artifact rather than merely describing it?

Logged

brianc1959

  • Jr. Member
  • **
  • Offline
  • Posts: 53
The Physics of Digital Cameras
« Reply #34 on: December 29, 2009, 02:07:32 pm »

Quote from: WarrenMars
Think the sun is in focus when you can burn paper with a magnifying glass do you? You've obviously NEVER done any solar observing using a telescope!
There's not much point submitting one's results for peer analysis when people with your understanding are doing the reviewing.  

Warren:
Yep, gotta focus that sun if you wanna burn the paper quickly!  Ever bothered to try it?    

So I take it that you have no credentials, and that you've never published your results in a proper journal.
Logged

BJL

  • Sr. Member
  • ****
  • Offline
  • Posts: 6600
The Physics of Digital Cameras
« Reply #35 on: December 29, 2009, 03:06:48 pm »

As far as I can tell, in his talk of "A problem at the bright end" Warren Mars is using the criterion that the numerical level in the digital output should be exactly right a large fraction of the time. For example, the need for 2^36 photons at the brightest photosites with 16-bit output (with levels from 0 to 65,535) is based on the "4SD" (a.k.a. "4 sigma" in the field of statistical quality control) criterion which guarantees that when a pixel is reported as having output level 12,345 there is a 99.7% probability that the incident light at that photosite really corresponds to exactly that level, rather than for example being slightly dimmer so that the correct output level would be 12,344, or slightly brighter so that the correct output level would be 12,346.

The mathematics is fine as far as I can tell, but that criterion of getting the level exactly right is of no practical relevance. It does not matter if a fairly bright pixel (like level 12,345 out of 65,535 as above, roughly mid-tone) is reported at a value that is off by one, for a relative error of one part in 12,345, because the eye has no hope of detecting that discrepancy: a relative error of about 1 in 100 in luminosity is about the smallest that the eye can discriminate. And that is in part because the eye has even less hope of gathering these massive numbers of photons than a sensor does, due to its rather puny entrance pupil diameter!

It looks like a classic misunderstanding of statistics and of what constitutes a significant error: insisting on exact values when in practice they are neither attainable nor necessary. Instead, measurements that fall within some appropriate error tolerance of the exact value most of the time are a sufficient design goal: in this case about 1%, so getting the level within about 100 in my example above.
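As a quick check of that 2^36 figure under the reading above (assuming the criterion is that four standard deviations of shot noise fit inside one output level at full scale), and of how the requirement changes with a 1% tolerance:

```python
# Sanity check on the 2^36 figure, assuming the criterion is that four standard
# deviations of shot noise fit inside one 16-bit output level at full scale:
#   4 * sqrt(N) = N / 65536  =>  sqrt(N) = 4 * 65536  =>  N = (4 * 65536) ** 2
N = (4 * 65536) ** 2
print(N, N == 2 ** 36)          # 68719476736 True

# A ~1% tolerance (about the eye's limit) needs only 4 * sqrt(N) <= 0.01 * N,
# i.e. N >= 160,000 photons per photosite - a far more modest requirement.
print((4 / 0.01) ** 2)          # 160000.0
```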


If Poisson aliasing were truly such a problem, we would all have severe vision problems!
Logged

nma

  • Sr. Member
  • ****
  • Offline
  • Posts: 312
The Physics of Digital Cameras
« Reply #36 on: December 29, 2009, 05:46:52 pm »

Quote from: BJL
As far as I can tell, in his talk of "A problem at the bright end" Warren Mars is using the criterion that the numerical level in the digital output should be exactly right a large fraction of the time. For example, the need for 2^36 photons at the brightest photosites with 16-bit output (with levels from 0 to 65,535) is based on the "4SD" (a.k.a. "4 sigma" in the field of statistical quality control) criterion which guarantees that when a pixel is reported as having output level 12,345 there is a 99.7% probability that the incident light at that photosite really corresponds to exactly that level, rather than for example being slightly dimmer so that the correct output level would be 12,344, or slightly brighter so that the correct output level would be 12,346.

The mathematics is fine as far as I can tell, but that criterion of getting the level exactly right is of no practical relevance. It does not matter if a fairly bright pixel (like level 12,345 out of 65,535 as above, roughly mid-tone) is reported at a value that is off by one, for a relative error of one part in 12,345, because the eye has no hope of detecting that discrepancy: a relative error of about 1 in 100 in luminosity is about the smallest that the eye can discriminate. And that is in part because the eye has even less hope of gathering these massive numbers of photons than a sensor does, due to its rather puny entrance pupil diameter!

It looks like a classic misunderstanding of statistics and of what constitutes a significant error: insisting on exact values when in practice they are neither attainable nor necessary. Instead, measurements that fall within some appropriate error tolerance of the exact value most of the time are a sufficient design goal: in this case about 1%, so getting the level within about 100 in my example above.


If Poisson aliasing were truly such a problem, we would all have severe vision problems!

One can classify errors as systematic and random (i.e. statistical). Aliasing is a systematic error, due to undersampling of the signal. The Shannon sampling theorem is derived for noise-free data.  Poisson aliasing is essentially a non sequitur. There is no such thing.  



Logged

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
The Physics of Digital Cameras
« Reply #37 on: December 29, 2009, 07:31:01 pm »

Quote from: WarrenMars
What you are telling me is that noise is most visible in the shadows. Hey! You'll get no argument from me on that one! In fact I think I said exactly that on my site. What you and everyone else who has posted in this thread has failed to grasp is that the great problem is NOT in Poisson noise (which is apparent in the shadows) but in Poisson aliasing (which is most apparent in the hot spots)!

Wow, this is hilarious. You acknowledge that noise is most visible in the shadows, yet you continue to insist that the biggest problem is in the highlights, due to "Poisson aliasing". If your assertions had any validity whatsoever, then the margin of error of data captured in the highlights would be much higher than the margin of error for data captured in the shadows; i.e. if the shadow values had an accuracy of ±1%, then highlight values would have an accuracy of ±5%, or something along those lines. But the irrefutable fact is the opposite: the accuracy of highlight values is always greater than the accuracy of shadow values.

Quote
Poisson Aliasing does not manifest as coloured spots so you don't realise it's there. It is such a large problem that it dwarfs ALL other noise problems like an ocean. It is so large a problem that its solution dictates half the ENTIRE implementation of camera design! The camera you use is MASSIVELY compromised in dynamic range purely to solve the issue of Poisson Aliasing!

The first sentence of that paragraph is the greatest proof possible of the invalidity of your argument. Poisson aliasing, according to you, is the result of random fluctuations in the number of photons that strike a given photodetector in a sensor in a given exposure. Expecting such random fluctuations to "not manifest as coloured spots" is preposterous; in order for a random phenomenon to "not manifest as coloured spots", all of the color channels (R, G, B) for every pixel in the image must be affected exactly equally. If your alleged "random" fluctuations are not precisely synchronized among all of the color channels for each pixel, then each pixel in the image will undergo some hue shift, and "coloured spots" will inevitably result. You're basically claiming that every time I press the shutter release on my 1Ds, a random phenomenon rolls a die 3 times for each pixel in the image, and the same number is rolled 3 times in a row for every pixel in the image--all 10,989,056 of them. For a 6-sided die, the odds would be ((6^3 / 6) ^ 10,989,056):1, a statistical impossibility. The odds of this sort of correlation are actually much worse than my example, since the 1Ds has 4096 possible RAW values per pixel, not merely 6. Yet you claim that this massive improbability happens every time an image is captured with a digital camera.
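Just to put a number on those odds (simple arithmetic on the figures in the paragraph above):

```python
from math import log10

pixels = 10_989_056              # pixel count quoted above for the 1Ds
per_pixel_odds = 6 ** 3 // 6     # 36:1 against three die rolls coming up identical
print(f"roughly 10^{pixels * log10(per_pixel_odds):,.0f} to 1")   # about 10^17 million to 1

# With 4096 possible values per channel instead of 6, the exponent grows to
# pixels * log10(4096**2), i.e. about 79 million digits.
```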

The only way this could work is if every digital camera had a miniature Infinite Improbability Drive built into the sensor. But since I've never had a model's clothing spontaneously teleport 3 feet to the left when I snapped her photo, I'm pretty sure there isn't one in there. Come to think of it, I've never even had a dust spot on the sensor teleport 3 centimeters to the left when I pressed the shutter release, so there you have it, definitive proof that your statistics are egregiously misapplied.
« Last Edit: December 29, 2009, 07:32:11 pm by Jonathan Wienke »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
The Physics of Digital Cameras
« Reply #38 on: December 29, 2009, 09:44:14 pm »

Quote from: BJL
The mathematics is fine as far as I can tell, but that criterion of getting the level exactly right is of no practical relevance. It does not matter if a fairly bright pixel (like level 12,345 out of 65,535 as above, roughly mid-tone) is reported at a value that is off by one, for a relative error of one part in 12,345, because the eye has no hope of detecting that discrepancy: a relative error of about 1 in 100 in luminosity is about the smallest that the eye can discriminate. And that is in part because the eye has even less hope of gathering these massive numbers of photons than a sensor does, due to its rather puny entrance pupil diameter!

And then there's the issue of photon shot noise (which has a Poisson distribution).

Cheers,
Bart
« Last Edit: December 30, 2009, 05:06:30 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

WarrenMars

  • Guest
The Physics of Digital Cameras
« Reply #39 on: December 29, 2009, 09:47:25 pm »

Quote from: brianc1959
Warren:
Yep, gotta focus that sun if you wanna burn the paper quickly!  Ever bothered to try it?    

So I take it that you have no credentials, and that you've never published your results in a proper journal.

Whether the sun is in sharp focus at the EXACT point of burning is an interesting question. No doubt most readers would think that it is, but consider this: put your magnifying glass on its side on your window ledge so that the lens is against the glass, and look at something distant (not the sun). No matter how close or far you stand from the lens, the scene is the same brightness with or without the lens.

The critical point here is that you can't brighten an image using optics alone, not with a lens that is designed to focus. If the distant sky is the same brightness through the lens as it is without the lens, then it is safe to say that the sun also will be no brighter. Why it burns the paper is an interesting question.

It is an interesting fact that many things in this world don't work the way you might think. Light amplification is one of them, academic peer review is another.
Logged