Luminous Landscape Forum

Equipment & Techniques => Digital Cameras & Shooting Techniques => Topic started by: WarrenMars on December 18, 2009, 07:06:12 pm

Title: The Physics of Digital Cameras
Post by: WarrenMars on December 18, 2009, 07:06:12 pm
Think you understand the theoretical basis for Digital Cameras? ...  
Unless you have done the figures on Poisson aliasing I'm betting you don't.
Only a proper understanding of the physics of digital cameras and how they match the parameters of human visual perception will show you where the limits of camera technology lie.

You may be surprised to find that current image quality is already within 2 stops of its theoretical maximum and is unlikely to improve by more than 1 stop. You may also be surprised to discover that current technology has already pushed photography 6 stops beyond what can be achieved theoretically and that the difference has been covered up with a combination of human tolerance, noise reduction and sharpening.

There may be some other results that surprise you as well. Go ahead and read my exposé (http://warrenmars.com/photography/technical/resolution/photons.htm) on this fascinating and complex subject. I don't think you'll find this stuff anywhere else on the net; no doubt the big companies know it all, but they keep their secrets to themselves.

http://warrenmars.com/photography/technical/resolution/photons.htm
Title: The Physics of Digital Cameras
Post by: feppe on December 18, 2009, 07:42:55 pm
Quote from: WarrenMars
You may be surprised to find that current image quality is already within 2 stops of its theoretical maximum and is unlikely to improve by more than 1 stop. You may also be surprised to discover that current technology has already pushed photography 6 stops beyond what can be achieved theoretically and that the difference has been covered up with a combination of human tolerance, noise reduction and sharpening.

I'll leave it to those qualified to critique your theory and physics, but you yourself state that we've already gone 6 stops beyond what theory states. The obvious question is what makes you think further developments in hardware, noise reduction, sharpening and other post-processing techniques won't push the boundaries farther than one more stop of improvement? Sounds as compelling as Peak Oil, and just as much hokum.
Title: The Physics of Digital Cameras
Post by: tim wolcott on December 19, 2009, 12:35:38 am
Quote from: WarrenMars
Think you understand the theoretical basis for Digital Cameras? ...  
Unless you have done the figures on Poisson aliasing I'm betting you don't.
Only a proper understanding of the physics of digital cameras and how they match the parameters of human visual perception will show you where the limits of camera technology lie.

You may be surprised to find that current image quality is already within 2 stops of its theoretical maximum and is unlikely to improve by more than 1 stop. You may also be surprised to discover that current technology has already pushed photography 6 stops beyond what can be achieved theoretically and that the difference has been covered up with a combination of human tolerance, noise reduction and sharpening.

There may be some other results that surprise you as well. Go ahead and read my exposé (http://warrenmars.com/photography/technical/resolution/photons.htm) on this fascinating and complex subject. I don't think you'll find this stuff anywhere else on the net; no doubt the big companies know it all, but they keep their secrets to themselves.

http://warrenmars.com/photography/technical/resolution/photons.htm

I find this interesting, but this really is what is wrong with photography: the art of teaching tech. It's good to know the practical limits of what you can do and achieve with your equipment. However, the true photographers and artists who see the best, compose the best, and pre-visualize the images they create will always be the better photographers.

We have gotten so far away from this art of photography; instead of perfecting the art, you are focusing on tech that cannot really be controlled unless you are making the equipment. We are all subject to equipment that has flaws. I personally contact every manufacturer of equipment I use to point out what is wrong, so that I have a better piece of equipment to use and so do you.

Most photographers who want to get better should be focusing on the art of shooting it once and capturing it right the first time.

Granted, we have better tools now than ever before, but the quality of the photographs is lacking at best. This hit-and-miss style of just shooting and wondering what you captured is what is truly wrong with our industry.

It's easier to understand tech (it's just knowledge and doesn't take that much effort) than to study what, where and how you are going to create your next great image. I know I spend a tremendous amount of time doing this. FAILURE TO PREPARE IS PREPARING TO FAIL.

Still, a two-dollar framing card will do more for your composition than all the tech you can resolve in your brain.

The masters who came before us must be laughing at most of us.  Tim Wolcott    www.galleryoftheamericanlandscape.com
Title: The Physics of Digital Cameras
Post by: PeterAit on December 19, 2009, 10:20:58 am
Quote from: tim wolcott
I find this interesting, but this really is what is wrong with photography: the art of teaching tech. It's good to know the practical limits of what you can do and achieve with your equipment. However, the true photographers and artists who see the best, compose the best, and pre-visualize the images they create will always be the better photographers.

We have gotten so far away from this art of photography; instead of perfecting the art, you are focusing on tech that cannot really be controlled unless you are making the equipment. We are all subject to equipment that has flaws. I personally contact every manufacturer of equipment I use to point out what is wrong, so that I have a better piece of equipment to use and so do you.

Most photographers who want to get better should be focusing on the art of shooting it once and capturing it right the first time.

Granted, we have better tools now than ever before, but the quality of the photographs is lacking at best. This hit-and-miss style of just shooting and wondering what you captured is what is truly wrong with our industry.

It's easier to understand tech (it's just knowledge and doesn't take that much effort) than to study what, where and how you are going to create your next great image. I know I spend a tremendous amount of time doing this. FAILURE TO PREPARE IS PREPARING TO FAIL.

Still, a two-dollar framing card will do more for your composition than all the tech you can resolve in your brain.

The masters who came before us must be laughing at most of us.  Tim Wolcott    www.galleryoftheamericanlandscape.com

Amen, bravo, and hear hear!
Title: The Physics of Digital Cameras
Post by: brianc1959 on December 19, 2009, 11:55:27 am
Quote from: WarrenMars
From the bogus article:  "The reason you can burn objects with a magnifying glass is because where the rays meet is not in focus. They are in focus when they form an accurate picture of the sun, and that won't burn."

     
Title: The Physics of Digital Cameras
Post by: HarryHoffman on December 19, 2009, 09:38:52 pm
I see on your site that your current camera is a D60 and you are happy with that. Have you tried anything better than that model to see what a real Pro camera can do for you?
Try a D3 or D3X and then try a Hassy or Phase One. These are not hocus-pocus cameras; they are the real deal in terms of performance.
Title: The Physics of Digital Cameras
Post by: Slobodan Blagojevic on December 19, 2009, 10:12:22 pm
Quote from: WarrenMars
... current image quality is already within 2 stops of its theoretical maximum and is unlikely to improve by more than 1 stop. You may also be surprised to discover that current technology has already pushed photography 6 stops beyond what can be achieved theoretically ...
I will immediately admit I am not qualified to debate the theory behind this, but I still find the above quote logically challenging. Are we within two stops of a theoretical maximum, or are we six stops beyond? For me, "what can be achieved theoretically" = "theoretical maximum", hence my confusion.

And ultimately, every time I hear a statement like that, it reminds me of the bumble bee paradox (i.e., that given our knowledge of aerodynamics, bumble bees are too heavy to fly).
Title: The Physics of Digital Cameras
Post by: michael on December 20, 2009, 09:19:39 am
When the link to this article was first posted I read the piece with some considerable interest, since there is little like it available online.

Much of the math and physics is beyond me (though I saw quite a few things that appeared erroneous), so I forwarded the link to a couple of friends: one a physics PhD and photographer with serious credentials, and the other a PhD who is heavily involved in the design of digital imaging systems.

It appears that though there is some good information in the article, there are also a lot of errors and misinformation.

"Some of his arguments do follow a more or less logical flow, but his pages strike me (not knowing anything about the guy) like he is an amateur scientist trying to explain a very difficult subject that he himself does not understand beyond a rather superficial level."

"He gets some things right but also quite a few wrong. In general his analysis lacks an abundance of technical details, which will contradict some of his conclusions."

So I caution anyone reading Mr. Mars' essay that, though it might appear comprehensive, it is flawed, with numerous erroneous conclusions, at least according to two real experts whose opinion I do trust.

Michael
Title: The Physics of Digital Cameras
Post by: ErikKaffehr on December 20, 2009, 10:03:38 am
Hi,

A good article discussing many of the same issues is here: http://www.northlight-images.co.uk/downloadable_2/Physical_Limits_2.pdf

Regarding Mr. Mars' essay, I also found it a bit confusing and less than rigorous. In my view Mr. Mars has a good point about the Poisson characteristics of noise, but I don't think his findings are consistent with what we can see. Even if the content were correct, the presentation is quite sloppy.

Best regards
Erik



Quote from: michael
When the link to this article was first posted I read the piece with some considerable interest, since there is little like it available online.

Much of the math and physics is beyond me (though I saw quite a few things that appeared erroneous), so I forwarded the link to a couple of friends: one a physics PhD and photographer with serious credentials, and the other a PhD who is heavily involved in the design of digital imaging systems.

It appears that though there is some good information in the article, there are also a lot of errors and misinformation.

"Some of his arguments do follow a more or less logical flow, but his pages strike me (not knowing anything about the guy) like he is an amateur scientist trying to explain a very difficult subject that he himself does not understand beyond a rather superficial level."

"He gets some things right but also quite a few wrong. In general his analysis lacks an abundance of technical details, which will contradict some of his conclusions."

So I caution anyone reading Mr. Mars' essay that, though it might appear comprehensive, it is flawed, with numerous erroneous conclusions, at least according to two real experts whose opinion I do trust.

Michael
Title: The Physics of Digital Cameras
Post by: JeffKohn on December 20, 2009, 02:06:45 pm
Quote from: slobodan56
I will immediately admit I am not qualified to debate the theory behind this, but I still find the above quote logically challenging. Are we within two stops of a theoretical maximum, or are we six stops beyond? For me, "what can be achieved theoretically" = "theoretical maximum", hence my confusion.
I was wondering the same thing; it seems rather contradictory.
Title: The Physics of Digital Cameras
Post by: Jonathan Ratzlaff on December 20, 2009, 04:41:55 pm
The solar flux levels you quoted for energy from the sun include all areas of the spectrum, not just visible light. You need to revise your numbers to reflect the visible spectrum, ~400-700 nm. You then need to look at the attenuation of these wavelengths on the way to Earth. You are comparing apples and oranges in your chart.
The interesting thing is that the eye can see both objects lit by the noonday sun and the Milky Way, which by your estimation is a range of about 36 stops of light, although it needs time to adjust. Instantaneously your eye can view about 20 stops, again more than you state.

You may want to go back to your numbers and look at them more closely  

There have been a number of instances where theoretical limits have been exceeded. When I was in university the limit of resolution for light microscopy was about 200 nm. The theoretical limit to optical resolution was 1/2 the wavelength of light being used to observe the subject. However, now we are at the 10-20 nm resolution stage, an improvement of an order of magnitude.
Other examples: hard drive density, semiconductor density, quantum entanglement.

So don't place too much faith in theoretical limits; they are there to be broken.
Title: The Physics of Digital Cameras
Post by: Bro.Luke on December 20, 2009, 08:43:42 pm
Son of "oh never mind...."
Title: The Physics of Digital Cameras
Post by: kmanphoto on December 20, 2009, 09:05:46 pm
is actually reading this stuff  like the Flux Capacitor from the movie "Back to the Future" ???
because if it is ----   oops  

kman
Title: The Physics of Digital Cameras
Post by: Slobodan Blagojevic on December 20, 2009, 09:42:57 pm
Quote from: Bro.Luke
Ya know I stayed away from this forum...
And if you decide to revert to it, your enlightening and eloquent comments would be sorely missed.
Title: The Physics of Digital Cameras
Post by: Guillermo Luijk on December 20, 2009, 09:59:27 pm
Quote from: PeterAit
Amen, bravo, and hear hear!
Amen and bravo to someone who failed to notice that the OP is talking about camera technology, and not about good or bad photographers?  
Title: The Physics of Digital Cameras
Post by: WarrenMars on December 20, 2009, 10:15:28 pm
For those who are mystified by the apparent contradiction between 2 stops and 6, I apologise for not making it clearer. What I am saying is that technology has got within 2 stops of the best it can ever achieve, while today's cameras are claiming speeds 6 stops in excess of what can be achieved even with perfect technology. There is no contradiction here. What I am trying to do is to make you think! The point is that the manufacturers are cheating by offering noisy, filtered images.

To all those who think that theoretical limits mean nothing, you can get back in your matter transportation device and travel faster than light back in time to the land where 2+2=5 and the sun shines in the middle of the night!

As for those who think technology is irrelevant: What are you doing reading this thread?

For those who think there are errors in my analysis: "Put up or shut up". Quoting some unknown friend who says: "He gets some things right but also quite a few wrong." is of no value.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on December 20, 2009, 11:28:38 pm
Quote from: WarrenMars
For those who think there are errors in my analysis: "Put up or shut up". Quoting some unknown friend who says: "He gets some things right but also quite a few wrong." is of no value.

Here's one blatant error:

Your discussion of standard deviations per level is completely nonsensical. You completely fail to recognize that as the photon count increases, you are increasing the sample population, and that decreases the standard deviation and increases the overall accuracy of the sampling (or confidence factor) by approximately the square root of the photon count. This is why digital images are noisiest in the shadows--the photon sample population per pixel is smallest in the darkest tones, and largest in the lightest tones. Your "Poisson Aliasing" chart seems to be assuming that the standard deviation increases (or at least stays constant) in proportion to average photon count as one increases the photon count, but this is completely backwards. As the photon count increases, the standard deviation decreases (at least when expressed as a percentage of the photon count) because you have a larger sample population of photons to work with. It's the same statistical principle used for calculating the margin of error for opinion surveys--the larger the sample population, the smaller the standard deviation becomes as a percentage of the sample population, and the greater the accuracy of the survey results becomes as a result.

If you were correct, the lighter tones in a digital image would be just as noisy or even noisier than the deep shadows. But this is the opposite of the behavior of every digital camera ever made. You need to go back and get the Statistics 101 stuff right before getting fancy with Poisson aliasing and stuff like that.
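For anyone who wants to check this for themselves, here is a minimal Python/NumPy sketch of the statistics described above; the two mean photon counts are arbitrary assumptions chosen purely for illustration, not measurements from any camera.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate per-pixel photon counts for an assumed "shadow" patch and "highlight" patch.
    for label, mean_photons in [("shadow", 100), ("highlight", 10000)]:
        counts = rng.poisson(mean_photons, size=100000)
        sd = counts.std()
        print(label, "mean:", round(counts.mean(), 1),
              "SD:", round(sd, 1),
              "SD/mean:", round(sd / counts.mean(), 4))

    # The absolute SD grows roughly as sqrt(mean), but SD as a fraction of the mean
    # shrinks, which is why shadows look noisier than highlights.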
Title: The Physics of Digital Cameras
Post by: Slobodan Blagojevic on December 21, 2009, 12:12:09 am
Move over Ken Rockwell, the new king is here!  

The new king of hyperbole, oversimplification, overgeneralization, over-the-top opinionated arrogance... The new king of know-it-all, from quantum physics to corporate (im)morality. Smart people say that semi-knowledge is more dangerous than no knowledge... I now know what they mean.
Title: The Physics of Digital Cameras
Post by: Bro.Luke on December 21, 2009, 01:53:37 am
Oh never mind...
Title: The Physics of Digital Cameras
Post by: fike on December 21, 2009, 10:45:16 am
Always interesting to note certain stats when people make controversial statements.

OP post count = 2.  
OP Join Date = Dec 17, 2009

I'm just sayin...

Title: The Physics of Digital Cameras
Post by: bjanes on December 21, 2009, 11:21:17 am
Quote from: Jonathan Wienke
Here's one blatant error:

Your discussion of standard deviations per level is completely nonsensical. You completely fail to recognize that as the photon count increases, you are increasing the sample population, and that decreases the standard deviation

If you were correct, the lighter tones in a digital image would be just as noisy or even noisier than the deep shadows. But this is the opposite of the behavior of every digital camera ever made. You need to go back and get the Statistics 101 stuff right before getting fancy with Poisson aliasing and stuff like that.

Jonathan, you are incorrect here. In a Poisson distribution, the standard deviation of the count is equal to the square root of the count. Thus as the count increases, the standard deviation also increases, but more slowly than the count itself. The noise in digital images is greatest in the highlights. However, the signal to noise ratio is highest in the highlights and it is the S:N that correlates with perceived noise.

If N photons are collected, the standard deviation will be sqrt(N). The signal to noise will be N/sqrt(N) = sqrt(N). If you double the exposure, the S:N increases by a factor of 1.4.
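To put round numbers on that (illustrative values, not measurements from any particular sensor): a shadow pixel collecting N = 400 photons has shot noise sqrt(400) = 20 and S:N = 20, while a highlight pixel collecting N = 40,000 photons has shot noise 200 but S:N = 200, ten times better, even though its absolute noise is ten times larger.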

For a detailed discussion see Emil Martinec (http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html#shotnoise).
Title: The Physics of Digital Cameras
Post by: bjanes on December 21, 2009, 02:28:09 pm
Quote from: WarrenMars
For those who think there are errors in my analysis: "Put up or shut up". Quoting some unknown friend who says: "He gets some things right but also quite a few wrong." is of no value.
Your analysis focuses entirely on Poisson noise, or shot noise, and you state that the problems are in the highlights, where shot noise is highest, and you introduce the neologism "Poisson aliasing". However, you fail to take into account that what we perceive as noise is really a low ratio of signal to noise. Anyone who does digital photography knows that the apparent noise is highest in the shadows, not the highlights. You totally neglect read noise, which predominates in the shadows. If you put the lens cap on the camera, the signal is zero and the noise is entirely read noise; this is one way to measure the read noise.

Look at Table 2 in Roger Clark's (http://www.clarkvision.com/imagedetail/evaluation-1d2/index.html) analysis of the Canon 1D MII. Noise is highest in the highlights, but the signal to noise is also highest in this region, and if you look at actual images you will see that the perceived noise is smallest in this region. The low noise characteristics of the latest sensors, such as the one used in the Nikon D3X, have been achieved largely by reducing the read noise, although there have been some improvements in quantum efficiency.
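As a rough illustration of the lens-cap measurement described above, here is a minimal Python/NumPy sketch; load_raw_plane is a hypothetical placeholder for whatever raw decoder you use, and the method (differencing two bias frames to cancel fixed-pattern offsets) is the standard one, not anything specific to this thread.

    import numpy as np

    def estimate_read_noise(bias_a, bias_b):
        # Two lens-cap (zero-exposure) frames: subtracting them removes any
        # fixed pattern, leaving only random read noise in the difference.
        diff = bias_a.astype(np.float64) - bias_b.astype(np.float64)
        return diff.std() / np.sqrt(2)   # divide by sqrt(2): two frames contribute noise

    # Usage, with your own raw loader:
    # a = load_raw_plane("lenscap_0001.raw")
    # b = load_raw_plane("lenscap_0002.raw")
    # print("read noise:", estimate_read_noise(a, b), "ADU")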
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on December 21, 2009, 02:59:54 pm
Quote from: bjanes
Jonathan, you are incorrect here. In a Poisson distribution, the standard deviation of the count is equal to the square root of the count. Thus as the count increases, the standard deviation also increases, but more slowly than the count itself.

When expressed as a percentage of the sample population count, SD decreases as the sample population increases. In absolute terms, yes, SD increases, but more slowly than the population count. I specified the percentage bit in the second part of my post but neglected to do so in the first.
Title: The Physics of Digital Cameras
Post by: joofa on December 21, 2009, 03:14:30 pm
Quote from: Jonathan Wienke
When expressed as a percentage of the sample population count, SD decreases as the sample population increases. In absolute terms, yes, SD increases, but more slowly than the population count. I specified the percentage bit in the second part of my post but neglected to do so in the first.

Jonathan, it would appear to me that you are mixing up a few issues here. BJanes is right. When you talk about a population, basically every member of that population is an estimator, with some variance, of some quantity; the population is considered collectively in order to reduce that estimation variance. For example, pick a random adult in the US and take his/her height to be representative of US adult height. That is not a good estimator, so you increase the population size with the goal of reducing variance and perhaps bias also. In the photon example, each single photon is not an estimator of the actual true number of photons.
Title: The Physics of Digital Cameras
Post by: Hywel on December 21, 2009, 03:14:41 pm
A much fuller treatment of the effects of noise in digital photography can be found in any decent textbook on astro imaging, such as Steve Howell's very good "Handbook of CCD Astronomy" ISBN 0-521-64834-3

http://www.amazon.co.uk/Astronomy-Cambridge-Observing-Handbooks-Astronomers/dp/0521617626/ref=sr_1_1?ie=UTF8&s=books&qid=1261425075&sr=8-1

or the more practical-amateur-photography orientated "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, ISBN 0-943396-82-4

http://www.amazon.co.uk/Handbook-Astronomical-Image-Processing/dp/0943396824/ref=sr_1_2?ie=UTF8&s=books&qid=1261425385&sr=1-2

None of this is even faintly new. Astronomers have been dealing with noise issues and digital sensors for decades and their signal to noise ratios are vastly less favourable than ours. That actually means there are a hell of a lot of tricks in the arsenal for controlling the noise that daylight photographers rarely if ever use.

For example, anyone intending to make serious astro images would certainly cool their sensor significantly below ambient temperature (which reduces the thermal noise), capture dark frames (which allow an averaged noise subtraction), bias frames (zero-exposure dark frames which allow you to separate readout noise from thermal noise) and flat fields to correct for uneven illumination by the optics, sensor contamination and channel-to-channel variations in gain.
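For anyone curious what those calibration steps look like in code, here is a minimal Python/NumPy sketch of the standard reduction outlined above, assuming the master frames have already been built by averaging many exposures and loaded as floating-point arrays:

    import numpy as np

    def calibrate(light, master_dark, master_bias, master_flat):
        # A dark frame taken at the same exposure and temperature carries both
        # bias offset and thermal signal, so subtracting it removes both.
        dark_subtracted = light - master_dark
        # The flat field, with its own bias removed and normalised to unit mean,
        # corrects vignetting, dust shadows and channel-to-channel gain differences.
        flat = master_flat - master_bias
        flat_norm = flat / np.mean(flat)
        return dark_subtracted / flat_norm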

Sure, there will be a sqrt(N)/N behaviour from photon statistics, but there are an awful lot of other sources of noise which are also important, and they kick in in the shadows, not the bright areas.

Suppose that you DO get the case where one pixel randomly gets more photons and ends up one bit higher than it should be. So what? The pixel will read out as (for 14 bits) 16380 instead of 16379. No human being is sensitive to such a small change, and unless you are trying to resolve a pattern painted in stripy greys differing by 1 part in 16384 (i.e. identical to the human eye) at a resolution equal to the pixel pitch (i.e. far smaller than the human eye could resolve), the impact on the perceived final image will be zero. Averaged over more than a few pixels, the noise in the bright areas soon gets beaten down to utterly imperceptible levels. Then add in the human eye's logarithmic response, which makes us even less sensitive to small differences in the bright areas, and this is, frankly, a non-issue.

As lots of others have pointed out, the noise problems are in the shadows, not in the bright areas. N is small there, sqrt(N)/N is therefore large, and the expected deviation of a given pixel from the "true" value is much larger, made even worse by the Poisson tails which are appreciable for small N but utterly, utterly irrelevant for large N where the behaviour is so close to Gaussian that your spreadsheet couldn't even calculate it, as you found out.

You've also omitted quantisation noise, which is the noise introduced by digitising the signal. Controlling this noise is one reason why cameras have gone from a 12-bit internal readout to a 14-bit: you don't want to bodge up your nice clean signal by crudely slicing it into too coarse bins. You always want to digitise at a level comparable with or better than the noise, or the quantisation noise would overwhelm the shot noise, which would be really dumb.

The other noise sources also become relevant in the shadows, or with low light levels, which is why all astro imaging camera sensors are cooled but most dSLR or MFDB chips are not. (My Hasselblad has a hefty heat sink in it, though, and my Canon D30 did automatic dark frame subtraction for long exposures, but these features are not widespread these days.)

Cheers, Hywel Phillips
Title: The Physics of Digital Cameras
Post by: PierreVandevenne on December 21, 2009, 05:16:49 pm
Quote from: Hywel
A much fuller treatment of the effects of noise in digital photography can be found in any decent textbook on astro imaging, such as Steve Howell's very good "Handbook of CCD Astronomy" ISBN 0-521-64834-3

http://www.amazon.co.uk/Astronomy-Cambridge-Observing-Handbooks-Astronomers/dp/0521617626/ref=sr_1_1?ie=UTF8&s=books&qid=1261425075&sr=8-1

or the more practical-amateur-photography orientated "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, ISBN 0-943396-82-4

http://www.amazon.co.uk/Handbook-Astronomical-Image-Processing/dp/0943396824/ref=sr_1_2?ie=UTF8&s=books&qid=1261425385&sr=1-2

Fully agree - it becomes really tiring to see so many "final analysis" articles written by people who haven't even looked at the fundamentals, clearly exposed in those books and, no doubt, in many other serious references. The article that started the thread is, in some ways, hilarious. The guy begins by confusing pixel pitch and pixel surface... and a lot of very basic counts are just so wrong: 64 million photons per pixel, yeah, sure, how do we count them? Assuming an utterly crappy QE of 1% we have 640,000 electrons to deal with in each well...
 
Title: The Physics of Digital Cameras
Post by: Guillermo Luijk on December 21, 2009, 06:58:23 pm
Quote from: Hywel
You've also omitted quantisation noise, which is the noise introduced by digitising the signal. Controlling this noise is one reason why cameras have gone from a 12-bit internal readout to a 14-bit: you don't want to bodge up your nice clean signal by crudely slicing it into too coarse bins. You always want to digitise at a level comparable with or better than the noise, or the quantisation noise would overwhelm the shot noise, which would be really dumb.
Not exactly. Quantisation noise is not a problem in present digital cameras because the other sources of noise (read noise, basically) are so much greater than the quantisation step.

Moving from 12 to 14 bits seems closer to a marketing move than anything that can really provide a quality advantage, because as long as read noise remains larger than a 1-bit step on a 12-bit scale, increasing bit depth is unnecessary. It was said somewhere that the Nikon D3X is probably the first camera to really take advantage of its 14-bit encoding, because of its very low read and pattern noise. It would be nice to test that by doing a 12-bit RAW development of a D3X NEF and comparing it with the result of the regular 14-bit development.
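Short of shooting that comparison, a quick simulation gives a feel for the argument; the full-well and read-noise figures below are made-up but plausible assumptions, not D3X measurements:

    import numpy as np

    rng = np.random.default_rng(1)
    full_well = 60000          # assumed full-well capacity, electrons
    read_noise = 25.0          # assumed read noise, electrons
    signal = rng.poisson(2000, size=200000) + rng.normal(0, read_noise, size=200000)

    for bits in (12, 14):
        step = full_well / 2 ** bits               # electrons per ADU at this bit depth
        err = np.round(signal / step) * step - signal
        print(bits, "bits: quantisation error SD =", round(err.std(), 2),
              "e-, vs read noise", read_noise, "e-")

    # As long as the read noise is several times the 12-bit step, the extra
    # two bits change the total noise by a negligible amount.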

Regards
Title: The Physics of Digital Cameras
Post by: WarrenMars on December 22, 2009, 02:08:56 am
Quote from: Jonathan Wienke
Here's one blatant error:

Your discussion of standard deviations per level is completely nonsensical. You completely fail to recognize that as the photon count increases, you are increasing the sample population, and that decreases the standard deviation and increases the overall accuracy of the sampling (or confidence factor) by approximately the square root of the photon count. This is why digital images are noisiest in the shadows--the photon sample population per pixel is smallest in the darkest tones, and largest in the lightest tones. Your "Poisson Aliasing" chart seems to be assuming that the standard deviation increases (or at least stays constant) in proportion to average photon count as one increases the photon count, but this is completely backwards. As the photon count increases, the standard deviation decreases (at least when expressed as a percentage of the photon count) because you have a larger sample population of photons to work with. It's the same statistical principle used for calculating the margin of error for opinion surveys--the larger the sample population, the smaller the standard deviation becomes as a percentage of the sample population, and the greater the accuracy of the survey results becomes as a result.

If you were correct, the lighter tones in a digital image would be just as noisy or even noisier than the deep shadows. But this is the opposite of the behavior of every digital camera ever made. You need to go back and get the Statistics 101 stuff right before getting fancy with Poisson aliasing and stuff like that.

Actually it is you that have got it back-to-front.
Like you I thought the same, until (unlike you), I did the maths.
You think those graphs are a made up joke?
I assure you they are generated from the spreadsheet I set up using the Poison maths to PROVE the point.

I have posted this stuff BECAUSE it is counter-intuitive! The reality of camera physics is NOT what you think!
If you had taken the time to UNDERSTAND what I went to a great deal of trouble to show, instead of just skimming over it you would now be singing a different tune.

I put this stuff up to HELP you people understand what you are actually dealing with on a daily basis. The least you can do in return is to make the effort to fully understand what I have said.
Title: The Physics of Digital Cameras
Post by: Hywel on December 22, 2009, 03:42:58 am
Quote from: GLuijk
Not exactly. Quantisation noise is not a problem in present digital cameras because the other sources of noise (read noise, basically) are so much greater than the quantisation step.

Moving from 12 to 14 bits seems closer to a marketing move than anything that can really provide a quality advantage, because as long as read noise remains larger than a 1-bit step on a 12-bit scale, increasing bit depth is unnecessary. It was said somewhere that the Nikon D3X is probably the first camera to really take advantage of its 14-bit encoding, because of its very low read and pattern noise. It would be nice to test that by doing a 12-bit RAW development of a D3X NEF and comparing it with the result of the regular 14-bit development.

Regards


Which is my point exactly. One wants to keep the digitisation noise (which is something entirely under your control, as you can choose the quantisation, the only impact is a slight increase in electronics cost) waaaay below the statistical noise. So if there's even the faintest chance of the quantisation noise coming into play, you might as well go for an extra couple of bits, because they are a lot cheaper than trying to redesign the sensor for lower noise.

The example posted earlier in the thread, the Canon 1D Mk II analysis, showed that there are some effects of quantisation noise; it basically provides a noise floor (along with the readout noise).

  Cheers, Hywel.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on December 23, 2009, 11:34:21 am
Quote from: WarrenMars
Actually it is you that have got it back-to-front.
Like you I thought the same, until (unlike you), I did the maths.
You think those graphs are a made up joke?
I assure you they are generated from the spreadsheet I set up using the Poison maths to PROVE the point.

It's Poisson, not Poison...

The problem with your mathematical analysis is that it takes a flawed look at only one type of noise, when real-world cameras have a combination of several types of noise. I understand what you are saying just fine, and unlike you, I have done some mathematical analysis of actual RAW image data. I also understand that your theory does not fit the observed behavior of any digital camera ever made. Conduct the following experiment, and post your results:


If you are correct, the standard deviation of the white patch data divided by the average of the white patch data will be greater than or equal to the standard deviation of the black patch data, divided by the average of the black patch data. If I'm correct, then the standard deviation of the black patch data divided by the average of the black patch data will be greater.

The sensitivity and reliability of photodetectors to detect incoming photons is one source of noise. The analog amplification of the resulting voltage is a second source of noise. The analog-to-digital converter is a third noise source. The quantum fluctuation in incoming photons (which is what you analyzed) is a fourth, and is the least significant of the four. The first three noise sources combine to form what is called read noise, which is present even when no incoming photons are present. Read noise is what limits dynamic range and high-ISO performance, and since it is present regardless of whether any photons are detected or not, it is most predominant in the darkest tones and shadows.
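For what it's worth, the patch comparison proposed above is easy to script once the raw data is in an array; the crop coordinates and the raw loader below are placeholders (my assumptions, not part of the original post) that you would replace with your own:

    import numpy as np

    def relative_noise(raw, y0, y1, x0, x1):
        patch = raw[y0:y1, x0:x1].astype(np.float64)
        return patch.std() / patch.mean()    # SD as a fraction of the mean

    # raw = load_raw_plane("test_chart.raw")                        # hypothetical loader
    # print("white patch SD/mean:", relative_noise(raw, 100, 200, 100, 200))
    # print("black patch SD/mean:", relative_noise(raw, 100, 200, 800, 900))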
Title: The Physics of Digital Cameras
Post by: brianc1959 on December 23, 2009, 11:53:20 am
Quote from: WarrenMars
Actually it is you that have got it back-to-front.
Like you I thought the same, until (unlike you), I did the maths.
You think those graphs are a made up joke?
I assure you they are generated from the spreadsheet I set up using the Poison maths to PROVE the point.

I have posted this stuff BECAUSE it is counter-intuitive! The reality of camera physics is NOT what you think!
If you had taken the time to UNDERSTAND what I went to a great deal of trouble to show, instead of just skimming over it you would now be singing a different tune.

I put this stuff up to HELP you people understand what you are actually dealing with on a daily basis. The least you can do in return is to make the effort to fully understand what I have said.

Warren:
Your understanding of physics seems to be astonishingly poor, as evidenced by the absurd statement you made in your article regarding magnifying glasses.  What are your credentials?  Can you point us to some peer-reviewed journal articles you've written in this field?
Title: The Physics of Digital Cameras
Post by: WarrenMars on December 29, 2009, 05:19:59 am
Quote from: brianc1959
Warren:
Your understanding of physics seems to be astonishingly poor, as evidenced by the absurd statement you made in your article regarding magnifying glasses.  What are your credentials?  Can you point us to some peer-reviewed journal articles you've written in this field?

Think the sun is in focus when you can burn paper with a magnifying glass, do you? You've obviously NEVER done any solar observing using a telescope!
There's not much point submitting one's results for peer analysis when people with your understanding are doing the reviewing.  
Title: The Physics of Digital Cameras
Post by: WarrenMars on December 29, 2009, 05:42:37 am
Quote from: Jonathan Wienke
It's Poisson, not Poison...

Fair enough, my typo; fortunately I didn't make that error on my site.

Quote
If you are correct, the standard deviation of the white patch data divided by the average of the white patch data will be greater than or equal to the standard deviation of the black patch data, divided by the average of the black patch data. If I'm correct, then the standard deviation of the black patch data divided by the average of the black patch data will be greater.

What you are telling me is that noise is most visible in the shadows. Hey! You'll get no argument from me on that one! In fact I think I said exactly that on my site. What you and everyone else who has posted in this thread have failed to grasp is that the great problem is NOT Poisson noise (which is apparent in the shadows) but Poisson aliasing (which is most apparent in the hot spots)!

Poisson Aliasing does not manifest as coloured spots, so you don't realise it's there. It is such a large problem that it dwarfs ALL other noise problems like an ocean. It is so large a problem that its solution dictates half the ENTIRE implementation of camera design! The camera you use is MASSIVELY compromised in dynamic range purely to solve the issue of Poisson Aliasing!

You don't know this stuff, and it's not available elsewhere on the net, so you choose to ignore it or disbelieve it. It's the elephant in the room, gentlemen! Too big to deal with, so we'll just pretend it isn't there! Very much like over-population, actually...

Anyway. You can continue to misunderstand what I have set down in clear and concise English, it doesn't bother me. I have made this analysis available for those who want to know, not for those who don't.

Go ahead and dismiss me but you can't change the laws of physics! The camera manufacturers have chosen to keep quiet about this stuff but it doesn't mean they don't know. Of course THEY know about it! Your camera won't work without it! They just don't want you to know the truth; it's bad for business. Cameras are like religion: the less people know the more they can sell.
Title: The Physics of Digital Cameras
Post by: bjanes on December 29, 2009, 08:33:16 am
Quote from: WarrenMars
Poisson Aliasing does not manifest as coloured spots, so you don't realise it's there. It is such a large problem that it dwarfs ALL other noise problems like an ocean. It is so large a problem that its solution dictates half the ENTIRE implementation of camera design! The camera you use is MASSIVELY compromised in dynamic range purely to solve the issue of Poisson Aliasing!

Go ahead and dismiss me but you can't change the laws of physics! The camera manufacturers have chosen to keep quiet about this stuff but it doesn't mean they don't know. Of course THEY know about it! Your camera won't work without it! They just don't want you to know the truth; it's bad for business. Cameras are like religion: the less people know the more they can sell.

I was previously unaware of Poisson aliasing, so I did a Google search and came up with this article which I don't understand: Poisson summation (http://www.springerlink.com/content/t41041r47438p083/fulltext.pdf). Perhaps you can explain how it applies to digital imaging. I still do not understand how this aliasing affects the image. Can you post an example actually demonstrating the artifact rather than merely describing it?

Title: The Physics of Digital Cameras
Post by: brianc1959 on December 29, 2009, 02:07:32 pm
Quote from: WarrenMars
Think the sun is in focus when you can burn paper with a magnifying glass, do you? You've obviously NEVER done any solar observing using a telescope!
There's not much point submitting one's results for peer analysis when people with your understanding are doing the reviewing.  

Warren:
Yep, gotta focus that sun if you wanna burn the paper quickly!  Ever bothered to try it?    

So I take it that you have no credentials, and that you've never published your results in a proper journal.
Title: The Physics of Digital Cameras
Post by: BJL on December 29, 2009, 03:06:48 pm
As far as I can tell, in his talk of "A problem at the bright end" Warren Mars is using the criterion that the numerical level in the digital output should be exactly right a large fraction of the time. For example, the need for 2^36 photons at the brightest photosites with 16-bit output (with levels from 0 to 65,535) is based on the "4SD" (a.k.a. "4 sigma" in the field of statistical quality control) criterion which guarantees that when a pixel is reported as having output level 12,345 there is a 99.7% probability that the incident light at that photosite really corresponds to exactly that level, rather than for example being slightly dimmer so that the correct output level would be 12,344, or slightly brighter so that the correct output level would be 12,346.
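If I am reconstructing that criterion correctly (this is my reading, not a quote from the article): with a full-well count of N photons mapped onto 2^16 output levels, one level corresponds to N / 2^16 photons, while the shot noise near the top of the scale is sqrt(N); demanding that four standard deviations fit within one level, 4 * sqrt(N) <= N / 2^16, gives sqrt(N) >= 2^18, i.e. N >= 2^36 photons, which is where that figure comes from.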

The mathematics is fine as far as I can tell, but that criterion of getting the level exactly right is of no practical relevance. It does not matter if a fairly bright pixel (like level 12,345 out of 65,535 as above, roughly mid-tone) is reported at a value that is off by one, for a relative error of one part in 12,345, because the eye has no hope of detecting that discrepancy: a relative error of about 1 in 100 in luminosity is about the smallest that the eye can discriminate. And that is in part because the eye has even less hope of gathering these massive numbers of photons than a sensor does, due to its rather puny entrance pupil diameter!

It looks like a classic misunderstanding of statistics and of what constitutes a significant error: insisting on exact values when in practice they are neither attainable nor necessary. Instead, measurements that fall within some appropriate error tolerance of the exact value most of the time are a sufficient design goal: in this case about 1%, so getting the level within about 100 in my example above.


If Poisson aliasing were truly such a problem, we would all have severe vision problems!
Title: The Physics of Digital Cameras
Post by: nma on December 29, 2009, 05:46:52 pm
Quote from: BJL
As far as I can tell, in his talk of "A problem at the bright end" Warren Mars is using the criterion that the numerical level in the digital output should be exactly right a large fraction of the time. For example, the need for 2^36 photons at the brightest photosites with 16-bit output (with levels from 0 to 65,535) is based on the "4SD" (a.k.a. "4 sigma" in the field of statistical quality control) criterion which guarantees that when a pixel is reported as having output level 12,345 there is a 99.7% probability that the incident light at that photosite really corresponds to exactly that level, rather than for example being slightly dimmer so that the correct output level would be 12,344, or slightly brighter so that the correct output level would be 12,346.

The mathematics is fine as far as I can tell, but that criterion of getting the level exactly right is of no practical relevance. It does not matter if a fairly bright pixel (like level 12,345 out of 65,535 as above, roughly mid-tone) is reported at a value that is off by one, for a relative error of one part in 12,345, because the eye has no hope of detecting that discrepancy: a relative error of about 1 in 100 in luminosity is about the smallest that the eye can discriminate. And that is in part because the eye has even less hope of gathering these massive numbers of photons than a sensor does, due to its rather puny entrance pupil diameter!

It looks like a classic misunderstanding of statistics and of what constitutes a significant error: insisting on exact values when in practice they are neither attainable nor necessary. Instead, measurements that fall within some appropriate error tolerance of the exact value most of the time are a sufficient design goal: in this case about 1%, so getting the level within about 100 in my example above.


If Poisson aliasing were truly such a problem, we would all have severe vision problems!

One can classify errors as systematic and random (i.e. statistical). Aliasing is a systematic error, due to undersampling of the signal. The Shannon sampling theorem is derived for noise-free data.  Poisson aliasing is essentially a non sequitur. There is no such thing.  



Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on December 29, 2009, 07:31:01 pm
Quote from: WarrenMars
What you are telling me is that noise is most visible in the shadows. Hey! You'll get no argument from me on that one! In fact I think I said exactly that on my site. What you and everyone else who has posted in this thread have failed to grasp is that the great problem is NOT Poisson noise (which is apparent in the shadows) but Poisson aliasing (which is most apparent in the hot spots)!

Wow, this is hilarious. You acknowledge that noise is most visible in the shadows, yet you continue to insist that the biggest problem is in the highlights, due to "Poisson aliasing". If your assertions had any validity whatsoever, then the margin of error of data captured in the highlights would be much higher than the margin of error for data captured in the shadows; i.e. if the shadow values had an accuracy of ±1%, then highlight values would have an accuracy of ±5%, or something along those lines. But the irrefutable fact is the opposite: the accuracy of highlight values is always greater than the accuracy of shadow values.

Quote
Poisson Aliasing does not manifest as coloured spots, so you don't realise it's there. It is such a large problem that it dwarfs ALL other noise problems like an ocean. It is so large a problem that its solution dictates half the ENTIRE implementation of camera design! The camera you use is MASSIVELY compromised in dynamic range purely to solve the issue of Poisson Aliasing!

The first sentence of that paragraph is the greatest proof possible of the invalidity of your argument. Poisson aliasing, according to you, is the result of random fluctuations in the number of photons that strike a given photodetector in a sensor in a given exposure. Expecting such random fluctuations to "not manifest as coloured spots" is preposterous; in order for a random phenomenon to "not manifest as coloured spots", all of the color channels (R, G, B) for every pixel in the image must be affected exactly equally. If your alleged "random" fluctuations are not precisely synchronized among all of the color channels for each pixel, then each pixel in the image will undergo some hue shift, and "coloured spots" will inevitably result. You're basically claiming that every time I press the shutter release on my 1Ds, a random phenomenon rolls a die 3 times for each pixel in the image, and the same number is rolled 3 times in a row for every pixel in the image--all 10,989,056 of them. For a 6-sided die, the odds would be ((6^3 / 6) ^ 10,989,056):1, a statistical impossibility. The odds of this sort of correlation are actually much worse than my example, since the 1Ds has 4096 possible RAW values per pixel, not merely 6. Yet you claim that this massive improbability happens every time an image is captured with a digital camera.

The only way this could work is if every digital camera had a miniature Infinite Improbability Drive (http://www.earthstar.co.uk/drive.htm) built into the sensor. But since I've never had a model's clothing spontaneously teleport 3 feet to the left when I snapped her photo, I'm pretty sure there isn't one in there. Come to think of it, I've never even had a dust spot on the sensor teleport 3 centimeters to the left when I pressed the shutter release, so there you have it, definitive proof that your statistics are egregiously misapplied.
Title: The Physics of Digital Cameras
Post by: Bart_van_der_Wolf on December 29, 2009, 09:44:14 pm
Quote from: BJL
The mathematics is fine as far as I can tell, but that criterion of getting the level exactly right is of no practical relevance. It does not matter if a fairly bright pixel (like level 12,345 out of 65,535 as above, roughly mid-tone) is reported at a value that is off by one, for a relative error of one part in 12,345, because the eye has no hope of detecting that discrepancy: a relative error of about 1 in 100 in luminosity is about the smallest that the eye can discriminate. And that is in part because the eye has even less hope of gathering these massive numbers of photons than a sensor does, due to its rather puny entrance pupil diameter!

And then there's the issue of photon shot noise (which has a Poisson distribution).

Cheers,
Bart
Title: The Physics of Digital Cameras
Post by: WarrenMars on December 29, 2009, 09:47:25 pm
Quote from: brianc1959
Warren:
Yep, gotta focus that sun if you wanna burn the paper quickly!  Ever bothered to try it?    

So I take it that you have no credentials, and that you've never published your results in a proper journal.

Whether the sun is in sharp focus at the EXACT point of burning is an interesting question. No doubt most readers would think that it is, but consider this: put your magnifying glass on its side on your window ledge so that the lens is against the glass, and look at something distant (not the sun). No matter how close or far you stand from the lens, the scene is the same brightness with or without the lens.

The critical point here is that you can't brighten an image using optics alone, not with a lens that is designed to focus. If the distant sky is the same brightness through the lens as it is without the lens, then it is safe to say that the sun also will be no brighter. Why it burns the paper is an interesting question.

It is an interesting fact that many things in this world don't work the way you might think. Light amplification is one of them, academic peer review is another.
Title: The Physics of Digital Cameras
Post by: Jonathan Ratzlaff on December 29, 2009, 10:42:35 pm
Your argument is starting to sound a lot like Zeno's paradox. Unfortunately, like Zeno's paradox, although it seems to stand up under mathematical study, it falls down immediately the hare passes the tortoise. So normally, if a model is not supported by observation, then it is time to re-examine the model.
Title: The Physics of Digital Cameras
Post by: PierreVandevenne on December 30, 2009, 05:21:52 am
Quote from: WarrenMars
The critical point here is that you can't brighten an image using optics alone, not with a lens that is designed to focus. If the distant sky is the same brightness through the lens as it is without the lens, then it is safe to say that the sun also will be no brighter. Why it burns the paper is an interesting question.

Really? May I suggest you release a paper about "The Physics of Burning Paper". You still have four months to April 1st.

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on December 30, 2009, 08:53:27 am
Quote from: WarrenMars
Whether the sun is in sharp focus at the EXACT point of burning is an interesting question. No doubt most readers would think that it is, but consider this: put your magnifying glass on its side on your window ledge so that the lens is against the glass, and look at something distant (not the sun). No matter how close or far you stand from the lens, the scene is the same brightness with or without the lens.

This is easily disproven with simple experiments. If you take a magnifying lens and use it to focus the sun on an object, it is easy to verify experimentally that the highest temperature is reached when the image of the sun is focused to the smallest possible diameter (which happens to coincide with most people's definition of "sharpest focus").

The "equal brightness" claim is equally absurd. If you look through the viewfinder of a SLR/DSLR wearing a fast lens, (say f.1.2 or larger aperture), it is possible for the viewfinder image to be brighter than the subject. And as every beginner photographer knows, by adjusting the aperture (which changes the effective diameter of the lens), it is possible to create any brightness level desired within the adjustment range of the lens.

The same principle is true of any number of high-end rifle scopes with large objective lenses; a good-quality scope can create an image that is brighter than looking at the subject directly. They are larger, heavier, bulkier, and far more expensive than scopes with smaller objective lenses, but if you need to shoot in dim light, there is a world of difference.

Quote
The critical point here is that you can't brighten an image using optics alone, not with a lens that is designed to focus. If the distant sky is the same brightness through the lens as it is without the lens, then it is safe to say that the sun also will be no brighter. Why it burns the paper is an interesting question.

An interesting question indeed; the obvious answer is that your understanding of optics is fundamentally flawed. Contrary to your assertions, the image can be either brighter or dimmer than the original subject, depending on the quality of the anti-reflective coatings on the optical elements, and the ratio of aperture to focal length. The larger the diameter of the aperture relative to focal length, the brighter the image drawn by the lens will be. If the aperture is larger than the focal length (e.g. an f/0.5 lens), the focused image will be quite a bit brighter than the original subject, because the surface area of the aperture is greater than the surface area of the focused image. Such lenses are bulky, heavy, and very expensive, but they do exist, and they can passively amplify light by concentrating photons collected from a large surface area into a smaller surface area.
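To put a rough formula behind the f-number point (the standard textbook approximation, not anything taken from Warren's article): for an extended subject of luminance L, the irradiance at the image plane is approximately E = pi * L * T / (4 * N^2), where N is the working f-number and T the lens transmission, so image-plane brightness scales as 1/N^2 and does not depend on how far the viewer stands from the lens.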

Night vision goggles use electronics to amplify light because an f/0.0625 lens would be as large as a Spartan's shield, weigh more than all of a soldier's other gear combined, be difficult to engineer to focus an acceptably distortion-free image, cost more than most houses, and have a depth of field so shallow as to be completely useless at distances less than infinity.

Quote
It is an interesting fact that many things in this world don't work the way you might think. Light amplification is one of them, academic peer review is another.

And you are demonstrably clueless about both.
Title: The Physics of Digital Cameras
Post by: brianc1959 on December 30, 2009, 02:16:37 pm
Quote from: Jonathan Wienke
Night vision goggles use electronics to amplify light because an f/0.0625 lens would be as large as a Spartan's shield, weigh more than all of a soldier's other gear combined, be difficult to engineer to focus an acceptably distortion-free image, cost more than most houses, and have a depth of field so shallow as to be completely useless at distances less than infinity.

The real reason you wouldn't want an f/0.0625 lens is that you can't simultaneously correct spherical aberration and coma in any lens faster than f/0.5.  This is due to a breakdown of the Abbe sine condition when the marginal ray angle exceeds 90 degrees.
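In symbols (the textbook relation, not anything specific to the lenses discussed here): the image-space f-number is N = 1 / (2 * n * sin(theta)), where theta is the marginal ray angle and n the image-space refractive index; with n = 1 and sin(theta) <= 1, N cannot drop below 0.5, which is where the f/0.5 limit for lenses satisfying the sine condition comes from.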
Title: The Physics of Digital Cameras
Post by: Plekto on December 30, 2009, 03:27:54 pm
If they were using a material other than glass, though, that would change. The question, though, is what other materials we could possibly use besides glass and plastic.
Title: The Physics of Digital Cameras
Post by: brianc1959 on December 30, 2009, 05:30:27 pm
Quote from: Plekto
If they were using another material other than glass, though, that would change.  The question, though, is, what other materials would we possibly use besides glass and plastic?

I'm not sure what you are referring to, but in case it was my previous post: the sine condition is not affected by the optical materials you use.
Title: The Physics of Digital Cameras
Post by: WarrenMars on December 30, 2009, 07:27:22 pm
Here is a photo of the sun's image correctly focused onto a piece of paper.

(http://warrenmars.com/pictures/misc/sun_spots.jpg)

Note that the sun's image is NOT a small circle and it does NOT burn the paper.

The point where the rays meet may be the focal point for the lens but it is not the point at which the image is in sharp focus.
I would hope that when a photographer talks about focus he is talking about the subject being a clear image and not whether that image is as small as possible.
Title: The Physics of Digital Cameras
Post by: brianc1959 on December 30, 2009, 07:36:04 pm
Quote from: WarrenMars
Here is a photo of the sun's image correctly focused onto a piece of paper.

(http://warrenmars.com/pictures/misc/sun_spots.jpg)

Note that the sun's image is NOT a small circle and it does NOT burn the paper.

The point where the rays meet may be the focal point for the lens but it is not the point at which the image is in sharp focus.
I would hope that when a photographer talks about focus he is talking about the subject being a clear image and not whether that image is as small as possible.

Damn, Warren - I never realized that the sun was shaped like a football!  This is a major discovery!  You should publish it!

By the way, any idea what a severely defocused point image looks like when vignetting is present?
Title: The Physics of Digital Cameras
Post by: Jonathan Ratzlaff on December 30, 2009, 08:05:23 pm
By the way that is a projected image, and is related to the focal length of the lens that is producing it.  You may want to look at the telescope that is producing that image to find out how much of the objective lens is covered to ensure that the screen doesn't catch fire.  There are thousands of solar reflectors around that disprove your comments.
I think this discussion is best described by the phrase, " not even wrong"

An apparently scientific argument is said to be not even wrong if it is based on assumptions that are known to be incorrect, or on theories that cannot be falsified or used to predict anything. In science and philosophy, the second meaning is known as the principle of falsifiability.

The phrase was coined by the early quantum physicist Wolfgang Pauli, who was known for his colorful objections to incorrect or sloppy thinking.[1] Rudolf Peierls writes that "a friend showed [Pauli] the paper of a young physicist which he suspected was not of great value but on which he wanted Pauli's views. Pauli remarked sadly, 'It is not even wrong.' "[2]
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on December 30, 2009, 08:11:08 pm
Quote from: WarrenMars
Here is a photo of the sun's image correctly focused onto a piece of paper.

(http://warrenmars.com/pictures/misc/sun_spots.jpg)

Note that the sun's image is NOT a small circle and it does NOT burn the paper.

The point where the rays meet may be the focal point for the lens but it is not the point at which the image is in sharp focus.

This is merely one more demonstration of your ignorance. In the configuration you show, the telescope is acting as a projector, not as a camera. The optics in a projector are designed differently from camera lenses; the imaging area is expanded to a far larger area than any camera sensor or film before it is focused. This image expansion is what prevents the image surface from burning, not the fact that the image is in focus. If you used a camera lens (which is designed to focus the image on a very small area) instead of the telescope (which is designed to focus the image on a large area), you'd discover that when the sun is in proper focus, its image will occupy the smallest possible area, and will definitely burn absorptive objects on the plane of focus.
Title: The Physics of Digital Cameras
Post by: col on January 01, 2010, 06:25:59 pm
Quote from: WarrenMars
Think the sun is in focus when you can burn paper with a magnifying glass do you? You've obviously NEVER done any solar observing using a telescope!

Warren,

It's great to see people like yourself thinking about physics and how things work. Unfortunately you are wrong on a number of very basic points, including this one. You previously said that those who did not agree should "put up or shut up", and I agree entirely. I have no interest in making insulting remarks, and your academic qualifications are irrelevant. I prefer to judge only on the basis of your claims, so let's start with your claim that the sun is not in focus when paper is burned with a magnifying glass.

Forget for the moment about using a telescope. Your "paper burning" claim has to do with the operation of a simple magnifying glass, not a telescope.

Almost every schoolboy has burned paper with a magnifying glass and, as it turns out, the smallest and hottest spot on the paper corresponds to a focussed image of the sun on the paper. This is easily shown with a ray tracing diagram, for which I refer you to any basic text book.  

Ray tracing and trigonometry lead to the derivation of the well-known lens formula. It is interesting to calculate the expected image size of the sun on the said sheet of paper, to see if the calculated prediction is in accordance with observation.

The lens formula is :-

1/u + 1/v = 1/f   where

u is the distance from the object (the sun) to the lens
v is the distance from the image to the lens
f is the lens focal length


Any units of distance can be used, such as meters, so long as the same units are used throughout. A typical magnifying glass has a focal length of 200mm, or 0.20m. So f=0.20

The distance from the sun to the earth is 150 million km, or 150E9 (m) So u=150E9

The first step in calculations is to find the distance from the lens to the image, that is, to find "v" in the above expression. As the object distance is effectively infinity, the term "1/u" is effectively zero, so "v" must equal "f". (ie, v=0.20m) In other words, for an object at infinity, the image will be located at the focal point of the lens. So far this seems roughly consistent with experience when burning paper, but let us continue, and calculate the expected image size.

The required formula (from any textbook)  is :-

Image Size = (hv)/u  where :-

h is the size of the object, in this case the diameter of the sun
h = 1.39 million km, or 1.39E9 (m)


Image size= (1.39E9 x 0.20) / 150E9
Image size = 0.00185 meters
Image size = 1.85mm


Hmmm. As an experienced paper burner, that is pretty much what I observe. With a typical magnifying lens having a focal length of 200mm, the hot spot on the paper (equals the focussed image of the sun) is around 1.8mm in diameter, and could be anywhere from around 1mm to 3mm depending on the focal length of the particular magnifying lens. The conventional optical theory really does seem to be right, does it not Warren? The ball is in your court.
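
For anyone who wants to plug in their own lens, here is the same calculation as a minimal Python sketch (same numbers as above, nothing new assumed):

Code:
# Thin-lens calculation of the sun's image size, reproducing the figures above.
f = 0.20      # focal length of the magnifying glass, metres
u = 150e9     # Earth-Sun distance, metres
h = 1.39e9    # solar diameter, metres

v = 1.0 / (1.0 / f - 1.0 / u)   # from 1/u + 1/v = 1/f; v ~ f for a distant object
image_size = h * v / u          # image size = h * v / u
print(f"image distance v = {v:.4f} m, sun image diameter = {image_size * 1000:.2f} mm")
# -> about 1.85 mm, matching the hot spot observed when burning paper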

Cheers, Colin
Title: The Physics of Digital Cameras
Post by: Slobodan Blagojevic on January 02, 2010, 05:22:53 pm
Quote from: col
... With a typical magnifying lens having a focal length of 200mm, the hot spot on the paper (equals the focussed image of the sun) is around 1.8mm in diameter...
I have no degree in physics, but what Colin is saying seems to be in line with a rule of thumb I heard a long time ago regarding photographing sunsets/sunrises: the sun image on the film/sensor will be approximately 1/100 of the focal length. In other words, if you are shooting with a 200 mm telephoto, the sun size will be approximately 2 mm. In order to fill the frame (24x36 mm), one would need a 2400 mm telephoto.
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 02, 2010, 07:54:13 pm
Quote from: col
Image size= (1.39E9 x 0.20) / 150E9
Image size = 0.00185 meters
Image size = 1.85mm

Hmmm. As an experienced paper burner, that is pretty much what I observe. With a typical magnifying lens having a focal length of 200mm, the hot spot on the paper (equals the focussed image of the sun) is around 1.8mm in diameter, and could be anywhere from around 1mm to 3mm depending on the focal length of the particular magnifying lens. The conventional optical theory really does seem to be right, does it not Warren? The ball is in your court.

Cheers, Colin
Ok, folks, it is time for me to eat some humble pie, and I am happy to admit I was wrong about the sun's image not being at the focus during magnifying glass burning. I am familiar with the thin lens equation and it seems obvious in retrospect that I should have checked it, but when one is hot on the trail of a theory one tends to pay less attention to counter evidence than one ought. Mea culpa. Thank you Colin, I shall correct my website.

The telescope projection had a magnification of about 40x which explains the size difference over the magnifying glass which actually reduces the size of distant objects. There is no problem comparing telescopes, cameras and magnifying glasses in this matter, they all produce real images.

I now see why it is that the image burns, even though everything to the eye seems to be the same brightness with and without the lens. It is because the eye itself is not at the focus but back a comfortable distance. This stepping back to allow the eye's own optics to refocus the image allows the image brightness to dissipate, returning the brightness to "normal". Someone else may like to do the maths to demonstrate why it is that it should be EXACTLY normal.

As for the question of light amplification through optics alone, I believe the above effect may have something to say about its limitations when it comes to the human eye. I appreciate your comments on the subject Jonathon, but I must say that for the time being I am not convinced. When I use my f/2.8 lens I don't see a noticeable improvement in brightness over my f/5.6 even though according to you there should be a difference of 2 STOPS. I do see some attenuation over reality in both lenses but this I tend to put down to losses in the various mirrors. Yes, I should like the opportunity to test out a nice f/0.5 prime but strangely I just can't seem to source one.

Happy New Year to those of good will!
Title: The Physics of Digital Cameras
Post by: Jeremy Payne on January 02, 2010, 08:47:18 pm
Some more interesting content from our new friend Warren Mars ...

http://www.unemployedaustralia.org/ (http://www.unemployedaustralia.org/)

http://www.unemployedaustralia.org/about/wmars.htm (http://www.unemployedaustralia.org/about/wmars.htm)


Title: The Physics of Digital Cameras
Post by: joofa on January 02, 2010, 08:57:43 pm
Quote from: Jeremy Payne
Some more interesting content from our new friend Warren Mars ...

Jeremy, there is no need for personal attacks. Warren has already admitted one of his errors. It is always appreciated when one admits mistakes in understanding on a public forum. There are so many others on this forum who just don't admit they are wrong. For proof, look no further than dpreview.com, where everybody thinks they are an expert and there is so much uninformed "camera theory" flying around ...

 Remember the 3 golden words: "I don't know."
Title: The Physics of Digital Cameras
Post by: Jeremy Payne on January 02, 2010, 09:05:31 pm
Quote from: joofa
Jeremy, there is no need for personal attacks.

I put a link to his website so we could all have some additional background information.  He put it out there ... not me ...

How is that a personal attack?
Title: The Physics of Digital Cameras
Post by: Jeremy Payne on January 02, 2010, 09:32:16 pm
Quote from: joofa
...

Just be thankful I didn't link the porn, pardon me - "Ribauld Verse" - from his other website.
Title: The Physics of Digital Cameras
Post by: Jonathan Ratzlaff on January 03, 2010, 12:47:30 am
Even when something is off the wall so to speak, it always helps to think about it a bit to see whether there is some truth in what is being stated.  This has been a fairly lively discussion as to some of the nature of optics.  Even when there are errors, it still has been an interesting exercise.

Title: The Physics of Digital Cameras
Post by: col on January 03, 2010, 06:09:13 am
Quote
The critical point here is that you can't brighten an image using optics alone, not with a lens that is designed to focus.

Quote from: WarrenMars
When I use my f/2.8 lens I don't see a noticeable improvement in brightness over my f/5.6 even though according to you there should be a difference of 2 STOPS.

The first thing to realize is that the human eye must be removed totally from the discussion. The discussion pertains to digital cameras, where a lens projects a focussed image onto a CCD detector at the image plane. Therefore when Warren talks about "brightness of an image", he is talking about the brightness of the image on the camera CCD. The units of brightness, AKA intensity, in this context are photons/second/unit_area.

As Warren has realized, we don't place our eye at the CCD image plane, and would learn nothing useful if we did, so Warren's previous observations involving looking through magnifying glasses at the sky are irrelevant. At the risk of being repetitious, the question is whether the image on a digital camera CCD can be be made brighter using optics alone.

To be even more precise, Warren apparently claims that an F2.8 lens does not produce a brighter image than an F5.6 lens, and that claim is incorrect.

Fnumber is defined as the focal length divided by the aperture diameter. Fnumbers are used precisely because for a given scene, the image intensity depends only on the Fnumber, assuming no losses in the optics. Halving the Fnumber increases the image intensity by a factor of four (2 stops), and that's all there is to it. As every photographer knows, if the Fnumber is halved while the shutter speed and ISO are left unchanged, the image will indeed be much brighter, in fact hopelessly overexposed if the exposure was correct at the higher Fnumber.
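
A quick check of that arithmetic, under the same no-losses assumption (a throwaway sketch, not anything quoted from the posts above):

Code:
import math

# Image-plane intensity scales as 1 / N^2 (lens losses ignored), so halving
# the f-number is a gain of two stops.
def stops_gained(n_slow, n_fast):
    return math.log2((n_slow / n_fast) ** 2)

print(stops_gained(5.6, 2.8))   # ~2.0 stops
print(stops_gained(5.6, 1.4))   # ~4.0 stops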

I suspect Warren will agree in hindsight that all of the above is correct, and that the image can indeed be made brighter by fitting a faster lens with smaller Fnumber.

Cheers, Colin
Title: The Physics of Digital Cameras
Post by: bjanes on January 03, 2010, 07:21:21 am
Quote from: Jonathan Wienke
The same principle is true of any number of high-end rifle scopes with large objective lenses; a good-quality scope can create an image that is brighter than looking at the subject directly. They are larger, heavier, bulkier, and far more expensive than scopes with smaller objective lenses, but if you need to shoot in dim light, there is a world of difference.
This brings up an interesting point. A scope with a larger objective collects more light, but the eye can use this property only up to a certain limit. When you look through a telescope or microscope, you place your eye at the exit pupil of the eyepiece. The exit pupil or eyepoint can be found by placing a transparent sheet of paper in front of the eyepiece and adjusting the position of the paper until you get the smallest illuminated circle. The specimen is not in focus at the eyepoint and you are looking at a virtual image (http://hyperphysics.phy-astr.gsu.edu/HBASE/geoopt/image2.html). If you place a ground glass at the eyepoint, you will simply get a circle of light with no image detail. If you move the glass back in a darkened room, you will get a real image.

With a binocular or telescopic sight, the size of the exit pupil can be found by dividing the objective diameter by the magnification. When the exit pupil is larger than the diameter of the pupil of the observer's eye, the extra light-gathering power of the instrument cannot be used. See here (http://www.nikon.com/products/sportoptics/how_to/guide/binoculars/basic/basic_05.htm). This may be what Warren is talking about.
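
A small sketch of that exit-pupil rule (the 7 mm figure for a dark-adapted eye pupil is my own assumption, not from the post above):

Code:
# Exit pupil = objective diameter / magnification. Light falling outside the
# eye's own pupil is wasted, which caps the usable aperture of the instrument.
def effective_objective_mm(objective_mm, magnification, eye_pupil_mm=7.0):
    exit_pupil_mm = objective_mm / magnification
    return min(exit_pupil_mm, eye_pupil_mm) * magnification

print(effective_objective_mm(56, 8))   # 8x56: 7 mm exit pupil, all 56 mm is usable
print(effective_objective_mm(56, 4))   # 4x56: 14 mm exit pupil, only ~28 mm is used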

If you use a magnifying glass to examine a small object, you are looking at a virtual image, but if you use it as a burning glass, you are focusing an image of the sun at the focal point of the lens. For a given focal length, the size of the imaged disc will be the same, but the intensity will vary with the square of the diameter of the lens.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 03, 2010, 01:02:55 pm
Quote from: WarrenMars
I appreciate your comments on the subject Jonathon, but I must say that for the time being I am not convinced. When I use my f/2.8 lens I don't see a noticeable improvement in brightness over my f/5.6 even though according to you there should be a difference of 2 STOPS.

The human eye quickly adapts to fluctuating light conditions, so you're not going to readily perceive a 2-stop change between lens changes. When you pull your eye from the viewfinder, it is adjusting to ambient conditions while you're changing lenses. What you need to do is a side-by-side comparison. Fortunately, there is an easy way to do this, even if you only have one camera and lens.

Simply attach an f/2.8 lens to your camera, set the aperture to 2.8, and adjust ISO and shutter speed manually to achieve proper exposure. Shoot a few frames to make sure you have it right. Then set the aperture to f/5.6, leaving all other settings the same. Shoot a few more frames, and compare the frames shot at 2.8 and 5.6. Be sure to let us know what you find.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 03, 2010, 01:10:43 pm
Quote from: bjanes
If you use a magnifying glass to examine a small object, you are looking at a virtual image, but if you use it as a burning glass, you are focusing an image of the sun at the focal point of the lens. For a given focal length, the size of the imaged disc will be the same, but the intensity will vary with the square of the diameter of the lens.

This last point (intensity varying with the square of the diameter of the lens) is what the OP has completely confused, and why it is possible to concentrate light (at least light coming from quasi-point light sources) using only optics.
Title: The Physics of Digital Cameras
Post by: dmerger on January 03, 2010, 03:09:41 pm
Jonathan, wouldn't it be easier to see the difference between f2.8 and f5.6 if you just attached an f/2.8 lens to your camera, set the aperture to f5.6 and used the DOF preview button to toggle between f2.8 and f5.6?  Assuming, of course, your camera has a DOF preview button.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 03, 2010, 03:20:06 pm
Quote from: dmerger
Assuming, of course, your camera has a DOF preview button.

I don't think his D60 has one of those...
Title: The Physics of Digital Cameras
Post by: ErikKaffehr on January 03, 2010, 03:50:12 pm
Hi,

I'd suggest that the construction of the viewfinder matters a lot here. Both Fresnel lenses and viewfinder screens are optimized for a certain aperture. For that reason the view in the viewfinder may not utilize all the light passing through the lens when fast lenses are used. For the same reason, viewfinders may be a poor tool for visualizing depth of field at large apertures.

Best regards
Erik

Quote from: Jonathan Wienke
The human eye quickly adapts to fluctuating light conditions, so you're not going to readily perceive a 2-stop change between lens changes. When you pull your eye from the viewfinder, it is adjusting to ambient conditions while you're changing lenses. What you need to do is a side-by-side comparison. Fortunately, there is an easy way to do this, even if you only have one camera and lens.

Simply attach an f/2.8 lens to your camera, set the aperture to 2.8, and adjust ISO and shutter speed manually to achieve proper exposure. Shoot a few frames to make sure you have it right. Then set the aperture to f/5.6, leaving all other settings the same. Shoot a few more frames, and compare the frames shot at 2.8 and 5.6. Be sure to let us know what you find.
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 03, 2010, 10:10:44 pm
Quote
The human eye quickly adapts to fluctuating light conditions, so you're not going to readily perceive a 2-stop change between lens changes. When you pull your eye from the viewfinder, it is adjusting to ambient conditions while you're changing lenses. What you need to do is a side-by-side comparison. Fortunately, there is an easy way to do this, even if you only have one camera and lens.

What I DID Jonathon, was to look at a scene with my left eye closed first through the viewfinder and then direct. I shuttled back and forth between these two views not giving my eye a chance to adjust for brightness. There was a slight attenuation, certainly less than 1 stop. I then changed lenses and performed the same test. I noted a slight attenuation, certainly less than 1 stop. Thus it was that I was able to compare the brightness through the viewfinder of 2 lenses with a maximum aperture difference of 2 stops. I encourage all readers of this thread to try it for themselves.

Quote
Simply attach an f/2.8 lens to your camera, set the aperture to 2.8, and adjust ISO and shutter speed manually to achieve proper exposure. Shoot a few frames to make sure you have it right. Then set the aperture to f/5.6, leaving all other settings the same. Shoot a few more frames, and compare the frames shot at 2.8 and 5.6. Be sure to let us know what you find.

Really Jonathon, just because I made an error with the sun's focus, doesn't mean that I know zero about cameras. Surely you realise that I am talking about the brightness apparent THROUGH THE VIEWFINDER. Of course I know a faster lens gives you a faster exposure. Why else would I spend money on buying an f/2.8 lens?

The question here is whether optics can give light amplification to the eye! Whether it be through a magnifying glass, a telescope or an SLR viewfinder.
Title: The Physics of Digital Cameras
Post by: col on January 04, 2010, 12:16:57 am
Quote from: WarrenMars
You may be suprised to find that current image quality is already within 2 stops of its theoretical maximum and is unlikely to improve by more than 1 stop. You may also be surpised to discover that current technology has already pushed photography 6 stops beyond what can be achieved theoretically and that the difference has been covered up with a combination of human tolererance, noise reduction and sharpening.

There may be some other results that may surprise you also. Go ahead and read my exposé (http://warrenmars.com/photography/technical/resolution/photons.htm) on this fascinating and complex subject.
http://warrenmars.com/photography/technica...ion/photons.htm (http://warrenmars.com/photography/technical/resolution/photons.htm)


I have read Warren's essay, and would like to make a few suggestions.

The first part of the essay is concerned with estimating how many (or more significantly, how few) photons are detected by each pixel. Basically, Warren starts out with known illumination levels in a variety of environments, and uses this to estimate in a very, very rough manner how many of these available photons might actually be detected by the camera CCD. This is an extremely bad way to estimate the number of detected photons. Problems include the unknown spectral distribution of the quoted illumination levels, all sorts of arguments about the efficiency of the camera optics, the detector quantum efficiency, the active area of the detector as a proportion of the total area, the effectiveness of the detector microlenses, and who knows what else. The result is that the estimates obtained for the number of detected photons are very rough indeed, almost uselessly so, and I think Warren would acknowledge this.

Fortunately there is direct data available for the number of detected photons per pixel, elegantly sidestepping all the guesswork and unknowns in Warren's approach. The total number of photons that can be detected per pixel is known as the "full well capacity", and this is shown for a large number of cameras and sensors at www.clarkvision.com. Sure, it varies somewhat from camera to camera, but broadly speaking the full well capacity per pixel of a 12MP APS-C camera is about 25,000 electrons. Each electron corresponds to a detected photon, so we are talking about a MAXIMUM of 25,000 photons being detected per pixel for your typical APS-C SLR camera. This full well capacity corresponds to the maximum brightness level that can be recorded at the base ISO, which is usually pretty close to ISO 100. At higher ISOs, the number of detected photons is proportionally less. The full well capacity is roughly proportional to the pixel area, so full-frame cameras score better with typically 70,000 electrons, and point-n-shoot cameras score much worse; in fact it is best not to talk about them.

Let's talk APS-C, where a maximum of around 25,000 photons are detected at base ISO, usually ISO 100. Warren can now more accurately re-calculate the horrific effects of his "Poisson aliasing", which I personally don't believe is a problem, so I will instead talk very briefly about what I personally believe are the implications of detecting "only" 25,000 photons.

For simplicity I will consider only shot noise (photon counting noise), as this type of noise dominates in regions of medium to high intensity. The uncertainty in the measured photon count is SQRT(25000) or 158 photons. Therefore the signal-to-noise ratio of pixels in the brightest part of the image is 25000/158 or about one part in 158. That might not sound wonderful, but it is generally accepted (for example, by DXOmark) that an SNR of better than 30dB, which is one part in 31, is "good" image quality, so I am confident in stating that one part in 158 is very good indeed, and you won't see any noise at all. That is in accordance with general observation. Take an image with an APS-C camera at 100 ISO, and you won't see noise at the brightest parts of the image.

However, that is absolute best case. The average intensity level in most images is around 18% of the peak, corresponding to 0.18x25000 = 4500 photons, leading to an uncertainty of one part in 67. That is still pretty good, and again in accordance with the observation that the noise in the "average" parts of the image is still pretty low at 100 ISO.

At 400 ISO, those 4500 photons will be only 1125 photons, and the signal/noise will be 1 part in 33.5, just a whisker above the 30dB accepted as "good" IQ. Yes, that is pretty much consistent with experience when using an APS-C camera.

At 1600 ISO the signal/noise has dropped to 1 part in 17, which is certainly visible, though not catastrophic. Keep in mind that all of these examples are in the worst case where the image is viewed at 100%. When the image is downsized, as it usually is, the effective noise is less.
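
Those figures are easy to reproduce; here is a minimal sketch using only Poisson (shot-noise) statistics, with the same 25,000-photon full well and 18% mid grey assumed above:

Code:
import math

# Shot-noise-only SNR estimate for an APS-C pixel at mid grey (18% of full well).
full_well_photons = 25000
mid_grey = 0.18

for iso in (100, 400, 1600):
    photons = full_well_photons * mid_grey * (100 / iso)
    snr = math.sqrt(photons)    # Poisson: SNR = S / sqrt(S) = sqrt(S)
    print(f"ISO {iso:5d}: {photons:6.0f} photons, SNR ~ 1 part in {snr:.0f}")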

In summary, the numbers obtained from conventional theory "add up" pretty well and match everyday experience, so I am yet to be convinced about the supposedly horrific effects of "Poisson aliasing", though my mind is always open. The number of bits used for digitizing of course has nothing to do with it, provided only that the number of bits is chosen conservatively so that quantization error is negligible compared to other noise, which is the case.

Warren's prediction of only modest improvements in the future seems likely. Shot noise is fundamentally related to the number of detected photons, and there really are only a limited number of ways to detect more photons.

On the detector front, expect gradual improvement in QE, pixel fill factor, microlenses, electronic noise and signal processing.

On the camera front, fundamentally the amount of light (= total number of photons) collected and imaged onto the sensor is set by the absolute aperture (not fnumber) of the lens, and a larger absolute aperture essentially means a physically larger lens. Therefore, if you want to collect more light, the lens will need to be larger and heavier, and this is true independently of sensor size. In practice, increasing the absolute aperture also requires scaling up the sensor size, to avoid impractically small Fnumbers. Experience strongly suggests that the cost of making larger sensors will continue to fall, but keep in mind that a larger aperture (= bigger, heavier) lens is required to collect more light onto the bigger sensor, so there are no free lunches here. How many kg of lens are you prepared to pay for and carry? The absolute aperture also sets the depth of field, so gathering more light onto the sensor fundamentally leads to a decrease in DOF, ultimately placing a practical limit on collecting more light from the lens. Warren made this point also.    

Cheers, Colin
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 04, 2010, 07:15:08 am
Quote from: WarrenMars
What I DID Jonathon, was to look at a scene with my left eye closed first through the viewfinder and then direct. I shuttled back and forth between these two views not giving my eye a chance to adjust for brightness. There was a slight attenuation, certainly less than 1 stop. I then changed lenses and performed the same test. I noted a slight attenuation, certainly less than 1 stop. Thus it was that I was able to compare the brightness through the viewfinder of 2 lenses with a maximum aperture difference of 2 stops. I encourage all readers of this thread to try it for themselves.

That still isn't a valid test, because your eye is adjusting while you are transitioning from looking through the viewfinder to looking directly at the subject. And it's adjusting even more while you are changing lenses. The human eye is completely unsuited to measuring that sort of thing. Taking exposures with a sensor that does not constantly and automatically adjust to changing light levels is the only way to gather any meaningful data and draw any meaningful conclusions. If your camera has a DOF preview button, you can see the viewfinder brightness change when you use it to switch between f/2.8 and f/5.6 simply by mounting an f/2.8 lens, setting aperture to f/5.6, and pressing the button. But accurately judging the degree of brightness by eye alone is still preposterous.

You're also ignoring the light losses inherent in the focusing screen of the camera's viewfinder and the pentamirror assembly (which has much higher light loss than the pentaprism found in higher-end cameras such as the Canon 1-series). There is a considerable difference between the brightness of a 1Ds viewfinder and a 10D viewfinder, even with the same lens mounted on both.

Quote
Really Jonathon, just because I made an error with the sun's focus, doesn't mean that I know zero about cameras. Surely you realise that I am talking about the brightness apparent THROUGH THE VIEWFINDER. Of course I know a faster lens gives you a faster exposure. Why else would I spend money on buying an f/2.8 lens?

That error is only one of several blatant and fundamental errors in your essay, as I've already pointed out. You're struggling to grasp some of the kindergarten-level fundamentals of optical physics, and yet you expect us to recognize you as an authority in more advanced areas of optical physics where your grasp of the principles involved is demonstrably even more flawed.

Quote
The question here is whether optics can give light amplification to the eye! Whether it be through a magnifying glass, a telescope or an SLR viewfinder.

It's not a question, it's an easily demonstrable fact, as proven by a simple experiment of burning things with a magnifying lens outdoors on a sunny day. Photons striking the relatively large surface area of the lens are refracted and concentrated into a much smaller area at the plane of focus. The concentration of photons is increased so far above the initial concentration that their energy is sufficient to start fires, which isn't possible without the lens. If you compare the brightness of the focused sun image to the ambient light level, it is obvious that the focused sun image is orders of magnitude brighter.

A faster lens with a larger aperture (or diameter) concentrates more photons per second to each unit of area at the focal plane than a slower lens with a smaller aperture (or physical diameter). That is why opening up the aperture (or increasing lens diameter relative to focal length) allows you to increase shutter speed. More photons/second = greater brightness and more exposure, regardless of whether the lens is directing those photons to an ant crawling in your yard, a piece of film stock, a digital sensor, a viewfinder screen, or your retina. The larger the diameter of the magnifying glass, the brighter the sun image will be, as long as focal length is not changed.
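
A rough sketch of the concentration factor involved (the 50 mm lens diameter is my own illustrative assumption; the 1.85 mm sun image follows from the 200 mm focal length discussed earlier in the thread):

Code:
# Photons collected over the lens area are delivered into the sun-image area,
# so the flux density rises by roughly the ratio of those areas.
lens_diameter_mm = 50.0   # assumed, illustrative
sun_image_mm = 1.85       # from a 200 mm focal length, as calculated earlier

concentration = (lens_diameter_mm / sun_image_mm) ** 2
print(f"~{concentration:.0f}x the unfocused flux density")   # several hundred times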

Consider this: the human eye is approximately 25mm in diameter, and normal pupil diameter (the effective aperture of the lens) varies from approximately 2-5mm. That works out to a maximum aperture (minimum f-number) of approximately f/4. Any lens faster than f/4 is capable of creating a greater concentration of photons at the focal plane than the lens found in the human eye.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 04, 2010, 02:47:50 pm
Quote from: col
The total number of photons that can be detected per pixel is known as the "full well capacity", and this is shown for a large number of cameras and sensors at www.clarkvision.com. Sure, it varies somewhat from camera to camera, but broadly speaking the full well capacity per pixel of a 12MP APS-C camera is about 25,000 electrons. Each electron corresponds to a detected photon, so we are talking about a MAXIMUM of 25,000 photons being detected per pixel for your typical APS-C SLR camera. This full well capacity corresponds to the maximum brightness level that can be recorded at the base ISO, which is usually pretty close to ISO 100. At higher ISOs, the number of detected photons is proportionally less. The full well capacity is roughly proportional to the pixel area, so full-frame cameras score better with typically 70,000 electrons, and point-n-shoot cameras score much worse; in fact it is best not to talk about them.

Well capacity refers to the number of electrons knocked across the junction of a photodetector by incoming photons, not the actual number of photons striking the detector. Not every photon adds an electron to the detector well, and in some circumstances a single photon can add more than one electron to the well. Google "multiple exciton generation" for more details on this. There is still room for improvement in the conversion rate from photons to electrons, especially considering that shorter-wavelength photons (blue) have more energy than longer-wavelength photons (red) and therefore are capable of transferring more electrons to the well.

This somewhat fuzzy relation between photon count and electron count is one source of sensor noise.
Title: The Physics of Digital Cameras
Post by: col on January 04, 2010, 06:35:15 pm
Quote from: Jonathan Wienke
Well capacity refers to the number of electrons knocked across the junction of a photodetector by incoming photons, not the actual number of photons striking the detector. Not every photon adds an electron to the detector well, and in some circumstances a single photon can add more than one electron to the well. Google "multiple exciton generation" for more details on this. There is still room for improvement in the conversion rate from photons to electrons, especially considering that shorter-wavelength photons (blue) have more energy than longer-wavelength photons (red) and therefore are capable of transferring more electrons to the well.

This somewhat fuzzy relation between photon count and electron count is one source of sensor noise.

I maintain that what I said is just fine for the intended purpose of estimating the number of detected photons, which in turn sets the shot noise, which is what Warren's original article was essentially all about. I am not an expert on CCD and CMOS sensors, but my impression is that Roger Clark (clarkvision.com) has researched the topic extensively and has a fair idea what he is talking about. Quoting from his website :-

The trapped electrons correspond to absorbed photons, and in the sensor industry, photons and electrons are interchanged in describing sensor performance.
Thus, when a digital camera reads 10,000 electrons, it corresponds to absorbing 10,000 photons. So the graphs shown in this article that are in units of electrons, like Sensor Full Capacity, also indicate how many photons the sensor pixel captured.


Unless you can find evidence to the contrary, I believe that "multiple exciton generation" is a second or third order effect, and that the noise from that effect is small compared to shot noise. I make an effort to cross check everything I write by independent means, and I'm sure it is not just coincidence that my back-of-envelope analysis gives predictions that closely match real world observations, which was my cross-check.  

I fully agree with you though that in a small number of cases, a single photon will lead to more than one electron.  


Quote
Well capacity refers to the number of electrons knocked across the junction of a photodetector by incoming photons, not the actual number of photons striking the detector. Not every photon adds an electron to the detector well ...
Unless I misunderstand you here, you seem to be implying that what matters is the number of photons striking the detector. Not so. What matters is the number of detected photons, and I maintain that the number of captured electrons is a pretty good estimate of this, and I do not know of any other estimate that would be better.

Quote
There is still room for improvement in the conversion rate from photons to electrons, especially considering that shorter-wavelength photons (blue) have more energy than longer-wavelength photons (red) and therefore are capable of transferring more electrons to the well.
Agreed, and I said the same thing when writing that gradual increases in quantum efficiency (QE) could be expected, though the QE of the best detectors is already extraordinarily high in my view (60%), so we may confidently predict that there will be no "quantum leaps" in this area of performance, as at best it is only possible to approach a QE of 100%. However, if your implication is that shorter wavelength photons have the potential to transfer more than one electron to the well per photon, then of course that would not improve shot noise.
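
To quantify the "no quantum leaps" point, a small sketch of how little headroom is left in QE alone (the 60% starting figure is the one quoted above):

Code:
import math

# Headroom if quantum efficiency rose from ~60% to an ideal 100%.
current_qe, ideal_qe = 0.60, 1.00
print(f"{math.log2(ideal_qe / current_qe):.2f} stops")   # ~0.74 of a stop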
 

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 04, 2010, 09:11:41 pm
Quote from: col
Unless you can find evidence to the contrary, I believe that "multiple exciton generation" is a second or third order effect, and that the noise from that effect is small compared to shot noise. I make an effort to cross check everything I write by independent means, and I'm sure it is not just coincidence that my back-of-envelope analysis gives predictions that closely match real world observations, which was my cross-check.  

I fully agree with you though that in a small number of cases, a single photon will lead to more than one electron.

I don't have a source to quantify how often multiple excitons are generated; I would imagine it depends heavily on the design of the photodetector and the materials it's made of. But if you google the term, there's plenty of evidence to support the existence of the phenomenon.

Quote
What matters is the number of detected photons, and I maintain that the number of captured electrons is a pretty good estimate of this, and I do not know of any other estimate that would be better.

I agree with this, to a point. My point was that electrons in the well != total photons entering the detector, due to both less-than-perfect quantum efficiency and multiple exciton generation. So while there is a good correlation between photons and electrons, it isn't precisely 1:1.

Quote
Agreed, and I said the same thing when writing that gradual increases in quantum efficiency (QE) could be expected, though the QE of the best detectors is already extraordinarily high in my view (60%), so we may confidently predict that there will be no "quantum leaps" in this area of performance, as at best it is only possible to approach a QE of 100%.

Quantum efficiency of the detector itself is only part of the puzzle. Fill factor (the sensor area actually occupied by photodetectors) and possible alternatives to the Bayer matrix (which intrinsically limits overall QE to ~33%) leave room for a few stops of additional improvement. Then there's alternative sensor designs that increase maximum well capacity for a given detector size, further improvements in read noise performance (better amplifiers and A/D converters), improved microlens design, and probably things that haven't been figured out yet...
Title: The Physics of Digital Cameras
Post by: col on January 05, 2010, 06:50:51 am
Quote from: Jonathan Wienke
I don't have a source to quantify how often multiple excitons are generated; I would imagine it depends heavily on the design of the photodetector and the materials it's made of. But if you google the term, there's plenty of evidence to support the existence of the phenomenon.

I agree with this, to a point. My point was that electrons in the well != total photons entering the detector, due to both less-than-perfect quantum efficiency and multiple exciton generation. So while there is a good correlation between photons and electrons, it isn't precisely 1:1.

Quantum efficiency of the detector itself is only part of the puzzle. Fill factor (the sensor area actually occupied by photodetectors) and possible alternatives to the Bayer matrix (which intrinsically limits overall QE to ~33%) leave room for a few stops of additional improvement. Then there's alternative sensor designs that increase maximum well capacity for a given detector size, further improvements in read noise performance (better amplifiers and A/D converters), improved microlens design, and probably things that haven't been figured out yet...


Hi Jonathon,

We appear to be in pretty good agreement.

Your point about the number of collected electrons being potentially slightly greater than the number of detected photons is interesting but, as it turns out, ultimately irrelevant to the discussion at hand.

I read Roger Clark's experimental procedure carefully, and strictly speaking, he does not actually measure the number of collected electrons, nor is there any point in doing so, as fundamentally it is the number of detected photons that dictates the shot noise. He measures the image noise at a variety of ISOs, and also measures the readout noise and dark noise. He then inputs this experimental data into a model incorporating shot noise, readout noise and dark noise, and finds that the data fits the model extremely well. Thus, confident that his model is valid, and using the well-known properties of shot noise, it is then a simple matter to infer how many photons must have been detected to obtain the measured signal-to-noise ratios. The result is that the “full well photon count” for a typical APS-C sensor is around 25,000 photons, which (fortunately) is exactly the same as I said previously. This is the maximum effective number of photons that can be faithfully detected by a single pixel, which is exactly what we wanted to know in the first place. Whether or not this is exactly the same as the number of collected electrons is irrelevant to our discussion about image noise.

Basically, what all this means is that all of my previous back-of envelope calculations and conclusions are correct and valid, except that strictly speaking I should have exclusively used the term "full well photon count", and avoided the term "full well electron capacity" altogether.
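
A sketch of the inference being described, for the shot-noise-dominated case only (Clark's full model also includes read and dark noise, which this deliberately ignores):

Code:
# If shot noise dominates, SNR = S / sqrt(S) = sqrt(S), so a measured SNR
# implies the number of detected photons directly.
def photons_from_snr(snr):
    return snr ** 2

print(photons_from_snr(158))   # ~25,000 photons at saturation
print(photons_from_snr(67))    # ~4,500 photons at mid grey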

Cheers,  Colin
Title: The Physics of Digital Cameras
Post by: Graeme Nattress on January 05, 2010, 12:21:44 pm
Quote from: WarrenMars
What I DID Jonathon, was to look at a scene with my left eye closed first through the viewfinder and then direct. I shuttled back and forth between these two views not giving my eye a chance to adjust for brightness. There was a slight attenuation, certainly less than 1 stop. I then changed lenses and performed the same test. I noted a slight attenuation, certainly less than 1 stop. Thus it was that I was able to compare the brightness through the viewfinder of 2 lenses with a maximum aperture difference of 2 stops. I encourage all readers of this thread to try it for themselves.



Really Jonathon, just because I made an error with the sun's focus, doesn't mean that I know zero about cameras. Surely you realise that I am talking about the brightness apparent THROUGH THE VIEWFINDER. Of course I know a faster lens gives you a faster exposure. Why else would I spend money on buying an f/2.8 lens?

The question here is whether optics can give light amplification to the eye! Whether it be through a magnifying glass, a telescope or an SLR viewfinder.

If you take a fast prime, hold down the "DOF Preview" button and scroll through from F1.2 upwards, viewing through the viewfinder, you'd expect to see the image get a stop darker for each stop of aperture you've closed down, right? But you don't. On my camera (Canon 1D Mk III) I see a very small amount of darkening from F1.4 through about F2.8, and it only really seems to get a stop darker per stop of aperture from about F4 onwards. This is because the viewfinder screen has an effective aperture of around F4-ish, and hence sets the effective widest aperture for the viewfinder system, and that does indeed explain what we see. It also affects the appearance of DOF at wide apertures, where the viewfinder effectively "lies" to you, showing much more to be in focus than really is.

http://www.dphotoexpert.com/2007/09/21/liv...slr-viewfinder/ (http://www.dphotoexpert.com/2007/09/21/live-view-versus-the-cheating-dslr-viewfinder/) shows quite clearly the DOF Lie effect.

Graeme
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 14, 2010, 03:41:28 am
You may be wondering why I have not responded to the later issues raised here... Well, I have been away performing real-world quantifiable experiments to verify my theorising and to answer many of the points raised above. I believe that my results will be of interest to many of you. However, before I upload my findings I have one question to ask:

At what f number is the brightness of the object equal to the brightness of the image? Or to put it more precisely:
At what f-ratio is the Luminous Emittance of an object equal to the Luminosity of its corresponding image at the sensor? (assuming no lens losses)

I had assumed that the magic ratio was f/1.0 but my own derivation from 1st principles yielded a different figure.

Come on folks: Anyone think they know the answer?
Title: The Physics of Digital Cameras
Post by: Bart_van_der_Wolf on January 14, 2010, 05:51:03 am
Quote from: WarrenMars
At what f number is the brightness of the object equal to the brightness of the image? Or to put it more precisely:
At what f-ratio is the Luminous Emittance of an object equal to the Luminosity of its corresponding image at the sensor? (assuming no lens losses)

Have a look at formulas 1, 2, 3, 4 and 11 on this page (http://toothwalker.org/optics/dofderivation.html). The magnification factor at the focal plane, including specifics of focus distance and pupil magnification to account for optical design, will give you one element of the equation you are looking for. The size of the entrance pupil is another important factor.

Here (http://www.microscopyu.com/articles/formulas/formulasimagebrightness.html) is another consideration.

You take it from there, I rely on my exposure meter/histogram when I'm out shooting. There are too many variables for a practical use of the theory you're after, including transmission characteristics due to coatings and lens element thickness and groupings which could easily account for a few percent of loss.

Cheers,
Bart
Title: The Physics of Digital Cameras
Post by: bjanes on January 14, 2010, 07:58:27 am
Quote from: BartvanderWolf
Have a look at formulas 1, 2, 3, 4 and 11 on this page (http://toothwalker.org/optics/dofderivation.html). The magnification factor at the focal plane, including specifics of focus distance and pupil magnification to account for optical design, will give you one element of the equation you are looking for. The size of the entrance pupil is another important factor.

You take it from there, I rely on my exposure meter/histogram when I'm out shooting. There are too many variables for a practical use of the theory you're after, including transmission characteristics due to coatings and lens element thickness and groupings which could easily account for a few percent of loss.

Cheers,
Bart
Bart,

Thanks for the excellent link. I am not familiar with Paul van Walree, but from his web site I see that he is an underwater acoustician, apparently with a PhD, and an avid amateur photographer.  He knows what he is talking about and brings scientific rigor to his presentation.

Another source that might be helpful is Equation 7 (http://doug.kerr.home.att.net/pumpkin/Photometry_101.pdf) on Doug Kerr's web site.
Bill
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 14, 2010, 09:45:25 am
Quote from: bjanes
Another source that might be helpful is Equation 7 (http://doug.kerr.home.att.net/pumpkin/Photometry_101.pdf) on Doug Kerr's web site.

Actually, equation 8 is the one that puts it in terms of the f-number as opposed to diameter. Rearranging it to solve Warren's question reduces to:

sqrt(pi/4) = f/0.886226925

The important thing to keep in mind is that this formula is for calculating the ratio of subject surface luminance to image plane luminance; i.e. a subject surface emitting 10 lumens/m^2 creates an image receiving 10 lumens/m^2 from the lens. This is not the same as the flux density in front of (or to the side of) the lens. The focused image of the subject will be brighter than the unfocused light from the subject not passing through the lens well before f/0.886226925.
Title: The Physics of Digital Cameras
Post by: BJL on January 14, 2010, 10:04:25 am
Quote from: WarrenMars
At what f number is the brightness of the object equal to the brightness of the image?
There is no specific f number that does it in all cases.

For example, with an SLR's viewfinder, brightness depends on factors like how efficient the VF screen is at passing light on to the viewer. Companies like www.brightscreen.com make replacement VF screens that give a brighter image than the standard-issue ones that come in the camera. Conversely, the pentamirror VFs used in cheaper SLRs usually give a dimmer image than good pentaprism VFs.

Another factor is the magnification of the VF: using lenses of the same focal length and aperture with a VF of lower magnification typically gives a smaller, brighter image, while adding a clip-on accessory VF magnifier gives a larger, dimmer image.

As a further complication, the f number of the human eye adjusts with the overall brightness of the scene in front of it, which changes the perceived brightness of parts of the scene.


No need to believe me; these are experiments that you can try.
Title: The Physics of Digital Cameras
Post by: bjanes on January 14, 2010, 10:21:50 am
Quote from: Jonathan Wienke
Actually, equation 8 is the one that puts it in terms of the f-number as opposed to diameter. Rearranging it to solve Warren's question reduces to:

sqrt(pi/4) = f/0.886226925

The important thing to keep in mind is that this formula is for calculating the ratio of subject surface luminance to image plane luminance; i.e. a subject surface emitting 10 lumens/m^2 creates an image receiving 10 lumens/m^2 from the lens. This is not the same as the flux density in front of (or to the side of) the lens. The focused image of the subject will be brighter than the unfocused light from the subject not passing through the lens well before f/0.886226925.
That's right, Jonathan. And for those who are too lazy or uninterested to do the calculations, this table shows the ratio of scene luminance to image illuminance for various f/stops:

[attachment=19449:LuminanceRatios.gif]

And as BJL pointed out, the brightness in the viewfinder will be appreciably less bright than what you would obtain in the sensor plane.

Also, the eye has a log response to luminance and the perceived brightness can be predicted by the Stevens Power Law (http://en.wikipedia.org/wiki/Stevens%27_power_law).

Note: Post edited 1/17/2010 to indicate that Ef is image illuminance, not luminance. Scene luminance is expressed in cd/m^2 (lumens per steradian per meter^2) whereas illuminance is expressed in lumens/m^2. Luminance (http://en.wikipedia.org/wiki/Luminance) is invariant in geometric optics. Readers might also want to look up étendue (http://en.wikipedia.org/wiki/Etendue). Photographic light meters measure lux (lumens/meter^2) and luminance is measured by a luminance photometer. See here (http://www.crompton.com/wa3dsp/light/lumin.html) for details.
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 14, 2010, 07:33:41 pm
Thanks for the link to Doug Kerr's page. Note the following:
Quote
If we then follow a long trail of photometric algebra (which I will spare the reader here!)...
I think before I accept his formula I should like to see his derivation.
My own derivation has the pi canceled out, thus implying a theoretical "ideal" aperture of f/0.5.

I would like to see another source for this figure though. One would think that such an important quantity would be found all over the net with a variety of derivations. Strange that Mr Kerr's site is the only one found so far.
Title: The Physics of Digital Cameras
Post by: bjanes on January 14, 2010, 09:05:00 pm
Quote from: Jonathan Wienke
Actually, equation 8 is the one that puts it in terms of the f-number as opposed to diameter. Rearranging it to solve Warren's question reduces to:

sqrt(pi/4) = f/0.886226925

The important thing to keep in mind is that this formula is for calculating the ratio of subject surface luminance to image plane luminance; i.e. a subject surface emitting 10 lumens/m^2 creates an image receiving 10 lumens/m^2 from the lens. This is not the same as the flux density in front of (or to the side of) the lens. The focused image of the subject will be brighter than the unfocused light from the subject not passing through the lens well before f/0.886226925.
Jonathan,

Each time I look at your post, it seems to have an additional qualification. Unfortunately, LuLa does not keep track of or show the editing history. Definitions are important, and we need to specify exactly what is being calculated. I'm not certain what you are trying to calculate.  As Mr. Kerr states in a footnote:

"4 Note here that since luminance and illuminance are measures of different physical
properties, the statements we sometimes hear about “what fraction of the scene
luminance ends up on the focal plane” are misguided and meaningless."
Title: The Physics of Digital Cameras
Post by: col on January 15, 2010, 06:52:22 am
Quote from: WarrenMars
At what f number is the brightness of the object equal to the brightness of the image? Or to put it more precisely:
At what f-ratio is the Luminous Emittance of an object equal to the Luminosity of its corresponding image at the sensor? (assuming no lens losses)

I had assumed that the magic ratio was f/1.0 but my own derivation from 1st principles yielded a different figure.

Come on folks: Anyone think they know the answer?

The best answer I have seen so far is :-

Quote
Note here that since luminance and illuminance are measures of different physical
properties, the statements we sometimes hear about “what fraction of the scene
luminance ends up on the focal plane” are misguided and meaningless.

Can Warren (or anyone) precisely explain what they intend the question to mean, with examples of how you might experimentally verify the answer? To put this another way, can Warren explain why he is asking this question in the first place? What physical/experimental situation do people have in mind when posing the question? What does the question really mean?

Cheers,  Colin
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 15, 2010, 08:40:30 am
Quote from: bjanes
Jonathan,

Each time I look at your post, it seems to have an additional qualification.

I edited the post to add a few additional thoughts. The formula calculates (for an ideal lens focused at infinity) what aperture is required for outgoing photon flux density measured at a point on the surface of the subject to equal incoming photon flux density of the corresponding point on the image plane. In other words, if the lens is imaging a light source emitting 1 W/m^2 of light in the visible spectrum and the lens has an aperture of f/0.886226925, the image plane will receive 1 W/m^2 of visible light energy in the area of the image plane occupied by the image of the light source.

The distinction I was trying to make is that in this situation, the photon flux density at the front element of the lens is going to be significantly less than 1 W/m^2. How much less will depend on the distance between the subject and the lens and various other factors. So while an aperture of f/0.886226925 is needed to make the flux density of photons leaving the subject equal to the flux density of photons intersecting the image plane, one can get away with a much smaller aperture if the objective is to make the flux density of photons intersecting the image plane greater than the flux density of photons intersecting the front element of the lens.

The latter condition is all that is necessary to amplify image brightness by optics alone. Therefore, if we use a magnifying glass with an aperture of f/2 to burn paper, it will work just fine. The visible-spectrum energy density (W/m^2) at the image plane is not as great as it is at the surface of the sun, but it is significantly greater than what it would be if the lens was not present. As a result, the lens can be used to ignite the paper.
Title: The Physics of Digital Cameras
Post by: col on January 15, 2010, 08:18:00 pm
Quote
Note here that since luminance and illuminance are measures of different physical
properties, the statements we sometimes hear about “what fraction of the scene
luminance ends up on the focal plane” are misguided and meaningless.

This statement is correct, but requires explanation. Firstly though, I will from here on refer only to the equivalent radiometric quantity power density, in units of W/m^2.  This is easier to think about, and also the correct and relevant quantity when talking about burning holes in paper using a simple magnifying glass to form a focused image of the sun on a sheet of paper, which is the example that I will later use.

In general, it is impossible to calculate the power density at the sensor (focal plane) as a function only of the Fnumber and surface power density at the source. The reason is that power density pays no regard to the angular distribution of light leaving the source, and is concerned only with the TOTAL amount of power per unit area. However, the amount of light collected by a distant lens forming an image of the source most definitely DOES depend on the angular distribution of light from the source, so merely knowing the total emitted power per unit area is NOT enough – you also need to know, or make some assumption about, the angular distribution of light leaving the source, and in general you do not and cannot know that angular distribution. Therefore the question as to what Fnumber corresponds to the power density at the focal plane being equal to the surface power density of the source does not (in general) have an answer, as the answer depends on unknown factors other than the Fnumber, namely on the angular distribution of the light leaving the object.

However, with all that said, it still makes sense to ask the question under very specific conditions, such as when the source is known to emit light equally in all directions, AKA isotropically. An excellent example of such an object is the sun, which is a spherical, isotropic radiator. Therefore, I will derive the correct and meaningful equation that relates the surface power density of the sun to the power density at the focused image of the sun produced by a simple magnifying glass. For simplicity, I'll quote the result first, so those not so inclined don't need to wade through the derivation.

PDs = Power density at the surface of the sun (=5900W/cm^2)
PDi = Power density at the image of the sun (W/cm^2)
F = f-number of the lens (magnifying glass), = L/D
L = focal length of the lens
D = absolute aperture diameter of the lens

PDi = PDs / (4F^2)

Therefore, an F0.5 lens (magnifying glass) would result in the power density at the focused image on the paper being the same as at the surface of the sun, which I find really cute. A typical magnifying glass has a focal length of 200mm and a diameter of 70mm, so in this case the achievable ratio is 0.25 x (70/200)^2 = 0.03, which is “only” 3% of the power density at the sun’s surface. Still not bad, and quite enough to burn paper. I note that Warren also derived the value of F0.5, though I suspect he does not realize that it is not generally applicable.
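To put rough numbers on the magnifying-glass example, here is a short Python sketch; the 5900 W/cm^2 surface figure and the 200 mm / 70 mm glass are the values quoted in this post, not independently checked.

# Magnifying-glass example from the post above; input figures are the post's, not re-derived.
PDs = 5900.0              # power density at the sun's surface, W/cm^2 (as quoted)
L = 200.0                 # focal length of the magnifying glass, mm
D = 70.0                  # aperture diameter, mm
F = L / D                 # f-number, about 2.86
PDi = PDs / (4 * F**2)    # the formula above: PDi = PDs / (4F^2)
print(F, PDi, PDi / PDs)  # ~2.86, ~180 W/cm^2, ~0.03 (about 3% of the sun's surface value)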

By far the more significant point I wish to make is that, in general, there is no formula that describes image power density at the sensor (focal plane) as a function only of the Fnumber and surface power density at the source, so people are wasting their time trying to derive and use a general formula. Remember, the formula that I have given, and that Warren appears to have independently derived, applies only to an isotropically emitting source, which is not generally the case. Maybe this is why you don't see too much about it in the textbooks?

I will present the derivation in a future posting.

Cheers,  Colin

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 15, 2010, 10:25:41 pm
Quote from: col
However, with all that said, it still makes sense to ask the question under very specific conditions, such as when the source is known to emit light equally in all directions, AKA isotropically.

The condition of isotropic emission is mentioned in Doug's web page.

Quote
I will present the derivation in a future posting.

I look forward to seeing it.
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 16, 2010, 03:07:43 am
Indeed I made the assumption that the object light source emitted isotropically. Yes, there are sources that emit mostly in one direction, but the isotropic emitter is a worst-case scenario, close enough for real-world photography and a good place to start.
I am gratified Colin that you derived the same result that I did. For the time being at least we are in a majority.

The figure f/0.5 is interesting as a limit since it is also the limit imposed by the Abbe Sine Condition. Together these limits mean that, as far as air-based photography goes, we will never get a greater energy density at the image plane than there is at the object. I don't know about you but I find this symmetry profoundly soothing.

For those that are wondering what this has to do with real world photography the answer is that it establishes the theoretical limit to tolerable noise ISO levels.
Consider a spherical light source in the distance focused onto the sensor such that its image area is exactly that of one pixel. The photon flux density at that pixel can never be greater than the flux density of the object. Since image quality is dependent on per-pixel photon flux per exposure, this sets the upper limit.

The magnifying glass example is not a counter-example: as Colin pointed out, you don't need the full energy density of the surface of the sun to burn paper!
Yes, as you reduce the focal length you condense the radiant energy of a distant object, but there is a limit to this: the Abbe Sine Condition. When you reach the state where the focal length is half the aperture, then the only way to reduce the focal length further is to reduce the aperture and then, of course, you reduce the afferent flux at the lens so the energy density of the image remains the same. You cannot achieve light amplification through optics alone.
Title: The Physics of Digital Cameras
Post by: Bart_van_der_Wolf on January 16, 2010, 04:15:44 am
Quote from: col
By far the more significant point I wish to make is that, in general, there is no formula that describes image power density at the sensor (focal plane) as a function only of the Fnumber and surface power density at the source, so people are wasting their time trying to derive and use a general formula.

Hi Colin,

Wouldn't that be solved by measuring the luminous flux incident at the aperture (entrance pupil)?

Cheers,
Bart
Title: The Physics of Digital Cameras
Post by: col on January 16, 2010, 06:34:41 am
Hi to all. As promised, the derivation for the power density at the image, as a function of the surface power density at the source, and the Fnumber, for the case of an isotropically emitting source, such as the sun.

 
PDs = Power density at the surface of the sun (=5900W/cm^2)
PDi = Power density at the image of the sun (W/cm^2)
PDlens = Power density, from the sun, at the lens (W/cm^2)
Plens = total power incident upon the lens (W)
F = f-number of the lens (magnifying glass), = L/Dlens
L = focal length of the lens = distance from lens to image
Dlens = absolute aperture diameter of the lens
Rlens = radius of the lens
r = radius of the sun
d = diameter of the sun
R = distance from the centre of the sun, to the lens

For a spherical isotropic radiator (such as the sun), the power density (W/m^2) is inversely proportional to the square of the distance from the centre of the sphere. Therefore, in our example of the sun, the power density at the lens is :-

PDlens = PDs x (r/R)^2

The total power (W) incident upon the lens is equal to the power density at the lens, multiplied by the lens area :-

Plens = PDs x (r/R)^2   x   pi x Rlens^2


The magnification of this simple lens with object at infinity is given by L/R. Therefore, the image radius is rL/R, and therefore the area of the image is :

Ai = pi x (rL/R)^2


In this example, as all of the power entering the lens appears in the image, the power density at the image plane is equal to the power entering the lens, divided by the area of the image.
 
PDi = PDs x (r/R)^2 x pi x Rlens^2 / (pi x (rL/R)^2)

PDi = PDs x Rlens^2 / L^2

PDi = PDs / (4F^2)

Therefore, an F0.5 lens is required to make the power density at the image plane equal to the surface power density at the object, for this non-general example of imaging the sun.
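For anyone who wants to check the algebra numerically, here is the same chain of steps as a Python sketch with round figures for the sun; the particular values of r and R cancel out, and the final ratio depends only on the f-number.

import math

PDs = 5900.0               # W/cm^2 at the sun's surface (value quoted above)
r = 6.96e10                # radius of the sun, cm (approximate)
R = 1.496e13               # sun-to-lens distance, cm (approximate)
L = 20.0                   # focal length of the lens, cm
Rlens = 3.5                # radius of the lens, cm
F = L / (2 * Rlens)        # f-number = L / Dlens

PDlens = PDs * (r / R)**2              # power density arriving at the lens
Plens = PDlens * math.pi * Rlens**2    # total power passing through the lens
Ai = math.pi * (r * L / R)**2          # area of the sun's image
PDi = Plens / Ai                       # power density at the image

print(PDi / PDs, 1 / (4 * F**2))       # both ~0.0306, i.e. PDi = PDs / (4F^2)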

In general, there is no exact formula that describes power density at the sensor (focal plane) as a function only of the Fnumber and surface power density at the source. All of the above algebra applies only to an isotropically emitting source, which is true for the sun, but will not be true in general.

Colin
Title: The Physics of Digital Cameras
Post by: col on January 16, 2010, 07:43:59 am
Quote from: Jonathan Wienke
Actually, equation 8 (Doug Kerr) is the one that puts it in terms of the f-number as opposed to diameter. Rearranging it to solve Warren's question reduces to:

sqrt(pi/4) = f/0.886226925

The important thing to keep in mind is that this formula is for calculating the ratio of subject surface luminance to image plane luminance; i.e. a subject surface emitting 10 lumens/m^2 creates an image receiving 10 lumens/m^2 from the lens.

Hi Jonathon,

Actually, you appear to have the photometric quantities and units slightly mixed up here.
Doug Kerr's equation 8 shows how the Illuminance (Lumens/m^2) on the image plane is a function of the Fnumber, and the Scene Luminance (Lumens/steradian/m^2).

Interesting though Doug's formula is, it does not answer Warren's question, which was how the Illuminance on the image plane is a function of Fnumber and the Luminous Emittance (Lumens/m^2) at the surface of the object.

That could explain why I got a result of F/0.5, while Doug's formula gives F/0.886.
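For reference, the relation usually written for an ideal lens focused at infinity, and which appears to be what Kerr's equation 8 expresses, is E = pi x L / (4 x F^2), with E the image-plane illuminance (lumens/m^2) and L the scene luminance (lumens/steradian/m^2). Setting E numerically equal to L then reproduces the f/0.886 figure (a one-line check, assuming that form of the equation):

import math

# E = pi * L / (4 * F**2); setting E == L numerically and solving for F:
F = math.sqrt(math.pi / 4)
print(F)   # 0.88622..., i.e. f/0.886 as quoted in this thread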

Colin
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 16, 2010, 12:15:14 pm
Quote from: WarrenMars
You cannot achieve light amplification through optics alone.

You may not be able to make a focused image with a greater flux density (W/m^2) than what is emitted by the surface of the subject, but you certainly can create a focused image that has a much higher flux density than what is present immediately in front of the front element of the lens, or than what would be present at the focal plane in the absence of the lens. In other words, a magnifying lens may not be able to create a focused image of the sun with a flux density greater than the surface of the sun, but the flux density of the focused image can be orders of magnitude greater than the flux density of the sunlight reaching the surface of the earth that is not being focused by the lens. And the greatest flux density will occur at the plane where the image of the sun is most precisely focused.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 16, 2010, 12:28:55 pm
Quote from: col
Hi Jonathon,

Actually, you appear to have the photometric quantities and units slightly mixed up here.
Doug Kerr's equation 8 shows how the Illuminance (Lumens/m^2) on the image plane is a function of the Fnumber, and the Scene Luminance (Lumens/steradian/m^2).

W/m^2 is the total energy emitted by the surface, and W/steradian/m^2 denotes the portion of that energy that reaches the aperture of the lens. I don't think confusing one with the other would account for the discrepancy between f/0.5 and f/0.886, especially when distances approach infinity. The energy passing through the lens aperture is much less than the total emitted/reflected energy emanating from the subject surface.
Title: The Physics of Digital Cameras
Post by: col on January 16, 2010, 07:11:47 pm
Quote from: WarrenMars
The figure f/0.5 is interesting as a limit since it is also the limit imposed by the Abbe Sine Condition. Together these limits mean that as far as air based photography goes we will never get a greater energy density at the image plane that there is at the object. I don't know about you but I find this symmetry profoundly soothing.

For those that are wondering what this has to do with real world photography the answer is that it establishes the theoretical limit to tolerable noise ISO levels.
Consider a spherical light source in the distance focused onto the sensor such that its image area is exactly that of one pixel. The photon flux density at that pixel can never be greater than the flux density of the object. Since image quality is dependent on per-pixel photon flux per exposure, this sets the upper limit.

Warren:
For those that are wondering what this has to do with real world photography the answer is that it establishes the theoretical limit to tolerable noise ISO levels.


No. This statement is wrong.
Warren has apparently assumed that the photon counting (shot) image noise is set and limited by the photon flux density (photons/area/second) falling on the sensor. This is not true, with the result that all of the recent discussions are irrelevant, albeit sometimes interesting in their own right. Here are the (very simple) facts of the matter.

Image shot noise is set by the total number of detected photons which, all else equal, is set by the total number of photons striking the sensor, NOT by the intensity of light striking the sensor per se.

For the same field of view and exposure time, the number of photons collected by the lens and delivered to the sensor is set by the absolute aperture diameter of the lens. Period. For those that have heard me say that many times before, I apologize.

One method of increasing absolute aperture is to build a "faster" lens, with a smaller F-number. Smaller F-numbers result in a more intense image, more photons/area/time. We all agree that there is a limit to the smallest theoretical and practical F-number and, therefore, a limit to attainable image intensity.

The other method of increasing absolute aperture (and thus improving shot noise by delivering more photons to the sensor) is to simultaneously scale up the absolute aperture and focal length, thus maintaining F-number, and then use a larger sensor to maintain the same field of view. In this case, the image intensity is not increased, but the all-important number of photons striking the sensor is increased due to the larger sensor area.
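A small sketch of that second method, with made-up numbers: scaling the focal length, aperture diameter and sensor together leaves the f-number, field of view and image intensity unchanged, but quadruples the light collected.

import math

# Hypothetical, purely illustrative configurations with the same field of view.
def light_collected(focal_mm, fnum, sensor_w_mm, sensor_h_mm):
    aperture_mm = focal_mm / fnum
    aperture_area = math.pi * (aperture_mm / 2)**2   # total light gathered scales with this
    sensor_area = sensor_w_mm * sensor_h_mm
    intensity = aperture_area / sensor_area          # relative photons per mm^2 of sensor
    return aperture_area, intensity

small = light_collected(25.0, 2.0, 18.0, 12.0)   # 25 mm f/2 on an 18x12 mm sensor
big = light_collected(50.0, 2.0, 36.0, 24.0)     # 50 mm f/2 on a 36x24 mm sensor

print(big[0] / small[0])   # 4.0 -> four times the photons delivered to the sensor
print(big[1] / small[1])   # 1.0 -> same image intensity (photons/area/time)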

Therefore, the limit to how many photons can be collected is more practical than theoretical. One practical limit is the cost, weight and bulk of physically large lenses. More light means a larger absolute aperture means a larger lens. Real simple.

The other limit is tolerable depth of field. More light means a larger absolute aperture means a poorer DOF. Again, the limit here is more practical than in the nature of a theoretical limit.

That's it in a nutshell, Warren.

Cheers, Colin  
 
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 16, 2010, 07:38:57 pm
I see Colin that your derivation followed the same basic idea as mine. Here is mine:

(http://warrenmars.com/pictures/misc/luminousity_ratio_proof.jpg)

On reflection I believe that this derivation is more generally applicable than just to single spherical isotropic light sources such as the sun.
The critical point is that the luminosity ratio is relative to the surface of the ball, NOT the imaginary light source in the centre of it. Thus the emission need not be isotropic. For this reason a ball that emits ALL its luminous flux from the hemisphere facing toward the lens will still find the above analysis applies. Pack a whole bunch of these hemispheres as closely together as you like and you have a surface that is not unlike the surface of things in the real world, such as skin, leaves, dirt etc.
Title: The Physics of Digital Cameras
Post by: col on January 16, 2010, 08:53:22 pm
Hi Warren,

I need to ask a favour of you. Read my previous post (post#92) carefully, and tell me if you disagree with anything.

Unless and until we agree on all I have said in post#92, all further discussion is pointless.

Cheers,  Colin
Title: The Physics of Digital Cameras
Post by: joofa on January 17, 2010, 02:09:38 am
Quote from: col
Warren has apparently assumed that the photon counting (shot) image noise is set and limited by the photon flux density (photons/area/second) falling on the sensor. This is not true, with the result that all of the recent discussions are irrelevant, albeit sometimes interesting in their own right. Here are the (very simple) facts of the matter.

Image shot noise is set by the total number of detected photons which, all else equal, is set by the total number of photons striking the sensor, NOT by the intensity of light striking the sensor per se.

The above is incorrect. Poisson noise is not determined by the total number of detected photons. It is determined by the average number of detected photons for the underlying distribution. Furthermore, for any given pixel in a particular image, even the sqrt of the average number of photons does not represent the actual noise on that pixel. The actual noise is the deviation from the average value. The sqrt thingy just says that on average the noise is that far from the mean value, but for a given pixel in a given image it can fluctuate, and is not given by sqrt(average photon count).

In fact, the underlying Poisson variable has a photon intensity parameter, in this case measured as photons/second/area, which together with the integration time determines the average on which noise calculations are based.
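A small NumPy simulation of the distinction being drawn here: sqrt(mean) describes the typical spread of the underlying distribution, while the error in any one pixel of any one exposure is simply its individual deviation from that mean.

import numpy as np

rng = np.random.default_rng(0)
mean_photons = 100.0                                 # average of the underlying Poisson distribution
counts = rng.poisson(mean_photons, size=1_000_000)   # one pixel, many notional exposures

print(counts.std())                  # ~10, i.e. ~sqrt(100): the average spread
print(counts[:5] - mean_photons)     # individual per-exposure errors: generally not exactly +/-10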

I had some comments on how to calculate these numbers on dpreview and you are welcome to have a look at an example. Click here. (http://forums.dpreview.com/forums/read.asp?forum=1019&message=34244294)

Joofa
Title: The Physics of Digital Cameras
Post by: col on January 17, 2010, 04:40:43 am
I wrote:
Quote
Image shot noise is set by the total number of detected photons which, all else equal, is set by the total number of photons striking the sensor, NOT by the intensity of light striking the sensor per se.

Quote from: joofa
The above is incorrect. Poisson noise is not determined by the total number of detected photons. It is determined by the average number of detected photons for the underlying distribution. Furthermore, for any given pixel in a particular image, even the sqrt of the average number of photons does not represent the actual noise on that pixel. The actual noise is the deviation from the average value. The sqrt thingy just says that on average the noise is that far from the mean value, but for a given pixel in a given image it can fluctuate, and is not given by sqrt(average photon count).

In fact, the underlying Poisson variable has a photon intensity parameter, in this case measured as photons/second/area, which together with the integration time determines the average on which noise calculations are based.

I had some comments on how to calculate these numbers on dpreview and you are welcome to have a look at an example.

Hi Joofa,

What you say about the Poisson noise being determined by the average number of detected photons for the underlying distribution is technically correct, though I think that what I said was just fine within the context of the point I was trying to get across.

I think you may have read too much into my term "detected photons". My intention here was only to make it clear that it doesn't matter a rat's arse how many photons strike the detector - what actually matters  re image noise is how many you are able to detect, on average of course. If I was sloppy enough to say that what mattered was the number of photons striking the detector, rather than the number of detected photons, I would have many people saying "that's wrong because you haven't considered QE, fill factor, microlenses etc"

If you read the entire thread, you would realize that Warren appears to be hung up on the light intensity (photons/second/area) striking the sensor, and he appears to have concluded that there is a theoretical limit to how low the shot noise could be made, because F-number limitations put an upper limit on the intensity of light striking the sensor.

The intention of my quoted sentence above was only to point out that it is not the image intensity that matters per se, but the total number of detected photons (you now know the context in which I say this), and you can collect and detect more photons with larger pixels while keeping the F-number and image intensity the same, though you still need a larger lens of course.

Basically, I don't believe that your posting in any way alters the substance or thrust of what I was saying. If you still think it does, then get back to me, and we'll thrash it out

Cheers,  Colin

This addition edited in later.
Hi again.
At first I thought you were just nitpicking, for example, making the distinction between the detected photon count, and the "true" photon count of the underlying distribution. However, I have now read your post on DPreview, and on the basis of the errors you have made there, I now tend to the view that you were not nitpicking at all, but just plain wrong!
It starts to look like we will indeed need to thrash this out.
Title: The Physics of Digital Cameras
Post by: joofa on January 17, 2010, 10:22:06 am
Quote from: col
At first I thought you were just nitpicking, for example, making the distinction between the detected photon count, and the "true" photon count of the underlying distribution. However, I have now read your post on DPreview, and on the basis of the errors you have made there, I now tend to the view that you were not nitpicking at all, but just plain wrong!
It starts to look like we will indeed need to thrash this out.

What is wrong? There is a typo there in a number that I calculated, which I have to correct, but that typo does not change the thrust of what I am saying.
Title: The Physics of Digital Cameras
Post by: col on January 17, 2010, 05:06:40 pm
Quote from: joofa
What is wrong? There is a typo there in a number that I calculated, which I have to correct, but that typo does not change the thrust of what I am saying.

Hi Joofa,

Just for now, I would like to concentrate on whether you still feel I have made any significant error in anything I said in my post #92, as that is from where this discussion started. I certainly maintain that everything I said is just fine.

Exactly which of my statements or claims do you disagree with?

Firstly though, some terminology. There is a difference between "estimated uncertainty" and "error". The error is the difference between the measured value, and the "true" value. In terms of photon counting, the "true" value is the average value you would get if the count was repeated an infinite number of times. The estimated uncertainty is a prediction of the expected error. For example, if you measure 100 photon counts, then the estimated uncertainty is SQRT(100), which is 10. In general, you do not know the true value, so the estimated uncertainty is usually all you have.

I think (but am not sure) that you have a fundamental problem with my statement that the shot noise in an image is a function of the number of detected photons, rather than the intensity (photons/second/area) as such, so I will attempt to spell that out in detail with an example.

Let the "true" intensity (photons/second/area) be 100 photons/mm^2/second, and let this intensity be perfectly uniform over the entire detector. By "true", I mean the intensity you would measure if you had the luxury of measuring a very large number of photons.

Let the area of each pixel be 1mm^2, and let the exposure time be 1 second, so the "true" photon count is 100 photons/pixel.

By any meaningful measure, the uncertainty in the photon count for each pixel is 10, so the image SNR is 100/10, or 10. There is nothing to be gained (IMHO) by pointing out that the actual error in the count for some pixels will not be 10.

To improve the image SNR, we need to detect more photons, on average, of course. To take a specific example, if we wish to double the SNR, we need to detect 4 times as many photons. All my post#92 is saying, is that there are many ways by which we could increase the number of detected photons, and the improvement in image SNR would be the same in each case. For example :-

(1) Increase the detector QE by a factor of 4
(2) Increase the image intensity by a factor of 4, for example by fitting a lense with smaller Fnumber
(3) Double the sensor dimensions (4 times the area), keep Fnumber the same, and maintain FOV by doubling focal length

All of these will have an identical effect of improving image SNR by a factor of two. Agreed or not?
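As a quick numerical check of that claim, here is a NumPy sketch treating photon detection as a pure Poisson process (read noise and other sources ignored); any route that quadruples the mean detected count doubles the SNR.

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000   # notional repeated exposures of one pixel

def snr(mean_detected_photons):
    counts = rng.poisson(mean_detected_photons, size=n)
    return counts.mean() / counts.std()

base = snr(100)   # ~10 for 100 detected photons on average
for route in ("4x QE", "4x image intensity", "4x pixel area via a larger sensor"):
    # each route delivers 4x the mean detected photons per pixel
    print(route, snr(400) / base)   # each ~2.0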

Cheers,  Colin


Title: The Physics of Digital Cameras
Post by: WarrenMars on January 17, 2010, 06:26:04 pm
Colin, you seem to have strangely misunderstood me.

Quote
If you read the entire thread, you would realize that Warren appears to be hung up on the light intensity (photons/second/area) striking the sensor, and he appears to have concluded that there is a theoretical limit to how low the shot noise could be made, because F-number limitations put an upper limit on the intensity of light striking the sensor.

The intention of my quoted sentence above was only to point out that it is not the image intensity that matters per se, but the total number of detected photons (you now now the context in which I say this) and you can collect and detect more photons with larger pixels while keeping the F-number and image intensity the same, though you still need a larger lens of course.
When you say "total number of detected photons" I will assume you mean per pixel. Surely you must see that the number of detected photons per pixel per exposure is proportional to the light intensity at that pixel.

Since it is "photons/second/area" you can increase your photon count by increasing the pixel area or the exposure. However exposure is constrained by other factors so it is out. Since the light intensity at the pixel is dependent on the f-stop and the source intensity you can also increase the photon count by increasing the aperture or by lighting the source, say with flash for example.

Any of these techniques will increase the photon count per pixel and hence decrease the Poisson noise.

Increasing the pixel pitch can be achieved by 1) reducing the number of pixels for a given sensor size, a good move, but you can't go too low else you have insufficient resolution for your final image 2) increasing the sensor size for a given number of pixels, fine except that it makes larger cameras with shallower DoF. Lighting the scene is fine but is not always possible or artistically desirable. Increasing aperture is 1) achieved at the expense of DoF and 2) eventually limited by the Abbe Sine Condition. Such are the limiting factors in minimising Poisson noise.

Title: The Physics of Digital Cameras
Post by: joofa on January 17, 2010, 06:42:02 pm
Quote from: col
Just for now, I would like to concentrate on whether you still feel I have made any significant error in anything I said in my post #92, as that is from where this discussion started. I certainly maintain that everything I said is just fine.

Hi Col,

You seem to be going back and forth between the errors you thought you found in my dpreview post and your post # 92.

Quote from: col
Exactly which of my statements or claims do you disagree with?

That the Poisson noise is the sqrt of total number of detected photons.  This is a misunderstanding.

Quote from: col
I think (but am not sure) that you have a fundamental problem with my statement that the shot noise in an image is a function of the number of detected photons, rather than the intensity (photons/second/area) as such, so I will attempt to spell that out in detail with an example.

Let the "true" intensity (photons/second/area) be 100 photons/mm^2/second, and let this intensity be perfectly uniform over the entire detector.
Let the area of each pixel be 1mm^2, and let the exposure time be 1 second, so the "true" photon count is 100 photons/pixel.

It is interesting that you are now adopting the photon intensity in photons/area/second, whereas you chided Warren for that, and I repeat it here:

Quote from: col
Warren has apparently assumed that the photon counting (shot) image noise is set and limited by the photon flux density (photons/area/second) falling on the sensor. This is not true, ...

Quote from: col
By any meaningful measure, the uncertainty in the photon count for each pixel is 10, so the image SNR is 100/10, or 10. There is nothing to be gained (IMHO) by pointing out that the actual error in the count for some pixels will not be 10.

There is. First of all, there is the correct interpretation of the theory. Secondly, suppose you are denoising images; then the point is to develop an appreciation of the image's correlation structure in determining SNR.

Quote from: col
To improve the image SNR, we need to detect more photons, on average, of course. To take a specific example, if we wish to double the SNR, we need to detect 4 times as many photons. All my post#92 is saying, is that there are many ways by which we could increase the number of detected photons, and the improvement in image SNR would be the same in each case. For example :-

(1) Increase the detector QE by a factor of 4
(2) Increase the image intensity by a factor of 4, for example by fitting a lense with smaller Fnumber
(3) Double the sensor dimensions (4 times the area), keep Fnumber the same, and maintain FOV by doubling focal length

All of these will have an identical effect of improving image SNR by a factor of two. Agreed or not?

As long as you maintain a distinction between actual and average values, some of the above can be sorted out. However, we are back to the original proposition here. Take our example where the average count for a given pixel is 100. Suppose you are able to predict/measure with certainty that the average value is now 400. Again, that does not mean that the SNR for the same pixel is now up by a factor of 2 in a given image. What Poisson statistics says is that if you acquire a large number of images (i.e., not a single image) under the same conditions, then the average SNR for that pixel is up by a factor of 2.
Title: The Physics of Digital Cameras
Post by: col on January 17, 2010, 07:18:38 pm
Quote from: WarrenMars
Colin, you seem to have strangely misunderstood me.


When you say "total number of detected photons" I will assume you mean per pixel. Surely you must see that the number of detected photons per pixel per exposure is proportional to the light intensity at that pixel.

Since it is "photons/second/area" you can increase your photon count by increasing the pixel area or the exposure. However exposure is constrained by other factors so it is out. Since the light intensity at the pixel is dependent on the f-stop and the source intensity you can also increase the photon count by increasing the aperture or by lighting the source, say with flash for example.

Any of these techniques will increase the photon count per pixel and hence decrease the Poisson noise.

Increasing the pixel pitch can be achieved by 1) reducing the number of pixels for a given sensor size, a good move, but you can't go too low else you have insufficient resolution for your final image 2) increasing the sensor size for a given number of pixels, fine except that it makes larger cameras with shallower DoF. Lighting the scene is fine but is not always possible or artistically desirable. Increasing aperture is 1) achieved at the expense of DoF and 2) eventually limited by the Abbe Sine Condition. Such are the limiting factors in minimising Poisson noise.

Hi Warren,

Seems like we are in pretty good agreement then, which is great  

However, increasing absolute aperture (which is what matters re the number of photons dumped on the sensor, and thus shot noise) is not "eventually limited by the Abbe Sine Condition", but you would know that already if you had read and agreed with my post# 92.

So, yet again, I ask you to tell me if you agree with everything I said in post#92? A simple yes or no would make my life so much easier in future discussion ....

Cheers, Colin

PS. When I say "total number of detected photons", I actually mean over the entire detector, though for simplicity you can take me to mean per pixel. If you do a proper image noise comparison at equivalent resolution, as for example is done by DXOmark, then a good case can be made that what actually matters is the total number of detected photons for the entire image, independent of the size or number of pixels. However, that is a big and possibly controversial topic that I don't want to get into right now, so by all means take me to mean "total number of detected photons per pixel", especially in your yes/no answer as to whether you agree with everything in post# 92 ....


Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 17, 2010, 09:49:43 pm
Quote from: WarrenMars
Yes, as you reduce the focal length you condense the radiant energy of a distant object, but there is a limit to this: the Abbe Sine Condition. When you reach the state where the focal length is half the aperture, then the only way to reduce the focal length further is to reduce the aperture and then, of course, you reduce the afferent flux at the lens so the energy density of the image remains the same. You cannot achieve light amplification through optics alone.

Actually you can, but not in the traditional context of light from a subject passing through a lens and being focused on a plane somewhere outside the lens. Consider the following thought experiment:

Suppose you have a sphere of transparent material having a refractive index of 2.37 and a radius of 2.37cm. One of the corollaries of Snell's Law (http://en.wikipedia.org/wiki/Snell's_law) is that all photons that refract into the sphere will pass through a region in the center of the sphere having a radius equal to (R * I / A), where R is the radius of the sphere, A is the speed of light in the sphere's ambient environment, and I is the speed of light within the sphere. If the sphere is in a vacuum, this can be simplified to (radius of sphere) / (refractive index of sphere). The following diagram illustrates this:

[attachment=19523:HemiLens.png]

The blue lines represent photon paths, some of which refract into the sphere, and some of which bypass the sphere's surface. But as you can see, all of the photons that enter the sphere pass through a region in the center of the sphere whose radius is inversely proportional to the refractive index of the sphere. If we hollow out this 1cm region in the center of the sphere and optically bond an absorptive coating to this newly-created inner surface, all photons that enter the sphere will intersect this inner absorptive surface. Assuming an isotropic distribution of photons intersecting the outer surface of the sphere, the photon flux density at this inner surface will be the flux density at the outer surface multiplied by the square of the refractive index of the sphere (the ratio of the area of the outer surface of the sphere to the area of the inner surface of the sphere), minus reflective losses (photons that reflect off the surface of the sphere instead of refracting into the sphere). These losses can be calculated by Fresnel's equations (http://en.wikipedia.org/wiki/Fresnel_equations).

For a lens having a refractive index of 2.37, these losses are approximately 29.49% (assuming an isotropic distribution of photons intersecting the outer surface of the sphere). The following diagram shows this visually:

[attachment=19524:ABPower.png]

The red line represents total power isotropically distributed relative to the surface of the sphere. The green line represents the percentage of photons that will refract into the surface of the sphere instead of reflecting off the surface of the sphere, given a refractive index of 2.37. The black line is the product of the red line and the green line. The area below the red line graphically represents the total energy of the photon flux at the sphere's outer surface. The area below the black line represents the portion of the total energy that actually enters the sphere, and the area between the black line and the red line represents energy that reflects off the surface of the sphere.

Thus, if the isotropic photon flux density at the outer surface of the sphere is 100 W/m^2, the photon flux density at the inner surface of the sphere will be:

100 * 2.37^2 * (1 - 0.2949) = 396.02 W/m^2
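A quick geometric check of the claim that every refracted ray passes within R/n of the centre, plus the flux arithmetic as stated above (a sketch assuming vacuum outside the sphere; the 29.49% Fresnel-loss figure is taken from the post, not recomputed here):

import math

def closest_approach(b, R, n):
    # Closest approach to the sphere's centre for a ray entering at impact parameter b
    # (perpendicular distance from the centre to the incoming ray); sphere radius R, index n.
    theta_i = math.asin(b / R)                   # angle of incidence at the surface
    theta_r = math.asin(math.sin(theta_i) / n)   # Snell's law, ambient index 1
    return R * math.sin(theta_r)                 # distance of the refracted chord from the centre

R, n = 2.37, 2.37
worst = max(closest_approach(R * k / 1000, R, n) for k in range(1001))
print(worst)                            # -> 1.0 cm, i.e. R/n, as claimed
print(100 * n**2 * (1 - 0.2949))        # ~396 W/m^2, reproducing the arithmetic above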

The astute observer will note that this has some interesting ramifications with regard to thermodynamics and equilibrium.
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 18, 2010, 07:18:48 pm
Quote
a good case can be made that what actually matters is the total number of detected photons for the entire image, independent of the size or number of pixels.
I don't agree with this position. Yes, you can make a case, but not, in my opinion, a good one.
I don't know what your case would be, but if I were given the job of submitting the case it would be that: on a given screen view or print, the noisiness of small pixels will be canceled out by the pixel binning required to reduce the resolution to that of larger pixels.

Yes, pixel binning improves the per pixel quality of an image, I'm sure we can all see that, but NOT to the extent of a shot that is taken with pixels of that size in the original. Fuji are one camp that is trying this on with their EXR sensor, but the real-world evidence is fooling no one. Any serious photographer prefers a moderate number of large pixels to a large number of moderate pixels. You can hear the cry for megapixel reduction in every forum and review site in the world.

As someone who has used a 5MP compact and a 10MP compact, there is no doubt that I will take the 5MP job EVERY time, pixel binning or no! Even at 2MP image resolution the shots are obviously superior.

As for your thought experiment Jonathan: I'm not sure what you are trying to achieve. You CAN achieve greater energy density than is permissible by the Abbe Sine Condition simply by moving the image plane to the point where the rays converge, a point that is NOT, as a general rule, at the point where the image is in focus. Even for the sun this point is a VERY small amount closer to the lens than the image. For objects that we normally photograph this point is substantially closer than the image. Yes, the energy density is greater at this point, but the image is not in sharp focus, at least not in focus as the photographer knows it. I believe I made this point at the beginning of all this...

You can also alter the Abbe Sine Condition by altering the refractive index of the external medium. You can do microscopy in high-RI oil, for example, allowing a considerable decrease in the minimum f-number, but we don't live in an oil bath, so for the real-world photographer such considerations are irrelevant.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 18, 2010, 08:34:21 pm
Quote from: WarrenMars
As for your thought experiment Jonathon: I'm not sure what you are trying to achieve. You CAN achieve greater energy density than is permissible by the Abbe Sine Condition simply by moving the image plane to the point where the rays converge, a point that is NOT, as a general rule, at the point where the image is in focus.

The primary application I have in mind for this doesn't involve imaging, although there is at least one instance where this is relevant to imaging--the microlenses over photodetectors. The "real" application involves thermal emission and absorption that occurs naturally in the infrared region. The general idea is to take the photons thermally emitted by a large blackbody surface (surface A, the outer black surface in the diagram above) and use refraction to direct the majority of those photons to a second blackbody surface having a much smaller area (surface B, the inner black surface in the diagram). If the blackbody surfaces are thermally insulated from each other to prevent heat transfer via convection or conduction, then according to Stefan-Boltzmann's Law (http://en.wikipedia.org/wiki/Stefan–Boltzmann_law), the increased photon flux density at surface B (compared to surface A) will cause surface B's temperature to rise above the temperature of surface A until the energy thermally emitted by surface B is equal to the energy being absorbed by surface B.

By heatsinking surface A to the ambient environment and placing a thermocouple between surface A and surface B, it is possible to create a device that absorbs thermal energy from its surroundings and converts the absorbed energy into a small amount of electricity. It's not creating or destroying energy, but it does have the novel ability to re-use existing energy (previously considered to be useless/unusable) an indefinite number of times.
Title: The Physics of Digital Cameras
Post by: joofa on January 19, 2010, 12:10:32 am
Quote from: Jonathan Wienke
The primary application I have in mind for this doesn't involve imaging, .....

So what is next Jonathan, a Flux-Capacitor?  
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 19, 2010, 01:04:34 am
Quote from: joofa
So what is next Jonathan, a Flux-Capacitor?  

No, I was thinking more along the lines of a battery replacement for watches, PDAs, cell phones, pacemakers, etc. that would absorb and convert enough ambient thermal energy to electricity to power the device indefinitely, as long as the device's ambient environment was above a minimum temperature. If you lived/worked in a remote area outside the power grid, having a cell phone or GPS that could operate continuously without conventional recharging would be pretty handy, even if it meant paying a hefty price premium when making the initial purchase. For medical implants, eliminating the need for a plutonium-based nuclear generator (the current state of the art for pacemakers) would mean not having to have radioactive material implanted inside your body, and no periodic operations (with the attendant risk of infection and other complications) to replace batteries. Even if the electrical output was only a fraction of a watt, there are plenty of commercial applications where a hefty price increase could be easily justified by the increase in convenience/usability.

How much extra would you pay for a cell phone that never needed to be plugged in to a charger?
Title: The Physics of Digital Cameras
Post by: Daniel Browning on January 19, 2010, 02:40:05 am
Quote from: WarrenMars
Yes, pixel binning improves the per pixel quality of an image, I'm sure we can all see that, but NOT to the extent of a shot that is taken with pixels of that size in the original.

That is always incorrect for tonal levels dominated by photon-shot noise (i.e. almost all of them at low ISO). The only time when there is even a possibility of advantage for large pixels is in tonal levels where read noise contributes significantly (e.g. low light). But even then, the preponderance of actual shipping cameras demonstrate that smaller pixels are the same or better. The only tenable position is that they would have been even better at read-noise-dominated tonal levels if the pixel size was larger.

Quote from: WarrenMars
Any serious photographer prefers a moderate number of large pixels to a large number of moderate pixels. You can hear the cry for megapixel reduction in every forum and review site in the world.

Most of those people are making huge mistakes in their image comparison/analysis.

For example, one of the most common positions is that because cameras with small pixels tend to produce noisier images, the small pixels must be the cause.

Obviously, there is a logical error in that: correlation is not causation. As you know, the reality is that it is not the small pixels that cause the noise, but small sensors.

Another common position is that smaller pixels have more noise due to collecting fewer photons. This is plainly not the case, as QE per area is generally higher in smaller pixels, or at worst the same (this is thanks to microlenses).
Title: The Physics of Digital Cameras
Post by: AlexB2010 on January 21, 2010, 06:56:15 am
The noise level is increased in low light; in good light at base ISO, the level of noise even in small sensors is low. The noise in an image is more pronounced in dark areas, so picture quality is not constant across all zones. When a camera is working with signal amplification, the signal-to-noise ratio will influence the output most; the raw number of photons captured is only a small part of the equation, since most noise comes from the amplified analog signal before the A/D converter. Small sensors allow use of a CCD architecture that is more efficient than CMOS. It is obvious that the sensor architecture has a maximum theoretical performance based on the physical characteristics of light, and a 1-stop increase in real light-conversion performance would be a huge increase. Setting aside the gap between real sensor performance and cosmetic advertising, there is little room for improvement. So the particularities of photons are not the only constraint on the noise level in digital photos.
A question that arises is: what is the picture-quality difference at base ISO?
Best,
Alex
Title: The Physics of Digital Cameras
Post by: col on January 22, 2010, 05:23:51 am
I wrote:
Quote
a good case can be made that what actually matters is the total number of detected photons for the entire image, independent of the size or number of pixels.


Quote from: WarrenMars
I don't agree with this position. I don't know what your case would be, but if I were given the job of submitting the case it would be that: on a given screen view or print, the noisiness of small pixels will be canceled out by the pixel binning required to reduce the resolution to that of larger pixels.

Yes, pixel binning improves the per pixel quality of an image, I'm sure we can all see that, but NOT to the extent of a shot that is taken with pixels of that size in the original.

Hi Warren,

Fortunately my statement is not a matter of opinion, but of simple maths.
We are talking here about image noise caused by photon counting statistics.

Consider two cameras, one with 4MP  and the other 16MP, and each detects the same total number of photons, as per my statement.

To make any fair or meaningful comparison of image noise, the resolution of both images must be made equal. In this example, the resolution is made identical by binning groups of 4 pixels on the 16MP camera, in which case the number of photons detected by each pixel in the 4MP camera will be the same as for each bin of 4 pixels on the 16MP camera, and therefore the SNR of the two (equal resolution) images will also be the same. End of story. My statement stands correct.
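For anyone who wants to see the arithmetic played out, here is a small Monte Carlo sketch (NumPy, photon shot noise only, ideal pixels with no margin or read-noise losses):

import numpy as np

rng = np.random.default_rng(2)
photons_per_large_pixel = 400   # same total photons over the same patch of sensor

# 4MP-style camera: one large pixel collects all 400 photons
large = rng.poisson(photons_per_large_pixel, size=1_000_000)

# 16MP-style camera: four small pixels of 100 photons each, binned 2x2 afterwards
small_binned = rng.poisson(photons_per_large_pixel / 4, size=(1_000_000, 4)).sum(axis=1)

print(large.mean() / large.std())                # ~20
print(small_binned.mean() / small_binned.std())  # ~20: same SNR at equal resolution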

Cheers,  Colin
Title: The Physics of Digital Cameras
Post by: col on January 22, 2010, 06:41:52 am
Quote from: Jonathan Wienke
The primary application I have in mind for this doesn't involve imaging, although there is at least one instance where this is relevant to imaging--the microlenses over photodetectors. The "real" application involves thermal emission and absorption that occurs naturally in the infrared region. The general idea is to take the photons thermally emitted by a large blackbody surface (surface A, the outer black surface in the diagram above) and use refraction to direct the majority of those photons to a second blackbody surface having a much smaller area (surface B, the inner black surface in the diagram). If the blackbody surfaces are thermally insulated from each other to prevent heat transfer via convection or conduction, then according to Stefan-Boltzmann's Law (http://en.wikipedia.org/wiki/Stefan–Boltzmann_law), the increased photon flux density at surface B (compared to surface A) will cause surface B's temperature to rise above the temperature of surface A until the energy thermally emitted by surface B is equal to the energy being absorbed by surface B.

By heatsinking surface A to the ambient environment and placing a thermocouple between surface A and surface B, it is possible to create a device that absorbs thermal energy from its surroundings and converts the absorbed energy into a small amount of electricity. It's not creating or destroying energy, but it does have the novel ability to re-use existing energy (previously considered to be useless/unusable) an indefinite number of times.

Your idea is fascinating, Jonathon.

To better understand your proposal, I would like to ask the following. Consider an enclosed room or box, where all the walls are held at the same temperature. You can paint the inside surfaces black if you like. Can your proposed electricity-producing-device work inside this room? You may put anything you like in the room, except of course, heat or energy sources.

Colin
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 22, 2010, 01:14:30 pm
Quote from: col
Your idea is fascinating, Jonathon.

To better understand your proposal, I would like to ask the following. Consider an enclosed room or box, where all the walls are held at the same temperature. You can paint the inside surfaces black if you like. Can your proposed electricity-producing-device work inside this room? You may put anything you like in the room, except of course, heat or energy sources.

It will, with the following caveat: The load powered by the device must be in the same isolated environment as the device. If the electrical energy produced by the device is allowed to leave the isolated environment, then the isolated environment's temperature will decrease until the device stops functioning. But if all of the electrical energy output is used within the isolated environment, then the heat energy given off by the electrical load (say an incandescent light bulb), either directly or indirectly, will exactly match the heat being absorbed by my proposed device, and the cycle can repeat an indefinite number of times.

My proposed device cannot create new energy, it can only convert existing heat energy from an unusable to a usable form. I have a PowerPoint presentation that explains how it works in more detail posted here (http://www.visual-vacations.com/physics/PhotonTrap.pps), as well as a PDF version (http://www.visual-vacations.com/physics/PhotonTrap.pdf). The presentation covers the underlying math and physics in detail, including a detailed section covering why this isn't a violation of the First or Second Law of Thermodynamics. Some experiments I've conducted that appear to verify the theory are also covered.

I'd be interested in feedback on the presentation, particularly regarding the underlying math and physics principles.
Title: The Physics of Digital Cameras
Post by: col on January 22, 2010, 03:11:40 pm
I wrote:
Quote
To better understand your proposal, I would like to ask the following. Consider an enclosed room or box, where all the walls are held at the same temperature. You can paint the inside surfaces black if you like. Can your proposed electricity-producing-device work inside this room? You may put anything you like in the room, except of course, heat or energy sources.


Quote from: Jonathan Wienke
It will, with the following caveat: The load powered by the device must be in the same isolated environment as the device. If the electrical energy produced by the device is allowed to leave the isolated environment, then the isolated environment's temperature will decrease until the device stops functioning. But if all of the electrical energy output is used within the isolated environment, then the heat energy given off by the electrical load (say an incandescent light bulb), either directly or indirectly, will exactly match the heat being absorbed by my proposed device, and the cycle can repeat an indefinite number of times.

My proposed device cannot create new energy, it can only convert existing heat energy from an unusable to a usable form. I have a PowerPoint presentation that explains how it works in more detail posted here (http://www.visual-vacations.com/physics/PhotonTrap.pps), as well as a PDF version (http://www.visual-vacations.com/physics/PhotonTrap.pdf). The presentation covers the underlying math and physics in detail, including a detailed section covering why this isn't a violation of the First or Second Law of Thermodynamics. Some experiments I've conducted that appear to verify the theory are also covered.

I'd be interested in feedback on the presentation, particularly regarding the underlying math and physics principles.

I suspect you did not understand exactly what I said, which is that the walls of the room are held at the same (constant) temperature. I believe that removes your caveat. According to your claims, electrical energy could therefore leave the room, which would attempt to cool the walls but, as I stated that the wall temperature is held constant, there would be a flow of energy from the outside environment into the walls, to maintain the walls at constant temperature. Thus no violation of the first law.

Did I get that right?

Colin

PS. I have not read your presentation as yet, but I will. Interesting stuff. How many people have you run this idea past?




Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 22, 2010, 05:42:22 pm
Quote from: col
I suspect you did not understand exactly what I said, which is that the walls of the room are held at the same (constant) temperature. I believe that removes your caveat. According to your claims, electrical energy could therefore leave the room, which would attempt to cool the walls but, as I stated that the wall temperature is held constant, there would be a flow of energy from the outside environment into the walls, to maintain the walls at constant temperature. Thus no violation of the first law.

Oops. You are correct. As long as energy from outside the room is available to replace the heat energy absorbed by the device inside the room and keep the wall temperature constant, then electrical energy output by the device can be used wherever you like. From a thermodynamic perspective, though, it is important for the device to be able to work indefinitely even in a completely isolated environment containing a finite amount of energy. IMO, this is the key difference between my proposal and everything else energy-related I've looked at--my device can operate indefinitely, even when starting out in a completely isolated isothermal environment (no temperature difference between one point and another), as long as all energy output by the device remains within the isolated environment (or closed system).

Quote
How many people have you run this idea past?
Several members of my family so far. Only one real skeptic so far thinks it's impossible, but after two years, he has yet to do a rigorous examination of the math and physics and show me anywhere I've misplaced a decimal point or misapplied an equation. He has an excuse though, he is currently deployed and commanding an air base in Iraq. This is actually the first time I've let the idea out in the cold cruel world...
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 22, 2010, 05:47:14 pm
Quote
To make any fair or meaningful comparison of image noise, the resolution of both images must be made equal. In this example, the resolution is made identical by binning groups of 4 pixels on the 16MP camera, in which case the number of photons detected by each pixel in the 4MP camera will be the same as for each bin of 4 pixels on the 16MP camera, and therefore the SNR of the two (equal resolution) images will also be the same. End of story. My statement stands correct.
Although it sounds fine in theory, in the real world  there are various problems with the pixel binning solution.

The main problem is that there is always some loss at the margins of the pixel. You can talk about micro lenses that can bend the light around the depletion zone, but 1) they ain't gonna bend all the light, 2) there's still gonna be appreciable loss at the micro lens boundaries. In your example above, if the small pixels were square with side length 1, then the large pixel would have a boundary length 8 and the 4 small pixels a total boundary length 16. The margin losses inherent in the pixel binning are then DOUBLE those of the unbinned. Since the width of pixel margins is more or less constant, the smaller the pixels the more significant the margin losses. When you're looking at pixel pitches of 3µm² the area taken up by the cell margins is a significant fraction of the total cell area. It follows that the number of photons detected by the bin is significantly less than those detected at the large pixel. Hence photon noise is greater at the bin.
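To put rough numbers on that constant-margin-width assumption, here is a toy fill-factor calculation (the 0.5 µm dead border is invented purely for illustration, and whether the margin width really stays fixed as pixels shrink is questioned a couple of posts further down):

Code
def fill_factor(pitch_um, margin_um=0.5):
    # Fraction of each cell that is light-sensitive, assuming a dead border of
    # fixed width around every photosite (toy model, hypothetical numbers).
    active = max(pitch_um - 2.0 * margin_um, 0.0)
    return (active / pitch_um) ** 2

for pitch in (9.0, 6.0, 3.0, 1.5):
    print(f"{pitch} um pitch -> fill factor {fill_factor(pitch):.2f}")
    # prints roughly 0.79, 0.69, 0.44 and 0.11 respectively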

Due to the relatively low electron counts in the small cells other noise sources such as thermal noise and readout error become more significant, in the above example: 4 times more significant in fact!

Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 22, 2010, 05:59:11 pm
Quote from: WarrenMars
Since the width of pixel margins is more or less constant, the smaller the pixels the more significant the margin losses.

This is where you're going off-base--assuming equal margin width. If you have the ability to shrink the physical size of the other parts of the sensor (microlenses, photodetectors, etc.), then you can generally shrink the margin width as well.
Title: The Physics of Digital Cameras
Post by: bjanes on January 22, 2010, 06:23:55 pm
Quote from: col
Fortunately my statement is not a matter of opinion, but of simple maths.
We are talking here about image noise caused by photon counting statistics.

Consider two cameras, one with 4MP  and the other 16MP, and each detects the same total number of photons, as per my statement.

To make any fair or meaningful comparison of image noise, the resolution of both images must be made equal. In this example, the resolution is made identical by binning groups of 4 pixels on the 16MP camera, in which case the number of photons detected by each pixel in the 4MP camera will be the same as for each bin of 4 pixels on the 16MP camera, and therefore the SNR of the two (equal resolution) images will also be the same. End of story. My statement stands correct.

Cheers,  Colin
Binning can be done in hardware with CCDs. With 2x2 binning one collects 4 times as many photoelectrons with only one read noise, as described here (http://www.photomet.com/learningzone/binning.php). However, if binning is done in software post capture, there are 4 read noises. Until recently, such hardware binning could only be done with monochrome sensors, but the new Sensor+ technology from Phase One allows hardware binning in color. AFAIK no current CMOS sensors can perform hardware binning. You reduce shot noise, but read noise is not affected.
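A minimal numerical sketch of that difference, with made-up numbers (100 e- of signal per 2x2 bin and 5 e- RMS per read are hypothetical, chosen to exaggerate the effect in deep shadow):

Code
import numpy as np

rng = np.random.default_rng(0)
n_bins = 200_000
signal = 100        # photoelectrons per 2x2 bin (hypothetical deep-shadow level)
read_noise = 5.0    # e- RMS per read (hypothetical)

# One Poisson draw per bin (the sum of four Poisson pixels is itself Poisson).
photons = rng.poisson(signal, n_bins).astype(float)

# Hardware binning: charge from 4 pixels is summed on-chip, then read once.
hw = photons + rng.normal(0.0, read_noise, n_bins)

# Software binning: each pixel is read separately, so 4 read noises add in quadrature.
sw = photons + rng.normal(0.0, read_noise, (4, n_bins)).sum(axis=0)

print("hardware-binned SNR:", round(hw.mean() / hw.std(), 2))  # ~ 100 / sqrt(100 + 25)  ~ 8.9
print("software-binned SNR:", round(sw.mean() / sw.std(), 2))  # ~ 100 / sqrt(100 + 100) ~ 7.1

At bright signal levels shot noise dominates and the two converge, which is why the read-noise penalty of software binning matters mostly in the shadows.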
Title: The Physics of Digital Cameras
Post by: PierreVandevenne on January 22, 2010, 08:09:49 pm
Quote from: bjanes
Binning can be done in hardware with CCDs. With 2x2 binning one collects 4 times as many photoelectrons with only one read noise, as described here (http://www.photomet.com/learningzone/binning.php). However, if binning is done in software post capture, there are 4 read noises. Until recently, such hardware binning could only be done with monochrome sensors, but the new Sensor+ technology from Phase One allows hardware binning in color. AFAIK no current CMOS sensors can perform hardware binning. You reduce shot noise, but read noise is not affected.

Clever stuff (http://www.directdigitalimaging.com/pdf/PhaseOneSensorPlus.pdf - http://www.phaseone.com/Digital-Backs/P65/...hnologies.aspx) (http://www.phaseone.com/Digital-Backs/P65/P65-Technologies.aspx)), apparently mixing a classical CCD binning approach with a "virtual" green pixel organization similar to Fuji's SuperCCD (http://en.wikipedia.org/wiki/Super_CCD)... Clever, certainly useful, but I wonder if it can be seen as original enough to actually secure a patent.

BTW, did the Expose+ previous technology get its patent or is there a detailed explanation somewhere? Googling for xpose+ patent -pending -angemeldet doesn't give any meaningful result. At first sight, it doesn't look much different from using a temp scaled dark frame. Maybe there's more to it... If I remember correctly, it was advertised as "patent pending" for the P20+. Should have been granted by now. Does anyone know the patent application number?

Apparently the Kodak KAC series (at least the KAC-5000) is advertised as a CMOS sensor with hardware binning capability. And in that case, the recent patent can be found. http://www.freepatentsonline.com/EP1900191.html (http://www.freepatentsonline.com/EP1900191.html)

Title: The Physics of Digital Cameras
Post by: col on January 22, 2010, 08:26:40 pm
Quote from: Jonathan Wienke
Oops. You are correct. As long as energy from outside the room is available to replace the heat energy absorbed by the device inside the room and keep the wall temperature constant, then electrical energy output by the device can be used wherever you like. From a thermodynamic perspective, though, it is important for the device to be able to work indefinitely even in a completely isolated environment containing a finite amount of energy. IMO, this is the key difference between my proposal and everything else energy-related I've looked at--my device can operate indefinitely, even when starting out in a completely isolated isothermal environment (no temperature difference between one point and another), as long as all energy output by the device remains within the isolated environment (or closed system).


Several members of my family so far. Only one real skeptic so far thinks it's impossible, but after two years, he has yet to do a rigorous examination of the math and physics and show me anywhere I've misplaced a decimal point or misapplied an equation. He has an excuse though, he is currently deployed and commanding an air base in Iraq. This is actually the first time I've let the idea out in the cold cruel world...

I'm not easily impressed, but your idea, and the thought you have put into it, impresses me greatly. As yet I have only very briefly skimmed over your detailed description, but see no obvious errors.

As with so many potentially attractive methods of harnessing useful energy, I suspect that your idea may be impractical, though theoretically possible. Have you actually calculated/estimated how large a device would need to be to generate 100mW of electrical energy? As I see it, if you are able to come up with a commercially viable power source based on your idea, then that is a bonus, but the theoretical idea is fascinating regardless.

Colin

 

Title: The Physics of Digital Cameras
Post by: PierreVandevenne on January 22, 2010, 08:34:21 pm
Quote from: Jonathan Wienke
Oops. You are correct. As long as energy from outside the room is available to replace the heat energy absorbed by the device inside the room and keep the wall temperature constant, then electrical energy output by the device can be used wherever you like. From a thermodynamic perspective, though, it is important for the device to be able to work indefinitely even in a completely isolated environment containing a finite amount of energy. IMO, this is the key difference between my proposal and everything else energy-related I've looked at--my device can operate indefinitely, even when starting out in a completely isolated isothermal environment (no temperature difference between one point and another), as long as all energy output by the device remains within the isolated environment (or closed system).

Great that it is not creating energy, it would have been suspicious ;-).

Thermodynamically speaking, how does one prevent the completely isolated environment from reaching some kind of thermal equilibrium? Adding conversion steps makes the analysis more complex, yes, but what keeps the system unstable? Assuming the system is unstable, and there is a perpetual flow in a perfectly isolated environment, how does it fundamentally differ from a perpetual motion machine in a frictionless environment?

And, assuming it works in a non isolated environment, ultimately relying on the energy coming from outside the room to produce electricity, how _efficient_ is it at producing it compared to other energy sucking devices?

At first sight, there are a lot of similarities between your machine and that one

http://www.lhup.edu/~dsimanek/museum/sucker.pdf (http://www.lhup.edu/~dsimanek/museum/sucker.pdf)

I'd like to stress the fact that I am impressed by the thought and the experimental work you put into this: not only did you come up with something that is quite close to a very cool "paradox", but you also avoided the obvious pitfall of creating energy ex-nihilo.


 

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 22, 2010, 08:50:18 pm
Quote from: col
I'm not easily impressed, but your idea, and the thought you have put into it, impresses me greatly. As yet I have only very briefly skimmed over your detailed description, but see no obvious errors.

As with so many potentially attractive methods of harnessing useful energy, I suspect that your idea may be impractical, though theoretically possible. Have you actually calculated/estimated how large a device would need to be to generate 100mW of  electrical energy? As I see it, if you are able to come up with a commercially viable power source based on your idea, then that is bonus, but the theoretical idea is fascinating regardless.

I did exactly that in slide 22, and came up with 288 mW/cm^3 of device volume at 300K as a rough estimate of achievable power density using currently available thermocouples and other "off the shelf" components. This could possibly be improved on considerably, perhaps by a factor of 10 or 20, by refining the device design. 288 mW/cm^3 is within the realm of practicality for powering cars and other ground vehicles, but probably too heavy/bulky for aircraft. It's certainly within the practical size/weight range for powering laptops, cellphones, wireless security sensors, etc. And for fixed applications (powering a building, etc) it would be just fine--a refrigerator-sized box outside your house could perform all the necessary heating and cooling you'd need, and supply all the electricity you'd need to run your TV, washer/dryer, lights, etc.
Title: The Physics of Digital Cameras
Post by: PierreVandevenne on January 22, 2010, 08:55:43 pm
Quote from: WarrenMars
is more or less constant, the smaller the pixels the more significant the margin losses. When you're looking at pixel pitches of 3µm² the area taken up by the cell margins is a significant fraction of the total cell area. It follows that the number of photons detected by the bin is significantly less than those

I am not a native English speaker, so please forgive my ignorance if I put my foot in my mouth :-). As I understand it, a pixel pitch is actually a measure of distance. That seems confirmed by different sources... (http://en.wikipedia.org/wiki/Dot_pitch for example). Therefore, I must confess I am a bit puzzled by the fact that your distance unit of choice seems to remain the µm². What is your preferred unit for area?
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 22, 2010, 09:09:01 pm
Quote from: PierreVandevenne
Thermodynamically speaking, how does one prevent the completely isolated environment from reaching some kind of thermal equilibrium? Adding conversion steps makes the analysis more complex, yes, but what keeps the system unstable? Assuming the system is unstable, and there is a perpetual flow in a perfectly isolated environment, how does it fundamentally differ from a perpetual motion machine in a frictionless environment?

The cool thing about this idea is that it is perfectly fine with starting out in a condition of perfect thermal equilibrium within a closed system (all temperatures of all components exactly the same). The key concept is the asymmetric thermodynamic boundary--a barrier that energetic particles (photons in this case) can easily cross of their own volition in one direction, but not the other. All common conceptions about thermodynamics (heat cannot flow from a cold object to a hot object, etc) are based on the premise that all thermodynamic boundaries are symmetric; i.e. that any given energetic particle has an equal probability of crossing the boundary in either direction unless acted on by an outside force (which requires the expenditure of energy). By creating an asymmetric boundary, the system naturally gravitates to a state where energy concentration is unequal, and that variance in energy concentration can then be exploited in any number of conventional ways. The equilibrium-state ratio of energy concentration on opposite sides of the boundary is inversely proportional to the ratio of the probability that a particle will cross in one direction vs the probability it will cross in the opposite direction. With the normal symmetric boundary, this ratio is 1:1, therefore the equilibrium state is also 1:1, or equal concentration of energy on both sides.

I go into this in more detail in slides 53-69 of the presentation.
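
Purely as arithmetic, the equilibrium ratio claimed above does follow from that premise. Here is a toy two-box simulation that simply assumes a boundary with unequal crossing probabilities exists (whether such a passive boundary is physically possible is, of course, exactly the point in dispute):

Code
import numpy as np

def equilibrium_ratio(p_ab, p_ba, n_particles=1_000_000, steps=500, seed=0):
    # Two boxes, A and B. Each step, every particle in A crosses to B with
    # probability p_ab, and every particle in B crosses back with probability p_ba.
    rng = np.random.default_rng(seed)
    a, b = n_particles, 0
    for _ in range(steps):
        a_to_b = rng.binomial(a, p_ab)
        b_to_a = rng.binomial(b, p_ba)
        a, b = a - a_to_b + b_to_a, b - b_to_a + a_to_b
    return b / a

print(equilibrium_ratio(0.10, 0.05))   # ~2.0, i.e. p_ab / p_ba
print(equilibrium_ratio(0.10, 0.10))   # ~1.0, the ordinary symmetric boundary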
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 22, 2010, 09:30:38 pm
Quote from: PierreVandevenne
At first sight, there is a lot of similarities between your machine and that one

http://www.lhup.edu/~dsimanek/museum/sucker.pdf (http://www.lhup.edu/~dsimanek/museum/sucker.pdf)

I initially thought of something along similar lines, and after spending a few months doing ray-tracing simulations calculating the properties of various emitter and reflector geometries, I proved to my own satisfaction that you can't concentrate isotropic radiation with reflectors to a higher concentration than that of the source of the radiation. Regardless of the shape of the reflective cavity and the arrangement of the blackbody masses, the flux density of photons measured anywhere in the cavity will be identical in the conditions described in your link.

Using the same ray-tracing code, but adding in calculations for refraction, total internal reflection, and Fresnel's equations (to calculate the proportion of refracted vs reflected energy), I proved to my satisfaction that concentrating isotropic radiation can be accomplished with refraction or total internal reflection (though the latter is more effective). The experiments I conducted appear to validate my theory, though not necessarily beyond all possibility of doubt.
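
For anyone who wants to reproduce that kind of calculation, here is a minimal standalone sketch of the Fresnel reflectance of unpolarized light at a single planar interface (a toy function, not the actual ray-tracing code referred to above):

Code
import numpy as np

def fresnel_reflectance(theta_i_deg, n1, n2):
    # Unpolarized reflectance at a planar n1 -> n2 interface (Fresnel equations).
    # Returns 1.0 beyond the critical angle (total internal reflection when n1 > n2).
    theta_i = np.radians(theta_i_deg)
    sin_t = n1 * np.sin(theta_i) / n2            # Snell's law
    if sin_t >= 1.0:
        return 1.0
    theta_t = np.arcsin(sin_t)
    rs = (n1 * np.cos(theta_i) - n2 * np.cos(theta_t)) / \
         (n1 * np.cos(theta_i) + n2 * np.cos(theta_t))
    rp = (n1 * np.cos(theta_t) - n2 * np.cos(theta_i)) / \
         (n1 * np.cos(theta_t) + n2 * np.cos(theta_i))
    return 0.5 * (rs ** 2 + rp ** 2)

# Glass (n ~ 1.5) to air: ~4% reflectance at normal incidence, 100% beyond ~41.8 degrees.
for angle in (0, 20, 40, 42, 60):
    print(angle, round(fresnel_reflectance(angle, 1.5, 1.0), 3))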
Title: The Physics of Digital Cameras
Post by: Daniel Browning on January 22, 2010, 09:33:45 pm
Quote from: WarrenMars
Due to the relatively low electron counts in the small cells other noise sources such as thermal noise and readout error become more significant, in the above example: 4 times more significant in fact!

I agree. Readout error is the most significant issue for smaller pixel sizes. In order to have the same noise power at any given spatial frequency, read noise has to decrease at the same rate as the pixel size: a 3 micron pixel must have half the read noise of a 6 micron pixel for both to contribute the same noise power at any common level of detail. Fortunately, there are many smaller pixels that do indeed have lower read noise -- often even lower than is needed just to match a larger pixel. But only at base amplification. At high gain the trend is reversed, and larger pixels often have less read noise than scaled small pixels.

I think it should also be mentioned that thermal noise is only significant in a minority of photographic applications (shutter speeds longer than 10 seconds).
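
To put rough numbers on that scaling rule (the 4 e- figure is hypothetical): summing four small-pixel reads adds their read noise in quadrature, so the small pixel needs half the read noise of the large one to match it at the same output resolution.

Code
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
read_big = 4.0                    # e- RMS for a hypothetical 6 micron pixel

for read_small in (4.0, 2.0):     # 3 micron pixel: same read noise vs. half of it
    binned = rng.normal(0.0, read_small, (4, n)).sum(axis=0)
    print(f"small-pixel read noise {read_small} e- -> {binned.std():.2f} e- after 2x2 binning "
          f"(large pixel: {read_big} e-)")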

Quote from: WarrenMars
Although it sounds fine in theory, in the real world  there are various problems with the pixel binning solution.

The main problem is that there is always some loss at the margins of the pixel.

Although the idea that there is always some loss sounds fine in theory, in the real world, actual shipping cameras prove that sensor designers have achieved equal QE across a huge variety of pixel sizes (including 3 microns) for typical focal lengths and f-numbers, with more than a full order of magnitude between their areas.

Quote from: WarrenMars
Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

This can be solved. It's a Simple Matter Of Programming. If you want the full return from smaller pixels, you have to accept the larger files, slower processing, etc. But if you only want the full return some of the time, and the rest of the time you want smaller files, normal quality, etc., then you just select the compressed raw option. It will give you the same filesize and processing speed as if you had larger pixels. Look at REDCODE for example. It compresses 9.5 MP into a 1 MB compressed raw file. Now sure, the quality is not quite as high as an 11 MB lossless-compressed raw file -- but whatever quality issues there are (and they are minor) do not reach into any of the lower spatial frequencies, which is all that larger pixels could offer anyway.
Title: The Physics of Digital Cameras
Post by: col on January 22, 2010, 09:53:11 pm
I wrote:
Quote
a good case can be made that what actually matters is the total number of detected photons for the entire image, independent of the size or number of pixels.

Quote from: WarrenMars
Although it sounds fine in theory, in the real world  there are various problems with the pixel binning solution.

The main problem is that there is always some loss at the margins of the pixel. You can talk about micro lenses that can bend the light around the depletion zone, but 1) they ain't gonna bend all the light, 2) there's still gonna be appreciable loss at the micro lens boundaries. In your example above, if the small pixels were square with side length 1, then the large pixel would have a boundary length 8 and the 4 small pixels a total boundary length 16. The margin losses inherent in the pixel binning are then DOUBLE those of the unbinned. Since the width of pixel margins is more or less constant, the smaller the pixels the more significant the margin losses. When you're looking at pixel pitches of 3µm² the area taken up by the cell margins is a significant fraction of the total cell area. It follows that the number of photons detected by the bin is significantly less than those detected at the large pixel. Hence photon noise is greater at the bin.

Due to the relatively low electron counts in the small cells other noise sources such as thermal noise and readout error become more significant, in the above example: 4 times more significant in fact!

Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

And yet, Warren, my statement above still stands as absolutely correct, as regards "shot noise" from photon counting statistics. All you are saying is that, for a given overall sensor size, the total number of photons collected may be less when the number of pixels is increased, due to a greater proportion of dead area between the pixels. Just so. My statement makes no comment on the many variables that determine how many photons are detected. All I said was that the effective shot noise in the image as a whole is set by the total number of detected photons in the image. You need to read and understand exactly what I say before rushing in and declaring that it is untrue.

Colin
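
For anyone who wants to see the photon-counting part of this for themselves, here is a small flat-field simulation (scaled-down grids and an arbitrary photon count, purely to keep it quick): the high-resolution capture, once binned to the same resolution, has the same SNR as the low-resolution capture with the same total detected photons.

Code
import numpy as np

rng = np.random.default_rng(1)

mean_big = 4000                                                # photons per big pixel (arbitrary)
big = rng.poisson(mean_big, (1000, 1000)).astype(float)        # stand-in for the 4MP camera
small = rng.poisson(mean_big / 4, (2000, 2000)).astype(float)  # stand-in for the 16MP camera

# 2x2 software binning brings the high-resolution image down to the same resolution.
binned = small.reshape(1000, 2, 1000, 2).sum(axis=(1, 3))

print("big-pixel SNR:", round(big.mean() / big.std(), 1))        # ~ sqrt(4000) ~ 63
print("binned SNR:   ", round(binned.mean() / binned.std(), 1))  # essentially the same

Read noise, fill factor and so on are deliberately left out; this isolates the counting statistics that the statement is about.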


Title: The Physics of Digital Cameras
Post by: PierreVandevenne on January 22, 2010, 09:58:17 pm
Quote from: Jonathan Wienke
Using the same ray-tracing code, but adding in calculations for refraction, total internal reflection, and Fresnel's equations (to calculate the proportion of refracted vs reflected energy), I proved to my satisfaction that concentrating isotropic radiation can be accomplished with refraction or total internal reflection (though the latter is more effective). The experiments I conducted appear to validate my theory, though not necessarily beyond all possibility of doubt.

In a non-closed system, at first sight, I can believe your system could generate _some_ electricity forever. In a closed system, for a while. But intuitively, I can't agree with the 0.288 W per cm3 figure. That would lead to 2.88 kW per 10000 cm3 (a one square meter, 1 cm high layer) and 288 kW (2.88 MW (!) if you increase efficiency as planned) for a cubic meter of device. Since we can basically assume that all the energy we receive comes from the sun, we are already at about twice the solar constant per square meter in orbit. Granted, you could take the energy elsewhere (note that the thermal energy input will definitely be limited by the area of the exchanger, not its volume!) without bad long-term consequences for the planet, since we'll be giving it back one way or another when using it (too bad it isn't a solution to global warming as well ;-))... Still, being in the neighborhood of the machine is likely to be somewhat uncomfortable if it is that efficient at converting heat into electricity.

Also, I can't think of a geometry that would allow both dense stacking and temp gradient preservation...
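
The arithmetic behind that comparison, for anyone checking along (using the usual ~1361 W/m2 value for the solar constant):

Code
power_density = 0.288      # W per cm^3, Jonathan's original estimate
solar_constant = 1361      # W per m^2 above the atmosphere (approximate)

slab = power_density * 100 * 100 * 1    # a 1 m x 1 m x 1 cm layer, in watts
cube = power_density * 100 ** 3         # a full cubic meter of device, in watts

print(f"1 cm slab per square meter: {slab / 1000:.2f} kW "
      f"(about {slab / solar_constant:.1f}x the solar constant)")
print(f"one cubic meter of device:  {cube / 1000:.0f} kW")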
Title: The Physics of Digital Cameras
Post by: col on January 22, 2010, 11:36:48 pm
Quote from: Jonathan Wienke
I initially thought of something along similar lines, and after spending a few months doing ray-tracing simulations calculating the properties of various emitter and reflector geometries, I proved to my own satisfaction that you can't concentrate isotropic radiation with reflectors to a higher concentration than that of the source of the radiation. Regardless of the shape of the reflective cavity and the arrangement of the blackbody masses, the flux density of photons measured anywhere in the cavity will be identical in the conditions described in your link.

Using the same ray-tracing code, but adding in calculations for refraction, total internal reflection, and Fresnel's equations (to calculate the proportion of refracted vs reflected energy), I proved to my satisfaction that concentrating isotropic radiation can be accomplished with refraction or total internal reflection (though the latter is more effective). The experiments I conducted appear to validate my theory, though not necessarily beyond all possibility of doubt.

Hi Jonathan,

There is something that concerns me about the very fundamentals of what you propose.

Refer to Page 5 of your PDF document, headed "Theory of Operation"

The ray tracing from emissive surface "A" is just fine. Photons leaving surface "A" travel for a brief distance through air (or vacuum), and then strike the refractive layer and are refracted at the interface. No problems.

The ray tracing from surface "B" may just be a bit dodgy, because you have apparently assumed that it is possible for the photons to start their journey from just inside the refractive material. Whether that is valid is unclear. Consider the situation if the emissive surface "B" was simply pressed hard up against the refractive layer. In that case, there would still be a very small air gap, and your ray tracing would be wrong - in fact, the ray trace would look identical to the rays leaving emissive material "A", and the device would therefore not work. As I understand it, what you actually do (in effect) is to "paint" the emissive surface onto the refractive surface. It could be argued that the photons still originate outside of the refractive material, and will therefore be refracted as they enter it, just as if there was an infinitesimally small air gap between the two. If this is true, your device will behave symmetrically, and will not work.

Your experimentally measured temperature difference of 0.018K strikes me as very small, and does not support your theory beyond all possible doubt.

What do you think?

Colin
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 23, 2010, 09:18:12 am
Quote from: col
Hi Jonathan,

There is something that concerns me about the very fundamentals of what you propose.

Refer to Page 5 of your PDF document, headed "Theory of Operation"

The ray tracing from emissive surface "A" is just fine. Photons leaving surface "A" travel for a brief distance through air (or vacuum), and then strike the refractive layer and are refracted at the interface. No problems.

The ray tracing from surface "B" may just be a bit dodgy, because you have apparently assumed that it is possible for the photons to start their journey from just inside the refractive material. Whether that is valid is unclear. Consider the situation if the emissive surface "B" was simply pressed hard up against the refractive layer. In that case, there would still be a very small air gap, and your ray tracing would be wrong - in fact, the ray trace would look identical to the rays leaving emissive material "A", and the device would therefore not work. As I understand it, what you actually do (in effect) is to "paint" the emissive surface onto the refractive surface. It could be argued that the photons still originate outside of the refractive material, and will therefore be refracted as they enter it, just as if there was an infinitesimally small air gap between the two. If this is true, your device will behave symmetrically, and will not work.

It is easy to verify experimentally that it is possible to optically bond an emissive surface to refractive material so that refraction does not occur between the emissive surface and the refractive material. If you lay a triangular prism on top of a sheet of printed text, it is possible to observe the total internal reflection effect--when attempting to view the text from certain angles, the text will disappear and be replaced by a reflection of whatever is on the other side of the prism from the viewer. In contrast, if you paint the surface of the prism, then you can view the painted surface from any angle you like, and it will never "disappear" due to total internal reflection.

The same principle is employed in fingerprint scanners. You have a triangular prism with a 90° angle and two 45° angles. You have a light source attached to one 45° surface and the sensor attached to the other 45° surface. When nothing is being scanned, total internal reflection causes the photons emitted from the light source to bounce off the inner surface of the prism and reflect to the sensor. But when a finger is placed on the surface, a temporary optical bond forms between the tops of the fingerprint ridges and the prism surface. This optical bond enables photons to travel from inside the prism directly into the finger being scanned without being refracted. As a result, the sensor sees a reflection of the light source where the ridges are not in contact with the prism, and the ridges themselves where they are in contact. The brightness difference between the direct reflection of the light source and the light reflected off the skin is used to determine which pixels are fingerprint and which are background.

The diagrams below illustrate the concept; imagine the black circle to be a fingerprint ridge pressed against the prism surface and creating the optical bond. Instead of being reflected directly to the detector, the photons are mostly absorbed by the fingerprint ridge. If you have a triangular glass prism, it is easy to visually verify this for yourself experimentally.

[attachment=19680:Prism_TIR.gif]  [attachment=19681:Prism_OB.gif]

As long as the emissive surface is optically bonded to the layer of refractive material (basically the difference between painting the surface of the prism directly instead of just laying it on a painted surface), the photons will not be refracted as they are emitted from the emissive surface into the refractive material.
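
The critical angle behind both the prism demonstration and the fingerprint scanner is just Snell's law; a couple of lines make it concrete (the refractive indices are typical textbook values):

Code
import math

def critical_angle_deg(n_dense, n_rare=1.0):
    # Incidence angle (from the normal) beyond which total internal reflection
    # occurs for light heading from the denser medium toward the rarer one.
    return math.degrees(math.asin(n_rare / n_dense))

print(round(critical_angle_deg(1.5), 1))    # glass: ~41.8 deg
print(round(critical_angle_deg(1.33), 1))   # water: ~48.8 deg

The ~41.8° figure is why the 45° faces of the scanner prism reflect everything that is not optically bonded to the glass; the ~48.8° figure is the same effect behind the water-surface "mirror" you can see from just below the surface.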

Quote
Your experimentally measured temperature difference of 0.018K strikes me as very small, and does not support your theory beyond all possible doubt.

I regard the experiments as interesting, but not as ironclad beyond-a-shadow-of-a-doubt proof.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 23, 2010, 05:34:14 pm
Here's another refraction / total internal reflection experiment that is easy to try:

Fill a rectangular or square transparent container until it is almost full of water. Observe the surface of the water from just below water level, and you will see that it is behaving like a mirror from that vantage point. If you hold your finger a few millimeters above the water's surface, you will not be able to see it through the surface when your viewpoint is just below the surface level of the water. But if you dip your finger partway into the water, you will have no trouble at all seeing the part of your finger below water level. The reason for this is because the water-air boundary forms a refractive interface where refraction and total internal reflection will occur, but no such refractive interface exists between the water and your finger. As a result, you can clearly see the portion of your finger below water level, and no part of your finger below water level is hidden by reflections. Any photons being emitted or reflected by your finger inside the water go directly into the water without being refracted or reflected.
Title: The Physics of Digital Cameras
Post by: col on January 23, 2010, 07:18:42 pm
Quote from: Jonathan Wienke
Here's another refraction / total internal reflection experiment that is easy to try:

Fill a rectangular or square transparent container until it is almost full of water. Observe the surface of the water from just below water level, and you will see that it is behaving like a mirror from that vantage point. If you hold your finger a few millimeters above the water's surface, you will not be able to see it through the surface when your viewpoint is just below the surface level of the water. But if you dip your finger partway into the water, you will have no trouble at all seeing the part of your finger below water level. The reason for this is because the water-air boundary forms a refractive interface where refraction and total internal reflection will occur, but no such refractive interface exists between the water and your finger. As a result, you can clearly see the portion of your finger below water level, and no part of your finger below water level is hidden by reflections. Any photons being emitted or reflected by your finger inside the water go directly into the water without being refracted or reflected.

Yes, through this simple yet clever example, as well as from the excellent examples in your previous post, you convince me that the photons leaving the emissive layer will not be refracted upon entering the refractive substrate, provided the two layers are "optically bonded", which can be achieved by painting the emissive layer onto the refractive substrate. Alternatively, a thin refractive layer could be "painted" or similarly deposited onto a thicker emissive substrate, which (as I understand) is what you would actually do in your final embodiment of this device.

OK. My next question is whether you have made a serious attempt to calculate what temperature difference you should be getting in your constructed model. I'm being lazy here, in that I could do a back-of-envelope calculation on this myself, but suspect you have already done it. My gut feeling is that, if all your claims are true, and if there is the slightest chance that a useful amount of electrical power could ever be produced, then you should be measuring more than a 0.018K temperature difference in the prototype model. Do back-of-envelope calculations suggest you should get a dT of more than 0.018K?  

Another question. Your surfaces "B" were made from 6" Edmund Optics IR Fresnel lenses, painted black on the grooved side. I know what a Fresnel lens is, but you make no mention of this in your Background and Theory sections, so you have confused me a little here. Your Theory of Operation clearly shows a simple, parallel-sided slab (or layer) of refractive material, not a Fresnel lens.

Cheers, Colin

   

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 23, 2010, 08:58:18 pm
Quote from: col
Alternatively, a thin refractive layer could be "painted" or similarly deposited onto a thicker emissive substrate, which (as I understand) is what you would actually do in your final embodiment of this device.

Yes, that is what I had in mind. The refractive layer wouldn't have to be much more than a few wavelengths thick to work as intended, though the wavelengths in question range all the way to about 60 µm.

Quote
OK. My next question is whether you have made a serious attempt to calculate what temperature difference you should be getting in your constructed model. I'm being lazy here, in that I could do a back-of-envelope calculation on this myself, but suspect you have already done it. My gut feeling is that, if all your claims are true, and if there is the slightest chance that a useful amount of electrical power could ever be produced, then you should be measuring more than a 0.018K temperature difference in the prototype model. Do back-of-envelope calculations suggest you should get a dT of more than 0.018K?  

Another question. Your surfaces "B" were made from 6" Edmund Optics IR Fresnel lenses, painted black on the grooved side. I know what a Fresnel lens is, but you make no mention of this in your Background and Theory sections, so you have confused me a little here. Your Theory of Operation clearly shows a simple, parallel-sided slab (or layer) of refractive material, not a Fresnel lens.

I used the Fresnel lenses because they were the cheapest thing I could find that claimed to be reasonably transparent in the long-wave infrared region where room-temperature thermal emission occurs. I wasn't using them as lenses, I was simply using them as a field-expedient refractive coating for my "B" surface. However, they are far from ideal for that purpose, as the following spectral transmission graph shows:

(http://www.edmundoptics.com/images/catalog/5910.gif)

Ideally, transmission should be near 100% from 3-60µm. If you compare this graph with the graph in slide 74 showing emitted power distribution at 300K, you'll see that there are several bands where the Fresnel lens material is opaque (and therefore highly emissive) at wavelengths active at 300K, which reduces maximum efficiency greatly. On top of that, the experiment was conducted in air, not a vacuum, reducing efficiency even further. Given the inefficiency of the lens material for the intended purpose, and the less-than-optimal experimental conditions (not having access to a vacuum pump and a suitably sized container), 0.018 K is within the realm of plausibility. I'd obviously be much happier if I'd measured a greater temperature difference, but it's not that far out of line from what I was realistically expecting to get.

It all comes down to how much money I can get the wife to approve for abstract science experiments she doesn't really understand, and what kind of materials and facilities I can get access to in order to try some experiments under more optimal conditions with materials better suited to the purpose.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 23, 2010, 09:28:27 pm
Quote from: PierreVandevenne
In a non-closed system, at first sight, I can believe your system could generate _some_ electricity forever. In a closed system, for a while. But intuitively, I can't agree with the 0.288 W per cm3 figure. That would lead to 2.88 kW per 10000 cm3 (a one square meter, 1 cm high layer) and 288 kW (2.88 MW (!) if you increase efficiency as planned) for a cubic meter of device.

The calculations behind that power density figure are on slides 22-23 of the presentation, and I think they are fairly conservative. One thing to note, though: the calculation is for the size of the converter device itself, not for the heat exchanger needed to supply it with enough heat energy to keep functioning. The size of the heat exchanger will vary dramatically, depending on whether the heat is being extracted from air or a liquid such as water.

Quote
Since we can basically assume that all the energy we receive comes from the sun, we are already at about twice the solar constant per square meter in orbit. Granted, you could take the energy elsewhere (note that thermal energy input will definitely be limited by the area of the exchanger, not its volume!) without bad long term consequences for the planet since we'll be giving it back one way or another when using it (too bad it isn't a solution to global warming as well ;-))... Still being in the neighborhood of the machine is likely to be somewhat uncomfortable if it is that efficient at converting heat into electricity.

Today's internal combustion engines are about 50% efficient. If you were to have an electric car powered by one of my devices, you'd need an air heat exchanger roughly comparable to an internal combustion engine's radiator, with similar airflow, to run an electric car with comparable horsepower. The difference would be that the air would be cooled instead of heated as it passed through the heat exchanger.

IMO, this idea is a possible solution for carbon-based fuel consumption and the pollution it generates. If you can extract enough energy from ambient air to power your car as you drive (cooling the air in the process), then there is no reason to burn any fuel, and driving your car will have essentially zero environmental impact. It's the ultimate "green" technology if it can be built cheaply enough to be economically feasible.

Consider this question: would you pay an extra 10000 euro for your next car if you NEVER had to stop at a fuel station again?
Title: The Physics of Digital Cameras
Post by: WarrenMars on January 24, 2010, 09:19:57 am
We seem to have gone a long way from the original topic, which is ok, but I'm out.

Thank you to all who contributed to this thread and who read my web pages. Yes there was a fair amount of ridicule but no more than I deserved, considering the arrogance of my original post and the presence of a number of factual errors in my site. I hereby apologise for these shortcomings. Now some more humble pie before I go:

Thanks to the ridicule from some of you I have forced myself to research more deeply than I had previously, and I have found that some of the fundamental assumptions that I had made were incorrect. In particular: RAW files are not gamma compressed, the ideal F number is not 1, and pixels contain only 1 colour, not 4. Thanks also to Jonathan and Colin for alerting me to the limited electron carrying capacity per pixel.

The Gods don't like hubris and I have been justly embarrassed; however, it has been said that "The man who never made a mistake never made anything!" Furthermore, I don't believe this thread has been a waste of time, far from it! In particular the magic number of f/0.5 is surely worth the price of admission on its own.

What doesn't break you only makes you stronger, and I am off to rewrite my web pages in the light of my deeper understanding. The thrust of my original analysis remains correct; however, it must now be seen as an idealised theoretical analysis rather than a study of real-world cameras. I will address the issue of real-world cameras in the rewrite and I will produce real-world data to verify my analysis. For the moment I am modifying Dave Coffin's DCRAW application in order to produce the required TRUE RAW, something I haven't been able to find anywhere.

I will post a new thread in this forum when I have finished the rewrite and will appreciate any constructive criticism at that time.
See you folks around.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 24, 2010, 02:53:57 pm
Quote from: WarrenMars
I will post a new thread in this forum when I have finished the rewrite and will appreciate any constructive criticism at that time.
See you folks around.

Fair enough. It's been interesting...
Title: The Physics of Digital Cameras
Post by: Theresa on January 25, 2010, 11:23:27 am
QUOTE (WarrenMars @ Jan 22 2010, 03:47 PM) *
Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

With a modern computer, such as one with a quad-core chip, 6GB of memory, and copious hard drive space, processing is VERY fast. The files aren't "overly large" since the computer has no problem coping with them. I even had a four-year-old computer that was fast enough. The only time I see complaints about digital photos being too big is when someone is trying to work on a 24GB file with an old computer that is worth maybe one fourth as much as the camera. Older Macs are no faster than old PCs; it's just a fact of life that a hi-def camera requires a fast computer.
Title: The Physics of Digital Cameras
Post by: joofa on January 25, 2010, 04:37:08 pm
Quote from: WarrenMars
Thank you to all who contributed to this thread and who read my web pages. Yes there was a fair amount of ridicule but no more than I deserved, considering the arrogance of my original post and the presence of a number of factual errors in my site. I hereby apologise for these shortcomings.

Warren, you don't have to apologize. It takes a lot of courage to admit a mistake and I really appreciate your boldness in doing it online.

Quote from: WarrenMars
Thanks to the ridicule from some of you I have forced myself to research more deeply than I had previously, and I have found that some of the fundamental assumptions that I had made were incorrect. In particular: RAW files are not gamma compressed, the ideal F number is not 1, and pixels contain only 1 colour, not 4. Thanks also to Jonathan and Colin for alerting me to the limited electron carrying capacity per pixel.

It is sad that you were ridiculed, especially by those whose own understanding in some of these matters is not correct.

Quote from: WarrenMars
For the moment I am modifying Dave Coffin's DCRAW application in order to produce the required TRUE RAW; something I haven't been able to find anywhere.

Now that is something courageous. Unfortunately, DCRAW is written in a bad programming style. Very difficult to read. A good example of code that is functional but otherwise poorly written. Good luck with it.

Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 25, 2010, 07:30:07 pm
Quote from: PierreVandevenne
In a non closed system, at first sight, I can believe your system could generate _some_ electricity forever. In a closed system, for a while. But intuitively, I can't agree with the 0.288 W per cm3 though. That would lead to 2.8 KW per 10000 cm3 (a one square meter, 1 cm high layer) and 288 KW (2.8MW (!) if you increase efficiency as planned) for a cubic meter of device.

I double-checked my calculations regarding the 0.288 W/cm^3 estimate, and did find some errors from converting units incorrectly. I've corrected those errors, and have a revised estimate of 0.171 W/cm^3 power density, based on the following assumptions:

Slides 22 and 23 of the presentation files have been updated to reflect the revised power density estimate. 0.171 W/cm^3 is equivalent to 171 kW/m^3.
Title: The Physics of Digital Cameras
Post by: DarkPenguin on January 25, 2010, 08:12:58 pm
Quote from: joofa
Now that is something courageous. Unfortunately, DCRAW is written in a bad software programming style. Very difficult to read. A good example of a code that is functional but otherwise poorly written. Good luck with it.

http://www.libraw.org/ (http://www.libraw.org/)





Title: The Physics of Digital Cameras
Post by: col on January 25, 2010, 08:26:02 pm
Quote from: WarrenMars
We seem to have gone a long way from the original topic, which is ok, but I'm out.
Thank you to all who contributed to this thread and who read my web pages. Yes there was a fair amount of ridicule but no more than I deserved, considering the arrogance of my original post and the presence of a number of factual errors in my site. I hereby apologise for these shortcomings.
I will post a new thread in this forum when I have finished the rewrite and will appreciate any constructive criticism at that time.
See you folks around.

Best luck with everything. Sure, you made a few mistakes, but forced us all to think and learn, and some of the topics which came up were interesting.
Title: The Physics of Digital Cameras
Post by: dwdallam on January 27, 2010, 04:38:22 am
Quote from: Plekto
If they were using a material other than glass, though, that would change.  The question, though, is: what other materials would we possibly use besides glass and plastic?

Diamond.
Title: The Physics of Digital Cameras
Post by: dwdallam on January 27, 2010, 05:02:24 am
Quote from: Jonathan Wienke
I did exactly that in slide 22, and came up with 288 mW/cm^3 of device volume at 300K as a rough estimate of achievable power density using currently available thermocouples and other "off the shelf" components. This could possibly be improved on considerably, perhaps by a factor of 10 or 20, by refining the device design. 288 mW/cm^3 is within the realm of practicality for powering cars and other ground vehicles, but probably too heavy/bulky for aircraft. It's certainly within the practical size/weight range for powering laptops, cellphones, wireless security sensors, etc. And for fixed applications (powering a building, etc) it would be just fine--a refrigerator-sized box outside your house could perform all the necessary heating and cooling you'd need, and supply all the electricity you'd need to run your TV, washer/dryer, lights, etc.

Why don't you just build it, Jonathan? I want one. And you better shut your mouth. Corporate fascists will assassinate you. The best way to do it is to make the machine, patent your findings, and spread it all over the world via the internet. Then, even if somehow the patent is bought, it can in no way be eliminated.

How much would you need to build the machine? If your idea is that revolutionary, then I'm sure that if it isn't millions, the community would help. I mean, this could change everything in all people's lives all over the globe. It would be the energy equivalent of the Emancipation Proclamation. This is no small thing here.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 27, 2010, 11:52:30 am
Quote from: dwdallam
Why don't you just build it Jonathan? I want one. And you better shut your mouth. Corporate fascists will assassinate you. Best way to do it is to make the machine, patent your findings, and spread it all over the world via the internet. Then, even if somehow the patent is bought, it can no way be eliminated.

I've done what experimenting and testing of the idea I can within my shoestring budget; to really test the idea properly, I'd need access to a reasonably well-stocked optics lab containing the type of equipment used to manufacture lenses, as well as thermometers accurate down to the mK range.

One of the reasons I posted the idea here is to have a dated, verifiable record of my theory via Google and other search engines' web caches, and as a means to get it circulating among people who are recognized experts in optics and physics for some meaningful peer review (hopefully sparking some interest within the scientific community, and eventually from corporations capable of building the device). The other is that I figure my odds of "surviving the assassins" are better if I'm widely known to be associated with the project than if I'm one of only a small number of people who know of it. Public figures are harder to dispose of without anyone noticing...

Quote
How much would you need to build the machine? If yuor idea is that revolutionary, then I'm sure if it isn't millions, the community would help. I mean this could change everything in all people's lives all over the globe. It would be the energy equivalent of the Emancipation Proclamation. This is no small thing here.

Given access to a decent optics lab, I could conclusively prove or disprove the validity of my theory for less than the cost of a MFDB, perhaps as little as a thousand dollars. The trick is finding someone in charge of such a lab who would be willing to let me use the equipment, or have lab staff conduct experiments to my specifications. If you know of anyone, please feel free to send them the link to the presentation (http://www.visual-vacations.com/physics/) and/or email me at jonwienke(at)yahoo.com.

If my theory is conclusively validated, devices could probably be manufactured in quantity fairly cheaply, but I don't know enough about what is possible with current manufacturing methods to intelligently estimate how cheaply. And even if the devices were very expensive at first (say $100/watt of output power), there are plenty of commercial applications for wireless devices where such a price premium would be acceptable: watches, PDAs, MP3 players, Bluetooth headsets, GPS navigation devices, satellite phones, emergency distress beacons, any electronic device operating in remote areas where AC power is not available and battery resupply is inconvenient, etc. And like I asked before, would you pay an extra 25% premium for your next car if you never had to visit a gas station again?
Title: The Physics of Digital Cameras
Post by: dwdallam on January 27, 2010, 06:13:48 pm
Quote from: Jonathan Wienke
If my theory is conclusively validated, devices could probably be manufactured in quantity fairly cheaply, but I don't know enough about what is possible with current manufacturing methods to intelligently estimate how cheaply. And even if the devices were very expensive at first (say $100/watt of output power), there are plenty of commercial applications for wireless devices where such a price premium would be acceptable: watches, PDAs, MP3 players, Bluetooth headsets, GPS navigation devices, satellite phones, emergency distress beacons, any electronic device operating in remote areas where AC power is not available and battery resupply is inconvenient, etc. And like I asked before, would you pay an extra 25% premium for your next car if you never had to visit a gas station again?

One word, but not limited to this one word, expresses the economic viability of such a device: Military. There's a 1 Trillion dollar a year industry.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 27, 2010, 09:29:01 pm
Quote from: dwdallam
One word, but not limited to this one word, expresses the economic viability of such a device: Military. There's a 1 Trillion dollar a year industry.

Having not-so-long-ago been a participant in that industry, I haven't forgotten about it. There's all kinds of stuff that grunts need to carry around that use electricity, and dealing with carrying spare batteries is a huge PITA. If a battery replacement can be built into a GPS, red-dot scope, or radio that eliminates the need for carrying spare batteries (a HUGE deal on longer missions where resupply is not feasible or subject to enemy action), that can be a literal life-saver.

And then there's the notion of energy weapons that have their own power supply, allowing them to be fired indefinitely without requiring ammo resupply...
Title: The Physics of Digital Cameras
Post by: bg2b on January 28, 2010, 06:55:04 am
Quote from: Jonathan Wienke
By creating an asymmetric boundary, the system naturally gravitates to a state where energy concentration is unequal, and that variance in energy concentration can then be exploited in any number of conventional ways.
It sounds to me like you've made (or at least you think you've made) Maxwell's demon.  To be convincing, I think you need to show how current arguments for the impossibility of the demon are flawed.
Title: The Physics of Digital Cameras
Post by: Jonathan Wienke on January 28, 2010, 10:33:29 am
Quote from: bg2b
It sounds to me like you've made (or at least you think you've made) Maxwell's demon.  To be convincing, I think you need to show how current arguments for the impossibility of the demon are flawed.

I cover this in my presentation (http://visual-vacations.com/physics/) explaining the theory in detail. The arguments against Maxwell's demon are valid, as long as you assume that thermodynamic boundaries must always be symmetrical, and the only way to create a disequilibrium in the concentration of energetic particles on opposite sides of the boundary is to engage some sort of active interference (such as opening and closing the demon's trap door) with the natural behavior of the particles. Any such active interference of course requires energy, which cancels out any gains you get from the interference.

What I'm theorizing is completely different from Maxwell's demon; instead of trying to force the particles to move to one side of the boundary via energy-consuming active interference in their natural behavior, I've devised a boundary that exploits the innate properties of the particles (photons) to passively concentrate themselves on one side of the boundary without any active energy-consuming external interference via principles of refraction and total internal reflection.

It's sort of like a yellowjacket trap (http://www.buy.com/retail/product.asp?sku=206925864&listingid=35919362) for IR photons; designed to be easy for them to enter, but difficult to exit. And like the yellowjacket trap, it doesn't require any external power source to operate, other than a supply of IR photons to be trapped. There's no need for any energy-consuming "demon" or "trap door"--the intrinsic natures of the trap and the things being trapped ensure the trap operates effectively without needing active external intervention.
Title: The Physics of Digital Cameras
Post by: JoeThibodeau on February 14, 2010, 12:52:40 pm
I think there is a future for more innovation in digital imaging. Some folks at Xerox PARC have produced an imaging cell which encapsulates blue, red, and green filters in a stack rather than the Bayer approach, which naturally generates chroma aliasing by introducing a lower-frequency color component to the signal. As sensor density increases these different aliasing artifacts will become invisible to the naked eye in print. Granted there are limitations relating to the source of energy and the means of acquiring that energy, but I think there is some room for continual improvement over the next 10 years minimum. I am also wondering out loud about high-density sensor arrays with sensor cells shaped more like amoebas and dispersed in a chaotic pattern like film grain.
Title: The Physics of Digital Cameras
Post by: fredjeang on February 14, 2010, 03:59:24 pm
Hey, I see that there is not one post from a woman on this topic!    

Fred.
Title: The Physics of Digital Cameras
Post by: Theresa on February 19, 2010, 05:44:02 pm
Quote from: fredjeang
Hey, I see that there is not one post from a woman on this topic!    

Fred.

I've read some of this and am just wondering if there's some sort of reality check that could be used.
Title: The Physics of Digital Cameras
Post by: bg2b on February 26, 2010, 08:17:40 pm
Quote from: Jonathan Wienke
I've devised a boundary that exploits the innate properties of the particles (photons) to passively concentrate themselves on one side of the boundary without any active energy-consuming external interference via principles of refraction and total internal reflection.
I saw a puzzle at the NY Times along similar lines here (http://tierneylab.blogs.nytimes.com/2010/02/22/monday-puzzle-getting-something-from-nothing/).  It proposes a "papaya battery" with a certain shape that supposedly will cause one electrode to heat up and another to cool down with no energy input.  Then the temperature difference can be used to produce power.  The challenge to the readers is to figure out what (if anything) is wrong with it.