You may be surprised to find that current image quality is already within 2 stops of its theoretical maximum and is unlikely to improve by more than 1 stop. You may also be surprised to discover that current technology has already pushed photography 6 stops beyond what can be achieved theoretically and that the difference has been covered up with a combination of human tolerance, noise reduction and sharpening.
There are some other results that may surprise you as well. Go ahead and read my exposé on this fascinating and complex subject.
http://warrenmars.com/photography/technica...ion/photons.htm
I have read Warren's essay, and would like to make a few suggestions.
The first part of the essay is concerned with estimating how many (or more significantly, how few) photons are detected by each pixel. Basically, Warren starts out with known illumination levels in a variety of environments, and uses these to estimate, in a very rough manner, how many of the available photons might actually be detected by the camera CCD. This is an extremely poor way to estimate the number of detected photons. Problems include the unknown spectral distribution of the quoted illumination levels, all sorts of arguments about the efficiency of the camera optics, the detector quantum efficiency, the active area of the detector as a proportion of the total area, the effectiveness of the detector microlenses, and who knows what else. The result is that the estimates obtained for the number of detected photons are very rough indeed, almost uselessly so, and I think Warren would acknowledge this.
Fortunately there is direct data available for the number of detected photons per pixel, elegantly sidestepping all the guesswork and unknowns in Warren's approach. The total number of photons that can be detected per pixel is known as the "full well capacity", and this is shown for a large number of cameras and sensors at
www.clarkvision.com Sure, it varies somewhat from camera to camera, but broadly speaking the full well capacity per pixel of a 12MP APS-C camera is about 25,000 electrons. Each electron corresponds to a detected photon, so we are talking about a MAXIMUM of 25,000 photons being detected per pixel for your typical APS-C SLR camera. This full well capacity corresponds to the maximum brightness level that can be recorded at the base ISO, which is usually pretty close to ISO 100. At higher ISOs, the number of detected photons is proportionally less. The full well capacity is roughly proportional to the pixel area, so full-frame cameras score better with typically 70,000 electrons, and point-n-shoot cameras score much worse, in fact best not to talk about them.
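To make the scaling concrete, here is a small sketch of how the maximum detected photon count falls with ISO. The 25,000 and 70,000 electron figures are the rough full-well capacities quoted above, not measured values for any particular camera:

```python
# Rough full-well capacities (electrons = detected photons) quoted above.
FULL_WELL = {"APS-C (12MP)": 25_000, "Full frame": 70_000}

def max_photons(full_well, iso, base_iso=100):
    """Above base ISO the exposure is reduced, so proportionally
    fewer photons are collected before the pixel clips."""
    return full_well * base_iso / iso

for name, fw in FULL_WELL.items():
    for iso in (100, 400, 1600):
        print(f"{name} @ ISO {iso:4d}: ~{max_photons(fw, iso):,.0f} photons/pixel")
```

So an APS-C pixel that clips at ~25,000 photons at ISO 100 clips at only ~1,563 photons at ISO 1600, which is where the noise figures below come from.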
Let's talk APS-C, where a maximum of around 25,000 photons are detected at base ISO, usually ISO 100. Warren can now more accurately re-calculate the horrific effects of his "poisson aliasing", which I personally don't believe is a problem, so I will instead talk very briefly about what I personally believe are the implications of detecting "only" 25,000 photons.
For simplicity I will consider only shot noise (photon counting noise), as this type of noise dominates in regions of medium to high intensity. The uncertainty in the measured photon count is SQRT(25000) or 158 photons. Therefore the signal-to-noise ratio of pixels in the brightest part of the image is 25000/158 or about one part in 158. That might not sound wonderful, but it is generally accepted (for example, by DXOmark) that an SNR of better than 30dB, which is one part in 31, is "good" image quality, so I am confident in stating that one part in 158 is very good indeed, and you won't see any noise at all. That is in accordance with general observation. Take an image with an APS-C camera at 100 ISO, and you won't see noise at the brightest parts of the image.
However, that is the absolute best case. The average intensity level in most images is around 18% of the peak, corresponding to 0.18x25000 = 4500 photons, leading to a signal-to-noise ratio of one part in 67. That is still pretty good, and again in accordance with the observation that the noise in the "average" parts of the image is still pretty low at 100 ISO.
At 400 ISO, those 4500 photons will be only 1125 photons, and the signal/noise will be 1 part in 33.5, just a whisker above the 30dB accepted as "good" IQ. Yes, that is pretty much consistent with experience when using an APS-C camera.
At 1600 ISO the signal/noise has dropped to 1 part in 17, which is certainly visible, though not catastrophic. Keep in mind that all of these examples are in the worst case where the image is viewed at 100%. When the image is downsized, as it usually is, the effective noise is less.
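The arithmetic above can be checked in a few lines. For shot noise the uncertainty is SQRT(N), so the SNR is N/SQRT(N) = SQRT(N), and the decibel figure is 20*log10 of that ratio. The photon counts are the ones used in the worked examples above:

```python
import math

def snr_ratio(n_photons):
    """Shot-noise-limited SNR: signal N over noise sqrt(N) = sqrt(N)."""
    return math.sqrt(n_photons)

def snr_db(n_photons):
    """The same SNR expressed in decibels (20*log10 of the ratio)."""
    return 20 * math.log10(snr_ratio(n_photons))

cases = {
    "peak, ISO 100":       25_000,
    "18% grey, ISO 100":    4_500,
    "18% grey, ISO 400":    1_125,
    "18% grey, ISO 1600":     281,  # 1125 / 4, rounded
}
for label, n in cases.items():
    print(f"{label}: 1 part in {snr_ratio(n):.0f} ({snr_db(n):.1f} dB)")
```

This reproduces the figures in the text: roughly 1 part in 158 (44 dB) at the peak, 1 part in 67 at 18% grey, 1 part in 33.5 (just over 30 dB) at ISO 400, and 1 part in 17 at ISO 1600.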
In summary, the numbers obtained from conventional theory "add up" pretty well and match everyday experience, so I am yet to be convinced about the supposedly horrific effects of "poisson aliasing", though my mind is always open. The number of bits used for digitizing of course has nothing to do with it, provided the number of bits is chosen conservatively so that quantization error is negligible compared to other noise, which is the case.
Warren's prediction of only modest improvements in the future seems likely. Shot noise is fundamentally related to the number of detected photons, and there really are only a limited number of ways to detect more photons.
On the detector front, expect gradual improvement in QE, pixel fill factor, microlenses, electronic noise and signal processing.
On the camera front, fundamentally the amount of light (= total number of photons) collected and imaged onto the sensor is set by the absolute aperture (not the f-number) of the lens, and a larger absolute aperture essentially means a physically larger lens. Therefore, if you want to collect more light, the lens will need to be larger and heavier, and this is true independently of sensor size. In practice, increasing the absolute aperture also requires scaling up the sensor size, to avoid impractically small f-numbers. Experience strongly suggests that the cost of making larger sensors will continue to fall, but keep in mind that a larger aperture (= bigger, heavier) lens is required to collect more light onto the bigger sensor, so there are no free lunches here. How many kg of lens are you prepared to pay for and carry? The absolute aperture also sets the depth of field, so gathering more light onto the sensor fundamentally leads to a decrease in DOF, ultimately placing a practical limit on collecting more light from the lens. Warren made this point also.
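A quick sketch of the absolute-aperture point, for anyone who wants to see the numbers. The absolute aperture (entrance pupil) diameter is focal length divided by f-number, and the light collected scales with its area. The focal lengths below are hypothetical examples chosen to give roughly the same field of view on the two formats:

```python
import math

def pupil_diameter_mm(focal_length_mm, f_number):
    """Absolute aperture (entrance pupil) diameter in mm."""
    return focal_length_mm / f_number

def pupil_area_mm2(focal_length_mm, f_number):
    """Light gathered is proportional to the pupil's area."""
    d = pupil_diameter_mm(focal_length_mm, f_number)
    return math.pi * (d / 2) ** 2

# Hypothetical equivalent-field-of-view pair: 33mm on APS-C, 50mm on full frame.
for fl, fmt in ((33, "APS-C"), (50, "full frame")):
    print(f"{fl}mm f/2 ({fmt}): pupil {pupil_diameter_mm(fl, 2):.1f} mm, "
          f"area {pupil_area_mm2(fl, 2):.0f} mm^2")
```

The same f-number on the larger format means a physically larger pupil, hence more total light, but also a bigger, heavier lens and shallower depth of field, which is exactly the no-free-lunch trade-off described above.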
Cheers, Colin