Luminous Landscape Forum

Site & Board Matters => About This Site => Topic started by: imagico on July 13, 2009, 02:41:14 am

Title: Moore's law for cameras
Post by: imagico on July 13, 2009, 02:41:14 am
Reading Ray Maxwell's essay, I have to say that while I agree with the central idea - that sensor pixel counts will most likely not scale according to Moore's law in the future - I do not think I fully agree with his reasoning:

The end of Moore's law has been postulated many times for semiconductors - for very similar reasons to those Ray gives for camera sensors, namely the limitations of physics and the lack of actual need. Nonetheless, structure sizes in chip production continue to shrink, since engineers have found ways to work around these limits, and economic pressure continues to favor faster and cheaper components.

The bottom line: I think there is no reason why Moore's law should not apply to digital camera technology in general - possibly with a somewhat different time scale (this is not the same for all computer components either - HDs scale somewhat differently than RAM or processors). But of course there will be shifts of focus from some aspects (like maybe the pixel count) to others (like maybe sensitivity, dynamic range, metering). The problem will be more how to verify Moore's law, since it is not as straightforward to measure the performance of a digital camera as it is for a computer.

The fact that viewers of photos can hardly tell which camera model they were taken with - and will possibly be even less able to in the future - actually brings cameras closer to computers: the computer used to produce a digital product is usually not visible in the product itself in any way.
Title: Moore's law for cameras
Post by: feppe on July 13, 2009, 03:22:42 am
The hurdles computer chip designers ran into are due to the materials used, so they can overcome them by changing the materials or their composition (to a certain extent - I'm not a chip engineer). The ultimate hurdle sensor and lens designers have to overcome is not a property of the lens or sensor, but of light itself - a fundamental physical limitation rather than an engineering problem.

Then again, there are already prototypes of negative refraction lenses even in visible wavelengths, which might change the ballgame just like changing the composition of chips did.

But I agree with the author that there are many more interesting things being prototyped and envisioned than a hundred-megapixel camera - for example, variable DOF which can be changed in post. Also, while a hundred megapixels is certainly overkill for 99.99999% of applications, it allows for massive cropping and/or zooming into the image (think Zoomify).
Title: Moore's law for cameras
Post by: Rob C on July 13, 2009, 03:43:47 am
Quote from: feppe
Also, while a hundred megapixels is certainly overkill for 99.99999% of applications, it allows for massive cropping and/or zooming into the image (think Zoomify).




But isn't that the point? Zooming in becomes useless if all you do is zoom into mush.

Rob C
Title: Moore's law for cameras
Post by: feppe on July 13, 2009, 04:33:41 am
Quote from: Rob C
But isn't that the point? Zooming in becomes useless if all you do is zoom into mush.

Indeed it is - but my comment was given with the hope that negative refraction lenses become reality in consumer cameras.

I'm not an optical engineer (or any other kind), so I have no idea how feasible it is. But human ingenuity never ceases to amaze me, and I wouldn't be surprised if we come up with ways to get around pesky physical limitations.

One of my favorite examples is how telescopes get around atmospheric distortions: make a mirror which deforms on cue tens or hundreds of times a second, and use the changes in the position of a known reference star as a basis to calculate how to deform the mirror, thus correcting the distortions. What was thought physically impossible just decades ago is reality.
Title: Moore's law for cameras
Post by: Nemo on July 13, 2009, 07:02:19 am
Ray Maxwell's main argument is correct.

However, it refers to the resolution of an isolated point with a monochromatic sensor.

Image details are formed by overlapping Airy disks. You need at least 4 pixels to fully resolve the light intensity variations within an Airy disk (a peak and two valleys for an isolated disk) - and even more than 4 pixels if you have a Bayer pattern...

http://luminous-landscape.com/tutorials/resolution.shtml (http://luminous-landscape.com/tutorials/resolution.shtml)

Better lenses mean smaller Airy disks at wide apertures... so there is room for improvement - optical improvement. It is quite expensive, though, and it translates to big lenses.

Overall, Maxwell made a good point. Additional increases in resolution have a diminishing marginal impact on the final detail resolved...
Title: Moore's law for cameras
Post by: pegelli on July 13, 2009, 08:30:51 am
I think his essay nicely proves why it's not needed with current optical technology.

However, only time will tell if that is sufficient reason for it not to happen
Title: Moore's law for cameras
Post by: Tim Gray on July 13, 2009, 08:55:19 am
With respect to a sensor limited to the traditional 35mm, of course Ray is correct - the laws of physics being what they are. However, there's nothing written in stone that says a sensor needs to be limited to 35mm (as is the case for the expensive MFs). I don't see why the operation of Moore's law within the confines of physical limitations (or a looser interpretation of his "law") would preclude inexpensive larger sensors. In any event, regardless of the price, the form factor probably precludes significant demand for, say, an 8x10" sensor with the same density as a P65. So he's probably right in a practical sense as well.

BTW, It "Innovators Dilemma" not "Inventors Dilemma".
Title: Moore's law for cameras
Post by: bradleygibson on July 13, 2009, 09:26:02 am
I won't debate the physics of Ray's article, because I believe he is essentially correct.

The thing about cameras, though, is that people don't buy them based on how efficiently they use their Airy disks - people buy them to meet an emotional need. Marketers understand this, and that is why you can find 8MP cellphone cameras and similarly ridiculously small pixels in point-and-shoot cameras.

I see no reason why the megapixel race will not continue for the foreseeable future:
  * As hardware gets faster there is no apparent penalty in performance
  * As compression gets better, there is no apparent penalty in filesizes
  * As hardware gets cheaper, there is no apparent penalty in cost
  * As competitors eke out a higher megapixel count, and other companies dutifully follow so as not to be at a marketing disadvantage, engineers exploit the Bayer mosaic to go below the limit implied by the Airy disk (discussed already above in this thread).

It is that last point which I feel will be the prime motivator for pushing the number of pixels beyond the Airy disk limit.

-Brad
Title: Moore's law for cameras
Post by: bjanes on July 13, 2009, 11:39:30 am
Quote from: imagico
Reading Ray Maxwell's essay, I have to say that while I agree with the central idea - that sensor pixel counts will most likely not scale according to Moore's law in the future - I do not think I fully agree with his reasoning:

The end of Moore's law has been postulated many times for semiconductors - for very similar reasons to those Ray gives for camera sensors, namely the limitations of physics and the lack of actual need. Nonetheless, structure sizes in chip production continue to shrink, since engineers have found ways to work around these limits, and economic pressure continues to favor faster and cheaper components.

The bottom line: I think there is no reason why Moore's law should not apply to digital camera technology in general - possibly with a somewhat different time scale (this is not the same for all computer components either - HDs scale somewhat differently than RAM or processors). But of course there will be shifts of focus from some aspects (like maybe the pixel count) to others (like maybe sensitivity, dynamic range, metering). The problem will be more how to verify Moore's law, since it is not as straightforward to measure the performance of a digital camera as it is for a computer.

The fact that viewers of photos can hardly tell which camera model they were taken with - and will possibly be even less able to in the future - actually brings cameras closer to computers: the computer used to produce a digital product is usually not visible in the product itself in any way.


This article from Stanford University Electrical Engineering, Moore meets Planck and Sommerfeld (http://www.stanford.edu/~pcatryss/documents/2005_SPIE-EI_Roadmap.pdf), discusses some of the limitations of applying Moore's law to sensors. One can scale the electronic part of the chip, but since imagers must interact with light, the implications of Moore's law are different for sensors than for micro-electronic components. Planck (photon noise) and Sommerfeld (diffraction) limit the usefulness of scaling. Read the article for details. I think that Ray's argument is sound.

Bill
Title: Moore's law for cameras
Post by: Michael LS on July 13, 2009, 12:37:06 pm
Mr. Maxwell made the statement, "We are very near the limit right now."

Therefore, how "near" is "very near"? That is, when using the current crop of pro-level lenses from the top camera makers on full-frame dslrs, what is the "limit"? Thirty-something mp? Forty-something? Fifty-something? A ballpark number is fine, since an exact number would be useless, and only theoretical, given manufacturing variations, etc.

And I'm speaking, of course, of current silicon, glass and noise-reduction technology- not vapor-ware like negative refraction lenses and other exotic and not-for-sale technology (which I look forward to seeing, but my wallet doesn't   )

So, any engineers, or armchair engineers care to take a crack at it?
Title: Moore's law for cameras
Post by: imagico on July 13, 2009, 12:46:21 pm
Quote from: bjanes
This article from Stanford University Electrical Engineering, Moore meets Planck and Sommerfeld (http://www.stanford.edu/~pcatryss/documents/2005_SPIE-EI_Roadmap.pdf), discusses some of the limitations of applying Moore's law to sensors. One can scale the electronic part of the chip, but since imagers must interact with light, the implications of Moore's law are different for sensors than for micro-electronic components. Planck (photon noise) and Sommerfeld (diffraction) limit the usefulness of scaling. Read the article for details. I think that Ray's argument is sound.

Please note I am not disputing the existence of physical limits that make it more and more inefficient to further decrease the pixel sizes of sensors - both from the point of view of optics and of quantum mechanics. What I argue is that this does not mean Moore's law does not apply to digital photography technology in general, as the essay says.

There is a well-known example from computer history that comes quite close to the current megapixel problem - the megahertz race of processors from some years ago (but please don't overstress this analogy): For quite some time the increase in computer performance was to a large extent accomplished by increasing the clock speed of the CPU. When this inflationary increase of clock speeds came to an end with Intel giving up the Pentium 4 design, this was in large part due to a hard physical limit they ran into: ever-increasing power dissipation with rising clock speeds, leading to thermal power densities that could no longer be handled efficiently (other aspects played a role as well, of course). But this did not stop the overall exponential scaling of performance according to Moore's law; technological development just switched from gaining performance by increasing clock speed to other areas (for example, multi-core designs).

Please also keep in mind that Moore's law does not apply to every single property of computers either - there are well-known aspects of computer technology that did not scale exponentially in recent times at all, for example memory access speeds, leading to significant changes in performance relations inside computer systems. This does not mean, though, that you can say Moore's law is not a good approximation of the scaling of computer technology as a whole.

Title: Moore's law for cameras
Post by: Alan Goldhammer on July 13, 2009, 01:02:16 pm
Wow, I guess I'm going to have to pull my old physics textbooks out and dust them off to get back up to speed on these issues!  The trouble is that we need to look at the camera as a whole in terms of capturing an image, and then at everything we use post-capture to process the information into a final print.  I don't think we can predict with any degree of certainty what changes in any one aspect of the process will do to the final outcome (a pleasing print): sensor size, lens design, new computer algorithms and the Adobe engineers who give us the software.  The one key thing that Maxwell notes at the end of the essay is: can anyone really tell the difference?  At some level the answer is yes, if we are looking at extremely large magnifications, but is that the real world (maybe for spy satellites it is)?  For most of us who don't print enormous panoramas, and who try to get everything of interest into a single image and then print the full frame at a modest enlargement, the answer is no.  I can take a nice image with my D300 (tripod mounted) and print it on 13x19 paper and doubt that the quality would be markedly different had I used a full frame DSLR or a medium format camera with corresponding back.  I do know with certainty that the latter equipment will cost a lot more money and that my cost/benefit calculation will likely point to the extra money spent on a new camera not being worth it at this point (prices may come down in the future).  Do I need a Hasselblad or Phase One to do what I'm doing? Not really.

Interesting topic nonetheless and the articles are provocative.
Title: Moore's law for cameras
Post by: ErikKaffehr on July 13, 2009, 01:24:03 pm
Hi,

Moore's law, as it is mostly known, is about shrinking component size. Component sizes nowadays are well below 100 nm, while it seems obvious that sensor pixel sizes much smaller than 5 microns don't make much sense. To me it seems that pixel sizes don't really depend on manufacturing technology but on other factors, like diffraction and well capacity. For that reason I cannot see that an exponential increase in sensor resolution would make any sense.

It is quite obvious that photographic resolution is limited by diffraction. The question is at which pixel size we get diminishing returns. The only way of reducing diffraction is to increase the optimum aperture of photographic lenses, but there is a problem with that approach, namely that depth of field will be very small. So we could have a very high performing lens, say a lens which is diffraction-limited at f/2.8, but such a lens could only achieve its maximum resolution essentially in a single plane. Add to this the need to have the lens in exact alignment with the sensor, in no way an easy feat.

Moore's law is not just about shrinking component size but also about increasing die size. This is absolutely relevant for photography: it means that big sensors are going to be more affordable. This can already be seen with the Canon 5D (II) and the Sony Alpha 900. The same trend may also affect larger sensor sizes.

Finally, I can see a benefit of increasing pixel densities further. One reason is that we can eliminate the need for a low-pass (anti-aliasing) filter if we make the Airy disk about the same size as a pixel. I'd also guess that it's better to have more pixels than to "uprez" using interpolation.

Best regards
Erik

Quote from: Tim Gray
With respect to a sensor limited to the traditional 35mm, of course Ray is correct - the laws of physics being what they are. However, there's nothing written in stone that says a sensor needs to be limited to 35mm (as is the case for the expensive MFs). I don't see why the operation of Moore's law within the confines of physical limitations (or a looser interpretation of his "law") would preclude inexpensive larger sensors. In any event, regardless of the price, the form factor probably precludes significant demand for, say, an 8x10" sensor with the same density as a P65. So he's probably right in a practical sense as well.

BTW, it's "Innovator's Dilemma", not "Inventor's Dilemma".
Title: Moore's law for cameras
Post by: ErikKaffehr on July 13, 2009, 01:33:47 pm
Hi Tim,

It would probably be possible to increase sensor size to 6x6 or even 6x8 cm. Today's large format sensors are stitched from smaller sensors, perhaps because today's steppers cannot expose a full frame sensor in a single exposure. Canon is supposed to have a stepper with that capability, but it is quite obvious that the sensors in the Nikon D3X and the Alpha 900 are stitched, and so are Dalsa MF sensors.

The problem is that you also need an ecosystem: lenses that are good enough, cameras which are aligned within a few microns, and customers willing to pay premium dollars. There are probably some three- or four-letter organizations with that kind of needs and assets.

Best regards
Erik


Quote from: Tim Gray
With respect to a sensor limited to the traditional 35mm, of course Ray is correct - the laws of physics being what they are. However, there's nothing written in stone that says a sensor needs to be limited to 35mm (as is the case for the expensive MFs). I don't see why the operation of Moore's law within the confines of physical limitations (or a looser interpretation of his "law") would preclude inexpensive larger sensors. In any event, regardless of the price, the form factor probably precludes significant demand for, say, an 8x10" sensor with the same density as a P65. So he's probably right in a practical sense as well.

BTW, it's "Innovator's Dilemma", not "Inventor's Dilemma".
Title: Moore's law for cameras
Post by: DaveCurtis on July 13, 2009, 05:22:01 pm
I gather that a "superlens" has been created with a negative refractive index  thus overcoming the so-called diffraction limit. Interesting stuff. Not sure if it's relavent to camera lenses though.

There seem to be several references to "superlens" research on the net.

"A new superlens that could make it possible to film molecules in action in real time with visible light has been developed by HP Labs researchers.

The lens takes advantage of subwavelength details in evanescent components of light, which can propagate in a material with a negative refractive index. To achieve a record-breaking resolution of 1/12th of the wavelength of light, the researchers grew smooth silver film just a few tens of nanometers thick on a layer of germanium, forcing the silver to form a smooth thin film"

Title: Moore's law for cameras
Post by: pedro.silva on July 13, 2009, 06:47:30 pm
Greetings!
Allow me to leave photon noise aside for now and concentrate on the damning diffraction. I was under the impression that small pixels would not necessarily pose an insurmountable diffraction problem: with the information recorded by small enough pixels, one could process that information - by deconvolution or whatever - and actually get higher resolution than with bigger pixels. And it would seem that diffraction shouldn't be too hard to model. Of course, that would escalate our computer expenses even more...
Am I far off?
Oh, in case it's not obvious... I'm no engineer!
cheers,
pedro
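
Pedro's idea is essentially deconvolution with a known diffraction PSF. A minimal Python sketch of the classic Richardson-Lucy scheme (my own illustration, not from the thread; the Gaussian PSF is a stand-in for a real Airy pattern):

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, iterations=30):
        # Iteratively re-estimate the sharp image given the blur kernel.
        estimate = np.full(blurred.shape, blurred.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode='same')
            estimate = estimate * fftconvolve(blurred / (reblurred + 1e-12),
                                              psf_mirror, mode='same')
        return estimate

    # Toy PSF: a normalized Gaussian standing in for the Airy pattern.
    y, x = np.mgrid[-3:4, -3:4]
    psf = np.exp(-(x**2 + y**2) / 2.0)
    psf /= psf.sum()

    scene = np.random.rand(64, 64)    # stand-in scene
    restored = richardson_lucy(fftconvolve(scene, psf, mode='same'), psf)

Whether this buys real resolution in practice depends on noise: deconvolution amplifies whatever noise is present at the frequencies it boosts, which is exactly where small pixels are weakest.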
Title: Moore's law for cameras
Post by: BernardLanguillier on July 13, 2009, 08:24:34 pm
Quote from: bradleygibson
The thing about cameras, though, is that people don't buy them based on how efficiently they use their Airy disks - people buy them to meet an emotional need. Marketers understand this, and that is why you can find 8MP cellphone cameras and similarly ridiculously small pixels in point-and-shoot cameras.

I see no reason why the megapixel race will not continue for the foreseeable future:
  * As competitors eke out a higher megapixel count, and other companies dutifully follow so as not to be at a marketing disadvantage, engineers exploit the Bayer mosaic to go below the limit implied by the Airy disk (discussed already above in this thread).

It is that last point which I feel will be the prime motivator for pushing the number of pixels beyond the Airy disk limit.

I guess that different companies have different approaches to this. Nikon seems clearly less interested in going for more megapixels; the D3 was a brave move in that it showed that at least one camera company dares to do what most knowledgeable photographers had been requesting: stop the megapixel race and go for more DR and lower noise levels.

I believe that consumers are not stupid, and they are now willing to listen to salespeople telling them that better pixels are more important than more pixels. I am therefore not sure that - for DSLRs at least - the race forward will keep focusing on more pixels.

The only missing piece is a good metric of pixel quality, a number that people can relate to as easily as pixels. My proposal would be to call it... "pixel quality"... and to compute it in a standardized way a la DxO. This is, by the way, what DxO is shooting for with their DxOMark thing: they are trying to have their name associated with the measure of pixel quality. They have foreseen a world in which "DxOMark" is written on camera tags in stores next to "resolution", "weight" and "price". One of the smartest moves in the camera industry in years, IMHO.

Therefore, I believe that the next thing is more features, starting with video, lenses... and better pixels.

Cheers,
Bernard
Title: Moore's law for cameras
Post by: AndyF on July 13, 2009, 09:02:09 pm
It may not be a question of whether Moore's law applies to digital photography, but of how and where it will manifest itself.  For example, I can see a strong argument for an 88 Mpixel sensor that would normally be diffraction-limited at 22 Mp.  Why?  To take advantage of the diffraction disk!

Assuming (and requiring...) that the essential advance is that those "88 Mp" pixels have the same sensitivity and noise performance as today's 22 Mp pixels, you could then fit an entire Bayer RGB cluster of pixels into one Airy disk.  The end result should be 22 mega sensor sites, where each site is an RGB Bayer cluster of pixels.  Because the Airy disk causes that cluster to see the same spot of information in the image, you'll have the full RGB information at that spot.

Another way of exploiting the diffusion provided by the Airy disk and sub-Airy-disk pixel sizes would be clustering pixels of different sensitivities.  With one exposure, the nominal 1.0 pixel would be exposed, and so would a 0.25 and a 4.0 pixel.  That would provide a wider dynamic range.

There are some further complications to exploiting this, such as the disk encompassing different pixels at different f-stops, but they can be solved (see the sketch below).
Andy
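
A back-of-the-envelope check of Andy's numbers in Python (a sketch; the 36x24 mm frame, 520 nm green light and f/8 are my assumptions, not from the post):

    import math

    SENSOR_W_MM, SENSOR_H_MM = 36.0, 24.0   # assumed full-frame dimensions
    WAVELENGTH_UM = 0.52                    # green light

    def pixel_pitch_um(megapixels):
        # Pitch of a square-pixel grid filling the frame area.
        area_um2 = SENSOR_W_MM * SENSOR_H_MM * 1e6
        return math.sqrt(area_um2 / (megapixels * 1e6))

    def airy_diameter_um(f_number):
        # Diameter to the first minimum: 2.44 * wavelength * f-number.
        return 2.44 * WAVELENGTH_UM * f_number

    print(f"22 MP pitch: {pixel_pitch_um(22):.2f} um")        # ~6.3 um
    print(f"88 MP pitch: {pixel_pitch_um(88):.2f} um")        # ~3.1 um
    print(f"Airy disk at f/8: {airy_diameter_um(8):.1f} um")  # ~10.2 um
    # A 2x2 Bayer cluster of ~3.1 um pixels spans ~6.3 um, comfortably
    # inside one ~10 um Airy disk at f/8 - the situation Andy describes.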
Title: Moore's law for cameras
Post by: John Camp on July 13, 2009, 11:37:10 pm
What if someone designed a camera without a shutter -- say one that sampled photon quantities at each well site over some simultaneous period of time, like 1/125, or 1/250, etc.
Title: Moore's law for cameras
Post by: bradleygibson on July 14, 2009, 12:06:52 am
John:
That's effectively what's happening with most modern DSLRs in live view mode, or when recording video.  The new micro 4/3 cameras and the RED cameras are both mirrorless as well.

Personally, I think it's a step in the right direction.  But removing the mechanical shutter wouldn't address the diffraction limit Ray's article discusses.

Andy:
Your idea is an example of the concept I was referring to when I said 'exploiting the Bayer pattern to go below the Airy disk limit'.  But remember that the Airy disks are overlapping, so each pixel is getting information mixed in from adjacent sites.  I don't have any idea of how to sort that out, even in theory.
Title: Moore's law for cameras
Post by: bradleygibson on July 14, 2009, 12:40:44 am
Quote from: MichaelL
Mr. Maxwell made the statement, "We are very near the limit right now."

Therefore, how "near" is "very near"? That is, when using the current crop of pro-level lenses from the top camera makers on full-frame dslrs, what is the "limit"? Thirty-something mp? Forty-something? Fifty-something? A ballpark number is fine, since an exact number would be useless, and only theoretical, given manufacturing variations, etc.

And I'm speaking, of course, of current silicon, glass and noise-reduction technology- not vapor-ware like negative refraction lenses and other exotic and not-for-sale technology (which I look forward to seeing, but my wallet doesn't   )

So, any engineers, or armchair engineers care to take a crack at it?

Assuming an ideal lens ("perfect", not retrofocus or telephoto design), green light (520nm) at:
* f/8 gives a limit of 5.08 microns
* f/11 gives a limit of 6.97 microns
* f/16 gives a limit of 10.2 microns.

(Note that blue light will give a smaller limit and red light will give a larger limit.  And since our lenses ain't perfect, expect real world sizes to be larger as well.)

Typical photosite sizes on a 39-megapixel digital back are around 6.8 microns, and a Nikon D3X (24-megapixel FF 35mm) is at 5.94 microns.  So, put another way, we're effectively 'there' now below f/8, and further decreases in sensel size represent diminishing resolution returns.

I believe that 5-micron sensel sizes might be a reasonable lower limit for high-end digital photography.  That doesn't mean things will stop there, though...

P.S.  There's a nice post on Airy disks/diffraction that's not too technical at http://www.cambridgeincolour.com/tutorials...photography.htm (http://www.cambridgeincolour.com/tutorials/diffraction-photography.htm).
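
Brad's figures fall straight out of the Rayleigh criterion (separation = 1.22 x wavelength x f-number); a quick Python check (my sketch, using the 520 nm wavelength from the post):

    def rayleigh_separation_um(f_number, wavelength_um=0.52):
        # Airy disk radius to the first minimum = minimum resolvable
        # separation under the Rayleigh criterion.
        return 1.22 * wavelength_um * f_number

    for n in (8, 11, 16):
        print(f"f/{n}: {rayleigh_separation_um(n):.2f} um")
    # -> f/8: 5.08, f/11: 6.98, f/16: 10.15 (matching the post to rounding)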
Title: Moore's law for cameras
Post by: Jonathan Cross on July 14, 2009, 03:02:40 am
I agree with Michael up to a point.  We already have cameras with very high pixel densities.  Just try working out how many pixels a full frame sensor would have at the density of those in a Canon G10 (see the sketch below).  The manufacturers of cameras like the G10 get round the diffraction problem by limiting the aperture.  I have a 5D MkII and have kept my previously bought 40D with its higher pixel density.  Why?  I will use the 40D for wildlife, as I can get 'closer' with a given lens.  For wildlife I do not need a small aperture, rather a fast shutter speed.  My 5D MkII will be used for my real love, landscapes, where I need a small aperture and the shutter speed does not usually matter.  So why do I not have an MF digital camera for my landscapes? Cost, of course!
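
Jonathan's G10 exercise, worked out in Python (a sketch; the ~14.7 MP count and ~7.6 x 5.7 mm sensor size are my assumed G10 specs):

    g10_mp = 14.7
    g10_area_mm2 = 7.6 * 5.7      # ~1/1.7" sensor (assumed dimensions)
    ff_area_mm2 = 36.0 * 24.0
    print(f"{g10_mp * ff_area_mm2 / g10_area_mm2:.0f} MP")  # ~290 MP at G10 density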

Title: Moore's law for cameras
Post by: Nemo on July 14, 2009, 05:53:30 am
Quote from: bradleygibson
Assuming an ideal lens ("perfect", not retrofocus or telephoto design), green light (520nm) at:
* f/8 gives a limit of 5.08 microns
* f/11 gives a limit of 6.97 microns
* f/16 gives a limit of 10.2 microns.

That is true for resolving an isolated point...

Details are formed by overlapping Airy disks... and then you need more than one pixel per Airy disk... Bayer patterns imply you need even more pixels per Airy disk...

Title: Moore's law for cameras
Post by: barryfitzgerald on July 14, 2009, 07:56:33 am
Things could change a lot over the years. I would tend to agree with the article, in that there are some limits. Also, Moore's law has a habit of being applied outside its original area (it's open to debate whether it even holds true there) - next we will hear Moore's law applies to dental technology or car engines ;-)

Leaving that to one side: I don't see Bayer sensors sticking around in the long term. I think we might see a move to multi-layer colour sensors, which would help improve colour reproduction and reduce the pressure on sensor density. The game could change... and significantly. What we see right now tech-wise for sensors is going to be a joke in 10 years' time!
Title: Moore's law for cameras
Post by: bradleygibson on July 14, 2009, 08:58:10 am
Quote from: Nemo
That is true for resolving an isolated point...

Details are formed by overlapping Airy disks... and then you need more than one pixel per Airy disk... Bayer patterns imply you need even more pixels per Airy disk...

Hi, Nemo,

Correct, to discuss resolution, one requires the ability to distinguish two distinct points.

The above calculations are the center-to-center distances of two Airy disks positioned such that the center of the first Airy disk occurs at the first minimum of the second (Rayleigh criterion for diffraction limits).  AFAIK, this is generally considered to be the limit of resolution.

-Brad
Title: Moore's law for cameras
Post by: samirkharusi on July 14, 2009, 09:39:18 am
Quote from: bradleygibson
Assuming an ideal lens ("perfect", not retrofocus or telephoto design), green light (520nm) at:
* f/8 gives a limit of 5.08 microns
* f/11 gives a limit of 6.97 microns
* f/16 gives a limit of 10.2 microns.
Let's try to make a huge leap, like the one where 4x5 gave way to the 35mm format over the past 60 years. The way I see the future is that sensors will be much smaller (= lenses being much smaller, and easier to design at high quality). Hence the current best performance at f/8 (for 35mm format lenses) will become best performance at f/4. The relevant diffraction limit becomes 2.5 microns, and you can still have 20 megapixels on a 4/3rds sensor from which a 16x20" print shot at f/4 is indistinguishable from a similar print made from a 35mm format sensor shot at f/8. See, there is plenty of room for further development. Next, lenses get optimised for f/2.8, sensors halve again in size and still have a useful 20 megapixels. You are by then running into seriously performing, pocketable cameras, approaching the physics of your own eye.

To me, it's sensor size that will soon become the next target. Olympus sees that, but perhaps they are a couple of years too early and thus still fighting an uphill battle. Canon and Nikon already have their crop cameras and are still dabbling in making seriously good lenses in crop mounts (EF-S and similar). Once they can get their crop cameras into the 20 to 30 megapixel range I expect that they will be accompanied by leading-edge premium quality EF-S lenses. The march can continue onwards for at least a decade in similar vein, without diffraction being the limiter. Recall that 16mm C-mount lenses have for many years been seriously fast (around f/1.0). If such lenses are, say, optimised at f/2.8, and they are accompanied by 16mm format, 20 MP sensors, wow... Depth of field matters take care of themselves. Who uses f/64 currently on 35mm format? f/8 will one day become overkill on such tiny sensors.
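
A quick sanity check of that scenario (my sketch; the 17.3 x 13 mm Four Thirds dimensions and 520 nm light are assumptions):

    import math

    def rayleigh_um(f_number, wavelength_um=0.52):
        # Rayleigh separation: 1.22 * wavelength * f-number.
        return 1.22 * wavelength_um * f_number

    def pitch_um(megapixels, w_mm, h_mm):
        # Square-pixel pitch for a given count on a given sensor.
        return math.sqrt(w_mm * h_mm * 1e6 / (megapixels * 1e6))

    print(f"20 MP Four Thirds pitch: {pitch_um(20, 17.3, 13.0):.2f} um")  # ~3.4 um
    print(f"Rayleigh limit at f/4:   {rayleigh_um(4):.2f} um")            # ~2.5 um
    # The pitch stays above the f/4 diffraction limit, so the 20 MP Four
    # Thirds sensor in this scenario is not yet diffraction-starved.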
Title: Moore's law for cameras
Post by: Nemo on July 14, 2009, 01:34:58 pm
Quote from: bradleygibson
Hi, Nemo,

Correct, to discuss resolution, one requires the ability to distinguish two distinct points.

The above calculations are the center-to-center distances of two Airy disks positioned such that the center of the first Airy disk occurs at the first minimum of the second (Rayleigh criterion for diffraction limits).  AFAIK, this is generally considered to be the limit of resolution.

-Brad

Sensors cannot resolve points at the Rayleigh criterion... because the contrast is too low (9%). The Rayleigh criterion was established for the separation of stars in telescopes, not for the separation of points in digital photography.

For resolving a separate point you need a pixel with a diagonal equal to the diameter of the disk... but for resolving line pairs formed by disks you need at least 2 pixels per line pair. How large do those pixels have to be? That depends on the separation of the disks... Separation means contrast. A minimum contrast is required by the sensor to resolve the detail. Therefore, a minimum separation is needed as well. And therefore a particular pixel size is necessary for the maximum resolving power of the lens + sensor team. And we are considering monochromatic sensors...

Things are a bit more complex than CambridgeInColour.com explains...
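
The ~9% figure Nemo cites is the diffraction MTF of an ideal lens at the Rayleigh spatial frequency, which is easy to verify (my sketch, using the standard incoherent diffraction-limited MTF formula):

    import math

    def diffraction_mtf(nu_over_cutoff):
        # Incoherent diffraction-limited MTF: (2/pi)*(acos(x) - x*sqrt(1-x^2)),
        # where x = nu/nu_cutoff and the cutoff frequency is 1/(wavelength*N).
        x = nu_over_cutoff
        return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

    # The Rayleigh separation 1.22*lambda*N corresponds to the spatial
    # frequency nu_cutoff / 1.22:
    print(f"{diffraction_mtf(1 / 1.22):.1%}")  # -> 8.9%, Nemo's ~9%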



Title: Moore's law for cameras
Post by: dalethorn on July 14, 2009, 02:59:38 pm
The time scales are interesting from a user point of view.  On the one hand, DSLR's with much better image quality have gotten so large that a Leica S2 "MF" camera is now smaller than some of them.

Looking at best choices from the lower end, in April 2005 I could buy a Casio pocket camera with 35-105 equiv. zoom and 10 mp on a 1/1.8 sensor, then 4 years later buy a Panasonic with 25-300 equiv. zoom and 10 mp on a 1/2.33 sensor.

So what did 4 years of tech improvements bring?
Zoom, fantastic.  Opens up whole new worlds of opportunity.  For DSLR's, the smaller size and improved stability of the zoom lenses is good news.
Zoom motor, still poor.  Thankfully my zoom on the G1 is manual.  The GH1 is another matter.
Image stabilization, all good news.
Size of camera, slightly thicker, still pocket size.  DSLR's are now available in ever-smaller sizes.
Quality of image, same or slightly better, and no worse noise.  DSLR's with smaller sensors are improving a lot.
HD video, stereo wideband sound, all good.  HD video is moving into most DSLR's now.
Battery, about the same.  The bad news is, most of the smaller "DSLR" cameras like Pana G1 and Oly Pen have a short battery life.
Memory, all good news.
Transfer speed, much better.
Screens are much better now.
Flash, little or no better.  Not very practical to use external flash on a pocket camera.

DSLR's will continue to do some things that all-in-one cameras won't do anytime soon, like use special tilt lenses etc.  But most other features will float up or down more or less equally.  And even if sensor and per-pixel image quality don't improve a lot, the smaller and better-stabilized zoom lenses are bringing new worlds of opportunity to walkaround shooting.

Maybe it's time to say that DSLR's have become the medium format of the present, and the official MF designation has moved into some other territory.
Title: Moore's law for cameras
Post by: bradleygibson on July 14, 2009, 07:08:25 pm
Quote from: Nemo
Sensors cannot resolve points at the Rayleigh criterion... because the contrast is too low (9%). The Rayleigh criterion was established for the separation of stars in telescopes, not for the separation of points in digital photography.

For resolving a separate point you need a pixel with a diagonal equal to the diameter of the disk... but for resolving line pairs formed by disks you need at least 2 pixels per line pair. How large do those pixels have to be? That depends on the separation of the disks... Separation means contrast. A minimum contrast is required by the sensor to resolve the detail. Therefore, a minimum separation is needed as well. And therefore a particular pixel size is necessary for the maximum resolving power of the lens + sensor team. And we are considering monochromatic sensors...

Things are a bit more complex than CambridgeInColour.com explains...

I will disagree that sensors cannot respond to a 9% difference in contrast; and re: CambridgeInColour.com, things are always more complex than any one article explains, but IMHO the referenced article serves as a good starting point.

Otherwise, I generally agree with you.  I felt a full theoretical analysis (which I wouldn't be qualified to do anyway) brings in too many variables and assumptions, and thus would end up avoiding the question being asked ("how far away are we?").

The size of the Airy disks is what it is for the light, ideal lens and aperture selected.  As for resolving it, you are correct that how best to do it (2x oversampling, or 4x with a Bayer sensor, or otherwise; establishing minimum acceptable contrast; what to do since no real lens is ideal, etc., etc.) is more involved.
Title: Moore's law for cameras
Post by: ErikKaffehr on July 15, 2009, 12:50:21 am
Hi,

Just one practical observation. A Swedish monthly, "Foto", does pretty solid lens tests at the Hasselblad factory using their MTF equipment, and they have seen that Olympus lenses tend to max out before f/8. They have also seen that the best aperture seems to be between f/5.6 and f/8 on resolution test targets. "Foto" seems to have difficulty finding good enough lenses for the newest cameras. My view would be:

- There is probably some good reason for higher pixel density, but it may not be to increase resolution.
- An example is that with high pixel density, diffraction may act as a low-pass (AA) filter and perhaps allow for more extensive sharpening.
- Smaller pixels may be more noisy, but that may be overcome with binning and downsampling.
- Cost per square centimetre is going down; this may make larger chips economically more feasible. This is a slow process. Now we have affordable "full frame 135".
- Regarding in-camera processing capability, Moore's law applies fully. Processing 5 or more 24 Mpixel images in a second is impressive, especially considering how slow general purpose computers are at raw conversion.
- Development may slow down. Sensor technology may be near optimum.
- I would bet on the Bayer matrix being around for a long time. Other technologies may seem advantageous, but the Bayer matrix solution is quite flexible regarding filter choices.
- Carl Zeiss had some interesting info on MTF vs. resolution and they certainly say that 24 Mpixel technology is not lens limited. I'll try to dig up that article; it's public but not easily found.

Here is the Zeiss Article:
http://www.zeiss.co.uk/C12567A8003B8B6F/Em...F_Kurven_EN.pdf (http://www.zeiss.co.uk/C12567A8003B8B6F/EmbedTitelIntern/CLN_30_MTF_en/$File/CLN_MTF_Kurven_EN.pdf)
http://www.zeiss.co.uk/C12567A8003B8B6F/Em...Kurven_2_en.pdf (http://www.zeiss.co.uk/C12567A8003B8B6F/EmbedTitelIntern/CLN_31_MTF_en/$File/CLN_MTF_Kurven_2_en.pdf) ( Resolution limit is discussed on Page 22)

The second article refers to separately downloadable images. I found them here:


http://www.zeiss.de/C12567A8003B8B6F/Graph...le/Image_01.jpg (http://www.zeiss.de/C12567A8003B8B6F/GraphikTitelIntern/CLN31MTF-KurvenBild1/$File/Image_01.jpg)
http://www.zeiss.de/C12567A8003B8B6F/Graph...le/Image_02.jpg (http://www.zeiss.de/C12567A8003B8B6F/GraphikTitelIntern/CLN31MTF-KurvenBild2/$File/Image_02.jpg)
...
http://www.zeiss.de/C12567A8003B8B6F/Graph...ile/Bild_10.jpg (http://www.zeiss.de/C12567A8003B8B6F/GraphikTitelIntern/CLN31MTF-KurvenBild10/$File/Bild_10.jpg)
This image compares 24 MPixel full frame (1) with scanned slides from 9x12 (2), 6x7 (3) and 24x36 (4) (using 100 ISO slide film and 4000 PPI)

http://www.zeiss.de/C12567A8003B8B6F/Graph...ile/Bild_13.jpg (http://www.zeiss.de/C12567A8003B8B6F/GraphikTitelIntern/CLN31MTF-KurvenBild13/$File/Bild_13.jpg)
This image compares the "original" with 24 and 12 MPixels



Best regards
Erik

Quote from: bradleygibson
I will disagree that sensors cannot respond to a 9% difference in contrast; and re: CambridgeInColour.com, things are always more complex than any one article explains, but IMHO the referenced article serves as a good starting point.

Otherwise, I generally agree with you.  I felt a full theoretical analysis (which I wouldn't be qualified to do anyway) brings in too many variables and assumptions, and thus would end up avoiding the question being asked ("how far away are we?").

The size of the Airy disks is what it is for the light, ideal lens and aperture selected.  As for resolving it, you are correct that how best to do it (2x oversampling, or 4x with a Bayer sensor, or otherwise; establishing minimum acceptable contrast; what to do since no real lens is ideal, etc., etc.) is more involved.
Title: Moore's law for cameras
Post by: Nemo on July 15, 2009, 06:29:08 am
I think Ray Maxwell is right, but the calculation based on pixel size versus Airy disk size isn't a good reference. It depends on the Bayer pattern (or lack thereof), on the maximum detail to be resolved and its nature (linear detail, sinusoidal detail, high contrast detail...?), on the optical quality of the lens (at wide apertures), etc.

I believe we will see 35mm format cameras with 30 MP or so...

Title: Moore's law for cameras
Post by: charleski on July 15, 2009, 09:31:10 am
Quote from: ErikKaffehr
Finally, I can see a benefit of increasing pixel densities further. One reason is that we can eliminate the need for a low-pass (anti-aliasing) filter if we make the Airy disk about the same size as a pixel. I'd also guess that it's better to have more pixels than to "uprez" using interpolation.

Best regards
Erik
I was about to post almost exactly the same comment. The current barrier to sensor resolution is the quality of the anti-aliasing filter. In most cases* that's a discrete element in front of the sensor. While there's a lot of research going into finding good alternatives to the standard birefringent filter used on most cameras, it's certainly true that one solution is simply to oversample the data.

The current crop of cameras actually lies in an uneasy middle ground, where the anti-aliasing filter is required for shooting at wide apertures, but unnecessary once stopped down past a diffraction limit which is well within the range of commonly-used apertures. If the sensor resolution is increased such that the camera/lens combination is diffraction-limited at all apertures, then the anti-aliasing filter, with its multiplicative MTF reduction, can be dispensed with completely, which may yield a noticeable benefit. (A sketch of the arithmetic follows the footnote below.)



*Yes, I know there are some MFD manufacturers who don't use an anti-aliasing filter and claim they can deal with artifacts in software. And yes, images from their systems do look extremely sharp when there are no visual cues to reveal how much of that detail is merely false, aliased data (moire is clearly visible on high-frequency parallel lines because we know what the image should look like; in the absence of such a clear visual clue you can often get by with allowing a lot of high-frequency aliasing garbage into the image, but it's really just HF noise). Of course, once aliasing noise has been introduced into a set of data samples there is no way of removing it without also removing part of the signal - proper anti-aliasing must take place in the analogue domain.
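
The crossover charleski describes - where diffraction itself becomes the anti-aliasing filter - is simple to estimate (my sketch; the pixel pitches and 520 nm wavelength are illustrative assumptions):

    def min_f_number_no_aliasing(pitch_um, wavelength_um=0.52):
        # The diffraction MTF cuts off at 1/(wavelength*N); once that falls
        # at or below the sensor Nyquist frequency 1/(2*pitch), the lens
        # alone band-limits the image: N >= 2 * pitch / wavelength.
        return 2 * pitch_um / wavelength_um

    for p in (6.4, 4.7, 3.1):
        print(f"{p} um pixels: alias-free from about f/{min_f_number_no_aliasing(p):.0f}")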
Title: Moore's law for cameras
Post by: Ray on July 15, 2009, 10:03:47 am
I've compared the 10mp Canon 40D with the 15mp Canon 50D using the same Canon 50/1.4 prime on both bodies. The pixel density of the 50D on full frame 35mm would be 39mp.

The current 5D2 has the pixel density of the 8mp 20D. My comparisons between the 40D and 50D lead me to believe there could be a worthwhile benefit in a 39mp FF 35mm DSLR - not necessarily, or not only, in resolution at the plane of focus, but in depth of field.

For example, the 50D at F16 produces about the same resolution as the 40D at F11, at the plane of focus. Using both cameras at F11 results in a marginal resolution edge to the 50D, but not as great as the DoF edge of the 50D at F16 (compared with the 40D at F11).

Comparing the 50D at F11 with the 40D at F5.6 produces a more dramatic DoF benefit. My 50/1.4 is slightly sharper at F5.6 than at F8, but not by much. At the plane of focus, the resolution of the 50D at F11 is about equal to the 40D at F5.6, at least with my copy of the 50/1.4. However, the DoF of the 50D at F11 is substantially greater than the DoF of the 40D at F5.6.

I should mention that such differences have been examined on monitor at 100% and 200%, representative of very large prints.
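
Ray's "39mp" figure is the 50D's pixel density scaled to a full 36x24 mm frame; a one-function sketch (the ~15.1 MP count and 22.3 x 14.9 mm APS-C dimensions are my assumed 50D specs):

    def ff_equivalent_mp(mp, sensor_w_mm, sensor_h_mm):
        # Scale the pixel count by the area ratio of full frame to the
        # source sensor.
        return mp * (36.0 * 24.0) / (sensor_w_mm * sensor_h_mm)

    print(f"{ff_equivalent_mp(15.1, 22.3, 14.9):.0f} MP")  # ~39 MP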

Title: Moore's law for cameras
Post by: Nemo on July 15, 2009, 01:23:56 pm
Ctein about diffraction:

http://theonlinephotographer.typepad.com/t...arithmetic.html (http://theonlinephotographer.typepad.com/the_online_photographer/2009/07/reality-is-not-arithmetic.html)

Very interesting.

Title: Moore's law for cameras
Post by: Ronny Nilsen on July 15, 2009, 04:10:38 pm
Ctein:
Why 80 Megapixels Just Won't Be Enough... (http://theonlinephotographer.typepad.com/the_online_photographer/2009/02/why-80-megapixels-just-wont-be-enough.html)

Do I hear a 400 megapixel FF DSLR before the race is over?

Ronny
Title: Moore's law for cameras
Post by: Ray on July 16, 2009, 12:15:29 am
Quote from: Nemo
Ctein about diffraction:

http://theonlinephotographer.typepad.com/t...arithmetic.html (http://theonlinephotographer.typepad.com/the_online_photographer/2009/07/reality-is-not-arithmetic.html)

Very interesting.

Yes. It is interesting, and I tend to agree with Ctein that one can't apply simple mathematical formulas to describe reality. Calculations of Airy disc size equated to pixel size don't tell the whole story.

One concern I have is that increasing pixel count on the same size sensor tends to increase total read noise, because more pixels have to be read. Without compensating improvements in other areas - such as increased quantum efficiency of the individual pixels, or new ways of arranging things, such as having all the processing transistors on the reverse side of the CMOS sensor, or dumping the Bayer-type array in favour of one which doesn't filter out any of the light - the disadvantages of increasing pixel count may cancel out the benefits.

Imagine a Foveon type sensor made of meta-materials using nanotechnology such that the layers on the sensor that are sensitive to individual frequency bands (R,G&B) are completely transparent to the other frequencies that are not collected, imposing no loss of efficiency as the photons pass through to be collected on the layer(s) underneath.

I believe the Bayer-type arrangement filters out about half of the light that passes through the lens. That's a whole stop of sensitivity that's been wasted. Current Foveon sensors use materials that allow certain frequency bands to pass through to another layer of silicon, but with nowhere near 100% efficiency. There's considerable absorption by the silicon which results in noise.
Title: Moore's law for cameras
Post by: charleski on July 16, 2009, 03:03:03 am
Quote from: Ray
Yes. It is interesting, and I tend to agree with Ctein that one can't apply simple mathematical formulas to describe reality.
Well of course you can, in that force really does equal mass times acceleration (for example). I think Ctein is railing against the inappropriate use of Occam's razor ('Simplicity=Truth') in practical applications.
Title: Moore's law for cameras
Post by: Ray on July 16, 2009, 06:15:26 am
Quote from: charleski
Well of course you can, in that force really does equal mass times acceleration (for example). I think Ctein is railing against the inappropriate use of Occam's razor ('Simplicity=Truth') in practical applications.
 

Hhmmm! I'm whizzing around the sun at approximately 108,000 km/hour. I weigh about 85 kg. I guess I must be very forceful    .
Title: Moore's law for cameras
Post by: Nemo on July 16, 2009, 08:08:07 am
Quote from: Ray
Yes. It is interesting, and I tend to agree with Ctein that one can't apply simple mathematical formulas to describe reality. Calculations of Airy disc size equated to pixel size don't tell the whole story.

One concern I have is that increasing pixel count on the same size sensor tends to increase total read noise, because more pixels have to be read. Without compensating improvements in other areas - such as increased quantum efficiency of the individual pixels, or new ways of arranging things, such as having all the processing transistors on the reverse side of the CMOS sensor, or dumping the Bayer-type array in favour of one which doesn't filter out any of the light - the disadvantages of increasing pixel count may cancel out the benefits.

Imagine a Foveon type sensor made of meta-materials using nanotechnology such that the layers on the sensor that are sensitive to individual frequency bands (R,G&B) are completely transparent to the other frequencies that are not collected, imposing no loss of efficiency as the photons pass through to be collected on the layer(s) underneath.

I believe the Bayer-type arrangement filters out about half of the light that passes through the lens. That's a whole stop of sensitivity that's been wasted. Current Foveon sensors use materials that allow certain frequency bands to pass through to another layer of silicon, but with nowhere near 100% efficiency. There's considerable absorption by the silicon which results in noise.


There will be a point at which more pixels bring more problems than advantages. Then, new ways of image improvement will become interesting. Foveon-type sensors aren't really competitive right now. Remember when the easiest way to improve microprocessors was to increase MHz. At this stage of the technology the best way of image improvement is increasing the number of pixels, combined with improvements in sensor architecture. Back-illuminated CMOS sensors with sophisticated electronics will be a large step forward. We will see 35mm sensors with 30 or more MP soon. You will be able to use the full resolution potential or, by means of pixel binning, get more "quality" per pixel (dynamic range, noise). RAW images based on multiple-exposure shots will be the norm very soon as well...

Current Foveon sensors need quite large pixels. They would be great competing against Bayer sensors with the same number of photodetectors, but Foveons cannot do this. So it is easier and cheaper to get the same result with a Bayer architecture and more pixels... I think this will change. I don't know when, but it will happen. Then we will have another huge step forward...

The true bottleneck seems to be printing technology... but is photography based on prints any more?
Title: Moore's law for cameras
Post by: Ray on July 16, 2009, 08:13:58 pm
Quote from: Nemo
Back-illuminated CMOS sensors with sophisticated electronics will be a large step forward. We will see 35mm sensors with 30 or more MP soon.

I think a step up from the 21mp of the 5D2 to just 30mp would be too little. 40mp would be better. If such a sensor were back-illuminated to enable the use of larger photodiodes, had no AA filter - which would also reduce costs as well as improve resolution - and had a few panchromatic pixels to further improve low-noise performance, I might not be able to resist buying such a camera, if the price were right.
 
Whatever happened to that Kodak invention where half the pixels of the Bayer-type array were replaced with panchromatic pixels?
Title: Moore's law for cameras
Post by: bjanes on July 16, 2009, 10:25:57 pm
Quote from: Ray
Yes. It is interesting, and I tend to agree with Ctein that one can't apply simple mathematical formulas to describe reality. Calculations of Airy disc size equated to pixel size don't tell the whole story.

With regard to measurement and numbers, Lord Kelvin summed up the situation over 100 years ago:

"In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." [PLA, vol. 1, "Electrical Units of Measurement", 1883-05-03]

Quote from: Ray
One concern I have is that increasing pixel count on the same size sensor tends to increase total read noise, because more pixels have to be read.

Well stated, Ray. One way to reduce read noise is pixel binning, where 4 pixels can be combined into one super-pixel and read out with only one read noise. This can only be done in hardware and until recently has been limited to monochrome sensors. The newest Phase One cameras have Sensor+ (http://www.phaseone.com/Content/p1digitalbacks/Pplusseries/SensorPlus/SensorPlus2.aspx) technology, which extends the process to color. With the 60 MP sensor, one can read out the pixels individually or use binning at the press of a button; with 4:1 binning, one still gets a very usable 15 MP.

Like all MFDBs, the Phase One is CCD, and I do not know if this binning is possible with CMOS. One can downsample in Photoshop, but this is averaging, and one still has 4 read noises rather than one when obtaining the data.

Bill
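
A small sketch of the read-noise arithmetic behind binning (my illustration; the electron counts are made-up example numbers):

    import math

    signal_per_pixel = 100.0  # photoelectrons (illustrative)
    read_noise = 5.0          # electrons per read (illustrative)
    combined_signal = 4 * signal_per_pixel

    # Hardware 4:1 binning: charge from 4 pixels is combined, then read once.
    hw_noise = math.sqrt(combined_signal + read_noise**2)

    # Software downsampling: 4 independent reads summed afterwards, so four
    # read-noise contributions add in quadrature.
    sw_noise = math.sqrt(combined_signal + 4 * read_noise**2)

    print(f"hardware binning SNR:    {combined_signal / hw_noise:.1f}")  # ~19.4
    print(f"software downsample SNR: {combined_signal / sw_noise:.1f}")  # ~17.9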
Title: Moore's law for cameras
Post by: bradleygibson on July 16, 2009, 11:30:01 pm
Quote from: bjanes
With regard to measurement and numbers, Lord Kelvin summed up the situation over 100 years ago:

"In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be." [PLA, vol. 1, "Electrical Units of Measurement", 1883-05-03]

That is a great quote, Bill, thanks--I'd not heard it before.

-Brad
Title: Moore's law for cameras
Post by: Nemo on July 17, 2009, 06:54:31 am
Quote from: Ray
I think a step up from the 21mp of the 5D2 to just 30mp would be too little. 40mp would be better. If such a sensor were back-illuminated to enable the use of larger photodiodes, had no AA filter - which would also reduce costs as well as improve resolution - and had a few panchromatic pixels to further improve low-noise performance, I might not be able to resist buying such a camera, if the price were right.
 
Whatever happened to that Kodak invention where half the pixels of the Bayer-type array were replaced with panchromatic pixels?

There are problems with color interpolation. That design was aimed at small CCDs for phone cameras. In large / low resolution sensors that design can bring severe problems related to color interpolation. Using pixel binning you get more light-gathering efficiency, but you reduce the final image dimensions (final number of pixels). This "panchromatic" design tries to get more light while keeping the image dimensions untouched. That may be good for some applications, and not so good for others.
Title: Moore's law for cameras
Post by: dalethorn on July 17, 2009, 07:39:47 am
Quote from: bradleygibson
That is a great quote, Bill, thanks--I'd not heard it before.
-Brad

"The shortest distance between two points is a straight line - or the line that's straightest under the circumstances." - Henry Kloss
Title: Moore's law for cameras
Post by: Ray on July 17, 2009, 11:31:40 pm
Quote from: Nemo
There are problems with color interpolation. That design was aimed at small CCDs for phone cameras. In large / low resolution sensors that design can bring severe problems related to color interpolation. Using pixel binning you get more light-gathering efficiency, but you reduce the final image dimensions (final number of pixels). This "panchromatic" design tries to get more light while keeping the image dimensions untouched. That may be good for some applications, and not so good for others.

When the design was first announced, the problems of color interpolation were of course raised by many. Kodak countered this with arguments that more sophisticated algorithms would largely take care of it. I believe the arrangement of pixels was such that every panchromatic pixel adjoined - either by an edge or a corner (a square pixel has 8 such neighbours) - at least one of each of the 3 color-filtered pixel types.

If half the total number of pixels on, say, a 5D3 sensor are panchromatic, then a 5D3 image, consisting of double the pixel count of a 5D2 image, would still retain the same amount of color information as a 5D2 image, regardless of any improvement in interpolation algorithms. Is this not the case?
Title: Moore's law for cameras
Post by: Rob C on July 18, 2009, 04:54:44 am
Quote from: Ray
When the design was first announced, the problems of color interpolation were of course raised by many. Kodak countered this with arguments that more sophisticated algorithms would largely take care of it. I believe the arrangement of pixels was such that every panchromatic pixel adjoined - either by an edge or a corner (a square pixel has 8 such neighbours) - at least one of each of the 3 color-filtered pixel types.

If half the total number of pixels on, say, a 5D3 sensor are panchromatic, then a 5D3 image, consisting of double the pixel count of a 5D2 image, would still retain the same amount of color information as a 5D2 image, regardless of any improvement in interpolation algorithms. Is this not the case?




I´m trying hard, Ray, but what the hell are you guys talking about?

Rob C
Title: Moore's law for cameras
Post by: Eric Myrvaagnes on July 18, 2009, 10:12:38 am
Quote from: Rob C
I´m trying hard, Ray, but what the hell are you guys talking about?

Rob C

In plain English, Rob, I think they're trying to say, "It's crackers to slip a rozzer the dropsy in snide." I hope that clarifies it. 
Title: Moore's law for cameras
Post by: Ray on July 18, 2009, 11:47:43 am
Quote from: Rob C
I´m trying hard, Ray, but what the hell are you guys talking about?

Rob C

Why, Rob, we're merely trying to predict the possible benefits of increased pixel count. Some think we're close to the end of the road, and others think there's a way to go.
Title: Moore's law for cameras
Post by: Nemo on July 18, 2009, 02:19:36 pm
Quote from: Ray
When the design was first announced, the problems of color interpolation were of course raised by many. Kodak countered with the argument that more sophisticated algorithms would largely take care of this. I believe the arrangement of pixels was such that every panchromatic pixel adjoined, either by an edge or a corner (a square pixel has 8 such neighbors), at least one of each of the 3 color-filtered pixels.

If half the total number of pixels on, say, a 5D3 sensor were panchromatic, then a 5D3 image, consisting of double the pixel count of a 5D2 image, would still retain the same amount of color information as a 5D2 image, regardless of any improvement in interpolation algorithms. Is this not the case?

Right. That is a different possibility: by increasing the total number of pixels in a "panchromatic" sensor you keep the color information. You have different combinations at hand. Fuji is experimenting with pixel binning, and Ricoh with multiple exposures... Sigma with the Foveons... Fuji has interesting patents on multilayer sensors... Let's see.

Title: Moore's law for cameras
Post by: Nemo on July 19, 2009, 03:35:54 pm
Quote from: Ray
I think a step up from the 21mp of the 5D2 to just 30mp would be too little. 40mp would be better.

I would bet on 30-something MP from Canon, with an improved CMOS architecture. Who knows, but they will jump well over the 20MP mark...
Title: Moore's law for cameras
Post by: samirkharusi on July 20, 2009, 08:10:54 am
Quote from: Nemo
I would bet on 30-something MP from Canon, with an improved CMOS architecture. Who knows, but they will jump well over the 20MP mark...
In high-resolution planetary imaging there is a simple rule of thumb for capturing all the detail that your optical system is capable of delivering: use Nyquist critical sampling, 2 pixels across the Full Width at Half Maximum (FWHM). For small telescopes the FWHM is determined by the diffraction limit (the FWHM of the Airy disc); for larger telescopes it is determined by atmospheric seeing. For diffraction, Nyquist sampling is achieved when the focal ratio (f-number) is roughly 4 times the pixel width in microns. Lens diffraction-limited at f/8? Use pixels that are 2 microns wide. In practical planetary imaging one changes the focal ratio (using tele-extenders) to match one's pixels, rather than the other way around. E.g., the Canon 600mm/4.0L IS lens is almost good enough to be called diffraction-limited on-axis, so to Nyquist-sample it one needs to tele-extend it to operate at f/24 on 6-micron pixels (I achieved f/28 by using a 5x tele-extender + a 1.4x). It compares quite well with an astronomical telescope when shooting Saturn; a comparison is given here:
http://www.samirkharusi.net/televue_canon.html
A discussion of Nyquist sampling in planetary imaging, with examples, is given here:
http://samirkharusi.net/sampling_saturn.html
These principles have long been well established. So the "ultimate" smallest useful pixel size, based purely on diffraction, will be roughly 2 microns for lenses diffraction-limited at f/8. That's around 200 megapixels on a 35mm-format chip. We have a very, very long way to go, and that's for f/8... When pixels get cheap, the rules change to overkill, and overkill for f/8 diffraction-limited optics begins at about 200 megapixels on 35mm format. Will we ever get there? Do people actually "need" 200 megapixels to achieve their desired print sizes? A very few, yes. For most, something under 50 megapixels should be adequate for A4 or A3 prints. Obviously for them, the vast majority, chips smaller than 35mm format, combined with superb, smaller lenses, will make more sense. Will prints continue to be the end-game for consumers? Dunno. Perhaps an HD TV display (2 megapixels, achievable by a camera phone) will be good enough for Joe Public.
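
For anyone who wants to check the arithmetic, here is a quick sketch of the rule of thumb above (the only inputs are the f-number/4 pitch rule and the 24x36mm frame; rough numbers only):

Code:
def critical_pitch_um(f_number):
    """Approximate Nyquist-critical pixel pitch in microns (f-number / 4)."""
    return f_number / 4.0

def frame_megapixels(pitch_um, width_mm=36.0, height_mm=24.0):
    """Pixel count of a width_mm x height_mm frame at the given pitch, in MP."""
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

for n in (4, 8, 16):
    p = critical_pitch_um(n)
    print("f/%d: pitch ~%.1f um -> ~%.0f MP on 35mm format" % (n, p, frame_megapixels(p)))
# f/8 gives ~2 um and ~216 MP, matching the ~200 MP figure above.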
Title: Moore's law for cameras
Post by: bjanes on July 20, 2009, 08:45:41 am
Quote from: samirkharusi
In high-resolution planetary imaging there is a simple rule of thumb for capturing all the detail that your optical system is capable of delivering: use Nyquist critical sampling, 2 pixels across the Full Width at Half Maximum (FWHM). For small telescopes the FWHM is determined by the diffraction limit (the FWHM of the Airy disc); for larger telescopes it is determined by atmospheric seeing. For diffraction, Nyquist sampling is achieved when the focal ratio (f-number) is roughly 4 times the pixel width in microns. Lens diffraction-limited at f/8? Use pixels that are 2 microns wide. In practical planetary imaging one changes the focal ratio (using tele-extenders) to match one's pixels, rather than the other way around. E.g., the Canon 600mm/4.0L IS

I enjoyed reading this excellent analysis. However, for terrestrial photography under field conditions one is often limited by camera shake and focusing error. If you are hand-holding and shooting in less than ideal conditions, how much resolution can be achieved? Noise (photon and read) is also a consideration. 2-micron pixels are used in camera phones and P&S cameras, but not in dSLRs or MF camera backs, where larger pixels yield better compromises among the factors discussed.

Bill
Title: Moore's law for cameras
Post by: feppe on July 20, 2009, 08:58:20 am
Quote from: bjanes
I enjoyed reading this excellent analysis. However, for terrestrial photography under field conditions one is often limited by camera shake and focusing error. If you are hand-holding and shooting in less than ideal conditions, how much resolution can be achieved? Noise (photon and read) is also a consideration. 2-micron pixels are used in camera phones and P&S cameras, but not in dSLRs or MF camera backs, where larger pixels yield better compromises among the factors discussed.

Exactly. And as almost every review of a 20+ megapixel camera claims, we are already near or at the resolving capacity of the current lens generation.

So while a 35mm-format sensor might theoretically support 200 megapixels, the real-world maximum resolution seems to be somewhere between 20 and 40 megapixels with today's tech and real-world limitations. Lens tech in particular has not kept up with Moore's law.
Title: Moore's law for cameras
Post by: Nemo on July 20, 2009, 10:58:40 am
Quote from: samirkharusi
In high-resolution planetary imaging there is a simple rule of thumb for capturing all the detail that your optical system is capable of delivering: use Nyquist critical sampling, 2 pixels across the Full Width at Half Maximum (FWHM). For small telescopes the FWHM is determined by the diffraction limit (the FWHM of the Airy disc); for larger telescopes it is determined by atmospheric seeing. For diffraction, Nyquist sampling is achieved when the focal ratio (f-number) is roughly 4 times the pixel width in microns. Lens diffraction-limited at f/8? Use pixels that are 2 microns wide. In practical planetary imaging one changes the focal ratio (using tele-extenders) to match one's pixels, rather than the other way around. E.g., the Canon 600mm/4.0L IS lens is almost good enough to be called diffraction-limited on-axis, so to Nyquist-sample it one needs to tele-extend it to operate at f/24 on 6-micron pixels (I achieved f/28 by using a 5x tele-extender + a 1.4x). It compares quite well with an astronomical telescope when shooting Saturn; a comparison is given here:
http://www.samirkharusi.net/televue_canon.html
A discussion of Nyquist sampling in planetary imaging, with examples, is given here:
http://samirkharusi.net/sampling_saturn.html
These principles have long been well established. So the "ultimate" smallest useful pixel size, based purely on diffraction, will be roughly 2 microns for lenses diffraction-limited at f/8. That's around 200 megapixels on a 35mm-format chip. We have a very, very long way to go, and that's for f/8... When pixels get cheap, the rules change to overkill, and overkill for f/8 diffraction-limited optics begins at about 200 megapixels on 35mm format. Will we ever get there? Do people actually "need" 200 megapixels to achieve their desired print sizes? A very few, yes. For most, something under 50 megapixels should be adequate for A4 or A3 prints. Obviously for them, the vast majority, chips smaller than 35mm format, combined with superb, smaller lenses, will make more sense. Will prints continue to be the end-game for consumers? Dunno. Perhaps an HD TV display (2 megapixels, achievable by a camera phone) will be good enough for Joe Public.

Photo lenses aren't telescopes. The example based on a Canon 600mm f/4 is good, but most photo lenses aren't telephoto designs. Wide-angle lenses, zooms, macro lenses, etc. put many problems on the lens designer's table, even more so if those retrofocus or vario designs have to be "fast" while meeting size, cost and operation (AF) constraints. On the other hand, the type of detail and the capture device are very important: a Bayer sensor introduces several constraints, and typical low-contrast detail in photographs isn't like bright spots on a dark background. Etc.

A 200MP sensor is a possibility... sometime, in the future. But right now, say in a two-year timeframe, what can we expect? In the photographic industry, I think the 35mm format will bring more resolution to the sensors. The MF makes are the point of reference here. Some time ago 22MP was the exclusive territory of MF cameras, and then Canon jumped into the battle. So I expect competition from the 35mm format in the 33-39MP domain, the current exclusive territory of MF cameras (from Canon at least). Do 50MP or 60MP cameras make any sense? Considering prints, yes, but only for a few professionals. It is a very small market, very, very small. Alternatives to prints? Web? TV? Cinema? Even lower resolution is needed there!

So, there are cost (supply) variables at play, technical considerations (like diffraction), but also demand considerations. For large parts of the market, the professional photographic market (reportage, fashion, advertising)... how much is needed, even allowing a wide margin? A professional buys a Hasselblad 50MP if it makes some difference. So I think there is a near limit due to practical reasons based on demand considerations, not technical reasons. The argument used to be: we can increase pixels at no cost, free, so why not do it? We all had this discussion years ago, but now the situation is different. Currently we have 50-60MP cameras, and 20MP cameras are normal in the prosumer segment (Canon 5D Mark II, Sony A900). So we are talking of further increases... The point is that it makes only a marginal difference to the "product" professionals sell to their clients (photos), so the industry will look for alternative ways of providing better tools, for a price. Maybe not more pixels, but the same number of pixels with more quality, more detail per pixel (the Bayer mosaic!), etc.
Title: Moore's law for cameras
Post by: BJL on July 20, 2009, 11:25:12 am
I agree with much of what Nathan Myhrvold says, in terms of reasons for having sensors that go significantly beyond the resolution limits of most or all lenses.

But I see no basis for this claim:
"Over time those sensors will get much cheaper and that will drop camera prices. ... A second effect is that Moore’s law also makes physically larger sensors cheaper."
I can find no evidence for this persistent claim that technological progress is driving a substantial downward trend in the price of making a sensor of a given size, like 24x36mm or 42x56mm. This is especially so for devices larger than what all recent fabrication equipment (steppers) is optimized for. All the many stepper models introduced in the last five years or more have a maximum field size of at most 26x33mm (most have exactly that field size), and this is too small for making sensors in the traditional "film formats" except with stitching. Of the two steppers with field size larger than 26x33mm ever offered, one is discontinued (Nikon made it) and the other is an old Canon model with a minimum feature size of 500nm (cf. the new 34nm process!). This 500nm is too large for the pixel sizes of modern SLR sensors, since for CMOS sensors the minimum feature size needs to be about 1/20th or less of the cell width. Kodak might use that stepper to make its 50x50mm KAF-4301 and KAF-4320 sensors, but those are 4MP sensors with huge 24-micron pixels, for medical and scientific imaging.
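
To make the 1/20th rule concrete, a quick sketch (the 6.4-micron pitch is an assumed example of a modern 35mm-format SLR pixel; the 500nm and 24-micron figures come from the paragraph above):

Code:
STEPPER_MIN_FEATURE_NM = 500.0  # the old large-field stepper's minimum feature size

def max_feature_nm(pixel_pitch_um):
    """Largest usable process feature for a given pixel pitch (1/20th rule of thumb)."""
    return pixel_pitch_um * 1000.0 / 20.0

for pitch_um, label in [(6.4, "modern SLR pixel"), (24.0, "KAF-4320-class pixel")]:
    need = max_feature_nm(pitch_um)
    verdict = "fits" if need >= STEPPER_MIN_FEATURE_NM else "does not fit"
    print("%s (%.1f um): needs <= %.0f nm features -> %s a 500 nm process"
          % (label, pitch_um, need, verdict))
# 6.4 um pixels need ~320 nm features, too fine for the 500 nm stepper;
# 24 um pixels need only ~1200 nm, which is why that stepper can make them.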

Increases in sales volume and improved economies of scale have probably helped bring large sensor prices down compared to five or ten years ago, but even that trend seems to have slowed or bottomed out three years or more ago. The Canon 5D was $2700 in the US by early 2006, the Canon 5DMkII is no cheaper now, and that despite the added competition from Nikon and Sony.


P.S. I also doubt that there is much interest in images with the combination of 100MP+ imagery and the very low DOF coming from the large apertures needed to control diffraction. F/4 in 35mm format enlarged enough and viewed closely enough to see the details of a 100MP image will have far stronger visible OOF effects and far less DOF than F/4 viewed "normally".
Title: Moore's law for cameras
Post by: ErikKaffehr on July 20, 2009, 02:38:22 pm
Hi,

My take is that price per square inch may go down, but only slowly. We have essentially seen this with prices dropping on both APS-C and full-frame sensor based cameras. I got the impression that an APS-C size sensor cost a fortune six years ago and costs just a couple of hundred dollars today, but I don't see price/performance changing at the rate we associate with Moore's law, for the very same reasons you mention.

In my humble view we still get gains from shrinking the sensel sizes, but they may be diminishing in the sense that the rate of improvement is slowing down. We may see a trend to smaller sensor sizes like APS-C and 4/3. One problem with APS-C is that there are very few lenses really optimized for it, that is, optimally corrected at large apertures; there are many decent lenses but very few really excellent ones. Olympus actually seems to make excellent designs for their 4/3 cameras, but it's said that their low-pass filtering is a bit too aggressive.

Another observation is that we may also need better photographers. Utilizing the performance hiding in all those multi-megapixel SLRs takes some craftsmanship.

Best regards
Erik

Quote from: BJL
I agree with much of what Nathan Myhrvold says, in terms of reasons for having sensors that go significantly beyond the resolution limits of most or all lenses.

But I see no basis for this claim:
"Over time those sensors will get much cheaper and that will drop camera prices. ... A second effect is that Moore’s law also makes physically larger sensors cheaper."
I can find no evidence for this persistent claim that technological progress is driving a substantial downward trend in the price of making a sensor of a given size, like 24x36mm or 42x56mm. This is especially so for devices larger than what all recent fabrication equipment (steppers) is optimized for. All the many stepper models introduced in the last five years or more have a maximum field size of at most 26x33mm (most have exactly that field size), and this is too small for making sensors in the traditional "film formats" except with stitching. Of the two steppers with field size larger than 26x33mm ever offered, one is discontinued (Nikon made it) and the other is an old Canon model with a minimum feature size of 500nm (cf. the new 34nm process!). This 500nm is too large for the pixel sizes of modern SLR sensors, since for CMOS sensors the minimum feature size needs to be about 1/20th or less of the cell width. Kodak might use that stepper to make its 50x50mm KAF-4301 and KAF-4320 sensors, but those are 4MP sensors with huge 24-micron pixels, for medical and scientific imaging.

Increases in sales volume and improved economies of scale have probably helped bring large sensor prices down compared to five or ten years ago, but even that trend seems to have slowed or bottomed out three years or more ago. The Canon 5D was $2700 in the US by early 2006, the Canon 5DMkII is no cheaper now, and that despite the added competition from Nikon and Sony.


P.S. I also doubt that there is much interest in images with the combination of 100MP+ imagery and the very low DOF coming from the large apertures needed to control diffraction. F/4 in 35mm format enlarged enough and viewed closely enough to see the details of a 100MP image will have far stronger visible OOF effects and far less DOF than F/4 viewed "normally".
Title: Moore's law for cameras
Post by: cmi on July 20, 2009, 03:14:28 pm
I would like to add that the game could change with the advent of much more powerful processors and better realtime data-processing pipelines. (There is stuff in the works; I just can't find the links or remember the names.) When you can acquire, store, and, most importantly, process a 1000MP image in the blink of an eye and handle it like a 200KB JPEG today, of course everybody would use it.
Title: Moore's law for cameras
Post by: Alan Goldhammer on July 20, 2009, 03:21:45 pm
I don't think we can predict cost with any reliability. When the first PC was introduced over two decades ago, I observed that a good solid desktop unit cost about $2500-3000. For several years after that, new, improved models came out (more memory, 20 MB hard drives, etc.) but the cost was still in that neighborhood. All of a sudden there were tremendous leaps in technology (when was the last time you had a hard drive fail?) and costs shrank dramatically. Now we talk about business desktop models in the $400-600 range with much more power, etc. I suspect the same thing will happen in the sensor arena.

A more pertinent question to ask is what it will mean for photographers. My Nikon D300 gives wonderful results for the work I do. I don't print larger than 10.5 x 16 and the clarity of these images is outstanding. If I were into panoramic printing I might want more out of a sensor. Lens design, as everyone has noted, is limiting, and to me the major problem is not being able to preserve quality when stopping down past f/8. This limits control of depth of field when one wants it. That limit aside, we are presented with great hardware and software technologies that allow us to go much further than in the days of wet-chemistry photography.
Title: Moore's law for cameras
Post by: ErikKaffehr on July 20, 2009, 04:14:57 pm
Hi,

"when was the last time you had a hard drive fail?"

Three days ago. Actually I have had a lot of disk failures, two or three a year. The most recent failure was a LaCie external disk which has seen very little use. On the other hand I have had very few disk crashes on OEM disks. I guess that computer manufacturers buy disks from series that are "proven" and tested. Keeping temperatures down is probably also very important. I had a RAID server running on six 250 GByte disks without a single failure, but I had three 5" fans in that box; the temperature inside was always around 35 degrees C.

Best regards
Erik



Quote from: Alan Goldhammer
I don't think we can predict cost with any reliability. When the first PC was introduced over two decades ago, I observed that a good solid desktop unit cost about $2500-3000. For several years after that, new, improved models came out (more memory, 20 MB hard drives, etc.) but the cost was still in that neighborhood. All of a sudden there were tremendous leaps in technology (when was the last time you had a hard drive fail?) and costs shrank dramatically. Now we talk about business desktop models in the $400-600 range with much more power, etc. I suspect the same thing will happen in the sensor arena. A more pertinent question to ask is what it will mean for photographers. My Nikon D300 gives wonderful results for the work I do. I don't print larger than 10.5 x 16 and the clarity of these images is outstanding. If I were into panoramic printing I might want more out of a sensor. Lens design, as everyone has noted, is limiting, and to me the major problem is not being able to preserve quality when stopping down past f/8. This limits control of depth of field when one wants it. That limit aside, we are presented with great hardware and software technologies that allow us to go much further than in the days of wet-chemistry photography.
Title: Moore's law for cameras
Post by: riverpeak on July 21, 2009, 12:51:19 am
I guess I'll add my 2 cents worth.  

I don't think that we have yet hit the limit in "Moore's law" for cameras with respect to pixel density in sensors.  We'll find a use for the extra pixel density, even if it doesn't necessarily increase the effective resolution of the final pictures.  

So it's not just about getting higher-resolution pictures. One very interesting and promising technology is the "plenoptic" camera, which I think could become one of the biggest "killer-app" features in future digital cameras if they get it to work. Such cameras will benefit most from high pixel density sensors; in fact, a 100Mpixel sensor would probably be considered an "enabling technology". A plenoptic camera, as I understand it, allows the user to set a picture's focus AFTER the picture is taken, in post-processing. Pictures would generally be taken at or near maximum aperture (like f2 to f4), which is less affected by diffraction at the higher pixel density (though diffraction is a problem that still doesn't go away). Once the picture is taken, the user can then adjust the focus and depth of field in post-processing. So here we may have a perfectly good new use for a 100Mpixel sensor on a DSLR.

Here are some links that describe the plenoptic camera. Most of what I have learned about these cameras comes from a paper published by researchers at Stanford University.

http://www.digitalcamerainfo.com/content/Stanford-Refocusing-Camera-to-Be-Commercialized.htm
http://www.refocusimaging.com
http://graphics.stanford.edu/papers/lfcamera/

If you want to really get technical, you can read the technical report:  http://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf

I won't pretend to understand the real details of this new technology, but a point I'd like to make is that with a 100Mpixel camera, the final picture may not be a 100Mpixel image; it may be something significantly less, like 10Mpixels (or less). The extra sensor pixels would not give the user more usable resolution, but the ability to adjust focus and depth of field after taking the picture. The picture would still have to be focused, at capture time, to something close to what the photographer wanted. But this would be pretty awesome for sports photography, where one could set precise focus on the stitching of a baseball, or a player's eyes, or both, at the photographer's discretion, after the fact.
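
To put rough numbers on that trade-off (the directions-per-microlens values below are assumptions for illustration; roughly 14x14 is the per-microlens pixel count I recall from the Stanford paper):

Code:
def plenoptic_output_mp(sensor_mp, n_dir):
    """Approximate output MP when each microlens covers n_dir x n_dir sensor pixels."""
    return sensor_mp / (n_dir * n_dir)

sensor_mp = 100.0
for n in (3, 7, 14):
    print("%dx%d directions: ~%.1f MP refocusable output"
          % (n, n, plenoptic_output_mp(sensor_mp, n)))
# 3x3 -> ~11 MP, 7x7 -> ~2 MP, 14x14 -> ~0.5 MP: more refocusing range
# always comes at the cost of spatial resolution.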
Title: Moore's law for cameras
Post by: Wayne Fox on July 21, 2009, 03:15:56 am
Quote from: Alan Goldhammer
Lens design, as everyone has noted, is limiting, and to me the major problem is not being able to preserve quality when stopping down past f/8.

Perhaps with current lens designs, anyway.  There are other technologies being researched, including the ability to image light without optics.  Currently not practical, but who knows what the future holds.
Title: Moore's law for cameras
Post by: cmi on July 21, 2009, 05:03:26 am
Quote from: riverpeak
If you want to really get technical, you can read the technical report:  http://graphics.stanford.edu/papers/lfcamera/lfcamera-150dpi.pdf

I didn't fully understand it either, and we are getting more OT, but essentially they place a microlens array in front of the sensor, at a specific distance determined by the maximum aperture. From a distance the resulting image appears the same as a normal image, but if you zoom in you see tiny image circles on a black background next to each other, like an insect's eye. This image holds information about the direction of the light rays, thereby permitting all sorts of calculations, e.g. refocusing or parallax. From what they write, this is currently the most flexible technology of its kind.
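
As a toy illustration of the refocusing those image circles permit, here is a minimal shift-and-add sketch (the (U, V, H, W) layout and the alpha parameter are my assumptions for illustration, not the paper's exact formulation):

Code:
import numpy as np

def refocus(views, alpha):
    """views: sub-aperture images of shape (U, V, H, W); alpha sets the focal plane."""
    U, V, H, W = views.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its position in the aperture
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            out += np.roll(views[u, v], shift=(du, dv), axis=(0, 1))
    return out / (U * V)

lf = np.random.rand(5, 5, 64, 64)   # toy light field: 5x5 directions, 64x64 pixels
refocused = refocus(lf, alpha=1.5)  # vary alpha to move the synthetic focal plane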
Title: Moore's law for cameras
Post by: Nemo on July 21, 2009, 07:26:12 am
The only practical use of a 100MP Bayer (or similar) mass-produced camera I can think of is this: you can get 25MP RAW files with full color information for each final "pixel". For particular applications you could enable the 100MP Bayer resolution, or you may get some intermediate resolutions by interpolation. The question is whether a 25MP Foveon-type sensor would be a better solution (from a market point of view, including costs and marketing, related to output and workflow). When will the MP growth stop and more efficient new designs start to flourish? Will it happen? I don't know. Maybe mosaic-based designs (Bayer or slightly different) are the simplest and most efficient for the current state of the technology and the foreseeable immediate future.
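
A sketch of that 100MP-to-25MP idea (the RGGB cell order is an assumption; real mosaics vary):

Code:
import numpy as np

def bayer_to_fullcolor(raw):
    """Collapse each 2x2 RGGB cell into one full-color pixel, no interpolation."""
    r = raw[0::2, 0::2].astype(float)
    g = (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(float)
    return np.stack([r, g, b], axis=-1)  # (H/2, W/2, 3): quarter the pixel count

mosaic = np.random.randint(0, 4096, size=(8, 8))  # toy 12-bit RAW mosaic
rgb = bayer_to_fullcolor(mosaic)
print(rgb.shape)  # (4, 4, 3)

Every output pixel then carries measured rather than interpolated R, G and B, which is the "full color information for each final pixel" trade-off described above.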
Title: Moore's law for cameras
Post by: Nemo on July 21, 2009, 12:53:59 pm
Quote from: Alan Goldhammer
Lens design, as everyone has noted, is limiting, and to me the major problem is not being able to preserve quality when stopping down past f/8.

In fact, most produced lenses are zooms, wide-angles and cheap kit lenses for inexpensive cameras. It is very difficult to minimize aberrations under those constraints (size, price...). You can get a diffraction-limited lens at f/4 on axis, but take a look at the price of a Leica lens or a fine telephoto lens from Canon.

You have a good example in the new Zuiko lenses for the micro 4/3 Olympus camera. The 25mm f/2.8 is reported to be not as good as expected. It isn't a superfast lens, the format is small... so what is the problem? I think cost/price may have been a serious constraint in the design of the lens, besides the size and maybe the contrast-based AF of the camera (constraints on the total weight of the lens elements?). You can design and manufacture a much better lens than the Zuiko, but it is not easy if you work under cost limits, size limits, etc. Mass-produced lenses and cameras have a different set of constraints than cameras for special or even professional applications. Diffraction problems can be a limiting factor for most of the cameras produced.
Title: Moore's law for cameras
Post by: barryfitzgerald on July 21, 2009, 01:26:28 pm
And the debate rages on! I just read the article by Nathan Myhrvold.
Some folks seem to forget that Moore was an Intel guy, and digital photography was not what he based his paper on: computer processors yes, camera sensors no.

At this stage I think the "yes it does matter" debate was far more interesting, because at least it was related to photography and real images!
Also, it's sad to see image quality yet again judged purely on resolution, with nothing ever said about tonality or colour, etc.

Keep playing away, folks, the nerds are having fun!