
Author Topic: Why Moore’s Law Might Apply to Digital Photography  (Read 2998 times)

afalco

  • Newbie
  • *
  • Offline
  • Posts: 16
Why Moore’s Law Might Apply to Digital Photography
« on: July 13, 2009, 02:03:44 pm »

First of all I have to say that everything Ray Maxwell says about the diffraction limit and depth of field is true. I don't expect to see these obstacles eliminated any time soon, so increasing sensor resolution indefinitely is meaningless today. Still, there are some possibilities that might help us overcome these limitations, and all of these methods would require higher-resolution sensors.

The diffraction limit depends only on the aperture, the effective pixel spacing and the wavelength of light; it does not depend on the focal length of the lens. It is well known that the wavelength of light limits the size of the features that can be distinguished, but chip makers have overcome this limit. From Wikipedia: "Matsushita and Intel started mass producing 45 nm chips in late 2007... Many critical feature sizes are smaller than the wavelength of light used for lithography, i.e., 193 nm and/or 248 nm. A variety of techniques, such as larger lenses, are used to make sub-wavelength features." This means that sub-wavelength resolution is theoretically, and in some cases practically, possible. There are also new kinds of very special optical materials with a negative refractive index that in principle make diffractionless optics possible. But all of this would require more pixels, more processing power for the in-camera processors, and even some modification of the lenses themselves.
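
To put some rough numbers on the diffraction limit, here is a back-of-the-envelope sketch of my own (Python, using the usual Airy-disc approximation d ≈ 2.44·λ·N; the 6.4 µm pitch is just an example I picked, roughly a 21 MP full-frame sensor):

Code:
# Back-of-the-envelope diffraction check (assumes the standard Airy-disc
# approximation; real sensors also have microlenses, AA filters, etc.)

def airy_disc_diameter_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """First-minimum Airy disc diameter in micrometres: d ~= 2.44 * lambda * N."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

def diffraction_limited(f_number: float, pixel_pitch_um: float) -> bool:
    """Crude test: a blur disc wider than roughly two pixels means diffraction dominates."""
    return airy_disc_diameter_um(f_number) > 2.0 * pixel_pitch_um

if __name__ == "__main__":
    for n in (2.8, 5.6, 8, 11, 16, 22):
        d = airy_disc_diameter_um(n)
        print(f"f/{n}: Airy disc ~ {d:.1f} um, "
              f"limited at 6.4 um pitch: {diffraction_limited(n, 6.4)}")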

The depth of field, which depends on the focal length and the aperture used, limits the part of a picture that can be sharp: longer focal lengths result in a narrower DOF. But here's a paper on how this limitation may be overcome: Phase Plate to Extend the Depth of Field of Incoherent Hybrid Imaging Systems. In it the authors describe a method that increases the depth of field of an optical system by an order of magnitude beyond the Hopkins defocus criterion. To apply the method, many source pixels must be combined to get one target pixel.
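
For comparison, here is a small sketch of the textbook thin-lens DOF formulas (again my own illustration, nothing from the paper above; the 0.030 mm circle of confusion is just the usual full-frame convention), which shows how fast DOF collapses with focal length:

Code:
# Classic near/far DOF limits via the hyperfocal distance (thin-lens
# approximation; 0.030 mm circle of confusion is an assumed convention).

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float = 0.030) -> float:
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_mm(focal_mm: float, f_number: float, subject_mm: float, coc_mm: float = 0.030):
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm) if subject_mm < h else float("inf")
    return near, far

if __name__ == "__main__":
    for focal in (24, 50, 100, 200):
        near, far = dof_mm(focal, 8.0, 3000.0)   # subject at 3 m, f/8
        far_txt = "infinity" if far == float("inf") else f"{far / 1000:.2f} m"
        print(f"{focal} mm at f/8: sharp from {near / 1000:.2f} m to {far_txt}")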

If these two advances can be combined, there is hope yet for us photographers. But don't hold your breath!
Logged

fennario

  • Jr. Member
  • **
  • Offline
  • Posts: 61
Why Moore’s Law Might Apply to Digital Photography
« Reply #1 on: July 13, 2009, 11:07:41 pm »

Interesting article; however, I feel that while the absolute resolution threshold may be near, the next level (as we are seeing) is increased color and luminance fidelity/sensitivity.
Logged

afalco

  • Newbie
  • *
  • Offline
  • Posts: 16
Why Moore’s Law Might Apply to Digital Photography
« Reply #2 on: July 14, 2009, 02:01:48 pm »

This is also what I'm hoping for. But thinking about pixel density a little more, I found a very good use for many more megapixels. Most cameras today use a Bayer sensor array. If we replace every pixel with three smaller ones and put a filter of one of the three primary colors in front of each, then although it will not create more resolution, we can get rid of the Bayer mosaic and color interpolation would become much simpler. The effective resolution and "pixel count" remain the same, but color accuracy may be much higher. I think I will patent this idea...
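
As a toy sketch of what I mean (purely my own illustration, assuming each output pixel is read out as three side-by-side colour-filtered sub-pixels, not any real sensor layout):

Code:
import numpy as np

# Toy illustration: each output pixel is built from three colour-filtered
# sub-pixels sitting side by side, so no Bayer demosaicking is needed --
# the samples only have to be regrouped.

def subpixels_to_rgb(raw: np.ndarray) -> np.ndarray:
    """raw has shape (H, 3*W): columns repeat R, G, B for each output pixel.
    Returns an (H, W, 3) image with full colour at every pixel."""
    h, w3 = raw.shape
    assert w3 % 3 == 0, "width must be a multiple of 3 sub-pixels"
    return raw.reshape(h, w3 // 3, 3)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.random((4, 12))          # 4 rows, 4 output pixels x 3 sub-pixels
    rgb = subpixels_to_rgb(raw)
    print(rgb.shape)                   # (4, 4, 3): full RGB, no interpolation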
Logged

neil74

  • Newbie
  • *
  • Offline
  • Posts: 25
Why Moore’s Law Might Apply to Digital Photography
« Reply #3 on: July 14, 2009, 05:51:57 pm »

For landscape work I can see focus stacking becoming the route to maximizing resolution. The likes of the D3x are already diffraction limited now, and you may be better off shooting 3 or 4 frames at f/5.6 and blending them in Helicon or CS4.
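
For the curious, the blending step boils down to something like this very naive sketch (my own, NumPy only, with none of the alignment and selection-map refinements that Helicon or CS4 actually do): per pixel, keep the frame that looks sharpest.

Code:
import numpy as np

# Naive focus-stack merge: for every pixel, keep the value from whichever
# frame has the strongest local gradient (i.e. looks sharpest there).

def sharpness_map(img: np.ndarray) -> np.ndarray:
    """Crude per-pixel sharpness: magnitude of the image gradient (grayscale input)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def focus_stack(frames: list) -> np.ndarray:
    """frames: aligned grayscale images of identical shape."""
    stack = np.stack(frames)                       # (N, H, W)
    sharp = np.stack([sharpness_map(f) for f in frames])
    best = np.argmax(sharp, axis=0)                # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = [rng.random((100, 100)) for _ in range(4)]   # stand-ins for real captures
    print(focus_stack(frames).shape)               # (100, 100)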
Logged

dreed

  • Sr. Member
  • ****
  • Offline
  • Posts: 1715
Why Moore’s Law Might Apply to Digital Photography
« Reply #4 on: July 17, 2009, 11:32:50 am »

Quote from: afalco
This is also what I'm hoping for. But thinking about pixel density a little more, I found a very good use for many more megapixels. Most cameras today use a Bayer sensor array. If we replace every pixel with three smaller ones and put a filter of one of the three primary colors in front of each, then although it will not create more resolution, we can get rid of the Bayer mosaic and color interpolation would become much simpler. The effective resolution and "pixel count" remain the same, but color accuracy may be much higher. I think I will patent this idea...

What comes to mind when I read this is this...

If I have a point source of light and it produces a single "Airy disc" on the sensor, and that disc is the same size as (or smaller than) a pixel, then either it gets lost or its colour cannot be accurately recorded. Imagine it was a violet dot that lands on a green pixel: you can't ever know what colour it really was.

Now if I have a 100 MP 35mm sensor (twice the vertical resolution of a 21-25 MP camera), then I should be able to construct a perfectly sharp 21 MP picture - perhaps with better accuracy than we can get today.
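
Something like this toy sketch is what I have in mind (a made-up illustration assuming a standard RGGB Bayer layout and plain 2x2 binning, not how any real camera does it):

Code:
import numpy as np

# Toy 2x2 "binning" of an RGGB Bayer mosaic: every 2x2 block already
# contains one R, two G and one B sample, so each block can become one
# full-colour output pixel with no demosaicking at all.

def bin_rggb(raw: np.ndarray) -> np.ndarray:
    """raw: (H, W) mosaic with even H and W, RGGB layout.
    Returns an (H//2, W//2, 3) RGB image at a quarter of the pixel count."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, (g1 + g2) / 2.0, b])

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mosaic = rng.random((8, 12))       # stand-in for a raw Bayer capture
    rgb = bin_rggb(mosaic)
    print(rgb.shape)                   # (4, 6, 3)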

But maybe I'm missing something?

Logged

Chris Stomberg

  • Newbie
  • *
  • Offline
  • Posts: 6
Why Moore’s Law Might Apply to Digital Photography
« Reply #5 on: July 22, 2009, 02:54:21 pm »

I've been thinking a bunch about this over the last several days, and I think there are perhaps many "parallels" with the computer industry to be explored.

A few years back, personal computers hit a performance wall. After years of marketing ever-higher MHz and then GHz ratings as metrics for CPU performance, it just stopped being a relevant comparison. A host of other performance bottlenecks became the limiters (memory latency, disk I/O, bus bandwidth, etc.). As one of those types who has actually bought CPUs from time to time, it is a pretty amazing thing to me that the range of advertised clock speeds has barely changed since 2005 - practically eons in computer time. But really this is just a manifestation of how rapidly the focus shifted to other areas of progress - particularly the development of chips that feature multiple processing cores operating in parallel.

In the mid-1990s, being able to do computations in parallel meant: 1) having access to a fancy parallel computer like the Cray T3E (think supercomputer center at a large university), 2) being capable of programming in either C or Fortran, and 3) having a problem that was actually amenable to the complete rethinking that parallel computation required. It was hard, and not many people did it, but the results were amazing - real-time weather simulation, modeling atomic explosions, proteomic modeling, etc. Fast forward to today and, heck, even my laptop has multiple cores. And it helps when I do sharpening in Lightroom.

My point is that if you look around at the technology in photography today, it is clear that a similar process is well under way. Stacking and stitching shots is a pretty useful way to knock your way through many of the barriers that single-chip photography presents. For example, you can pack in almost as many pixels as you want (depending on your patience and the focal range of your lens kit). By the way, isn't this one way around the diffraction limit: you just bump up the magnification factor and increase the virtual size of the sensor? You can also make focus as shallow or deep as you want it. You can boost dynamic range by multiple stops. And, of course, you can pick your field of view on the fly.
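
To put a rough number on "increasing the virtual size of the sensor", here is a back-of-the-envelope sketch of my own (the frame counts and overlap are made up for illustration):

Code:
# Rough pixel budget for a stitched panorama: an r x c grid of frames,
# each of frame_mp megapixels, with a fractional overlap between
# neighbours. All numbers below are invented for illustration.

def pano_megapixels(rows: int, cols: int, frame_mp: float, overlap: float = 0.3) -> float:
    """Approximate output size: each frame contributes (1 - overlap) of its
    width and height (outer edges ignored for simplicity)."""
    effective_frames = rows * cols * (1.0 - overlap) ** 2
    return effective_frames * frame_mp

if __name__ == "__main__":
    # e.g. a 3 x 5 grid shot on an 8 MP camera like an old 350D
    print(f"~{pano_megapixels(3, 5, 8.0):.0f} MP stitched output")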

Practicing this dark art, though, reminds me a lot of the early and middle years of development in parallel processing. It has become really easy for the technically inclined, and it can produce mind-numbingly good results when it works. And all of these things can be had with cheap gear too: a workable pano head costs under $100, and the software starts at free (though the good stuff will set you back a few hundred). My trusty old Canon 350D seems to work fine in this application.

But then there are the limitations. Forget about moving subjects (unless you are really creative and plan ahead). Getting the captures right can be hard too - do any cameras out there have focus bracketing? Even just remembering to get all the combinations can be a challenge: DOF x DR x X x Y gets to be a lot to keep track of. And it can take real time and patience - though what's new, right? Also, though you no longer have to master the secret handshakes of Panotools, you still have to know what you are doing. Oh, and there's one other investment - processing time (or a really fast computer) to make the stacks and stitches happen.

Of course some of the limitations are not really limitations, but the challenges of figuring out what to do with all the DOF/DR/FOV that you get to work with. Getting something that doesn't look bizarre out of HDR is a real trick, and just moving a modest multi-hundred-megabyte HDR panoramic file around on a computer can be a chore. The creative opportunities are significant, but there's a price.

It's all really promising, but it's also clear there's still a ton of room for development. For example, there's more natural convergence on the software end of things waiting to happen (though this seems to be progressing quickly). And on the hardware end of things too - what could be done? What about a programmable head/camera combination that could walk through the combinations automatically? What if that could do a quick snap and stitch to preview your composition? Is there a workaround for the need to swing the lens around the nodal point such that multiple lenses/bodies could be triggered simultaneously?  Could multiple cheap chips/lenses be used simultaneously to break the time barrier that currently exists - i.e. really go parallel? Etc. etc.  And, probably not all photographic problems can be solved with these tools - even with more development.

The economics of multiple-sensor photography bear consideration in this equation. It's just a ton cheaper to go parallel with your existing camera than to invest in a medium-format back, and that will probably not change. My understanding is that what has made modern computer processors so cheap is that increasing transistor density has mainly delivered more good chips per wafer. Building big sensors will likely always be more expensive, because covering a larger wafer area with one chip increases the chance of pulling in enough defects to result in a failed unit. If you cover the same area with eight chips and a defect lands on only one of them, you still have seven good chips and much less waste.
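
One way to put numbers on the yield argument is a simple Poisson defect model (a sketch of my own; the defect density below is invented for illustration, not a figure from any real fab):

Code:
import math

# Simple Poisson yield model: the probability that a die of area A (cm^2)
# has zero fatal defects at defect density D per cm^2 is exp(-D * A).

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    return math.exp(-defects_per_cm2 * area_cm2)

if __name__ == "__main__":
    d = 0.3                                  # hypothetical defects per cm^2
    full_frame = 2.4 * 3.6                   # one 24 x 36 mm sensor, ~8.6 cm^2
    small_die = full_frame / 8               # same area split into 8 small chips

    print(f"one big chip:  {die_yield(full_frame, d):.0%} of dies are good")
    print(f"eight small:   {die_yield(small_die, d):.0%} each, "
          f"so on average {8 * die_yield(small_die, d):.1f} of the 8 survive")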

I suppose that if you could figure out how to build a camera using an array of small chips, then it could be quite cheap.

Interesting stuff - anyone else out there have thoughts like this?

Chris

Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Why Moore’s Law Might Apply to Digital Photography
« Reply #6 on: July 22, 2009, 03:02:10 pm »

Hi,

I sort of feel that we need to think about some more down-to-earth things like focusing accuracy, alignment of lens and sensor, camera vibrations, and so on. The best lenses today probably match current sensors quite well. But we normally don't use the best lenses or the best tripods, and we don't focus with a microscope.

Best regards
Erik Kaffehr
Logged
Erik Kaffehr
 

Slough

  • Guest
Why Moore’s Law Might Apply to Digital Photography
« Reply #7 on: July 23, 2009, 04:47:56 pm »

I thought there were some reasonable points in the 'essay' and the follow-up, though I don't think they say anything particularly interesting or profound. However, in the follow-up I read the following:

"This became hugely controversial because it challenged the conventional wisdom that the sharpest pictures come from stopping down to f/22 or even f/32"

Surely only someone with no knowledge of the physics of optics would hold such a stupid belief. Maybe by 'conventional' he meant 'stoopid' (sic). Or maybe that is the sort of belief common on dpreview ...
Logged