
Author Topic: High-Quality Computational Imaging Through Simple Lenses  (Read 10300 times)

PierreVandevenne

  • Sr. Member
  • ****
  • Offline
  • Posts: 512
    • http://www.datarescue.com/life
High-Quality Computational Imaging Through Simple Lenses
« on: October 01, 2013, 12:43:14 pm »

Sorry if this has already been posted here, but I find this quite impressive.

http://www.cs.ubc.ca/labs/imager/tr/2013/SimpleLensImaging/

Requires pre-computed PSFs though, but still...
Logged

Isaac

  • Sr. Member
  • ****
  • Offline
  • Posts: 3123
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #1 on: October 01, 2013, 12:56:44 pm »

Gosh!
Logged

BernardLanguillier

  • Sr. Member
  • ****
  • Offline
  • Posts: 13983
    • http://www.flickr.com/photos/bernardlanguillier/sets/
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #2 on: October 01, 2013, 09:31:18 pm »

Impressive!

Cheers,
Bernard

xpatUSA

  • Sr. Member
  • ****
  • Offline
  • Posts: 390
    • Blog
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #3 on: October 02, 2013, 12:08:32 am »

Impressive indeed!
Logged
best regards,

Ted

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #4 on: October 02, 2013, 04:21:56 am »

Quote from: PierreVandevenne on October 01, 2013, 12:43:14 pm
Sorry if this has already been posted here, but I find this quite impressive.

http://www.cs.ubc.ca/labs/imager/tr/2013/SimpleLensImaging/

Requires pre-computed PSFs though, but still...

Hi Pierre,

Yes, we're getting there, step by step. They clearly demonstrate that spatially variant PSFs are required for the best results. Their cross-channel adjustment of course benefits poorly corrected lenses more than decent ones, but even those will benefit somewhat.

Indeed, a remaining issue is the required non-blind estimation of the PSFs; one would preferably be able to correct an image based on image content alone (but that will fail on some image content). However, research in that field is also making progress, and we may see a combination of (optional) user-generated calibration and manufacturer-generated generic calibration before too long.
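
For those wondering what "non-blind" buys you in practice: once a PSF has been measured, even a plain single-channel Wiener filter can undo a fair amount of blur. A minimal sketch in Python/NumPy (the nsr noise-to-signal constant is an assumed regularization parameter; the paper's actual method uses a far more sophisticated cross-channel prior, so this is only the classical baseline):

[code]
import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-3):
    """Non-blind Wiener deconvolution of one float-valued channel
    with a measured PSF (psf assumed normalised to sum to 1)."""
    # Embed the PSF in an image-sized array and roll its centre to
    # the origin so the FFT phases line up with the image.
    kernel = np.zeros(image.shape)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    H = np.fft.fft2(kernel)
    G = np.fft.fft2(image)
    # conj(H) / (|H|^2 + nsr) inverts the blur while damping the
    # frequencies where the lens transferred almost no energy.
    return np.real(np.fft.ifft2(G * np.conj(H) / (np.abs(H) ** 2 + nsr)))
[/code]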

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

hjulenissen

  • Sr. Member
  • ****
  • Offline
  • Posts: 2051
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #5 on: October 02, 2013, 05:16:35 am »

I believe that the arrival of good, practical deconvolution/lens correction could alter the lens design process. Instead of optimizing for (among other things) "minimum PSF", designers could optimize for "minimum PSF after software correction".

Perhaps novel lens arrangements can lead to PSFs that are not pleasing in themselves, but that are relatively easily corrected (e.g. no deep spectral zeros).

For this to really fly, you need a good sensor that "oversamples" the spatial information and that provides sufficient SNR.
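
To make the "deep spectral zeros" point concrete, here is a toy NumPy experiment (all numbers made up): a defocus-style disc PSF has an MTF with near-zeros, and a Wiener-style restoration filter must apply huge gain right next to them, which is exactly where the noise amplification comes from.

[code]
import numpy as np

n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
disc = (x**2 + y**2 <= 8**2).astype(float)   # defocus-like pillbox PSF
disc /= disc.sum()

mtf = np.abs(np.fft.fft2(np.fft.ifftshift(disc)))
print("deepest MTF value:", mtf.min())       # ~0: information destroyed there

nsr = 1e-4                                   # assumed noise-to-signal ratio
gain = mtf / (mtf**2 + nsr)
print("peak restoration gain:", gain.max())  # ~1/(2*sqrt(nsr)), i.e. ~50x noise boost
[/code]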

-h
Logged

Tim Lookingbill

  • Sr. Member
  • ****
  • Offline
  • Posts: 2436
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #6 on: October 02, 2013, 12:01:18 pm »

Very interesting tech doc. But I do think it's asking quite a lot of software to deliver optimum IQ.

The "Corrected" versions of the sample images, downloaded and viewed at 100% zoom in Photoshop, show a very coarse stippled pattern of rather large faded dots covering the entire image, which further sharpening & clarity adjustments would greatly amplify.

Maybe a halfway point between a less expensive lens design and letting the software deconvolution engineering take care of the rest would be in order.

It's a great example giving insight into the complexity of forming an image digitally. I had to do some searching to understand the significance of the PSF (Point-Spread Function) and found this tech doc on microscopy that fills in quite a bit; scroll down to chapter 1.12 for the same term...

http://www.microscopyu.com/pdfs/DigitalImagingIntro.pdf
  
« Last Edit: October 02, 2013, 12:03:50 pm by Tim Lookingbill »
Logged

PierreVandevenne

  • Sr. Member
  • ****
  • Offline
  • Posts: 512
    • http://www.datarescue.com/life
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #7 on: October 04, 2013, 07:42:04 pm »

Bart: the cross-channel adjustment is what got me to the paper. It would be nice to have for RGB astro images, rather than relying on kitchen recipes to fix RGB discrepancies. But then, diving into the paper and the code, I realized how essential measured (as opposed to estimated) PSFs were.
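
For the astro case, even without the paper's cross-channel term, running a classic Richardson-Lucy deconvolution per channel with a separately measured PSF for each colour already goes a long way. A rough sketch (Python/SciPy; psfs[c] stands for those hypothetical per-channel measured PSFs, which is exactly the measurement burden I mean):

[code]
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(channel, psf, iters=30):
    """Plain per-channel Richardson-Lucy with a measured PSF.
    Assumes a float-valued channel. The paper goes further and ties
    the R/B gradients to the sharper G channel (its cross-channel prior)."""
    psf_mirror = psf[::-1, ::-1]
    est = np.full_like(channel, channel.mean())
    for _ in range(iters):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = channel / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

# Deconvolve each colour plane with its own measured PSF:
# rgb_out = np.dstack([richardson_lucy(rgb[..., c], psfs[c]) for c in range(3)])
[/code]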

Tim: I don't think optimum image quality is the goal here. As Bart said, the better the lens, the lower the benefit. A well-corrected fixed-focal-length apochromatic triplet will still beat their results by a wide margin. Yes, the images are far from perfect from a pixel-peeping point of view, but don't forget they were acquired with a plano-convex lens! That's about as bad as it gets in terms of starting point :-) A lot of the other techniques that produce visually acceptable results on images from so-so kit lenses would fail badly dealing with such images.

There are examples from a cheap EF 28-105 if you follow the link at the bottom of the page - it's easy to miss. To tell the truth, I didn't know the EF 28-105 was so bad to start with, but the improvement is noticeable.

And yes, some kind of halfway point could bring a lot of benefit. Adding something like an Intel Quark processor and 4 GB of RAM to a lens is probably going to be cheaper (and lighter) than adding actual glass. This opens, as -h suggested, the door to fundamentally new designs.
Logged

Glenn NK

  • Sr. Member
  • ****
  • Offline
  • Posts: 313
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #8 on: October 05, 2013, 12:22:26 pm »

Very interesting - can't wait (I have some "soft" stuff). ;)

G
Logged
Economics:  the study of achieving infinite growth with finite resources

EsbenHR

  • Newbie
  • *
  • Offline
  • Posts: 41
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #9 on: October 08, 2013, 02:08:10 am »

Quote from: hjulenissen on October 02, 2013, 05:16:35 am
I believe that the arrival of good, practical deconvolution/lens correction could alter the lens design process. Instead of optimizing for (among other things) "minimum PSF", designers could optimize for "minimum PSF after software correction".

Perhaps novel lens arrangements can lead to PSFs that are not pleasing in themselves, but that are relatively easily corrected (e.g. no deep spectral zeros).

For this to really fly, you need a good sensor that "oversamples" the spatial information and that provides sufficient SNR.

The design point for lenses has been shifting for some time now. Deconvolution is not usually part of the chain yet [1], except for a few special applications. For example, there are bar- and QR-code scanners based on cubic lenses (where the surfaces follow an x^3 form), which produce a really blurred image that should make any photographer run away screaming. However, this particular form happens to make the PSF independent of distance.
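
That cubic trick (usually called wavefront coding) is easy to play with in a toy Fourier-optics simulation: put a phase alpha*(u^3 + v^3) across the pupil, and the PSF should become far less sensitive to defocus. All the numbers below are arbitrary "waves of aberration", not a real lens prescription:

[code]
import numpy as np

def psf(defocus, alpha=0.0, n=256):
    """|FFT(pupil)|^2 for a circular pupil with a defocus term and an
    optional cubic (wavefront-coding) phase term alpha*(u^3 + v^3)."""
    u, v = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    aperture = (u**2 + v**2 <= 1).astype(float)
    phase = defocus * (u**2 + v**2) + alpha * (u**3 + v**3)
    p = np.abs(np.fft.fftshift(np.fft.fft2(aperture * np.exp(2j * np.pi * phase))))**2
    return p / p.sum()

# Relative change of the PSF over 3 waves of defocus, with and without
# a strong cubic term -- the coded PSF should move far less:
for a in (0.0, 10.0):
    rel = np.linalg.norm(psf(3.0, a) - psf(0.0, a)) / np.linalg.norm(psf(0.0, a))
    print(f"alpha={a}: relative PSF change = {rel:.3f}")
[/code]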

You can afford to put in deconvolution if 1) the end result does not need a lot of resolution, and 2) it lets you avoid a mechanical focus system.

Currently, the industry trend is towards correcting distortion in software. There are several fixed-lens cameras out there where, at the wide end, the corners fall outside the image circle unless barrel distortion is corrected. Any RAW converter really needs to implement this.
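
Since that correction is just a coordinate remap, it fits in a few lines. A bare-bones sketch (Python/NumPy; the one-parameter model r' = r*(1 + k1*r^2) and the k1 value are illustrative only -- real converters ship per-lens, per-focal-length calibration data):

[code]
import numpy as np

def undistort(img, k1=-0.15):
    """Undo simple radial (barrel) distortion by resampling: for each
    output pixel, look up where that ray landed on the distorted frame."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    x = (xx - w / 2) / (w / 2)          # normalised, centred coordinates
    y = (yy - h / 2) / (h / 2)
    r2 = x**2 + y**2
    xs = np.clip((x * (1 + k1 * r2) + 1) * w / 2, 0, w - 1).astype(int)
    ys = np.clip((y * (1 + k1 * r2) + 1) * h / 2, 0, h - 1).astype(int)
    return img[ys, xs]                  # nearest-neighbour for brevity
[/code]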

That buys you a lot of latitude in the optical design. This can be traded for less weight, a larger zoom factor, better correction of other aberrations, improved rendering of details or a mix of these. As always, the typical choice is "reduced manufacturing cost" ;-)

One thing that is really hard to design around: even with the best coating, you have a substantial loss of light at air-glass surfaces (cemented surfaces within groups are much better, but still have the same issue). Some of that light still hits the sensor, usually in places where you really do not want it.

You really cannot build a big multi-group lens where glare will not affect deep shadows. A single element can do that. A sufficiently simple large-format lens can do that on a technical camera. Maybe that kind of design will become common again if we can get that mirror out of the way.
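
To put a rough (assumed) number on it: if even a good multicoating reflects, say, 0.5% per air-glass surface, the totals diverge quickly between a singlet and a complex zoom:

[code]
# ~0.5% reflected per air-glass surface (assumed coating figure):
for surfaces in (2, 16):                     # singlet vs. a complex zoom
    stray = 1 - 0.995**surfaces
    print(f"{surfaces} surfaces: ~{stray:.1%} of the light goes astray")
# 2 surfaces: ~1.0%; 16 surfaces: ~7.7% bouncing around the barrel
[/code]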


Regards,

Esben H-R Myosotis


[1] In a useless technical sense, sharpening is a form of deconvolution. It can be regarded as a way to compensate for, say, an anti-aliasing filter, so I guess it has been there since the first digital photos.
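
(For what that footnote is worth in code: if the AA filter behaves roughly like a small Gaussian blur, then classic unsharp masking boosts exactly the frequencies the blur attenuated -- a crude one-step deconvolution. The sigma and amount values are of course assumed:)

[code]
from scipy.ndimage import gaussian_filter

def unsharp(img, sigma=1.0, amount=1.0):
    # img + amount*(img - blur(img)): lifts what a (Gaussian-like)
    # AA filter suppressed, i.e. approximate inverse filtering.
    # Assumes a float-valued image array.
    return img + amount * (img - gaussian_filter(img, sigma))
[/code]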
Logged

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #10 on: October 08, 2013, 04:52:28 am »

Anybody know if the technique as described is implemented in any PS plug-in?
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: High-Quality Computational Imaging Through Simple Lenses
« Reply #11 on: October 08, 2013, 04:57:07 am »

Quote from: Jack Hogan on October 08, 2013, 04:52:28 am
Anybody know if the technique as described is implemented in any PS plug-in?

Hi Jack,

No PS plug-in implementation yet.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==