
Author Topic: Deconvolution sharpening revisited  (Read 265978 times)

bradleygibson

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 828
    • http://GibsonPhotographic.com
Re: Deconvolution sharpening revisited
« Reply #200 on: February 01, 2011, 10:42:00 pm »

David, what PSF did you use on your sample?  A simple Gaussian, or something more complex?
Logged
-Brad

David Ellsworth

  • Newbie
  • *
  • Offline Offline
  • Posts: 11
Re: Deconvolution sharpening revisited
« Reply #201 on: February 01, 2011, 11:28:46 pm »

David, what PSF did you use on your sample?  A simple Gaussian, or something more complex?

Brad, I used the exact same one that Bart used, both for convolution and deconvolution:

I have supplied a link to the data file. You can read the .dat file with WordPad or a similar simple text editor. You can input those numbers (rounded to 16-bit values, or converted to 8-bit numbers by dividing by 65535, multiplying by 255, and rounding to integers). A small warning: the lower the accuracy, the lower the output quality will be. For convenience I've added a 16-bit greyscale TIFF (convert to RGB mode if needed). I have turned it into an 11x11 kernel (9x9 + black border) because the program you referenced apparently (from the description) requires a zero background level.

I used the floating point data file, not the TIFF.

My algorithm can work on a PSF up to the size of the image itself, in this case 1115x1115... but I'd have to synthesize an Airy disk myself, and haven't learned how to do that yet. So I just padded Bart's 9x9 one with black.
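For reference, a minimal C sketch of that step - reading the floating-point values from the .dat file, normalising the kernel to unit sum (only its shape matters for the division), and embedding it in a zero ("black") frame the size of the image - might look like the following. The assumption that the file is plain whitespace-separated numbers, the 1115x1115 frame size and the helper name are all just illustrative:

Code:
#include <stdio.h>
#include <stdlib.h>

#define KW 9            /* kernel width/height in the .dat file       */
#define IW 1115         /* padded image size used in this thread      */

double *load_padded_kernel(const char *path)
{
    double k[KW * KW], sum = 0.0;
    FILE *f = fopen(path, "r");
    if (!f) return NULL;
    for (int i = 0; i < KW * KW; i++) {         /* whitespace-separated floats */
        if (fscanf(f, "%lf", &k[i]) != 1) { fclose(f); return NULL; }
        sum += k[i];
    }
    fclose(f);

    double *padded = calloc(IW * IW, sizeof(double));   /* black (zero) border */
    int off = (IW - KW) / 2;                            /* centre the 9x9      */
    for (int y = 0; y < KW; y++)
        for (int x = 0; x < KW; x++)
            padded[(y + off) * IW + (x + off)] = k[y * KW + x] / sum;
    return padded;                                      /* unit-sum, full-size PSF */
}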
« Last Edit: February 01, 2011, 11:33:04 pm by David Ellsworth »
Logged

MichaelEzra

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1146
    • https://www.michaelezra.com
Re: Deconvolution sharpening revisited
« Reply #202 on: February 01, 2011, 11:46:04 pm »

David, this looks really interesting, especially considering that the algorithm is fast.

It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.
Logged

David Ellsworth

  • Newbie
  • *
  • Offline Offline
  • Posts: 11
Re: Deconvolution sharpening revisited
« Reply #203 on: February 02, 2011, 12:21:06 am »

David, this looks really interesting, especially considering that the algorithm is fast.

It would be lovely to see something like this in a tool like RawTherapee, which provides a great image processing platform.
If you would be interested, I could help with UI implementation there.

Michael,

Thanks for your interest. I'm not sure my algorithm is ready for that, though!

When I said it's "very fast" I meant relative to other deconvolution algorithms.
It takes about 2.5 seconds on my Intel E8400 (3.0 GHz) with DDR2 800 to process a 1115x1115 image single-threaded. It probably scales roughly as O(N log N), where N = X*Y is the number of pixels, since most of its time is spent calculating FFTs.
It takes 39 seconds to process a 4096x4096 image (yep, that's O(N log N) all right). But with multi-threading it could be much faster, so maybe it could indeed be practical for raw conversion...

But first things first. I need to write some code to make it adapt its frequency cutoff to whichever frequencies are most corrupted by noise, I need to address the awful edge effects it currently has, and of course I need to make it work with non-square images (EDIT: It now works with non-square images).
« Last Edit: February 02, 2011, 02:18:36 am by David Ellsworth »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited
« Reply #204 on: February 02, 2011, 01:12:06 am »

Hi!

Your effort is really appreciated! Thanks for sharing ideas, results and code.

Best regards
Erik

Thanks Edmund, and thanks Eric. It is indeed simple, which makes me wonder why seemingly nobody else has thought of it. However I do think it has a lot of room for improvement, for example dealing with a noisy or quantized image — my current solution is to cut off frequencies that are noisy, but that results in ringing artifacts. Maybe I can add an algorithm that fiddles with the noisy frequencies in order to reduce the appearance of ringing (not sure at this point how to go about it, though). And there's of course the issue of edge effects, which I haven't tried to tackle yet.

I intend to post the source code, but it's rather messy right now (the main problem is that it uses raw files instead of TIFFs), so I'd like to clean it up first. Unless you'd really like to play with it right away, in which case I can post it as-is...

Meanwhile, I've improved the algorithm: 1) Do a gradual frequency cutoff instead of a threshold discontinuity; 2) Use the exact floating point kernel for deconvolution, instead of using a kernel-blurred white pixel rounded to 16 bits/channel.
The result: 0343_Crop+Diffraction+DFT_division_v3.jpg (1.2 MB)
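For illustration, a gradual cutoff can be as simple as replacing the hard threshold on |H| with a smooth weight applied to the divided result. The particular roll-off shape below is just an example, not necessarily the one used here:

Code:
#include <math.h>

/* Smooth roll-off for the inverse filter: instead of zeroing every
 * frequency bin where |H| falls below a hard threshold t, scale the
 * divided result by a weight that fades to zero as |H| shrinks.
 * The Gaussian-style shape is purely illustrative. */
double soft_weight(double Hmag, double t)
{
    double r = Hmag / t;          /* >> 1: reliable bin, << 1: noise-dominated */
    return 1.0 - exp(-r * r);     /* smooth transition between 0 and 1         */
}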

David
Logged
Erik Kaffehr
 

Graham Mitchell

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2281
Re: Deconvolution sharpening revisited
« Reply #205 on: February 02, 2011, 02:34:16 am »

Great results so far. I look forward to playing around with it some day (assuming code is released :) )
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #206 on: February 02, 2011, 04:14:02 am »

I hope no one minds my entering this a bit late. I've written a small C program that does deconvolution by Discrete Fourier Transform division (using the library "FFTW" to do Fast Fourier Transforms). To me this seems to beat all the other deconvolution algorithms that have been presented in this thread... I'd like to see what others think.

Hi David,

The more the merrier, I'm glad you joined.

Quote
My algorithm currently only works on square images, and deals with edge effects very badly, so I added black borders to Bart's max-quality jpeg crop making it 1115x1115, then applied the convolution myself using his provided 9x9 kernel. I operated entirely on 16-bit per channel data. The exact convoluted image that I worked from can be downloaded here (5.0 MB PNG file). My result, saved as a maximum-quality JPEG (after re-cropping it to 1003x1107): 0343_Crop+Diffraction+DFT_division_v2.jpg (1.2 MB)

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results; it's that all other algorithms I've tried cannot restore data that has been lost (too low a S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result (which would have truncated the accuracy to 16-bit at best), but instead performed all subsequent steps while keeping the intermediate results in floating point?

Quote
My algorithm takes a single white pixel on a black background the same size as the original image, applies the PSF blur to it, and divides the DFT of the blurred pixel by the DFT of the pixel (element by element, using complex arithmetic); this takes advantage of the fact that the DFT of a single white pixel in the center of a black background has a uniformly gray DFT. Then it takes the DFT of the convoluted image and divides this by the result from the previous operation. Division on a particular element is only done if the divisor is above a certain threshold (to avoid amplifying noise too much, even noise resulting from 16-bit quantization). An inverse DFT is done on the final result to get a deconvoluted image.

Yes, that's basic restoration by deconvolution in Fourier space.
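For anyone following along, a bare-bones C sketch of that Fourier-space division using FFTW (which David's program uses) could look like the following. This is a single grey channel with real-to-complex transforms; the hard threshold on |H| stands in for whatever noise handling the actual program uses, and all names and the threshold are illustrative:

Code:
#include <fftw3.h>

/* g: blurred image, h: PSF embedded in a zero-filled frame of the same
 * size, out: deconvolved result; all ny*nx row-major doubles.  Where the
 * PSF sits in its frame only causes a global circular shift, which
 * cancels as long as the same placement was used when the blur was
 * applied. */
void deconvolve(double *g, double *h, double *out, int nx, int ny, double thresh)
{
    int nfreq = ny * (nx / 2 + 1);                     /* r2c output size   */
    fftw_complex *G = fftw_malloc(sizeof(fftw_complex) * nfreq);
    fftw_complex *H = fftw_malloc(sizeof(fftw_complex) * nfreq);

    fftw_plan pg = fftw_plan_dft_r2c_2d(ny, nx, g, G, FFTW_ESTIMATE);
    fftw_plan ph = fftw_plan_dft_r2c_2d(ny, nx, h, H, FFTW_ESTIMATE);
    fftw_plan pi = fftw_plan_dft_c2r_2d(ny, nx, G, out, FFTW_ESTIMATE);

    fftw_execute(pg);
    fftw_execute(ph);

    for (int i = 0; i < nfreq; i++) {
        double c = H[i][0], d = H[i][1];
        double mag2 = c * c + d * d;
        if (mag2 > thresh * thresh) {                  /* plain inverse filter */
            double a = G[i][0], b = G[i][1];
            G[i][0] = (a * c + b * d) / mag2;          /* complex division     */
            G[i][1] = (b * c - a * d) / mag2;
        } else {
            G[i][0] = G[i][1] = 0.0;                   /* drop noisy bins      */
        }
    }

    fftw_execute(pi);                                  /* unnormalised inverse */
    for (int i = 0; i < nx * ny; i++)
        out[i] /= (double)(nx * ny);

    fftw_destroy_plan(pg); fftw_destroy_plan(ph); fftw_destroy_plan(pi);
    fftw_free(G); fftw_free(H);
}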

Quote
This algorithm is very fast and does not need to go through iterations of successive approximation; it gets its best result right off the bat.

The iterations that were mentioned are part of the Richardson-Lucy algorithm, which is an iterative Bayesian maximum-likelihood method. Regular deconvolution is faster, but poses other challenges. One of the things you can do to improve the edge performance is padding with e.g. mirrored image data, and making a final crop after deconvolution.
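A small sketch of that mirrored padding, for a row-major buffer of doubles (the pad width, names and the exact reflection convention are only illustrative); after deconvolution the central nx x ny region is cropped back out:

Code:
#include <stdlib.h>

static int reflect(int i, int n)     /* map any index into [0, n) by mirroring */
{
    while (i < 0 || i >= n) {
        if (i < 0)  i = -i - 1;
        if (i >= n) i = 2 * n - i - 1;
    }
    return i;
}

double *pad_reflected(const double *src, int nx, int ny, int pad)
{
    int px = nx + 2 * pad, py = ny + 2 * pad;
    double *dst = malloc(sizeof(double) * px * py);
    for (int y = 0; y < py; y++)                 /* fill border by reflecting  */
        for (int x = 0; x < px; x++)             /* about the image edges      */
            dst[y * px + x] = src[reflect(y - pad, ny) * nx + reflect(x - pad, nx)];
    return dst;
}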

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

David Ellsworth

  • Newbie
  • *
  • Offline Offline
  • Posts: 11
Re: Deconvolution sharpening revisited
« Reply #207 on: February 02, 2011, 01:12:01 pm »

Hi Bart,

David, the results look too good to be true. Could you verify that you (your software) used the correct (convolved) file as input? It's not that I don't like the results; it's that all other algorithms I've tried cannot restore data that has been lost (too low a S/N ratio) to f/32 diffraction. Maybe you didn't save the intermediate convolved/diffracted result (which would have truncated the accuracy to 16-bit at best), but instead performed all subsequent steps while keeping the intermediate results in floating point?

Well, in a way it is too good to be true. It's VERY sensitive to tiny changes in the convoluted input, and the edges need to be completely intact and fade out to black. But my program is indeed using a 16-bit/ch data file as input (the PNG I posted). I'm absolutely sure of this because: 1) the convolution and deconvolution are done in separate runs, with only files on the hard disk as input and output; 2) I've tried messing with the convoluted input, which thoroughly ruins the deconvoluted output unless the algorithm is made less sensitive by increasing the frequency drop-off threshold; and 3) I've tried pasting your 16-bit/ch convoluted PNG on top of mine, and the deconvolution still looks good but has a bit of noise near the edges, probably because of the difference between the JPEG crop and the original crop.

Here's what happens if I pollute the convoluted input with just one white pixel, all else being equal: 0343_Crop+Diffraction+DFT_division_v3_whitepixel.jpg (1.5 MB)
I added the white pixel in an image editor, not using my program.

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Best regards,
David
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #208 on: February 02, 2011, 02:24:12 pm »

I would enjoy it if you posted another 16-bit/ch PNG of a convoluted image, without posting the original, but this time with edges that fade to black (i.e., pad the original with 8 pixels of black before applying a 9x9 kernel).

Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:


Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

David Ellsworth

  • Newbie
  • *
  • Offline Offline
  • Posts: 11
Re: Deconvolution sharpening revisited
« Reply #209 on: February 02, 2011, 03:42:55 pm »

Hi David,

Okay, here it is, a 16-b/ch RGB PNG file with an 8 pixel black border, convolved with the same "N=32" kernel as before:

Tada:


But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work (EDIT: indeed, it didn't work). I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.

Cheers,
David
« Last Edit: February 02, 2011, 04:51:02 pm by David Ellsworth »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited
« Reply #210 on: February 02, 2011, 03:48:27 pm »

+1

Great results so far. I look forward to playing around with it some day (assuming code is released :) )
Logged
Erik Kaffehr
 

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Re: Deconvolution sharpening revisited
« Reply #211 on: February 02, 2011, 04:47:15 pm »

David,


Seeing that the algorithm works, I think some code in Matlab would significantly advance the topic at this point.
Maybe you could post the existing code, and then we could recode the thing in Matlab?

I'm a bit confused by the fact that you said you operate on the Raw; how are you doing the Raw deconvolution?

Edmund

Tada:


But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

BTW, the edges can actually be any color, as long as it's uniform. I can just subtract it out (making some values negative within the image itself), and then add it back in before saving the result.

Cheers,
David
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #212 on: February 02, 2011, 05:20:30 pm »

Tada:


Pretty close. And here is (a max quality JPEG version of) the original before convolution:



Quote
But I really want to get this working with cropped-edge images. Right now that's the Achilles heel of the algorithm: having missing edges corrupts the entire image, not just the vicinity of the edges. I tried masking edges by multiplying them by a convoluted white rectangle, but that still left significant ringing noise over the whole picture. I'll try that mirrored-edge idea, but I doubt it'll work. I have another idea, of tapering off the inverse PSF so that it doesn't have that "action at a distance", but that might remove its ability to reconstruct fine detail... it's a really hard concept to wrap my mind around. It seems that this algorithm works on a gestalt of the whole image to reconstruct even one piece of it, even though the PSF is just 9x9.

As your experiment with the single white pixel shows, each and every pixel contributes to the reconstruction of the entire image, although the amplitude falls off with distance. The ringing at the edges has to do with the abrupt discontinuity where the contributing pixels stop. The Fourier transform presumes the image to extend infinitely, which can be approximated by creating a symmetric image, such as in the "Reflected" image padding example at:
http://reference.wolfram.com/mathematica/ref/ImagePad.html

This also allows one to low-pass filter the larger Fourier transform with an appropriate function (a Gaussian is a simple one), although the highest spatial frequencies are sacrificed to prevent other artifacts.

It is also important to note that here the PSF is known to high accuracy, whereas in real-world situations we can only approximate it, and noise can spoil the party.

Cheers,
Bart
« Last Edit: February 02, 2011, 05:23:26 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

crames

  • Full Member
  • ***
  • Offline Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Re: Deconvolution sharpening revisited
« Reply #213 on: February 03, 2011, 09:19:29 am »

David, the results look too good to be true...

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.
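(Rough numbers: the incoherent diffraction cutoff is 1/(lambda*N), so at 564 nm and f/32 that is 1/(0.564 um x 32), or about 55 cycles/mm, whereas 6.4 um pixels have a Nyquist frequency of about 78 cycles/mm. With a true f/32 Airy pattern everything above roughly 55 cycles/mm should be unrecoverable, so any detail restored beyond that from the 9x9 simulation has to be leakage through the truncated kernel.)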

--typo edited
« Last Edit: February 03, 2011, 01:54:41 pm by crames »
Logged
Cliff

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Re: Deconvolution sharpening revisited
« Reply #214 on: February 03, 2011, 09:44:22 am »

I'm vaguely following this - however, I would have thought that if the Fourier transform of the blur function is invertible, then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of high-frequency content, due to noise in the original and computational approximation.
Edmund

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

PierreVandevenne

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 512
    • http://www.datarescue.com/life
Re: Deconvolution sharpening revisited
« Reply #215 on: February 03, 2011, 11:14:05 am »

I think (possibly wrongly) that the main problem is estimating the real-world PSF, both at the focal plane and ideally in a volume around the focal plane (see Nijboer-Zernike). Inverting known degradations is relatively easy in comparison. This is only based on my experience trying to fix astronomical images, but I suspect that, photographically vs. astrophotographically, optical issues are equivalent, tracking errors are similar to camera movement, and turbulence errors are similar to subject movement. Depth of field should make things more complex for photos. And of course, one doesn't lack distorted point sources in astro images.

Apologies if I missed the point, I haven't re-read the whole thread.
Logged

crames

  • Full Member
  • ***
  • Offline Offline
  • Posts: 210
    • http://sites.google.com/site/clifframes/
Re: Deconvolution sharpening revisited
« Reply #216 on: February 03, 2011, 01:47:41 pm »

I'm vaguely following this - however, I would have thought that if the fourier transform of the blur function is invertible then it's pretty obvious that you'll get the original back - with some uncertainty in areas with a lot of higher frequency due to noise in the original and computational approximation.

The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ???)
Logged
Cliff

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Re: Deconvolution sharpening revisited
« Reply #217 on: February 03, 2011, 03:29:52 pm »

The frequencies above the cutoff frequency should be zero, so should not be invertible. (1/zero = ???)

Speaking without experience, again, I would assume that is always the case. When you Fourier transform or DFT or whatever, you will truncate at the cutoff or twice the cutoff or whatever - after that it doesn't make sense anymore.

Maybe I should go and try actually doing some computational experiments with Matlab and an image processing book, and refresh my knowledge,  rather than spout nonsense.
Edmund
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

David Ellsworth

  • Newbie
  • *
  • Offline Offline
  • Posts: 11
Re: Deconvolution sharpening revisited
« Reply #218 on: February 03, 2011, 05:19:57 pm »

I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

Wow, you are absolutely right. It makes a HUGE difference in this case; 9x9 was far too small to nullify the higher-than-cutoff frequencies in the same way the full Airy disk does. I tried a 127x127 kernel and indeed, my algorithm now cannot recover those frequencies at all. Also I notice that global contrast is quite visibly reduced using the 127x127 kernel, which didn't happen with the 9x9 for obvious reasons.

I used the Airy disk formula from Wikipedia, using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?
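For reference, a sketch of that synthesis: sample I(r) = (2*J1(v)/v)^2 with v = pi*r/(lambda*N) on a 127x127 grid of 6.4 micron pixels at 564 nm and f/32, and normalise to unit sum. The per-pixel supersampling factor and the names are my own choices, and no OLPF or pixel aperture is modelled, which could itself account for small differences in the inner 9x9:

Code:
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define KSIZE  127
#define PITCH  6.4e-6        /* sensor pitch, metres  */
#define LAMBDA 564e-9        /* wavelength, metres    */
#define FNUM   32.0
#define SUB    4             /* subsamples per axis   */

static double airy(double r)                 /* Airy intensity at radius r (metres) */
{
    double v = M_PI * r / (LAMBDA * FNUM);
    if (v < 1e-9) return 1.0;                /* limit at the centre                 */
    double t = 2.0 * j1(v) / v;              /* j1() from libm (MSVC calls it _j1)  */
    return t * t;
}

void make_airy_kernel(double k[KSIZE][KSIZE])
{
    double sum = 0.0;
    int c = KSIZE / 2;
    for (int y = 0; y < KSIZE; y++)
        for (int x = 0; x < KSIZE; x++) {
            double acc = 0.0;
            for (int sy = 0; sy < SUB; sy++)          /* average over the  */
                for (int sx = 0; sx < SUB; sx++) {    /* pixel's footprint */
                    double dx = (x - c + (sx + 0.5) / SUB - 0.5) * PITCH;
                    double dy = (y - c + (sy + 0.5) / SUB - 0.5) * PITCH;
                    acc += airy(sqrt(dx * dx + dy * dy));
                }
            k[y][x] = acc / (SUB * SUB);
            sum += k[y][x];
        }
    for (int y = 0; y < KSIZE; y++)
        for (int x = 0; x < KSIZE; x++)
            k[y][x] /= sum;                           /* unit-sum kernel   */
}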

I attempted to save the 127x127 kernel in the same format as Bart posted his 9x9 (except for using a PNG instead of a TIFF for the picture version):
Airy127x127_6.4micron_564nm_f32.zip (data file)
Airy127x127_6.4micron_564nm_f32.png (16-bit PNG)

Here are the original, convoluted, and deconvoluted Fourier transforms (logarithm of the absolute value of each color channel) — click one to see the full DFT as a max-quality JPEG:


The residual frequencies outside the ellipse are due to the use of a 127x127 kernel instead of one as big as the picture.

Convoluted 16-bit/channel PNG: 0343_Crop+Diffraction127x127.png (5.1 MB)
Deconvoluted max-quality JPEG: 0343_Crop+Diffraction127x127+DFT_division.jpg (1.1 MB)

So as you can see, my algorithm doesn't do so well with a real Airy disk. It has significant ringing. Do the other algorithms/programs demonstrated earlier in the thread deal with this 127x127-convoluted 16-bit PNG just as well as they dealt with the 9x9-convoluted one?

BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.

Cheers,
David
« Last Edit: February 03, 2011, 07:22:16 pm by David Ellsworth »
Logged

eronald

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 6642
    • My gallery on Instagram
Re: Deconvolution sharpening revisited
« Reply #219 on: February 03, 2011, 06:05:26 pm »

I guess I can ask Norman whether Imatest could dump some real-world diffraction kernels.
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald