
Author Topic: Deconvolution sharpening revisited  (Read 265927 times)

Bart_van_der_Wolf

Re: Deconvolution sharpening revisited
« Reply #220 on: February 03, 2011, 07:01:36 pm »

Inverse filtering can be very exact if the blur PSF is known and there is no added noise, as in these examples.

But what is really surprising is how much detail is coming back - it's as though nothing is being lost to diffraction. Detail above what should be the "cutoff frequency" is being restored.

Hi Cliff,

Indeed, it surprised me as well. I expected more frequencies to have too low a contribution after rounding to 16-bit integer values. However, the convolution kernel used is not a full (infinite) diffraction pattern, but is truncated at 9x9 pixels (assuming a 6.4 micron sensel pitch with 100% fill factor sensels). Since the convolution and the deconvolution are done with the same filter, the reconstruction can be (close to) perfect, within the offered floating point precision. Any filter would do, but I wanted to demonstrate something imaginable, based on an insanely narrow aperture.
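
To make that concrete, here is a minimal numerical sketch of the round trip (my own illustration in NumPy, not the actual Mathematica/ImageJ workflow behind the posted crops; the image and the 9x9 kernel are just random stand-ins):

import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))                # stand-in for the test crop

psf = rng.random((9, 9))
psf /= psf.sum()                              # normalize so brightness is preserved
kernel = np.zeros_like(image)
kernel[:9, :9] = psf                          # embed at the origin (circular convolution)
otf = np.fft.fft2(kernel)                     # transfer function of the blur

blurred = np.fft.ifft2(np.fft.fft2(image) * otf).real
restored = np.fft.ifft2(np.fft.fft2(blurred) / otf).real    # same filter, inverted
print(np.abs(restored - image).max())         # essentially zero: floating-point precision

# Rounding the blurred data to 16-bit integers first, as a saved file would,
# is what limits how well the weakest frequencies come back.
blurred16 = np.round(blurred * 65535) / 65535
restored16 = np.fft.ifft2(np.fft.fft2(blurred16) / otf).real
print(np.abs(restored16 - image).max())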

Quote
I think what is happening is that the 9x9 or 11x11 Airy disk is too small to simulate a real Airy disk. It is allowing spatial frequencies above the diffraction cutoff to leak past. Then David's inverse filter is able to restore most of those higher-than-cutoff frequency details as well as the lower frequencies (on which it does a superior job).

To be more realistic I think it will be necessary to go with a bigger simulated Airy disk.

I don't think that's necessary, unless one wants an even more accurate diffraction kernel. I can make larger kernels, but there are few applications (besides self-made software) that can accommodate them. Anyway, a 9x9 kernel covers some 89% of the power of a 99x99 kernel.

Cheers,
Bart
« Last Edit: February 03, 2011, 07:36:02 pm by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

Re: Deconvolution sharpening revisited
« Reply #221 on: February 03, 2011, 07:09:23 pm »

I think (possibly wrongly) that the main problem is estimating the real world PSF, both at the focal plane and ideally in a space around the focal plane (see Nijboer-Zernike). Inverting known issues is relatively easy in comparison.

Hi Pierre,

That's correct. In this particular case it has been demonstrated that in theory, in a perfect world, the effects of e.g. diffraction (or defocus, or optical aberrations, or ...) can be reversed. So people who claim that blur is blur and all is lost are demonstrably wrong. However, the trick is in finding the PSF in the first place. Fortunately, with a combination of several different blur sources, we often end up with something that can be described by a combination of Gaussians. Many natural phenomena accumulate into a Gaussian type of distribution.
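
A quick way to see that tendency (a generic sketch of the principle, not of any particular camera's blur chain): chaining even a crudely non-Gaussian blur with itself a few times already lands very close to a Gaussian profile.

import numpy as np

box = np.ones(5) / 5.0                        # a decidedly non-Gaussian 1-D blur
combined = box.copy()
for _ in range(3):                            # four blur stages in total
    combined = np.convolve(combined, box)

x = np.arange(combined.size) - combined.size // 2
sigma = np.sqrt(4 * (5**2 - 1) / 12.0)        # the variances of the stages simply add
gauss = np.exp(-x**2 / (2 * sigma**2))
gauss /= gauss.sum()
print(np.abs(combined - gauss).max())         # already quite close to a Gaussian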

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

Re: Deconvolution sharpening revisited
« Reply #222 on: February 03, 2011, 07:30:00 pm »

I used the Airy disk formula from Wikipedia, using the libc implementation of the Bessel function, double _j1(double). My result differed slightly from Bart's in the inner 9x9 pixels. Any idea why? Bart, was your kernel actually an Airy disk convolved with an OLPF?

Hi David,

Yes, my kernel is assumed to represent a 100% fill factor, 6.4 micron sensel pitch kernel. I did that by letting Mathematica integrate the 2D function at each sensel position + or - 0.5 sensel.

Quote
BTW, Bart, do you have protanomalous or protanopic vision? I notice you always change your links to blue instead of the default red, and I've been doing the same thing because the red is hard for me to tell at a glance from black, against a pale background.

No, my color vision is normal. I change the color because it is more obvious in general, and follows the default Web conventions for hyperlinks (which may have had colorblind vision in the considerations for that color choice, I don't know). It's more obvious that it's a hyperlink and not just an underlined word. Must be my Marketing background, to reason from the perspective of the end users.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

David Ellsworth

Re: Deconvolution sharpening revisited
« Reply #223 on: February 03, 2011, 08:14:57 pm »

Hi Bart,

I don't think that's necessary, unless one wants an even more accurate diffraction kernel. I can make larger kernels, but there are few applications (besides self-made software) that can accommodate them. Anyway, a 9x9 kernel covers some 89% of the power of a 99x99 kernel.

But just look at the difference between 0343_Crop convolved with the 9x9 kernel and the 127x127 kernel. There's a huge difference which is immediately obvious visually: the larger kernel has a visible effect on global contrast. Admittedly it might not be as visually obvious if this were done on linear image data (with the gamma/tone curve applied afterwards, for viewing). But then, the ringing might not look as bad either.

There is of course another reason not to use large kernels. Applying a large kernel the conventional way is very slow; if I'm not mistaken it's O(N^2). But doing it through DFTs is fast (basically my algorithm in reverse), O(N log N). Of course the same problem about software exists.
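
For what it's worth, the gap is easy to demonstrate with off-the-shelf routines (a sketch only; scipy's fftconvolve is just one convenient FFT-based implementation, not the code used for the posted crops):

import time
import numpy as np
from scipy.signal import convolve2d, fftconvolve

rng = np.random.default_rng(0)
image = rng.random((512, 512))                 # a modest crop; a full frame widens the gap further
kernel = rng.random((127, 127))
kernel /= kernel.sum()

t0 = time.perf_counter()
direct = convolve2d(image, kernel, mode="same")     # work grows with image area times kernel area; takes a while
t1 = time.perf_counter()
viafft = fftconvolve(image, kernel, mode="same")    # work grows roughly as N log N
t2 = time.perf_counter()
print(f"direct {t1 - t0:.1f} s, FFT {t2 - t1:.2f} s, max difference {np.abs(direct - viafft).max():.1e}")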

Yes, my kernel is assumed to represent a 100% fill factor, 6.4 micron sensel pitch kernel. I did that by letting Mathematica integrate the 2D function at each sensel position + or - 0.5 sensel.

I don't understand. What is there for Mathematica to integrate? The Airy disk function does use the Bessel function, which can be calculated either as an integral or an infinite sum, but can't you just call the Bessel function in Mathematica? What did you integrate?

No, my color vision is normal. I change the color because it is more obvious in general, and follows the default Web conventions for hyperlinks (which may have had colorblind vision in the considerations for that color choice, I don't know). It's more obvious that it's a hyperlink and not just an underlined word. Must be my Marketing background, to reason from the perspective of the end users.

Well then, it's a neat coincidence that this makes it easier for me to see your links.

BTW, should this thread perhaps be in the "Digital Image Processing" subforum instead of "Medium Format / Film / Digital Backs – and Large Sensor Photography"?

-David
« Last Edit: February 03, 2011, 11:24:30 pm by David Ellsworth »

ejmartin

Re: Deconvolution sharpening revisited
« Reply #224 on: February 04, 2011, 10:19:32 am »

There is of course another reason not to use large kernels. Applying a large kernel the conventional way is very slow; if I'm not mistaken it's O(N^2). But doing it through DFTs is fast (basically my algorithm in reverse), O(N log N). Of course the same problem about software exists.

One could process the image in blocks, say 512x512 or 1024x1024, with a little block overlap to mitigate edge effects; then the cost is only O(N).  
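
For example (a sketch with an assumed tile size and reflect padding; recent SciPy versions also ship this idea ready-made as scipy.signal.oaconvolve):

import numpy as np
from scipy.signal import fftconvolve

def tiled_convolve(image, kernel, tile=512):
    # Assumes a square, odd-sized kernel.
    pad = kernel.shape[0] // 2                         # overlap by the kernel radius to hide seams
    padded = np.pad(image, pad, mode="reflect")
    out = np.empty_like(image)
    for y in range(0, image.shape[0], tile):
        for x in range(0, image.shape[1], tile):
            y1 = min(y + tile, image.shape[0])
            x1 = min(x + tile, image.shape[1])
            block = padded[y:y1 + 2 * pad, x:x1 + 2 * pad]
            filtered = fftconvolve(block, kernel, mode="same")
            out[y:y1, x:x1] = filtered[pad:pad + (y1 - y), pad:pad + (x1 - x)]
    return out

Each tile costs a fixed amount, so the total work grows linearly with the number of pixels.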

Quote
I don't understand. What is there for Mathematica to integrate? The Airy disk function does use the Bessel function, which can be calculated either as an integral or an infinite sum, but can't you just call the Bessel function in Mathematica? What did you integrate?

I would suspect one wants to box blur the Airy pattern to model the effect of diffraction on pixel values (assuming 100% microlens coverage).  The input to that is a pixel size, as Bart states.
emil

jfwfoto

Re: Deconvolution sharpening revisited
« Reply #225 on: February 04, 2011, 11:04:09 am »

I can recommend the Focus Fixer plugin. I believe it is a deconvolution-type program, though they will not describe it as such, to promote their exclusivity. They have a database of camera sensors, so the PSF may be better than the ballpark options. The plugin corrects AA blurring when used at low settings, and at higher settings it can refocus images that are slightly out of focus. It will run on a selected area, so if only a part of the image needs to be refocused it can be done quickly, whereas running the plugin on the whole image takes a little time. DxO is another software package that has AA blurring recovery designed for specific sensors, and I think it works well too. It also will not describe its 'secret' as deconvolution, but that is essentially what they are talking about.

David Ellsworth

Re: Deconvolution sharpening revisited
« Reply #226 on: February 04, 2011, 12:10:37 pm »

I would suspect one wants to box blur the Airy pattern to model the effect of diffraction on pixel values (assuming 100% microlens coverage).  The input to that is a pixel size, as Bart states.

Oh, thanks. Now I understand — he integrated the Airy function over the square of each pixel. I made the mistake of evaluating it only at the center of each pixel, silly me.
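
For anyone who wants to reproduce it, here is a sketch of the two approaches side by side (the wavelength and f-number are assumed values for illustration, and scipy's adaptive dblquad stands in for whatever Mathematica does internally):

import numpy as np
from scipy.special import j1
from scipy.integrate import dblquad

wavelength = 0.55e-6          # 550 nm, assumed
f_number = 32.0               # an insanely narrow aperture, in the spirit of the example
pitch = 6.4e-6                # sensel pitch

def airy(x, y):
    # Irradiance of the Airy pattern for an ideal circular aperture.
    r = np.hypot(x, y)
    v = np.pi * r / (wavelength * f_number)
    return 1.0 if v == 0.0 else (2.0 * j1(v) / v) ** 2

size, centre = 9, 4
sampled = np.zeros((size, size))
integrated = np.zeros((size, size))
for iy in range(size):
    for ix in range(size):
        cx, cy = (ix - centre) * pitch, (iy - centre) * pitch
        sampled[iy, ix] = airy(cx, cy)     # evaluated at the pixel centre only
        integrated[iy, ix], _ = dblquad(   # integrated over the +/- 0.5 sensel square
            airy, cx - pitch / 2, cx + pitch / 2,
            lambda _x: cy - pitch / 2, lambda _x: cy + pitch / 2)

sampled /= sampled.sum()
integrated /= integrated.sum()
print(np.round(integrated / sampled, 3))   # the ratio shows where the two kernels disagree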

What method does Mathematica use to integrate that? Is it actually evaluating to full floating point accuracy (seems unlikely)?

Edit: Now getting the same result as Bart in the inner 9x9 pixels, to 8-10 significant digits.
« Last Edit: February 04, 2011, 03:09:14 pm by David Ellsworth »

ejmartin

Re: Deconvolution sharpening revisited
« Reply #227 on: February 04, 2011, 12:50:07 pm »

Oh, thanks. Now I understand — he integrated the Airy function over the square of each pixel. I made the mistake of evaluating it only at the center of each pixel, silly me.

What method does Mathematica use to integrate that? Is it actually evaluating to full floating point accuracy (seems unlikely)?

Good question.  Looking in the documentation a bit, it seems it samples the domain adaptively and recursively until a target error estimate is reached.  Usually the precision is more than you will need, but it is specifiable if the default settings are insufficient; similarly the sampling method is specifiable from a list of choices.
emil

Ernst Dinkla

Re: Deconvolution sharpening revisited
« Reply #228 on: June 27, 2011, 06:41:50 am »

Exported from another thread and forum, but it fits well here in my opinion:


I then ran another test to see if altering my capture sharpening could improve things further. As I think you suggested, deconvolution sharpening could result in fewer artefacts, so I went back to the Develop Module and altered my sharpening to Radius 0.6, Detail 100, and Amount 38 (my original settings were Radius 0.9, Detail 35, Amount 55). The next print gained a little more acutance as a result with output sharpening still set to High, with some fine lines on the cup patterns now becoming visible under the loupe. Just for fun, I am going to attach 1200 ppi scans of the prints so you can judge for yourselves, bearing in mind that this is a very tiny section of the finished print.

John

John,

This is not the forum to discuss scanning and sharpening, but I am intrigued by some aspects/contradictions of deconvolution sharpening, flatbed scanning and film grain. I have seen a thread on another LL forum (you were there too) that discussed deconvolution sharpening, but little information on flatbed scanners and film grain. I have a suspicion that on flatbeds diffraction plays an important role in the loss of sharpness (engineers deliberately using a small stop for several reasons), while at the same time the oversampling as used on most Umax and Epson models keeps (aliased) grain in the scan low and delivers an acceptable dynamic range. In another thread Bart mentioned the use of a slanted edge target on a flatbed to deliver a suitable base for the sharpening. I would be interested in an optimal deconvolution sharpening route for an Epson V700 while still keeping grain/noise at bay. Noise matters too, as I use that scanner also for reflective scans.





With kind regards, Ernst
Try: http://groups.yahoo.com/group/Wide_Inkjet_Printers/

hjulenissen

Re: Deconvolution sharpening revisited
« Reply #229 on: June 27, 2011, 06:55:17 am »

How would one go about characterizing a lens/sensor as "perfectly" as possible, in such a way as to generate suitable deconvolution kernels? I imagine that they would at least be a function of distance from the lens centre point (radially symmetric), aperture and focal length. Perhaps also scene distance, wavelength and the non-radial spatial coordinate. If you want to have a complete PSF as a function of all of those without a lot of sparse sampling/interpolation, you have to make a serious number of measurements. It would be neat as an exercise in "how good can deconvolution be in a real-life camera system".

A practical limitation would be the consistency of the parameters (variation over time) and typical sensor noise. I believe that typical kernels would be spatial high-pass (boost) filters, meaning that any sensor noise will be amplified compared to real image content.

-h

Bart_van_der_Wolf

Re: Deconvolution sharpening revisited
« Reply #230 on: June 27, 2011, 10:18:35 am »

In another thread Bart mentioned the use of a slanted edge target on a flatbed to deliver a suitable base for the sharpening. I would be interested in an optimal deconvolution sharpening route for an Epson V700 while still keeping grain/noise at bay. Noise matters too, as I use that scanner also for reflective scans.

Hi Ernst,

Allow me to make a few remarks/observations before answering. The determination of scanner resolution is described in an ISO standard, and it uses a "slanted edge" target to determine the resolution (SFR or MTF) in 2 directions (the fast scan direction, and the slow scan direction):
Photography -- Spatial resolution measurements of electronic scanners for photographic images
Part 1: Scanners for reflective media and
Part 2: Film scanners
In both cases slanted edge targets are used, but obviously on different media/substrates. These targets are offered by several suppliers, and given the low volumes and strict tolerances they are not really cheap.

I have made my own slanted edge target for film scanners, as a DIY project, from a slide mount holding a straight razor blade positioned at an approx. 5.7-degree slant. This worked slightly better than using folded thin alumin(i)um foil, despite the bevelled edge of the razor blade. I used black tape to cover the holes in the blade and most of the surface of the blade, to reduce internal reflections and veiling glare.

This allowed me to determine the resolution capabilities or rather the Spatial Frequency Response (=MTF) of a dedicated film scanner (which allows focusing), in my case by using the Imatest software that follows that ISO method of determination. It also allowed me to quit scanning 35mm film when the Canon EOS-1Ds Mark II arrived (16MP on that camera effectively matched low ISO color film resolution). I haven't shot 35mm film since.
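
In outline, the slanted-edge analysis works like this: project every pixel onto its distance from the edge to build an oversampled edge spread function (ESF), differentiate that to a line spread function (LSF), and Fourier-transform the LSF to get the SFR/MTF. A simplified mock-up on a synthetic edge (the 4x binning, Hann window and Gaussian test blur are my own simplifications, not the exact ISO algorithm):

import numpy as np
from scipy.ndimage import gaussian_filter

h, w, angle, sigma = 200, 120, np.deg2rad(5.7), 1.2      # synthetic ~5.7 degree edge
yy, xx = np.mgrid[0:h, 0:w]
edge_pos = w / 2 + (yy - h / 2) * np.tan(angle)
img = gaussian_filter((xx > edge_pos).astype(float), sigma)   # a 'scan' with known blur

oversample = 4
dist = (xx - edge_pos).ravel()
keep = np.abs(dist) <= 20                                # only a band around the edge
bins = np.round(dist[keep] * oversample).astype(int)
bins -= bins.min()
esf = np.bincount(bins, weights=img.ravel()[keep]) / np.bincount(bins)

lsf = np.gradient(esf) * np.hanning(esf.size)            # window tames the ends
mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)     # cycles per pixel
print(np.interp(0.5, freq, mtf))                         # response at the sampling Nyquist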

There is however an important factor we might overlook. When we scan film, we are in fact convolving the ideal image data with the system Point Spread Function (PSF) of both camera (lens and film) and the scanner (assuming perfect focus). That combined system PSF is what we really want to use for deconvolution sharpening. The scanner PSF alone will help to restore whatever blur the scan process introduced, but it produces a sub-optimal restoration when film is involved. It would suffice for reflection scans though; it could even compensate somewhat for the mismatched focus at the glass platen surface used for reflection scans.

Therefore, for the construction of a (de)convolution kernel, one can take a photographic recording of a printed copy of a slanted edge target, on low ISO color film. One can use a good lens at the optimal aperture as a best case PSF scenario. Even other areas of the same frame, like the corners, will benefit. For a better sharpening one can blend between a corner and the center PSF deconvolution. One can repeat that for different apertures and lenses. However, that will produce a sizeable database of PSFs to cover the various possibilities, and still not cover unknown sources.

Luckily, when a number of blur sources are combined, as is often the case with natural phenomena, the combined PSF of the several sources will resemble a Gaussian shape. This means that we can approximate the combined system PSF with a relatively simple model, which even allows us to empirically determine the best approach for unknown sources. It won't be optimal from a quality point of view, but an optimal solution would require a lot of work. Perhaps close is good enough in >90% of the cases?
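
To illustrate that empirical path (a sketch only: the Gaussian width is a guess one would tune by eye or from a slanted-edge measurement, and Richardson-Lucy here merely stands in for whatever the commercial tools use internally):

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import data, img_as_float
from skimage.restoration import richardson_lucy

def gaussian_psf(sigma, size=9):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

scan = img_as_float(data.camera())            # stand-in for a (linearised) scan
blurred = gaussian_filter(scan, 1.0)          # pretend this is the film+scanner blur

for sigma in (0.7, 1.0, 1.3):                 # try a few widths and judge the result
    restored = richardson_lucy(blurred, gaussian_psf(sigma), 20)
    print(sigma, float(np.abs(restored - scan).mean()))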

So my suggestion for filmscans is to try the empirical path, e.g. with "Rawshooter" which also handles TIFFs as input (although I don't know how well it behaves with very large scans), or with Focusmagic (which also has a film setting to cope with graininess), or with Topazlabs InFocus (perhaps after a mild prior denoise step).

For reflection scans, and taking the potentially suboptimal focus at the surface of the glass platen into account, one could use a suitable slanted edge target and build a PSF from it. I have made a target out of thin self-adhesive black and white PVC foil. That allows a very sharp edge when one uses a sharp knife to cut it. Just stick the white foil on top of the black foil, which will hopefully reduce the risk of white clipping in the scan, or add a thin gelatin ND filter between the target and the platen if the exposure cannot be influenced.

Unfortunately there are only a few software solutions that take a custom PSF as input, so perhaps an empirical approach can be used here as well. Topazlabs InFocus allows you to generate and automatically use an estimated deconvolution by letting it analyse an image/scan with adequate edge contrast detail. That should work pretty well for reflection scans, because there is no depth of field issue when scanning most flat originals (although scans of photos can be a challenge depending on the scene). Unfortunately, the contrasty edges need to be part of the same scene (or added to the scan file), because I think InFocus doesn't allow you to store the estimated solution, although it can save as a preset a normal deconvolution with settings optimized for a PSF or for a simpler, detailed piece of artwork.

As for sharpening noise, I don't think that deconvolution necessarily sharpens the multi-pixel graininess/dye clouds, although it might 'enhance' some of the finest dye clouds. It just depends on which blur radius helps the image detail, and that isn't necessarily the same radius as that of the graininess.

Sorry for the long answer. Been there, done that; there is so much to explain and take into account.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

Re: Deconvolution sharpening revisited
« Reply #231 on: June 27, 2011, 11:15:06 am »

How would one go about characterizing a lens/sensor as "perfectly" as possible, in such a way as to generate suitable deconvolution kernels? I imagine that they would at least be a function of distance from the lens centre point (radially symmetric), aperture and focal length. Perhaps also scene distance, wavelength and the non-radial spatial coordinate. If you want to have a complete PSF as a function of all of those without a lot of sparse sampling/interpolation, you have to make a serious number of measurements. It would be neat as an exercise in "how good can deconvolution be in a real-life camera system".

Indeed, a lot of work and lots of data, but also limited possibilities for using the derived PSF. Mind you, this is exactly what is being worked on behind the scenes: spatially variant deconvolution, potentially estimated from the actual image detail.

Quote
A practical limitation would be the consistency of the parameters (variation over time) and typical sensor noise. I believe that typical kernels would be spatial high-pass (boost) filters, meaning that any sensor noise will be amplified compared to real image content.


There is some potential to combat the noise influence, because many images have their capture (shot) noise and read noise recorded at the sensel level, yet after demosaicing that noise becomes larger than a single pixel. The demosaiced luminance detail, though, can approach per-pixel resolution quite closely. So there is some possibility to suppress some of the lower frequency noise without hurting the finest detail too much. In addition, one can intelligently mask low spatial frequency areas (where noise is more visible) to exclude them from the deconvolution.

Noise remains a challenge for deconvolution, but there are a few possibilities that can be exploited to sharpen detail more than noise, thus improving the S/N ratio.
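
One simple way to build such a mask (a generic sketch, not any particular product's method; the thresholds are arbitrary placeholders):

import numpy as np
from scipy.ndimage import gaussian_filter

def detail_mask(image, sigma=2.0, threshold=0.01, softness=0.01):
    local_mean = gaussian_filter(image, sigma)
    local_var = gaussian_filter(image**2, sigma) - local_mean**2
    # Ramp smoothly from 0 in flat (noise-dominated) areas to 1 where there is real detail.
    return np.clip((np.sqrt(np.maximum(local_var, 0.0)) - threshold) / softness, 0.0, 1.0)

def masked_sharpen(original, deconvolved):
    m = detail_mask(original)
    return m * deconvolved + (1.0 - m) * original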

Cheers,
Bart
« Last Edit: June 27, 2011, 01:50:10 pm by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

Ernst Dinkla

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 4005
Re: Deconvolution sharpening revisited
« Reply #232 on: June 27, 2011, 11:57:48 am »


So my suggestion for filmscans is to try the empirical path, e.g. with "Rawshooter" which also handles TIFFs as input (although I don't know how well it behaves with very large scans), or with Focusmagic (which also has a film setting to cope with graininess), or with Topazlabs InFocus (perhaps after a mild prior denoise step).

For reflection scans, and taking the potentially suboptimal focus at the surface of the glass platen into account, one could use a suitable slanted edge target and build a PSF from it. I have made a target out of thin self-adhesive black and white PVC foil. That allows a very sharp edge when one uses a sharp knife to cut it. Just stick the white foil on top of the black foil, which will hopefully reduce the risk of white clipping in the scan, or add a thin gelatin ND filter between the target and the platen if the exposure cannot be influenced.

Cheers,
Bart

Bart,  thank you for the explanation.

That there is a complication in camera film scanning is something I expected; two optical systems contribute to it. Yet I expect that the scanner optics may have a typical character that could be defined separately, for both film scanning and reflective scanning. The diffraction-limited character of the scanner lens plus the multisampling sensor/stepping in that scanner should be detectable, I guess, and treating it with a suitable sharpening would be a more effective first step. There are not that many lenses used for the films to scan, and I wonder if that part of the deconvolution could be done separately. It would be interesting to see whether a typical Epson V700 restoration sharpening could be used by other V700 owners, independent of their camera influences.

For resolution testing of the Nikon 8000 scanner I had some slanted edge targets made on litho film on an image setter, with the slanted edge parallel to the laser beam for a sharp edge and high contrast. Not that expensive; I had them run with a normal job for graphic film. That way I could use the film target in wet mounting, where a cut razor or cut vinyl tape would create its own linear fluid lens on the edge, a thing better avoided. Of course I have to do the scan twice, for both directions.

In your reply you probably kick the legs off of that chair I am sitting on.... I will have a look at the applications you mention.


With kind regards, Ernst

Try: http://groups.yahoo.com/group/Wide_Inkjet_Printers/

Bart_van_der_Wolf

Re: Deconvolution sharpening revisited
« Reply #233 on: June 28, 2011, 05:58:43 am »

Bart,  thank you for the explanation.

That there is a complication in camera film scanning is something I expected; two optical systems contribute to it. Yet I expect that the scanner optics may have a typical character that could be defined separately, for both film scanning and reflective scanning. The diffraction-limited character of the scanner lens plus the multisampling sensor/stepping in that scanner should be detectable, I guess, and treating it with a suitable sharpening would be a more effective first step. There are not that many lenses used for the films to scan, and I wonder if that part of the deconvolution could be done separately. It would be interesting to see whether a typical Epson V700 restoration sharpening could be used by other V700 owners, independent of their camera influences.

Hi Ernst,

Well, since the system MTF is built by multiplying the component MTFs, it makes sense to improve the worst contributor first, as it will boost the total MTF most. I'm not so sure that diffraction is a big issue; after all, several scanners use linear array CCDs, which are probably easier to tackle with mostly cylindrical lenses, and to reduce heat they are operated pretty wide open. One thing is sure though: defocus will kill your MTF very fast, so for a reflective scanner the mismatch between the focus plane and the surface of the glass platen will cause an issue which could be addressed by using deconvolution sharpening.
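
In numbers (a sketch with made-up Gaussian MTF curves, purely to show how the weakest component dominates the product):

import numpy as np

f = np.linspace(0.0, 0.5, 6)                  # spatial frequency in cycles/pixel

def mtf(sigma):
    # MTF of a Gaussian blur of width sigma (in pixels).
    return np.exp(-2.0 * (np.pi * sigma * f) ** 2)

lens, sensor, defocus = mtf(0.6), mtf(0.8), mtf(1.8)
system = lens * sensor * defocus
for name, row in (("lens", lens), ("sensor", sensor), ("defocus", defocus), ("system", system)):
    print(f"{name:8s}", np.round(row, 3))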

So I wouldn't mind making a PSF based on a slanted edge scan, presuming we can find an application that takes it as input for deconvolution. What would work anyway is to tweak the deconvolution settings to restore as much of a slanted edge scan (excluding the camera 'system' MTF) as possible, and compare that setting to the full deconvolution of an average color film/print scan (including the camera 'system' MTF).

Quote
For resolution testing of the Nikon 8000 scanner I had some slanted edge targets made on litho film on an image setter, with the slanted edge parallel to the laser beam for a sharp edge and high contrast. Not that expensive; I had them run with a normal job for graphic film. That way I could use the film target in wet mounting, where a cut razor or cut vinyl tape would create its own linear fluid lens on the edge, a thing better avoided. Of course I have to do the scan twice, for both directions.

Yes, for a scanner that allows you to adjust its exposure, that might help to avoid highlight clipping. I'm a bit concerned about shadow clipping though, because graphic films can have a reasonably high D-max, which might throw off the slanted edge evaluation routines in Imatest.

Quote
In your reply you probably kick the legs off of that chair I am sitting on.... I will have a look at the applications you mention.

No harm intended, but sometimes we have to settle for a sub-optimal solution. It might work well enough when we're working at the limit of human visual acuity. For magnified output, we of course try to push any unavoidable compromises as late in the workflow as possible.

Cheers,
Bart
== If you do what you did, you'll get what you got. ==

Ernst Dinkla

Re: Deconvolution sharpening revisited
« Reply #234 on: June 28, 2011, 10:44:26 am »

Hi Ernst,

Well, since the system MTF is built by multiplying the component MTFs, it makes sense to improve the worst contributor first, as it will boost the total MTF most. I'm not so sure that diffraction is a big issue; after all, several scanners use linear array CCDs, which are probably easier to tackle with mostly cylindrical lenses, and to reduce heat they are operated pretty wide open. One thing is sure though: defocus will kill your MTF very fast, so for a reflective scanner the mismatch between the focus plane and the surface of the glass platen will cause an issue which could be addressed by using deconvolution sharpening.

So I wouldn't mind making a PSF based on a slanted edge scan, presuming we can find an application that takes it as input for deconvolution. What would work anyway is to tweak the deconvolution settings to restore as much of a slanted edge scan (excluding the camera 'system' MTF) as possible, and compare that setting to the full deconvolution of an average color film/print scan (including the camera 'system' MTF).

Yes, for a scanner that allows you to adjust its exposure, that might help to avoid highlight clipping. I'm a bit concerned about shadow clipping though, because graphic films can have a reasonably high D-max, which might throw off the slanted edge evaluation routines in Imatest.

No harm intended, but sometimes we have to settle for a sub-optimal solution. It might work well enough when we're working at the limit of human visual acuity. For magnified output, we of course try to push any unavoidable compromises as late in the workflow as possible.

Cheers,
Bart

The V700 scanner lenses have a longer focal length than I expected; the path is folded up with 5 mirrors, I think. The symmetrical 6-element lenses, either plasmats or early planar designs, are small. Yet there is still enough depth of focus, something you would expect of a wide angle lens. To achieve that, and an even sharpness over the entire scan width, I thought a diffraction-limited lens design would be a good choice, compromising on centre sharpness but improving the sides. The scanner lamp still has that reflector design which compensates the light fall-off at the sides. There are two lenses: one scanning 150 mm wide for the normal film carriers, which are in focus at about 2.5 mm on my V700, and a lens that covers the total scan bed width, to be used for reflective scanning and 8x10" film on the bed, so its focus should be close to the bed. So it has to be done for two lens/carrier combinations. I am using a Doug Fisher wet mount glass carrier with the film wet mounted to the underside of the glass, and the focus can be adjusted with small nylon screws. All fine-tuned by now.

Your path to get there is the most practical, I think. The opaqueness of the slanted edge film that I have is 5.3 D; I did discuss it at the time with the image-setting shop. My probably naïve assumption was that it should be as high as possible and the edge as sharp as possible. When I compared my Nikon 8000 MTF results some years back, they differed from other users' tests, but I assumed it was related to the tweaking of my Nikon wet mount carriers, focusing, etc.
http://www.photo-i.co.uk/BB/viewtopic.php?p=14907&sid=00cf64bad077b78d1f3f8bf70172afef
When I later used another MTF tool (the Quick MTF demo) on the same data, the results were also different. Imatest is growing beyond my budget for tools like that.
So the target I made may not be the right one. A pity though; what could be better than a film target: a 5.3 D black emulsion less than 30 microns thick, with a laser-created edge :-)

With kind regards, Ernst

New: Spectral plots of +250 inkjet papers:

http://www.pigment-print.com/spectralplots/spectrumviz_1.htm

sjprg

Deconvolution sharpening revisited
« Reply #235 on: October 11, 2011, 12:13:15 pm »

Adobe has probably used some of this discussion in a new prototype.
Shown at Adobe Max

http://www.pcworld.com/article/241637/adobe_shows_off_prototype_blurfixing_feature.html

Paul
Paul

eronald

Re: Deconvolution sharpening revisited
« Reply #236 on: October 11, 2011, 04:35:10 pm »

Adobe has probably used some of this discussion in a new prototype.
Shown at Adobe Max

http://www.pcworld.com/article/241637/adobe_shows_off_prototype_blurfixing_feature.html

Paul

They probably purchased a Russian mathematician.

The interesting question, which Adobe need to demonstrate having solved, is not whether images can be enhanced; it is whether they can be enhanced while looking nice.

Edmund
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

sjprg

Re: Deconvolution sharpening revisited
« Reply #237 on: October 12, 2011, 12:11:43 am »

Hi Edmund; it's been a long time.
I am glad to see Adobe at least working on it. At present I'm using the adaptive Lucy-Richardson from the program ImagesPlus.
Paul
Paul

Fine_Art

Re: Deconvolution sharpening revisited
« Reply #238 on: October 12, 2011, 01:45:26 am »

I've been using it since version 2.82. Awesome program.

Schewe

Re: Deconvolution sharpening revisited
« Reply #239 on: October 12, 2011, 02:04:10 am »

They probably purchased a Russian mathematician.

The presenter's name is Jue Wang (not Russian, which shows a bias on your part) and yes, it's based on a method of computing a PSF for multi-directional camera shake.

I saw something at MIT (don't know if this was the same math) that was able to compute and correct for this sort of blurring...it ain't easy to compute the multi-directional PSF, but if you can, you can de-blur images pretty successfully...up to a point.

It demos well, but note that the presenter loaded some "presets" that may have been highly tested and took a long time to figure out. Don't expect this in the "next version of Photoshop", but it is interesting (and useful) research for Adobe to be doing...