Pages: 1 ... 15 16 [17] 18   Go Down

Author Topic: Deconvolution sharpening revisited  (Read 265938 times)

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Deconvolution sharpening revisited
« Reply #320 on: June 24, 2014, 01:27:47 pm »

I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #321 on: June 25, 2014, 03:56:01 pm »

I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)

It definitely works. Bart's is probably better than mine, so I won't spend the time to play with it. One thing I will add, if I can offer some help to Bart for once instead of always the other way around:

Many deconvolution methods work in the direction of increasing the luminance: you see the histogram shifting to the right, and the peaks of the histogram move as well. What I often do to eliminate these artifacts is average the result back with an older version. This often gives a more natural look to the final image.
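That blend-back trick is easy to state in code. A minimal numpy sketch (the function name and the 50/50 weight are my own choices for illustration):

```python
import numpy as np

def blend_back(deconvolved, original, amount=0.5):
    """Average a deconvolved image back toward the pre-deconvolution
    version (amount=0 keeps the original, amount=1 keeps the full
    deconvolution). Both inputs are float arrays on the same scale."""
    return amount * deconvolved + (1.0 - amount) * original

# Toy example: deconvolution pushed a flat 0.50 patch up to 0.60
# (the histogram shift to the right); a 50/50 blend softens it.
original = np.full((4, 4), 0.50)
deconvolved = np.full((4, 4), 0.60)
softened = blend_back(deconvolved, original, amount=0.5)
```

In Photoshop terms this is simply the deconvolved result on a layer above the original with its opacity set to `amount`.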
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited
« Reply #322 on: June 25, 2014, 05:41:35 pm »

Hi,

Just a small idea: you could test using a larger aperture for near-optimal sharpness and use deconvolution sharpening on the out-of-focus areas. I have found that "Smart Sharpen" in Photoshop works well for minor defocus.

Best regards
Erik

I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
Logged
Erik Kaffehr
 

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #323 on: June 27, 2014, 04:18:27 am »

I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)

Indeed, it works. Actually, it can almost work too well, as shown in the attached versions of your crop, by 'restoring' some of the sensel structure (especially of non-AA-filtered sensor arrays).

The first attached version uses the same settings as the before versions, with an assumed effective f/28 aperture on a 7.2 micron pitch sensor, but this time I switched off the protection against noise amplification ('regularization'), which is difficult to see anyway in this crop as it can be mistaken for actual detail.

The second crop is that same image but now with some Topaz Detail enhancement added, to compensate for the overall loss of contrast due to diffraction and glare.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #324 on: June 27, 2014, 04:40:02 am »

Many deconvolution methods work in the direction of increasing the luminance. You see the histogram shifting to the right. It is also the peaks of points that are moving. What I often do to eliminate these artifacts is average it back with an older version. This often gives a more natural look to the final image.

Hi Arthur,

Strictly speaking, deconvolution should have no significant bias effect on average brightness, but depending on the algorithm implementation, or execution, it might. The crucial thing is that deconvolution should really be done on image data in linear gamma space, or with more complicated calculations which do that gamma conversion and back on-the-fly.

What's then left is the possibility that what previously was the lowest/highest pixel value now has a lower/higher value, which may lead to clipping. After all, reduced contrast flattens the local amplitudes/contrast, and restoration will restore those amplitudes.

A specialized application like PixInsight has an additional (to gamma linearization) provision for that, Dynamic Range Extension, which makes it possible to avoid clipped intensities. This is also easier because that program can calculate with 64-bit floating-point precision, which avoids most accumulation and rounding issues.

One can also prevent clipping by using Blend-if layers in Photoshop.
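The gamma round-trip Bart describes can be sketched as follows (a plain power law stands in for a real transfer curve such as sRGB, and the deconvolution step itself is supplied by the caller):

```python
import numpy as np

def deconvolve_linear(img, deconvolve, gamma=2.2):
    """Undo the display gamma, deconvolve in linear light, then
    re-apply the gamma. `deconvolve` is any function mapping a
    linear float image to a linear float image. Also reports how
    many pixels would have clipped outside [0, 1]."""
    linear = np.clip(img, 0.0, 1.0) ** gamma       # gamma-encoded -> linear
    restored = deconvolve(linear)                   # sharpen in linear light
    n_clipped = int(np.count_nonzero((restored < 0.0) | (restored > 1.0)))
    restored = np.clip(restored, 0.0, 1.0)          # guard against clipping
    return restored ** (1.0 / gamma), n_clipped     # linear -> gamma-encoded

# With an identity "deconvolver" the round trip must return the input.
img = np.linspace(0.1, 0.9, 9).reshape(3, 3)
out, n_clipped = deconvolve_linear(img, lambda x: x)
```

The `n_clipped` count is where a Dynamic-Range-Extension-style provision would kick in: instead of hard-clipping, one would rescale the output range before re-applying the gamma.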

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Deconvolution sharpening revisited
« Reply #325 on: June 28, 2014, 03:57:00 am »

Hi,

I made a test using small aperture, f/22 in this case, and sharpen extensively in Focus Magic, it worked amazingly well.

Using external tools with Lightroom breaks my parametric workflow, but I guess that I will use FM any time I print larger than A2 or have some defocus/diffraction issue.

Best regards
Erik


I experimented with repeating unsharp masks and some other sharpening algorithm in Gimp and it's possible to get quite good results with that as well, but the examples you posted are better still, so I guess deconvolution actually works :)
« Last Edit: June 28, 2014, 03:59:08 am by ErikKaffehr »
Logged
Erik Kaffehr
 

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #326 on: June 28, 2014, 12:25:48 pm »

Hi Arthur,

Strictly speaking, deconvolution should have no significant bias effect on average brightness, but depending on the algorithm implementation, or execution, it might. The crucial thing is that deconvolution should really be done on image data in linear gamma space, or with more complicated calculations which do that gamma conversion and back on-the-fly.

What's then left is the possibility that what previously was the lowest/highest pixel value now has a lower/higher value, which may lead to clipping. After all, reduced contrast flattens the local amplitudes/contrast, and restoration will restore those amplitudes.

A specialized application like PixInsight has an additional (to gamma linearization) provision for that, Dynamic Range Extension, which makes it possible to avoid clipped intensities. This is also easier because that program can calculate with 64-bit floating-point precision, which avoids most accumulation and rounding issues.

One can also prevent clipping by using Blend-if layers in Photoshop.

Cheers,
Bart

I checked out PixInsight after seeing you use it. Your results seem better than what I can do with ImagesPlus. In IP you can watch the right tip of the histogram stretch right with iterations of adaptive R-L.

I would like a linear .tif export from RawTherapee to use IP as you describe. On the color tab of RT, at the bottom, there is an output gamma setting. As far as I can tell it does absolutely nothing; whatever settings I pick do not seem to change the image at all. AMaZE is a much better demosaicing algorithm than the one in IP, so I want to continue with RT first.

Another strange thing (tangents everywhere) in RT is that the color-based noise routine seemed very powerful when they first developed it; in the last dozen updates it seems crippled. I can get better color denoise now by going back to Noise Ninja 3, and the Topaz Denoise I use now is far superior. It is all such a hodge-podge of tools that should be integrated with the raw processor. I do not like the RT deconvolution or the new noise system. Does PixInsight work on the raw data? The videos seem to show a complicated workflow.

Somebody will jump in saying Lightroom is the integrated solution. I will avoid a monthly-payment company like the plague. LR might not be in the cloud now, while there is lots of competition, but it seems clear that is the direction Adobe wants for their products. No thanks. If it were billed by the hour, that would be better for casual users.
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #327 on: June 28, 2014, 01:46:37 pm »

I spent some time looking over the PixInsight documentation, as well as their forum. It seems you have hit the motherlode.
They use dcraw, so they should have the same debayer options as RawTherapee.
They have very advanced sharpening, better than ImagesPlus.
They have very advanced noise reduction, better than the Topaz Denoise I just got recently. Probably better than DxO (I can't test that yet).
They do HDR.
They do multi-frame mosaics.
It seems to be color managed.
This is huge.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #328 on: June 28, 2014, 02:52:02 pm »

I checked out PixInsight after seeing you use it. Your results seem better than what I can do with ImagesPlus. In IP you can watch the right tip of the histogram stretch right with iterations of adaptive R-L.

Hi Arthur,

The formally correct way to do deconvolution is in linear gamma space. That means that in IP you should first reverse the gamma (e.g. of the L channel of an LRGB split), then deconvolve that, then re-apply gamma to it, then recombine to an LRGB. IP uses floating-point calculations for its conversions, so losses should be relatively minimal, and not all that significant if based on 16-bit/channel images. That should result in amplitude increases that are independent of exposure level, because the gamma is linear. It may still lead to increased whitepoints or decreased blackpoints, and potential clipping, if the original blurred data was stretched to maximum before deconvolution.

PixInsight is not for the faint of heart; it is a bit like the very professional version of IP, in the sense of being astrophotography-oriented. But it is much more than an image editor: it's a software development and image processing environment, fully color managed and with high-quality scientific algorithms (using floating-point numbers, of course). It is programmed by a group of astronomers based in Spain. They recently developed a very advanced type of noise reduction (total generalized variation, or TGV Denoise), but it's not easy to use (it requires lots of trial and error to optimize the settings), and they are also getting ready to release a novel deconvolution tool (TGVRestoration) that looks extremely promising.

The biggest drawback of PixInsight is its poor documentation for many of its functions, although the documentation that is there is again of high quality. It takes time to write documentation, and they are just too busy writing/improving the software itself. They hope that many of their customers will still find their way, especially if they have a scientific/academic background, and some users have prepared very useful tutorials (although also astronomy-oriented).

Quote
I would like a linear .tif export from raw therapee to use IP as you describe.

I haven't looked into that, but you can 'linearize' in IP by using a gamma conversion. I know, it's an extra step, but it does allow you to do that, and the conversion losses in floating point should be minimal.

Quote
On the color tab of RT, at the bottom, there is an output gamma setting. As far as I can tell it does absolutely nothing; whatever settings I pick do not seem to change the image at all. AMaZE is a much better demosaicing algorithm than the one in IP, so I want to continue with RT first.

I'd have to try that after reading the documentation first, but perhaps using an output profile with a linear gamma space would do the trick.

Quote
Does PixInsight work on the raw data? The videos seem to show a complicated workflow.

Yes, but for the moment the demosaicing options are limited to the bilinear and VNG algorithms (which are not good enough for my taste), and a recently added raw Drizzle approach called 'SuperPixel' that requires many raw input files (incl. separate sets of black frames and offset frames) from slightly displaced (e.g. by atmospheric turbulence) undersampled images to reconstruct a full RGB image from Bayer CFA input. PI also allows you to convert Bayer CFA data (e.g. a dump from dcraw) directly, or to use binning on white-balanced CFA data (dcraw can do that directly).

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #329 on: June 28, 2014, 03:28:28 pm »

I spent some time looking over the PixInsight documentation, as well as their forum. It seems you have hit the motherlode.



Quote
They use dcraw, so they should have the same debayer options as RawTherapee.

Should, but currently only bilinear and VNG are implemented for specific debayer operations, as well as a very complicated 'SuperPixel' Drizzle method. Maybe if they asked Emil Martinec, or the RT team, they could get permission to implement AMaZE.

Quote
They have very advanced sharpening, better than ImagesPlus.

Yes, although FocusMagic is pretty amazing as well. PixInsight is potentially going to be even better than it already is (TGVRestoration), 'soon' to be released after debugging and stability testing is finished.

Quote
They have very advanced noise reduction, better than the Topaz Denoise I just got recently. Probably better than DxO (I can't test that yet).

Topaz Denoise is easier to use and to integrate into a Photoshop-centric workflow, or via their own PhotoFXlab host program (which can also be used as a PS plugin). I haven't tried DxO; its latest denoising method is supposed to be extremely good (and slow), but I don't like the Adobe RGB gamut limitations of DxO (even when tagged as ProPhoto RGB).

Quote
They do HDR

Yes, easy enough with floating point calculations, but the subsequent tonemapping is something completely different. Haven't experimented with it in PI much yet.

Quote
They do multi-frame mosaics

Yes, although their star alignment seems to work best on astro images; for more terrestrial images, PTGui is more photographer-oriented.

Quote
It seems to be color managed

Correct. Also allows to automatically do gamma linearization.

Quote
This is huge.

It'll take some work to get the hang of its specific workflow, but it does make a lot of sense (especially for astrophotography, but in many areas also for 'normal' photography). Its documentation is of high quality, though far from complete.

Cheers,
Bart
« Last Edit: June 30, 2014, 12:05:33 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #330 on: June 30, 2014, 11:34:15 am »

For people not comfortable following the technical discussion on the other forum, they turned this:

http://pteam.pixinsight.com/decchall/bigradient_conv.tif

into

Try doing that with your sharpening software.
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2370
    • The Last Word
Re: Deconvolution sharpening revisited
« Reply #331 on: June 30, 2014, 03:17:07 pm »

The formally correct way to do deconvolution is in linear gamma space. <huge snip>

Bart, I've been doing some work with my camera simulator to try to get a handle on the "big sensels vs small sensels" question.

http://blog.kasson.com/?p=6078

I'm simulating a Zeiss Otus at f/5.6, and calculating diffraction at 450 nm, 550 nm, and 650 nm for the three (or four, since it's a Bayer sensor) color planes. The blur algorithm is diffraction plus a double application of a pillbox filter with a radius (in microns) of 0.5 + 8.5/f, where f is the f-stop.
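The pillbox part of that blur model can be sketched like this; the point-sampling of the disc onto the sensel grid is my own simplification for illustration, not the actual simulator code:

```python
import numpy as np
from scipy.signal import fftconvolve

def pillbox_kernel(f_stop, pitch_um):
    """Disc kernel with radius (in microns) 0.5 + 8.5/f, converted
    to sensels via the pitch and point-sampled on the sensel grid."""
    r = (0.5 + 8.5 / f_stop) / pitch_um            # radius in sensels
    n = int(np.ceil(r))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    k = (x**2 + y**2 <= r**2).astype(float)
    return k / k.sum()                             # normalize to unit gain

def double_pillbox(img, f_stop, pitch_um):
    """Apply the pillbox twice, per the described blur algorithm."""
    k = pillbox_kernel(f_stop, pitch_um)
    return fftconvolve(fftconvolve(img, k, mode='same'), k, mode='same')
```

Note that at f/5.6 the radius is about 2.0 microns, so at a 2.5 um pitch the sampled disc collapses to a single sensel, while at 1.25 um it spans a 5x5 neighborhood; a sub-sensel-accurate model would need to integrate the disc over each sensel area instead.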

I'm finding that 1.25 um sensel pitch doesn't give a big advantage over 2.5 um with this lens. I'm wondering if things would be different with some deconvolution sharpening.

Now that you know my lens blur characteristics, can you give me a recipe for a good deconvolution kernel? Tell me what the spacing is in um or nm, and I'll do the conversion to sensels, which will change depending on the pitch.  I'd do it myself, but I have to admit I've just followed the broad outlines of this discussion, and would prefer not to delve much deeper at this point.

Warning: As this project progresses, I'll be asking for help on upsampling and downsampling algorithms as well. I hope I won't wear out my welcome.

Jim

hjulenissen

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2051
Re: Deconvolution sharpening revisited
« Reply #332 on: June 30, 2014, 03:44:31 pm »

If the convolution kernel is known (due to being a simulation), then that should narrow the list of deconvolution kernels quite a lot, should it not?

In the absence of noise, would not a perfect inversion be optimal (possibly limited so as not to amplify by e.g. +30dB anywhere)? In the presence of noise, it might become a question of what kernel trades detail enhancement vs noise/artifact suppression (Wiener filtering?) in a suitable manner, which might depend on the CFA and the noise reduction... OK, it is hard. My point is that it should be a lot less hard than the real-world case, where the kernel has to be guesstimated locally.
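A minimal sketch of the Wiener filter for this known-kernel case (periodic boundaries and equal array sizes are assumed purely to keep it short; the `nsr` constant plays the role of the gain limit mentioned above):

```python
import numpy as np

def wiener_deconvolve(img, psf, nsr=1e-3):
    """Frequency-domain Wiener filter H* / (|H|^2 + nsr). As nsr -> 0
    this approaches the perfect inverse; a larger noise-to-signal
    ratio caps the amplification, much like a fixed dB gain limit."""
    H = np.fft.fft2(psf, s=img.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

# Demo: blur an impulse with a 3x3 box (circularly), then recover it.
img = np.zeros((16, 16)); img[8, 8] = 1.0
psf = np.zeros((16, 16)); psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf, nsr=1e-6)
```

In a proper Wiener filter `nsr` would be the per-frequency noise-to-signal power ratio rather than a constant, which is exactly the detail-vs-noise trade being discussed.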

-h
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2370
    • The Last Word
Re: Deconvolution sharpening revisited
« Reply #333 on: June 30, 2014, 03:57:00 pm »

If the convolution kernel is known (due to being a simulation), then that should narrow the list of deconvolution kernels quite a lot, should it not?

In the absence of noise, would not a perfect inversion be optimal (possibly limited so as not to amplify by e.g. +30dB anywhere)? In the presence of noise, it might become a question of what kernel trades detail enhancement vs noise/artifact suppression (Wiener filtering?) in a suitable manner, which might depend on the CFA and the noise reduction... OK, it is hard. My point is that it should be a lot less hard than the real-world case, where the kernel has to be guesstimated locally.

You're probably right. Are you telling me that I'm just being lazy asking Bart to do the work for me?  :)

There is noise, and ever more noise as the sensel pitch gets smaller. I can post a link to some image files if anyone wants to look at the noise.

Jim

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #334 on: June 30, 2014, 08:33:02 pm »

Bart, I've been doing some work with my camera simulator to try to get a handle on the "big sensels vs small sensels" question.

http://blog.kasson.com/?p=6078

Hi Jim,

I'll have to catch up on reading the latest developments. I've been following the DPReview discussion from a distance, so I do have some idea about the train of thought you're following.

Quote
I'm simulating a Zeiss Otus at f/5.6, and calculating diffraction at 450 nm, 550 nm, and 650 nm for the three (or four, since it's a Bayer sensor) color planes. The blur algorithm is diffraction plus a double application of a pillbox filter with a radius (in microns) of 0.5 + 8.5/f, where f is the f-stop.

This is where I'd have to seriously adjust my thinking to what you (or MATLAB) are doing exactly, which may take some time. That's because what I generally do is take a luminance-weighted average of the (presumed) peak transmissions (450nm, 550nm, 650nm) of the R/G/B CFA filters (because luminance is what most good demosaicing algorithms optimize for; they are generally unknown, unlike in your case), and take that as input for a single two-dimensional diffraction pattern at 564nm (564.05nm, if we use the weights R=0.212671, G=0.71516, B=0.072169). I then integrate that diffraction pattern at each sensel aperture over the surface of the fill-factor (usually 100%, assuming gap-less microlenses).

The reason that I reduce the problem to a single weighted-average luminance diffraction pattern is that deconvolution is usually performed on the luminance channel (e.g. CIE Y in the case of PixInsight, or L of LRGB in ImagesPlus). It can also reduce the processing time to almost one third compared to an R+G+B deconvolution cycle. I can understand that for your model you would need to keep separate diffraction patterns per CFA color.

A 2-D kernel that includes the third Bessel zero of the Airy disc pattern usually accounts for most (at least 93.8%) of the energy of the full diffraction pattern. This diffraction kernel is then used, e.g. as an image or as a mathematical PSF, depending on the required input of the deconvolution algorithm. Attached are two kernels in data form, one for 2.5 micron pitch and one for 1.25 micron pitch, both for a 100% fill-factor, weighted luminance at 564nm, and a nominal f/5.6 aperture.
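For illustration, a kernel of this kind can be approximated by point-sampling the Airy intensity pattern out to the third zero and normalizing. Note the hedge: Bart integrates the pattern over each sensel's fill-factor area, whereas this sketch merely samples it at sensel centers:

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def airy_kernel(f_number, wavelength_um, pitch_um):
    """Airy intensity I(x) = (2*J1(x)/x)^2, sampled at sensel centers
    out to the third zero of the pattern and normalized to sum 1."""
    third_zero = 10.1735                         # 3rd zero of J1(x)/x
    r_max_um = third_zero * wavelength_um * f_number / np.pi
    n = int(np.ceil(r_max_um / pitch_um))
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    r_um = np.hypot(x, y) * pitch_um             # radius in microns
    xx = np.pi * r_um / (wavelength_um * f_number)
    with np.errstate(invalid='ignore', divide='ignore'):
        k = (2.0 * j1(xx) / xx) ** 2
    k[r_um == 0.0] = 1.0                         # central limit of I(x)
    k[r_um > r_max_um] = 0.0                     # truncate past 3rd zero
    return k / k.sum()

# Weighted-luminance wavelength 0.564 um, f/5.6, 2.5 micron pitch.
k = airy_kernel(5.6, 0.564, 2.5)
```

At this pitch the truncation radius of roughly 10.2 microns gives an 11x11 kernel; halving the pitch to 1.25 um doubles the kernel width in sensels, which is why diffraction dominates so strongly at small pitches.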

Quote
I'm finding that 1.25 um sensel pitch doesn't give a big advantage over 2.5 um with this lens.

Off-the-cuff, I'd assume that's due to the size of the diffraction pattern, which will dominate at such small pitches.

Quote
I'm wondering if things would be different with some deconvolution sharpening.

I assume they would, due to the dominating influence of diffraction (+ fill-factor blur, and/or OLPF). How much can be restored remains to be seen and depends on the system MTF at the various spatial frequencies, but an OTUS would significantly increase the probability of being able to restore something, especially on a high Dynamic range sensor array.

Quote
Now that you know my lens blur characteristics, can you give me a recipe for a good deconvolution kernel? Tell me what the spacing is in um or nm, and I'll do the conversion to sensels, which will change depending on the pitch.  I'd do it myself, but I have to admit I've just followed the broad outlines of this discussion, and would prefer not to delve much deeper at this point.

From what I remember of reading about the earlier development of your simulation model, I'd have to assume that a Dirac delta function convolved with your specific R/G/B CFA diffraction patterns, and subsequently with the pillbox filters, should provide an exact (for your model) deconvolution filter.

If one were to follow my approach as described above, for f/5.6 (actually 4*Sqrt[2]=5.65685), at 564nm, for a 100% fill-factor, at a 2.5 micron and a 1.25 micron sensel pitch (at infinity focus), I've added two data files with kernel weights per pixel. They need to be normalized to a sum total of 1.0, or converted to an image (e.g. with ImageJ: import as a text image and save to a 16-bit TIFF or a 32-bit float FITS), before being used as a deconvolution kernel.

Quote
Warning: As this project progresses, I'll be asking for help on upsampling and downsampling algorithms as well. I hope I won't wear out my welcome.

No problem, if in a thread that's more appropriate to that subject.

Cheers,
Bart
« Last Edit: July 01, 2014, 04:17:02 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #335 on: July 11, 2014, 11:14:11 am »

Attached is a shot at 3600mm (1200mm scope, 2x Barlow, Sony A55 with 1.5 crop factor). After processing it looks a bit like a PS layers job. The details follow.

After importing from the camera I just opened one of many shots; I did not check to see if it is the sharpest. Anyway, I converted in RT, and it looked OK. I imported it into ImagesPlus, reversed the gamma, split the channels, and deconvolved the luma channel: 5 @ 7x7, 10 @ 5x5, 30 @ 3x3, all adaptive R-L. I recombined and moved the gamma back to 2.2. Then I denoised; the background, which was completely OOF to begin with, had started to look "sharp" with a grain texture. The problem is that the outline of the bird has the colors of the bird slightly outside its area. The image looks very much like a paste job, which it is not; it is as shot. It feels like I need to deconvolve the colors to keep them in position with the luma data.
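For reference, the core update behind those R-L passes, in its plain non-adaptive form (ImagesPlus's adaptive variant adds per-pixel damping on top of this), looks roughly like:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(img, psf, n_iter=30):
    """Plain Richardson-Lucy deconvolution: repeatedly compare the
    blurred estimate against the data and multiply the estimate by
    the back-projected ratio. Inputs are non-negative float arrays."""
    est = img.copy()
    psf_flip = psf[::-1, ::-1]                   # adjoint of the blur
    for _ in range(n_iter):
        blurred = fftconvolve(est, psf, mode='same')
        ratio = img / np.maximum(blurred, 1e-12)
        est = est * fftconvolve(ratio, psf_flip, mode='same')
    return est

# Demo: blur an impulse with a 3x3 box and partially recover it.
truth = np.zeros((16, 16)); truth[8, 8] = 1.0
psf = np.full((3, 3), 1.0 / 9.0)
blurred = fftconvolve(truth, psf, mode='same')
est = richardson_lucy(blurred, psf, n_iter=30)
```

Running several passes with shrinking kernels, as described above, amounts to calling this with a different `psf` each time on the previous result.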

I have read in several places to deconvolve luma only, but it seems a bit off. What are the issues with deconvolving the color channels?
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8913
Re: Deconvolution sharpening revisited
« Reply #336 on: July 11, 2014, 11:52:28 am »

Attached is a shot at 3600mm. (1200mm scope, 2x barlow, Sony A55 1.5 crop factor) After processing it looks a bit like a PS layers job. The details follow.

Hi Arthur,

It's either a very steady tripod you used, or good deconvolution (or both).

Quote
After importing from the camera I just opened one of many shots; I did not check to see if it is the sharpest. Anyway, I converted in RT, and it looked OK. I imported it into ImagesPlus, reversed the gamma, split the channels, and deconvolved the luma channel: 5 @ 7x7, 10 @ 5x5, 30 @ 3x3, all adaptive R-L. I recombined and moved the gamma back to 2.2. Then I denoised; the background, which was completely OOF to begin with, had started to look "sharp" with a grain texture. The problem is that the outline of the bird has the colors of the bird slightly outside its area. The image looks very much like a paste job, which it is not; it is as shot. It feels like I need to deconvolve the colors to keep them in position with the luma data.

I have read in several places to deconvolve luma only. It seems a bit off. What are the issues with deconvolving the color channels?

The L channel (from an L/RGB split in ImagesPlus) is a luminance-weighted average of the R/G/B channels, so the (sharpened) luminance weights are also redistributed to the original R/G/B channels upon recombining them, with a lower probability of over-processing any of them. It is of course possible that one of the channels was significantly poorer (or better) than both others to begin with. In such a rare case, it could help to process the R/G/B channels separately with different settings.

So, other than giving more control (at the cost of more work per channel and 3x the processing time), the results could be quite similar. As long as you are in linear gamma, not that much can go wrong; it's like remixing the light itself.
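One common way to implement that redistribution (a sketch of the idea, not ImagesPlus's exact LRGB math) is to scale each channel by the ratio of sharpened to original luminance:

```python
import numpy as np

# The same luminance weights quoted earlier in the thread.
W = np.array([0.212671, 0.71516, 0.072169])

def recombine_luminance(rgb, l_sharp, eps=1e-8):
    """Scale R, G and B by the ratio of new to old luminance, so a
    sharpening gain on L is redistributed over all three channels."""
    l_old = rgb @ W                              # per-pixel luminance
    gain = l_sharp / np.maximum(l_old, eps)
    return rgb * gain[..., None]

# A flat grey patch (L = 0.4) whose luminance was boosted to 0.5:
rgb = np.full((2, 2, 3), 0.4)
l_sharp = np.full((2, 2), 0.5)
out = recombine_luminance(rgb, l_sharp)
```

Because all three channels get the same per-pixel gain, hue is preserved and no single channel is sharpened harder than the others, which is the lower over-processing risk described above.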

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #337 on: July 11, 2014, 08:13:21 pm »

It's mostly the mount. The 10" dob is a massive unit, so you have to set it up near the road. This spot is at the side of the highway, with a creek/small river at the bottom of a coulee (strange word for a ravine). The ospreys are on a very big power pole spanning the creek. I cover the dob with a silver thermal blanket to reflect the heat; that, and the incline down from the road, help shelter the unit from the wind. The A55 is very light, with no mirror moving, and the shutter is a tiny mass compared to the locked-down dob. I carry everything over the steel road barricade, then set up my folding chair with a cable release. Apart from the effort of setup, it is really quite easy: you sit there watching your live view, pressing your cable release whenever you want. If you are interested in running your testing software on a raw file to see the difference between the D600 and A55, with and without a 2x Barlow, I can email you some raws. I think you will do a better deconvolve than I did, and a sample of that would help me see how much more I need to work on it.
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #338 on: July 11, 2014, 09:11:18 pm »

Here is an unsharpened sample opened at default settings in RT.

Some will say the quality is not like being up close with a 200mm f/2, for example. Yes, no question about it. At the same time, you would not get shots of chicks being fed if you were close enough to bother them. The father came back with a fish and dropped it off; then the mother (I assume) started feeding them. The dad flew off to sit on a fence post, watching me watching them.

PS the flies buzzing the fish was a bonus. ;)
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1172
Re: Deconvolution sharpening revisited
« Reply #339 on: July 12, 2014, 02:25:23 pm »

Off topic: this video shows the mount stability. 1200mm focal length with APS-C = 1800mm equivalent, or 1.1 degrees of arc for the whole frame.

http://youtu.be/ZtWYOCiKSbw

Heat ripple is as big a problem as the mount stability.
Logged