Apologies for hijacking this thread a little, but I'm curious whether deconvolution sharpening and the evolution of computational imaging might eventually overcome much of the problem with diffraction.
No problem; in fact the question is quite relevant. The challenge is to address the combined effects of several sources of blur, in a simple interface, yet with plenty of control over the process.
I would assume it would be much more challenging than resolving the issues from an AA filter, since it would require each individual lens design to be carefully tested, and then some method to apply that information to the file; it would perhaps also require data for every possible f/stop and, for zoom lenses, for specific zoom settings. But it seems the theory of restoring the data as it is spread to adjacent pixels isn't much different from what happens with an AA filter.
Correct, diffraction has a different PSF shape than e.g. defocus, yet in practice we need a mix of both (in addition to addressing residual optical and OLPF-induced blur). The point spread function (PSF) is just a mathematical description of the blur, and it is used to reverse the blur's effect.
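As an aside for the technically inclined: the diffraction PSF of a circular aperture is the Airy pattern, which depends only on wavelength and f-number, so it can be computed rather than measured. A quick numpy/scipy sketch (my parameter values, e.g. the 4 µm pixel pitch, are purely illustrative):

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def airy_psf(size, wavelength_um=0.55, f_number=32.0, pixel_pitch_um=4.0):
    """Sampled Airy-pattern PSF for a circular aperture.

    Intensity: I(r) = (2*J1(x)/x)**2, with x = pi*r / (wavelength * N),
    where r is the radial distance in the image plane and N is the f-number.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.hypot(x, y) * pixel_pitch_um            # radius in micrometres
    arg = np.pi * r / (wavelength_um * f_number)
    arg[half, half] = 1e-12                        # avoid 0/0 at the centre
    psf = (2.0 * j1(arg) / arg) ** 2
    psf[half, half] = 1.0                          # limit of the pattern as x -> 0
    return psf / psf.sum()                         # normalise to unit energy

psf = airy_psf(33)  # a 33x33 kernel for f/32 at 550 nm
```

The first dark ring sits at a radius of 1.22 * wavelength * f-number (about 21 µm at f/32 for green light), which is why diffraction at small apertures spreads detail across several pixels on a fine-pitched sensor.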
I know I have many times stopped down to f/22 (or beyond), and Smart Sharpen seems to work quite well, even when printing large prints.
The difficulty with (deconvolution) restoration of a signal is twofold. First, there has to be enough signal-to-noise ratio to have something to restore. If detail is blurred too far, i.e. it has fused with its surroundings, then it will be impossible to lift it up from the background. Second, we are always faced with noise; even light itself is noisy (photon shot noise). When signal levels are reduced down to the noise level, the restoration needs a way to discriminate between signal to amplify and noise not to amplify. That's a challenge.
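The shot-noise point is easy to verify numerically: photon counts follow a Poisson distribution, so the SNR grows only as the square root of the photon count. A quick sketch (my numbers, not measured data):

```python
import numpy as np

# Photon arrival is Poisson distributed: variance equals the mean count,
# so SNR = mean / std = sqrt(mean). Darker signals are inherently noisier.
rng = np.random.default_rng(0)
snr = {}
for mean_photons in (10, 100, 10_000):
    samples = rng.poisson(mean_photons, size=200_000)
    snr[mean_photons] = samples.mean() / samples.std()
```

With 10 photons per pixel the SNR is only about 3, while at 10,000 photons it is about 100; restoration has to avoid amplifying that noise floor along with the faint detail sitting near it.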
Currently there are a few algorithms that can do such a task with reasonable success, but there are limits to what can be achieved. One algorithm that's popular, but not necessarily the best, is the Richardson-Lucy restoration algorithm. It was used to improve the Hubble Space Telescope images, and the adaptive variety of the RL restoration addresses the noise amplification issue with visible improvement of the S/N ratio. One of its drawbacks can be that it is processing-intensive, therefore slow, and its success also depends on a decent estimate of what the PSF should look like. Other, so-called blind deconvolution algorithms attempt to find the optimal PSF shape as part of the process, but they tend to have difficulty separating the noise from the enhancement.
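For the curious, here is what a bare-bones Richardson-Lucy iteration looks like. This is a minimal numpy/scipy sketch with no noise regularisation (the adaptive variants mentioned above add damping on top of this), and the little Gaussian test PSF is just a stand-in for a measured or computed one:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution, no regularisation.

    Multiplicative update: estimate <- estimate * correction, where the
    correction is (observed / blurred_estimate) convolved with the
    mirrored PSF. Non-negativity is preserved automatically.
    """
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred_est = fftconvolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred_est, eps)  # eps guards div-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode='same')
    return estimate

# Toy demo: blur a synthetic step edge with a small Gaussian PSF, then restore.
x = np.zeros((64, 64))
x[:, 32:] = 1.0
yy, xx = np.mgrid[-3:4, -3:4]
g = np.exp(-0.5 * (xx**2 + yy**2) / 1.5**2)
g /= g.sum()
blurred = fftconvolve(x, g, mode='same')
restored = richardson_lucy(blurred, g, iterations=50)
```

On this noiseless toy image the restored edge is visibly steeper than the blurred one; with real sensor noise the same iteration starts amplifying grain after a while, which is exactly where the adaptive/damped variants earn their keep.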
Curious is good, it's the start of progress.
So, another attempt to address diffraction blur might be in order. Diffraction blur can actually help to reduce moiré, because it kills high spatial frequencies before discrete sampling takes place, but we are confronted with it mostly when we want to add DOF to a scene. Therefore it has both useful (artifact reduction and artistic control) and detrimental (diffraction blur of the focused micro-detail) effects. Wouldn't it be nice if the drawbacks could be reduced? Well, they can (up to a point).
I'll prepare an image, and add an f/32 (let's up the ante) diffraction blur, and post it later. We can then see what the various methods can restore, and what the limitations are.