No, it won't look better.
But I want to make full use of all the megapixels, especially when shooting landscapes at longer focal lengths. And, if I need to shoot at f/32 to make use of all the available resolution, I'd like to be able to remove diffraction-related loss of resolution using software. Diffraction, after all, follows well-known laws of physics, so it can be done.
Hi,
I understand what you are saying, but allow me to underline a few issues and opportunities. A sensor with denser sampling, i.e. more photosites per unit area, will extract more resolution from a given lens. The diffraction blur on the sensor surface stays the same size for a given aperture number, but because each pixel is smaller, per-pixel resolution suffers more from that blur (lower contrast and loss of fine detail). Issues like camera shake also become more significant at the pixel level.
However, with more samples of the blur, deconvolution software has a better opportunity to restore the original signal from the scene as it was before the lens/aperture blurred it, and the denser sampling also reduces aliasing artifacts. Nevertheless, there is a limit to how much diffraction can be restored: that limit is set by physics and cannot be beaten, unlike the blur below the limit, which can be improved.
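To illustrate the restoration opportunity, here is a minimal 1-D sketch of Wiener deconvolution with NumPy. This is my own toy example (a box-filter blur standing in for the diffraction PSF, and an assumed regularization constant), not the implementation of any particular raw converter:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-6):
    """Frequency-domain Wiener deconvolution with a known PSF.

    k is an assumed regularization constant standing in for the
    noise-to-signal power ratio; it limits amplification at frequencies
    where the transfer function (the MTF) is near zero -- content at
    frequencies the optics removed entirely cannot be recovered.
    """
    H = np.fft.fft(psf, n=len(blurred))
    G = np.fft.fft(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k)
    F_est = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft(F_est))

# Toy example: a step edge blurred by a 5-sample box "PSF".
signal = np.zeros(64)
signal[32:] = 1.0
psf = np.ones(5) / 5.0
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, n=64)))
restored = wiener_deconvolve(blurred, psf)
```

With a noise-free blur and a known PSF the step edge comes back almost exactly; with real sensor noise a larger k would be needed, and the lost-frequency limit discussed above still applies.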
Assuming a green wavelength of, say, 555 nm and a circular aperture, at f/32 the diffraction cutoff frequency (where the MTF drops to zero response) is limited to 1 / (0.000555 mm × 32) ≈ 56.3 cycles/mm. That equals the Nyquist frequency of a sensor array with an 8.88 micron sensel pitch, so as far as resolution goes, we might as well use a lower resolution camera.
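The arithmetic behind those numbers can be checked in a few lines (just the cutoff formula above and the Nyquist relation, nothing camera-specific):

```python
# Diffraction cutoff frequency for a circular aperture:
# f_cutoff = 1 / (wavelength * N), with wavelength in mm.
wavelength_mm = 0.000555   # 555 nm green light
f_number = 32
f_cutoff = 1.0 / (wavelength_mm * f_number)   # cycles/mm

# Sensel pitch whose Nyquist frequency (1 / (2 * pitch)) equals that cutoff:
pitch_mm = 1.0 / (2.0 * f_cutoff)

print(f"{f_cutoff:.1f} cycles/mm, {pitch_mm * 1000:.2f} micron pitch")
# prints "56.3 cycles/mm, 8.88 micron pitch"
```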
Cheers,
Bart