I have noticed a trend recently of over-sharpening to the point of nasty artifacts, with smooth areas taking on an orange-peel effect. Why do people do this?
- 1. Because they can.
- 2. Because the sharpening tool doesn't warn them when they go too far.
I'm not trying to be a smart aleck here; this is serious. Nothing is stopping them, so why would they stop?
The follow-up question then becomes: if they knew when to stop, would they?
Can they not see what they are doing to their images?
Actually, that's part of the issue. When is enough, enough? The sharpening tool will not warn you: not when more is needed, and not when it is overdone. It could, but it doesn't. So we have to judge by eye, and we're really not that good at pinpointing the optimum setting by eye. One can gain experience by trial and error, but that is not a very efficient way to find out.
Let's address the issue in an analytical way.
The maximum possible sharpness, without artifacts, is reached when two adjacent pixels (or an edge) jump exactly from the minimum possible brightness to the maximum possible brightness, e.g. from 0 to 255 in an 8-bit channel. That assumes the original subject was such a sharp edge, with maximum contrast: no smooth transition, just an abrupt jump. That is only possible when these pixels or edges happen to be aligned exactly with the pixel grid. If we were to shift the image half a pixel, the edge transition in brightness would go from 0% to 50% to 100%. That is consistent with the Shannon/Nyquist principle, which says it takes more than 2 pixels to reliably reconstruct 1 cycle (~line pair). Likewise, the brightness of pixels along a slanted edge will land anywhere between 0% and 100%, depending on the ratio in which the edge divides each pixel.
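The half-pixel-shift effect can be sketched in a few lines of Python, by area-sampling an ideal 0-to-255 step edge over unit-width pixels (a simplified model that ignores lens blur and demosaicing):

```python
def sample_edge(edge_pos, pixels=range(-2, 3)):
    """Area-sample an ideal 0 -> 255 step edge located at x = edge_pos.

    Each pixel i integrates the scene over the interval [i, i+1); its value
    is 255 times the fraction of that interval lying to the right of the edge.
    """
    return [255 * min(max(i + 1 - edge_pos, 0.0), 1.0) for i in pixels]

print(sample_edge(0.0))  # edge on a pixel boundary: [0.0, 0.0, 255.0, 255.0, 255.0]
print(sample_edge(0.5))  # shifted half a pixel:     [0.0, 0.0, 127.5, 255.0, 255.0]
```

With the edge aligned to the pixel grid the transition is a clean 0 to 255 jump; shifted half a pixel, one pixel straddles the edge and ends up at 50%, exactly as described above.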
Natural images (taken with lenses, area-averaging sensels, and demosaicing) cannot make such a steep transition; there will always be a slightly gradual one. That transition (AKA Edge Spread Function, or ESF) is at best equal to a Gaussian blur with a radius of 0.323, as plotted in the following chart:
Now that we know that the theoretically best possible artifact-free sharpness corresponds to a Gaussian blur of 0.323, we can determine the maximum possible contrast between any pixel and its neighbor. For that we can take a single white pixel, say RGB[255,255,255], on a black background, RGB[0,0,0], and blur (convolve) it with a 0.323 Gaussian blur (the Point Spread Function, or PSF).
That sets the central pixel to 196.7591 and its brightest neighbors to 13.6177, a ratio of 14.45:1 for a horizontal/vertical neighbor. That is therefore the highest possible per-pixel contrast; anything more is a signal of over-sharpening. It also assumes a perfectly aligned edge; the contrast is halved when the edge falls exactly halfway between two pixels. Therefore the optimal sharpness zone lies between a per-pixel contrast of 7.23:1 and 14.45:1, but never more than the latter.
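Those numbers can be reproduced with a short Python sketch, under two assumptions on my part: that the 0.323 blur "radius" is the Gaussian's standard deviation (sigma), and that the Gaussian is integrated over each pixel's area (expressed via the error function):

```python
import math

SIGMA = 0.323  # assumption: the blur radius equals the Gaussian standard deviation

def pixel_weight(i, sigma=SIGMA):
    """Integral of a unit 1-D Gaussian over pixel i, i.e. the interval [i-0.5, i+0.5]."""
    s = sigma * math.sqrt(2.0)
    return 0.5 * (math.erf((i + 0.5) / s) - math.erf((i - 0.5) / s))

# The 2-D PSF is separable, so each pixel's weight is a product of two 1-D weights.
# Convolving a single 255 pixel on black with this PSF gives:
center   = 255 * pixel_weight(0) ** 2               # the white pixel itself
neighbor = 255 * pixel_weight(0) * pixel_weight(1)  # a horizontal/vertical neighbor

print(f"center:   {center:.4f}")            # ~196.75
print(f"neighbor: {neighbor:.4f}")          # ~13.62
print(f"ratio:    {center / neighbor:.2f}") # ~14.45
```

The last decimals depend on exactly how a given tool maps its radius setting to sigma, hence the hedging, but with this mapping the values land within a few hundredths of the figures quoted above, and the 14.45:1 ratio (and its half, 7.23:1) follows directly.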
That should make it relatively easy for a modern software producer to add an indicator zone to their sharpening dialog that signals optimal versus over-sharpening. Unfortunately, they seem to be hibernating instead (or worse).
For printed output, one might want to over-sharpen a bit at the native printing resolution, to pre-compensate for diffusion losses in the output medium and/or due to the specific printing technology.