I'm glad to see you on LuLa. Your ImagingTechnology site is excellent, but unfortunately the forum is not that active.
Hi Bill, I'm glad to be here. Just to make sure there is no misunderstanding, I'm not associated with Norman Koren's site (although we exchanged DSP info and other technicalities in the past). Norman has a great collection of knowledge available to all on his site, and he's a very nice person. His forum is just one method for maintaining contacts with mainly the imaging industry.
Jeff's own book states (page 84), "...the secret is to keep the size of the [sharpening] halos below the threshold of visual acuity at the intended viewing distance--this is where the size of the pixels on the output becomes a critical factor."
Yes, that's only too obvious (as long as you can't see it ...), although I'd prefer to avoid halos as much as possible in the first place. One cause for halos is repeated sharpening passes, combined with a poor resampling algorithm.
He then goes on to say that for smaller reproductions such as those used in his book (no larger than 5 x 7 inches), he tries to keep the sharpening halos to 0.01 inch, whereas for larger reproductions one can go up to 0.02 inch. If one were viewing a larger print at the same distance as the smaller one, then it would seem that one should use the same sharpening parameters as for the smaller print. One could cut away the peripheral portion of the larger print, as it would be outside the eye's field of sharp vision in any event.
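As a sanity check, one can convert those halo widths into visual angles. A small sketch (the viewing distances are my illustrative assumptions, not Jeff's numbers):

```python
import math

def halo_angle_arcmin(halo_inch, viewing_inch):
    """Visual angle (in arcminutes) subtended by a halo of the given
    width when viewed from the given distance."""
    return math.degrees(math.atan(halo_inch / viewing_inch)) * 60

# A 0.01 inch halo viewed from 12 inches:
small = halo_angle_arcmin(0.01, 12)   # a bit under 3 arcmin
# A 0.02 inch halo viewed from twice the distance subtends the same angle:
large = halo_angle_arcmin(0.02, 24)
print(round(small, 2), round(large, 2))
```

Doubling both the halo width and the viewing distance leaves the visual angle unchanged, which is consistent with the intuition that the larger print's sharpening tolerance assumes a proportionally larger viewing distance.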
I think that output sharpening should be steered by the characteristics of the output modality, not by tolerable halos as a guiding principle. The printing process (whether inkjet, photochemical, printing press, or whatever else) will add its particular loss of detail (loss of MTF response). It's that loss that we strive to pre-compensate for. Compensating for lack of capture resolution is another subject, although in a parametrised workflow it can take place together with the final resampling and output sharpening, while avoiding cumulative artifact amplification.
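One way to sketch this kind of pre-compensation is an inverse filter in the frequency domain. The Gaussian output MTF model, its width, and the boost ceiling below are illustrative assumptions for a one-dimensional example, not a measured printer characterisation:

```python
import numpy as np

def precompensate(row, sigma_cyc, max_boost=3.0):
    """Boost spatial frequencies by the inverse of an assumed Gaussian
    output MTF, clipped so high frequencies (and noise) are not
    amplified without bound."""
    n = len(row)
    f = np.fft.rfftfreq(n)                    # cycles per pixel
    mtf = np.exp(-(f / sigma_cyc) ** 2 / 2)   # assumed output MTF
    boost = np.minimum(1.0 / mtf, max_boost)  # clipped inverse filter
    return np.fft.irfft(np.fft.rfft(row) * boost, n)

edge = np.repeat([0.2, 0.8], 32)              # a soft test edge
sharpened = precompensate(edge, sigma_cyc=0.15)
```

After the output process blurs the print, the boosted frequencies are knocked back down toward the original response; the clipping is what keeps the overshoot (halo) bounded.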
The link to Norman Koren's site regarding spatial contrast sensitivity is most appropriate for further discussion. Although the eye can resolve 60 lines per degree of arc, the frequencies around 6 cycles per degree contribute most to perceived image quality and this is the basis of the SQF measurements that Mr. Koren comments on and has incorporated into Imatest. In modern imaging using MTF, it is not sufficient to state resolution without also specifying the contrast.
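The mapping between frequencies on the print and frequencies at the retina is simple geometry. A small sketch (the 500 mm viewing distance is an illustrative choice):

```python
import math

def print_freq_to_cpd(cycles_per_mm, viewing_mm):
    """Convert a spatial frequency on the print (cycles/mm) to cycles
    per degree of visual angle at the given viewing distance."""
    mm_per_degree = 2 * viewing_mm * math.tan(math.radians(0.5))
    return cycles_per_mm * mm_per_degree

# At 500 mm, roughly 0.69 cycles/mm on the print lands on the
# 6 cycles/degree region that contributes most to perceived quality:
for cpmm in (0.5, 0.69, 1.0, 2.0):
    print(cpmm, round(print_freq_to_cpd(cpmm, 500), 2))
```

The same print frequency maps to more cycles per degree as the viewing distance grows, which is why the "frequencies that matter" shift with how the print is viewed.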
Exactly, and the notion of contrast sensitivity also leads to useful remedies to compensate for output losses, or even remedy shortcomings of the capture chain. In postprocessing we can manipulate the MTF of the final image around certain spatial frequencies, which can result in a higher perceived resolution (without artifacts), as long as we address the correct spatial frequencies, the ones that matter. Viewing distance does matter and, although we cannot always cater for every possible scenario, we can optimize our final MTF for the most probable situations.
Just as with an aberrated camera lens, the MTF of the eye is improved by stopping down (smaller pupil size). The graph by Prof. Girod on Koren's link demonstrates that the MTF of the eye at 30 cycles per degree (60 lines/degree) is only about 17% at a pupil size of 5.8 mm but improves to around 38% at a pupil size of 2 mm. However the human visual system exhibits maximal spatial contrast sensitivity at about 6 cycles/degree.
Yes, this tells us not only that viewing conditions matter, but also where our postprocessing can have the most impact. It also tells us that a purely PPI-centred compensation for sharpening losses is problematic, to say the least, because it ignores the impact of viewing distance. BTW, personally I focus on an 8 cycles/degree eye contrast criterion, partly because some people have better than average eyesight.
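For what it's worth, the classic Mannos-Sakrison approximation of the contrast sensitivity function peaks close to that 8 cycles/degree figure. A quick numerical check (the formula is from the literature; the sampling grid is mine):

```python
import math

def csf_mannos(f):
    """Mannos-Sakrison contrast sensitivity approximation,
    with f in cycles per degree of visual angle."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

freqs = [i / 10 for i in range(1, 600)]  # 0.1 .. 59.9 cycles/degree
peak = max(freqs, key=csf_mannos)
print(round(peak, 1))  # close to 8 cycles/degree
```

The function also falls off steeply above the peak, which is why detail placed well beyond 30 cycles/degree contributes so little to perceived sharpness.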
Since the visual response is so complicated, optimal output sharpening parameters are best determined empirically, and I understand that Bruce Fraser printed thousands of images to determine the best approach, which was incorporated into PKSharpener.
Although I don't dismiss empirical determination, I prefer to use it as a last resort. The problem with empirically derived solutions is that we can't learn about the underlying processes, and thus risk overlooking the important principles (the ones we can use to our advantage).