Hi,
Yesterday evening I made a picture of a desert scene that was lit in such a way as to bring out lots of intricate texture. I assembled a panoramic stitch, and the master file measures approximately 9600 pixels high by 29000 pixels wide.
I used Photoshop to down-res the image to 2400x7200 pixels so I could make a JPEG preview copy, and found that the "details" became too "busy" and harsh. I do this sort of process often, but this time, because of the frequency and lighting characteristics of the natural texture, the image looked aggressively hyped. Viewing it made me uncomfortable.
I had used a bit of deconvolution sharpening and some slight contrast tweaking, but no output sharpening on the master file.
I ended up using Topaz Detail to soften the small details after I downsized and before I saved to JPEG.
I have tried to keep up with up-res technologies, but this experience led me to wonder: are there any more advanced down-res processes?
Thank you.
Hi,
Which panorama stitcher did you use? PTGui offers a choice of downsampling algorithms (Lanczos2 (Sinc16) is probably a good choice). Even if you edited the pano in e.g. Photoshop, you could try re-importing the final result back into PTGui, reducing the output size there, and saving that without any adjustments.
Alternatively, you could try using Capture One. I don't know if it has a maximum file-dimension limit, but if you import the resulting pano and save it at a smaller size, it will do a good job. C1 incorporates some of the insights gained from my tests here on LuLa on writing optimal up/downsampling heuristics using ImageMagick functionality (https://forum.luminous-landscape.com/index.php?topic=91754.0).
Lightroom also offers good downsampling quality, but it has file-dimension limitations.
A crude way of downsampling in Photoshop would be to Gaussian pre-blur the image before downsampling with Bicubic. The blur radius would need to be dimensioned as:
radius = 0.25 * downsampling factor (reducing to 1/3rd of the dimensions would be a factor of 3, so a radius of 0.75).
Algorithms that do not use optimized filters will produce downsampling artifacts such as aliasing.
Cheers,
Bart