This sounds interesting. I guess you start by taking several images at each pano position, align them, reverse-compute the displacements that were applied, and somehow use that information to re-compute higher-quality pixels by weighted averaging?
I knew you'd be interested
Super resolution exploits the tiny sub-pixel movements between otherwise identical images shot from virtually the same position. Those movements are caused by small camera displacements (even on a tripod; magnify a Live View image fragment and gently push the tripod in one direction to see it clearly) and by atmospheric effects (mostly at a distance). Handheld images will always have some small position changes.
The effect is that even a tiny detail that falls 'between' two pixels in one image may fall in the center of a pixel in another image. Spreading the same detail over one pixel versus several makes a big difference in rendered micro-contrast. Taking multiple slightly displaced samples allows us to statistically determine a more probable rendering of the original detail.
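As a toy illustration of that effect (all numbers invented, not from any real sensor), here is a 1-D sketch: a detail exactly one sensor pixel wide lands on a pixel boundary in one capture, but fills a single pixel in a half-pixel-shifted capture, doubling its rendered contrast:

```python
import numpy as np

# A 1-D "scene" at 8x sensor resolution: one bright detail exactly one
# sensor pixel (8 scene samples) wide, straddling a pixel boundary.
scene = np.zeros(16 * 8)
scene[60:68] = 1.0

def capture(offset):
    # Each of the 16 sensor pixels integrates 8 scene samples;
    # `offset` (in scene samples) models a sub-pixel camera displacement.
    idx = np.arange(16)[:, None] * 8 + np.arange(8) + offset
    return scene[idx % scene.size].mean(axis=1)

a = capture(0)  # detail falls between two pixels: two pixels at 0.5
b = capture(4)  # half-pixel shift: detail fills one pixel at 1.0
print(a.max(), b.max())  # 0.5 1.0
```

Both captures record the same total energy, but only the shifted one renders the detail at full micro-contrast, which is exactly the information a stack of displaced frames lets us recover.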
There are known methods from astrophotography (e.g. 'drizzle'), made simple to use in e.g. PhotoAcute Studio, but I was wondering if we can approximate the results with the tools we already have.
To allow the sub-pixel displacements to be aligned at the pixel level, we need to produce an enlarged/oversampled version of a stack of a few images (simply increase the output size of the stitched result), while aligning the images with mostly translation and rotation corrections (Yaw, Pitch, Roll) plus some individual image shift (the 'd' and 'e' parameters). We then blend the images into a single result that can afterwards be deconvolution-sharpened. I'm still looking into the best/simplest blending approach for maximum effect.
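As a rough sketch of that pipeline (upscale, align, blend), here is a minimal Python/NumPy toy. It handles only whole-pixel translations after a 2x upscale, with phase correlation standing in for the stitcher's optimizer and a plain average as the blend; the function names, the 2x factor, and the averaging choice are my own assumptions, not PTGui's method:

```python
import numpy as np

def upscale2x(img):
    # Naive 2x nearest-neighbour upscale, so half-pixel shifts in the
    # originals become whole-pixel shifts we can align.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def phase_correlate(a, b):
    # Estimate the integer (dy, dx) such that np.roll(b, (dy, dx))
    # best aligns b onto a, via phase correlation in the Fourier domain.
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

def super_resolve(stack):
    # Upscale each frame, align all frames to the first, then average.
    ref = upscale2x(stack[0])
    acc = ref.astype(float).copy()
    for img in stack[1:]:
        up = upscale2x(img)
        dy, dx = phase_correlate(ref, up)
        acc += np.roll(up, (dy, dx), axis=(0, 1))
    return acc / len(stack)
```

A real stack would also need the rotation and 'd'/'e' corrections plus sub-pixel interpolation, which is where the stitcher's optimizer earns its keep; the averaging step is also what gives the noise-reduction benefit mentioned below.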
The result could come close to the alternative of shooting with a 2x focal length, but without the need to stitch NPP-rotated tiles for the wider FOV, and without the need to close down the aperture by a stop to maintain the same DOF (which may increase diffraction and, through longer exposures, subject-movement or camera-shake induced blur). Another benefit, depending on the blending method, is that noise is reduced by averaging.
This just shows that Pano-stitching can serve many purposes.
Yesterday I made a reproduction of a large (1.6 x 1.4 metres) painting hanging in a hard-to-reach corner position with poor lighting from one side, not what I would normally do for a more formal repro setup. PTGUI (also using the Viewpoint optimization control to stitch a flat plane) allowed me to reach a very decent impression which, with a bit more work on the uneven lighting, is not that far from a formal repro (see a small version, and a crop, attached).