Has anyone got any experience comparing in-camera pixel shift for producing a high-MP image versus Adobe Super Resolution?
I did some very limited testing after I acquired my Fuji X-T5. Overall, the differences seemed negligible.
Obviously, these are two very different techniques for increasing the linear resolution of an image. Pixel shift actually captures more detail: it makes a series of frames while moving the camera sensor minutely, then combines the frames into a composite image with higher resolution than any individual frame. Adobe's Super Resolution (like other tools based on machine learning) analyzes the image for primitive visual elements corresponding to those in the neural network's training set, then generates a new, derivative image at increased resolution from the elements it identified during that analysis.
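To make the difference concrete, here is roughly what the pixel-shift compositing step amounts to, reduced to a toy case. This is a minimal sketch of my own in Python/numpy, assuming a monochrome sensor and the classic four-frame, half-pixel shift pattern; the function name is hypothetical, and real implementations (Fuji's captures more frames and must also reconstruct full color at each photosite) are considerably more involved. Super Resolution, by contrast, has no such mechanical core: the extra pixels are synthesized by the network rather than sampled from the scene.

```python
import numpy as np

def combine_pixel_shift(frames):
    """Interleave four half-pixel-shifted captures into a composite
    with twice the linear resolution. Toy, monochrome illustration
    only; assumes frames were taken at sensor offsets of
    (0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5) pixels."""
    f00, f01, f10, f11 = frames
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00   # unshifted samples
    out[0::2, 1::2] = f01   # sensor shifted half a pixel right
    out[1::2, 0::2] = f10   # shifted half a pixel down
    out[1::2, 1::2] = f11   # shifted diagonally
    return out

# Toy usage: four fake 3x3 "captures" become one 6x6 composite.
frames = [np.full((3, 3), v, dtype=np.uint16) for v in range(4)]
hi_res = combine_pixel_shift(frames)
print(hi_res.shape)  # (6, 6)
```

The point of the sketch is that every output pixel comes from a real sample of the scene, which is why pixel shift can't invent detail that wasn't there, and why it falls apart if anything moves between frames.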
In an attempt to determine whether one or the other technique is inherently superior, I photographed a static, indoor scene—a bookcase with various contents—which I selected because I guessed it would be composed entirely of primitive visual elements that the Adobe software would be able to identify. (I should make clear that I have no association with Adobe and no knowledge of the design of its "artificially intelligent" software, and my selection of the scene was based entirely on my general understanding of how this type of neural network works.)
The attachments compare full-resolution crops of the results of the Fuji and Adobe enlargement techniques. The first pair is essentially unprocessed, apart from combining the captures with Fuji's pixel-shift application software (Fuji doesn't perform this operation in-camera) and creating the derivative image with the Lightroom Enhance function. The second pair shows the same images with some manual sharpening and tone adjustments. To my eye, at least, the differences within both the unedited and the edited pairs are negligible.
For something like product photography, where fidelity to the exact appearance of the subject is vital, I would use pixel shift rather than a neural network, to avoid the risk of introducing artifacts. On the other hand, pixel shift requires a tripod-mounted camera and a static subject to avoid blurring or misalignment caused by subject movement (although some manufacturers attempt to compensate for limited subject movement). If you're shooting handheld, or the subject isn't absolutely static, a technique like Super Resolution strikes me as the better bet.