
Author Topic: Pixel shift vs Adobe Super Resolution  (Read 1835 times)

Jonathan Cross

  • Sr. Member
  • Offline
  • Posts: 645
Pixel shift vs Adobe Super Resolution
« on: April 05, 2023, 02:21:14 pm »

Does anyone have experience comparing pixel shift on a camera (to get a high-MP image) with Adobe Super Resolution?

Best wishes,

Jonathan

Jonathan in UK

Chris Kern

  • Sr. Member
  • Offline
  • Posts: 2035
    • Chris Kern's Eponymous Website
Re: Pixel shift vs Adobe Super Resolution
« Reply #1 on: April 08, 2023, 02:52:30 pm »

Does anyone have experience comparing pixel shift on a camera (to get a high-MP image) with Adobe Super Resolution?

I did some very limited testing after I acquired my Fuji X-T5.  Overall, the differences seemed negligible.

Obviously, these are two very different techniques for increasing the linear resolution of an image.  Pixel-shift actually captures more detail by making a series of frames while moving the camera sensor minutely, then combining the frames to produce a composite image with higher resolution than any of the individual ones.  Adobe's Super Resolution (like other tools based on machine learning) analyzes the image for primitive visual elements that correspond to those in the neural network's training set, then generates a new, derivative image at increased resolution from the visual elements it identified during the analysis.
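To make the mechanical difference concrete, here is a rough sketch of the pixel-shift side (my own illustration, not Fuji's or Adobe's actual code): four captures, each offset by half a photosite, interleaved into a composite with twice the linear resolution. Real pixel-shift software also rebuilds full color from the Bayer mosaic and corrects for any movement between frames.

Code:
import numpy as np

def combine_pixel_shift(frames):
    """Interleave four half-photosite-shifted captures into a 2x composite.

    frames: dict keyed by (dy, dx), the sensor offset in half-photosite steps,
    each a 2-D array of identical shape (a single gray channel for simplicity).
    """
    h, w = frames[(0, 0)].shape
    out = np.zeros((2 * h, 2 * w), dtype=np.float32)
    for (dy, dx), frame in frames.items():
        out[dy::2, dx::2] = frame        # each capture fills one sub-pixel position
    return out

# hypothetical usage with four simulated 3000 x 4000 captures
rng = np.random.default_rng(0)
frames = {(dy, dx): rng.random((3000, 4000), dtype=np.float32)
          for dy in (0, 1) for dx in (0, 1)}
composite = combine_pixel_shift(frames)  # 6000 x 8000 result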

In an attempt to determine whether one or the other technique is inherently superior, I photographed a static, indoor scene—a bookcase with various contents—which I selected because I guessed it would be composed entirely of primitive visual elements that the Adobe software would be able to identify.  (I should make clear that I have no association with Adobe and no knowledge of the design of its "artificially intelligent" software, and my selection of the scene was based entirely on my general understanding of how this type of neural network works.)

The attachments compare full-resolution crops of the results of the Fuji and Adobe enlargement techniques.  The first pair are essentially unprocessed except for combining the captures with Fuji's pixel-shift application software—Fuji doesn't perform this operation in-camera—and creating the derivative image with the Lightroom Enhance function.  The second pair are the same images with some manual sharpening and tone adjustments.  To my eye, at least, the differences between both the unedited and the edited pairs are negligible.

For something like product photography, where fidelity to the exact appearance of the subject is vital, I would use pixel-shift rather than a neural network to create the image in order to avoid the risk of producing artifacts.  On the other hand, using pixel-shift requires a tripod-mounted camera and a static subject to avoid blurring or improper alignment caused by subject movement (although some manufacturers attempt to control for limited subject movement), so if you're shooting handheld or the subject isn't absolutely static, a technique like Super Resolution strikes me as a better bet.

Rand47

  • Sr. Member
  • Offline
  • Posts: 1882
Re: Pixel shift vs Adobe Super Resolution
« Reply #2 on: April 09, 2023, 03:13:33 pm »

I did some very limited testing after I acquired my Fuji X-T5.  Overall, the differences seemed negligible. […]

In your examples, there is way less color aliasing in the Pixel Shift version.  I "think" that's more what Pixel Shift is about, rather than just higher resolution.

Thanks for providing these samples!

Rand
Rand Scott Adams

hubell

  • Sr. Member
  • Offline
  • Posts: 1135
Re: Pixel shift vs Adobe Super Resolution
« Reply #3 on: April 11, 2023, 09:47:12 am »

Does anyone have experience comparing pixel shift on a camera (to get a high-MP image) with Adobe Super Resolution?

I would also suggest that you try a demo of Topaz Gigapixel AI if your objective is to upscale a file to make a larger print at the printer's native resolution, such as 300 or 360 ppi. I have seen visual comparisons that convincingly show that Gigapixel AI is superior to Adobe Super Resolution.
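As a back-of-the-envelope example of what upscaling to the printer's native resolution means in pixels (illustrative numbers of my own, not specific to Gigapixel or Super Resolution):

Code:
# a 24 x 36 inch print at 360 ppi
print_w_in, print_h_in, ppi = 36, 24, 360
need_w, need_h = print_w_in * ppi, print_h_in * ppi   # 12960 x 8640 px, about 112 MP

file_w, file_h = 8256, 5504                           # e.g. a ~45 MP capture
scale = max(need_w / file_w, need_h / file_h)         # about 1.57x linear upscale
print(f"need {need_w} x {need_h} px; have {file_w} x {file_h} px; upscale ~{scale:.2f}x")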

Jonathan Cross

  • Sr. Member
  • Offline
  • Posts: 645
Re: Pixel shift vs Adobe Super Resolution
« Reply #4 on: April 14, 2023, 01:40:00 pm »

I have now done some more testing of Super Resolution.  Viewed at 100% it looks good, at 200% I can detect a little aliasing, and at 300% there is definite aliasing.  This is with LR Classic 12.2.1 and Camera Raw 15.2.

I wonder if Adobe is still working on making it better, but the aliasing is probably acceptable as it is not visible at 100%. I can get rid of some of it with noise reduction.  I do not have a camera with pixel shift (yet!).

Jonathan



Jonathan in UK

chex

  • Jr. Member
  • Offline
  • Posts: 76
Re: Pixel shift vs Adobe Super Resolution
« Reply #5 on: July 26, 2023, 02:01:25 pm »

https://www.dpreview.com/articles/0727694641/here-s-how-to-pixel-shift-with-any-camera

There's this guide to handheld 'pixel shift', but my tests didn't give great results.
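For anyone curious, here is the general shape of that technique as a very rough sketch of my own (not the article's code): shoot a handheld burst, upsample each frame, align the frames to the first with OpenCV's ECC registration, and average them. Real implementations work on raw data, estimate sub-pixel motion more carefully, and reject blurred frames.

Code:
import cv2
import numpy as np

def handheld_superres(paths, scale=2):
    """Crude handheld 'pixel shift': upsample a burst, align each frame to the
    first with ECC registration, and average the aligned frames."""
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
    ref = cv2.imread(paths[0], cv2.IMREAD_GRAYSCALE).astype(np.float32)
    ref = cv2.resize(ref, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    acc, n = ref.copy(), 1
    for p in paths[1:]:
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32)
        img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
        warp = np.eye(2, 3, dtype=np.float32)
        # estimate the small shift/rotation introduced by hand movement
        _, warp = cv2.findTransformECC(ref, img, warp, cv2.MOTION_EUCLIDEAN, criteria)
        acc += cv2.warpAffine(img, warp, (ref.shape[1], ref.shape[0]),
                              flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
        n += 1
    return acc / n

# hypothetical usage: out = handheld_superres(["burst_01.tif", "burst_02.tif", "burst_03.tif"])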

Wheathin21

  • Newbie
  • Offline
  • Posts: 23
Re: Pixel shift vs Adobe Super Resolution
« Reply #6 on: August 17, 2023, 03:34:02 pm »

Does anyone have experience comparing pixel shift on a camera (to get a high-MP image) with Adobe Super Resolution?
I use pixel shift (HRM) on my S1R all the time for art repro. The main improvements in IQ I see when using HRM versus a single shot are less noise and less aliasing. Contrary to popular belief, pixel shift doesn't increase resolution.
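A toy illustration of why the four-shot mode helps with noise and color aliasing (my own sketch of the general idea, not Panasonic's actual HRM pipeline): with the sensor moved by one photosite between frames, every scene point gets sampled through red, green and blue filters, so no demosaicing interpolation is needed and the output keeps the single-frame pixel count.

Code:
import numpy as np

BAYER = np.array([["R", "G"],
                  ["G", "B"]])                # RGGB color-filter pattern

def four_shot_rgb(frames):
    """frames: dict keyed by (dy, dx), the sensor offset in whole photosites
    (the four offsets (0,0), (0,1), (1,0), (1,1)), each a 2-D Bayer mosaic of
    identical shape.  Returns an H x W x 3 RGB image with the same pixel count
    as a single frame."""
    h, w = frames[(0, 0)].shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    count = np.zeros((h, w, 3), dtype=np.float32)
    ys, xs = np.mgrid[0:h, 0:w]
    for (dy, dx), frame in frames.items():
        aligned = np.roll(frame, shift=(dy, dx), axis=(0, 1))  # line scene points up
        colors = BAYER[(ys + dy) % 2, (xs + dx) % 2]           # filter each point saw
        for i, c in enumerate("RGB"):
            mask = colors == c
            rgb[..., i][mask] += aligned[mask]
            count[..., i][mask] += 1
    return rgb / np.maximum(count, 1)          # green is sampled twice, so average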