After reading quite a lot of reviews of A.I. GP yesterday, including this thread, and watching the YouTube video referenced a few posts up, I just finished demoing the product and ended up buying it. The YouTube video was a bit frightening to me, honestly, but my fears have been allayed. I suspect the poor results were caused by artifacts in that reviewer's downsized, resampled test image, and/or by improvements made to the software over the last few months. I haven't yet experienced that kind of distortion.
I’ve used quite a few upscaling routines over the years, and I’m happy to have this one in the toolbox. I’ll also look forward to seeing it improved over the months and years, as Topaz does with its products. I have an 8-core processor running at 4.5 GHz and a supported mid/high-range GPU, and I was not bothered by the processing times - generally a few minutes for the more challenging jobs.
I do have a few preliminary observations and questions on when to use it and where it best fits in the workflow.
1. I hope it can be integrated soon into Photoshop.
2. It made me skim through my last 15 years of digital files wondering which would now print well at large sizes. I will doubtless spend more time doing that.
3. For enlargements of 200% or less, and maybe up to 400%, there seems to be little if any practical difference between this product and Photoshop’s new Preserve Details algorithm or other premium enlargement programs. I have found Photoshop’s Preserve Details function excellent, and I’ll continue to use it for small scaling jobs, at least until A.I. GP is well integrated into Photoshop and I become convinced it’s better there.
4. I wish there were some control over how aggressive the A.I. enhancement is. For example, in the hair and beard of a man in an old 5 MP photo I scaled up at least 10 times, A.I. GP recognized most of the hair as hair and rendered it well, but left large parts of the finer detail blurry. That can be fixed manually with a clone brush, so the picture I was experimenting with was workable, but I imagine a slider could save quite a bit of work.
5. There are always trade-offs in where to put things in the workflow. It seems this product may need to go very early on, earlier than others, because it replaces pixels instead of averaging them. I think it may go best after lens and aberration corrections, white balance, and properly stretching out the histogram in a raw converter. I haven’t experimented with where noise reduction fits in. I imagine some low-level noise reduction before enlargement might work best, with most everything else - curves, contrast, clarity, and color - after enlargement, especially sharpening, deconvolution or otherwise. Interested in others’ thoughts.
6. I noticed — I thought — superior performance on raw files as opposed to JPGs or TIFFs. Is that right? I’d rather not do that, at least for white balance and histogram management reasons, and would prefer to run the software on TIFFs. Again, interested in others’ experience here.
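For anyone curious what the conventional, interpolation-based enlargement mentioned in point 3 amounts to in code, here is a minimal sketch using Pillow with Lanczos resampling. This is just a stand-in for the kind of pixel-averaging scaler that A.I. GP competes with; it is not what A.I. GP or Photoshop's Preserve Details actually does, and the function name and sizes are my own illustration:

```python
from PIL import Image  # Pillow

def upscale(img: Image.Image, factor: float) -> Image.Image:
    """Conventional interpolation-based enlargement (Lanczos filter)."""
    new_size = (round(img.width * factor), round(img.height * factor))
    return img.resize(new_size, Image.LANCZOS)

# Example: a 200% (2x) enlargement of a dummy 1000 x 750 image.
original = Image.new("RGB", (1000, 750))
doubled = upscale(original, 2.0)  # -> 2000 x 1500
```

Because this kind of resampler only averages neighboring pixels, it cannot invent detail the way an A.I. upscaler attempts to — which is why the differences only become obvious at larger scaling factors.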