All of this is nothing new. Post-processing programs have long offered these styles and effects: oil-paint look, pencil-sketch look, and so on. In any case, the programmer is the artist, not the photographer.
Well, I’m an old-school 20th-century procedural computer programmer, and my understanding of the software architecture of these neural network applications is quite rudimentary. But I think machine learning offers something genuinely new in the realm of image recognition, manipulation, and enhancement.
The reason is that when neural networks operate on images (or other inputs, for that matter, but we’re interested in pictures here), there is a semantic component that reflects what they have learned from the exemplars they were trained on. Feed a neural-network image recognizer a diet of paintings by van Gogh along with some that are not by van Gogh, let it adjust its probabilities based on the labels you supply, and eventually it becomes proficient at distinguishing a van Gogh from, say, a Renoir. Not that difficult for thee or me, perhaps, but it’s quite impressive for a learn-by-example computer program.
That’s the recognition aspect.
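That learn-by-example process can be sketched in miniature. The toy below is not how a real convolutional-network recognizer works; it is a two-feature logistic-regression classifier trained on invented numbers, purely to show the “assign probabilities, nudge them toward the labels you supply” loop described above.

```python
# Toy learn-by-example classifier: logistic regression on two invented
# "style features" (say, brush-stroke energy and palette warmth).
# All data here is fabricated for illustration.
import math

# Invented training set: label 1 = "van Gogh", 0 = "not van Gogh".
examples = [((2.5, 1.8), 1), ((2.9, 2.1), 1), ((2.2, 2.4), 1),
            ((0.4, 0.6), 0), ((0.8, 0.3), 0), ((0.5, 0.9), 0)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

def predict(x):
    """Probability the input 'looks like a van Gogh'."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Training loop: nudge the weights toward the labels we assigned.
for _ in range(200):
    for x, y in examples:
        err = predict(x) - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

print(predict((2.6, 2.0)))  # high: features resemble the "van Gogh" examples
print(predict((0.6, 0.5)))  # low: features resemble the others
```

Real recognizers learn thousands of such features from the pixels themselves rather than being handed two made-up numbers, but the feedback loop is the same idea.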
Now, reverse the processing direction and feed one of your photographs to a neural network that has been trained to recognize van Goghs, and it will do its best to make your picture look like a van Gogh. Yes, you can argue that the result is just a parlor trick, or kitsch, if you prefer, but the software may actually “know” something about van Gogh’s style even if the only visual art the programmer was familiar with is anime.
That’s the manipulation aspect.
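What that “knowing” amounts to, in the best-known style-transfer technique (Gatys-style neural style transfer), is a number: style is summarized as a Gram matrix, the channel-by-channel correlations of a network’s feature maps, and the software pushes your photo until its Gram matrices match the painter’s. The sketch below shows only that arithmetic; real systems extract the feature maps with a pretrained network, whereas here they are random arrays standing in for illustration.

```python
# Minimal sketch of the "style" measurement used in Gatys-style neural
# style transfer. Style is captured as a Gram matrix: the correlations
# between feature channels. The feature maps below are random stand-ins;
# a real system would get them from a pretrained CNN.
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) -> (channels, channels)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(features_a, features_b):
    """Mean squared difference between the two Gram matrices."""
    return float(np.mean((gram_matrix(features_a) - gram_matrix(features_b)) ** 2))

rng = np.random.default_rng(0)
style_ref = rng.standard_normal((4, 8, 8))  # stand-in for "van Gogh" features
photo = rng.standard_normal((4, 8, 8))      # stand-in for your photo's features

print(style_loss(style_ref, style_ref))  # 0.0: identical style
print(style_loss(style_ref, photo))      # > 0: styles differ
```

Style transfer then adjusts the photo, by gradient descent, to drive that loss down while keeping its content recognizable.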
(I’m not certain how to characterize the website that displays realistic software-generated images of the faces of imaginary people, so I’ll defer that for future consideration.)
I don’t think image manipulation is bad per se, although, needless to say, some programmatically manipulated images are awful. I agree with Slobodan that several of the transformations of my photographs are interesting, and for the same reason he gave: “the uniqueness of the manipulation style. . . . And the fact that they resemble a drawing or painting, for better or worse.” I don’t have the skills to make a woodcut, or paint a “Magic Kingdom” landscape, or sketch a realistic street scene, but if I did, I wouldn’t be embarrassed to have produced the images emitted by the Deep Dream website. Does painting from a photograph produce an image inherently inferior to one painted en plein air?
Finally, if you’ve managed to put up with my rambling so far, we arrive at:
The enhancement aspect.
Photographers have always made changes to their captures—well, most of us, at least—and in the years since the advent of digital photography our tools have become increasingly powerful. Mostly, though, they have depended on modifying the original pixels. There have been some exceptions, such as programs that use fractal interpolation to “enlarge” images, but my take on those techniques is that they insert artifacts rather than extrapolate from the pixels actually present in the original image.
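The “modifying the original pixels” point is easy to see in conventional enlargement. In bilinear interpolation, for instance, every new pixel is just a weighted blend of the nearest original pixels, so no information beyond the original capture is ever introduced; machine-learning super-resolution, by contrast, fills in plausible detail learned from training images. A grayscale sketch:

```python
# Conventional enlargement sketched: bilinear interpolation. Every output
# pixel is a convex blend of the four nearest input pixels, so the result
# can never contain values (or detail) absent from the original.
import numpy as np

def bilinear_upscale(img, factor):
    """Upscale a 2-D grayscale array by an integer factor."""
    h, w = img.shape
    ys = np.clip((np.arange(h * factor) + 0.5) / factor - 0.5, 0, h - 1)
    xs = np.clip((np.arange(w * factor) + 0.5) / factor - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

tiny = np.array([[0.0, 1.0], [1.0, 0.0]])
big = bilinear_upscale(tiny, 2)
print(big.shape)  # (4, 4)
# Every output value stays inside the input's range -- nothing is invented:
print(big.min() >= tiny.min() and big.max() <= tiny.max())  # True
```

A learned upscaler has no such guarantee: it may emit texture that was never captured, which is precisely both its power and its risk.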
As I understand it, a tool like Adobe’s new Enhance Details has been trained to “know” how to reproduce fine details accurately without introducing artifacts. Similarly, Lightroom’s current auto-tone control has become “smart” by being trained to mimic the way skilled photographers make tonal adjustments.
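To make “trained to mimic the way skilled photographers make tonal adjustments” concrete, here is a deliberately tiny stand-in (emphatically not Adobe’s actual method): given invented (input, expert-output) tone pairs, search for the single gamma exponent whose curve best reproduces the expert’s edits. Real systems fit far richer models to far more data, but the principle is the same.

```python
# A miniature stand-in for "learning tonal adjustments from examples".
# NOT how Lightroom's auto-tone actually works; the expert edits below
# are fabricated from a gamma-0.5 brightening curve plus small noise.

def mimic_gamma(pairs, candidates):
    """Return the candidate gamma minimizing squared error on the pairs."""
    def err(g):
        return sum((x ** g - y) ** 2 for x, y in pairs)
    return min(candidates, key=err)

# Fabricated expert edits: the "skilled photographer" lifted midtones
# roughly like output = input ** 0.5.
pairs = [(0.1, 0.32), (0.25, 0.5), (0.5, 0.71), (0.8, 0.89)]
candidates = [g / 100 for g in range(10, 201)]  # gammas 0.10 .. 2.00

g = mimic_gamma(pairs, candidates)
print(g)  # recovers a gamma near 0.5, matching the expert's style
```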
I suspect more tools based on machine-learning will be appearing in post-processing software in the near future. What about an intelligent sharpening tool that automatically applies the optimal amount of edge contrast, detail, masking, and noise reduction to make an image (or a selected portion of an image) as sharp as it can be without introducing artifacts? Or color controls that recognize and then adjust for several artificial light sources that illuminate different areas of a night scene? Or resolution multipliers that analyze an image and then recreate it with many more pixels than the original? (The last of these are already starting to appear online; I’ve tried a couple, and they’re quite impressive albeit not perfect.)
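The sharpening tool imagined above would, in effect, be choosing the knobs of a classical technique automatically. The baseline it would automate is unsharp masking: sharpened = original + amount × (original − blurred), where the blur radius and amount are exactly the parameters a learned tool might pick per image, or per region. A one-dimensional sketch:

```python
# Classical unsharp masking, the technique an "intelligent sharpening
# tool" would be tuning automatically. 1-D grayscale sketch with a
# 3-tap box blur.
import numpy as np

def unsharp_mask(signal, amount=1.0):
    """Sharpen by adding back the difference between signal and its blur."""
    padded = np.pad(signal, 1, mode="edge")
    blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    return signal + amount * (signal - blurred)

# A soft edge: sharpening steepens the transition across it.
edge = np.array([0.0, 0.0, 0.25, 0.75, 1.0, 1.0])
sharp = unsharp_mask(edge, amount=1.0)
print(sharp)
# Note the slight overshoot and undershoot at the ends of the ramp --
# the "halo" artifact a smarter tool would try to keep below visibility.
```

The overshoot is the artifact the text worries about: pick the amount too aggressively and halos appear, which is why choosing it well per image is a genuine judgment call worth automating.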
As these “artificially intelligent” tools become increasingly capable, some of the craft we have all spent time mastering may indeed be replaced by automation. But you could say the same about automatic focus and exposure in cameras, and, at least when they are used appropriately, I don’t think many of us regard them as undermining our creativity. If anything, having powerful tools for turning an initial capture into an interesting final image should make it easier for the photographer to concentrate, quoting Slobodan again, on “the art of seeing.”