This is Shirley.
Aha! Got it now. That little dust-up completely passed me by. I just read the posts in the
Shirley thread and I presume it's clear to everyone reading
this thread that there is no correspondence between the way an analog emulsion responds to brightness and contrast and the way a neural network recognizes features in a large collection of digital or digitized photographs used to train a camera autofocus subsystem. In the latter case, those features may not even be apparent to human beings, including the programmers who built the software.
My purpose was simply to share a little of what I've learned during the past few months about neural networks because it seemed to be relevant to the issue raised by the original poster. As I pointed out, I don't claim to be an expert, and my grasp of the technology is no doubt considerably less sophisticated than that of some of the other participants on this site whose software development skills are more current than mine. But I suspect I'm somewhat more familiar with how it works than the majority of LuLa posters.
In addition to reading about their theory and design, I've pulled and examined some available free source code to understand how others have implemented neural networks—although, as I noted in an earlier post in this thread, I have not attempted to build any of my own software. I've also been experimenting with various hosted and online tools for using neural networks to make semantic identifications of images; sharpen, enlarge, and otherwise enhance pictures; and even transform photographs into something that is no longer photographic but which doesn't qualify as freehand art: a collaboration between the photographer and the software, you might call it. (Some samples here.)
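For anyone curious what "semantic identification" looks like under the hood, here's a minimal sketch using the freely available torchvision library and a pretrained ResNet classifier. To be clear, this is my own illustration, not the code behind any of the commercial tools I mentioned, and "photo.jpg" is just a placeholder file name.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load a classifier already trained on the ImageNet photo collection.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")  # placeholder image
batch = preprocess(img).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

# Print the five labels the network considers most likely.
top5 = torch.topk(probs, 5)
labels = weights.meta["categories"]
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{labels[idx]}: {p:.3f}")
```

A dozen lines like these are, in essence, what the hosted identification services wrap in a web page.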
The point I was trying to make is that the way a neural network will act on its "training set" of data, while obviously influenced by the parameters chosen by the developers to mediate the network's behavior (there isn't much in this world that is more perfectly deterministic than software), often involves extracting features that are not visible to, or if visible at least not identifiable by, human beings, and that those features will be strongly if not always predictably influenced by the scope and scale of the image set.
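To give a flavor of what I mean, here's a toy sketch in PyTorch that trains a deliberately tiny network on the public MNIST digit set (a stand-in for whatever images a camera maker might train on). The network's learned "features" are just arrays of weights; nothing in the code names them, and dumping the raw numbers tells a human very little about what each filter actually responds to.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

# A deliberately tiny convolutional network: one layer of learned
# 5x5 filters followed by a linear classifier.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=5)  # 8 learned feature detectors
        self.pool = nn.AdaptiveAvgPool2d(4)
        self.fc = nn.Linear(8 * 4 * 4, 10)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

train = datasets.MNIST(".", train=True, download=True,
                       transform=transforms.ToTensor())
loader = torch.utils.data.DataLoader(train, batch_size=64, shuffle=True)

model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One pass over the data is enough to see the effect.
for images, labels in loader:
    opt.zero_grad()
    loss_fn(model(images), labels).backward()
    opt.step()

# The "features" the network extracted are just these weight tensors.
print(model.conv.weight.shape)  # torch.Size([8, 1, 5, 5])
print(model.conv.weight[0])     # one learned 5x5 filter: an opaque grid of numbers
```

The developers chose the architecture, the optimizer, and the learning rate, all perfectly deterministic, yet what ends up in those eight filters is dictated by the training images, and swap in a different image set and you'd get different filters.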
Now that I've read that earlier thread, I understand why some readers here might have conflated neural network recognition with the use of a "Shirley card" in analog photography, but that's a mistake. If my explanation isn't sufficiently clear, there is plenty of explanatory information about the operation of neural networks available online, as well as some interesting tools (free and commercially licensed) and sample source code.
This machine-learning technique is important to all of us who make photographs, I think, because modern cameras aren't optical machines which instantaneously record light on a sensitive medium; they are dynamic software systems that are attached to optical and light-sensing input devices. Post-processing is also being transformed by this technology: look no further than the ability of Lightroom's
Enhance Details to sharpen raw camera sensor data or the ability of the Topaz suite to take demosaiced images and in effect build new images (bigger, sharper, etc.) based on them.
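For the curious, here's a toy sketch of the general shape such an upscaler takes: the classic SRCNN-style recipe from the super-resolution literature, where a conventional interpolation is followed by a learned correction. This is an assumption about the general technique, not the actual Adobe or Topaz code, and the network here is untrained, so the output is structural illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# SRCNN-style toy upscaler: interpolate conventionally, then let a
# small convolutional network add back the detail interpolation can't.
class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, 5, padding=2),
        )

    def forward(self, x):
        # Start from an ordinary bicubic enlargement...
        up = F.interpolate(x, scale_factor=self.scale,
                           mode="bicubic", align_corners=False)
        # ...then add the residual a trained network would predict.
        return up + self.body(up)

model = ToyUpscaler(scale=2)
low_res = torch.rand(1, 3, 128, 128)  # stand-in for a demosaiced image
big = model(low_res)
print(big.shape)                      # torch.Size([1, 3, 256, 256])
```

Once trained on pairs of small and large images, a network like this is literally inventing plausible new pixels rather than recording them, which is exactly why I say these tools build new images rather than merely resize old ones.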
Interesting stuff. I'm not sure where it will all wind up.
Hopefully not here.