but of course ... "deep imaging science" ...
Yes, indeed. Although I suppose the phrase may refer to machine learning based on neural networks. Unfortunately, customers like me have no way to know the underlying semantics of fuzzy terms like this.
The specific points that intrigued me were:
- the references to the characteristics of the different color filters on each camera's sensor, which I assume must be evaluated not only for which frequencies they transmit but also for how much light they pass;*
- the way each sensor responds to "different lighting conditions," which I suspect means both different frequencies and intensities of light;
- and the effect of in-camera amplification ("different ISO values") on the rendering process.
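To make the three points above concrete for myself, here is a toy sketch of how a raw converter might fold per-camera characterization into rendering. Everything here is an assumption for illustration: the matrix, white-balance gains, and ISO handling are invented values standing in for whatever Adobe actually measures, and the real pipeline is certainly far more elaborate.

```python
import numpy as np

# Hypothetical 3x3 matrix capturing point one: the spectral transmission
# and throughput of this camera's color filters, reduced (after
# characterization) to a camera-space -> working-space transform.
# These numbers are invented, not measured.
CAMERA_MATRIX = np.array([
    [ 1.8, -0.6, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
])

# Point two: "different lighting conditions" might be handled partly as
# per-illuminant white-balance gains (hypothetical daylight-ish values).
WB_GAINS = np.array([2.0, 1.0, 1.6])  # R, G, B multipliers

def render(raw_rgb, iso=100, base_iso=100, black_level=0.01):
    """Map a demosaiced raw RGB triple (0..1) toward display RGB."""
    x = np.clip(np.asarray(raw_rgb, dtype=float) - black_level, 0.0, None)
    # Point three: in-camera amplification. Modeled here as a simple
    # linear gain relative to base ISO; a real converter would also
    # adjust its noise model per ISO.
    x = x * (iso / base_iso)
    x = x * WB_GAINS            # neutralize the illuminant
    x = CAMERA_MATRIX @ x       # camera space -> working space
    return np.clip(x, 0.0, 1.0)

# A camera-space gray under this illuminant comes out roughly neutral.
print(render([0.25, 0.5, 0.3125]))
```

The point of the sketch is only conceptual: swap the camera, and the matrix and gains change; swap the illuminant or ISO, and the per-shot corrections change, while the end-user controls sit downstream of all of it.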
Again, all of this may be obvious to those who have actually developed demosaicing algorithms, or who have insider knowledge of Adobe's (or other companies') products. But I am dealing with black boxes, as, I suspect, are most of Adobe's customers. So any information about what happens between input and output is helpful: it gives me a conceptual model for interpreting what I am doing when I fiddle with the end-user controls.
———
*Does Adobe do this through reverse-engineering, or do the camera or sensor manufacturers provide this information?