Also, I'm digging the geek level of this group. As a dog-eared Wyszecki & Stiles owner, I appreciate just how deep the rabbit hole can go.
Welcome, Steve. Here's the story of my first encounter with W&S. The time is February or March of 1989. I'd left Rolm and set up shop at the Almaden Research Center and was looking around for something to do. I messed around with HDTV and co-authored a report recommending that IBM stay out of that business. I worked on RAID architectures and proposed some strategies for improving performance. Neither project excited me much. Then some people from Kodak started knocking on the door, asking if we were interested in joining together to create standards for interchange file formats for color images. I found out about it by accident and invited myself to an early meeting. At that time, the only thing I knew about color was how to put together filter packs for making C-prints.
Three guys from Kodak showed up for a two-day meeting: a manager in a product division, one of his engineers, and someone from Research. The Research guy opened his briefcase, took out a copy of W&S, and plunked it on the conference table, where it sat throughout the meetings. He never opened it, but seemed comforted by its presence. At one of the breaks, I asked if I could look at it, and he said, "Sure." I was seriously impressed by how much there was to this color stuff. Over the next six years, I did my best to master my little part of the color science world, and was relieved to find that I could make some contributions while understanding only a fraction of what's in that book.
While looking for the Kodak researcher's name (Eric somebody), I found a little riff I wrote a few months into what turned out to be a task force with Kodak, on what I thought was important in an interchange color space. (By the way, the task force never went anywhere, because the PhotoCD folks preempted the Kodak people we were working with.)
Anyway, from the time capsule:
Desirable Characteristics for Device-Independent Interchange Color Spaces
A device-independent color space should see colors the way that color-normal people do; colors that match for such people should map to similar positions in the color space, and colors that don’t appear to match should be farther apart. This implies the existence of exact transforms to and from internationally-recognized colorimetric representations, such as CIE 1931 XYZ. Defining transforms between a color space and XYZ implicitly defines transforms to all other spaces having such transforms. A further implication is that a device-independent color space should allow representation of most, if not all, visible colors.
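To make the "exact transforms" requirement concrete, here's a minimal sketch (not part of the original riff) of the standard CIE 1931 XYZ → CIELAB conversion; the D65 white point is an assumption chosen for illustration:

```python
# D65 reference white (an assumption; the riff doesn't name a white point)
XN, YN, ZN = 0.95047, 1.00000, 1.08883

def _f(t):
    # CIE 1976 nonlinearity: cube root above the knee, linear below it
    delta = 6 / 29
    return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

def xyz_to_lab(x, y, z):
    """CIE 1931 XYZ -> CIELAB: one example of an exact, invertible
    transform from an internationally recognized colorimetric space."""
    fx, fy, fz = _f(x / XN), _f(y / YN), _f(z / ZN)
    lightness = 116 * fy - 16
    a = 500 * (fx - fy)   # red-green opponent axis
    b = 200 * (fy - fz)   # yellow-blue opponent axis
    return lightness, a, b
```

Because the transform is exact both ways, defining it implicitly connects the space to every other space with an XYZ transform, which is the point made above.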
A device-independent color space should allow compact, accurate representation. In order to minimize storage and transmission costs and improve performance, colors should be represented in the minimum number of bits, given the desired accuracy. Inaccuracies will be introduced by quantizing, and may be aggravated by manipulations of quantized data. In order to further provide a compact representation, any space should produce compact results when subjected to common image-compression techniques. This criterion favors perceptually-uniform color spaces; nonuniform spaces will waste precision quantizing the parts of the space where colors are farther apart than they should be, and may not resolve perceptually-important differences in the portions of the color space where colors are closer together than a uniform representation would place them.
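As an illustration of the precision argument, this sketch compares the perceived-lightness step between the two darkest nonzero 8-bit codes under a linear luminance encoding and under a gamma-2.2 encoding. The 2.2 exponent and the roughly 1 ΔL* just-noticeable difference are assumptions for illustration, not something the riff specifies:

```python
def lstar(y):
    """CIE lightness L* of a relative luminance y (white = 1.0)."""
    return 116 * y ** (1 / 3) - 16 if y > (6 / 29) ** 3 else (29 / 3) ** 3 * y

# Lightness step between code 0 and code 1 of an 8-bit encoding,
# for linear luminance versus a gamma-2.2 transfer curve.
step_linear = lstar(1 / 255) - lstar(0)
step_gamma = lstar((1 / 255) ** 2.2) - lstar(0)

print(step_linear)  # several L* units: a clearly visible jump in the shadows
print(step_gamma)   # well under the ~1 L* just-noticeable difference
```

The linear encoding wastes its precision in the highlights, where adjacent codes are far closer than anyone can see, while leaving visible contouring in the shadows; a more perceptually uniform encoding spends the same 8 bits more evenly.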
Most image compression algorithms are themselves monochromatic, even though they are used on color images. JPEG, for example, performs compression of color images by compressing each color plane independently. The lossy discrete cosine transform compression performed by the JPEG algorithm works by discarding information rendered invisible by its spatial frequency content. Human luminance response extends to higher spatial frequency than chrominance response. If an image contains high spatial frequency information, only the luminance component of that image must be stored and transmitted at high resolution; some chrominance information can be discarded with little or no visual effect. Effective lossy image compression algorithms such as DCT can take advantage of the difference in visual spatial resolution for luminance and chrominance, but, since they themselves are monochromatic, they can only do so if the image color space separates the two components. Thus, a color space used with lossy compression should have a luminance component.
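Here's a toy sketch of the separation being described: split a row of pixels into one full-resolution luma channel and two chroma channels subsampled 2:1, then reconstruct. The BT.601 luma weights are an assumption used for illustration; the riff doesn't prescribe a particular matrix:

```python
# Luma/chroma split plus 2:1 chroma subsampling on a 4-pixel row.
# BT.601 luma weights (0.299, 0.587, 0.114) are assumed for illustration.

def rgb_to_ycbcr(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 * (b - y) / (1 - 0.114)
    cr = 0.5 * (r - y) / (1 - 0.299)
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 2 * (1 - 0.299) * cr
    b = y + 2 * (1 - 0.114) * cb
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

row = [(0.9, 0.2, 0.1), (0.8, 0.3, 0.1), (0.2, 0.7, 0.3), (0.1, 0.8, 0.4)]
ycc = [rgb_to_ycbcr(*p) for p in row]

# Keep luma at full resolution; average each chroma pair (2:1 subsampling).
luma = [p[0] for p in ycc]
cb2 = [(ycc[i][1] + ycc[i + 1][1]) / 2 for i in (0, 2)]
cr2 = [(ycc[i][2] + ycc[i + 1][2]) / 2 for i in (0, 2)]

# Reconstruct: every pixel keeps its own luma but shares a chroma sample.
recon = [ycbcr_to_rgb(luma[i], cb2[i // 2], cr2[i // 2]) for i in range(4)]
```

Half the chroma samples are discarded, yet the luma of every reconstructed pixel is unchanged, which is exactly why a monochromatic compressor can exploit the eye's lower chrominance resolution only when the color space makes this split for it.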
The existence of a separate luminance channel is necessary, but not sufficient. There also should be little luminance information in the putative chrominance channels, where its presence will cause several problems. If the threshold matrices for the chrominance channels are constructed with the knowledge that those channels are contaminated with luminance information, the compressed chrominance channels will contain more high-frequency information than would the compressed version of uncontaminated chrominance channels. Hence, a compressed image with luminance-contaminated chrominance channels will require greater storage for the same quality than an uncontaminated image. If the threshold matrices for the chrominance channels are constructed assuming that the channels are uncontaminated, visible luminance information in these channels will be discarded during compression. Normal reconstruction algorithms will produce luminance errors in the reconstructed image because the missing luminance information in the chrominance components will affect the overall luminance of each reconstructed pixel. Sophisticated reconstruction algorithms that ignore the luminance information in the chrominance channels and make the luminance of each pixel purely a function of the information in the luminance channel will correctly reconstruct the luminance information, but will be more computationally complex.
A device-independent color space should minimize computations for translations between the interchange color space and the native spaces of common devices. It is unlikely that the interchange color space will be the native space of many devices. Most devices will have to perform some conversion from their native spaces into the interchange space. System cost will be minimized if these computations are easily implemented.
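As a sense of what "easily implemented" can mean: converting XYZ to a gamma-corrected RGB device space is just a 3×3 matrix multiply followed by a per-channel nonlinearity that hardware can put in a small lookup table. This sketch uses the IEC sRGB matrix and transfer curve, which postdate the riff and are here purely for illustration:

```python
# XYZ -> linear sRGB is a single 3x3 matrix multiply (IEC sRGB matrix,
# used here for illustration; it postdates this riff).
XYZ_TO_RGB = [
    ( 3.2406, -1.5372, -0.4986),
    (-0.9689,  1.8758,  0.0415),
    ( 0.0557, -0.2040,  1.0570),
]

def xyz_to_linear_rgb(x, y, z):
    return tuple(m[0] * x + m[1] * y + m[2] * z for m in XYZ_TO_RGB)

def encode_srgb(c):
    # Per-channel transfer curve; in hardware this would be a 1-D LUT.
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# D65 white should land at or very near RGB = (1, 1, 1).
r, g, b = xyz_to_linear_rgb(0.9505, 1.0, 1.0890)
```

Three multiply-accumulates per channel plus a table lookup is about as cheap as a color conversion gets, which is part of why matrix-plus-curve RGB spaces won in practice.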
Note that I really missed the mark, because I defined the boundary conditions in a way that precluded what eventually turned out to be the most common solutions: various flavors of gamma-corrected RGB. The criteria are especially hard on RGB spaces with small gamuts, like sRGB.
Jim