I’m with Nick: I am interested in the way one sees and interprets colors. It is a fascinating subject. (I referred to www.colorsystem.com
earlier because it is a great history of color thinking, from Aristotle to CIE. If you enjoy color, you will enjoy this site.)
Claiming to improve the color accuracy of RGB is indeed an uphill battle, specifically because it raises issues of technology, psychology, physiology and philosophy. I hope that a description of our method ab initio, as Peter requested, will give you insight into our philosophy as well as clear up any lingering misunderstandings. While it is true that there is nothing wrong with your digital camera, it is also true that there are limitations to the RGB model used to render digital images.
To begin, as Nick points out, it is important to define what one means by accuracy. My definition of color accuracy is simple: the colors of an image should be visually consistent with the colors of the input. This is the basic philosophical difference between my definition of color accuracy and those that define accurate as colorimetrically correct. Accurate, to me, is a perceptual match based on visual analysis of image and input. Or simply: the picture should look like the thing.
As I stated earlier, my experience with digital photography is primarily in the field of fine art reproduction, where color matching is critical. I have had the benefit of shooting thousands of high-resolution digital images in ideal shooting conditions, with the highest quality digital cameras and the luxury of comparing the image on perfectly calibrated equipment to the original art under the very light with which it was recorded. Through the course of my work with the Herbert F. Johnson Museum of Art at Cornell University from 1998 to 2000, I began to notice inconsistencies between image and input. Curiously, hundreds of thousands of dollars’ worth of equipment could not seem to reproduce certain colors accurately (by my earlier stated definition: some colors on screen did not visually match the colors of the original object). This was despite efforts to properly expose and balance the image and to view the image on a calibrated system. The difference was most notable in the deeper colors, such as violet and indigo.
As part of my job, I was responsible for making color-accurate reproductions, and since this was new technology, I had to do some research as to why my recordings were not color accurate. The first step was to confirm my observations. The first confirmation of the color disparity came from Robin Myers, in his article “Color Accurate Digital Photography of Artworks,” in which he observed the difficulty of reproducing cobalt blue colors (similar to my problem with deep violets and indigos). Since he is the creator of ColorSync, I knew I could take his word as a reliable source.
Later, while working as a technical representative for a European medium format camera-back manufacturer, I continued to observe the chromatic disparity between certain hues and their digital reproduction. This was problematic for me, because it was my job to introduce extremely expensive digital camera-backs to professional photographers shooting catalog work, and as we all know, one of the main reasons for returning merchandise bought online or from a catalog is that the color doesn’t match. As Mr. Nemo says, it may be true that in print production the prepress house has the final say on color matching, but the photographer is the first step in the chain of production and often the only one with the actual article in front of him for comparison. Since I was trying to sell a very expensive piece of photographic equipment, the photographers were very critical of the results on screen (yes, calibrated). So my research into the color problem continued out of necessity.
I found more confirmation of my original observation of the color differences between input and image from industry leaders (whom I have cited in earlier posts), including Robin Myers, Michael Stokes, and Charles Poynton. I observed a phenomenon, the disparity between image color and original; researched it to confirm my observations; and came up with a hypothesis as to why the phenomenon occurs.
As Mr. Rodney mentioned earlier, digital cameras do not produce color; they simply count (quantify) light input values. These input values, recorded spectral power distributions, are mapped to RGB tristimulus values using CIE colorimetry.
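To make that mapping concrete, here is a minimal Python sketch of how a recorded spectral power distribution is integrated against the CIE color matching functions to yield tristimulus values. The Gaussian curves below are only rough illustrative stand-ins for the real tabulated 1931 CMFs, and the flat SPD is hypothetical:

```python
import numpy as np

# Wavelength grid, 380-780 nm in 5 nm steps.
wl = np.arange(380, 781, 5, dtype=float)
step = 5.0

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Rough Gaussian approximations of the CIE 1931 color matching
# functions (illustrative only; the real CMFs are tabulated data).
x_bar = 1.06 * gauss(wl, 600.0, 38.0) + 0.36 * gauss(wl, 442.0, 16.0)
y_bar = 0.82 * gauss(wl, 569.0, 47.0)
z_bar = 1.22 * gauss(wl, 437.0, 32.0)

# A hypothetical measured spectral power distribution (flat here).
spd = np.ones_like(wl)

# Tristimulus values: integrate the SPD against each CMF.
X = np.sum(spd * x_bar) * step
Y = np.sum(spd * y_bar) * step
Z = np.sum(spd * z_bar) * step

# Chromaticity coordinates: intensity drops out, leaving a 2-D point.
x = X / (X + Y + Z)
y = Y / (X + Y + Z)
```

Note how the final step discards overall intensity: this is exactly the sense in which CIE colors are fractions of the whole rather than descriptions of how we see.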
The hypothesis: Although CIE colorimetry is necessary for communicating color data between devices that produce colors, CIE colorimetry alone is not sufficient for characterizing light input data (camera file) such that a resulting RGB image is consistent with human color perception of the scene recorded. In other words, the picture doesn’t match the thing: colorimetry alone doesn’t fix the problem.
Why this may be the case is a question of color theory. All CIE colorimetry is based on the famous 1931 color matching experiment, which was designed to test the laws of trichromaticity. It was believed, and rightfully so, that a minimum of three colors can be combined to form all colors. In the experiment, the intensity of three constant-energy (single-wavelength) lights was adjusted so that their combination visually matched a white reference light. Colors in this model are not based on the way we see colors; they are fractions of the whole, white light. CIE colors are mathematical abstractions plotted on a two-dimensional graph. This graph serves as a very useful tool for comparing two measured colors between devices that produce colors, but as Mr. Rodney points out, digital cameras do not produce colors.
RGB is the model used to produce digital colors. RGB is an additive model of illumination using three colored stimuli. You can clearly see how CIE colorimetry can be useful for providing a white point (color temperature), since that is almost the color matching experiment to a tee, and for determining the gamut of the RGB model by specifying the primaries. However, the RGB model is a model for producing illumination, not for characterizing human color perception, and RGB illumination is very different from most other light we see.
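As an illustration of how CIE colorimetry specifies an RGB model’s white point and gamut, the sketch below derives an RGB-to-XYZ matrix from primary and white-point chromaticities. I use the published sRGB/D65 values as an example; the arithmetic, not the particular numbers, is the point:

```python
import numpy as np

# sRGB primary and D65 white-point chromaticities (x, y).
prims = {"R": (0.64, 0.33), "G": (0.30, 0.60), "B": (0.15, 0.06)}
white = (0.3127, 0.3290)

def xy_to_XYZ(x, y, Y=1.0):
    # Recover full XYZ from a chromaticity point at luminance Y.
    return np.array([x * Y / y, Y, (1.0 - x - y) * Y / y])

# Columns are the (unscaled) XYZ coordinates of the three primaries.
M = np.column_stack([xy_to_XYZ(*prims[c]) for c in "RGB"])

# Scale each primary so that R = G = B = 1 reproduces the white point.
S = np.linalg.solve(M, xy_to_XYZ(*white))
rgb_to_xyz = M * S  # scales column j by S[j]

# Sanity check: the all-ones RGB triple maps to the D65 white point.
white_check = rgb_to_xyz @ np.ones(3)
```

This is all CIE colorimetry needs to say about an RGB system: where its three corners sit and what its white is. Nothing in it characterizes how the colors in between are perceived.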
Just as colors are a mathematical abstraction in CIE, colors of RGB illumination are equally spaced fractions of white light. I believe this is the difference between RGB and human color perception. Colors of RGB illumination are additively combined, whereas we see things by the subtractive process of absorption and reflection of light, or, in the case of the sky, the process of refraction. In these two examples, colors are not equally spaced fractions of the whole as in RGB. There are color differences between the two models: RGB illumination and the human response to what we can call, for lack of a better term, real illumination.
RGB illumination breaks down in chromatically consistent intervals, meaning that as an RGB color is reduced in value, the percent R:G:B remains constant. Real illumination breaks down in a very nonlinear manner. This may be why objects get deeper in appearance as they get darker (unlike the colors of RGB). Just as the CIE experiment used constant-energy light sources, RGB illumination is modeled to be independent of intensity; otherwise your computer screen would get hotter as it gets brighter. Most colors we see, and therefore record with our digital cameras, are caused by light/heat energy, and as the temperature of the source of the illumination changes, so too does the chroma of the hue illuminated.
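The claim about chromatically consistent intervals is easy to demonstrate: scaling an RGB triple by any factor leaves the channel ratios, and hence the nominal hue and chroma, unchanged. A tiny Python check (the violet triple is just an arbitrary example):

```python
# A saturated violet at full value, then the same color at half value.
rgb = (160, 60, 220)
half = tuple(v * 0.5 for v in rgb)

# Channel ratios, i.e. the chromatic "identity" of the color.
# They are identical at both intensities: in the RGB model,
# darkening a color never deepens its chroma.
ratio_full = [v / sum(rgb) for v in rgb]
ratio_half = [v / sum(half) for v in half]
```

Real surfaces behave differently: dim the light on a violet object and its apparent chroma shifts, which is precisely the behavior RGB cannot express by scaling alone.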
We theorized that by varying the chroma of the hues of the RGB color model relative to their intensity, we could produce RGB images that were visibly more consistent with our perception of real illumination.
To test this theory, we set up a full-blown psychophysical evaluation of color difference between the recording (the image) and the recorded (the object). We took great care to eliminate as many environmental variables as possible, such as the ambient light of the surround, and made sure to use equipment both in its default state and in a state calibrated to ISO standards. We identified commonalities in the color differences using multiple devices. We then verified that our observations were consistent in random tests, not subject to strict imaging guidelines.
This work led to a comprehensive characterization of color difference between RGB and human color perception. This characterization of color difference serves as the basis for our color difference model: the formula, which we have called DCF Full Spectrum. (A color difference model is a color appearance model that attempts to minimize the perceived color differences between two color systems.)
As has been pointed out in the course of this discussion, we enacted changes to the RGB model using an image editor, such as Adobe Photoshop (so that we could offer our results as a Photoshop plug-in). The result of our color difference evaluation is a series of value modifications that modulate the chroma of RGB hues relative to their intensity. The tools we use to enact color changes, like any automated plug-in, are less important than the formula itself. The goal of DCF Full Spectrum is to produce a more accurate recording. Making a picture pretty so that you can hang it on the wall is still subject to your skill set and your aesthetic decisions (just as choosing Kodak film over Fuji film does not automatically make you a better or worse photographer).
My partners and I are very proud of our work and we are happy to offer our DCF Full Spectrum color model to digital photographers. If you have any questions please feel free to contact me at [email protected]