OK, just as a reality check, I called Karl Lang, the color scientist who designed the Radius PressView and later the Sony Artisan, about this so-called “accuracy test”. His opinion was slightly different from mine. I said it’s mildly useful. He was more forceful (it’s useless), at least with respect to the 'accuracy of the profile'.
Let’s talk true validation. We had this in the Artisan (Quick Calibration). A number of known color values are sent to the display after calibration and profile building. The instrument measures them and compares the results to the known values. The idea is to tell you whether the device has altered its behavior beyond a fixed deltaE, such that you should recalibrate. What more modern products have done is simply store this reality check and graph it over time, telling you how far the device has deviated. This is useful in telling you that calibrating the device once a month isn’t frequent enough: if from month 1 to month 2 your deltaE (say deltaE 2000) is 3, and you then find that doing this process weekly produces results below 1, it’s a good indication you should do this more often.
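The month-over-month tracking described above amounts to a simple computation. Here's a minimal sketch; the patch values are made up, and I'm using the simple CIE76 deltaE (straight distance in Lab) where a shipping product would use the more involved deltaE 2000:

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 deltaE: Euclidean distance in Lab space.
    # (Real products typically use deltaE 2000; CIE76 keeps the sketch short.)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def validation_drift(target_labs, runs):
    # For each validation run, report the worst patch deltaE vs. the targets.
    # A rising trend tells you to calibrate more often.
    return [max(delta_e_76(t, m) for t, m in zip(target_labs, measured))
            for measured in runs]

# Hypothetical data: one gray target patch, three monthly validation runs.
targets = [(50.0, 0.0, 0.0)]
runs = [[(50.5, 0.2, 0.1)],   # month 1
        [(51.5, 1.0, 0.5)],   # month 2
        [(53.0, 2.0, 1.0)]]   # month 3
print([round(d, 2) for d in validation_drift(targets, runs)])  # → [0.55, 1.87, 3.74]
```

The growing numbers are the whole story: the device is drifting, and the graph tells you how fast.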
In the case of the Artisan, a full calibration took 12 minutes. The Quick Calibration would take no more than 7 minutes. If the deltaE was too high, it would instead run a full 12-minute calibration. It’s a time saver, and it’s useful to do before color-critical work. This isn’t about accuracy because, again, we’re using the same instrument and software to measure a subset of colors. Otherwise, run the entire process and just build a new profile.
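The decision the Quick Calibration makes is just a threshold gate. A sketch of that logic, with a hypothetical threshold and the simple CIE76 deltaE standing in for whatever metric the Artisan actually used:

```python
import math

def delta_e_76(lab1, lab2):
    # CIE76 deltaE (Euclidean distance in Lab); a stand-in metric.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def needs_full_calibration(targets, measured, threshold=2.0):
    # If any validation patch has drifted past the threshold, abandon
    # the shortcut and run the full calibration instead.
    worst = max(delta_e_76(t, m) for t, m in zip(targets, measured))
    return worst > threshold

# Hypothetical patch readings: drift is small, so the shortcut suffices.
targets = [(50.0, 0.0, 0.0), (75.0, 5.0, -5.0)]
measured = [(50.3, 0.1, 0.0), (75.4, 5.2, -4.8)]
print("full recal" if needs_full_calibration(targets, measured) else "quick is fine")
```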
OK, now onto ‘accuracy’. Let’s see the definition:
1: The state of being accurate; freedom from mistakes, this exemption arising from carefulness; exact conformity to truth, or to a rule or model; precision; exactness; nicety; correctness; as, the value of testimony depends on its accuracy.
2: (mathematics) the number of significant figures given in a number; "the atomic clock enabled scientists to measure time with much greater accuracy"
In the case being discussed here, Jack (and to be fair, all others producing software to build profiles) often uses the term profile accuracy. What does it mean? To build a profile, be it for a printer, a display, or a capture device, known color values are sent to or captured by the device. They are measured, and a comparison of known and produced LAB values is provided. This allows one to build an ICC profile. In the case of a printer, one could send a known value to the output device based on the profile, measure it, and compare the LAB values. But now you’re back to the issue of using the same device! The instrument has a fixed and specific illuminant that may be totally different from the illuminant under which the print is viewed. And heck, do you like the way the print appears in the lighting you’ve built the profile for, based on how the image will be viewed? This goes back to the suggestion of just looking at images on the display and comparing them to the print. Do they match?

Keep in mind that printer profiles are pretty complex. They have multiple tables for handling different rendering intents, and they have to provide a soft proof as well. So there’s the output you get AND the values sent to the display profile for soft proofing. That makes discussing display calibration with respect to a soft proof a lot more variable and difficult. There are some tricks for examining the deltaE of printer profiles by comparing round-trip errors going through the PCS. But ultimately you just send a lot of images through the profile, make prints, and LOOK AT THEM. The Perceptual mapping is solely based on pleasing color. There are no fixed specifications for how a profile vendor can or should build a perceptual table. And try using an Absolute Colorimetric intent for output (which should, in a perfect world, produce absolute colorimetric accuracy) and you’ll see a print that’s pretty butt ugly.
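To illustrate the round-trip idea: you push a Lab value through the profile's Lab-to-device table, then back through its device-to-Lab table, and measure the deltaE between where you started and where you landed. Everything below is a toy, not a real ICC transform: the two mappings stand in for a profile's B2A/A2B tables, and the 8-bit quantization of device values is what introduces the round-trip error:

```python
import math

def b2a(lab):
    # Toy Lab -> device mapping (stand-in for a profile's B2A table).
    L, a, b = lab
    return (L / 100.0, (a + 128.0) / 255.0, (b + 128.0) / 255.0)

def a2b(dev):
    # Toy device -> Lab mapping (stand-in for the A2B table).
    d0, d1, d2 = dev
    return (d0 * 100.0, d1 * 255.0 - 128.0, d2 * 255.0 - 128.0)

def quantize8(dev):
    # Device values stored as 8-bit integers; this loss is the sole
    # source of round-trip error in this toy example.
    return tuple(round(d * 255) / 255.0 for d in dev)

def round_trip_error(lab):
    # Lab -> device -> Lab through the PCS, then deltaE (CIE76 here).
    back = a2b(quantize8(b2a(lab)))
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab, back)))

# Hypothetical sample colors; errors here are small but nonzero.
for lab in [(33.0, 12.3, -40.7), (61.2, -18.9, 7.4)]:
    print(lab, round(round_trip_error(lab), 3))
```

In a real profile the tables are imperfect inverses of each other, so the round-trip deltaE reflects the profile's internal consistency, which is still not the same thing as accuracy to a reference.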
Profile accuracy for the display could be determined, but NOT with the same instrument that built the profile, as I’ve illustrated. If we send 50 solid patches to the display and measure them, how accurate are the resulting readings? Only when you use a known reference instrument that we KNOW has a higher level of accuracy (those significant figures given in a number) can you know whether the original 50 values are accurate, and to what degree numerically.
So, how does measuring a small sample of patches (the case with all display profiling products), using the same instrument, tell us the profile is accurate to the target values we’ve asked for? In a perfect world, we’d measure 16.7 million samples, one for each possible color. The profiles would be HUGE. It would take forever to measure. In the case of a printer, one can generally produce an acceptable profile using 900-4000 color patches. All the others are, for lack of a better word, extrapolated to build the profile (which can define 16.7 million colors). For a display, far, far fewer patches are measured. So we have a lower number of samples, and we’re measuring them using the same device, so there’s no way to measure the accuracy of the profile. We can measure the differences in each profile built over time to gauge device drift, but that set of measurements may not be ‘accurate’ to a higher measured standard, and that’s OK. As long as the device is consistent (and we assume they are), the inaccuracy over each group is fixed, and what we’re trying to measure here is the difference over time, not the accuracy of the original or subsequent profiles.
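That filling-in of the millions of unmeasured colors from a few hundred patches is, in most implementations, interpolation through a lookup table. A minimal one-dimensional sketch with made-up patch readings (a real profile interpolates in three dimensions):

```python
def interpolate(points, x):
    # Linear interpolation between sparse measured points (input, output):
    # the profile must define values between the patches actually measured.
    pts = sorted(points)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    raise ValueError("outside measured range")

# Hypothetical: five measured gray patches (8-bit input -> luminance %)
# standing in for the hundreds of patches a real target would use.
patches = [(0, 0.0), (64, 4.1), (128, 18.5), (192, 52.3), (255, 100.0)]
print(round(interpolate(patches, 96), 2))  # a value no patch ever measured
```

The point is that the profile's answer for input 96 was never measured at all; its "accuracy" there rests entirely on the assumption that the device behaves smoothly between patches.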
Accuracy is a marketing buzzword. It’s used to sell stuff. And ALL the color management companies are guilty of doing this. This is no more correct than, years ago, hearing color management companies sell their wares using the term ‘push button color’.
I’ve asked Jack a number of times how his process gauges accuracy based on the facts above. How does the instrument, along with some sample of known and measured LAB values, tell you how to set the target calibration (which on an LCD is limited to the intensity of the backlight)? If the soft proof seems off, validation CAN tell you that your current profile isn’t accurately describing the current behavior of the device. The device has changed, so trash the profile and start again. But short of that, how can sending X number of LAB values tell you anything more? Where’s the accuracy? What’s the software supposed to be telling you? Still waiting on those answers.
Our job as consumers (and educators) is to separate the facts from the fluff: to decide whether functionality provided in a piece of software is useful or is there as a feel-good placebo (an innocuous or inert medication, given as a pacifier or to the control group in experiments on the efficacy of a drug).