Needless to say, UV issues combined with paper OBAs (optical brightening agents) can produce even bigger differences.
Yes, definitely. I only use non-OBA papers for matching work.
It's a good approach, but ColorThink Pro (CTP) can be somewhat slow on a large image. CTP has a lot of useful features, but speed doing ΔE comparisons isn't one of them.
Another approach is to convert each image, then assign it the other profile. Then you can compare those two images to the original to see how prints made with one profile's illuminant would look under the other.
I use Matlab with some scripts and functions I've written to look at these sorts of differences.
Converting to printer space then assigning the other profile does seem to work pretty well. The average difference across the test images between my viewing booth lights and D50 was 1.3, with the worst colors (light tans) maxing out just under 3. I did go ahead and print a few of the test images using the custom illuminant profile for a side by side visual comparison with the D50 version in the viewing booth. The DeltaE maps from ColorThink Pro for those images were good predictors of which areas would be noticeably different (though barely noticeable). Yay, I like it when the model works!
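For anyone who wants to script that kind of comparison rather than run it through ColorThink Pro, here's a minimal sketch of the per-pixel ΔE*ab (CIE76) map and the summary stats discussed above. This is Python/NumPy (not the Matlab scripts mentioned earlier), and it assumes the two renderings have already been converted to Lab arrays; the sample values are made up for illustration:

```python
import numpy as np

def delta_e76_map(lab_a, lab_b):
    """Per-pixel CIE76 delta-E*ab between two Lab images (H x W x 3 arrays)."""
    return np.sqrt(np.sum((np.asarray(lab_a) - np.asarray(lab_b)) ** 2, axis=-1))

# Hypothetical example: two tiny Lab "images" that differ slightly in b*.
lab_d50   = np.array([[[55.0, 10.0, 20.0], [70.0, 0.0, 5.0]]])
lab_booth = np.array([[[55.0, 10.0, 22.0], [70.0, 0.0, 5.5]]])

de = delta_e76_map(lab_d50, lab_booth)
print("mean dE:", de.mean(), "max dE:", de.max())  # mean 1.25, max 2.0
```

Using ΔE2000 instead of CIE76 would weight the light tans differently, but the map/mean/max workflow is the same.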
I've done a lot of reading and experimentation with taking a scene-referred art reproduction image file and matching a print from that file in a viewing booth with its image on a display. Here's a summary of my current understanding, and a question. Please point out any problems you see...
Standard observer XYZ values represent stimulus response only. They tell us whether two different spectral stimuli will appear the same when compared side by side while isolated from all other visual stimuli.
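That stimulus-only behavior is easy to demonstrate numerically: two physically different spectra whose weighted sums against the color matching functions are equal are metamers, and they produce identical XYZ. A toy sketch below uses a made-up 5-band CMF table (NOT the real CIE 1931 data) and constructs a second spectrum by adding a "metameric black" from the null space of the CMF matrix:

```python
import numpy as np

# Toy 5-band "color matching functions" (hypothetical values, not the CIE tables).
cmf = np.array([
    [0.2, 0.8, 1.0, 0.6, 0.1],   # x-bar
    [0.0, 0.4, 1.0, 0.8, 0.2],   # y-bar
    [1.0, 0.6, 0.1, 0.0, 0.0],   # z-bar
])

s1 = np.array([0.5, 0.6, 0.7, 0.6, 0.5])  # first spectral stimulus

# A null-space vector of the 3x5 CMF matrix integrates to (0, 0, 0)
# against the CMFs -- a "metameric black".
_, _, vt = np.linalg.svd(cmf)
black = vt[-1]
s2 = s1 + 0.3 * black            # a different spectrum, same stimulus response

xyz1 = cmf @ s1
xyz2 = cmf @ s2

print(np.allclose(xyz1, xyz2))   # -> True: identical XYZ...
print(np.allclose(s1, s2))       # -> False: ...from different spectra
```

The same construction is why two prints can match under the booth lights but drift apart under D50: the spectral difference is invisible to the standard observer only for the illuminant it was balanced under.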
CIE Lab extends that model with the definition of a white point and the use of chromatic adaptation functions to convert colors so they appear consistent, relative to each other, when the observer is adapted to different white points. Lab is the beginning of an appearance model, but it assumes full adaptation to the white point and doesn't take into account appearance changes due to other factors such as background, surround, and luminance levels.
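The white point dependence is visible right in the math: the XYZ-to-Lab conversion divides each component by the white point before applying the nonlinearity, so the same XYZ yields different Lab values under different adapting whites. A self-contained sketch (D50/D65 white point values as commonly tabulated; confirm against your own references):

```python
import numpy as np

D50 = np.array([0.9642, 1.0, 0.8249])  # ICC PCS white point
D65 = np.array([0.9505, 1.0, 1.0891])

def _f(t):
    delta = 6 / 29
    return np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4 / 29)

def xyz_to_lab(xyz, white=D50):
    """CIE XYZ -> Lab, relative to the adapting white point."""
    fx, fy, fz = _f(np.asarray(xyz) / white)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

print(xyz_to_lab(D50))              # the white point itself -> L*=100, a*=b*=0
print(xyz_to_lab([0.5, 0.5, 0.5]))           # one stimulus, judged against D50...
print(xyz_to_lab([0.5, 0.5, 0.5], white=D65))  # ...and against D65: different Lab
```

This is exactly the "full adaptation assumed" simplification: the formula bakes in complete adaptation to the chosen white and says nothing about background, surround, or luminance level.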
The XYZ/Lab models work well for image capture and reproduction onto reflective media, but when we have a reflective viewing booth side by side with a self-luminous display, we experience a breakdown in these models. We can strive to get measured XYZ, white point, and Lab values that match between the two, but even when the measured values do match, we don't actually perceive a match. This lack of a perceived match seems to go beyond what can be explained by instrument error or differences between individual observers.
In an effort to use the CIE XYZ/Lab models, we adjust the luminance of the viewing booth and/or the display to get a match while eyeballing changes to the image on the display until we perceive a white point match. Then we build calibration curves for those target values and create a profile for converting an image into display space. We can get a very close match between viewing booth and display using this technique, but that match is built on the eyeballing, so there's not really an accurate perceptual appearance model underpinning the effort.
CIECAM02 takes into account levels of luminance, background, surround, and level of adaptation. Given a paper white background in a viewing booth, a custom illuminant printer profile for that paper under those lights, and a display that's been adjusted, calibrated, and profiled for a white point that matches the measured Yxy values of the viewing booth lights reflecting off that paper white, can CIECAM02 be used to create a perceived match between print and display without any eyeballing adjustments required?
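On the "without any eyeballing" part: CIECAM02 at least makes the adaptation level explicit rather than leaving it to the eye. Its degree-of-adaptation factor D depends on the surround factor F and the adapting luminance L_A, and it blends between full von Kries adaptation (D = 1) and none (D = 0) via the CAT02 transform. A sketch of just that piece (matrix and formulas from the CIECAM02 model; the booth luminance value is a hypothetical example):

```python
import numpy as np

# CAT02 chromatic adaptation matrix from the CIECAM02 model.
M_CAT02 = np.array([
    [ 0.7328, 0.4296, -0.1624],
    [-0.7036, 1.6975,  0.0061],
    [ 0.0030, 0.0136,  0.9834],
])

def degree_of_adaptation(L_A, F=1.0):
    """CIECAM02 D: 1.0 = fully adapted, lower = partial adaptation.
    F is the surround factor (1.0 for an 'average' surround)."""
    return F * (1 - (1 / 3.6) * np.exp(-(L_A + 42) / 92))

def cat02_adapt(xyz, xyz_w, D):
    """Adapt a stimulus away from adopted white xyz_w, with degree D."""
    rgb, rgb_w = M_CAT02 @ np.asarray(xyz), M_CAT02 @ np.asarray(xyz_w)
    # Per-channel blend of full von Kries scaling (D=1) and identity (D=0).
    scale = D * xyz_w[1] / rgb_w + 1 - D
    return np.linalg.inv(M_CAT02) @ (scale * rgb)

L_A = 64.0  # hypothetical adapting luminance in cd/m^2 (~20% of booth white)
D = degree_of_adaptation(L_A)
print(D)    # strictly between 0 and 1: the model predicts incomplete adaptation
```

That predicted-incomplete-adaptation behavior is one reason CIECAM02 can get closer than Lab for the booth-vs-display case, though whether it fully eliminates the eyeballing step in practice is exactly the open question here.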