That article doesn't make any sense.
Much of it doesn’t, I agree.
One of the biggest misconceptions in color management is the belief that you calibrate your monitor and printer to match each other.
What makes it a myth? I get very good visual matches between the two. They are not perfect, nor 100%, but they are very close. One could say it's impossible to shoot a transparency and then make a print or reproduction that the creator feels is a match. Let's throw up our arms and just forgo process control in image capture, film processing, and press processing; the two will never match.
Let’s start with an example from music. Consider this: If a guitarist and a cellist were playing the same piece of music, would you expect the guitar to sound like the cello?
No, but we could interpret the music as both being the Brandenburg Concertos. And we could say that the two sound far more similar than a cello and a bass drum!
We all know that the character of instruments is different.
Just as we know that a Polaroid and a transparency, or an emissive display and a print, have differing characteristics, we can still compare the effects of the two, just as we can say that the transparency and the printed piece do not match (or do match to our satisfaction).
The problem with the idea of making your monitor and printer “match” is that it forgets about the file itself…and the fact that the file is the most accurate representation of color.
That’s pretty nonsensical! The file is a big pile of numbers. Those numbers are “accurate” (a buzzword) to what?
They are the most accurate representation, even though they record color in a numeric form our eyes can’t see.
Yup, without a display, my eyes can’t “see” the numbers.
The problem with monitors and printers is that they are each limited in the colors they can reproduce.
The weak link being the gamut of the display, true. But an awful lot of colors can fall within gamut, so what about those numbers (and the colors we see from those numbers)?
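The question “accurate to what?” has a concrete answer: the numbers are accurate relative to a color-space definition. A minimal sketch (pure Python, using the published sRGB and Adobe RGB (1998) encodings and their standard D65 matrices) showing how the same pixel value lands on two different CIE XYZ colors depending on which working space you assume the numbers live in:

```python
# Sketch: the same 8-bit RGB triplet maps to different CIE XYZ values
# depending on which working space the numbers are interpreted in.
# Transfer functions and matrices are the published sRGB and
# Adobe RGB (1998) definitions (D65 white point).

def srgb_to_xyz(r8, g8, b8):
    def lin(c8):  # sRGB piecewise transfer function
        c = c8 / 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r8), lin(g8), lin(b8)
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def adobergb_to_xyz(r8, g8, b8):
    def lin(c8):  # Adobe RGB (1998) gamma is exactly 563/256 (~2.2)
        return (c8 / 255.0) ** (563.0 / 256.0)
    r, g, b = lin(r8), lin(g8), lin(b8)
    return (0.5767 * r + 0.1856 * g + 0.1882 * b,
            0.2974 * r + 0.6273 * g + 0.0753 * b,
            0.0270 * r + 0.0707 * g + 0.9911 * b)

pixel = (211, 0, 0)
print("interpreted as sRGB:      XYZ =", srgb_to_xyz(*pixel))
print("interpreted as Adobe RGB: XYZ =", adobergb_to_xyz(*pixel))
```

The luminance (Y) of that “same” red differs by roughly 40% between the two interpretations — which is exactly why an untagged pile of numbers isn’t “accurate” to anything on its own.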
Well, you can’t just feed a pixel value of 211R 0G 0B from the file into a printer or monitor and expect to get the right color red.
But you said the numbers in the file were accurate. And yet, farther down, you say:
We’ve done side-by-side tests with challenging prints with out-of-gamut colors, comparing a print under 4700K SoLux to the monitor, and turning the softproof on and off, and the monitor was more accurate with softproof off.
This raises the question: how can a document in an output-agnostic color space match the print more closely than when you ask the CMS to map out-of-gamut colors, contrast ratio, and paper white to that output? Something seems wrong in this workflow. Do all RGB working spaces, which differ greatly from one another, match all output devices better than using the profile that defines the output device along with the display for a soft proof? Something is seriously wrong here!
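Whatever one makes of that test, the gamut problem soft proofing is meant to address is easy to demonstrate. A sketch, assuming the published Adobe RGB (1998) and sRGB conversion matrices (D65), showing that Adobe RGB’s purest red has no in-range equivalent in a smaller-gamut space and must be clipped or compressed by the CMS:

```python
# Sketch: why out-of-gamut colors need mapping at all. Adobe RGB's purest
# red, re-expressed in sRGB, lands outside the 0..1 range -- a device with
# the smaller gamut cannot reproduce it, so the CMS must clip or compress.
# Matrices are the published Adobe RGB (1998) and sRGB definitions (D65).

ADOBE_TO_XYZ = [(0.5767, 0.1856, 0.1882),
                (0.2974, 0.6273, 0.0753),
                (0.0270, 0.0707, 0.9911)]
XYZ_TO_SRGB = [(3.2406, -1.5372, -0.4986),
               (-0.9689, 1.8758, 0.0415),
               (0.0557, -0.2040, 1.0570)]

def mul(m, v):
    """3x3 matrix times 3-vector."""
    return tuple(sum(a * b for a, b in zip(row, v)) for row in m)

adobe_red_linear = (1.0, 0.0, 0.0)        # Adobe RGB (255, 0, 0), linearized
xyz = mul(ADOBE_TO_XYZ, adobe_red_linear)
srgb_linear = mul(XYZ_TO_SRGB, xyz)
print("sRGB linear:", srgb_linear)        # red channel exceeds 1.0: out of gamut
naive_clip = tuple(min(1.0, max(0.0, c)) for c in srgb_linear)
print("after naive clipping:", naive_clip)
```

Naive per-channel clipping is the crudest possible gamut mapping; a real CMS applies a rendering intent precisely because this kind of clamp distorts hue and saturation.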
You have to realize that devices like monitors do not represent the print 100% of the time so you can shatter the myth of the match.
Correct, they do not match 100%. So is 95% useful? Why calibrate the display at all if the idea is that, without a 100% match, matching a print to a display is a myth and, as stated above, we should just forgo soft proofing?
This is the important point. Color management IS NOT trying to make the monitor match the printer. Instead, it’s trying to make each device, independently of any other device, represent the file as accurately as it can, within its own limitations.
So is this only and always a display issue? Or are those who proof (cross render), pull contract proofs, and so forth unaware that color management is not trying to produce matches across dissimilar media?
Now, I’m not recommending you throw away your color-accurate monitor, but to understand and work with its limitations.
Best point in the piece, and one that could be stated without the rest of the article. Matching even two displays 100% is probably not possible. In fact, if you want to get picky, take two brand-new Epson or Canon printers, send the same RGB values to both, and measure a few thousand color patches; you’ll soon see they do not produce a 100% match. Is an atomic clock 100% accurate? If not, what does that tell us about our inability to tell time accurately?
The entire article could have been a paragraph: differing devices do not match 100% (and it would be damn useful to define how you measure that match, along with the accuracy of the measuring device and process, but let’s skip that). The goal is to get the closest match the technology and price point allow. It’s a bad idea to throw the baby out with the bathwater. If color management reduces mismatches by 15%, is that useful to the end user, even if, when viewing the two media in context, we don’t see everything matching perfectly?
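For what it’s worth, “how close is close” can be put on a number line. A sketch of the classic CIE76 Delta E — the Euclidean distance between two measured Lab values — which is the simplest of the standard match metrics; the patch readings below are hypothetical, purely for illustration:

```python
# Sketch: quantifying a "match" with Delta E (CIE76) -- the straight-line
# distance between two colors in CIELAB. Roughly, a Delta E near 1 sits at
# the threshold of a visible difference; 2-4 is often acceptable in print
# matching. The Lab readings here are made up for illustration.
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

monitor_patch = (52.0, 61.0, 38.0)  # hypothetical measured Lab of a red on screen
print_patch = (50.5, 58.0, 40.0)    # hypothetical measured Lab of the same red in print

print(f"Delta E = {delta_e76(monitor_patch, print_patch):.2f}")
```

Measure a few hundred patches this way and you get a distribution of Delta E values — a far more honest basis for “these match” or “these don’t” than eyeballing two media under different illuminants.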
The author should also be asking why, when viewing an RGB working space that has no relationship to the output device or its numbers, he gets a worse match than when the CMS takes that output device into account with soft proofing. Why do so many users find better matches with soft proofing than with the working space, when he does not? Could something in the profiles, display calibration, print-viewing environment, or the user’s eyesight be suspect?