Jeff, thanks. What do I need to do to have an "accurate profile of the state of my display"? And how do I use this display profile in the print workflow? I thought I was just "calibrating" my display to 6500K / 2.2 (with a Spyder)...
What you are saying makes sense to me but that's exactly where my question comes from - how do I know my printer profile and display profile are speaking the same language?
Loosely speaking, display calibration is the process of setting your display to a known standard, e.g. a 6500K white point, a gamma curve of 2.2, set white and black luminance levels, and - depending on which software you use - equal R, G, and B levels for a neutral gray. The goal of calibration is to create a consistent baseline. Accurate calibration is also critical for matching displays in multi-monitor or multi-user setups.
The aims of profiling are twofold. The first is to refine the calibration: the white point, gamma, and neutrality are fine-tuned. The second is to characterize the overall capabilities of the display. Either a matrix or a look-up table (LUT) is created, enabling color-managed applications such as Lightroom, Photoshop, etc. to calculate what RGB values must be sent to the monitor to produce each color specified in your images.
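If it helps to see what the matrix flavor of profile actually does, here is a toy sketch in Python (illustrative sRGB-like numbers of my own, not pulled from any real profile): the color engine inverts the profile's RGB-to-XYZ matrix to get from a device-independent color back to linear RGB, then undoes the tone curve to get the values actually sent to the panel.

```python
# Toy illustration of a matrix-style display profile: PCS (XYZ) -> device RGB.
# The matrix and the pure 2.2 tone curve are hypothetical, sRGB-like numbers.
import numpy as np

rgb_to_xyz = np.array([                      # what a profiler might record for a display
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
xyz_to_rgb = np.linalg.inv(rgb_to_xyz)       # the direction the CMM needs for display output

def display_rgb_for(xyz, gamma=2.2):
    """8-bit RGB this particular display needs to reproduce a given XYZ color."""
    linear = xyz_to_rgb @ np.asarray(xyz)    # undo the display primaries
    linear = np.clip(linear, 0.0, 1.0)       # out-of-gamut colors simply clip here
    encoded = linear ** (1.0 / gamma)        # undo the tone response curve
    return np.round(encoded * 255).astype(int)

print(display_rgb_for([0.1901, 0.2000, 0.2178]))  # a neutral 20% gray -> about [123 123 123]
```

A LUT-based profile does the same job with interpolated tables instead of a single matrix, which lets it describe displays (and printers) whose behavior isn't a clean matrix-plus-curve.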
The main advantage of having accurate display calibration is that the profile needs to do less work in correcting the output. All color-munging by the profile comes at the cost of creating artifacts in what you see on-screen. An accurate calibration, particularly when performed on a monitor with high-bit internal LUTs, means less banding and posterization in your view of images.
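If you want to see why it matters where the correction happens, here's a rough numpy sketch of my own (the gamma numbers are made up, and real LUTs are measured rather than computed like this): apply the same correction once in an 8-bit video-card LUT and once in a high-bit monitor LUT, and count how many distinct gray steps survive.

```python
# Rough sketch: the same calibration correction applied in an 8-bit video-card LUT
# versus a high-bit internal monitor LUT. Hypothetical correction: native gamma 1.8
# nudged to a target of 2.2.
import numpy as np

levels = np.arange(256) / 255.0          # every gray step an 8-bit image can contain
correction = levels ** (2.2 / 1.8)       # the curve the calibration has to impose

# Case 1: correction baked into the 8-bit LUT on the video card.
out_8bit = np.round(correction * 255)

# Case 2: signal leaves the computer untouched; the monitor applies the same
# correction in a high-bit internal LUT driving (say) a 10-bit panel.
out_hibit = np.round(correction * 1023)

print(np.unique(out_8bit).size, "distinct grays survive the 8-bit LUT")     # noticeably fewer than 256
print(np.unique(out_hibit).size, "distinct grays survive the high-bit LUT") # all 256
```

The grays that collapse into each other in the 8-bit case are exactly the banding and posterization you see on a gradient or a clear sky.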
Your display and printer profiles "communicate" through the color management engine used by your software, be it Adobe's (ACE), Microsoft's (ICM/WCS), or Apple's (ColorSync). The colors in your image are converted into a device-independent color space (translation: a color space where the numbers themselves describe an actual, real-world color). Those values are then fed through your display profile so the output on your screen is as faithful as possible to the image. Likewise, at print time the image colors are converted through the printer profile to produce a print that is as close to the original as possible.
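To make that hand-off concrete, here is a simplified sketch using Pillow's ImageCms module (a wrapper around LittleCMS). The profile filenames are placeholders for whatever your calibration package and printer/paper vendor actually installed, and a real application handles rendering intents and proofing with more care than this.

```python
# Simplified sketch of what a color-managed application does behind the scenes.
# Profile filenames are placeholders; the image is assumed to be in sRGB.
from PIL import Image, ImageCms

img = Image.open("photo.jpg")
src = ImageCms.createProfile("sRGB")                        # working space of the image
display = ImageCms.getOpenProfile("my_display.icc")         # written by your calibration software
printer = ImageCms.getOpenProfile("my_printer_paper.icc")   # for the printer + paper combination

# Image colors -> device-independent PCS -> the values this particular display needs.
to_screen = ImageCms.buildTransform(src, display, "RGB", "RGB",
                                    renderingIntent=ImageCms.INTENT_PERCEPTUAL)
on_screen = ImageCms.applyTransform(img, to_screen)

# Same image colors -> PCS -> the values the printer needs for this paper.
to_print = ImageCms.buildTransform(src, printer, "RGB", "RGB",
                                   renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC)
for_print = ImageCms.applyTransform(img, to_print)
```

Both transforms pass through the same profile connection space, which is the "same language" you were asking about: each profile only has to describe how its own device relates to that common space.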
The exact settings used for display calibration are open to (never-ending) debate. Here's my take: we want to see the best possible representation of our images on-screen, with the proviso that it should correlate with how prints made from those images appear. The right settings depend on the hardware (monitor and printer) involved and on our personal color vision. Taking the major settings in turn:
- White point: The standard (in the US particularly) for print viewing is a D50 white point. If you are not viewing your prints under a 5000K light source, all bets are off for screen-to-print matching with standard printer profiles. (Yes, it is possible to make printer profiles for other viewing conditions; the vast majority are made for D50.) Due to how monitors render white (emitted rather than reflected light) and quirks of human vision, most people see a closer visual match between a white on-screen image and a 5000K-illuminated piece of white paper when the monitor is set to a 6500K white point. There is no absolute answer here: compare a white square in Photoshop to a blank sheet of your paper stock in a viewing booth or station. If the on-screen white looks too blue, lower the white point. One of my colleagues calibrates his screens to 5000K and swears about the "damned blue displays" the rest of us use. When I view images on his screens, I have difficulty getting past the dingy yellow cast. To each his or her own.
- Gamma: The recommendation for a gamma of 2.2 came about because this was close to the average native gamma of most CRTs. Given that any change in gamma away from the native value, particularly on panels from a decade or two ago, created opportunities for banding artifacts, calibrating to the native gamma made sense. With LCD screens it makes less difference: native gamma is, for compatibility with CRTs, close to 2.2, so 2.2 still works. If you have a high-quality monitor and calibration software that supports it, L* gamma has much to recommend it. The L* curve more closely matches human visual response, creating greater tonal separation where it is needed to see image details (see the sketch after this list for a rough comparison of the two curves). L* can create odd banding and crossover artifacts if the monitor hardware is not up to the task. My preference is to try both L* and 2.2, view test images such as this one to check relative performance in shadows and highlights, then check a grayscale ramp for artifacts, and choose the gamma curve that performs best.
- White luminance: Here's a can 'o worms. Other than a few top-end monitors, most panels show gamut reduction and increased artifacts when set to luminance levels under 130-140 cd/m2. If your print-viewing environment is sufficiently bright, running at a luminance of at least 130 cd/m2 improves your view into image details and keeps you from wasting time correcting image flaws that turn out to be calibration artifacts. Many photographers either do not have dedicated viewing booths or simply hold a print up to their screen, using ambient light or a task lamp for illumination. This produces the "my prints are too dark" complaints Andrew noted above. You can solve this problem either by dialing down your screen luminance or by using a brighter light for viewing prints.
- Black level: Most monitor calibration software does a decent job of determining the lowest usable black level. Yes, you can obtain deeper blacks, but this comes at the expense of plugged-up shadows. For general use, I recommend getting maximum output contrast by setting black to the minimum usable level. Soft proof with black ink simulation to make sure your shadow details will hold up. If you print to a very low-contrast paper, like the newsprint job I should be working on now, dialing the display contrast down can help preview just how much your images will suffer in print and let you adjust accordingly.
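Regarding the gamma comparison above, here is a back-of-the-envelope numpy sketch of my own (assuming an idealized pure 2.2 power curve and the standard CIE L* formula, nothing from a calibration vendor) that counts how many of the 256 video levels each curve places in the visible shadow band, roughly 1% to 10% of maximum luminance:

```python
# How many 8-bit video levels land between 1% and 10% of maximum luminance
# under a pure 2.2 gamma versus an L* tone curve?
import numpy as np

v = np.arange(256) / 255.0                 # normalized 8-bit video levels

y_gamma = v ** 2.2                         # gamma 2.2: luminance is v^2.2

L = 100.0 * v                              # L*: levels spaced uniformly in CIE lightness,
y_lstar = np.where(L > 8.0,                # so invert the L* formula to get luminance
                   ((L + 16.0) / 116.0) ** 3,
                   L / 903.3)

for name, y in (("gamma 2.2", y_gamma), ("L*", y_lstar)):
    in_shadows = np.count_nonzero((y >= 0.01) & (y <= 0.10))
    print(f"{name:9s}: {in_shadows} levels between 1% and 10% luminance")
```

The L* curve ends up with noticeably more steps than 2.2 through that shadow range (and fewer wasted on near-black tones you can't distinguish anyway), which is the extra tonal separation mentioned above; whether your monitor can deliver it without artifacts is the part you have to test.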