Granted, I did oversimplify... So, I'll make it more confusing and much more likely to get criticized.
Zone System is A+B+C=D, not A+B=C
A is your exposure control.
B is your film/sensor characteristics AND development settings (chemistry, time, or Lightroom raw conversion settings).
C is your paper/output characteristics AND development settings (chemistry, handling, printer profiles, RIP).
D is the final result.
In the second book of Ansel Adams' trilogy,
"The Negative", he takes 50 pages to describe the Zone System in Chapter 4. I won't repeat all 50 pages here, by any means, but I will hit a couple more salient points and try to give an interpretation for the digital world--which is sure to be wrong in some regard, so I'll stick high-level and willingly accept the flaming arrows.
In the second paragraph of Chapter Four, AA writes:
"You may well ask why anyone should go to such pains to produce consistent negatives when we have printing papers available in several contrast grades and other printing controls that allow us to compensate for negatives of differing scales. While every such control has its uses, it is best to strive for the optimum negative to minimize dependency on printing contrast control, since the tones of the print may be best achieved with the use of normal-contrast paper. In particular, papers of higher-than-normal contrast make it increasingly difficult to control the refinements of the higher and lower tonalities. It might be preferable to work for a negative of extensive density range and print only on the longest scale papers, but there is then little additional tolerance when we desire softer results."My personal translation of this paragraph for the digital world would be this:
"You may well ask why anyone should go to such pains to produce consistent converted raw files, when we have inkjet printers, soft proofing and Photoshop to adjust contrast, exposure, clarity, gamma, curves and saturation. While every such editing tool has its uses, it is best to strive for the optimum image file to minimize dependency on Photoshop adjustments, since the tones of the print may be best achieved with the use of minimal tonal reassignment. In particular, major amounts of contrast adjustment make it increasingly difficult to control the refinements of the higher and lower tonalities, which require more highlight and shadow recovery. It might be preferable to work for an image file of full density range, but we run the risk of overcooking the results and it's hard to back off to a more reasonable image." (I editorialized)
In my A+B+C=D description, you will note that I lumped sensor/film and development together. This is an important point--if not THE critical point of this entire discussion.
In the classic film Zone System, the film's characteristics and development techniques (developer, processing time, handling) work together to form a specific response curve. In the digital world, the sensor and raw converter together are the equivalent. One does not exist without the other. To adjust these curves, the Zone System generally allows (depending on film types) anywhere from N-2 to N+2 development. This is an expansion or contraction of the contrast range of the original scene onto the storage medium.
Films have three exposure regions of interest: the Toe, the Shoulder, and the Straight-Line Section. With digital, we only have the Straight-Line Section. However, the Straight-Line Section of a digital sensor is FAR longer than that of almost every film ever made. (Ilford XP2, which is well regarded for its extremely wide exposure latitude, has no Straight-Line Section at all, but is entirely Shoulder and Toe.) Development and exposure adjustments in film photography allow us to push more of the exposure range into the toe or shoulder of the film; the more you do it, the more tonal separation you lose. While film photography lets us lean into the toe and shoulder to effectively gain exposure range (which can be recovered with further manipulation through the printing process), digital photography is an all-or-nothing affair. There is no toe or shoulder to save your bacon.

Of the B&W films that I use today (mostly Ilford films), I'm pretty safe in saying that I've got about 8 stops of tonal scale with reasonably normal tonal separation and a limit of another 8 stops of something lurking in the toe and shoulder--and that's only possible if I'm really good and have exposed and processed the film perfectly. In reality, my world usually ends at 12 stops. That's about the same as a modern digital sensor--and with the digital image I can fudge a shoulder and toe through the addition of dithering noise to the image.
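That last trick--adding a little dithering noise before quantizing--is worth a quick sketch. This is a minimal numpy illustration (the tone value and noise amplitude are made up for the example, not measurements from any camera): without dither, a tone that falls between two output levels snaps entirely to one of them; with dither, the average over many pixels preserves the in-between tone.

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant "true" tone that falls between two 8-bit levels
true_level = 10.3  # in 8-bit units; illustrative value
n = 100_000
signal = np.full(n, true_level)

# Straight quantization: every pixel lands on level 10
plain = np.round(signal)

# Add +/- half a step of uniform noise first (dither), then quantize:
# pixels scatter between 10 and 11, and the average keeps the 0.3
dithered = np.round(signal + rng.uniform(-0.5, 0.5, n))

print(plain.mean())     # 10.0 -- the fractional tone is lost entirely
print(dithered.mean())  # ~10.3 -- the average preserves it
```

That decorrelated error is what lets a dithered digital file fake the smooth roll-off that film's toe and shoulder give for free.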
The idea behind a "Zone System for Digital Photography" is to apply as little tonal manipulation to the image file as possible, to preserve tonal separation and integrity. In other words, touch it once with regard to tonal assignment.
The premise behind this is that the time to get the basic contrast and tonal curves defined is at the moment of raw conversion, NOT afterwards in the editor. ETTR (Expose To The Right) is often one of the worst ways to work because it may end up requiring the most extensive bit reassignments. However, for maximizing tonal scale it is the best method, and I would usually say that it is the preferred method for Zone System-style photography--but not always. Remember, this is all about interpreting the scene and assigning brightness values into a stored brightness value for processing down the chain. If I want something absolutely black, I'll make sure that I'm "climbing the wall" on the left side of the histogram, and if I want something absolutely white, I'll make sure that I'm "climbing the wall" on the right side of the histogram. If my subject is decidedly middle-tone, why in the world would I ETTR and then go through the bit-bending to reassign values downward and resort to highlight recovery to keep the colors in the high tones from going wonky?
The classic Zone System isn't always just about maximizing tonal scale, but it is about shooting for the output with minimal manipulation.

A little hint from a darkroom dog: when exposing the paper under an enlarger, we use almost exactly the same exposure settings regardless of the subject. Unless we are trying to be a hero, we don't take a thin, underexposed negative and pull exposure three stops to get a print, and we don't take a negative as dense as welders' goggles and blast the paper for five minutes. We try to have negatives that, regardless of the subject, are exposed and processed to a normalized setting where we are maximizing the straight-line section, or where a desired tonality is printed with minimal adjustment. Yet in the digital world, this is EXACTLY what we are doing--taking severely under- or over-exposed images! We're taking an ETTR image and pulling exposure multiple stops! This isn't necessarily a bad thing, however, as the linear capture technology of digital photography piles all the bit depth on the top end (essentially into the "shoulder" of the sensor). Which now leads to my next point.
The sensor and raw processor are integral. You cannot separate the two. They are one. It is extremely critical that you not only carefully match the raw converter to the sensor/camera, but also get your tonal adjustments nailed down as close as possible at this stage. As much as we love Adobe around here, it is no secret that the ACR engine used in Photoshop, Bridge and Lightroom isn't the best converter for every sensor. I won't get into specifics, and I'm sure that I'm shocking some people, but there is a difference between raw converters and algorithms. Some converters use RGBG in the matrix, others use RGB in the matrix, others use RB(average GG) in the matrix, others use RG+BG in the matrix... An occasional one uses RGB-G... Basically, if you can envision it, there is a converter doing it.
Adobe's conversion engine is the best general-purpose converter available, just as D76 was the best general-purpose film developer.

When converting the raw file itself, this is the point where you want to get the resulting file as close to the desired exposure and contrast as possible. Do it before TIFF/JPEG storage and assigning a color space. (Lightroom is the variable here, since it doesn't really give much separation between the conversion and the editing--with mixed results depending on the image file in question--and, unlike most converters, it doesn't specifically give controls over just the raw conversion process.) The raw conversion process is where you expand the contrast range to stretch the histogram to the desired high/low points and get the midtone positioned properly. This is where the general image gamma (the conversion from a linear to a non-linear exposure curve) is applied. Get this right and your job in the editor is much easier. Get it wrong and you're stealing bits from somewhere.
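The gamma step can be sketched in a couple of lines. Real converters apply more elaborate tone curves than a bare power law, so treat this numpy fragment as an illustration of the idea only (the 2.2 figure is the common display gamma, and the sample values are made up):

```python
import numpy as np

gamma = 2.2
linear = np.array([0.0, 0.01, 0.18, 0.5, 1.0])  # normalized sensor values

# Power-law gamma encode: compresses highlights, lifts shadows/midtones
encoded = linear ** (1 / gamma)

# Middle gray (18% in linear terms) lands near 46% after encoding --
# which is why gamma encoding "rescues" the midtones of a linear capture
print(encoded)
```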
Yes, during the raw conversion process you are moving bits around, but you are also doing it at a bit depth much greater than 8 or even 16. Even though most cameras store the RGBG data in 12-16 bits, the raw converter uses floating-point calculations and other math techniques to give effectively 24+ bits of processing depth. (This is a technique used in the professional audio world, where we do everything possible in the A-D process because the effective bit depth there can be as high as 128.) Do most of the exposure and contrast adjustment at this stage, because you'll never have this much effective bit depth available again. (More bit depth doesn't necessarily mean greater dynamic range, but it usually means smoother gradients between tones--especially in the lower tones and near the thresholds of the exposure range.)
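Here is a toy demonstration of why that extra working precision matters. This is not how any real converter is implemented--it's just a round trip of a 2-stop exposure pull and push, done once in 8-bit integer math and once in floating point:

```python
import numpy as np

v = np.arange(256, dtype=np.float64)  # all 256 levels of an 8-bit file

# Pull exposure 2 stops and push it back, rounding to integers each time
# (the kind of loss you get editing in a low-bit-depth intermediate)
down8 = np.round(v / 4)
up8 = np.clip(np.round(down8 * 4), 0, 255)

# The same round trip in floating point loses nothing
upf = (v / 4) * 4

print(len(np.unique(up8)))  # 65 -- roughly three quarters of the levels gone
print(len(np.unique(upf)))  # 256 -- every level survives
```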
So, this is where the Zone System comes into play again. Back in Ansel Adams' time, with some of the films available and in larger formats, he could pull development and push development. He had N-2, N-1, normal, N+1 and N+2 exposure and development settings. With digital, unless you dive into HDR, we have normal, N+1 and N+2 at our disposal. Actually, depending on the raw converter, we have as much as N+5 or so available. Not having N-1 or N-2 available with digital is not a problem because the digital sensor's Straight-Line Section is so long, and we can resort to HDR to give us N- numbers of mind-boggling proportions.
In reality, a camera such as the Nikon D800 is already effectively at the N-2 point. (One difference, though, is that while the first two stops of shadow in a digital file are represented by a total of three bits and are an all-or-nothing affair, film has more gradient potential in the shadows.)
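The "bit depth piled on the top end" point is easy to see with a back-of-the-envelope count. A linear capture assigns half of all code values to the single brightest stop, and each stop down gets half again; this loop tallies it for an illustrative 14-bit file:

```python
# A 14-bit linear raw file has 2**14 = 16384 code values.
full_scale = 2 ** 14

levels_per_stop = []
remaining = full_scale
for stop in range(7):
    # This stop owns everything above the halfway point of what's left
    in_this_stop = remaining - remaining // 2
    levels_per_stop.append(in_this_stop)
    remaining //= 2

print(levels_per_stop)  # [8192, 4096, 2048, 1024, 512, 256, 128]
```

Seven stops down, a whole stop of shadow is described by only 128 values--which is why the deep shadows of a linear file are so coarse compared to film.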
Let's assume a Nikon D800 file here, exposed ETTR, of a typical midday landscape. I'll be generous and use the commonly referenced 14 EV range. Fourteen stops of almost entirely straight-line section is about what an N-2 processed negative is going to get you. This means that almost without exception, we end up having to increase image contrast to get a full-range histogram that touches absolute black and absolute white. This stretching of the contrast is the development equivalent of going to N-1, normal, N+1 or N+2 development.
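That contrast-expansion move is, at its simplest, a linear remap of the occupied range onto the full range. A minimal numpy sketch (the sample values are invented for illustration):

```python
import numpy as np

# A flat midday scene: tones occupy only part of the available range
img = np.array([0.20, 0.35, 0.50, 0.65, 0.80])

# The "N+ expansion" equivalent: stretch so the darkest value hits
# absolute black and the brightest hits absolute white
lo, hi = img.min(), img.max()
stretched = (img - lo) / (hi - lo)

print(stretched)  # evenly respaced from 0.0 to 1.0
```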
In most converters (to varying degrees and implementations), you can save basic conversion settings. You can create conversion settings for N-2, N-1, normal, N+1 and N+2.
These settings not only stretch the contrast, but also adjust the mid-tone placement. This applies your standardized gamma settings to the files, AND you can also apply curves adjustments which create an artificial Toe and Shoulder. Remember that this D800 file is 14 stops of Straight-Line Section, compared to a B&W film with a maximum of 8 stops of Straight-Line Section and 4-8 stops of Toe and Shoulder. You really can take the middle 8 stops, keep them straight, and squish the remaining stops a bit.
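One simple curve that behaves this way is the classic S-curve 3x² - 2x³ (the "smoothstep" polynomial). It's only a stand-in for whatever curve you'd actually build in your converter, but it shows the shape: nearly straight through the midtones, with the slope rolling off toward zero at both ends--an artificial toe and shoulder grafted onto a straight line.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 11)

# Smoothstep S-curve: midtones stay close to linear, while values
# near 0 (toe) and near 1 (shoulder) get compressed together
s = 3 * x**2 - 2 * x**3
```

For example, the input 0.5 maps to 0.5 untouched, while the bottom tenth of the range (0 to 0.1) is squeezed into roughly a third of its original span.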
With a single click of the button, we can take a Zone System exposed camera file and get an image that is proof ready. Of course, you can always go back and do a custom raw conversion, but with a well-defined and implemented "Zone System for Digital Photography" you usually won't need to.
This is the equivalent of the careful selection of film+developer+processing technique for film photography.

The key to this is testing, calibrating and profiling your "digital film". In my copy of
"The Negative", Appendix 1 is titled
"Film Testing Procedures". While the specific instructions are film-centric, the general rules of testing are easily adaptable to digital photography. Appendix 2 shows film testing results that you can further compare to as part of your own learning and calibration process to know whether you are on the right track or not.
The third book in the Ansel Adams' trilogy is
"The Print". This book addresses the "C" part of my equation, but is not really as core to the Zone System as A and B. This book deals with today's equivalent of Photoshop processing, print calibration and other such issues.
"The Print" assumes that you've already gotten a usable negative that is reasonably close to being printable.
While no discussion of the Zone System is complete without hitting the finer points of
Value,
Zone and
Placement, I'll chop this off here; any longer and this should be presented as an entire illustrated article, not just a post to a forum thread. Other famous and not-so-famous photographers and authors have written extensively about the Zone System and have tried to adapt it to various other areas of photography, but I have found that most of them confuse the issues or are selective in picking some parts of the System but not all. I wrote this response with the perspective of keeping it as pure to the original Ansel Adams presentation of the Zone System as possible while translating it to the digital photography world.
Ken Norton