--convert to sRGB because most monitors cannot handle the full gamut of Adobe or Pro Photo
There are displays that can fully handle the Adobe RGB (1998) gamut. But that’s somewhat moot, because we are always working with color spaces that have gamut disconnects between what we edit and what we output. And if you think Adobe RGB (1998) is a wide gamut space on your sRGB-like display, consider the gamut of Lab, which is HUGE. Assuming you are working from wide gamut capture to wide gamut output devices, displayed on an sRGB display, sure, there are colors you can’t see but can print! Would you rather throw away colors you can see on the final output device just because extreme colors at the edge of the working space gamut cannot be seen on an intermediate device (the display)? There are a few options here. Personally, the print is my final. I’d rather see the colors there and archive them so that in the future, as technology improves, maybe I’ll see them on a display too. There are all kinds of other display-versus-output disconnects, like the huge difference in dynamic range. The display, compared to the final print, is simply an imperfect device. We have to live with that.
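That gamut disconnect is easy to demonstrate numerically. Here’s a minimal sketch (my own illustration, not anything from the thread) that converts a vivid Lab green to linear sRGB using the standard CIE formulas and the IEC sRGB matrix; note I’m assuming a D65 white point for brevity, while Lab in print workflows is usually referenced to D50. A component outside [0, 1] means the color exists in Lab but cannot be reproduced on an sRGB display:

```python
# D65 reference white (an assumption for this sketch; print Lab is often D50)
XN, YN, ZN = 0.95047, 1.00000, 1.08883

def lab_to_linear_srgb(L, a, b):
    """CIE Lab -> linear sRGB. Components outside [0, 1] are out of gamut."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0

    def f_inv(t):  # inverse of the CIE Lab companding function
        d = 6.0 / 29.0
        return t ** 3 if t > d else 3 * d * d * (t - 4.0 / 29.0)

    X, Y, Z = XN * f_inv(fx), YN * f_inv(fy), ZN * f_inv(fz)
    # XYZ -> linear sRGB (IEC 61966-2-1 matrix)
    R = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    G = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    B = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return (R, G, B)

# A vivid green that Lab can describe but sRGB cannot reproduce:
rgb = lab_to_linear_srgb(85, -90, 85)
print(rgb, "out of sRGB gamut:", any(c < 0 or c > 1 for c in rgb))
```

The red component comes back negative: the color is perfectly legal in Lab (and printable on some devices) but simply does not exist inside the sRGB cube.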
--make minimal adjustments, mainly exposure and fill light in ACR
You haven’t mentioned Dan M, but others have, so I’ll simply say that his suggestion to zero out all ACR settings, render the raw data, and then fix the resulting turd in Photoshop is blatantly stupid.
--open image and convert to LAB color, where I fix up the color cast, contrast, and saturation using multiple curves adjustment layers. Add clarity and noise reduction by converting the background layer to a smart object and applying high pass, luminance NR, and median (for color noise) filters.
You could do this in any color model (CMYK, RGB, etc.). The point is, why fix something that doesn’t have to be broken in the first place? It’s like a photographer being totally sloppy with film exposure and then having the lab push his film 2 stops. It will work. Someone could probably make a print from it. But is it ideal and good working practice? No. As photographers, we would look down on an instructor who suggests we be sloppy with exposure and fix the issue later in the lab. This is Dan’s take on image processing. Sure, if you start with a turd (or, in your case, an image with a color, contrast, and saturation issue), you can make it look better after applying some of Dan’s techniques. Just like you can fix the exposure in the lab. But you could render (not fix, but actually create) idealized pixels at the raw-to-render stage in the first place. It’s faster. It’s fully non-destructive. It provides a history that lives with the original raw data forever. It doesn’t make your files balloon to huge sizes, because it’s simply metadata instructions (tiny text files). Before we were capturing in raw and using good raw processors, the ideas Dan proposes were the only option (or, as I said above, make a good scan, not a crap scan you fix in Photoshop). Dan’s got a workflow to sell, and if you are caught with crappy rendered data and no original (raw, or film for a scan), his techniques are very, very useful. But short of that, they are simply idiotic. It’s like the lab tech who will teach you the intricacies of push processing film because that’s all he knows. Proper exposure simply isn’t on his radar. Look at the god-awful originals Dan shows in his before examples and ask yourself, “Do I capture this kind of rubbish?” If so, stick with his techniques. If not, if you believe that GIGO (Garbage In, Garbage Out) is something to avoid, move on.
As a sidenote: I'd like to add that Deke McClelland has stated that converting between RGB and LAB is very marginally destructive.
Given just that sentence, I could say Deke is wrong. But since he hasn’t defined anything like the original color space, bit depth and the problem that needs to be fixed (and why), I’ll cut him some slack unless you can find the exact quote.
ALL image processing in Photoshop which alters numeric values is destructive. That’s why we work in high bit and use adjustment layers (which will still introduce numeric rounding errors at some point). A good workflow is one that gets you to the desired goals as quickly as possible with the best quality data. I simply don’t see why anyone with the intelligence of Dan, or of those who think he’s a bloody genius, would want to start any image processing workflow with anything but ideal data.
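Those rounding errors are easy to show. The sketch below is my own illustration, not a claim about Photoshop’s exact internals: it round-trips a neutral 8-bit gray ramp through Lab with 8-bit quantization at the Lab stage, assuming a D65 white point and a common 8-bit Lab encoding (L mapped from 0..100 to 0..255, a and b offset by 128). Some gray levels come back changed, which is exactly why high-bit data matters:

```python
# Assumptions for this sketch: D65 white, 8-bit Lab encoding
# (L: 0..100 -> 0..255, a/b stored with a +128 offset).
XN, YN, ZN = 0.95047, 1.00000, 1.08883
D = 6.0 / 29.0

def _f(t):
    return t ** (1.0 / 3.0) if t > D ** 3 else t / (3 * D * D) + 4.0 / 29.0

def _finv(t):
    return t ** 3 if t > D else 3 * D * D * (t - 4.0 / 29.0)

def _dec(c8):  # sRGB 8-bit code -> linear light
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def _enc(v):   # linear light -> sRGB 8-bit code (clamped)
    v = min(max(v, 0.0), 1.0)
    s = 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1.0 / 2.4) - 0.055
    return round(s * 255)

def srgb8_to_lab(r8, g8, b8):
    r, g, b = _dec(r8), _dec(g8), _dec(b8)
    X = 0.4124 * r + 0.3576 * g + 0.1805 * b
    Y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    Z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    fx, fy, fz = _f(X / XN), _f(Y / YN), _f(Z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_srgb8(L, a, b):
    fy = (L + 16) / 116.0
    fx, fz = fy + a / 500.0, fy - b / 200.0
    X, Y, Z = XN * _finv(fx), YN * _finv(fy), ZN * _finv(fz)
    R = 3.2406 * X - 1.5372 * Y - 0.4986 * Z
    G = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    B = 0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return (_enc(R), _enc(G), _enc(B))

lost = 0
for v in range(256):
    L, a, b = srgb8_to_lab(v, v, v)
    # quantize Lab to 8 bits, as an 8-bit mode change would
    L8, a8, b8 = round(L * 255 / 100), round(a) + 128, round(b) + 128
    back = lab_to_srgb8(L8 * 100 / 255, a8 - 128, b8 - 128)
    if back != (v, v, v):
        lost += 1
print(f"{lost} of 256 gray levels changed by an 8-bit RGB->Lab->RGB round trip")
```

In high bit (or floating point) the same round trip returns essentially every value unchanged, which is the whole argument: the conversion itself isn’t the problem, quantizing the result in 8 bits is.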