Read this:
http://wwwimages.adobe.com/www.adobe.com/p...renderprint.pdf
Great article. Actually, for those who've been over this topic many times, as well as newcomers, the clincher is right there on the first page:
"Camera manufacturers have applied years of research and development to the unique algorithms inside each camera. Given a scene, each camera will arrive at a different result."
Now the tricky part: with *one* camera, arriving at a different result just by changing the file type (I know, there's more to it than that), rather than by changing a "photographic" parameter such as sharpness or contrast, is disconcerting. Then another camera of the same brand, with only slightly different specs, shows no difference in output between the two file types. What is that supposed to mean? I'm very close to saying "there are no rules", despite what seems obvious to photo buffs, i.e. always process the RAW image.
As it is, I convert RAWs to JPEGs at default settings with one camera, and shoot straight JPEGs with the other, and in both cases I'm getting photos that are equal to or better than what I got with my Leica M4-2 and M6 with fixed 35 and 90 mm lenses. In the B&W lab, I never did a "dodge and burn", which other photo mavens seem to have loved doing; I simply decided never to do it. I still don't see any pressing reason to hand-process RAWs, unless I can automate the process to a large extent by finding default settings that are "really good" for 95 percent of the stuff I convert.
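For what it's worth, that kind of "convert everything at defaults" pass is easy to script. Here's a minimal sketch in Python, assuming the rawpy package (a wrapper around libraw) and imageio are installed; the folder paths and the .NEF extension are placeholders for whatever your camera and filing system actually use, not anything from the article.

    # Batch-convert RAW files to JPEGs using libraw's default processing.
    from pathlib import Path

    import imageio
    import rawpy

    RAW_DIR = Path("~/photos/raw").expanduser()   # placeholder input folder
    OUT_DIR = Path("~/photos/jpeg").expanduser()  # placeholder output folder
    OUT_DIR.mkdir(parents=True, exist_ok=True)

    for raw_path in sorted(RAW_DIR.glob("*.NEF")):  # swap extension for your camera
        with rawpy.imread(str(raw_path)) as raw:
            # postprocess() with no arguments uses the library's default
            # demosaic, white balance, and tone settings -- the
            # "really good defaults" case.
            rgb = raw.postprocess()
        imageio.imwrite(OUT_DIR / f"{raw_path.stem}.jpg", rgb)
        print(f"converted {raw_path.name}")

Of course that only automates the conversion step, not the judgment calls, which is exactly the point of contention.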
Most people here really understate the effort they put in on their lab work. It's not just hand-processing a RAW; it's taking the photo, loading it onto the computer, backing it up, saving versions of the image, converting the RAW, making adjustments, and many other steps. As Tom Hanks said in the movie, "what's fun about that?" And by that I don't mean doing all of it once; I mean repeating all of those steps thousands of times, over, and over, and over.