And that gamma change would have been better done in RAW anyway.
I totally agree. It reinforces the idea that all the heavy lifting should be done in high-bit, linear-encoded Raw processing (despite the fellow who dismisses high-bit editing and Raw processing).
The point wasn't to suggest otherwise; the point was to dismiss a silly 16-bit challenge that has gone on far too long. And the shocking result was the challenger declaring the exercise faulty because the edits were made in an ultra-wide-gamut space.
Personally, I wouldn't use an image that needed that much processing as neither the 8 nor 16 bit versions have resulted in a good quality image.
And neither would I. This was, if memory serves, the default rendering of the converter. But the challenger this image was addressed to suggests we SHOULD set the processor to such a default mode, then "fix" the rendered pixels in Photoshop (in 8-bit, no less). He also said "anyone who knows what they are doing can fix a JPEG faster and better in Photoshop than a Raw in Camera Raw." Nonsense, I say, and when I challenged him to prove it, he dismissed that too.
Read the original URL from Bruce Lindbloom about this 16-bit challenge; it sums up the nonsense that Dan has proposed from day one. A challenge that changes whenever he sees fit. Once again, the images I uploaded were simply to address that challenge, not to suggest it was best practice. We should render the best possible quality from our Raw converters.
I think that there's a lot of hyperbole about this whole issue, and many people take the stance that you _must_ use 16 bits or you lose a whole lot of quality.
The potential to lose quality is there. We don't know when, and we don't know why one edit rather than another may produce the damage. As I've said from day one, high-bit editing is cheap insurance. The other side says "I challenged you to prove there's a benefit," not "I will prove there is no benefit," which is quite a different challenge. Worse, when someone does attempt to prove the point, either using simple math or an image, it's dismissed. The math is undeniable. The printed results are not always so clear cut.
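The "simple math" is easy to demonstrate for yourself. Here's a minimal sketch (an illustrative round trip, not the challenge image): darken a full tonal ramp by a strong factor, quantize to the working bit depth, then brighten it back, the way a heavy levels move and its later "fix" would. The function name and the 0.25 factor are my own choices for the illustration.

```python
def edit_ramp(bit_depth, factor=0.25):
    """Darken a full tonal ramp by `factor`, round to the given bit
    depth (simulating storage between edits), then brighten back."""
    max_val = (1 << bit_depth) - 1
    ramp = range(max_val + 1)
    darkened = [round(v * factor) for v in ramp]      # first edit, quantized
    restored = [round(d / factor) for d in darkened]  # inverse edit, quantized again
    clipped = [min(v, max_val) for v in restored]
    return len(set(clipped))                          # distinct tonal levels that survive

# In 8-bit, only a fraction of the 256 levels survive the round trip
# (visible as posterization); in 16-bit, thousands of levels remain,
# far more than any 8-bit output ever needs.
print(edit_ramp(8), edit_ramp(16))
```

Posterized gradients and combed histograms in 8-bit editing are exactly this effect: every rounding step throws away levels, and they never come back.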
You only require 16 bits if you are interested in the absolute pinnacle of quality in the most demanding of applications, if you need to make radical adjustments to rescue an image from the bin, or if you don't mind using the space and just want to cover your bases.
If your goal is to produce a catalog of 1000 images of widgets on a white background, 3x3 on a 150-linescreen CMYK page, working in high bit probably isn't a good idea. I understand the need to get the job done quickly, based on the final reproduction requirements. If the work is for your portfolio, or a very important image you may not know how you'll ultimately reproduce, then high-bit editing is simply good insurance with little penalty. That's not the mindset of the challenger of the 16-bit workflow. He states it's simply not necessary. At least he did until some of us attempted to prove otherwise, and now he has modified his stance somewhat to say "sometimes," and points to those who use unnecessary (his word) ultra-wide-gamut, "dangerous" working spaces like ProPhoto RGB.
My approach to my students will be to tell them about editing in 16 bit, but state the facts - it's only necessary under very narrow circumstances, and if they can't afford the space that 8 bits is just fine.
I'd agree with you on the first part, that it's probably necessary only under some narrow circumstances. I don't agree with "just fine," because what may be fine for you may be unacceptable for me. And I don't know when "just fine" becomes not so fine. So it's far easier to simply keep the data in the original bit depth of the capture device and not worry about when "fine" becomes unacceptable.