Most of us have by now accepted the idea that exposing to the right, just short of clipping, is the way to go with digital.
The basic reason is that, since sensors are nearly linear devices, the brightest stop of a digital capture contains half of the tonal levels available for sampling the range the sensor is able to record. Leaving a gap open between the right edge of our histogram and our brightest points means leaving some of those precious levels unused.
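To make that arithmetic concrete, here is a minimal sketch of how levels are distributed across stops in a linear encoding. The 12-bit depth and the six stops walked through are illustrative assumptions, not the characteristics of any particular camera:

# Minimal sketch: how many distinct values each stop gets in a hypothetical
# 12-bit linear raw encoding (illustrative assumption, not a measured camera).
BIT_DEPTH = 12
total_levels = 2 ** BIT_DEPTH  # 4096 distinct values

upper = total_levels
for stop in range(1, 7):  # walk down from the brightest stop
    lower = upper // 2
    print(f"Stop {stop} below clipping: {upper - lower} levels ({lower}..{upper - 1})")
    upper = lower

The brightest stop gets 2048 of the 4096 values, the next one 1024, then 512, and so on, which is the usual argument for pushing the exposure to the right.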
However, this dogma rests on an assumption that I feel is potentially unproven, and might also be largely untested.
This assumption is that a digital sensor behaves perfectly well right up to the brightness levels that produce RGB values of 254, 254, 254. In other words, we assume that over-exposing the image does not reduce the sensor's ability to distinguish close tones, as long as we stay short of pure white (255, 255, 255).
On the other hand, we also see in our digital images that the transition to fully blown highlights is typically not as smooth as it used to be with film, which hints that sensors are in fact not very good at handling these near-white areas once the exposure gets high enough.
Building on this, I am starting to wonder whether, all things considered, there is any real value in over-exposing an image that much, even if it stays short of fully blown highlights.
-> Aren't we better off being a bit more conservative?
-> Isn't this very much dependent on the actual behaviour of each sensor?
My view is that this is independent of the curve applied during RAW conversion, since that curve will never be able to get rid of posterization that might have been introduced at capture by artificial over-exposure.
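To illustrate why I think the conversion curve cannot help, here is a minimal sketch. The 12-bit scale, the coarse quantization step of 16 and the gamma value of 1/2.2 are all illustrative assumptions on my part, not the behaviour of any specific camera or converter:

# Minimal sketch: once near-clipping tones have been captured with coarse
# quantization, a later tone curve cannot bring the missing in-between
# levels back. Quantization step (16) and gamma (1/2.2) are made up.
import numpy as np

smooth_gradient = np.linspace(0.90, 1.00, 256)          # a smooth near-white ramp, 0..1
captured = np.round(smooth_gradient * 4095 / 16) * 16   # coarse quantization near clipping
curve_applied = (captured / 4095) ** (1 / 2.2)          # a gamma-style conversion curve

print("distinct levels before the curve:", len(np.unique(captured)))
print("distinct levels after the curve: ", len(np.unique(curve_applied)))
# The counts are identical: the curve reshapes the steps but cannot fill them in.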
Am I missing something here?