Since this subject seems to be back on topic, I have a question that has had me a little curious, and which can probably only be answered by those who really understand the 1's and 0's (I hope Andrew is still lurking).
The main difference I see between MFDBs and DSLRs, at least most current ones, is the bit depth. Does a 14-bit MFDB by nature capture more levels, so that without exposing as far to the right you essentially gain a similar amount of data? Or are we talking about different kinds of levels here?
Many MFDBs have 16-bit ADCs and output 16 bits per color. That means the brightest f/stop has 32768 possible levels, compared with the 2048 possible levels from a 35 mm style digital camera with a 12-bit ADC. However, the human eye can perceive only about 70 of these 32768 levels; the rest are effectively wasted. It is helpful to have more levels than the eye can perceive as a margin of safety in processing, but the margin here is far greater than necessary.
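For anyone who wants to check the arithmetic, here is a quick Python sketch; the level counts come straight from the bit depth (2^bits total, with the brightest stop taking the top half):

    bits_mfdb, bits_dslr = 16, 12
    # total levels is 2^bits; the brightest f/stop spans the top half of them
    top_stop_mfdb = 2 ** bits_mfdb // 2   # 32768 levels
    top_stop_dslr = 2 ** bits_dslr // 2   # 2048 levels
    print(top_stop_mfdb, top_stop_dslr)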
In real world photography, dynamic range is limited by noise rather than quantization, and the quantization argument for the advantages of ETTR is overblown IMHO. The real advantage of ETTR is reduced noise and better dynamic range. If your exposure does not bring the photosites near full well, you are not taking advantage of the capabilities of the sensor. If you reduce exposure by 1 stop, you lose 1 stop of DR: the signal drops by a factor of 1/2 (0.5), while the shot noise falls only by 1/sqrt(2), or 0.707. The resulting signal to noise ratio (S:N) is 0.5/0.707, or 0.707 of what it was previously. Contrary to popular belief, noise is actually higher in the highlights, but the S:N is better there.
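A minimal Python sketch of that calculation, assuming pure shot noise and an arbitrary example signal of 30,000 photons:

    import math

    signal = 30000.0                       # photons captured (arbitrary example)
    snr_full = signal / math.sqrt(signal)  # shot-noise-limited S:N is sqrt(signal)

    half = signal / 2                      # one stop less exposure
    snr_half = half / math.sqrt(half)

    print(snr_half / snr_full)             # 0.7071..., i.e. 1/sqrt(2)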
The Kodak KAF-39000 39 MP sensor used in many high-end MFDBs has a full well capacity of 60,000 electrons and a read noise of 16 electrons, giving a dynamic range of 3750:1, or 11.87 f/stops; the data sheet states a DR of 12 stops. This is no better than the latest Canon sensors, and I see no reason to believe that the principles of ETTR differ between MFDBs and 35 mm style DSLRs.
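The same figure falls straight out of the data sheet numbers:

    import math

    full_well = 60000.0          # electrons, from the KAF-39000 data sheet
    read_noise = 16.0            # electrons
    dr = full_well / read_noise  # 3750:1
    print(math.log2(dr))         # 11.87 f/stops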
A second question: if LR and ACR are non-linear by default, is it important to set up a linear default, normalize the EV, and then modify other parameters? For example, if I have a medium contrast curve and pull the EV down, is my result substantially different than if I eliminate the curve, pull the EV down, and then put the same curve back in? Does that make sense? I am just trying to understand the optimal workflow when using ETTR.
The adjustments are done on the linear data before the tone curve is applied, so the manipulations you suggest are not necessary. In my previous post, I held the white point constant, but let the black point vary. If you keep both of these constant, the tone curve does not change that much when you increase exposure in the raw converter.
In any event, you have full control over the tone curve in ACR, but the signal:noise is set by the actual in-camera exposure, i.e. by the number of photons captured. With this in mind, I can think of no reason not to expose to the right, perhaps bracketing or leaving a bit of headroom so as not to clip the highlights. Small amounts of overexposure can often be corrected with highlight recovery in the raw converter.
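To illustrate the order of operations, here is a minimal sketch: the exposure adjustment is just a multiply on the linear data, applied before any tone curve. The gamma curve below is a hypothetical placeholder, not ACR's actual curve:

    def apply_exposure(linear, ev):
        # exposure compensation is a plain multiply on the linear data,
        # clipped at the white point
        return min(linear * 2.0 ** ev, 1.0)

    def tone_curve(x):
        # placeholder gamma curve standing in for the converter's curve
        return x ** (1 / 2.2)

    pixel = 0.8                                          # linear value from an ETTR frame
    rendered = tone_curve(apply_exposure(pixel, -1.0))   # pull EV down 1 stop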
Bill