Sometimes I wonder if I am such a strange user for praying for RAW histograms and an automatic ETTR (expose to the right) mode on my camera.
If so many users shoot RAW, why, after 4 years of DSLRs, are we still looking at JPEG histograms and JPEG-based clipping info? Why are we doing endless trial-and-error shots to _try to achieve_ an optimum ETTR of the RAW channels instead of having the tools to easily _achieve_ it? Why do cameras meter light and calculate exposure to obtain a pleasant JPEG instead of optimised RAW data? Why not allow both modes of operation, JPEG-oriented and RAW-oriented?
RAW HISTOGRAMS

All the mess of UniWB wouldn't make sense if cameras just allowed displaying undemosaiced RAW histograms, which are simpler to calculate and display than JPEG histograms. Why not let the user choose?
A RAW histogram of the 3 channels (it could even be logarithmic, arranging the information in real EV divisions) would let you find out at a glance how well your RAW was exposed.
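A minimal sketch of such a per-channel RAW histogram, binned in whole EV steps below clipping. It assumes the rawpy library for decoding the file; the bin count and function name are my own choices, not any camera's firmware:

```python
import numpy as np
import rawpy

def raw_ev_histogram(path, ev_bins=12):
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        colors = raw.raw_colors_visible              # per-photosite CFA index
        black = np.asarray(raw.black_level_per_channel)[colors]
        scale = raw.white_level - max(raw.black_level_per_channel)
        # Signal above black, floored at 1 ADU so log2 stays defined;
        # 0 EV = clipping point, -1 EV = one stop below, etc.
        ev = np.log2(np.maximum(data - black, 1.0) / scale)
        hist = {}
        for name, ids in (("R", (0,)), ("G", (1, 3)), ("B", (2,))):
            mask = np.isin(colors, ids)
            hist[name], edges = np.histogram(ev[mask], bins=ev_bins,
                                             range=(-float(ev_bins), 0.0))
        return hist, edges
```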
This is a perfectly exposed RAW (though only thanks to a lucky shooter) on an Oly camera, easy to see and clear to understand:
The DR of the scene is also easily evaluated from it: you can quickly estimate the amount of noise to expect in the shadows without even looking at the image, and find out how many extra shots are needed to capture the full DR. Is it so difficult to add that to your cameras?
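A rough sketch of that bracketing estimate, built on the EV histogram above; the camera's usable DR and the bracketing step here are illustrative guesses, not measured values:

```python
import math

def extra_shots_needed(hist, edges, camera_dr_ev=10.0, bracket_step_ev=2.0):
    # Scene DR: span from the darkest occupied EV bin up to clipping (0 EV)
    occupied = [edges[i] for counts in hist.values()
                for i, c in enumerate(counts) if c > 0]
    scene_dr = -min(occupied)
    return max(0, math.ceil((scene_dr - camera_dr_ev) / bracket_step_ev))
```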
And the same applies to the blinking clipped-highlights indication: it should be possible to base it on the RAW data.
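That would be a one-liner: flag photosites at or near the sensor's clipping point instead of JPEG 255s. A sketch, where the 2% safety margin is my assumption:

```python
import numpy as np

def raw_clip_mask(raw_values, white_level, margin=0.02):
    # True wherever the raw value is within `margin` of saturation
    return np.asarray(raw_values) >= white_level * (1.0 - margin)
```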
AUTOMATIC ETTR MODE

With Live View, cameras can evaluate the histogram of the scene in front of you in real time. Why not use that valuable source of information to set up an automatic ETTR mode? The camera would calculate the exposure (aperture/shutter/ISO) to obtain a properly ETTR'd RAW file. A user setting could be the percentage of blown pixels allowed in the RAW data.
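A sketch of the calculation such a mode could run: given the raw values behind the live histogram and the allowed percentage of blown pixels, find the EV shift that pushes exactly that fraction up to the clipping point. The function and parameter names are mine, not any manufacturer's API:

```python
import numpy as np

def ettr_correction_ev(raw_values, white_level, allow_blown_pct=0.01):
    # Raw level below which (100 - allow_blown_pct)% of the pixels lie
    anchor = np.percentile(raw_values, 100.0 - allow_blown_pct)
    # Opening up by this many EV moves 'anchor' up to the clip point
    return float(np.log2(white_level / max(anchor, 1.0)))
```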
If the Live View histogram is not precise enough (maybe it is calculated from an auxiliary sensor), why not take a quick preview shot instead? It can be at very high ISO, no problem, and it _must_ be underexposed so that no channel clips; it could even be transparent to the user. With that information, the corrected exposure values can be calculated in a fraction of a second to achieve the perfect ETTR in the real shot right afterwards.
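Hypothetical use of the function above on that dark, high-ISO preview: compute the correction, then trade the ISO headroom back into shutter time for the real shot. Every number here (ISOs, shutter speed, white level) is made up for illustration:

```python
# preview_raw: raw mosaic of the throwaway preview (e.g. raw.raw_image_visible)
corr_ev = ettr_correction_ev(preview_raw.ravel(), white_level=16383)
iso_ev = np.log2(6400 / 200)             # preview at ISO 6400, shoot at ISO 200
shutter = (1 / 250) * 2 ** (corr_ev + iso_ev)   # preview shutter was 1/250 s
```

Note the underexposure requirement is what makes the math valid: if the preview already clipped, the percentile would sit at the white level and the true correction would be unknowable.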
I know Sony cameras can simulate the histogram you would get after the shot: move the exposure wheel and the histogram changes in front of your eyes. Why not use that in a real-time exposure calculation?
I did it in my software, and it's trivial to calculate the exposure correction needed to automatically achieve ETTR on a given set of data:
Would anyone sign up for this wish list, or add new ideas?
BR