Hi guys,
thanks a lot for your thoughts and input. There is something that I don't understand, possibly due to lack of technical knowledge:
"Your idea is fine, but if it works you necessarily lose 1/2 of the photo sites for resolution."
Why would I lose half the resolution? Let's consider an extremely simple scenario: I am photographing a rectangle with a white upper half and a black lower half. My goal is an exposure in which both halves come out 50% grey.
Now I look at the live-view preview of my future camera and see what is actually in front of the lens: a rectangle, top white, bottom black. On my beautiful touchscreen display I define two sectors: the top half and the bottom half. From now on I want the camera to treat the upper half of the sensor independently of the lower half, but only in terms of how long each photosite stays active to record information.
To get an overall 50% grey I have to expose the upper half for quite a short time and the lower half for quite a long time. Let's say the top for 1/500 s and the bottom for 1/4 s. If I now take the picture, the shutter stays open for the longest exposure I have defined, i.e. 1/4 of a second. But only the bottom-half photosites will actually be "on" for that long, so they can nicely receive the information I want them to receive (the black is recorded as grey, i.e. deliberately overexposed). The top half of the sensor is turned off after 1/500 s and for the rest of the exposure cannot record any further information (as if a black mask were put over it). This ensures that the white area of my subject is underexposed and comes out grey instead of white.
As all this information is recorded into the same file, I should end up with a fully grey photo.
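To put numbers on the idea, here is a tiny sketch in Python. It is purely illustrative: the luminance values and the linear, clipped sensor model are my own assumptions, not a real camera API.

```python
# Hypothetical simulation of per-region exposure times on one sensor.
# All names and numbers are illustrative assumptions.

def expose(luminance, seconds, sensitivity=1.0):
    """Idealized linear sensor response, clipped at full well (1.0)."""
    return min(1.0, luminance * seconds * sensitivity)

# Scene: white top, black bottom, in arbitrary linear luminance units.
white_lum = 250.0
black_lum = 2.0

# Per-region active times chosen so both halves land at 0.5 (mid grey).
t_top = 1 / 500   # white half: photosites switched off early
t_bottom = 1 / 4  # black half: photosites active the whole time

top = expose(white_lum, t_top)        # 250 * (1/500) = 0.5
bottom = expose(black_lum, t_bottom)  # 2 * (1/4) = 0.5

print(top, bottom)  # both 0.5: a uniformly grey frame in one file
```

With a single global shutter time instead, one of the two halves would necessarily clip to white or sink to black; the per-region times are what make both halves meet in the middle.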
I'm not sure why I should lose half my resolution because of this, but I would be happy to hear an explanation.
For me this functionality would clearly be meant for very controlled setups where the user chooses the settings. The camera cannot know how I want my shot to look. Perhaps I want certain areas to overexpose and others to underexpose. I can't quite picture how an automated function could do that, so that is not what I intend to propose. In the above scenario I might want to invert the picture and get the top half black and the bottom half white (a bit of an extreme example, but just for illustration).
Now, of course all that work can be done in Photoshop by blending different exposures etc. But in complicated situations this may be difficult or at least laborious.
So in the end it would be up to personal taste which way to get the desired shot. But personally I find blending several exposures cumbersome and not so pleasant. I would rather compose my near-perfect exposure combination right on location. And at least here in Greece I frequently encounter scenes where I would just love such a feature.
In any case, thanks a lot for your thoughts and input!
Heiko