A bit disappointed to read this:
It makes me think Steinmueller didn't really get the point: bracketing for HDR will soon be unnecessary, and bracketing is not part of the definition of HDR itself. The only reason we bracket HDR scenes today is that sensors are still too noisy to capture the entire dynamic range of many real-world scenes in a single shot.
Guillermo! I'm surprised to hear this from you, of all people, a man so concerned with signal optimization.
What we are talking about here is a broader concept. Take just one part of it: the central idea of allocating bits /where the information is/. Think of that in terms of one practical case.
Recently I did a portrait of someone in a church, where the subject was lit by a stained glass window. The interior of the church was dim, but beautiful. Even with a D3x, a camera with good dynamic range at ISO 100, I could not capture any detail whatsoever in the interior of the church. It simply came out mostly RGB = 0,0,0. There were a few single-bit values, but nothing discernible.
It is important to ask here - why must the church be black? It doesn't look black to me. But it is arbitrarily the case, partly by virtue of the original physical chemistry employed, that each individual exposure has an implicit black point and an implicit white point, both of which are /false/.
What I could have done was (1) shoot the background in HDR, (2) do the portrait takes, and (3) composite them. But that is just one way of allocating bits to the relevant content. Audio encoding schemes allocate bits where the perceptually salient information lies. Photography should record visual stimuli where the salient information lies -- in absolute magnitude space.
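To make the bit-allocation idea concrete, here is a minimal sketch of how bracketed frames can be merged into a single floating-point radiance map. The function name, the hat-shaped weighting, and the normalization by exposure time are my own illustration, not anything from Steinmueller's book or any real camera pipeline:

    import numpy as np

    def merge_exposures(frames, times):
        # frames: list of linear RAW frames scaled to [0, 1]
        # times:  matching exposure times in seconds
        num = np.zeros_like(frames[0])
        den = np.zeros_like(frames[0])
        for img, t in zip(frames, times):
            # Trust mid-tones most: near-black pixels are noisy,
            # near-white pixels are clipped.
            w = np.clip(1.0 - 2.0 * np.abs(img - 0.5), 1e-4, None)
            num += w * (img / t)   # divide by time -> absolute-magnitude units
            den += w
        # A floating-point radiance map with no implicit black or white point.
        return num / den

Dividing each frame by its exposure time puts everything on one absolute scale, so the short exposure contributes the window and the long exposure contributes the nave.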
By supersampling the scene inside the dim church, I could have collected that information. We're no more obliged than any painter to make the dynamic range of a source image correspond to the dynamic range of the output medium. The sun can burn orange, viewed from the interior of a candlelit chamber through a window onto a blue sky peppered with cirrus clouds. And when you paint the candlelit interior, it will be detailed. Don't we, with our inherently diachronic visual system, kind of see it this way?
Now finally, imagine this. A camera of the future could flexibly and adaptively supersample a scene, allocating the collection of information locally as well as globally over a given shutter interval. Under its control could be differential gain between pixels and localized multiple exposure. Normalization and averaging could be done in camera. The reason this would have to happen in camera is that, in practical terms, you are interested in events that last only about 1/60th of a second; to carry out a complex "superexposure" program within that window, you would have to hand the task over to the camera.
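Purely as a thought experiment, here is one way that per-pixel program could work: split the shutter interval into short sub-frames, let each pixel integrate only until it would saturate, then normalize by the time each pixel actually collected light. The scheme, names, and parameters below are all hypothetical:

    import numpy as np

    def superexpose(scene, shutter=1/60, subframes=8, full_well=1.0):
        # scene: true scene radiances in arbitrary linear units
        dt = shutter / subframes
        signal = np.zeros_like(scene)
        t_used = np.zeros_like(scene)
        for _ in range(subframes):
            sub = scene * dt                   # charge collected in one sub-frame
            ok = (signal + sub) <= full_well   # pixels with headroom left
            signal[ok] += sub[ok]
            t_used[ok] += dt
        # Dim pixels integrate the full 1/60 s; bright pixels stop early.
        # Dividing by per-pixel integration time recovers absolute radiance.
        est = signal / np.maximum(t_used, dt)
        # Pixels too bright for even one sub-frame clip at the sensor's ceiling.
        return np.where(t_used > 0, est, full_well / dt)

The candle flame stops integrating after one sub-frame while the shadowed pews integrate all eight, yet both end up expressed on the same absolute scale.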
I really believe this is coming. And that tells you something about why I think HDR is more than a fad involving cheesy special effects.