Maybe, Bob, the question is indeed about the attributes of a good definition.
(Scientifically) intuitively, I’m rejecting your definition. It seems to me to depend on your individual experience and, in particular, on the very current state of camera technology. A few messages above I tried to contribute a more general definition, by describing an HDR discriminator in post-processing, independent of how the data were collected. In your terminology, you would see it all as tone-mapping. But I would not completely exclude that this is just based on my own experience…
I’m wondering if there is any "definition theory" that would supply KPIs (key performance indicators) for the quality of a definition.
Never came across this question.
Peter
Hold on for a moment, everyone.
Having spent years studying semantics as a grad student, I would press a point strongly here. There are some interesting things being discussed, unsurprisingly, given the bright bunch, some of whom are furthering our understanding of the subject. But nothing proposed so far as a semantics of "HDR" has been more than a non-starter.
If one is looking for a nominal essence here, something that might be used for a definition, I think it will be very hard, perhaps impossible, to find. Is there a single necessary set of conditions for being "HDR" or for saying that one is "doing HDR"? I suspect not. I think you will more likely find clusters of conditions. This is why I speak of the "commonplaces".
In the end, I think the theoretical significance of "HDR" /per se/ in a technical theory of photography is slight. But the things that we refer to when we say we are "doing HDR" have some very practical value, and help point the way to new theory.
There is nothing essential about bracketing, or "supersampling" of any kind. However, the practice of supersampling leads to some very useful techniques. By supersampling you gain precision: you can allocate more bits, and perhaps use floating point to spread those bits evenly across the entire range of numerical values represented. With this increased precision, you have the opportunity to process image data whose dynamic range exceeds that of your output media. That, in turn, gives you variable white and black points and virtual re-lighting. All of this is facilitated by the gain in precision and the even allocation of bits across the range of values. The fidelity of the low tones improves.
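To make that concrete, here is a minimal sketch (Python with NumPy; the function name `merge_brackets` and the triangle weighting are my own assumptions, loosely in the spirit of standard exposure-merging practice, not anyone's definition of "HDR") showing how bracketed frames combine into a floating-point estimate whose range exceeds any single frame's:

```python
import numpy as np

def merge_brackets(exposures, times):
    """Merge bracketed frames into a floating-point radiance estimate.

    exposures: list of arrays of linear sensor values, each clipped to [0, 1]
    times: matching exposure times in seconds
    """
    num = np.zeros_like(exposures[0], dtype=np.float64)
    den = np.zeros_like(exposures[0], dtype=np.float64)
    for img, t in zip(exposures, times):
        # Trust mid-tones most; clipped highlights and deep shadows get
        # near-zero weight (the supersampling is what rescues them).
        w = np.clip(1.0 - np.abs(2.0 * img - 1.0), 0.0, None) + 1e-6
        num += w * img / t  # each frame estimates radiance as value / time
        den += w
    return num / den

# Two simulated frames of the same scene, 4x apart in exposure:
radiance = np.array([0.05, 0.4, 3.0])       # "true" scene, arbitrary units
long_ = np.clip(radiance * 1.0, 0.0, 1.0)   # t = 1.0 s: bright pixel clips
short = np.clip(radiance * 0.25, 0.0, 1.0)  # t = 0.25 s: nothing clips
hdr = merge_brackets([long_, short], [1.0, 0.25])
# hdr recovers ~[0.05, 0.4, 3.0]: values above any single frame's ceiling.
```

The merged result is exactly the kind of data that then invites a variable white point: no single output medium is implied by the numbers themselves.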
This describes widespread practices, but it in no way suggests anything essential or necessary in the semantics of "HDR".
The idea of "tonemapping" may turn out to have more theoretical significance, since, as far as I can see, it just refers to mapping one set of tones onto another by whatever means. But we use the term generally, and not just when "doing HDR." Sometimes we map tones having a greater dynamic range onto a set of tones having a lesser dynamic range. Sometimes not.
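Both cases fit the same notion of a tone map. As a sketch (Python; the range-compressing example is Reinhard's well-known global operator, the non-compressing one is an ordinary gamma tweak I've chosen for contrast):

```python
import numpy as np

def reinhard(luminance):
    # Compresses dynamic range: maps [0, inf) into [0, 1).
    return luminance / (1.0 + luminance)

def gamma_tweak(luminance, g=0.8):
    # Also a tone map, but [0, 1] stays [0, 1]: no range compression at all.
    return luminance ** g

hdr = np.array([0.05, 0.4, 3.0, 40.0])
ldr = reinhard(hdr)  # greater range mapped onto a lesser one
```

Both functions map one set of tones onto another; only the first happens to reduce dynamic range, which is the point that the term "tonemapping" by itself doesn't settle.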