Suppose you wanted to take a picture of a skier on a snowy slope with a background of evergreen trees, and you want to preserve the tree-needle detail, the snow detail, the clouds in the sky, and the detail in the moving skier. In other words, you want an astonishing amount of DR. If you take this photo at a shutter speed slow enough to pull detail out of the dark pine trees, you blow all the highlights. But the detail in those highlights once existed as information passing through the sensor. If you could count the photons that hit each pixel (which is what the software already does), and if you could assume, as I think you could, that the rate of photon hits on each pixel is constant over the time span of the shot (say 1/125), why couldn't you apply a little math to each of those individual pixels, reduce everything by, say, four stops of exposure, and recover all the blown highlights?
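(To make the arithmetic concrete: here's a minimal sketch of the idea as I understand it, assuming the sensor actually records an unclipped photon count per pixel. Pulling exposure down by N stops is just a division by 2**N. The `FULL_WELL` value and function name are hypothetical, purely for illustration.)

```python
FULL_WELL = 65535  # assumed per-pixel counter ceiling, for illustration only


def pull_exposure(counts, stops):
    """Scale raw per-pixel photon counts down by `stops` stops of exposure.

    Assumes a constant photon arrival rate over the shot, so halving the
    effective exposure time is the same as halving the count. Only works
    if the recorded counts never saturated at FULL_WELL in the first place;
    a clipped counter has already thrown the extra photons away.
    """
    factor = 2 ** stops
    return [c / factor for c in counts]


# Example: pixels that would have been 4 stops overexposed
counts = [52000, 61000, 30000]
recovered = pull_exposure(counts, 4)  # each count divided by 16
```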
Why would you do this, you ask? To get amazingly wide DR from a single shot at a reasonably high action-stopping speed would be my answer.
If this can be done, tell me, and let me share the wealth.