Doesn't sound the same. Sure, there are HDR-like features in smartphones now that always take two exposures and combine them in-phone, but they often produce funky double exposures with moving subjects:
[Example photo: Flock Together by Dr. RawheaD, on Flickr]
So let's say your camera meters the scene at 1/125s. Conventionally, the phone might take a second exposure at 1/500s (for a total capture time of 1/125s + 1/500s = 1/100s), and the two frames will be combined into an HDR image.
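To see why the conventional two-frame approach struggles with motion, here's a toy numpy sketch. The radiance values, the 1-D "scene", and the naive merge rule are all my own illustrative assumptions, not how any particular phone actually does it:

```python
import numpy as np

FULL_WELL = 1.0  # sensor clips (saturates) at this charge level

def expose(scene, shutter_s):
    """Simulate a clipped linear sensor read for a given shutter time."""
    return np.clip(scene * shutter_s, 0.0, FULL_WELL)

# Toy 1-D scene: a bright subject (radiance 400) on a dim background (50)
# that moves one pixel between the two sequential exposures.
scene_t0 = np.array([50.0, 50.0, 400.0, 50.0, 50.0])  # subject at index 2
scene_t1 = np.array([50.0, 50.0, 50.0, 400.0, 50.0])  # subject at index 3

long_exp  = expose(scene_t0, 1 / 125)   # clips on the subject
short_exp = expose(scene_t1, 1 / 500)   # by now the subject has moved

# Naive merge: wherever the long exposure clipped, substitute the short
# exposure scaled up by the exposure ratio (500/125 = 4x).
merged = np.where(long_exp >= FULL_WELL, short_exp * 4, long_exp)

# Because the subject moved between frames, the merge goes wrong: here it
# vanishes entirely -- in real pipelines this shows up as ghosting/doubling.
print(merged)  # [0.4 0.4 0.4 0.4 0.4] -- no subject anywhere
```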
With this technology, let's say your camera meters the scene at 1/125s and commences the exposure. Now, suppose some areas of the sensor will be overexposed, because you have bright spots in the scene. What happens then is that when, say, after 1/250s those areas become saturated due to overexposure, ONLY those pixels are reset and thereby commence a second exposure at 1/500s.
Note that the entire process completes within 1/125s, the "original" exposure time. Furthermore, since you're not combining two full frames into one, but rather merging the double-exposed pixels with the single-exposed pixels, it should eliminate much (though not all) of the funkiness that can occur.
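Here's a toy simulation of that per-pixel reset scheme. The step-based integration, the reset bookkeeping, and the rescale-by-effective-time merge rule are my own assumptions about how such a sensor might be modeled, just to show the idea:

```python
import numpy as np

T = 1 / 125        # total exposure, as metered
N_STEPS = 1000     # integration granularity for the simulation
FULL_WELL = 1.0    # saturation level
dt = T / N_STEPS

# Hypothetical radiances: index 2 would saturate partway through 1/125s.
scene = np.array([25.0, 25.0, 200.0, 25.0, 25.0])

charge = np.zeros_like(scene)
reset_time = np.zeros_like(scene)   # when each pixel last reset (0 = never)

for step in range(N_STEPS):
    charge = charge + scene * dt
    saturated = charge >= FULL_WELL
    # ONLY the saturated pixels are reset and start a second exposure;
    # unsaturated pixels keep integrating undisturbed.
    charge[saturated] = 0.0
    reset_time[saturated] = (step + 1) * dt

# Merge step: each pixel's charge is rescaled by its own effective
# integration time, so double-exposed and single-exposed pixels land on
# one common radiance scale -- no second full frame is ever taken.
radiance = charge / (T - reset_time)

# For comparison, a plain clipped read caps the bright spot at 1/T = 125:
naive = np.clip(scene * T, 0, FULL_WELL) / T

print(radiance)  # ~[25, 25, 200, 25, 25]: the bright spot is recovered
print(naive)     # [25, 25, 125, 25, 25]: clipped
```

Everything happens inside the single 1/125s window: the bright pixel simply trades part of its integration time for headroom, and the readout math puts it back on the same scale as its neighbors.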
I think this is a very cool idea, and I've had similar thoughts in the past: overexposure can, in theory, be avoided entirely with digital technology, since the system/pixels would know the moment they've received too much light; they just need to stop exposing at that point :-)