+1, particularly since (it appears) the identification of the spots and the adjustment required for removal can be done completely automatically via algorithm.
Hi Alan,
That's correct, it's totally automatic. The dust map shot, which was taken through a diffuser, is an almost uniformly lit, featureless image. The dust spots stand out as slightly darker, and if there are oil spots there will also be a somewhat lighter/darker ring around them.
That image, in linear gamma as it comes from the sensor, is normalized so that its brightest pixels equal 1.0 (a decimal number). So now we have a data file with ones for the brightest pixels, and a range of slightly lower/darker values for the dust-attenuated spots. The software then divides the actual image data by the dustmap data. That leaves the brightest image pixels unaffected, since they get divided by 1.0. The slightly darker dust-spot image pixels get divided by dustmap values slightly below 1.0 (e.g. 0.98), and thus get amplified (1/0.98 = 1.0204).
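To make the arithmetic concrete, here is a minimal numpy sketch of that normalize-and-divide step (the function name and the toy 4x4 arrays are my own illustration, not Bart's actual software):

```python
import numpy as np

def flat_field_correct(image, dustmap):
    """Divide a linear-gamma image by a dust map normalized to 1.0.

    Both inputs are float arrays in linear gamma. Pixels where the
    dustmap is 1.0 pass through unchanged; dust-attenuated pixels
    (dustmap < 1.0) get proportionally amplified.
    """
    dustmap = dustmap / dustmap.max()  # brightest dustmap pixels -> 1.0
    return image / dustmap

# Toy example: a flat mid-gray frame with one dust spot that
# transmits only 98% of the light.
image = np.full((4, 4), 0.5)
dustmap = np.ones((4, 4))
dustmap[2, 2] = 0.98      # the dust map records the attenuation...
image[2, 2] *= 0.98       # ...and the real shot is darker there too

corrected = flat_field_correct(image, dustmap)
print(np.allclose(corrected, 0.5))  # True: the spot is restored
```

The division restores the attenuated pixel to exactly 0.5, matching the 1/0.98 = 1.0204 amplification factor above.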
Since all of that takes place in linear gamma, colors and detail are not affected, other than getting slightly amplified/brighter where they were attenuated by the dust density.
An actual implementation has a few more details to deal with, like noise in the dustmap shot, vignetting/light fall-off across the image, color balance, etc., but the principle is as explained.
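One common way to handle two of those details, sketched below with numpy only (this is my own illustration of the idea, not the actual software): dividing the raw dust map by a heavily blurred copy of itself removes the slowly varying vignetting while keeping the small, local dust spots, and the blur also gives an estimate that is largely free of per-pixel shot noise. A real implementation would use a proper low-pass filter and/or average several dustmap exposures.

```python
import numpy as np

def clean_dustmap(raw_map, blur=9):
    """Separate small dust spots from large-scale vignetting.

    A box blur (naive double loop, fine for a sketch) estimates the
    slowly varying illumination; dividing the raw map by that estimate
    leaves only the local dust attenuation, renormalized to 1.0.
    """
    k = blur // 2
    padded = np.pad(raw_map, k, mode="edge")
    smooth = np.empty_like(raw_map)
    h, w = raw_map.shape
    for y in range(h):
        for x in range(w):
            smooth[y, x] = padded[y:y + blur, x:x + blur].mean()
    dust_only = raw_map / smooth      # vignetting cancels out
    return dust_only / dust_only.max()

# Toy example: a left-to-right light fall-off plus one dust spot.
vignette = np.tile(np.linspace(1.0, 0.7, 20), (20, 1))
raw = vignette.copy()
raw[10, 10] *= 0.95                   # dust transmits 95% here
cleaned = clean_dustmap(raw, blur=9)
# The darkest pixel of the cleaned map is the dust spot, not the
# vignetted right edge.
print(np.unravel_index(np.argmin(cleaned), cleaned.shape))
```

The same divide-by-blurred-copy trick is why the method can tolerate moderate light fall-off in the dustmap shot: only deviations that are small compared to the blur radius survive as "dust".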
So any image detail gets proportionally brighter where the dust made it darker, without altering the actual detail or resolution itself. And as long as the dust didn't move between the actual image exposure and the dustmap shot, the local amplification is pixel-perfectly aligned at each sensor position.
Cheers,
Bart