Hi Bart,
OK, Andrew posted at 1:16 and you posted at 1:27, so presumably you saw what he wrote. Does this boil down to the proposition that only the tooth fairy can unscramble omelettes, or does A.I. really make it possible to reverse-engineer or parse an already heavily compromised database into anything resembling what the original raw data would have been like?
As I said, Mark, it is not possible to unscramble the omelet, or unbake Andrew's carrot cake.
The goal of the A.I. is not to unscramble/unbake, but to scramble/bake again, starting from other/better Raw ingredients. The way that is attempted here is by taking many Raw files/fragments and their resulting JPEGs, and trying many combinations of Raw-conversion deterioration until one can reproduce the same effect, and then stop doing it that way! When a better result is produced, one has probably found a better Raw input that survived the deterioration process (lossy compression, reduced dynamic range, partially clipped highlights), or a way to undo the deterioration. That better recipe is then tried on other image fragments, until more often than not it also produces better output on other ingredients.
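A toy sketch of that trial-and-error search, in Python. Everything here is made up for illustration (the numbers, the deterioration function, the simple "gain" recipe) and is nothing like Topaz's actual method; it only shows the shape of the idea: degrade known-good values, then keep whichever candidate recipe best reproduces the originals from the degraded versions.

```python
import random

random.seed(0)

# Stand-ins for known-good Raw pixel values (hypothetical, for illustration only).
raw = [random.uniform(0.0, 1.0) for _ in range(200)]

def deteriorate(x):
    """Made-up deterioration: reduced dynamic range plus a clipped highlight."""
    return min(x * 0.7, 0.6)

degraded = [deteriorate(x) for x in raw]

def error(gain):
    """How badly a candidate recipe (here, just a gain factor) misses the Raw values."""
    return sum((d * gain - r) ** 2 for d, r in zip(degraded, raw)) / len(raw)

# Try many candidate recipes; keep the one whose output best matches the originals.
best_gain = min((g / 100 for g in range(100, 201)), key=error)
```

The search recovers roughly the inverse of the 0.7 gain reduction, but no gain can bring back what the clipping destroyed, which is exactly why a genuinely better Raw input beats any after-the-fact fix.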
This is an extremely complex process (if it were easy it would have been done already), and it can, as TopazLabs have mentioned, take weeks or months and many example image pairs before something better is found.
The result also depends on the training data set. A small set is easy to train on, but the resulting recipe will only work well for that specific set and fail on anything else. To achieve more general usefulness, many, many representative images are required.
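A contrived Python sketch of that failure mode (the "recipe" here is just a lookup table that memorises its five training pairs, nothing like a real trained model): it scores perfectly on its own tiny training set and then does poorly on fresh data.

```python
import random

random.seed(1)

def deteriorate(x):
    """Made-up lossy step: quantise to 8 levels, like crude compression."""
    return round(x * 8) / 8

# A tiny training set: five known Raw values and their degraded versions.
train_raw = [random.uniform(0.0, 1.0) for _ in range(5)]
table = {deteriorate(x): x for x in train_raw}

def recipe(degraded):
    """Memorising 'recipe': perfect on its training pairs, clueless elsewhere."""
    return table.get(degraded, degraded)

def mean_abs_error(samples):
    return sum(abs(recipe(deteriorate(x)) - x) for x in samples) / len(samples)

train_err = mean_abs_error(train_raw)  # zero: it simply memorised these pairs
test_err = mean_abs_error([random.uniform(0.0, 1.0) for _ in range(200)])
```

With only five examples the recipe looks flawless on its own set and then stumbles on anything it has not seen, which is why many representative image pairs are needed before a recipe generalises.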
Cheers,
Bart