Andrew,
Bart can explain all of this better (and has), but I continue to think you are trying to force this product into some past analysis and insights of yours. Try a fresh sheet of paper. Forget terms like JPEG, RAW, DNG, and TIFF as you have understood them.
Maybe I can get down and dirty; such a description might help in understanding what Topaz is attempting...
First, what I understand Topaz to be doing might be thought of as something like PS's "content-aware fill", which I believe is also supposedly enabled/enhanced by some of their own "A.I.". How does it work? PS "looks at" the pixels that are selected as acceptable replacement pixels. Based on that selection, I think (but do not know) that it then generalizes about the default selected pixel set and maps those acceptable pixels, via a transformation matrix (which is what I think the A.I. generates), into the pixel space selected for filling. Sometimes it does this well and sometimes terribly. The new version from Adobe lets humans see which pixels the algorithm is allowed to use as acceptable substitute pixels and pixel patterns. When the user changes this acceptable area, think of that as "training" PS to use a different transformation matrix. Maybe the result is better, maybe not, maybe perfect; the result will depend on the complexity of the "source" pixels and the target pixel space. Should we reject this and confine ourselves to using only a pure cloning mechanism? The new pixels are not "real". Do these improved images contain pink unicorns?
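To make the analogy concrete, here is a deliberately crude toy sketch of the idea of filling a "hole" from a user-approved donor region. This is my own illustration, not Adobe's actual algorithm; the image, the regions, and the statistics-matching "transformation" are all invented for the example:

```python
import numpy as np

# Toy sketch of the content-aware-fill idea (my analogy, NOT Adobe's
# algorithm): pixels in an approved "source" region supply the statistics
# used to synthesize pixels in the "hole" selected for filling.

rng = np.random.default_rng(0)
img = rng.normal(loc=128, scale=10, size=(8, 8))  # fake grayscale image

hole = np.zeros(img.shape, dtype=bool)
hole[3:5, 3:5] = True          # region the user wants replaced
source = ~hole                 # pixels approved as acceptable donors

# Crudest possible "transformation": draw new pixels matching the mean
# and spread of the donor region, so the fill blends statistically even
# though the new pixels are not "real".
mu, sigma = img[source].mean(), img[source].std()
filled = img.copy()
filled[hole] = rng.normal(mu, sigma, size=hole.sum())
```

Changing which pixels count as `source` is the toy version of "training" the fill to use a different transformation: different donors, different statistics, different result.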
Again shooting in the dark, imagine an x-by-y pixel matrix, aka the JPEG you see on your screen. Imagine banding and other JPEG artifacts one commonly sees. What the Topaz paper says (I take some liberties in the interest of brevity and possibly clarity) is that they took many paired RAW/JPEG examples and did some "A.I." on them (think of it as Excel regression analysis), pixel coordinate by pixel coordinate, for the entire set of images. Since each pixel coordinate has many, many attributes like tone, color, and others, the software goes to the first pixel and creates a data matrix of differences between the RAW version of the pixel and the JPEG pixel. Now do this for every pair of pixels on that particular image. Now take the next x thousand images and do the same. Now you have a boatload of data equal to the number of images times the number of pixels times the set of data differences. Now "regress" all of this and boil it down into a very complex mathematical polynomial. Now invert this polynomial equation and try it on some JPEG image not in the analyzed data set. If the result is not OK, do all of this again, but twiddle the knobs to get a different polynomial, etc., etc. It is certainly more complicated than this, but on the other hand, it is just trying to find repetitive patterns in the many RAW-to-JPEG conversions it analyzed. Each of you knows how to describe what banding in the sky looks like, and all these equations do is try to "undo" that causation via mathematics.
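The "Excel regression" version of the story above can be sketched in a few lines. Everything here is a stand-in: the "camera curve" (a simple gamma plus 8-bit rounding) is invented for illustration, and a single polynomial per tone is vastly simpler than whatever Topaz actually trains. But the shape of the exercise is the same: fit on many paired RAW/JPEG tones, then apply the learned "undo" to JPEG tones that were not in the training set:

```python
import numpy as np

# Toy version of the regression analogy, NOT Topaz's actual method.
# Pretend RAW tones pass through an invented camera curve (gamma 2.2
# plus 8-bit quantization) to become JPEG tones; then "regress" the
# paired data to learn a polynomial that maps JPEG tones back.

rng = np.random.default_rng(1)
raw = rng.uniform(0.0, 1.0, size=50_000)            # pretend linear RAW tones
jpeg = np.round(255 * raw ** (1 / 2.2)) / 255       # gamma + 8-bit rounding

# "Regress" the paired examples: polynomial predicting RAW from JPEG.
coeffs = np.polyfit(jpeg, raw, deg=5)
undo = np.poly1d(coeffs)

# Try the learned "undo" on JPEG tones not in the analyzed data set.
test_raw = np.linspace(0.05, 0.95, 100)
test_jpeg = np.round(255 * test_raw ** (1 / 2.2)) / 255
recovered = undo(test_jpeg)
err = np.abs(recovered - test_raw).max()            # small, but never zero
```

If `err` came out unacceptable, you would "twiddle the knobs" (a different degree, different features, more data) and regress again, which is the loop described above.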
Clearly, this has nothing to do with the pattern of data in a camera's raw data set. Maybe, if it is good, it will create a pattern of data that, when stored and opened as, say, a DNG file, better approximates what the original raw data set represented than the JPEG image does. If, in the process, it creates more gradations, say, between this tone and that, it gives you and me, the PS mechanics, more choices of intermediate tones to tune to our taste. If so, that would be, or could be, more "headroom."
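The "more gradations" point is easy to see with numbers. Below is a small sketch (my illustration, with an invented sky gradient): a smooth tonal ramp collapses into a handful of bands at 8-bit JPEG precision, while a higher-precision container such as a 16-bit DNG keeps far more intermediate tones available for later tuning:

```python
import numpy as np

# Illustration of "headroom": how many distinct tones survive in a
# narrow sky-like gradient at 8-bit versus 16-bit precision. The
# gradient range (0.50 to 0.52) is invented for the example.

true_sky = np.linspace(0.50, 0.52, 1000)            # smooth tonal gradient
jpeg_8bit = np.round(true_sky * 255) / 255          # 8-bit: visible banding
dng_16bit = np.round(true_sky * 65535) / 65535      # 16-bit: fine steps

distinct_jpeg = np.unique(jpeg_8bit).size           # only a few bands left
distinct_dng = np.unique(dng_16bit).size            # hundreds of tones left
```

The 8-bit ramp lands on only a handful of distinct codes, which is exactly the banding in the sky; the extra intermediate tones in the higher-precision version are the "headroom" a PS mechanic gets to work with.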
Content-aware fill, Topaz J2R: use 'em if you like, forget 'em if you wish. Why their names matter in "image processing" continues to escape me. My recollection is that many "real artists" laughed at users of Photoshop as producers of fake, laughable art. They are less sure now. Most importantly, A.I. is a basis for improving image processing, and it will get better and better; no matter how well content-aware fill and J2R perform now, odds are they will improve. Objecting to this as a possibility seems very odd to me.
All this literalism about the word "Raw" recalls Through the Looking-Glass, where Humpty Dumpty says that words mean whatever he says they mean. This seems even odder than improving image processing. Nobody can patent a word's meaning. Businesses and communities routinely change the meanings of words and add new ones. Raging against it will not change anything.
Bill
PS. "Sep 10, 2018 - Everyone went nuts for Adobe's “content-aware fill” in Photoshop when it ... essentially an AI-powered clone stamp that intelligently brought in ..." from
https://techcrunch.com/2018/09/10/adobe-supercharges-photoshops-content-aware-fill-so-you-have-more-options-fewer-ai-fails/