Table-based look-up (for the current lossy compression) is a single cycle; just an indexed lookup. Nothing matches that.
I would not make that claim without knowing the exact hardware in question. Is it an ASIC? An FPGA? Some DSP? Doing a full 14- to 16-bit table lookup can easily be a relatively "expensive" operation on some platforms, since it involves a large amount of memory that must be accessed in a seemingly random (i.e. hard-to-cache) pattern for each individual pixel. Now, perhaps the standard does something clever to avoid this, e.g. by having a smaller table that is accessed with some extra logic: "if the value is small, encode it directly; if the value is large, encode it using this (smaller) table".
I would guess that quantizing the value can often be considered "less expensive". My suggestion would be something along the lines of "if the value is small, encode it directly; if the value is large, drop the N least-significant bits".
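As a rough sketch of that kind of piecewise scheme (the threshold, the number of dropped bits, and the function names below are made up for illustration, not taken from the DNG spec or any camera firmware):

    #include <stdint.h>

    /* Hypothetical piecewise encoder: small values pass through unchanged,
     * large values are quantized by dropping DROP_BITS least-significant bits.
     * THRESHOLD and DROP_BITS are arbitrary illustration values. */
    #define THRESHOLD 256u
    #define DROP_BITS 4u

    static uint16_t encode_sample(uint16_t raw)   /* raw: 14-bit sensor value */
    {
        if (raw < THRESHOLD)
            return raw;                                       /* keep full precision */
        return THRESHOLD + ((raw - THRESHOLD) >> DROP_BITS);  /* coarser step size   */
    }

    static uint16_t decode_sample(uint16_t code)
    {
        if (code < THRESHOLD)
            return code;
        return THRESHOLD + ((code - THRESHOLD) << DROP_BITS); /* approximate inverse */
    }

The "smaller table" variant mentioned above would look the same, except the large-value branch would index into a modest lookup table instead of shifting; either way the per-pixel work is a compare plus a cheap operation, with no large random-access table involved.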
If I were to make a bet, I'd bet that if the next DNG spec has a new lossy compression capability, it will be JPEG DCT based. The reason is that a lot of camera chips have hardware acceleration for JPEG DCT built in to support JPEG output, so JPEG DCT lossy compression will be "costless" in terms of CPU utilization for many modern cameras.
Sandy
JPEG can usually handle only 8-bit YCbCr values, and its rate/perceptual-distortion performance depends on nonlinear gamma, sensible exposure, luma/chroma separation, etc. Fitting a 14-bit mosaiced BGRG signal into that in such a way that JPEG can compress it efficiently is nontrivial. I'm not saying it cannot be done, but it is going to cost brainpower, CPU power, compression/artifact efficiency, or all three.
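Just to illustrate one small piece of that work (everything here is hypothetical, not a proposal for the spec): before an 8-bit DCT codec can even see the data, each 14-bit linear sample has to be pushed through some nonlinear tone curve, and the mosaic has to be split or transformed into planes the codec can treat as luma/chroma. The curve alone might look like:

    #include <stdint.h>
    #include <math.h>

    /* Hypothetical pre-conditioning step: map a 14-bit linear sensor value
     * (0..16383) onto 8 bits (0..255) with a simple power-law "gamma" curve,
     * so quantization error is spread more evenly in perceptual terms.
     * The 1/2.2 exponent is just a common choice, not anything DNG specifies. */
    static uint8_t linear14_to_gamma8(uint16_t linear)
    {
        double norm = (double)linear / 16383.0;   /* normalize to 0.0 .. 1.0 */
        double enc  = pow(norm, 1.0 / 2.2);       /* nonlinear encode        */
        int v = (int)(enc * 255.0 + 0.5);         /* round to 8 bits         */
        return (uint8_t)(v > 255 ? 255 : v);
    }

And that still leaves the harder parts: choosing a curve that behaves sensibly across exposures, and turning the BGRG mosaic into something resembling the luma/chroma planes JPEG was designed around.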
-h