Erik - you say data is lost if you set black and white points at raw conversion time. What if you don't? What if you simply demosaic without adjusting black and white points and move straight into an RGB TIFF image editor (e.g. Photoshop)? Is any data lost in that case? Is there anything I can no longer do (i.e. impossible as opposed to just less convenient)?
Most conversions from raw to "developed" would include at least:
1. Demosaicing
2. Color correction and white balance
3. Choosing black point, white point
4. Sharpening, Denoising
5. Nonlinear tone curve/gamma
6. Quantization to 8 bits
7. Matrixing to YCbCr and downsampling chroma channels
8. Lossy encoding to e.g. jpeg
Optionally also highlight clipping recovery and lens corrections. A rough sketch of a few of these steps follows below.
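Just to make the pipeline concrete, here is a minimal, illustrative sketch of a few of those steps (white balance, color matrix, black/white point clipping, gamma, 8-bit quantization) in Python/numpy. The gains, matrix and gamma values are made up for illustration only; they are not any real camera's parameters.

```python
import numpy as np

def develop(linear_rgb, wb_gains=(2.0, 1.0, 1.5)):
    """Toy development of demosaiced, linear sensor data (float array, 0..1).
    The gains, matrix and gamma below are illustrative placeholders, not any
    real camera's parameters."""
    img = linear_rgb * np.array(wb_gains)           # step 2: white balance
    cam_to_rgb = np.array([[ 1.6, -0.4, -0.2],      # step 2: color matrix (made up);
                           [-0.3,  1.5, -0.2],      # the off-diagonal terms boost saturation
                           [-0.1, -0.5,  1.6]])
    img = img @ cam_to_rgb.T
    img = np.clip(img, 0.0, 1.0)                    # step 3: black/white point -- clipping discards data
    img = img ** (1.0 / 2.2)                        # step 5: nonlinear gamma / tone curve
    return np.round(img * 255).astype(np.uint8)     # step 6: quantization to 8 bits -- also lossy
```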
Most of these steps are (or can be) non-linear and difficult to reverse. Some throw away information outright (clipping, 8-bit quantization, chroma downsampling, lossy encoding) and are therefore impossible to reverse.
I am sure that it would be _possible_ to disable most of those steps in order to make a jpeg (or tiff) as close as possible to a raw file. I am not sure that it makes much sense, though. The point of a raw file is to retain as much information as possible about the signal coming from the camera sensor. The point of a developed file is usually to "look good". What is the point in trying to make a bastard that is neither?
The "colors" obtained if raw sensor channels (post demosaic) are routed directly to "r", "g", "b" channels looks unsaturated and dull. In order to make for something pleasing (and realistic), the raw devloper needs to apply some color transform that (among other things) tends to increase saturation. Unless you have precise knowledge of what it did, it can be hard to apply another transform (based on measurements of raw file behaviour) at a later point. The applied transform could also cause clipping in an image (even if the raw data were not clipped). In that case, it will be impossible to figure the original values.
The demosaic process can be hard or impossible to invert. If the interpolation is "interpolatory" (like some image scaling operations), the original values are left unchanged and only new ones are inserted. If that is the case, you may be able to invert the demosaic, provided you know the original CFA sequence ("rggb" vs "bggr" etc.) and the offset into the raw sensor data used for the output image. If the process is non-interpolatory (i.e. it changes pixels at both "known" and "unknown" data sites), it is going to be very hard to invert.
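If the demosaic really is interpolatory and you know the CFA layout and offset, pulling the measured samples back out is straightforward in principle. A sketch, assuming an RGGB pattern starting at pixel (0, 0); the function name and structure are mine, not any particular converter's:

```python
import numpy as np

def extract_bayer_samples(rgb, pattern="RGGB"):
    """Recover the original CFA samples from a demosaiced image, assuming an
    'interpolatory' demosaic that left the measured values untouched and a
    known 2x2 CFA layout starting at pixel (0, 0). Illustration only."""
    h, w, _ = rgb.shape
    chan = {"R": 0, "G": 1, "B": 2}
    # Map each position in the 2x2 CFA tile to the channel that was measured there.
    layout = {(0, 0): pattern[0], (0, 1): pattern[1],
              (1, 0): pattern[2], (1, 1): pattern[3]}
    bayer = np.zeros((h, w), dtype=rgb.dtype)
    for (dy, dx), color in layout.items():
        bayer[dy::2, dx::2] = rgb[dy::2, dx::2, chan[color]]
    return bayer
```

With a non-interpolatory demosaic the values at the measured sites have already been altered, so this simple extraction no longer gives you back the raw data.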
In the case of closed-source and proprietary raw developers (such as in-camera jpeg generation, Adobe Lightroom, ...) we don't even know exactly how the forward transformation works (i.e. what, mathematically, happens when I import a Canon 20D image and drag the exposure slider by "1.2 stops"), much less how best to invert those processing steps.
And so on...