[a href=\"index.php?act=findpost&pid=153144\"][{POST_SNAPBACK}][/a]
But it's obviously easy to map locations between the RAW and the JPG, since no cropping is involved.
I don't know how you can say it's easy.
I'm saying that mapping locations is easy. Given a pixel in the JPG preview (and I'm only talking about the preview here), you can by simple arithmetic find where in the RAW the original data that were used to make that pixel are.
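Something like this (a minimal sketch; the dimensions and example values are made up, and it assumes the preview is an uncropped, uniformly scaled-down version of the full sensor frame):

[code]
def preview_to_raw(x_jpg, y_jpg, jpg_size, raw_size):
    """Map a preview pixel to its source location in the raw mosaic
    by plain proportional scaling (no crop assumed)."""
    jpg_w, jpg_h = jpg_size
    raw_w, raw_h = raw_size
    return round(x_jpg * raw_w / jpg_w), round(y_jpg * raw_h / jpg_h)

# e.g. a 1024x683 preview of a 4288x2848 sensor frame:
# preview_to_raw(512, 341, (1024, 683), (4288, 2848)) -> (2144, 1422)
[/code]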
Raw is Grayscale data. There's a recipe (a million) for converting that from scene-referred, non-demosaiced data into demosaiced color. The conversions in-camera are all done using very sophisticated and proprietary algorithms which are not handed off to a Raw converter (well, it might be handed off to the manufacturer's converter, but to no one else).
I know that, and I'm not trying to reconstruct the algorithm, but merely an approximation of the result of the algorithm -- which we can see in the preview (and which was what the original poster saw in LR).
So you're assuming that:
A. Someone is shooting both Raw+JPEG (otherwise there's nothing the Raw converter can look at; the embedded preview may or may not have all the processing).
B. The converter can somehow produce the same rendering scene for scene.
It probably is possible for Adobe to make a default rendering that's closer to an in-camera JPEG for some but probably not all scenes. Or you can do this yourself as I've suggested you try (and see just how simple it really is).
No, and no. I'm not saying we can make a one-size-fits-all profile. I'm saying that given a specific camera-generated thumbnail JPG, the raw file for the same image, and the set of operations that our raw converter has, it should be possible to get close. To the thumbnail processing. For that image. The operation would have to be repeated for each image, since the camera may have done something different in the next image.
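To make this concrete, here's roughly the brute-force loop I have in mind, sketched in Python with the rawpy bindings to LibRaw. The file name is made up, it assumes the embedded thumbnail is a JPEG, and it only searches white-balance multipliers; a real version would search over more of the converter's operations:

[code]
import io

import numpy as np
import rawpy
from PIL import Image

raw = rawpy.imread("IMG_0001.NEF")        # made-up file name
thumb = raw.extract_thumb()               # assumes a JPEG-format thumbnail
preview = np.asarray(Image.open(io.BytesIO(thumb.data)), dtype=np.float32)

best = None
for r_mul in (1.5, 2.0, 2.5):             # crude grid over WB multipliers
    for b_mul in (1.0, 1.5, 2.0):
        rgb = raw.postprocess(user_wb=[r_mul, 1.0, b_mul, 1.0],
                              no_auto_bright=True, output_bps=8)
        # compare against the preview at the preview's own resolution
        small = np.asarray(Image.fromarray(rgb).resize(preview.shape[1::-1]),
                           dtype=np.float32)
        err = float(np.mean(np.abs(small - preview)))
        if best is None or err < best[0]:
            best = (err, r_mul, b_mul)

print("closest r,b multipliers:", best[1:], "mean abs error:", best[0])
[/code]

A smarter search than a grid would be the obvious next step, but the principle -- fit the converter's settings to the camera's own rendering, per image -- is the same.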
This is at the moment a pure gedankenexperiment. Back when I used UFRaw for my RAW conversions, a program doing this for me would have been a godsend, as UFRaw has awful defaults. With LR, I rarely have problems. Hmmm... UFRaw can take a specific white balance and base curve. Maybe I should take a shot at doing this. Guessing the white balance would be a good test.
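For instance, something like this could hand a guessed white balance off to UFRaw from a script (a sketch; the option names are how I remember ufraw-batch's, so check the man page for your version):

[code]
import subprocess

def develop_with_wb(raw_path, kelvin, green):
    subprocess.run([
        "ufraw-batch",
        f"--temperature={kelvin}",   # guessed color temperature in Kelvin
        f"--green={green}",          # green/magenta normalization
        "--base-curve=camera",       # if your build ships the camera curve
        "--out-type=jpeg",
        raw_path,
    ], check=True)

develop_with_wb("IMG_0001.NEF", 5200, 1.0)   # made-up values
[/code]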
Note that I'm not talking about finding one gold-standard default rendering that matches the camera, but about setting the initial settings based on what's seen in the thumbnail. I realize that making a single default to match what the camera does for all images is close to impossible, by machine or by hand.
It's totally different. When you profile, you send known RGB values to the device and measure them with an instrument, ideally a Spectrophotometer. With the camera, you're capturing Grayscale data and you've got nothing to measure the rendering from.
Even if you shot, say, a Macbeth target, got the JPEG and measured it, you might do a decent job of matching that with the Raw converter, but move into a different illuminant and all bets are off. This is why digital camera profiling generally sucks. You can't treat a digital camera, let alone one that captures Raw data, like a scanner that captures RGB data using the same illuminant, dynamic range, etc.
Which is why I said one of the first steps should be to figure out the white balance applied -- which should be doable by finding some greytone areas in the JPG and seeing what raw sensor data they were created from.
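Roughly like this (again a sketch with rawpy; the neutrality thresholds are arbitrary, and a real version would subtract the black level from the mosaic first):

[code]
import io

import numpy as np
import rawpy
from PIL import Image

raw = rawpy.imread("IMG_0001.NEF")             # made-up file name
thumb = raw.extract_thumb()                    # assumes a JPEG-format thumbnail
jpg = np.asarray(Image.open(io.BytesIO(thumb.data)), dtype=np.float32)

mosaic = raw.raw_image_visible.astype(np.float32)
colors = raw.raw_colors_visible                # per photosite: 0=R,1=G,2=B,3=G2
jh, jw = jpg.shape[:2]
rh, rw = mosaic.shape

# "neutral" in the JPEG: channels close together and mid-toned
spread = jpg.max(axis=2) - jpg.min(axis=2)
neutral = (spread < 8) & (jpg.mean(axis=2) > 60) & (jpg.mean(axis=2) < 200)

sums, counts = np.zeros(4), np.zeros(4)
ys, xs = np.nonzero(neutral)
for y, x in zip(ys[::25], xs[::25]):           # subsample; this is a sketch
    ry, rx = int(y * rh / jh), int(x * rw / jw)  # same scaling as before
    c = colors[ry, rx]
    sums[c] += mosaic[ry, rx]
    counts[c] += 1

means = sums / np.maximum(counts, 1)
green = (means[1] + means[3]) / 2
print("guessed multipliers r,g,b ~", green / means[0], 1.0, green / means[2])
[/code]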
And in the end, the reason we shoot Raw is to produce a color rendering we desire, not necessarily one that matches what the camera manufacturer thinks we desire.
I realize that, and I do that myself. But given varying images, any given preset will *in some cases* be a worse starting point than what the camera did. It can be mighty frustrating to see that the camera figured out something close to what you wanted in a split second, while you have to spend minutes getting there. As a computer scientist, I am morally opposed to throwing away precalculated data only to try to recreate it by hand :)
-Lars