My only point is that if a) the rendering I see on the LCD screen is a reasonable approximation to the scene itself (assuming I am looking at both at the same time and am therefore not suffering from memory lapses), b) what I subsequently see on the monitor is a reasonable approximation to the LCD screen, and c) the ACR raw rendering is not, then it seems logical that the original rendering is "more correct" (whatever that means).
There is no question the camera manufacturers go to great lengths to produce an LCD preview and a JPEG most users would accept as a reasonable presentation of the scene. But...
Considering the LCD isn’t a calibrated device, and considering it’s rarely possible to view that LCD, the (hopefully) calibrated display (showing a preview of a vastly different size), and the scene all together, it’s kind of a stretch to buy into the idea that they all correlate. The LCD shows a JPEG in sRGB, presumably on something like an sRGB-gamut panel; that data is rendered a fixed way with little or no option to tweak it; and visual adaptation is taking place. Again, it’s difficult to agree it represents the scene, any more than agreeing that shooting the scene on one or more transparency stocks matches the scene.

A reasonable approximation is what everyone would expect; otherwise no one would purchase the cameras and use their LCD screens (if they were way off, you’d hear most customers screaming). Is a Polaroid a reasonable representation of the film? Even if on one day you shoot Kodachrome and the next Velvia? Is the machine print you get from your color neg a reasonable representation of how you remember the scene? If it were always way off, again, customers would be screaming. In the end, it’s all very much like the saying “close enough for horseshoes and hand grenades”.
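To put the “fixed rendering” point in concrete terms, here is a minimal sketch, assuming the Python rawpy bindings to LibRaw and a made-up filename (both are my assumptions, not anything from the posts above). It pulls the camera’s embedded JPEG out of a raw file, which is essentially the same finished, in-camera sRGB rendering the rear LCD displays, and then runs a default conversion of the same raw data, roughly analogous to what a raw converter starts from before any adjustments:

import rawpy
import imageio.v3 as iio

RAW_PATH = "IMG_0001.CR2"  # hypothetical filename, for illustration only

with rawpy.imread(RAW_PATH) as raw:
    # 1) The embedded preview: the camera's own fixed JPEG rendering,
    #    i.e. what the rear LCD is actually showing you.
    try:
        thumb = raw.extract_thumb()
        if thumb.format == rawpy.ThumbFormat.JPEG:
            with open("camera_rendering.jpg", "wb") as f:
                f.write(thumb.data)  # already a finished sRGB JPEG
    except rawpy.LibRawNoThumbnailError:
        print("No embedded preview in this raw file")

    # 2) A default demosaic of the raw data: a different, fully tweakable
    #    rendering of the same exposure.
    rgb = raw.postprocess(use_camera_wb=True, output_color=rawpy.ColorSpace.sRGB)
    iio.imwrite("raw_default_rendering.png", rgb)

Open the two results side by side and you have two renderings of the same capture, neither of which is “the scene”; which of them is “more correct” is exactly the question at issue.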