Many thanks for the explanations and for your comments.
I largely agree with and/or accept your points, so it would be redundant to quote everything.
... However, there is certainly an argument in favor of providing better starting points to help users improve their workflow.
I don't know about 'easier'. I guess it would depend on how easy that 'look' information is to access. The problem with treating DPP as a black box and grabbing its final image as a starting point is that, well, you're no longer starting from a raw file! You would be editing from an output-referred image, in which case some of the raw benefits would be lost.
Without intending to insist, the initial idea was as follows:
Referring to Camera Raw and the processing stage of linear-gamma ProPhoto RGB (1.0 pRGB), which ACR was reported to use as an intermediate working space, there should be access to scene-referred image data, freshly derived from “native” Raw by basic operations such as demosaicing and conversion to this 1.0 pRGB. This stage sits after Scene Reconstruction and before Creative Processing, to borrow the terms from Simon Tindemans’ essay [url=http://21stcenturyshoebox.com/essays/scenereferredworkflow.html]here[/url].
Blending such a scene-referred image (in 1.0 pRGB) with an already processed “JPEG” from one and the same shot (brought into the same 1.0 pRGB at high bit depth) could be seen as a creative tool and part of Creative Processing on the way to an output-referred, i.e. preferred, rendition. Nothing mandatory, just an option.
There are for sure pros and cons to this approach. FWIW, the main advantage I see is to let the specific “look”, e.g. from a Canon Picture Style, shine through without a variety of adjustments needed, at least for a start, while keeping key Raw advantages such as highlight detail. The term “shine through” could refer to a normal RGB blend at reduced but tunable Opacity, or maybe there are better ways, e.g. via HSL blend modes.
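Just to make the “normal RGB blend at reduced but tunable Opacity” idea concrete, here is a minimal sketch. It assumes both images have already been brought into the same linear-gamma working space (e.g. 1.0 pRGB) as float arrays; the function name and signature are my own illustration, not anything ACR actually exposes:

```python
import numpy as np

def blend_linear(raw_scene, jpeg_look, opacity=0.35):
    """Normal blend of a scene-referred raw render with a camera-JPEG
    'look', both assumed to be float arrays in the SAME linear-gamma
    working space (e.g. linear ProPhoto RGB, normalized to [0, 1]).

    opacity=0.0 keeps the raw render unchanged; opacity=1.0 takes
    the JPEG look entirely.
    """
    raw_scene = np.asarray(raw_scene, dtype=np.float64)
    jpeg_look = np.asarray(jpeg_look, dtype=np.float64)
    return (1.0 - opacity) * raw_scene + opacity * jpeg_look
```

Because the blend is done on linear data, the mix is a straightforward weighted average of (relative) scene radiances rather than of gamma-encoded values.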
Another possible advantage concerns very saturated colors. While the pRGB pathway is certainly suited to preserving a major part of scene-referred color saturation, many such colors still cannot be printed, so the user often has to go through a second round of editing to move these out-of-gamut (OOG) colors towards or inside the designated output gamut while keeping the appearance as far as possible. Purists will of course appreciate the flexibility of this approach, but it may not suit everyone.
Camera manufacturers, by contrast, seem to have their own proprietary ways to compress saturated colors into tiny sRGB while preserving “vividness” and image detail, again as far as possible. At least to my eye, their recipe does not look like a straight relative colorimetric (RelCol) pathway, which tends to produce more channel clipping and posterization.
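To illustrate the difference between a straight RelCol-style per-channel clip and a softer roll-off, here is a toy comparison. The soft-knee function is purely illustrative, not any manufacturer’s actual recipe; the point is only that clipping collapses all OOG values to the same number (posterization), while a roll-off keeps them distinct:

```python
import numpy as np

def hard_clip(rgb):
    """Naive per-channel clip into [0, 1]: every out-of-range value
    lands on exactly 1.0, so gradation in OOG areas is destroyed."""
    return np.clip(np.asarray(rgb, dtype=np.float64), 0.0, 1.0)

def soft_compress(rgb, knee=0.8):
    """Toy soft-knee compression: values above `knee` are rolled off
    smoothly towards (but never reaching) 1.0, so some gradation
    among formerly out-of-range values survives."""
    rgb = np.asarray(rgb, dtype=np.float64)
    out = rgb.copy()
    headroom = 1.0 - knee
    over = rgb > knee
    out[over] = knee + headroom * (1.0 - np.exp(-(rgb[over] - knee) / headroom))
    return out
```

With `knee=0.8`, an in-gamut value like 0.5 passes through unchanged, while 0.9 and 1.2 map to two *different* values below 1.0 instead of both clipping to 1.0.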
By means of the Raw + JPEG blend in 1.0 pRGB described above, absolute color saturation would be “buffered”, so that the de facto gamut occupied by the image can already be closer to the designated output gamut (in a global sense).
To raise a last aspect: aside from the suggested “Raw + JPEG” amalgamation, the requested capability to blend images in ACR at the level of 1.0 pRGB could also be particularly useful, e.g. for HDR imaging from “Raw + Raw”, by doing it “properly” in a linear-gamma space and before Creative Processing.
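As a sketch of why “properly in a linear-gamma space” matters for Raw + Raw merging: exposure-weighted averaging is only physically meaningful on linear data, where pixel values are proportional to scene radiance times shutter time. The following is a minimal, hypothetical example with a simple hat-shaped weighting, not ACR’s (or any camera’s) actual HDR method:

```python
import numpy as np

def merge_exposures_linear(frames, exposure_times):
    """Merge bracketed frames into one linear radiance estimate.

    frames: list of float arrays in [0, 1] holding LINEAR-gamma data
    exposure_times: relative shutter times, one per frame

    The hat-shaped weight favours mid-tones, so clipped highlights in
    the long exposure and noisy shadows in the short one contribute
    little to the result.
    """
    num = 0.0
    den = 0.0
    for img, t in zip(frames, exposure_times):
        img = np.asarray(img, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peak weight at 0.5, zero at 0 and 1
        num = num + w * img / t            # divide by time -> relative radiance
        den = den + w
    return num / np.maximum(den, 1e-12)    # avoid division by zero
```

On gamma-encoded data the same averaging would mix nonlinearly encoded values and bias the result, which is the usual argument for doing such merges before any tone curve is applied.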
Makes sense, or perhaps not enough?
Let’s see …