I think it doesn’t really matter whether you apply a curve in C1 or later in Photoshop.
That's something I've been wondering about. Bart suggests that it would be better to apply Topaz Clarity (for instance) before applying the tone curve in the case of shadow recovery. I think you're saying that it should not matter as long as we stay in 16-bit.
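The 16-bit point can be sketched numerically. This is a toy illustration (a hypothetical shadow-lifting curve, not C1's actual tone curve): it counts how many distinct tonal levels survive when a curve is applied and stored at 8-bit versus 16-bit precision.

```python
import numpy as np

def apply_curve(values, bits):
    """Apply a simple shadow-lifting curve at the given bit depth and
    return how many distinct output levels survive the round trip."""
    levels = 2 ** bits - 1
    # quantize the input to the working bit depth
    q = np.round(values * levels) / levels
    # a strong, hypothetical shadow-lifting curve (square root)
    curved = q ** 0.5
    # quantize again, as an editor storing at this depth would
    out = np.round(curved * levels).astype(np.int64)
    return len(np.unique(out))

# a smooth gradient of 100,000 tonal values between 0 and 1
grad = np.linspace(0.0, 1.0, 100_000)

print(apply_curve(grad, 8))   # at most 256 levels can survive
print(apply_curve(grad, 16))  # tens of thousands survive
```

At 8-bit the highlight-compressing part of the curve merges neighboring levels and the shadows posterize; at 16-bit there is so much headroom that the same curve loses nothing visible, which is why the order of curve vs. other edits matters little in a 16-bit pipeline.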
Well, in my opinion Bart provides excellent info, and I have yet to see a contribution from Bart I couldn’t agree with.
However, it also depends on the workflow… I process „Master Images“ from my RAWs: a 1:1 copy, if you will, of my intended RAW conversion. I store these 16-bit TIFs together with my RAW files in my long-term archive. Any further editing is done in Photoshop on layers. When finished, I store the layered TIF plus a flattened copy of the final edit. This flattened 16-bit TIF is what I call my „photo“. (The flattened copy of the final edit also goes into my long-term archive.) In such a „Master File“ (both the unedited and the edited version) you don’t want sharpening and/or clarity applied. These files should be as „clean“ and as „detailed“ as possible… well, at least for me.
In my workflow, clarity and sharpening first come into play with regard to a specific output. And as far as clarity goes, I have never needed it to edit my actual „photos“… I only use it occasionally when preparing my files for print. Maybe this has to do with the cameras I use (they all lack an anti-aliasing filter), but maybe also with my aesthetic perception. Finally, at least for my serious work, my files are prepared to be printed pretty large (120 cm ≈ 47″ on the short side) and I try to avoid any kind of „harshness“ before uprezzing. Some images do go through a (very) mild deconvolution sharpening before uprezzing, though.
To cut a long story short, I think it depends on your particular workflow and on how you are used to doing things…
How important is it to edit colors as few times as possible? In theory I would assume the fewer the better, but even if everything is done in C1 or LR, presumably each adjustment is applied one after the other rather than as one combined adjustment? In other words, say there is a global saturation adjustment and a local saturation adjustment: presumably the global adjustment would be applied first, followed by the local one.
In a parametric workflow it shouldn’t matter…
Just a question: do you think that using the embedded camera profile is a good idea?
Yes, I do. I don’t recommend it to everyone, though, since those table-based input profiles have limitations when used as editing spaces in Photoshop. But we talked about this in another thread some time ago, no? :-)
I would have thought that if you want to keep as large a color space as possible, it would be better to go to a large working space like ProPhoto rather than stay in the camera profile.
Now, I really don’t want to go into yet another color-management debate. But if you use Capture One, ProPhoto RGB is not needed at all. I would recommend processing to ACES, ACEScg or Rec. 2020 (if you don’t want to embed the camera profile).
When we had that talk about camera profiles in the other thread, a user of this forum dropped me a line about his use of camera profiles. He had compared various captures processed with the camera profile embedded against the same captures processed with ProPhoto RGB set on output, loaded the respective TIFs into ColorThink Pro, and analysed the number of unique colors contained in each version of the same image. Independent of the contrast and/or saturation of the scenes, the ProPhoto RGB output always contained far fewer unique colors than the images processed with the camera profile embedded.

I verified his findings and also compared random images with different source profiles (sRGB, Adobe RGB etc.) converted to ProPhoto RGB on the one hand and to ACES, Rec. 2020 and ECI-RGB in Photoshop on the other (all conversions performed in 16-bit; all target profiles feature gamma 1.8). The ProPhoto versions consistently contained remarkably fewer unique colors than the other color spaces.

Now… there are rounding errors and possibly other factors that affect ColorThink’s readout of unique colors. Still, it was apparent that, independent of the source profile and the image content, the images converted to ProPhoto always showed the lowest number of unique colors (around 10% fewer, sometimes even more). Since ProPhoto RGB is a pretty dated color space (designed back then with film-based workflows in mind), and since it doesn’t encompass all the colors digital cameras can capture, it has no particular merit (other than being pretty large… and being the internal color space of Adobe’s RAW software).
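The methodology behind that comparison is easy to sketch. The snippet below does not reproduce the ProPhoto finding itself (real profile conversions are matrix + TRC or LUT-based); it only illustrates the counting step, in the spirit of ColorThink's readout, and shows how requantizing to 16-bit after a nonlinear per-channel remap (a crude stand-in for a conversion with a different tone response curve) merges previously distinct colors.

```python
import numpy as np

def unique_colors(img):
    """Count distinct RGB triplets in an integer image array of shape
    (H, W, 3), similar in spirit to ColorThink Pro's unique-colors count."""
    flat = img.reshape(-1, 3)
    return len(np.unique(flat, axis=0))

# stand-in for a 16-bit TIFF: a full-range neutral gradient, 65,536 colors
x = np.arange(65536, dtype=np.uint16)
img = np.stack([x, x, x], axis=-1).reshape(256, 256, 3)

# crude stand-in for a color-space conversion: a per-channel gamma remap
# followed by requantization to 16-bit; only the rounding step is modeled
remapped = np.round((img / 65535.0) ** (1 / 1.8) * 65535).astype(np.uint16)

print(unique_colors(img))       # 65536
print(unique_colors(remapped))  # fewer: highlight levels have merged
```

Every quantized conversion can only merge colors, never create them, and the further the target TRC and gamut are from the source, the more merging you get; that is consistent with (though not proof of) the ProPhoto numbers above.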
Back to the initial question: you are effectively editing in the camera’s color space in Capture One. Whether you convert to any other color space right on output or later in Photoshop (with the help of color warning and the info palette) doesn’t make a difference (in theory and in practice). Since Photoshop provides much more control I embed the camera profiles on output when processing from Capture One. Mostly I also edit in the camera’s color space.
As to the film curves vs. the linear curve… attached is the capture from above, loaded in C1 as-is (no adjustments to exposure, levels, contrast and so on).
Top left: Standard Film Curve
Bottom left: Linear Curve
Top right: Standard Film Curve & HDR: shadows and highlights both recovered by 100%
Bottom right: Linear Curve & HDR: shadows and highlights both recovered by 100%
As you can see even from these small screenshots, you can achieve the same amount of detail/differentiation with the Standard Film Curve. It just looks more „natural“.
To finalize the image according to your creative intent you have to edit it differently… but technically you don’t lose anything when using the Standard Film Curve at this stage.
But that’s easy to show with this example. There are other scenes that are more complicated to edit with the film curve in conjunction with "ETTR"…