Hi all! My first post here. This ended up being way longer than intended but I struggled to boil it down as there’s so much to set up before my questions at the end. Hope that’s okay!
First off, it’s a little daunting for me to post here, as I come from a background of motion picture post-production (by day I’m a VFX compositor), so my understanding and implementation of colour management and colourspace conversion workflows (at least at a professional level) lives in node-based software like Nuke and Resolve, which do things slightly differently to PS. My day job also doesn’t involve input device profiling, so please excuse any gaffes, concepts that don’t quite translate, or terms that aren’t worded the standard way - my lack of expertise in this area will probably show! I’m not trying to pass any of this off as fact; just sharing my findings in order to ask for professional advice where things get a little murky.
I’ve seen some recent threads online about this topic that prove just how slippery things can get when it comes to specific language 😬
With all that said, something has been playing on my mind lately regarding input device profiling with a translucent it8 target (be it for traditional scanners or camera scans), specifically when it comes to scanning colour negatives. And note - I’m not talking about profiling the neg stock being scanned, but the capture device + light source. In other words, the ability to capture accurate scene data of the negative as a positive transparency before inversion work begins, purely to correct device errors in colour rendition.
I must’ve read and seen just about every article, technical paper, blog and forum post on the topic I could find, dating back to the early 2000s, and can see very logical arguments made both for and against profiling devices when it comes to CN. Most of what I read doesn’t go into a ton of detail, and some of it is so old that I wonder if the industry has adopted newer/more accurate workflows since.
In the NO PROFILING camp, several people suggest that due to the orange mask, and how narrow a slice of the target space the un-inverted neg colours occupy, any profiling effort is pretty meaningless and can cause more harm than good in certain cases. Especially with anything other than a matrix-only profile, since a nonlinear curve bending ever so slightly wrong through the neg’s range could be catastrophic once inverted and scaled into the positive image. I tend to agree with this logic, and my experiments mostly bear it out. But this is very anecdotal and I’m very open to this being the wrong conclusion!
The downsides to No Profiling are that there’s no real guaranteed consistency across scanners. Differences in device sensitivity will yield wildly different results, and the same neg will look a million different ways on a million different scanners. This tracks with what I’ve seen at work when it comes to even very very expensive projects - scans of the same negative reels by different A-list motion picture film labs can come back looking so different that they may as well be different film stocks altogether. But again, anecdotal.
In the YES PROFILING camp, people say it would be silly not to correct for the native defects/offsets in an individual scanner’s reproduction, which, left unchecked, only make correcting the negative harder by compounding one problem on top of an already very complex operation. The downside, as I understand it, is what I mentioned above: one microscopic wrong assumption in the correction profile and it could be game over anyway, since the profiling data must be so much sparser for the colours a neg can represent in its positive/orange state. It’s not as simple as profiling for normal scenes or even slides, since those images are much better spread across the it8 data and likely to inherit the profile more successfully.
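To put a rough number on that amplification argument: in a simplified Cineon-style model (every constant below is illustrative, not measured from any real stock or profile), a tiny wobble in a profile’s tone curve, expressed as a density error on the negative, gets exponentiated during inversion:

```python
import math

GAMMA_N = 0.55  # assumed negative gamma (illustrative placeholder)

def neg_to_scene(density, base=0.25, gamma=GAMMA_N):
    """Invert a colour-negative density to relative scene exposure
    (simplified Cineon-style model; constants are illustrative)."""
    return 10 ** ((density - base) / gamma)

d = 1.0        # some mid-range negative density
wobble = 0.02  # a tiny profile-curve error, in density units

clean = neg_to_scene(d)
off = neg_to_scene(d + wobble)
rel_err = off / clean - 1
print(f"{rel_err:.1%}")
```

A 0.02 density error is invisible on the neg itself but comes out as roughly a 9% exposure error on the positive, and if each channel’s curve wobbles differently, that’s a colour cast rather than just an exposure shift.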
So maybe allow me to share where I’m at with my workflow, with which I’ve had most success to date (I must have tried them all by now). I still have concerns that it gets a little hand-wavy, but overall very happy with the results. And if anyone spots problems with what I’m doing, I’d appreciate a hand!
1. Scan using either the scanner’s own software with colour management disabled (e.g. Epson Scan, which generates a gamma 2.2 TIFF in what I assume is the device’s own RGB space), or VueScan’s raw linear TIFF with ‘Device RGB’ set as the colourspace.
2. In my image processing software, apply a gamma conversion from 2.2 to 1.0 (i.e. linearise) in the Epson case, or nothing for the already-linear VueScan file.
3. Execute the neg inversion maths, whatever that might be. I have a method I developed in Nuke that I’m testing in PS too, based on maths from the well documented Kodak/Cineon system and ARRI papers. It inverts using a log curve plus specific gamma values and parameters. Long story short, I feed the algorithm a linear scan and receive a roughly scene-linear positive, on which I can apply RGB gains to white balance. I validated this linear response by comparing against linearised digital exposure brackets of the same test scene, shot side by side with some film rolls - and it’s pretty much a fit for most of the dynamic range. So let’s assume that’s working and valid for now!
4. Take my scene-linear film image to anywhere between gamma 1.8-2.2 for display purposes and apply a shoulder compression curve to round off the highs. Now I have a correctly white balanced image that was produced using scene-referred tools. And depending on the device/scanner, it could be lacking saturation. I’m assuming (and please stop me here if this is wrong) this is because, even though it’s now a positive image, it still behaves as though it’s in the scanner’s wider RGB space, because I believe I did not break scene-linearity. And this is where it gets pretty hand-wavy:
5. Simply add chroma to taste via LCH or HSL... or -
6. Assign a profile that makes the image look good. Depending on the device it could be sRGB, it could be Adobe RGB, some J. Holmes profiles I like, or it could even be the feared it8 calibration profile - which has sometimes worked okay for me, in fairness... I am again assuming (!) this is because the well-behaved inversion doesn’t break linearity up to this point, so the matrix could still be valid to reach the intended primaries for the scanner. Does this even make ‘colour management sense’? Whether or not it does, it produces good results 😅. Note I’m only mentioning assigning a profile this late on, but it could be done at any stage - it just can only be judged once the underlying base inversion/correction has been done.
7. After the neutral base inversion and ‘finding’ the right profile to assign, I can then convert into a bigger working space if needed (or not) for further creative work, before final conversion to sRGB/web.
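In case it helps to see steps 1-4 in one place, here’s a miniature sketch of the pipeline in Python/NumPy. To be clear: every constant here (gamma, base density, shoulder) is an illustrative placeholder, not my actual Nuke values, which depend on the stock and device:

```python
import numpy as np

def pipeline_sketch(scan, src_gamma=2.2, neg_gamma=0.55, base_density=0.25,
                    wb_gains=(1.0, 1.0, 1.0), display_gamma=2.2, shoulder=4.0):
    """Steps 1-4 in miniature. Every constant is an illustrative placeholder."""
    # Step 2: linearise the gamma-encoded scan (skip for an already-linear TIFF)
    lin = np.clip(scan, 1e-6, 1.0) ** src_gamma

    # Step 3: Cineon-flavoured inversion: transmittance -> density -> scene exposure
    density = -np.log10(lin)
    scene = 10.0 ** ((density - base_density) / neg_gamma)
    scene = scene * np.asarray(wb_gains)   # per-channel white balance gains

    # Step 4: shoulder compression to round off the highs, then display gamma
    scene = scene / (1.0 + scene / shoulder)
    return np.clip(scene, 0.0, 1.0) ** (1.0 / display_gamma)
```

The property I care about is that everything up to and including the white balance stays scene-linear; only the last two lines are display-referred.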
What I don’t like about this is the slight guesswork when it comes to which profile to tag the scanner RGB as... but I find it hard to argue with the results I get this way. What I do like is that I’m working in the scanner’s linear RGB data/space for the inversion instead of some arbitrary converted-into working space, and no matter what profile I apply or chroma treatment I add once we’re out of linear land, the base inversion and white balance is always done and always looks correct underneath (albeit desaturated due to its wide-ish colourspace).
The alternate workflow is:
1. Either assign the device RGB or the it8 calibration profile.
2. Convert to a working space (Adobe RGB, ProPhoto, Ekta Space, ACEScg, etc.)
3. Invert using whatever method, e.g. a linear workflow like mine above.
What I really dislike about this method is that depending on which working space you convert into before inverting, you get wildly different results. Overall I find that a space just big enough to hold all the film’s colours (but not much bigger) works well - like DCam 3 from J. Holmes or his free Ekta Space, and sometimes Adobe RGB. The huge spaces just produce way too much colour contrast and hue shifts. Overall it feels incredibly arbitrary to pick one of these as the working space before the inversion - which also now has to be redone from scratch per colourspace ‘trial’, since the original colours get changed by the conversion.
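For anyone wondering why the choice matters at all: a nonlinear inversion doesn’t commute with a 3x3 colourspace matrix, so converting first genuinely changes the numbers that come out rather than just re-expressing the same image. A toy demonstration (the matrix is made up, and the reciprocal is a stand-in for whatever inversion you actually use):

```python
import numpy as np

# Made-up 3x3 "device RGB -> working space" matrix (rows sum to 1.0)
M = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.85, 0.05],
              [0.05, 0.10, 0.85]])

def invert(rgb):
    """Stand-in for any nonlinear neg inversion (a simple reciprocal here)."""
    return 1.0 / np.maximum(rgb, 1e-6)

neg = np.array([0.30, 0.45, 0.60])      # one un-inverted negative pixel

invert_then_convert = M @ invert(neg)   # my workflow: invert in device space
convert_then_invert = invert(M @ neg)   # alternate workflow: convert up front

print(invert_then_convert)
print(convert_then_invert)              # different numbers, different colours
```

If everything in the chain were linear the order wouldn’t matter, but the inversion is anything but, so each candidate working space bakes itself into the result.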
My questions (sorry it took so long!)
- Is there anything monumentally stupid about my first pipeline? Any suggestions to decrease the guesswork? It works for me, but sadly each device seems to need its own profile tag and/or chroma recipe... but there usually is at least one. Doesn’t seem very colour-science-y to fish for profiles, but then again you’d be surprised by the not very colour-science-y things we sometimes do to get things into cinemas and no one complains. 🙃
- Does it make sense for the it8 calibration matrix profile to somehow be valid when used in this way, or is it luck/coincidence when it does? I’m still very suspicious, but since it’s not touching the data underneath... if it looks good then it is good.
- How do commercial labs deal with this for their devices & consistency between scanners for negs? I.e. do they assign an input profile and work in device native like my workflow, or do they convert from device to working space up front. And if so, which working space is usually the norm if such a norm is even possible?
Apologies for the long post, and for any stupidity I have subjected you folks to.
Cheers!
Miguel