How has it been determined what colors the camera is able to 'see'?
Isn't that the purpose of DCamProf, to allow encoding the colors that the camera can see? Whether and how that translates into humanly visible colors is another exercise, and whether and how our output modalities reproduce them is yet another. If we cannot adequately encode the input, there is no hope of getting good output.
I'd have to assume that such a color target, optimized for digital sensors, would need a certain gamut shape and size tuned to the math that maps colors to the display. I'm not sure whether the shape of the gamut matters more than its size. The 3D gamut models of most inkjets show some colors going outside the AdobeRGB gamut, while most others stay well within it.
Yes, that's why an inkjet-printed target may not be the best basis. Remember the exercise that Bruce Lindbloom did to find the parameters for his BetaRGB colorspace. It encompasses the many important colors that may need to be encoded, while including as few of the more extreme ones as possible, which improves the precision of the quantization step.
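To make that quantization argument concrete, here is a minimal numpy sketch (my own illustration, not from Lindbloom): it encodes the same in-gamut colors as 8-bit linear RGB in BetaRGB and in the much larger ProPhoto RGB, and compares the XYZ round-trip error. The RGB-to-XYZ matrices are the D50 values from Bruce Lindbloom's reference tables, copied here from memory, so verify them against his site before relying on them.

```python
import numpy as np

# RGB -> XYZ (D50) matrices as published in Bruce Lindbloom's
# reference tables (copied from memory; verify against the source).
M_BETA = np.array([[0.6712537, 0.1745834, 0.1183829],
                   [0.3032726, 0.6637861, 0.0329413],
                   [0.0000000, 0.0407010, 0.7845884]])
M_PROPHOTO = np.array([[0.7976749, 0.1351917, 0.0313534],
                       [0.2880402, 0.7118741, 0.0000857],
                       [0.0000000, 0.0000000, 0.8252100]])

def quantization_error(M, xyz, bits=8):
    """Worst-channel XYZ error after encoding xyz as linear RGB
    quantized to `bits` bits and converting back."""
    rgb = np.linalg.solve(M, xyz)             # XYZ -> linear RGB
    scale = 2.0 ** bits - 1.0
    rgb_q = np.round(np.clip(rgb, 0.0, 1.0) * scale) / scale
    return float(np.abs(M @ rgb_q - xyz).max())

# Sample colors that both spaces can hold: random BetaRGB coordinates
# (BetaRGB's gamut lies inside ProPhoto's, so these are safe in both).
rng = np.random.default_rng(42)
samples = rng.uniform(0.05, 0.95, size=(10000, 3))
xyzs = samples @ M_BETA.T                     # linear BetaRGB -> XYZ

for name, M in (("BetaRGB", M_BETA), ("ProPhoto", M_PROPHOTO)):
    errs = [quantization_error(M, xyz) for xyz in xyzs]
    print(f"{name}: mean XYZ quantization error {np.mean(errs):.6f}")
```

On this synthetic test the larger space shows a coarser quantization step for the same colors, because its code values have to span coordinates that real scenes rarely contain; the effect grows with gamut size, which is exactly the argument for keeping extreme coordinates out of a working space.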
Can the distinction between subtly different colors be improved by using a custom target that is tuned more closely to the actual gamut of colors a digital camera 'sees'?
Not all colorspace coordinates represent humanly visible colors, and those are thus by definition not 'colors'. What matters is that a camera can capture and distinguish between as many 'colors' as possible. This is never going to be perfect, because real camera sensitivities do not satisfy the Luther-Ives condition (more here), so some form of perceptual mapping will ultimately be needed.
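As an aside on how one can quantify that: the Luther-Ives condition holds only if the sensor's spectral sensitivities are an exact linear combination of the CIE color matching functions, so the residual of a least-squares fit measures the departure. Below is a rough numpy sketch with made-up Gaussian curves standing in for measured data; substitute tabulated CMFs and real sensor measurements for any serious use.

```python
import numpy as np

wl = np.arange(400, 701, 5)  # wavelengths in nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Stand-ins for the CIE 1931 color matching functions (crude Gaussian
# approximations; use the real tabulated CMFs for actual work).
cmfs = np.stack([gauss(600, 40) + 0.3 * gauss(450, 20),
                 gauss(550, 45),
                 gauss(450, 25)], axis=1)

# Stand-ins for measured camera spectral sensitivities (made up).
cam = np.stack([gauss(610, 35),
                gauss(540, 40),
                gauss(465, 30)], axis=1)

# Least-squares fit: the 3x3 matrix B minimizing ||cmfs @ B - cam||.
B, res, *_ = np.linalg.lstsq(cmfs, cam, rcond=None)
residual = np.linalg.norm(cam - cmfs @ B) / np.linalg.norm(cam)
print(f"relative Luther-Ives residual: {residual:.3f}")
# A residual of exactly 0 would mean the sensor satisfies Luther-Ives;
# anything above it implies stimuli the camera distinguishes
# differently than a standard human observer would.
```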
The goal of creating a camera profile is to get a solid basis for the conversions that are to follow: accurate for important colors, with smooth transitions to intermediate colors, and able to reduce metameric and color-constancy issues. But that is more an input- or scene-referred kind of profiling than the ACR-centric output-referred profiling that gives us hue twists and other trouble.
I prefer clean scene-referred profiling, making use of an adequate but not overly large working space, combined with perceptually based output processing, e.g. using a CIECAM-like color appearance model for output. I am not sure whether an inkjet print will be challenging enough for a camera sensor, although it does allow producing some saturated Cyans, Yellows, and Magentas. Apparently those dyes or pigments also occur in nature, so we should be able to encode such colors as well.
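For a taste of what such a perceptual output step can look like in code, here is a sketch using the third-party colour-science package; the function and constant names below follow recent versions of that library, but its API has shifted over releases, so treat the exact names as assumptions.

```python
import colour  # pip install colour-science

# Convert a tristimulus value to CIECAM02 appearance correlates under
# typical viewing conditions (the numbers are illustrative choices).
XYZ   = [19.01, 20.00, 21.78]      # sample stimulus, domain [0, 100]
XYZ_w = [95.05, 100.00, 108.88]    # reference white
L_A   = 318.31                     # adapting luminance (cd/m^2)
Y_b   = 20.0                       # relative background luminance

surround = colour.VIEWING_CONDITIONS_CIECAM02['Average']
spec = colour.XYZ_to_CIECAM02(XYZ, XYZ_w, L_A, Y_b, surround)

# Lightness J, chroma C and hue angle h are the correlates an output
# transform would map into the destination (print or display) gamut.
print(spec.J, spec.C, spec.h)
```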
Cheers,
Bart