In addition to the known problems of higher dimensionality, the sparsity of data points, which grows with dimension, also starts causing problems. Any clustering operation needs to come with a metric, a notion of distance. However, L_p distances for p > 2 start to degenerate at dimensionalities as low as around 10. Even L_1 and L_2 (Euclidean), and minimum mean square error as a variant, have problems. Control engineers have known this fact for a long time. Database people have come to realize that nearest-neighbor index searches in relatively high dimensions degenerate in performance to a brute-force search over all items!
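A small numerical sketch makes the degeneration concrete: for random points in a unit hypercube, the relative contrast between the farthest and nearest distances shrinks as dimensionality grows, so "nearest" stops meaning much. The point counts and seed below are arbitrary choices for illustration, not from any particular dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(dim, n_points=1000):
    """Relative distance contrast (d_max - d_min) / d_min for random points."""
    pts = rng.random((n_points, dim))      # uniform points in the unit hypercube
    d = np.linalg.norm(pts, axis=1)        # L_2 distances from the origin
    return (d.max() - d.min()) / d.min()

# Contrast collapses as dimension grows: distances concentrate around a
# common value, which is what breaks nearest-neighbor index searches.
for dim in (2, 10, 100, 1000):
    print(dim, round(contrast(dim), 3))
```

In low dimensions the nearest point is dramatically closer than the farthest; by a thousand dimensions all pairwise distances are nearly the same, which is exactly why an index search degenerates toward brute force.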
However, that does not mean the sky has fallen. With the right structure, higher-dimensional data can still be worked with. Among other things, dimensionality reduction is always an option; then again, starting directly from 3D color data is itself a form of dimensionality reduction that is already in place.
The bottom line is that, as far as statistical reasoning goes, the problem statement is simple: here is Canon color data and there is Nikon color data; find a transformation that converts one to the other. We may not need to bother with the frequency response of the CFA; the problem can be treated as agnostic to it. But, as mentioned before, as the dimensionality of the points increases, finding that transformation becomes more elusive, as the notion of distance, the sparsity of the space, and the required number of training samples all start causing problems.
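In the simplest reading of that problem statement, the Canon-to-Nikon mapping could be modeled as a single linear transform in 3D color space and fit by least squares. The sketch below is purely illustrative: the "true" matrix and the synthetic RGB samples are made up, standing in for paired measurements of the same scene from both cameras.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired color samples: each row is one patch seen by both cameras.
canon = rng.random((500, 3))                      # stand-in Canon RGB samples
true_M = np.array([[0.90, 0.10, 0.00],            # made-up "ground truth"
                   [0.05, 0.85, 0.10],            # Canon -> Nikon transform
                   [0.00, 0.10, 0.90]])
nikon = canon @ true_M.T                          # corresponding Nikon samples

# Least squares solves canon @ X = nikon for the 3x3 matrix X, with no model
# of the CFA at all -- the transform is inferred purely from paired data.
X, *_ = np.linalg.lstsq(canon, nikon, rcond=None)
print(np.round(X.T, 3))                           # recovers true_M here
```

With clean, noiseless 3D data the fit is exact; the article's point is that this same recipe becomes unreliable as dimensionality grows, when distances degenerate and the required number of training pairs explodes.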