Even though a matrix may seem relatively crude ...
The apparent simplicity of a linear model such as a matrix may turn out to be more useful in higher dimensions (and Emil wants 30 ;-)) than in 3D color, because of the possible correlations among the color data dimensions. Linearity can reduce certain requirements on training samples from exponential to polynomial. For example, for 3D data such as RGB, if the model is taken to be a linear combination of R, G, B, R^2, G^2, B^2, RG, GB, RB, R^3, etc., the problem is still linear because it is linear in the coefficients; and if powers and cross terms such as R^2 and RG are dropped, the sample requirement may further reduce to linear in the dimensionality. Higher-dimensional data may well be distributed with a lower intrinsic dimensionality, in which case linear models start to have more appeal.
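A quick sketch of the "linear in coefficients" point, using made-up synthetic data (the feature set and sample count are illustrative, not from any real calibration workflow): the model is nonlinear in R, G, B, yet ordinary least squares still applies because the unknowns enter linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((100, 3))          # 100 hypothetical sample colors in [0, 1]

def poly_features(rgb):
    """Expand [R, G, B] into [1, R, G, B, R^2, G^2, B^2, RG, GB, RB]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, g * b, r * b], axis=1)

X = poly_features(rgb)              # 10 features per sample
true_w = rng.standard_normal(10)    # synthetic "ground truth" coefficients
y = X @ true_w                      # one output channel of the model

# Despite the squared and cross terms, this is a linear least-squares
# problem in the coefficients, solvable in closed form.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(w, true_w))       # coefficients recovered exactly
```

Note the sample budget: the quadratic expansion of 3 channels needs only 10 coefficients, polynomial rather than exponential in the input dimensionality.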
(In many domains the SNR gain scales as N/p, where N is the number of samples and p the number of parameters to be estimated.)
As for LUTs, which might be embedded in profiles: what is the complexity of constructing a 30-dimensional LUT? Note that the partitions of the LUT (hypercubes, if it has a regular structure) may not be of equal volume, so as to better cover the data correlation, intrinsic dimensionality, and sparsity of the 30-dimensional color space.
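A back-of-the-envelope check on why the regular-grid case is hopeless: with n sample points per axis, a d-dimensional LUT has n**d nodes. The grid sizes below are illustrative choices, not taken from any real profile format.

```python
def lut_nodes(points_per_axis: int, dims: int) -> int:
    """Node count of a regular grid LUT: exponential in dimensionality."""
    return points_per_axis ** dims

print(lut_nodes(17, 3))   # a typical 3D LUT grid: 4913 nodes
print(lut_nodes(5, 30))   # even a very coarse 30-D grid: 5**30, ~9.3e20 nodes
```

This is exactly why non-uniform partitions that follow the data's intrinsic dimensionality and sparsity become attractive: a dense regular grid in 30 dimensions is unbuildable, let alone storable in a profile.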
All of this discussion of higher-dimensional color is not academic hair-splitting. The problem is real: hyper-/multi-spectral imaging is beginning to show its power in fields such as medical imaging, material inspection, and remote sensing, and will spill over into digital cinematography and photography as well. What are companies such as Adobe and Apple doing to prepare their products for such color data?