Doug,
I expect that the behavior you describe is related to the underlying fitting algorithms.
Argyll mainly relies on regularized splines to fit the measurements to the reference points. Regularized splines attempt to meld the strengths of thin plate splines and thin membrane splines. A key point is that the fit does not require the reference points to be distributed in any particular manner. Having closer spacing of data points at the edges of the gamut space than in the center helps, but it is not essential.
Thin plate splines [TPS] model slices through the data as the deformation of a thin sheet of metal. An increasing tension - or bending penalty - is imposed as points move away from the plane surface. The penalty is zero if the fitting function is affine, but grows rapidly as the surface deforms. The drawback to TPS is huge overshoots in the case of noisy data - these appear visually as discontinuities in what should be relatively smooth output gradients.

Thin membrane splines [TMS] impose a roughness penalty based on the bending energy of a thin membrane, typically a quadratic potential function, which fits well but loses rotational invariance. This has the effect of increased error at sharp discontinuities in the measured data, but a TMS fit does not over- or under-shoot the way a TPS fit does.
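To make the TPS behavior concrete, here is a minimal Python/SciPy sketch (not Argyll's code; the point coordinates and noise level are made up for illustration). The thin_plate_spline kernel carries the bending-energy penalty described above, and a nonzero smoothing term is what keeps the fit from chasing noise and overshooting:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)

    # Stand-in for scattered patch measurements: 2-D device coordinates
    # mapped to one measured quantity, with a little noise added.
    pts = rng.uniform(0.0, 1.0, size=(200, 2))
    vals = np.sin(3 * pts[:, 0]) * pts[:, 1] + rng.normal(0, 0.02, 200)

    # kernel='thin_plate_spline' imposes the classic TPS bending-energy
    # penalty; smoothing > 0 relaxes exact interpolation, which tames the
    # overshoot on noisy data at the cost of some fidelity.
    tps_exact = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
    tps_smooth = RBFInterpolator(pts, vals, kernel='thin_plate_spline',
                                 smoothing=1e-3)

    grid = np.mgrid[0:1:50j, 0:1:50j].reshape(2, -1).T
    surface = tps_smooth(grid)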
Regularized splines have a tension/penalty function that tunes the weights of all orders in the smoothness seminorm. This has the effect of switching the behavior of the fitted spline between thin plate and thin membrane depending on the relation between the data and the reference. I have not dug into the guts of Argyll's implementation to see whether the penalty basis functions depend only on the distance from the plane or are calculated anisotropically; i.e., if the data behave differently depending on direction, does the penalty function apply scaling coefficients and a rotation to the data to weight the tension parameter directionally? If a LuLa reader has some hours to spare, a perusal of the Argyll code could answer this, or perhaps Graeme could chime in.
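For readers who want to see what "weighting the orders of the seminorm" means in practice, here is a small 1-D penalized least-squares sketch (my own toy illustration, not Argyll's implementation):

    import numpy as np

    def regularized_fit(y, lam1, lam2):
        # Fit a smooth curve to samples y on a regular grid by penalized
        # least squares.  lam1 weights a first-order (membrane-like)
        # penalty, lam2 a second-order (plate-like) penalty; shifting the
        # weight between them moves the fit between the two spline families.
        n = len(y)
        I = np.eye(n)
        D1 = np.diff(I, n=1, axis=0)   # first-difference operator
        D2 = np.diff(I, n=2, axis=0)   # second-difference operator
        A = I + lam1 * (D1.T @ D1) + lam2 * (D2.T @ D2)
        return np.linalg.solve(A, y)

With lam2 = 0 the fit behaves like a membrane smoother, with lam1 = 0 like a discrete plate smoother, and intermediate weights give the blended behavior described above.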
Turning to i1Profiler, all we can go by is its behavior, as there is no source code available. Before the GMB/X-Rite merger, MonacoProfiler required regularly-spaced reference data for all RGB profiles. i1Profiler behaves largely as a combination of Monaco's basic profile engine with ProfileMaker's ability to handle arbitrary data. Looking at the profile data produced from regularly spaced input points, it certainly appears that i1Profiler's main profiling engine uses B-splines (or similar) with some form of penalized smoothing. Splines of this type require regularly gridded data - i.e. evenly spaced steps in the reference data. The smoothing parameter controls the trade-off between fidelity to the input data and roughness of the fitted profile. The Euler-Lagrange equation shows that a penalty on the x-th derivative yields a solution of degree 2x - 1; for the usual second-derivative penalty (x = 2), this implies a cubic spline.
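A quick way to see that cubic-spline result is SciPy's penalized smoothing spline (SciPy 1.10 or later; the gamma-like test curve is a made-up example, not i1Profiler data):

    import numpy as np
    from scipy.interpolate import make_smoothing_spline  # SciPy >= 1.10

    # Evenly spaced reference steps with noisy measurements attached.
    x = np.linspace(0.0, 1.0, 21)
    y = x ** (1 / 2.2) + np.random.default_rng(1).normal(0, 0.01, x.size)

    # make_smoothing_spline minimizes
    #   sum (y_i - f(x_i))^2 + lam * integral of f''(t)^2 dt,
    # whose Euler-Lagrange solution is a natural cubic spline
    # (degree 2*2 - 1 = 3).  lam trades fidelity against roughness.
    fit = make_smoothing_spline(x, y, lam=1e-4)
    dense = fit(np.linspace(0.0, 1.0, 256))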
As the input data deviate from regular spacing, nonparametric function estimation becomes less accurate. One can still use B-spline fitting with narrow basis/penalty functions around small grids of input points, and to keep the computations tractable the basis functions are usually equally spaced even if the input/reference data are not. Nevertheless, errors increase as the input moves off a regular grid. In years past we used this approach for basic profile creation: we calculated a penalty function for the regularly gridded data from a Fourier expansion of the measurement data, and the penalty was then applied in the frequency domain rather than to the data directly.
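As a rough sketch of that frequency-domain trick (the quartic penalty shape here is my assumption for illustration, not the actual product code):

    import numpy as np

    def fourier_smooth(samples, lam=1e-2):
        # Smooth measurements taken on a regular grid by attenuating high
        # frequencies of their Fourier expansion.  A mode at frequency f is
        # scaled by 1 / (1 + lam * (2*pi*f)^4), which acts like a
        # second-derivative roughness penalty applied in the frequency
        # domain rather than to the data directly.
        n = samples.size
        F = np.fft.rfft(samples)
        freq = np.fft.rfftfreq(n)
        weight = 1.0 / (1.0 + lam * (2 * np.pi * freq) ** 4)
        return np.fft.irfft(F * weight, n=n)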
Doug: Have you checked whether the default i1Profiler behavior of generating 16-bit reference data but only 8-bit targets affects error rates? That can shift the reference values by +/- 0.5 while the output stays at 8-bit resolution. I can't fathom why i1Profiler behaves this way unless it internally rounds the 16-bit data to 8-bit. I recall that older versions (1.x) of i1P truncated 16-bit values rather than rounding them.
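For what it's worth, the truncation-versus-rounding difference is easy to quantify (the 257 divisor is the usual 16-to-8-bit scaling, 65535/255; whether i1Profiler uses exactly this mapping is an assumption on my part):

    import numpy as np

    ref16 = np.arange(65536)          # all 16-bit codes
    true8 = ref16 / 257.0             # ideal fractional 8-bit value
    trunc8 = ref16 >> 8               # truncation, as older i1P may have done
    round8 = np.round(true8)          # rounding to the nearest 8-bit code

    print(np.max(np.abs(true8 - trunc8)))   # approaches a full 8-bit step
    print(np.max(np.abs(true8 - round8)))   # stays within half a step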
A hint for aspiring profile software creators: our experience is that barycentric rational interpolation with no poles provides excellent fitting for arbitrary input data. This allows targeting areas where printer output is discontinuous without adversely affecting the fit in more stable regions.
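For the curious, here is a bare-bones 1-D sketch along the lines of the Floater-Hormann construction (my own toy code, not anything from a shipping profiler, and a real profiler of course works in more dimensions than this):

    import numpy as np

    def fh_weights(x, d=3):
        # Barycentric weights for the Floater-Hormann rational interpolant,
        # which has no poles on the real line for any node distribution.
        n = len(x) - 1
        w = np.zeros(n + 1)
        for k in range(n + 1):
            s = 0.0
            # sum over the local polynomial blends that involve node k
            for i in range(max(0, k - d), min(k, n - d) + 1):
                prod = 1.0
                for j in range(i, i + d + 1):
                    if j != k:
                        prod /= (x[k] - x[j])
                s += (-1) ** i * prod
            w[k] = s
        return w

    def fh_eval(x, y, w, xq):
        # Evaluate the barycentric rational interpolant at query points xq.
        xq = np.atleast_1d(np.asarray(xq, dtype=float))
        out = np.empty_like(xq)
        for m, xv in enumerate(xq):
            diff = xv - x
            hit = np.flatnonzero(diff == 0.0)
            if hit.size:                 # query lands exactly on a node
                out[m] = y[hit[0]]
            else:
                t = w / diff
                out[m] = np.dot(t, y) / np.sum(t)
        return out

    # Example: irregularly spaced nodes, denser where the response changes fast.
    xn = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.7, 1.0])
    yn = xn ** (1 / 2.2)
    w = fh_weights(xn, d=3)
    curve = fh_eval(xn, yn, w, np.linspace(0.0, 1.0, 101))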