I've done a little bit of glare correction coding, and now a little bit of review of test shots. I'm not getting satisfactory results yet; it breaks more than it fixes.
In a glossy homemade target shot indoors in a dark room, with one StdA lamp outside the family of angles, I get 5.57 stops between the brightest and darkest color with the SSF virtual process (i.e. glare-free) and 5.41 stops in the real shot, a 0.16 stop difference. (The darkest patch is not black but a super-deep purple; there were no blacks in that target.) The DR difference is not too big, but the darkest patch is 1/3 stop too bright compared to what it should be. Measurement error in the instrument (if there's glare there too) may make the measured difference smaller than it actually is.
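For reference, the stop figures are just log2 ratios of linear patch values; a tiny sketch with made-up numbers, showing how a glare-lifted darkest patch shrinks the measured range:

```python
import numpy as np

def stops(bright, dark):
    # range in stops between two linear patch responses
    return float(np.log2(bright / dark))

# made-up values: if only the darkest patch is lifted 1/3 stop by glare,
# the measured range shrinks by that same 1/3 stop
print(stops(1.0, 0.02) - stops(1.0, 0.02 * 2 ** (1 / 3)))  # ~0.33
```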
Anyway, from the initial glare correction code tests it seems like the glare error may be too small compared to the uncertainties in the glare correction model.
Currently the code searches for neutral patches in the target by looking at how flat their spectra are, then makes a reference value from the provided illuminant spectrum and the green SSF (a "typical" SSF if none is provided), and compares that with the actual value. This serves as input to a spline contrast correction curve; the intermediate points are estimated with a thin-plate spline (TPS). Illuminant mismatch, SSF mismatch, deviation from spectral flatness, and distance to the nearest control point all contribute to the error. However, I suspect the simplistic model of glare, i.e. just a contrast spline curve, contributes the most. I'm not satisfied with that model, but I haven't come up with anything better.
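To make that concrete, here's a minimal sketch of the approach (not the actual code; the patch format, names, and flatness threshold are my own assumptions): pick spectrally flat patches as neutrals, build a glare-free reference from illuminant × green SSF × reflectance, pair it with the measured value, and fit a TPS curve through the pairs.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def spectral_flatness(refl):
    # coefficient of variation over wavelength; smaller = flatter = more neutral
    return np.std(refl) / np.mean(refl)

def glare_correction_curve(patches, illuminant, green_ssf, flatness_limit=0.05):
    """patches: list of dicts with 'reflectance' (sampled on the same uniform
    wavelength grid as illuminant/green_ssf) and 'measured_green' (linear
    camera value from the real, glare-affected shot)."""
    measured, reference = [], []
    for p in patches:
        if spectral_flatness(p["reflectance"]) > flatness_limit:
            continue  # not flat enough to be treated as a neutral patch
        # glare-free reference response: illuminant * green SSF * reflectance,
        # summed over the (uniform) wavelength grid
        ref = float(np.sum(illuminant * green_ssf * p["reflectance"]))
        measured.append(p["measured_green"])
        reference.append(ref)
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # scale the reference so the brightest neutral maps to itself; only the
    # relative contrast (glare lifting the shadows) is corrected
    reference *= measured.max() / reference.max()
    # TPS through the (measured, reference) control points; intermediate
    # points of the contrast correction curve are interpolated from this
    return RBFInterpolator(measured[:, None], reference, kernel="thin_plate_spline")
```

Applied to a measured value x, `curve(np.array([[x]]))` gives the estimated glare-free value. Everything between the neutral control points, and every colored patch, relies entirely on this one-dimensional contrast curve, which is where I suspect the model is too simplistic.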
I've not given up yet though, I have a few things left to test.
It does look tempting, though, to just skip glare correction and instead relax the lightness correction in the ultra-saturated violets and purples, as that's where the errors seem to be located.