Lars is talking about encoding. I don't think that is directly applicable to monitor rendering (indirectly it is, but the conclusion could be different).
I have used Coloreyes to calibrate an older-model iMac 24" to 90 cd/m2. I have run it using 2.2 and L* and do see a shift between the two gammas. Could someone please explain the differences between the two that would account for what I am seeing, and offer some recommendation as to which would be preferable? I work in PS CS4 using ProPhoto 16 bit.
The idea behind 2.2 gamma is to have a non-color-managed fallback for an sRGB workflow. There are no other benefits, except that manufacturers may hardwire their monitors to be close to 2.2.
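To make the sRGB connection concrete, here is a minimal Python sketch (my own illustration, not from the thread) comparing a pure 2.2 power law with the piecewise sRGB encoding it approximates; outside the deep shadows the two curves track each other closely, which is why an uncalibrated gamma-2.2 monitor is "close enough" for sRGB material.

```python
# Minimal sketch: pure gamma 2.2 vs. the official piecewise sRGB encoding.

def gamma22_encode(linear: float) -> float:
    """Encode linear light with a simple 2.2 power law."""
    return linear ** (1.0 / 2.2)

def srgb_encode(linear: float) -> float:
    """Official sRGB encoding: a linear toe plus a 2.4-power segment."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1.0 / 2.4) - 0.055

if __name__ == "__main__":
    for x in (0.001, 0.01, 0.1, 0.5, 1.0):
        print(f"linear={x:6.3f}  gamma2.2={gamma22_encode(x):.4f}  sRGB={srgb_encode(x):.4f}")
```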
The idea behind L* is to have as linear a color transform as possible when rendering to a monitor. That is to say, since all color transforms happen in a single 3D LUT, commonly with tetrahedral interpolation, calibrating your monitor to L* gives a more linear relationship between the PCS (CIE Lab) and the device (your monitor). If L* were the only nonlinear component in the transformation, then matching your monitor to L* would allow "lossless" color rendering to the monitor, i.e. no bit resolution would be sacrificed during the rendering. That means it would reduce banding! Sounds exciting?! Well, the problem is that your monitor has its own LUT, so if the native gamma is 2.7 (quite a common case) any deviation from the native gamma results in losing resolution bits.
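The bit-loss point can be illustrated with a small sketch (again my own, not from the post; the 2.7 native gamma and the 8-bit video-card LUT are assumptions): it builds the correction curve that bends a native-gamma-2.7 panel toward a gamma 2.2 or L* target and counts how many of the 256 input codes still land on distinct panel codes. Anything below 256 means collapsed levels, i.e. potential banding.

```python
# Sketch: how many of 256 codes survive when an 8-bit LUT retargets a
# native-gamma-2.7 panel to a gamma 2.2 or L* tone response curve.

def gamma22_decode(signal: float) -> float:
    """Relative luminance of a gamma-2.2-encoded signal."""
    return signal ** 2.2

def lstar_decode(signal: float) -> float:
    """Relative luminance of an L*-encoded signal (inverse CIE lightness)."""
    f = (100.0 * signal + 16.0) / 116.0
    if f > 6.0 / 29.0:
        return f ** 3
    return 3.0 * (6.0 / 29.0) ** 2 * (f - 4.0 / 29.0)

def surviving_levels(decode, native_gamma=2.7, bits=8):
    """Count distinct entries in the calibration LUT for a given target TRC."""
    n = 2 ** bits
    codes = set()
    for v in range(n):
        y = decode(v / (n - 1))                                # luminance the target asks for
        codes.add(round((n - 1) * y ** (1.0 / native_gamma)))  # panel code that delivers it
    return len(codes)

if __name__ == "__main__":
    print("gamma 2.2 target:", surviving_levels(gamma22_decode), "of 256 levels")
    print("L* target:       ", surviving_levels(lstar_decode), "of 256 levels")
```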
Is it beneficial? Theoretically, it might be, if you have a higher-bit monitor that allows some room for TRC adjustment without losing color resolution, or if the monitor is a CRT driven through a VGA cable.
In general, however, the answer will depend on many factors: the dimensions of the color-transform LUT, the interpolation used by the CMM, the monitor's native gamma, and the LUT bit depth and size.
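On the bit-depth point specifically, the same kind of toy sketch (same assumptions: 8-bit input, native gamma 2.7, a simple power-law target) suggests why a higher-precision LUT leaves room for TRC adjustment: on a finer output grid the corrected codes stop colliding.

```python
# Sketch: the same gamma-2.2 correction applied through LUTs of different precision.

def levels_kept(target_gamma=2.2, native_gamma=2.7, in_bits=8, lut_bits=8):
    """Count distinct LUT entries for a power-law target TRC on a native-gamma panel."""
    n_in, n_lut = 2 ** in_bits, 2 ** lut_bits
    return len({
        round((n_lut - 1) * (v / (n_in - 1)) ** (target_gamma / native_gamma))
        for v in range(n_in)
    })

if __name__ == "__main__":
    for lut_bits in (8, 10, 12):
        print(f"{lut_bits}-bit LUT:", levels_kept(lut_bits=lut_bits), "of 256 levels kept")
```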
I would think that in the real world an optimal TRC lies somewhere between the native gamma and L*.
BTW, I don't think ProPhoto is a good choice, at least not in an ICC-based workflow.