What Andrew says.
Adding to point (2), the main reason for a non-linear TRC in a profile used for storing encoded images is (as I understand it) to match human visual perceptual response: we are sensitive to a relative change in luminance, rather than an absolute change. The result is that, for example, in an 8-bit linear encoded system a one-step change from code 10 to 11 is a much larger relative jump (and so much more visible) than a change from 150 to 151. In such a system the step size can be visible as banding at the low end, so it makes sense to use a non-linear TRC that reduces the luminance step size in the shadows. Hence nearly all 8-bit encoding systems (e.g. JPEG) use a non-linear TRC. It matters much less with 16 bits, where the low-end step size with linear encoding is generally too small to be visible.
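A rough sketch of that argument in code (my own illustration, not from the thread): at the same target luminance, compare the relative luminance jump between adjacent 8-bit codes under linear encoding versus a simple gamma-2.2 power-law encoding. The 2.2 exponent and the Weber-style "relative step" measure are my assumptions for the example.

```python
GAMMA = 2.2  # assumed encoding exponent, typical of legacy 8-bit encodings

def relative_step_linear(lum):
    """Relative luminance jump to the next code, 8-bit linear encoding."""
    code = round(255 * lum)          # nearest linear code for this luminance
    return (code + 1) / code - 1     # = 1/code: huge in the shadows

def relative_step_gamma(lum, g=GAMMA):
    """Relative luminance jump to the next code, 8-bit gamma encoding."""
    code = round(255 * lum ** (1 / g))       # nearest gamma-encoded code
    lo = (code / 255) ** g                   # decoded luminance at this code
    hi = ((code + 1) / 255) ** g             # decoded luminance one code up
    return hi / lo - 1

# Dark tone (luminance 0.04): linear encoding jumps ~10% per code,
# gamma encoding under 4% -- well below vs. around the banding threshold.
print(f"linear: {relative_step_linear(0.04):.1%}")
print(f"gamma:  {relative_step_gamma(0.04):.1%}")
```

The gamma curve spends more of the 256 codes on the dark end, where our eyes notice relative steps, at the cost of slightly coarser steps in the highlights.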
Any non-linear TRC in an encoded image needs to be reversed before display, obviously, but colour management does that.
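To make the reversal concrete, here is a minimal sketch (again my own, assuming a plain gamma-2.2 TRC rather than any particular profile's curve): encode linear light into 8-bit codes, then invert the TRC as a colour-managed pipeline would, recovering the original values up to quantisation error.

```python
GAMMA = 2.2  # assumed TRC exponent for the illustration

def encode(linear):
    """Linear light (0..1) -> 8-bit gamma-encoded code value."""
    return round(255 * linear ** (1 / GAMMA))

def decode(code):
    """8-bit code -> linear light: the 'reversal' colour management applies."""
    return (code / 255) ** GAMMA

# Round trip: middle grey (18% linear reflectance) survives to ~4 decimal places.
for lum in (0.01, 0.18, 0.50, 0.90):
    print(f"{lum:.2f} -> code {encode(lum):3d} -> {decode(encode(lum)):.4f}")
```

The residual error is just 8-bit quantisation; the curve itself cancels exactly.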
In the bad old days, and before colour management, CRT displays had a non-linear response (roughly a power law, output proportional to input raised to about 2.2), so a compensating power curve with the reciprocal exponent was applied before display. In a colour-managed system, any non-linear response of an output device is described in the TRC of the device profile, so the output is corrected automatically by the colour-managed software.
But I reserve the right to be wrong.