I'm not sure what "myth" the OP refers to exactly, but some cameras package 14, 12, or some other number of bits fewer than 16 into a 16-bit data structure within the file. Using anything other than 8- or 16-bit data structures in the file complicates encoding (writing) and later decoding (reading) -- sticking to 16 bits is not a marketing game played by manufacturers; it lowers engineering cost and improves encode/decode performance.
Some manufacturers use a limited range (0 to 2^n - 1, with n < 16), while others scale their values to the full range of a 16-bit data structure. In terms of information, both procedures are legitimate, and no data is lost in either method of storing less-than-16-bit data in a 16-bit data structure.
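To make the two conventions concrete, here is a minimal sketch, assuming a hypothetical 14-bit ADC sample stored in a 16-bit word (the function names and the choice of 14 bits are illustrative, not any particular manufacturer's format):

```python
def store_limited_range(sample_14bit: int) -> int:
    """Limited-range convention: keep the raw 14-bit code as-is.
    Values only occupy 0..16383 of the 16-bit word."""
    assert 0 <= sample_14bit < 2**14
    return sample_14bit

def store_full_scale(sample_14bit: int) -> int:
    """Full-scale convention: shift left by 2 so codes span the
    full 16-bit range (0..65532, in steps of 4)."""
    assert 0 <= sample_14bit < 2**14
    return sample_14bit << 2

def recover_from_full_scale(word_16bit: int) -> int:
    """Undo the scaling -- the shift is exactly reversible,
    so no information is lost by either convention."""
    return word_16bit >> 2

print(store_limited_range(16383))      # 16383
print(store_full_scale(16383))         # 65532
print(recover_from_full_scale(65532))  # 16383
```

Either way the same 14 bits of information come back out; the two files simply disagree about where in the 16-bit word the data sits.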
But herein lies the confusion: how do you differentiate a 16-bit data structure containing fewer than 16 bits of information from the analog-to-digital converter (ADC) from a 16-bit data structure containing a full 16 bits of ADC data?
I'm with Doug--a simple phrase like "true 16-bit" is a reasonable approach for getting the point across quickly. If the photographer is curious, s/he can ask for more information ("what do you mean by 'true', Doug?") and get a complete answer. I see nothing wrong or dishonest with this whatsoever. On the contrary, it tells me that the person I'm talking to just might understand how this stuff works better than the average salesperson, and that is a rare thing.
As for whether the 16-bit ADC output actually contains 16 bits of true information, that is a separate question -- clearly it depends on the hardware implementation. There are many sources of noise (I won't re-open that discussion), many different designs for analog-stage processing, different technologies (CMOS and CCD being the primary ones), and many applications (some more demanding than others). But even if "true 16-bit" data contains noise which effectively lowers the fidelity to fewer than 16 bits, the same can be said of 14- or 12-bit data; in general, it will contain less than 14 or 12 bits of signal, respectively, for exactly the same reasons.
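One common back-of-the-envelope way to put a number on "effective bits" is to express the sensor's dynamic range in bits: log2(full-well capacity / read noise). A sketch, with the caveat that the sensor figures below are hypothetical round numbers, not measurements of any real camera:

```python
import math

def effective_bits(full_well_e: float, read_noise_e: float) -> float:
    """Dynamic range expressed in bits: log2(full well / read noise).
    This is a rough single-number summary, ignoring shot noise,
    quantization, and analog-stage details."""
    return math.log2(full_well_e / read_noise_e)

# Hypothetical sensor: 60,000 e- full well, 3 e- read noise.
print(round(effective_bits(60000, 3), 1))  # ~14.3
```

On numbers like these, even a "true 16-bit" ADC would be digitizing roughly 14 bits of usable signal -- which is exactly why the noise argument cuts against 14- and 12-bit pipelines just as much as against 16-bit ones.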
So for all intents and purposes, "true 16-bit" data should contain more information than 14- or 12-bit data, given comparable hardware implementations. Whether that difference is visible in your work will depend on your hardware and your application.