NEF, DNG and CR2 files, along with lossless JPEG, all use Huffman encoding to reduce file size. This is a very common encoding technique and uses the following algorithm (in simplified form):
1/Take the difference between the value of this pixel and the previous pixel (for the first pixel, take the difference between this pixel and the median value).
2/This then gives you a string of difference values for the row. Now, for a typical image most of these values are going to be at or close to zero (i.e. the distribution of values is strongly skewed towards small values, unless the image contains a lot of high-frequency detail - see note below). The reason is that most pixels are similar in value to the preceding one (flat or slowly changing tones - NB each channel is processed separately).
3/Using the statistical distribution of the data we can create a code table that outputs codes of various lengths, such that common input values are represented by shorter codes. By encoding fixed 16-bit values with a variable-length code, with the shorter codes assigned to the more common input values, the average number of bits required to represent a pixel is reduced from 16 bits to, say, an average of 8-10 bits per pixel. Hence there is a 40-50% reduction in file size.
4/Once on the computer the process is reversed and the original data is extracted.
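Step 1 (and its reversal in step 4) can be sketched as follows. This is only an illustration in Python, not any camera's actual code; the median of 2048 assumes hypothetical 12-bit data.

```python
def diff_encode(row, median=2048):
    """Step 1: replace each pixel by its difference from the previous
    pixel; the first pixel is differenced against the median value."""
    out, prev = [], median
    for px in row:
        out.append(px - prev)
        prev = px
    return out

def diff_decode(diffs, median=2048):
    """Step 4 (in part): a running sum restores the original pixels."""
    row, prev = [], median
    for d in diffs:
        prev += d
        row.append(prev)
    return row

row = [2050, 2051, 2051, 2049, 2300]   # a short, slowly changing scanline
diffs = diff_encode(row)               # mostly small values near zero
assert diff_decode(diffs) == row       # round-trips exactly: lossless
```

Note how the slowly changing tones produce differences of 2, 1, 0, -2 before the jump to the brighter pixel; it is this skew towards small values that the Huffman table exploits.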
The above gives the gist of the encoding process, though in reality it is a little (though not much) more complex. The key element is choosing or calculating the encoding table. For in-camera applications this is likely to be a fixed table (i.e. the same for each image) based upon a statistical sample of images. However, more sophisticated implementations can calculate an optimal table per image (provided you have the processing power and time to undertake the statistical analysis).
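Building the per-image table amounts to the classic Huffman construction: repeatedly merge the two least-frequent entries. A minimal sketch (this is my own illustration of the standard algorithm, not any camera's implementation):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table from the observed frequencies of the
    symbols (the 'optimal table per image' case; an in-camera encoder
    would instead ship a fixed, precomputed table)."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: one symbol still needs one bit
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)  # two least frequent subtrees...
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        tiebreak += 1
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))  # ...get merged
    return heap[0][2]

diffs = [0, 0, 0, 1, 0, -1, 0, 2, 0, 0]   # skewed towards zero, as in step 2
table = huffman_code(diffs)
# The common value 0 gets the shortest code; rare values get longer ones.
assert len(table[0]) < len(table[2])
```

With this toy input the ten values cost 14 bits in total (1.4 bits per value on average), which is exactly the "shorter codes for common values" trade described above.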
The note referred to above: Huffman encoding is also the second part of the lossy-JPEG encoding process. The first part strips out high-frequency information from the image, which makes the Huffman encoding very efficient and produces much more compact files.
To reiterate, in case it isn't clear: Huffman encoding is a LOSSLESS encoding process, so no information is lost by using this algorithm.
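The losslessness is easy to demonstrate: because Huffman codes are prefix-free, a bit string decodes back to exactly the values that went in. A sketch using a small hypothetical code table (the table values are made up for illustration):

```python
def encode(values, table):
    """Concatenate the variable-length code for each value."""
    return "".join(table[v] for v in values)

def decode(bits, table):
    """Walk the bit string, emitting a value whenever the buffered bits
    match a code. This works because Huffman codes are prefix-free: no
    code is the start of another, so matches are unambiguous."""
    rev = {c: v for v, c in table.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in rev:
            out.append(rev[buf])
            buf = ""
    return out

table = {0: "0", 1: "10", -1: "110", 2: "111"}  # hypothetical code table
data = [0, 0, 1, 0, -1, 2, 0]                   # difference values
bits = encode(data, table)       # 12 bits, versus 7 x 16 = 112 at fixed width
assert decode(bits, table) == data   # bit-exact round trip: nothing lost
```

The assertion at the end is the whole point: the decoded values are identical to the input, bit for bit.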
It sounds as if Nikon marketing really doesn't have a clue: since there is NO effect on image quality, the 'almost' is misleading and totally redundant.
(NB The reason I know all this is that I have written my own raw converter for research purposes and had to work out the above decoding process.)