Thanks for the file, it seems super at first glance. I'm curious about one little thing, though: at the top of the sheet it says ProPhoto RGB 16-bit data, so why is the file profile Adobe RGB?
The document absolutely isn't in Adobe RGB (1998). I downloaded it again; it's in ProPhoto RGB. It sounds like your Photoshop color settings are set to automatically convert and your RGB working space is set to Adobe RGB (1998), or you're converting it somewhere else. But no, it's in ProPhoto RGB.
Here is an answer that everyone can verify, probably even with a scanner, using the CIE76 Delta E metric. Granted, this metric is not uniform across the spectrum (i.e., we can see some color differences better than others), and I tested only a few RGB triplets, changing just one channel by one 8-bit level, say (100, 100, 100) compared to (100, 101, 100). I converted each of those to XYZ and then to L*a*b*, then applied both CIE76 Delta E and CIEDE2000, and the results are under the just-discernible threshold according to my reference. In other words, we can't see it.
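For anyone who wants to reproduce the arithmetic, here is a minimal sketch of that pipeline. It assumes sRGB encoding and the D65 white point (the file under discussion is ProPhoto, so the exact number will differ there, but the order of magnitude is the same), and it computes only CIE76; CIEDE2000 is considerably longer to implement.

```python
import math

def srgb_to_linear(c8):
    """Decode an 8-bit sRGB channel value to linear light (0.0..1.0)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_xyz(rgb):
    """Linear sRGB -> CIE XYZ (D65)."""
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def xyz_to_lab(xyz):
    """CIE XYZ -> CIE L*a*b*, D65 reference white."""
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(xyz[0] / xn), f(xyz[1] / yn), f(xyz[2] / zn)
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def delta_e76(lab1, lab2):
    """CIE76: plain Euclidean distance in L*a*b*."""
    return math.dist(lab1, lab2)

lab_a = xyz_to_lab(rgb_to_xyz((100, 100, 100)))
lab_b = xyz_to_lab(rgb_to_xyz((100, 101, 100)))
print(f"dE76 = {delta_e76(lab_a, lab_b):.2f}")  # well under the ~1.0 rule of thumb
```

Near mid-gray, a one-level change in one channel comes out under a dE of 1, which is the claim being made.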
You're getting yourself into a bit of a rabbit hole here. First, dE76 isn't an ideal formula for reporting small differences in color distance. And the scanner isn't the appropriate tool; a spectrophotometer is. Take a very high quality spectrophotometer like my iSis XL and measure the same print containing solid patches, and you'll NEVER see anything close to dE 0.00, due to noise inherent in the device. A fraction of one dE, sure, on a good unit.

A dE of 1 or less may or may not be visually perceptible; as a 'rule', we state that less than one isn't a visible difference, but again it depends on where in color space you examine the distance. That's why an average dE report with a good amount of sampling (hundreds or perhaps thousands of measurements), along with the max dE of any one patch, is useful here. If you measure 500 colors and 499 are all under a dE of 1, but one patch is dE 1.6, that's quite telling.

Lastly, consider that few targets for creating actual ICC profiles for print are high bit; most are 8-bit per color. Why? Because of the facts outlined already: few printers will use the extra precision of the numeric data (device values), and it's just not necessary. If the people making profiling software don't feel the need for the extra precision, why go there?
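The average-plus-max style of reporting described above can be sketched like this (the Lab measurement pairs here are made-up illustration data, not real patch readings):

```python
import math

# Hypothetical (reference, measured) L*a*b* pairs for each patch.
patch_pairs = [
    ((50.0, 0.0, 0.0), (50.3, 0.2, -0.1)),
    ((70.0, 10.0, 20.0), (70.1, 10.4, 20.2)),
    ((30.0, -5.0, 15.0), (31.2, -4.1, 16.0)),  # one outlier patch
]

def delta_e76(a, b):
    """CIE76: Euclidean distance in L*a*b*."""
    return math.dist(a, b)

des = [delta_e76(ref, meas) for ref, meas in patch_pairs]
avg_de = sum(des) / len(des)
max_de = max(des)
print(f"average dE76 = {avg_de:.2f}, max dE76 = {max_de:.2f}")
```

The point of reporting both numbers: a low average can hide one patch that is well over the visibility rule of thumb, and that single outlier is often the interesting part.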
Next, I can't fathom how you'd need to measure solid color patches this way after printing with 16-bit vs. 8-bit. Just output a print and look at it.
Think of this: suppose that on a given print you have, say, a 16-bit value just barely above the 8-bit level of 100, while the value of the pixel next to it is halfway between 101 and 102.
You're mixing up device values with measured colors and the distance between adjacent colors based on Lab and dE (and the various possible formulas). See:
http://digitaldog.net/files/ColorNumbersColorGamut.pdf

You can see how two sRGB values can have differing device values and be the same color, with tiny dE differences. Two differing values in 8-bit per color might and can produce a dE higher than 1; it depends on the color space, on the two sets of triplets, and on which channel differs by one value. But this is, again, a rabbit hole that isn't necessary for confirming whether or not 16-bit vs. 8-bit per color makes a difference visually, or even whether the driver did or didn't convert the data from high bit to 8-bits per color somewhere in the print path.
Can you output 16 bits of data to a printer? Well, yes you can. Can you see a difference doing so? I don't believe you can. Is this really even 16 bits of data, or instead 10, 12, or 14 bits? If you're using Photoshop, you're not even truly working in 16-bit!
The high-bit representation in Photoshop has always been "15 + 1" bits: values run from 0 to 32768, that is, the 32768 values representable with 15 bits of precision, plus one. Because that range requires 16 bits of storage, it is called "16 bit". It is not an arbitrary decision on how to display this data; it is an exact representation of the data Photoshop is actually using, just as 0-255 is displayed for 8-bit files.
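A quick sketch of why "15 + 1" still needs a 16th bit of storage (assuming the commonly documented 0..32768 range):

```python
PS_MAX = 32768  # 2**15: Photoshop's high-bit white point

def eight_to_ps(v8):
    """Map an 8-bit value (0..255) into Photoshop's 0..32768 range."""
    return round(v8 * PS_MAX / 255)

print(eight_to_ps(0), eight_to_ps(255))  # endpoints map to 0 and 32768
print(PS_MAX.bit_length())               # 16: the white-point value itself needs 16 bits
```

Fifteen bits cover 0..32767; the single extra value 32768 (the white point) is what forces the format into 16 bits of storage.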
High bit (what some are calling 16-bit, though it probably isn't that specific encoding) is about editing overhead. All you need is the best 8 bits per color sent to the output device; often, very often, that's all you can send to the hardware anyway. But you have a document you can use to test visually whether there's a difference when you print both ways, as I outlined. You don't have to measure anything, especially if you don't have the correct measuring device to do so: a high-accuracy automated spectrophotometer.