FWIW:
In 16-bit, using the example, ColorThink reports them as 144.1/255.0/240.2 and 144.2/255/240.2.
In 8-bit, using the example, ColorThink reports them as 144.0/255.0/241.0 for both.
Save both out as a color list for CT for its dE report. One is 0.06 dE, the other 0.24. That's using dE2000. As such, I think we have to agree they are the same color.
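To make the quantization effect concrete, here's a rough Python sketch of how two Lab values that differ at 16-bit precision can collapse to an identical 8-bit encoding. The Lab numbers are made up for illustration (not ColorThink's actual readings above), and it uses the simpler Euclidean ΔE76 rather than the dE2000 metric CT reports:

```python
import math

def lab_to_8bit(L, a, b):
    # ICC-style 8-bit Lab encoding: L* 0..100 -> 0..255, a*/b* offset by 128
    return (round(L * 255 / 100), round(a + 128), round(b + 128))

def lab_from_8bit(L8, a8, b8):
    return (L8 * 100 / 255, a8 - 128, b8 - 128)

def delta_e_76(lab1, lab2):
    # Plain Euclidean distance in Lab; ColorThink's report uses dE2000
    return math.dist(lab1, lab2)

# Hypothetical pair of colors that differ slightly at 16-bit precision:
lab_a = (56.50, 16.1, 12.2)
lab_b = (56.55, 16.0, 12.2)
print(delta_e_76(lab_a, lab_b))  # small but nonzero, roughly 0.11

# After an 8-bit round trip, the two become indistinguishable:
q_a = lab_from_8bit(*lab_to_8bit(*lab_a))
q_b = lab_from_8bit(*lab_to_8bit(*lab_b))
print(delta_e_76(q_a, q_b))  # 0.0 -- the difference is below 8-bit resolution
```

This mirrors the pattern in the ColorThink numbers: the 16-bit readings show a tiny residual difference, while the 8-bit encodings land on the same value.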
They are the same color only when considered as what I like to call "color for color's sake", rather than as "color in context" of a complex arrangement of colors and tones (i.e., a real photographic image, film or digital), and only when meeting the other important requirements of the CIELAB color model: a standardized illuminant, a defined viewing angle (i.e., the 2-degree or 10-degree observer), and presentation against a neutral gray surround. In a typical image of a scene that most people would want to photograph or otherwise record, the complex arrangement of tones and colors will conspire either to emphasize these two LAB-specified color values as different or to disguise them as the same. At least one example of the exact same addressed LAB value appearing as a different color in different parts of a scene has already been posted. There are countless such examples, and we could just as easily create images where two different specified LAB values appear to be the same when embedded in different parts of a real image.
As a simple analogy for just how complex scene imagery can be for the human observer to evaluate, think of how camouflage works. I remember reading those "Where's Waldo" books to my young children, where a little man in brightly colored clothing is hidden within the scene so well that it takes considerable time to discern his existence in the picture. Yet isolate the little guy against a uniform gray field, and the game of finding him becomes trivial. The specified color values in the Waldo figure haven't changed; only the surrounding colors and tones need to change for him to go from "just barely noticeable" to "easily recognizable".
For Andrew, as you work towards your new tutorial:
What I've been trying to say, in perhaps less than concise language, is that CIELAB (and the other variants based on tristimulus functions) is a color model that works exceptionally well as a way of specifying colors independently of each output device's proprietary handling of RGB or CMYK data. Using an open, reproducible, repeatable way to assign color values to pixels (like CIELAB) is what makes color management work properly, and the elegance of independently assigned color specification at the pixel level, rather than device-assigned color specification, is huge...but it's still just a reference color specification. The CIELAB model is not sophisticated enough to predict color appearance in the complex viewing conditions that every photograph presents to the viewer. This realization is why my own understanding of image tone and color reproduction moved away from dwelling solely on CIELAB for specifying lightness, hue, and chroma and on Delta E for determining how different those colors are. CIELAB does quite well, albeit with room for improvement, but not if limited to those three appearance properties alone. Thus, dividing reproduction into two distinct categories, where "color" refers to hue and chroma properties while "tone" describes lightness and contrast properties, extends the CIELAB model much further than delta E color difference metrics do.
I wrote the following statement about color in an article Jim Kasson mentioned earlier in this thread (you can find the article here:
http://aardenburg-imaging.com/cgi-bin/mrk/_4842ZGxkLzBeMTAwMDAwMDAwMTIzNDU2Nzg5LyoxMQ== ):
"If one considers color information in an image as a signal then hue is analogous to the color signal frequency, and chroma is analogous to the color signal amplitude. Similarly, the spatial information content is essentially carried by the tone signal. Local area image contrast represents modulation in the tone signal amplitude. The I* method of sample selection at equi-spaced distances over the full image area correlates to the sampled spatial frequency of the tone signal".
Some people in an audience will understand the concept of a signal composed of a frequency and an amplitude, although perhaps not as many as understand weights and distances (Gary Fong totally missed it when he threw away frequencies in his diagram of the color spectrum as analogous to "smaller color spaces"). However, if we describe color information as a signal, then we can simply state that color spaces like sRGB versus aRGB differ in their ability to encode the amplitude of the color signal (amplitude being subjectively described with terms like "colorfulness", "saturation", "vividness", and/or "chroma"). The color frequency (i.e., hue) is equally encodable in all of the various RGB color spaces. Likewise, the tone signal (lightness and contrast), which conveys the vast majority of the spatial information content (if it didn't, B&W images would be pretty useless), is equally rendered for all practical purposes in any of the RGB working spaces, since the encodable L* values which give rise to image contrast relationships range equally from 0 to 100 L* units in all of them.
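The frequency/amplitude analogy maps directly onto the LCh form of CIELAB: chroma (the "amplitude") is the length of the (a*, b*) vector, hue (the "frequency") is its angle, and L* carries the tone signal. A minimal sketch, with illustrative values of my own choosing:

```python
import math

def lch_from_lab(L, a, b):
    # L* carries the "tone signal"; chroma C* is the color signal's
    # amplitude, and the hue angle h plays the role of its frequency.
    C = math.hypot(a, b)                      # chroma = vector length
    h = math.degrees(math.atan2(b, a)) % 360  # hue angle in degrees
    return L, C, h

# Two colors with the same hue "frequency" but different "amplitude":
print(lch_from_lab(50, 20, 20))  # moderate chroma, 45-degree hue
print(lch_from_lab(50, 60, 60))  # same 45-degree hue, 3x the chroma
```

A "bigger" working space differs only in how large a C* it can encode at a given L* and h; the hue angle and the L* range are available in all of them.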
Thus, it all really boils down to "use aRGB or ProPhoto RGB" when you need to preserve higher levels of color saturation in the image than can be properly encoded in the sRGB color space. If you don't need to preserve higher color saturation levels, for example when converting a color scene to black & white, then you aren't giving up any color and tone fidelity. There is no technical advantage to the "bigger" RGB working spaces except their ability to encode higher color saturation. How do you know when a color space is "too small" to encode your chosen image color saturation values correctly? One fairly straightforward way is to use the histogram function in Lightroom, for example, to see whether there is R, G, and/or B channel clipping and whether it goes away when you choose a different, "bigger" RGB color space.
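Lightroom's histogram is the practical tool; the check it hints at can be sketched in Python. This is a simplified, hypothetical version (standard D65 conversion matrices as tabulated by Bruce Lindbloom, and a plain 2.2 decoding gamma for Adobe RGB rather than the exact 563/256) that flags an Adobe RGB color as clipping when its linear-light sRGB equivalent falls outside the 0..1 range:

```python
# Adobe RGB (1998), D65 -> XYZ, and XYZ -> linear sRGB (Lindbloom's tables)
ADOBE_TO_XYZ = [
    (0.5767309, 0.1855540, 0.1881852),
    (0.2973769, 0.6273491, 0.0752741),
    (0.0270343, 0.0706872, 0.9911085),
]
XYZ_TO_SRGB = [
    (3.2404542, -1.5371385, -0.4985314),
    (-0.9692660, 1.8760108, 0.0415560),
    (0.0556434, -0.2040259, 1.0572252),
]

def _mul(m, v):
    # 3x3 matrix times 3-vector
    return tuple(sum(row[i] * v[i] for i in range(3)) for row in m)

def clips_in_srgb(adobe_rgb, tol=1e-4):
    """True if an Adobe RGB color (0..1 per channel) would clip in sRGB.

    Decodes with a simple 2.2 gamma, converts to XYZ, then to linear
    sRGB; any channel outside 0..1 means the color is out of gamut.
    """
    linear = tuple(c ** 2.2 for c in adobe_rgb)
    xyz = _mul(ADOBE_TO_XYZ, linear)
    srgb_linear = _mul(XYZ_TO_SRGB, xyz)
    return any(c < -tol or c > 1 + tol for c in srgb_linear)

print(clips_in_srgb((0.0, 1.0, 0.0)))  # saturated Adobe green: True
print(clips_in_srgb((0.5, 0.5, 0.5)))  # neutral gray: False
```

Neutral tones pass through unchanged (both spaces share the D65 white point), which is the signal-analogy point again: only high-amplitude (high-chroma) colors ever need the bigger container.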
cheers,
Mark
http://www.aardenburg-imaging.com