
Author Topic: BasICColor Display vs. Spectraview II - two ??'s, validation and black ClrTemp

shewhorn


I'm scratching my head a bit trying to wrap it around this issue... Here's the deal: BasICColor Display treats color validation as if it were completely disconnected from, and had absolutely nothing to do with, color temperature. SpectraView II, by comparison, will scream bloody murder. SpectraView II's behavior is something I intuitively understand, since a change in color temperature inherently changes your colors. BasICColor Display seems to view the world in a different way. How 'bout an example...

Our white point target is 6500ºK. Let's say we didn't hit that, we hit 6000ºK. Both packages will essentially indicate that the validation failed, but here's what confuses me... SpectraView II will report a very high dE for all measured patches, and in my experience that dE correlates directly with how far off the monitor is from the target white point. BasICColor Display, however, will tell you that your color and greyscale test patches all tested fine... mostly under 1.0 dE94. How is that even possible? The error should be MUCH higher. There's no way a white patch at 6500ºK and a white patch at 6000ºK are less than 1.0 dE94 apart from one another; that's such a big difference I don't even need to look at it... I can smell the difference! :) BCD will give you a dA, dB, and dAB report on color temperature, and that of course will be high (and is what causes the validation to fail), but I just don't understand how they can decouple color temperature from the equation.
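Just to put a rough number on it, here's a quick back-of-the-envelope sketch. This is my own math, assuming CIE daylight-locus whites and plain Lab/dE76 with the target white as the reference, which is not necessarily what BCD computes internally:

Code:
# Back-of-the-envelope: how far apart are a 6500 K and a 6000 K "white" if
# both sit on the CIE daylight locus and we're adapted to the 6500 K target?
# (Assumes daylight-locus whites and CIE76 dE; BCD's internal math may differ.)

def daylight_xy(cct):
    """CIE daylight-locus chromaticity, valid for roughly 4000-7000 K."""
    x = -4.6070e9 / cct**3 + 2.9678e6 / cct**2 + 0.09911e3 / cct + 0.244063
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

def xy_to_XYZ(x, y, Y=100.0):
    """Chromaticity to tristimulus values at luminance Y."""
    return (x / y * Y, Y, (1.0 - x - y) / y * Y)

def XYZ_to_Lab(XYZ, white):
    """CIE 1976 L*a*b* of XYZ relative to the given reference white."""
    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = (f(c / n) for c, n in zip(XYZ, white))
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

target = xy_to_XYZ(*daylight_xy(6500))   # the white we asked for
actual = xy_to_XYZ(*daylight_xy(6000))   # the white we supposedly got

# Express the "wrong" white in Lab relative to the 6500 K target (i.e. the
# profile/eye is adapted to the target), then measure its distance from 100,0,0.
L, a, b = XYZ_to_Lab(actual, target)
dE = ((L - 100.0) ** 2 + a ** 2 + b ** 2) ** 0.5
print(f"a*={a:.2f}  b*={b:.2f}  dE76={dE:.1f}")   # roughly a*~0.4, b*~5, dE~5

That works out to roughly 5 dE (and with a neutral reference white, dE94 comes out the same as dE76), which is why "mostly under 1.0 dE94" for the greyscale patches baffles me.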

This all started when I noticed that the minimum neutral and minimum native features don't work the way I expected them to. What I expected is that when you select minimum neutral, it will attempt to calibrate your black point to the same color temp target as the white point (this is what SpectraView II does when you select "optimize for greyscale" in the Calibration Priority preference setting). It actually doesn't do this at all. The explanation I got is that minimum neutral corrects for chroma while minimum native does not correct for chroma, it only corrects luminance. Okay, that's all fine and good, but how the heck can you correct for chroma if it's not referenced to a specific target? The only thing I could come up with is that they're using a weighted curve: for midtones and highlights the white point target is honored, but for shadows it would seem as if they completely disregard the color temp target, measure the native color temperature of the black point, and apply a bias curve that shifts from the desired white point target to the native black point color temp as you get closer to black... but... why?

If anyone has any insight on this, I'd be very appreciative.

Cheers, Joe

shewhorn


Just to add to this... here's what I think is going on with the minimum neutral feature, with some numbers to back it up...

BCD calibrates to the target white point. When it first starts, I suspect it measures the native color temperature of black (and for clarity, what I'm talking about here has nothing to do with minimum native vs. minimum neutral; I think those are just unfortunate names for those features). At some point a weighted curve is applied: as you get closer and closer to black, the target color temperature shifts from the specified white point color temp toward the measured native black color temp.

Here are some measurements. In this case the target color temp was 6275ºK. These are the achieved color temps from 255,255,255 down to 0,0,0 (taken with SpectraView II and an i1Display Pro):

255 - 6275ºK
223 - 6250ºK
191 - 6275ºK
159 - 6254ºK
127 - 6275ºK
095 - 6235ºK
063 - 6235ºK
031 - 6415ºK
023 - 6865ºK
015 - 8225ºK
011 - 9355ºK
009 - 10200ºK
007 - 11050ºK
005 - 12400ºK
003 - 13600ºK
001 - 15300ºK
000 - 15530ºK

So it would appear that the numbers support this theory. With that in mind, I think minimum neutral corrects chroma toward whatever the target happens to be along that weighted curve, while minimum native simply doesn't bother correcting for chroma (or, more accurately, a curve defines how much chroma correction is applied, and the correction is eventually eliminated altogether); it just makes sure the luminance is where it should be.
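Here's the kind of weighting I'm picturing, written out as a little sketch. To be clear, this is pure guesswork on my part: the knee at level 31, the linear blend, and doing the mixing in mired (1e6/K) are my assumptions, not anything basICColor documents:

Code:
# Pure speculation: a per-level color temp target that honors the requested
# white point down to a knee, then rolls off toward the panel's native black
# color temp. The knee, the linear blend, and mired-space mixing are guesses.

WHITE_TARGET_K = 6275      # the white point I asked for
NATIVE_BLACK_K = 15530     # the measured native color temp of black (see above)
KNEE = 31                  # level below which the blend kicks in (a guess)

def hypothetical_target_cct(level):
    """Per-level color temp target under this guessed weighting scheme."""
    if level >= KNEE:
        return WHITE_TARGET_K
    w = level / KNEE       # 1.0 at the knee, 0.0 at black
    mired = w * (1e6 / WHITE_TARGET_K) + (1.0 - w) * (1e6 / NATIVE_BLACK_K)
    return 1e6 / mired

for level in (255, 127, 31, 23, 15, 11, 9, 7, 5, 3, 1, 0):
    print(f"{level:3d} -> {hypothetical_target_cct(level):6.0f} K")

It doesn't reproduce my measurements exactly (the measured curve hangs on to the white point target a little longer just below the knee), but the overall shape is the same, which is what makes me think there's a weighting curve like this in there somewhere.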

The reason they gave for that particular feature is that, as we know, spectrophotometers lose accuracy as the signal-to-noise ratio drops the closer you get to black. The minimum native option relies on the linearity of the monitor near black and forgoes the attempt to correct chroma. Given the error that pucks like the ColorMunki Photo are prone to in the shadows, using minimum native can reduce color casts in the shadows that are the result of sensor noise.

I believe the source of my own confusion about this particular feature lies with what is, in my opinion, a very unfortunate name. I'm willing to bet the German-localized version makes a lot more sense, since the product is developed in Germany (any Germans here who can verify this?), but to me as an English speaker "neutral" means the literal dictionary definition, "a neutral color or shade", and in this case "neutral" is defined by the white point target. A calibration that gets increasingly blue as you get closer to black is not what I would describe as "neutral", hence my confusion. When I saw the word "neutral", I expected that feature to behave in the same manner as SpectraView II's "optimize for greyscale" option in the Calibration Priority preferences.
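For the record, here's roughly what I pictured "minimum neutral" doing, sketched out in Python. Everything here (the names, the measure() callback, the crude correction step) is made up for illustration; it's not SVII's or BCD's actual code, just the idea of pulling every grey level toward one fixed target chromaticity:

Code:
# Sketch of what I *expected* "minimum neutral" to mean: grey-balance every
# level, black included (as far as the panel allows), toward the one target
# white chromaticity. Hypothetical names/logic only -- not either product's code.

TARGET_XY = (0.3128, 0.3292)   # ~6500 K daylight white, the single target

def grey_balance(levels, measure, gain=0.5, passes=3):
    """Nudge per-channel corrections so each grey level measures at TARGET_XY."""
    corrections = {lv: [0.0, 0.0, 0.0] for lv in levels}   # per-level R,G,B tweaks
    for _ in range(passes):
        for lv in levels:
            x, y = measure(lv, corrections[lv])    # read the patch off the screen
            dx, dy = TARGET_XY[0] - x, TARGET_XY[1] - y
            # Crude proportional nudge; a real tool would use the display's
            # measured primaries to map the xy error into RGB corrections.
            corrections[lv][0] += gain * dx * 255  # measured x too low -> add red
            corrections[lv][2] -= gain * dy * 255  # measured y too high -> add blue
    return corrections

# Toy demo with a fake "instrument" that always reads a touch green of target:
if __name__ == "__main__":
    fake_measure = lambda level, adj: (0.3100, 0.3292)
    print(grey_balance(range(0, 256, 32), fake_measure)[0])

With a scheme like that, black would be pulled toward the same target as white (to whatever extent the panel allows), which is what I take SVII's "optimize for greyscale" to be doing, and why the readings near black surprised me.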

Cheers, Joe

digitaldog


Quote
Our white point target is 6500ºK. Let's say we didn't hit that, we hit 6000ºK.

Isn't the important thing whether the print and display match, considering that any Kelvin value is a range of colors and every product will likely produce a different result?

Most of this validation process is just a feel-good button.

shewhorn


Quote
Isn't the important thing whether the print and display match, considering that any Kelvin value is a range of colors and every product will likely produce a different result?

Most of this validation process is just a feel-good button.

I'm aware of this, and if anything this post underscores that fact. I actually don't place much importance on validations, but BasICColor is doing something so radically different from the rest of the world that it got me curious. Validations are useful up to a point, namely to verify that nothing went terribly wrong during the profiling process (the screen saver didn't briefly come on... the monitor didn't lose power, etc.). There are a few other things they're useful for, but that's not really what I'm concerned about. I'm just interested in knowing how the folks at BasICColor can completely disconnect color temperature from their chroma validations, because I see the two as necessarily dependent upon one another.

Cheers, Joe

Tim Lookingbill


Quote
I'm just interested in knowing how the folks at BasICColor can completely disconnect color temperature from their chroma validations, because I see the two as necessarily dependent upon one another.

15K+ color temp in the black, you say? Typical IPS panel. Thanks for putting numbers to the black-point color temp; it's something I've noticed for years on my G5 iMac and Dell 2209WA but never bothered to measure. I was just curious why it looked so blue-violet.

I hope to ease your mind here, since Andrew pretty much cut to the chase on why it's not something to be concerned about.

You're attempting to dissect how an instrument and software try to override, compensate for, or just leave alone a complicated human visual response system. No calibration package has the perfect solution for this. All of them do a good enough job because they know human eyes will adapt to a lot of the subtleties not caught or corrected for by hardware and software.

The brighter the white appears, the more quickly the eye adapts and stops seeing a color-temperature cast. What this means is that a yellowish, orangish, or bluish tint is going to be seen as pure white very quickly, which causes the viewer to see the rest of the tonal grayscale as equally neutral, so long as the measuring device tracks it as well. The darker shadows closest to black don't have as many photons bombarding the rods and cones to induce adaptation, and so they retain whatever color cast the eye adapted to in the bright white.

However, you may notice, as I have, that zooming in to edit the rock/tree shadow detail of a sunset/golden-hour scene without looking at the overall image induces its own adaptation: when you zoom back out, the yellows, oranges, and other warm hues now look off. That's when we as image editors should walk away and wait for the effect to fade before applying further edits to what once looked correct but now looks odd.

Do you see how it's impossible for a calibration package to try to compensate for this level of complexity and quantify and verify it all with a number system?

In short, do what Andrew says.

digitaldog


Quote
Validations are useful up to a point, namely to verify that nothing went terribly wrong during the profiling process (the screen saver didn't briefly come on... the monitor didn't lose power, etc.).

Agreed, although I suspect that if that happened, you’d see it visually.

shewhorn


Quote
Do you see how it's impossible for a calibration package to try to compensate for this level of complexity and quantify and verify it all with a number system?

I totally understand this and agree. I own a bunch of software and hardware packages... ColorEyes Display Pro, BasICColor Display, SpectraView II, i1Profiler, Spyder3 Elite v4, Monaco Profiler, DTP-94, Eye-One Pro, Spectrolino/SpectroScan... somewhere around here there's an original i1Display... They all do things a little differently, and they all (to varying degrees) accomplish certain goals (ummm... or not ;D).

In the end, it always comes down to two things:

1) To what degree is the package capable of providing a screen-to-print match?
2) When using this package, will I be able to achieve a level of consistency that allows me to reproduce the same results a year or two from now? People use the word "accurate" a lot, but consistency is really important too. If a client likes the results, it won't matter to them whether or not it was "accurate" (for whatever that word means to whoever is using it... I've noticed that people often define it differently... heck, many photographers rarely produce "accurate" results, as they're often after an artistic interpretation of something), but they will notice a difference in consistency. I suppose one could argue that with accuracy comes consistency, and yadda yadda yadda, but that's a different discussion!

Anyhow... I care about none of this right now! :) Here's my motivation: I'm an ex-software engineer, and I was that kid who kept asking "why" non-stop. I've found that when someone does something significantly different from the way everyone else does it (as appears to be the case with how BasICColor reports chroma separately from the white point), there's often some random tidbit of "fascinating" to take away. I suppose the best way to say it is that this is purely for my own personal amusement! ;D ;D ;D

Cheers, Joe