Pages: 1 [2] 3 4   Go Down

Author Topic: Losing sleep over monitor calibration  (Read 20639 times)

jackbingham

  • Full Member
  • ***
  • Offline
  • Posts: 205
    • http://www.integrated-color.com
Losing sleep over monitor calibration
« Reply #20 on: March 21, 2007, 02:56:48 pm »

Quote from: digitaldog,Mar 21 2007, 06:40 PM
You're again missing the point. We're not talking Absolute accuracy, that's a meaningless term.

Somebody is missing the point, that's for sure. There is not a day that goes by that we don't successfully employ validations to troubleshoot customer calibration problems. They are an incredibly useful tool for testing various target values against one another, as well as ambient conditions and a host of other conditions. You need to get off the sales-guy baloney and look at the real world where these things are being done and providing valuable feedback. Never would I, nor have I, suggested that they can be used to judge accuracy within any stated percentage. So please don't sit there and tell me I can't get any value out of something we employ all the time to assist customers after THE SALE to improve their profiles and profiling habits. I get your point; it's just not relevant to the way the tool is being applied in this case.
Logged
Jack Bingham
Integrated Color Corp Maker

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #21 on: March 21, 2007, 02:59:40 pm »

Oh, as to the stats being useful for deciding what target values to use, again, I don't buy that.

All we're doing here is sending known color values, probably in Lab, to a device, then measuring those values using said instrument and comparing what we ask for with what we get. The process doesn't have any way to tell you that a native gamma and white point will provide less banding, or that some combination of the ambient light, the display and how you view the prints is ideal, although it can provide some suggestions based on some very old ISO specs. It has no idea what printer profile will be used, nor if you've set up the soft proofing correctly to handle the dynamic range of the paper. All this validation can really do is compare the measured data with the data sent by the software, but since you're using the same instrument, it's again like using your left foot to gauge the accuracy of your right foot with no other reference-grade measurement device to tell you both are off by X amount. X amount MAY be acceptable! But we simply don't know that.

Lastly, as Edmund mentioned, loading an image with a soft proof and examining a reference print is a much more effective way to see if all your ducks are in order here. We want the print and display to match as closely as possible based on their differences in reference media and dynamic range.

There are probably all kinds of ways to measure the print versus the display, but if they don't appear to match, what's the point?

Also, we're NOT really producing a white point that's D50 (or any other standard illuminant). If that were the case, everyone would have accepted years ago that calibrating a display to D50 and having an output profile that assumes D50 would match. Yet years of work have resulted in people calibrating to D65 (or god forbid, 6500K) while nearly every printer profile assumes a D50 viewing condition. And the only real D50 viewing condition comes from a light source 93 million miles from your light booth. So KISS does work and we shouldn't put so much credence in measured values all the time. They ARE useful, but they are also often not.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #22 on: March 21, 2007, 03:08:00 pm »

Yet you refuse to define accuracy or how using the same device to measure itself is accurate.

Define accuracy. It's one of those buzzwords used to sell something to people who feel they need it, but what does it mean? Accuracy based on what standard, measured by what, itself?

If you want to tell people your left foot is 12 inches, so be it. How accurate is that without having something else to compare those measurements to? It's like the nonsense that a camera profile produces accurate color. That's ridiculous.

Accurate color is colorimetrically correct, which means the measured color. But measured by what, and what's the accuracy of the measurements? That we CAN define by using other instruments of known quality. But we can't define it using the same instrument unless we just want to make ourselves feel good.

When you buy a car, there's a gas mileage figure associated with the car. It's useful for comparing a Hummer and a Prius. But there's always the fine print: your mileage may vary!
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #23 on: March 21, 2007, 03:13:31 pm »

Quote
How about putting away deltaE numbers and use the good 'ole eyeballs to test color matching on a wide range of calibrated displays. I mean that's the whole point of color management and calibration.

Exactly! But that's not very sexy.

There are issues where using your eyes can fool you, but that's usually not when viewing images in context. And as you say, the point of all this is to provide a reasonable match between two very dissimilar media. It will never and simply can't be perfect. We've got glowing phosphors (or a fluorescent-backlit product) and a reflective print with usually quite dissimilar dynamic range. Ain't ever going to match 100%, even when we have Star Trek or Star Wars technology. All the bells and whistles only complicate the process for many users. The geeks love it. But the bottom line is, do the two match?
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

jackbingham

  • Full Member
  • ***
  • Offline
  • Posts: 205
    • http://www.integrated-color.com
Losing sleep over monitor calibration
« Reply #24 on: March 21, 2007, 03:14:28 pm »

OK, let's use your analogy. You build a series of walls using your right foot to measure. At the end of the day you have three walls the same and two that are different. So did your foot change size, or did you screw up the placement of your foot while measuring? This has nothing to do with your left foot at all. We have to accept that the instrument we buy is reasonably accurate and reasonably consistent. If they are not, we should throw them out.
As for stats being used to determine the best result, you are making the suggestion that all displays behave exactly the same as every other display at one set of target points, and that just ain't so. Luminance, gamma, white point and black point target values should all be fine-tuned to the particular display, more so the cheaper they get. If you ignore the display's behavior and capabilities, again, we might as well use Adobe Gamma. I hear all the time about what the default standards are or should be, and I watch customers struggle to hit target values they simply can't. Validations are indeed a way to make that point.
Logged
Jack Bingham
Integrated Color Corp Maker

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #25 on: March 21, 2007, 03:25:17 pm »

Quote
OK, let's use your analogy. You build a series of walls using your right foot to measure. At the end of the day you have three walls the same and two that are different.

They are the same and I didn't say they wouldn't be. But are they the right measurement? I asked for a wall that's 12 feet, you gave me a wall that's 11.5 feet. I asked you to prove to me my wall is 12 feet. So you measured it again with your foot. I don't buy that as being useful. I instead use the tape measure from Home Depot and guess what, my wall isn't 12 feet.

Again you've missed the point and the analogy. Your software and your foot are supposed to tell me the accuracy of the measurements. But they don't.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

jackbingham

  • Full Member
  • ***
  • Offline
  • Posts: 205
    • http://www.integrated-color.com
Losing sleep over monitor calibration
« Reply #26 on: March 21, 2007, 06:00:34 pm »

Actually the single-foot analogy is correct for this situation. We have chosen a device to calibrate with. We rely on it to be reasonably accurate. I've chosen my foot as the standard. So long as all the walls match, I'm using the tools correctly. You are asking for something else entirely, which is proof that it is accurate within some standard. I'm counting on the manufacturer to provide that, and am using that reasonable assumption to create some trending that does indeed tell me a lot. You're asking for something you can't have, and neither can any of the readers of this forum. Again, back to my suggestion of absolute versus relative. I'm relatively comfortable that the instrumentation we have available is accurate enough. With that assumption I can build some trending that I find valuable. You want higher precision and you can't have it, so you'd rather throw the baby out with the bathwater and suggest that only a visual test will do. I understand exactly what you are trying to say. I just don't agree, plain and simple.
"Your software and your foot are supposed to tell me the accuracy of the measurements. But they don't."
This simply isn't true. You are being absolute again. There are degrees of accuracy, and I'm saying we hit a high enough standard to be valuable, while you are saying only the application of a third device will yield any relevant data. And I'm saying if that were really the case, we should stop profiling and use Adobe Gamma, because if you're right, all these instruments should never be used by anyone to profile until each and every one has been tested on site.
Logged
Jack Bingham
Integrated Color Corp Maker

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #27 on: March 21, 2007, 06:15:48 pm »

Quote
Actually the single-foot analogy is correct for this situation. We have chosen a device to calibrate with. We rely on it to be reasonably accurate. I've chosen my foot as the standard. So long as all the walls match, I'm using the tools correctly.

OK so exactly what are you measuring and what are the results supposed to tell the users?

You send, say, 10 color patches to the display with 10 Lab values. You measure them with the instrument. Then you measure them again after the calibration process. You compare the deltaE of the 10 patches with the same instrument. So you're gauging what accuracy? We expect that if you measure the 10 patches and get 10 values, then measure them again, you should get the same 10 Lab values (within reason). Since you used the same instrument, we don't know how accurate either the first set or the second set is based on some better standard. So this begs the question: just what did I gain here?
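
The before-and-after comparison described above boils down to a per-patch color difference. A minimal sketch in Python (using the simple deltaE 76 Euclidean distance rather than deltaE 2000, with made-up Lab values purely for illustration):

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 delta E: straight Euclidean distance between two Lab values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical data: the Lab values the software asked for versus what the
# same instrument read back off the screen after calibration.
requested = [(50.0, 0.0, 0.0), (96.0, 0.5, 2.0), (30.0, 20.0, -15.0)]
measured = [(50.8, 0.3, -0.2), (95.1, 0.9, 3.1), (31.5, 19.2, -14.0)]

errors = [delta_e76(r, m) for r, m in zip(requested, measured)]
print([round(e, 2) for e in errors])
```

The catch Andrew is pointing at: these numbers are only as trustworthy as the instrument that produced both lists, and nothing in the arithmetic can reveal whether that instrument itself reads high or low.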
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #28 on: March 21, 2007, 06:22:36 pm »

Quote
I'm relatively comfortable that the instrumentation we have available is accurate enough. With that assumption I can build some trending that I find valuable.

I said in my very first post that trending is useful. And I agree that most instruments are 'accurate' (within tolerance of what a human can perceive).

Trending is good, no question. What else are we supposed to gather from the stats?
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

61Dynamic

  • Sr. Member
  • ****
  • Offline
  • Posts: 1442
Losing sleep over monitor calibration
« Reply #29 on: March 21, 2007, 06:38:16 pm »

Alrighty then...

Andrew, I agree with you on the point of not using the same device to calibrate and verify. It's a no-brainer. I also agree that the level of "accuracy" does not need to be great. However, I disagree with the idea that looking at a virtual Gretag Color Chart on the monitor and comparing it to an actual CC is an effective means of gauging a monitor profile, for the following primary reasons:

1. Not everyone is skillful at gauging color and/or understands how human perception can be affected. They may think things are fine when someone like you or I would see a noticeable difference.

2. Not everyone has a working area suitable for such a test. Lighting that varies over time or is a deranged mix of light sources, bright lime-green walls, brightly colored pictures or decorations, stained wood desks, clothing being worn, etc. I've seen it all, and these things have an effect on how our eyes perceive things. A skilled operator could manage in such an environment over time, but many people don't have that ability yet.

I think comparing a CC to a virtual CC on screen can be just as ineffective in gauging monitor calibration/profiling as using the same instrument for both profiling and verifying. While a skilled viewer may be able to do as you describe effectively, normal people (the average person) are not in that position. You and I and others like us are not normal people.

What about your article on monitor profile testing in Photoshop? Wouldn't that be a more effective means of gauging whether a profile may have issues or not?
Logged

jackbingham

  • Full Member
  • ***
  • Offline
  • Posts: 205
    • http://www.integrated-color.com
Losing sleep over monitor calibration
« Reply #30 on: March 21, 2007, 06:59:43 pm »

Quote
I said in my very first post that trending is useful. And I agree that most instruments are 'accurate' (within tolerance of what a human can perceive).

Trending is good, no question. What else are we supposed to gather from the stats?
Funny. Perhaps if you paid attention to what I was saying instead of what you thought I was saying, you would see that trending is exactly what I have been saying all along, and that that trending can be used to interpret and modify choices to achieve a better profile. As I said, it fits the one-foot analogy.
Logged
Jack Bingham
Integrated Color Corp Maker

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #31 on: March 21, 2007, 07:01:29 pm »

Quote
1. Not everyone is skillful at gauging color and/or understands how human perception can be affected. They may think things are fine when someone like you or I would see a noticeable difference.

2. Not everyone has a working area suitable for such a test.

Agreed on both counts. This would most certainly be a very subjective evaluation, but the same person viewing both is probably going to be the one using the system, so it has some merit.

There's nothing we can do about the percentage of color blind men and I don't suspect we'll be using the Munsell tests before such evaluations.

There are a number of tests one can use to help evaluate the calibration of a display (the old black screen with a selection in the middle: where do you see zero black separate? Do the steps appear neutral?). There's the full-screen 100% zoom on a black-to-white gradient with the display profile assigned to the file, and so forth.

In the grand scheme of things, if the user feels the screen and print match to an acceptable degree, we're in pretty good shape, even if that person's wife can see they don't match ;-)
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #32 on: March 21, 2007, 07:13:26 pm »

Quote
Funny. Perhaps if you paid attention to what I was saying instead of what you thought I was saying, you would see that trending is exactly what I have been saying all along.

Funny, if you read what I wrote, I said that in my first post. Then again lower down.

That doesn't explain this:

Quote
Clearly it can be seen that one profile can be more accurate than another based on a validation. It can be clear that higher validations might lead one to change their target values or ambient conditions, or a host of other steps in order to generate a lower set of delta e values regardless of the method or the use of the same instrument.

So what's this got to do with trending, which is comparing the SAME target values over time? It tells me there's a difference between the condition of the device today compared to a week ago. Now how does that tell me I should change my target values?

How do you decide the profile is 'accurate'? We can see that the device has changed since the last calibration session. How does that correlate to profile accuracy in the first place?

Or:

Quote
There is not a day that goes by that we don't successfully employ validations to troubleshoot customer calibration problems. They are an incredibly useful tool for testing various target values against one another as well as ambient conditions and a host of other conditions.

How does comparing the device drift have anything to do with the validation of the target values? It tells me they change, which is what we expect, or we'd only calibrate a display once and be done. The validation is useful to tell you that you need to recalibrate, or that you should calibrate every so many hours. But validate the initial targets? How does this work?

If a customer is having an issue, they could spend a few minutes running a validation, which informs them they need to recalibrate. Or they could just recalibrate! Where's the bit about profile accuracy? The profile was presumably accurate once you finished the original calibration and profiling.

Quote
and that that trending can be used to interpret and modify choices to achieve a better profile.

How does one interpret this to modify a better profile? Better how?
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

djgarcia

  • Sr. Member
  • ****
  • Offline
  • Posts: 341
    • http://improbablystructuredlayers.net
Losing sleep over monitor calibration
« Reply #33 on: March 21, 2007, 07:18:43 pm »

Boy, I've never seen a thread more aptly titled.
Logged
Over-Equipped Snapshooter - EOS 1dsII &

jackbingham

  • Full Member
  • ***
  • Offline
  • Posts: 205
    • http://www.integrated-color.com
Losing sleep over monitor calibration
« Reply #34 on: March 22, 2007, 06:36:29 am »

Why are we talking about device drift here? The variation in accuracy from profile to profile is far greater with different target parameters, ambient conditions and monitor quality. If devices drift that much, we should all stop profiling altogether. Why are you suggesting all the drift is in the device? Monitors are so much more unstable compared to a DTP-94 or Eye-One 2, so we shouldn't even be considering them as a critical problem.
Based on all the things you say are true, everyone reading this list needs to stop profiling RIGHT NOW. If you can't verify at any level of accuracy at all, then you probably can't build either, so let's all just quit.
Logged
Jack Bingham
Integrated Color Corp Maker

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #35 on: March 22, 2007, 09:36:08 am »

Quote
Why are we talking about device drift here? The variation in accuracy from profile to profile is far greater with different target parameters, ambient conditions and monitor quality. If devices drift that much, we should all stop profiling altogether. Why are you suggesting all the drift is in the device? Monitors are so much more unstable compared to a DTP-94 or Eye-One 2, so we shouldn't even be considering them as a critical problem.
Based on all the things you say are true, everyone reading this list needs to stop profiling RIGHT NOW. If you can't verify at any level of accuracy at all, then you probably can't build either, so let's all just quit.

Accuracy based on what? Accuracy based on what? How many times do I have to ask you?

The same instrument measures the behavior of a device over time, just like you measured three walls with your foot. So the accuracy of the device isn't what we're looking at; it's either how well the device behaves in the SAME condition over time (device drift), or you'll have to tell us what you're measuring with the same instrument that provides a delta of accuracy for the profile.

I told you how I assume you're measuring reference colors against actual measured colors. What else can you do? So taking device accuracy out of the equation, just what on earth are you measuring for accuracy, and how, given we can't gauge the device itself, like your foot?

And no, based on what I'm saying, it doesn't mean people shouldn't calibrate their displays. We all know they drift, and we have to put them back into a stated condition based on measuring a pile of colors. But where's the accuracy in YOUR statement about profile accuracy and feedback on target values?
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Losing sleep over monitor calibration
« Reply #36 on: March 22, 2007, 01:02:09 pm »

OK, just as a reality check, I called Karl Lang, the color scientist who designed the Radius PressView and later the Sony Artisan, about this so-called "accuracy test". His opinion was slightly different from mine. I said it's mildly useful. He was more forceful (it's useless), at least with respect to the 'accuracy of the profile'.

Let's talk true validation. We had this in the Artisan (Quick Calibration). A number of known color values are sent to the display after calibration and profile building. The instrument compares the values based on the measured data. The idea is to tell you if the device has altered its behavior past a fixed deltaE such that you should recalibrate. What more modern products have done is simply store this reality check and track this graph, telling you how far the device has deviated over time. This is useful in telling you that calibrating the device once a month isn't frequent enough (if from month 1 to month 2 your deltaE, say deltaE 2000, is 3, and you then find that doing this process weekly provides results of less than 1, it's a good indication you should do this more often).
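
That drift-tracking idea is simple enough to sketch. The log values and the tolerance of 3 below are invented for illustration; the point is only the decision rule:

```python
# Hypothetical trending log: (days since last calibration, average deltaE
# reported by the instrument for the same set of check patches).
validation_log = [(7, 0.6), (14, 1.1), (21, 1.9), (30, 3.2)]

TOLERANCE = 3.0  # illustrative recalibration threshold

def recommended_interval(log, tolerance):
    """Longest logged interval whose measured drift stays under tolerance."""
    ok = [days for days, delta_e in log if delta_e < tolerance]
    return max(ok) if ok else None

print(recommended_interval(validation_log, TOLERANCE))  # → 21
```

As the post says, this only tells you the device changed and how often to recalibrate; it says nothing about whether the original calibration targets were the right ones.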

In the case of the Artisan, it took 12 minutes for a full calibration. The Quick Calibration would take no more than 7 minutes. IF the deltaE was too high, it would instead run a full 12-minute calibration. It's a time saver, and it's useful to do before color-critical work. This isn't about accuracy because, again, we're using the same instrument and software to measure a subset of colors. Otherwise, run the entire process and just build a new profile.

OK, now onto 'accuracy'. Let's see the definition:

1. The state of being accurate; freedom from mistakes, this exemption arising from carefulness; exact conformity to truth, or to a rule or model; precision; exactness; nicety; correctness; as, the value of testimony depends on its accuracy.
2. (mathematics) The number of significant figures given in a number; "the atomic clock enabled scientists to measure time with much greater accuracy."


In the case being discussed here, Jack (and, to be fair, all others producing software to build profiles) often uses the term profile accuracy. What's it mean? To build a profile, be it for a printer, a display or a capture device, known color values are sent to or captured by the device. They are measured, and a comparison of known and produced LAB values is provided. This allows one to build an ICC profile. In the case of a printer, one could send a known value to the output device based on the profile, measure it and compare the LAB values. But now you're back to the issue of using the same device! The instrument has a fixed and specific illuminant that may be totally different from the illuminant under which the print is viewed. And heck, do you like the way the print appears in the lighting you've built the profile for, based on how the image will be viewed? This goes back to the suggestion of just looking at images on the display and comparing them to the print. Do they match?

Keep in mind that printer profiles are pretty complex. They have multiple tables for handling different rendering intents, and they have to provide a soft proof as well. So there's the output you get AND the values sent to the display profile for soft proofing. That makes discussing display calibration with respect to a soft proof a lot more variable and difficult. There are some tricks for examining the deltaE of printer profiles by comparing round-trip errors going through the PCS. But ultimately you just send a lot of images through the profile, make prints and LOOK AT THEM. The Perceptual mapping is solely based on pleasing color. There is no fixed specification for how a profile vendor can or should build a perceptual table. And try using an Absolute Colorimetric intent for output (which should, in a perfect world, produce absolute colorimetric accuracy) and you'll see a print that's pretty butt ugly.

The bit about profile accuracy for the display could be determined but NOT with the same instrument that built the profile as I’ve illustrated. If we send 50 solid patches to the display and measure them, how accurate are the resulting readings? Only when you use a known reference instrument that we KNOW has a higher level of accuracy (those significant figures given in numbers), can you know that the original 50 values are accurate and to what degree numerically.

So, how does measuring a small sample of patches (the case with all display-profiling products) using the same instrument tell us the profile is accurate to the target values we've asked for? In a perfect world, we'd measure 16.7 million samples, one for each possible color. The profiles would be HUGE. It would take forever to measure. In the case of a printer, one can generally produce an acceptable profile using 900-4,000 color patches. All the others are, for lack of a better word, extrapolated to build the profile (which can define 16.7 million colors). For a display, far, far fewer patches are measured. So we have a lower number of samples, and we're measuring them using the same device, so there's no way to measure the accuracy of the profile. We can measure the differences in each profile built over time to gauge device drift, but that set of measurements may not be 'accurate' to a higher measured standard, and that's OK. As long as the device is consistent (and we assume they are), the inaccuracy over each group is fixed, and what we're trying to measure here is the difference over time, not the accuracy of the original or subsequent profiles.
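
That extrapolation point can be illustrated with a toy one-dimensional version. Everything here is invented for the example (a gamma-2.2 curve standing in for the device, nine "patches"); real profiles interpolate in three dimensions, but the principle is the same:

```python
# Toy illustration of how a profile "fills in" colors it never measured.

def true_response(x):
    """Stand-in for the device's actual (unknown) behavior."""
    return x ** 2.2

# "Measure" only nine patches, as a profiling package measures a subset.
measured = [(i / 8, true_response(i / 8)) for i in range(9)]

def lut_lookup(x, table):
    """Piecewise-linear interpolation between measured patches (0 <= x <= 1)."""
    for (x0, y0), (x1, y1) in zip(table, table[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return table[-1][1]

# Worst-case estimation error at points that were never measured:
err = max(abs(lut_lookup(x / 100, measured) - true_response(x / 100))
          for x in range(101))
print(round(err, 4))
```

Even with a smooth, well-behaved device, the table is only an estimate between the measured points, and nothing in it says how accurate the nine measurements themselves were.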

Accuracy is a marketing buzzword. It's used to sell stuff. And ALL the color management companies are guilty of doing this. This isn't any more correct than, years ago, hearing color management companies sell their wares using the term 'push-button color'.

I’ve asked Jack a number of times how his process gauges accuracy based on the facts above. How does the instrument along with some sample of known and measured LAB values tell you how to set the target calibration (which on an LCD is limited to the intensity of the backlight). If the soft proof seems off, using validation CAN tell you that your current profile isn’t accurately describing the current behavior of the device. The device has changed so trash the profile and start again. But short of that, how can sending X number of LAB values tell you anything more? Where’s the accuracy? What’s the software supposed to be telling you? Still waiting on those answers.

Our job as consumers (and educators) is to separate the facts from the fluff: to decide if functionality provided in a piece of software is useful or there as a feel-good placebo (an innocuous or inert medication, given as a pacifier or to the control group in experiments on the efficacy of a drug).
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

mistybreeze

  • Full Member
  • ***
  • Offline
  • Posts: 177
Losing sleep over monitor calibration
« Reply #37 on: March 22, 2007, 05:35:58 pm »

After reading this thread, I'm definitely ready to stick someone's foot SOMEWHERE. I think I got lost after the section on comparing deltaE to my nicotine patch. Now, can someone tell me, whose foot is BIGGER? That's the man for me.

I sure wish I had Karl Lang's phone number to call when I had a pissing contest to win. Thanks for the laughs, guys. Always good to know we can count on you to teach us something. Now it's off to Home Depot for me. I hear they have a sale on monitor calibrators, and I hear the guy at the paint counter comes with the most accurate set of eyes. Wish me luck!

Misty
Logged

Tim Lookingbill

  • Sr. Member
  • ****
  • Offline
  • Posts: 2436
Losing sleep over monitor calibration
« Reply #38 on: March 22, 2007, 05:51:57 pm »

I have duck shaped feet, mistybreeze. Sorry to disappoint.

Why all of a sudden do I feel so inadequate?
Logged

jackbingham

  • Full Member
  • ***
  • Offline
  • Posts: 205
    • http://www.integrated-color.com
Losing sleep over monitor calibration
« Reply #39 on: March 24, 2007, 08:51:54 am »

OK, let's apply a real-world example to this silly discussion. This one happens every week like clockwork. I get a validation screen from a customer, and all the dark grays and blacks are deltaE's of 4-20. My first question is what are your ambient conditions, and every time it's bright. Simple deduction: turn off the lights and run again. Lo and behold, the values drop dramatically. Here's another one. A customer sends a validation screen where most of the values are way above 3. The target luminance he is trying to hit is 90, and he has a nice shiny new Eizo CE. Change the target value to 150 or so and again the validations drop like a rock, because there is no way that monitor works below 120 with any accuracy at all. Now, neither of these is primarily instrument drift, and it is fair to say that the results after the change are far more accurate than before. Without some sort of validation, I can't imagine how you can confirm any problem condition like these.
Again, let's all remember that I have used the words relative accuracy repeatedly in this conversation.
So while you and Karl are no doubt real smart guys, you are ignoring the practical value of a tool that consistently delivers valuable information.
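
Jack's troubleshooting pattern (shadow patches blowing out under bright ambient light) amounts to grouping validation failures by region and seeing where they cluster. A rough sketch with invented readings:

```python
# Hypothetical validation readings: (patch L* value, measured deltaE).
readings = [(5, 12.4), (10, 8.1), (25, 6.0), (50, 1.2), (75, 0.9), (95, 1.4)]

def failures_by_region(readings, tolerance=3.0, shadow_cutoff=30):
    """Split out-of-tolerance patches into shadows versus everything else."""
    shadows = [de for l, de in readings if de > tolerance and l < shadow_cutoff]
    others = [de for l, de in readings if de > tolerance and l >= shadow_cutoff]
    return shadows, others

shadows, others = failures_by_region(readings)
# Failures confined to the shadows point at ambient flare, not device drift.
print(len(shadows), len(others))  # → 3 0
```

This is only a diagnostic heuristic under the assumptions above, but it matches the deduction described in the post: when the errors all live in the darks, look at the room lights before blaming the instrument.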
Logged
Jack Bingham
Integrated Color Corp Maker