
Author Topic: Monitor calibration sensor evaluations  (Read 145167 times)

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Re: Monitor calibration sensor evaluations
« Reply #20 on: April 30, 2011, 09:33:48 pm »

Well, DISCUS is an easy choice :)
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Monitor calibration sensor evaluations
« Reply #21 on: April 30, 2011, 09:39:14 pm »

Is it really worth the price in terms of visible results versus several of the less expensive alternatives?
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Re: Monitor calibration sensor evaluations
« Reply #22 on: April 30, 2011, 09:54:20 pm »

> Is it really worth the price in terms of visible results versus several of the less expensive alternatives?

Well, it depends on what one is after. 85% of what it does I can do with a combination of a DTP-94 or i1 Display 2, an i1Pro, and Argyll. But it takes about an hour to create a ccmx correction matrix for each new type of display, and then about 15 minutes each time for calibration and profiling. With the DISCUS I need only one device with me and 15 minutes, plus the results are better. The interesting part is that if the temperature in the room changes by 5°F, or the monitor is not extra stable, or the ambient light is changing, or one is not wearing a black shirt, all the effort and money spent on extra precision more or less go in the bin.
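
For anyone wondering what a ccmx actually contains: it is essentially a 3x3 matrix applied to the colorimeter's raw XYZ readings to pull them toward what a reference spectrometer reads on that particular display type. A minimal sketch in Python/NumPy; the matrix values are invented purely for illustration:

[code]
import numpy as np

# Hypothetical 3x3 correction matrix of the kind stored in a .ccmx file.
# Real values come from fitting colorimeter readings to reference
# spectrometer readings of the same on-screen patches.
M = np.array([
    [ 1.0421, -0.0315,  0.0089],
    [ 0.0112,  0.9806, -0.0041],
    [-0.0197,  0.0258,  1.0533],
])

def correct_xyz(xyz_colorimeter):
    """Map a raw colorimeter XYZ reading toward the reference instrument."""
    return M @ np.asarray(xyz_colorimeter, dtype=float)

# Fitting M from N paired patch readings, C (colorimeter) and S (spectro),
# both shaped (N, 3), solves C @ M.T ~= S in the least-squares sense:
#   Mt, _, _, _ = np.linalg.lstsq(C, S, rcond=None); M = Mt.T
print(correct_xyz([95.2, 100.0, 108.9]))  # e.g. a white-patch reading
[/code]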
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Monitor calibration sensor evaluations
« Reply #23 on: April 30, 2011, 10:14:48 pm »

Anyone spending the kind of money a DISCUS costs can just as well buy a few black shirts to accompany it, no? :-)
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Re: Monitor calibration sensor evaluations
« Reply #24 on: April 30, 2011, 10:22:18 pm »

> Anyone spending the kind of money a DISCUS costs can just as well buy a few black shirts to accompany it, no? :-)

Yes, if they know to. But more often than not they do not. The manuals gloss over those things, too.
Logged

Czornyj

  • Sr. Member
  • ****
  • Offline
  • Posts: 1948
    • zarzadzaniebarwa.pl
Re: Monitor calibration sensor evaluations
« Reply #25 on: May 01, 2011, 04:30:52 am »

With the DISCUS you can also calibrate the display from a distance, so it takes care of the non-black-shirt flare ;)
Logged
Marcin Kałuża | zarzadzaniebarwa.pl

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Re: Monitor calibration sensor evaluations
« Reply #26 on: May 01, 2011, 09:49:12 am »

> With the DISCUS you can also calibrate the display from a distance, so it takes care of the non-black-shirt flare

If the operator is not moving at all and the shirt is of uniform colour, yes it does.
Logged

terrywyse

  • Full Member
  • ***
  • Offline
  • Posts: 107
    • WyseConsul (old consulting site)
Re: Monitor calibration sensor evaluations
« Reply #27 on: May 01, 2011, 12:53:25 pm »

> As I see it, the question is: when displays are calibrated with a variety of devices and applications, which *combination* consistently yields the best results within a reasonable timeframe? This is the question that I've been trying to answer. And I think I've got it.

More to the point of what Scott mentioned: is it possible that different applications handle the devices differently, or compensate for some of their inherent disadvantages with regard to integration times?

The reason I ask: I've been using the ColorMunki lately with basICColor Display 4 and like it, but I noticed that the ColorMunki + basICColor takes on the order of 10 min. to complete a profile, versus the 6 min. it takes with either my Rev D i1Pro or my Rev A i1Monitor spectro. So is basICColor Display using longer integration times with the ColorMunki and, as a result, achieving similar measurement quality to the i1Pro?

One major advantage I see to the ColorMunki is that it doesn't appear to suffer from the temperature swings the i1Pro does when in contact with the surface of the display.
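
If basICColor Display really is integrating longer or averaging more readings with the Munki, the arithmetic works in its favour: uncorrelated measurement noise falls with the square root of the number of readings averaged. A toy simulation (Python; the noise figure is invented, only the scaling matters):

[code]
import numpy as np

rng = np.random.default_rng(42)

true_luminance = 0.10   # cd/m^2, a dark patch
sigma_single = 0.05     # invented per-reading noise for a short read

for n_reads in (1, 4, 16, 64):
    # Average n_reads independent readings of the same patch, many trials
    trials = rng.normal(true_luminance, sigma_single, size=(10000, n_reads))
    estimates = trials.mean(axis=1)
    print(f"{n_reads:3d} reads: std = {estimates.std():.4f} "
          f"(expected ~ {sigma_single / np.sqrt(n_reads):.4f})")
[/code]

So quadrupling the effective integration roughly halves the noise, which would be consistent with a slower instrument closing the gap on a faster one.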

Terry
Logged
Terry Wyse
Color Management Specialist, Shutterfly Inc.
Dabbler in the photographic arts.

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Monitor calibration sensor evaluations
« Reply #28 on: May 01, 2011, 01:36:53 pm »

> ...is it possible that different applications handle the devices differently, or compensate for some of their inherent disadvantages with regard to integration times?

My experience is: absolutely! It's been that way for years, too.

The other question is: if the goal is a print-to-display match, is this an issue when instruments or software products are intermixed, as long as one alters the target calibration appropriately? Frankly, I couldn't care less whether an instrument I ask for D65 theoretically gets closer to or farther from that aim point (which I doubt is fully achievable anyway); what matters is that whatever values I ask for produce a visual match.

Ask two instruments to hit D65, or any CCT value, in two different software products. What's the likelihood the results will be the same? Does it really matter anyway?

Now, instrument variation within a model is a big deal! So I'm not letting manufacturers off the hook for vastly different results from the same target requests in the same software. That's not acceptable for those working in collaboration.
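
To illustrate why a CCT target alone underdetermines the white point: a whole line of chromaticities shares the same correlated colour temperature. A quick check using McCamy's published approximation (Python; illustrative only, not any vendor's actual method):

[code]
def mccamy_cct(x, y):
    """McCamy's approximation of CCT from CIE 1931 xy chromaticity."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

whites = {
    "D65 daylight point":      (0.3127, 0.3290),
    "greener white, same CCT": (0.3100, 0.3490),
}
for name, (x, y) in whites.items():
    print(f"{name}: CCT ~ {mccamy_cct(x, y):.0f} K")
[/code]

Both report roughly 6500 K, yet the second white is visibly greener than D65; two products can both claim to have hit "6500 K" and still not match.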
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Scott Martin

  • Sr. Member
  • ****
  • Offline
  • Posts: 1315
    • Onsight
Re: Monitor calibration sensor evaluations
« Reply #29 on: May 03, 2011, 09:46:57 am »

> Our tests of i1Profiler do indeed give more neutral and open shadows with an i1Pro than most other software. The calibration ain't perfect, however. We still visually see color crossovers where none exist with a profile made with, for example, ColorEyes and a DTP-94 (highlights and saturated colors are an entirely different matter).

Ethan, just to follow up. I'm continuing this testing and am finding that on some displays (like a Samsung 245T and Dell 24") I'm seeing dramatically improved results (visually and statistically) using an EyeOnePro device instead of a DTP94, even when the same software is used.

One of the interesting things that I see, but suspect most people don't, is how different two different types of monitors can look when calibrated the same way. Put a Cinema Display and a Samsung monitor on the same Mac, calibrate them both using the exact same settings, and marvel at the disappointing differences you'll see. Better yet, take a client like Whole Foods World HQ, where they've got 50+ designers and video professionals all in one area using a hodgepodge of different brands of displays. I'm finding that if they are all calibrated using a colorimeter (Spyder3 or DTP94), when you stand back and look at all of them in one room it's kinda surprising how much inconsistency there is between them. Calibrate all of them with a spectro (using CEDP or DispCal) and they are visually perfectly consistent! Combine that with i1Profiler, which does a better job of handling the shadows with a spectro than anything else, and you've got a truly superior solution. Problem solved.

While lots of users may only have one or two displays, these are the real-world challenges (10+ different types of displays all side by side in one room) that my business faces every day. My hands-on testing is showing that spectros have advantages over even the best colorimeters in some situations, and with i1Profiler I don't see any problems in the shadows like we've seen elsewhere. I'm not seeing any color crossovers or drawbacks with my particular pair of EyeOnes.

I'm going to stick to my guns here and suggest that, for now, i1Profiler with an i1Pro seems to be the answer to the question "when displays are calibrated with a variety of devices and applications which *combination* consistently yields the best results within a reasonable timeframe?"

Logged
Scott Martin
www.on-sight.com

Pictus

  • Full Member
  • ***
  • Offline
  • Posts: 216
    • Retouching
Re: Monitor calibration sensor evaluations
« Reply #30 on: May 03, 2011, 10:29:18 am »

> Ethan, just to follow up. I'm continuing this testing and am finding that on some displays (like a Samsung 245T and Dell 24") I'm seeing dramatically improved results (visually and statistically) using an EyeOnePro device instead of a DTP94, even when the same software is used.

Create a correction matrix with dispcalGUI + the i1Pro, then calibrate with the DTP-94?  :)
Logged

Scott Martin

  • Sr. Member
  • ****
  • Offline
  • Posts: 1315
    • Onsight
Re: Monitor calibration sensor evaluations
« Reply #31 on: May 03, 2011, 10:40:46 am »

> Create a correction matrix with dispcalGUI + the i1Pro, then calibrate with the DTP-94?  :)
Yes, really good stuff there! It does add a layer of complexity, though. If I need to go around and calibrate 50 different displays, would I do it with DispCal's ~45-minute process using a correction matrix, or with i1P's under-two-minute process where no correction file is needed? And how much is the display itself likely to change during that ~45-minute process?

Someone who has a colorimeter but no spectro might want to borrow one to make such a correction. But if one owns an i1Pro and i1Profiler, there's not a whole lot of point in going back to a colorimeter.

Lots of angles from which to look at this! I like to think of myself as an end user's advocate. What's the best solution for different types of end users? There are usually different solutions for different types of users.
Logged
Scott Martin
www.on-sight.com

Alan Goldhammer

  • Sr. Member
  • ****
  • Offline
  • Posts: 4344
    • A Goldhammer Photography
Re: Monitor calibration sensor evaluations
« Reply #32 on: May 04, 2011, 11:12:54 am »

In the FWIW department, I just went through the ArgyllCMS monitor calibration process, using a ColorMunki (I must have one from the good batch) to create a profile from 500 color patches. I used the same standard settings that I use with SpectraView II and the X-Rite wide-gamut puck that NEC distributes. Visually there was no perceptual difference between the profiles, but since I don't have any software to evaluate them, I cannot provide a definitive answer about how similar they are. It's clear that ArgyllCMS takes much longer than running SpectraView, and it may give better results since it can be configured in a number of ways SpectraView cannot. I have an i1 on order (backordered at X-Rite) and will repeat the process when it arrives, since it should provide better readings than the ColorMunki.
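
For anyone wanting to reproduce a run like Alan's, the ArgyllCMS sequence is roughly the following, shown as a thin Python wrapper around the command-line tools. The flags are from memory of the Argyll documentation, so treat this as a sketch and verify against the docs before relying on it:

[code]
import subprocess

def run(cmd):
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

name = "mymonitor"

# 1. Calibrate: -d1 first display, 6500K white, 120 cd/m^2, gamma 2.2
run(["dispcal", "-v", "-d1", "-t6500", "-b120", "-g2.2", "-qm", name])

# 2. Generate ~500 display test patches (video RGB)
run(["targen", "-v", "-d3", "-f500", name])

# 3. Measure the patches through the calibration just created
run(["dispread", "-v", "-d1", "-k", name + ".cal", name])

# 4. Build the profile (-as shaper+matrix; -al would build a LUT profile)
run(["colprof", "-v", "-qm", "-as", name])

# 5. Install and load it
run(["dispwin", "-d1", "-I", name + ".icm"])
[/code]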
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Monitor calibration sensor evaluations
« Reply #33 on: May 04, 2011, 11:24:22 am »

> In the FWIW department, I just went through the ArgyllCMS monitor calibration process, using a ColorMunki (I must have one from the good batch) to create a profile from 500 color patches. I used the same standard settings that I use with SpectraView II and the X-Rite wide-gamut puck that NEC distributes. Visually there was no perceptual difference between the profiles, but since I don't have any software to evaluate them, I cannot provide a definitive answer about how similar they are. It's clear that ArgyllCMS takes much longer than running SpectraView, and it may give better results since it can be configured in a number of ways SpectraView cannot. I have an i1 on order (backordered at X-Rite) and will repeat the process when it arrives, since it should provide better readings than the ColorMunki.

SpectraView creates a matrix profile (9 data points) while the process you are using most likely creates a LUT profile with several hundred data points, so perhaps that partly explains the difference in processing time. If you want independent verification of profile quality, you can use PatchTool from BabelColor (www.babelcolor.com). I used it to evaluate profiles for my NEC 271, and it revealed lower (i.e. better) dE readings for my basICColor 4.22 LUT profile versus my SpectraView II matrix profile, both generated using the same colorimeter (the NEC-customized i1 Display 2) and the same basic parameters for luminance, gamma and white point.
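
To unpack the "9 data points" remark: a matrix profile stores per-channel tone curves plus a single 3x3 of primary XYZ values, nine numbers in all, while a LUT profile tabulates hundreds of measured points and interpolates between them. A sketch of the matrix-profile forward transform (Python; the primaries shown are the nominal sRGB ones, purely for illustration, where a real monitor profile stores measured values):

[code]
import numpy as np

# Nominal sRGB primaries as XYZ columns: the "9 data points"
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def tone_curve(v, gamma=2.2):
    """Per-channel TRC: device value (0..1) -> linear light."""
    return np.asarray(v, dtype=float) ** gamma

def matrix_profile_forward(rgb):
    """Device RGB -> XYZ via tone curves plus a 3x3 matrix."""
    return RGB_TO_XYZ @ tone_curve(rgb)

print(matrix_profile_forward([1.0, 1.0, 1.0]))  # ~ D65 white point XYZ
[/code]

A LUT profile instead looks XYZ up in a measured grid, which is why it needs, and can benefit from, hundreds of patches.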
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Alan Goldhammer

  • Sr. Member
  • ****
  • Offline
  • Posts: 4344
    • A Goldhammer Photography
Re: Monitor calibration sensor evaluations
« Reply #34 on: May 04, 2011, 11:38:56 am »

> SpectraView creates a matrix profile (9 data points) while the process you are using most likely creates a LUT profile with several hundred data points, so perhaps that partly explains the difference in processing time. If you want independent verification of profile quality, you can use PatchTool from BabelColor (www.babelcolor.com). I used it to evaluate profiles for my NEC 271, and it revealed lower (i.e. better) dE readings for my basICColor 4.22 LUT profile versus my SpectraView II matrix profile, both generated using the same colorimeter (the NEC-customized i1 Display 2) and the same basic parameters for luminance, gamma and white point.
I didn't measure the sensor dwell time while it was reading the patches, but it was certainly one second at a minimum and likely a little more, so it takes about 10 minutes or so to measure those. I'll look at PatchTool, since I will need something to validate what is going on as I move down the Argyll path. I've also created a paper profile for Ilford Gold Fiber Silk with Argyll, using the ColorMunki to do the readings. It's a much better profile than the one done with the ColorMunki software, and I finally got a nice-looking sky in the Atkinson arch shot. It was pretty much right on, and I could probably tweak it a little bit more.

I presume from what you state that you are using basICColor to calibrate your monitor, correct?
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Monitor calibration sensor evaluations
« Reply #35 on: May 04, 2011, 12:39:21 pm »

Yes, correct.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Czornyj

  • Sr. Member
  • ****
  • Offline
  • Posts: 1948
    • zarzadzaniebarwa.pl
Re: Monitor calibration sensor evaluations
« Reply #36 on: May 04, 2011, 12:41:13 pm »

> SpectraView creates a matrix profile (9 data points) while the process you are using most likely creates a LUT profile with several hundred data points, so perhaps that partly explains the difference in processing time. If you want independent verification of profile quality, you can use PatchTool from BabelColor (www.babelcolor.com). I used it to evaluate profiles for my NEC 271, and it revealed lower (i.e. better) dE readings for my basICColor 4.22 LUT profile versus my SpectraView II matrix profile, both generated using the same colorimeter (the NEC-customized i1 Display 2) and the same basic parameters for luminance, gamma and white point.

There's no correction matrix for the i1D2 WG in basICColor 4.2.4, so it basically takes different readings than SpectraView II does (see Ethan's sensor evaluation results). basICColor 5 should support the i1D2 WG sensor (as this colorimeter is now part of basICColor's offering); I bet it will also give different results in PatchTool profile validation.
« Last Edit: May 04, 2011, 12:42:50 pm by Czornyj »
Logged
Marcin Kałuża | zarzadzaniebarwa.pl

Ethan_Hansen

  • Full Member
  • ***
  • Offline
  • Posts: 114
    • Dry Creek Photo
Re: Monitor calibration sensor evaluations
« Reply #37 on: May 05, 2011, 02:52:08 am »

Sorry for dropping off the face of the Earth for a few days - needed to do work that I was actually being paid for. There have been a number of questions about our results as well as many useful and thoughtful discussions in this thread. I'll do my best to address them. The questions are in three general categories: (1) Testing methodology, (2) Sensors (particularly the ColorMunki), and (3) everything else.

1: Testing Methodology (aka "do you guys know what the hell you are doing?!?")

Q: What software is used to drive the instruments and how does that factor into the results?
A: When possible, we used ArgyllCMS routines. These have the advantage of being completely customizable with the appropriate source code changes. Our intent was to determine the best possible results from each instrument. For spectrophotometers, this meant using very long integration times and heavy averaging when measuring darker values. The results we report, therefore, should be viewed as best-case for each sensor.

Q: How confident are you of the accuracy values reported in shadow levels?
A: That's an easy one. The spectroradiometer we used, a PR-730, has a luminance sensitivity of 0.0003 cd/m2. The minimum luminance we measured on a monitor was ~0.1 cd/m2, or over 300x the resolution of the instrument. The PR-730 is accurate to +/-2% in luminance sensitivity at 0.009 cd/m2, or to put it into other terms, we might be seeing 0.0998 or 0.1002 cd/m2 rather than 0.1. Color accuracy is a similarly ridiculous +/-0.0015% at a luminance 10x lower than any monitor can reach. Our PR-730 was factory calibrated to NIST-traceable standards within a few weeks of our evaluations.

2: Sensor questions

Q: What version of the ColorMunki did you test?
A: The Photo/Design version -- the spectrophotometer capable of emissive measurements for monitors and reflective measurements for prints. The ColorMunki Create is a repackaged Eye-One Display.

Q: What about XRGA? Are some sensors (e.g. ColorMunki) using this and would it make a difference?
A: Not to the best of our knowledge when controlled with Argyll's code. Based on X-Rite's XRGA whitepaper, measured color differences will be minimal for the Munki with or without XRGA.

Q: Are all ColorMunkis inaccurate or is it just the one you guys measured?
A: Having characterized only a single unit, we simply don't know. Until we can measure more samples, I will neither condemn nor exonerate the ColorMunki. Our results were disturbing, showing gross errors in black point measurements, but we might have tested a lemon unit. With the help of a third-party software supplier, we hope to get several more Munkis to evaluate. After verifying our first results showing high error levels in our ColorMunki sample, we emailed X-Rite to ask if they could send a few demo units our way. No response.

Q: For the Eye-One Pro, does using the raw 3nm measurement data help vs. using the default 10nm intervals reported by X-Rite's software?
[Explanation: The i1 Pro samples light at 3nm intervals. The data are noisy, and the default values reported are pre-smoothed to 10nm intervals. The output spectrum of either a CCFL or an LED backlight is spiky, with significant spikes narrower than 10nm. The question comes down to whether the default smoothing obliterates useful data.]
A: Again, Argyll comes to the rescue. It supports measuring at full resolution. The noise levels are indeed high, and feeding the raw values can create some pretty strange results. I geeked away on my latest plane trip, running some i1 readings through the FFT filtering routines we use in our profiling code. After suitable tweaks, I could get spectral curves closely approximating the 1nm sampling curves from our PR-730. I do not know if i1Profiler uses a similar technique, but I would not be surprised if it does. The dE measurements we reported used the default 10nm, smoothed data. Given that both the absolute magnitude of error on white with an i1Pro and the intra-unit variations were low, the 3nm sampling strategy is a refinement on an already good product. The problem area is in the shadows, where the measurements are noise-limited rather than influenced by spectral spikes.
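
A toy version of the FFT approach described above: sample a spiky "backlight" at 3 nm, add read noise, and low-pass it in the frequency domain. The spectrum, noise level, and cutoff below are all synthetic choices that show the mechanics, not Dry Creek's actual filter:

[code]
import numpy as np

rng = np.random.default_rng(1)
wl = np.arange(380, 731, 3.0)  # 3 nm sampling, like the i1Pro raw data

def gauss(center, width, height):
    return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

# Synthetic CCFL-like spectrum: narrow emission spikes on a low pedestal
clean = gauss(435, 6, 1.0) + gauss(545, 7, 1.2) + gauss(611, 6, 0.9) + 0.05
noisy = clean + rng.normal(0, 0.08, wl.size)  # invented read noise

# FFT low-pass: zero the high-frequency bins, invert back
spec = np.fft.rfft(noisy)
cutoff = spec.size // 3        # tuning knob: too low blurs the spikes
spec[cutoff:] = 0
smoothed = np.fft.irfft(spec, n=wl.size)

for label, est in (("raw", noisy), ("low-passed", smoothed)):
    rms = np.sqrt(np.mean((est - clean) ** 2))
    print(f"{label}: RMS error vs true spectrum = {rms:.4f}")
[/code]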

Q: Any thoughts on the BasICColor DISCUS?
A: Aside from the snazzy specs? With thanks to the good folks at CHROMiX, we hope to have one in-house for testing within a couple of weeks. Mark Paulson was kind enough to volunteer his DISCUS as well. Mark: If the offer still stands, I may take you up on it after we get a chance to run the first sample through its paces.

3: Everything else

Q: Which is the more important metric, the absolute sensor accuracy or how much sensor-to-sensor variation is seen? From Andrew: "Now instrument variations per model is a big deal! So I’m not letting manufacturers off the hook for vastly different results from the same target requests in the same software. That’s not acceptable for those working in collaboration."
A: Of the two, I would focus on the inter-unit variability. Different manufacturers calibrate their sensors to slightly different references. Seeing a few dE difference in average readings between sensor types can be attributed to calibration settings. The large unit-to-unit differences we saw in, for example, the Eye-One Display point to a sensor that cannot be relied on for accurate readings. The largest deviation we saw on the i1D2 was 14 dE-2000. To visualize this in Photoshop, fill a square with a middle grey [160, 160, 160] (sRGB or Adobe RGB - doesn't matter). Fill an adjoining square with [213, 213, 213]. That is 14 Delta E-2000, and that is not subtle.
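
For anyone who would rather check that figure in code than in Photoshop, the colour-science package (an assumption on my part; any CIEDE2000 implementation will do) gives the same ballpark number:

[code]
import numpy as np
import colour  # pip install colour-science

def srgb8_to_lab(rgb8):
    """8-bit sRGB -> CIE Lab (D65)."""
    xyz = colour.sRGB_to_XYZ(np.asarray(rgb8) / 255.0)
    return colour.XYZ_to_Lab(xyz)

lab_a = srgb8_to_lab([160, 160, 160])
lab_b = srgb8_to_lab([213, 213, 213])

de2000 = colour.delta_E(lab_a, lab_b, method="CIE 2000")
print(f"dE2000 between the two grays: {de2000:.1f}")  # ~14, as stated
[/code]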


Scott and Terry pose the question of which combination of software and hardware gives the best results. Determining this is the end goal of our exercise. As a first pass, we aimed to determine the best-case capability of each instrument. We are then cherry-picking the best sensors to use in the software comparisons, i.e. trying to eliminate as many variables as possible.

> Ethan, just to follow up. I'm continuing this testing and am finding that on some displays (like a Samsung 245T and Dell 24") I'm seeing dramatically improved results (visually and statistically) using an EyeOnePro device instead of a DTP94, even when the same software is used.
The Samsung 245T is a strange beast. It is a PVA panel with a moderately wide gamut. The expanded gamut is a function of the backlight on the 245T, so I am not surprised that the i1Pro spectrophotometer gives better results than the DTP-94. This correlates with our measurements. The DTP-94 contains a filter set that is farther from the standard observer than those found in the Spyder 3. Hence, uncorrected readings get progressively less accurate as the backlight spectrum difference between a particular panel and a standard CCFL increases. In our tests, the DTP-94 consistently turned in the least accurate white-level readings on all wide-gamut displays.
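
That mechanism can be sketched numerically: model the colorimeter's luminance filter as a slightly shifted version of the CIE curve, then see how the same filter error lands on a broad spectrum versus a spiky one. Everything here is a synthetic Gaussian stand-in, purely to show why the error is display-dependent:

[code]
import numpy as np

wl = np.arange(380, 731, 1.0)

def gauss(center, width, height=1.0):
    return height * np.exp(-0.5 * ((wl - center) / width) ** 2)

y_bar = gauss(555, 45)     # crude stand-in for the CIE y-bar curve
y_filter = gauss(547, 45)  # colorimeter filter, blue-shifted a little
y_filter *= y_bar.sum() / y_filter.sum()  # equal response to a flat spectrum

def luminance(spectrum, response):
    return float((spectrum * response).sum())

# Broad CCFL-ish spectrum vs spiky wide-gamut backlight (both invented)
smooth = gauss(450, 40, 0.8) + gauss(550, 50, 1.0) + gauss(620, 45, 0.9)
spiky = gauss(465, 8, 1.2) + gauss(532, 6, 1.5) + gauss(615, 7, 1.3)

for name, s in (("broad backlight", smooth), ("spiky backlight", spiky)):
    true_y = luminance(s, y_bar)
    meas_y = luminance(s, y_filter)
    print(f"{name}: luminance read off by "
          f"{100.0 * (meas_y - true_y) / true_y:+.1f}%")
[/code]

The same filter mismatch produces a different error on each backlight, which is why a single factory calibration cannot fit every display type and why per-display corrections help.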

shewhorn

  • Sr. Member
  • ****
  • Offline
  • Posts: 537
Re: Monitor calibration sensor evaluations
« Reply #38 on: May 15, 2011, 04:05:28 am »

> I'm going to stick to my guns here and suggest that, for now, i1Profiler with an i1Pro seems to be the answer to the question "when displays are calibrated with a variety of devices and applications which *combination* consistently yields the best results within a reasonable timeframe?"

Damn you, Scott. Now I'm going to have to go back and re-evaluate i1Profiler for monitor profiling. :-) During the testing I'd decided that CEDP was still king.

FWIW, I've had a preference for my Eye One Pro for calibrating and profiling my screens. I also have a Spyder 3, a DTP 94, and an i1D2, and the profiles produced by the Eye One Pro seem to be far more neutral than those from my other pucks. Using SpectraView in conjunction with the Eye One Pro on the NEC 2690, I am able to discern differences between 000 and 111, so what more can you ask for (bah... I actually have an answer for that, but I'm not permitting myself to say it because it will result in me pulling out the credit card)? In addition, the profile is definitely more neutral than the profiles created with the NEC-tweaked i1D2 that came with the SpectraView package. I did up the black level a smidge as I noticed some green casts in the shadows, but bumping it up to around 0.3 or 0.35 cd/m^2 (can't remember specifically) seemed to take care of the problems I had (or at least... there's nothing I can perceive that bothers me at the moment).

Ethan - THANK YOU for doing all of this work. I'm kind of jealous, as I love this type of white-box (and, um... occasionally black-box) testing and would have loved the chance to play with some of those toys.

Cheers, Joe
« Last Edit: May 15, 2011, 04:17:07 am by shewhorn »
Logged

Scott Martin

  • Sr. Member
  • ****
  • Offline
  • Posts: 1315
    • Onsight
Re: Monitor calibration sensor evaluations
« Reply #39 on: May 15, 2011, 01:20:16 pm »

> Damn you, Scott. Now I'm going to have to go back and re-evaluate i1Profiler for monitor profiling. :-) During the testing I'd decided that CEDP was still king.

I love the iterative nature of CEDP's process. CEDP and i1P with an EyeOnePro is a combination I'm studying on a variety of displays right now. I'm finding that i1P has an edge on some displays while, surprisingly, CEDP can have an edge on others. So there's a need to analyze the broader landscape and make educated, forward-thinking recommendations to clients.
Logged
Scott Martin
www.on-sight.com