Luminous Landscape Forum

Raw & Post Processing, Printing => Colour Management => Topic started by: Jonathan Wienke on May 12, 2009, 09:45:13 am

Title: Spectro Software Feature Request
Post by: Jonathan Wienke on May 12, 2009, 09:45:13 am
I had an idea the other day that I think would improve the quality of printer profiles, but could be applied to other devices as well. The general idea is to specify a desired level of profile accuracy, and then after printing and measuring a standard patch chart, dynamically create, print, and measure a second patch chart. Both the number of patches and the RGB/CMYK values of the patches in the second chart would be calculated to fill in the widest gaps in the measurements from the first patch chart, so that the final profile (made from measurements of both charts combined) has a high probability of meeting the specified accuracy standard. The procedure would work as follows:

1. Start out by printing and measuring a standard TC9.18 patch chart.

2. Do an analysis of the measured patches to identify areas where the measured patches have the greatest separation--measurements that have the greatest distance in 3D color space to their nearest neighbors.

3. Generate and print the second patch chart with RGB/CMYK values selected to fill in the widest measurement gaps, so that the specified profile accuracy (say 1 DeltaE) can be achieved with a >90% probability.

4. After waiting for the patch charts to dry properly, re-measure both the first and second chart, and generate the final profile from both sets of measurements combined.
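The gap analysis in steps 2 and 3 could be sketched roughly as follows. This is a hypothetical illustration, not any vendor's algorithm; the nearest-neighbour ranking, the device-space midpoint heuristic, and the function name are all assumptions:

```python
import numpy as np

def propose_fill_patches(rgb, lab, n_new=100):
    """Rank measured patches by the distance (in Lab) to their nearest
    neighbour, then propose new device-RGB values midway between the
    worst-separated pairs. Hypothetical sketch only."""
    rgb = np.asarray(rgb, dtype=float)   # N x 3 device values sent to the printer
    lab = np.asarray(lab, dtype=float)   # N x 3 measured CIELAB values
    # Pairwise Euclidean distances in Lab (a crude stand-in for dE).
    d = np.linalg.norm(lab[:, None, :] - lab[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nn = d.argmin(axis=1)                  # each patch's nearest neighbour
    gap = d[np.arange(len(lab)), nn]       # size of that measurement gap
    worst = np.argsort(gap)[::-1][:n_new]  # patches with the widest gaps
    # Midpoints in device space between each poorly covered pair.
    return (rgb[worst] + rgb[nn[worst]]) / 2.0
```

A production version would presumably iterate until the predicted interpolation error fell below the requested tolerance, rather than emitting a fixed number of patches.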

The second patch chart would automatically focus on areas where the device is most non-linear (where a small change in the RGB or CMYK values sent to the device causes a large change in the measured color value). This could be helpful when profiling monitors as well as printers; monitors often have significant non-linearities in the deep shadows and highlights. When profiling a monitor, the supplementary RGB values could be calculated and measured immediately: a second phase that starts after a brief delay to analyze the first set of measurements and select the additional RGB values to measure.

X-Rite? ColorEyes? Bueller? Bueller?
Title: Spectro Software Feature Request
Post by: Scott Martin on May 12, 2009, 11:56:53 am
Have you ever used the 2-step profiling process in Monaco Profiler or ColorMunki? It's fantastic, especially for devices that aren't greyscale balanced. MP has had this available for almost a decade now, but it's surprising how often it's overlooked.
Title: Spectro Software Feature Request
Post by: Jonathan Wienke on May 12, 2009, 12:25:36 pm
I've only used EyeOne and Spyder.
Title: Spectro Software Feature Request
Post by: Ethan_Hansen on May 12, 2009, 01:05:46 pm
Jonathan,

Scott beat me to it. What you describe is similar to the two-step process used by several applications, most recently the ColorMunki. A preliminary target is used to characterize the printer, a second to build the profile. The first such application I am aware of was Franz and Dan's ProfileCity suite from 1999 or 2000. MonacoProfiler came quickly thereafter, followed by Argyll, ColorVision (now Datacolor) and Fuji. The goal of all these products was to create a final profiling target whose colors were equally distributed in the printer's color space. This makes the calculations easier and, as you note, can improve profile accuracy. Basing the second (and possibly third, fourth, etc.) target(s) on a desired output accuracy is an intriguing idea, however.

Of the profiling applications on the market, most require that the target colors be either distributed evenly numerically (i.e. the initial target values are evenly spaced in the n-dimensional color space being profiled) or distributed fairly evenly in the printer color space (the output value spacing). ProfileMaker and Argyll allow some additional flexibility in color spacing, but it is distinctly possible to overload areas and throw the calculations off.

We took a slightly different approach for the profiles we build. Our base RGB target contains around 1100 color patches, some defining an evenly spaced color cube, the rest distributed across areas that printers typically have trouble in. We also include secondary fields of several hundred other patches that may or may not be needed. We then throw computational horsepower at the problem. We have the luxury of only using our code in-house, so there is no need to support systems other than servers stuffed with CPUs and memory. Implementing algorithms that would take much longer than normal folks want to wait is possible.

We find that there are two primary areas of concern from a profile perspective and one for measurement. The profiling problems stem from using the profile as a printer linearization tool. As you noted, if the color patches are spaced too far apart, interpolation errors can reduce profile accuracy. This is worst if the printer output has a sudden discontinuity occurring in a measurement gap. Inkjets tend to be the worst offenders here, although we see the same behavior with some silver-halide photo lab printers. A good calculation can detect that such a discontinuity exists, but the only solution is a profiling target with better coverage.
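A crude way to detect such a discontinuity along a single-channel ramp might look like this (illustrative only; the function name and the 3x-median threshold are invented for this sketch):

```python
import numpy as np

def flag_discontinuities(inputs, lab, factor=3.0):
    """Flag ramp steps where the measured colour changes far faster, per
    unit of device input, than the median step does -- a rough proxy for
    a sudden discontinuity falling inside a measurement gap.
    `factor` is an arbitrary illustrative threshold."""
    inputs = np.asarray(inputs, dtype=float)  # device values, ascending
    lab = np.asarray(lab, dtype=float)        # measured CIELAB per step
    rate = np.linalg.norm(np.diff(lab, axis=0), axis=1) / np.diff(inputs)
    return np.flatnonzero(rate > factor * np.median(rate))
```

Any flagged step marks a region where the next target would need denser patch coverage.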

A different issue arises in color areas where the printer or driver collapses a wide range of input values into a narrow output band (Epson, here's looking at you). Print output tends to "smudge," for lack of a better term. Good target resolution helps here as well, although the profile calculations play a more significant role.

Finally, there is the matter of the instrument used for the measurement. Hand-held i1 or ColorMunki scans are insufficiently accurate to make these sorts of profiling heroics worthwhile. The instrument drift between calibration cycles is too large, and the measurement resolution and accuracy are not up to the task. The ColorMunki is small, fast, and convenient, but its absolute measurement error is over 3x higher than other instruments (Spectrolino, iCColor, and the i1iSis) provide. Short-term repeatability, which governs the consistency of patch-to-patch measurements, is over 5x worse. You are not going to make a profile with 1 Delta-E accuracy using an instrument that, in the best case, is +/-0.6 dE and on some colors well north of +/-1 dE. The ColorMunki's spectral range also falls short at both the red and blue ends. The i1 fares better in this regard, but even when mounted on the i1iO table, accuracy suffers compared to more proficient instrumentation.
Title: Spectro Software Feature Request
Post by: Jonathan Wienke on May 12, 2009, 01:34:04 pm
Quote from: Ethan_Hansen
Jonathan,

Scott beat me to it. What you describe is similar to the two-step process used by several applications, most recently the ColorMunki. A preliminary target is used to characterize the printer, a second to build the profile. The first such application I am aware of was Franz and Dan's ProfileCity suite from 1999 or 2000. MonacoProfiler came quickly thereafter, followed by Argyll, ColorVision (now Datacolor) and Fuji. The goal of all these products was to create a final profiling target whose colors were equally distributed in the printer's color space. This makes the calculations easier and, as you note, can improve profile accuracy. Basing the second (and possibly third, fourth, etc.) target(s) on a desired output accuracy is an intriguing idea, however.

Oh well, at least I have confirmed the idea is sound. I was just throwing out the 1 DeltaE accuracy figure as an example. Whatever figure is chosen should obviously be selected based on the accuracy of the measuring device.
Title: Spectro Software Feature Request
Post by: digitaldog on May 12, 2009, 01:58:09 pm
Quote from: Jonathan Wienke
Oh well, at least I have confirmed the idea is sound. I was just throwing out the 1 DeltaE accuracy figure as an example. Whatever figure is chosen should obviously be selected based on the accuracy of the measuring device.

Yes, it's useful in the context of the device and two sample measurements.

Keep in mind, all this talk of deltaE is pretty simplistic in terms of imagery. It's a useful metric for defining differences in two solid colors. Far more useful is the new metric Henry Wilhelm has designed called iStar. It is weighted based on how we view images. You can find more info on his web site. I've been using it a great deal for a project, sending data to Henry to churn up. Point is, don't put a lot of weight on deltaE values beyond what they provide: a very simple measure of difference of a single set of colors. Yes, having thousands of patches and seeing the average deltaE, standard deviation, etc. tells us a lot about the accuracy of an output device against its input values. But it doesn't tell us anything useful about images, where the deltas are all weighted the same when, in fact, we perceive differences quite differently depending on where in color space they fall.
Title: Spectro Software Feature Request
Post by: Ethan_Hansen on May 12, 2009, 03:33:29 pm
Quote from: digitaldog
Yes, it's useful in the context of the device and two sample measurements.

Keep in mind, all this talk of deltaE is pretty simplistic in terms of imagery. It's a useful metric for defining differences in two solid colors. Far more useful is the new metric Henry Wilhelm has designed called iStar. It is weighted based on how we view images. You can find more info on his web site. I've been using it a great deal for a project, sending data to Henry to churn up. Point is, don't put a lot of weight on deltaE values beyond what they provide: a very simple measure of difference of a single set of colors. Yes, having thousands of patches and seeing the average deltaE, standard deviation, etc. tells us a lot about the accuracy of an output device against its input values. But it doesn't tell us anything useful about images, where the deltas are all weighted the same when, in fact, we perceive differences quite differently depending on where in color space they fall.

I don't follow you. Wilhelm's iStar is designed to give a numerical value to his permanence ratings, allowing a metric to track how a particular print changes over time. The iStar formula breaks out hue, tone, and chroma as well as adds in a print contrast term. Hue, tone, and chroma are lumped together in Delta E (dE) formulas. Print contrast is valuable if one is tracking an individual print, but has little to do with evaluating profile accuracy. The iStar software has helpful reporting features, indicating whether tone or hue dominates the color shift as well as which colors shift the most. Similar information can be extracted for dE evaluations -- format your data sensibly and view in MeasureTool or ColorThink.

Saying that Delta E values provide "a very simple measure of difference of a single set of colors" is misleading. They provide a very precise measure of difference over any number of colors. The original Delta E metric is indeed simple: one dE difference means two colors are separated by one CIELAB value; e.g. L*a*b* (0, 0, 0) to (1, 0, 0). Each dE unit was further intended to be the minimum visible color difference. CIELAB was designed to be a uniform color space, where the spacing between each value was visually identical. Unfortunately, that did not work out perfectly; CIELAB is not a perceptually uniform color space. A dE (original 1976 version) of 3 is not visible in saturated yellow, while your eyes can distinguish midtone grays falling half a dE apart.
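That original formula really is this simple; a minimal sketch (the later dE-1994 and dE-2000 revisions add lightness, chroma, and hue weighting terms and are considerably more involved):

```python
import numpy as np

def delta_e_1976(lab1, lab2):
    """CIE 1976 colour difference: plain Euclidean distance between
    two CIELAB triplets (L*, a*, b*)."""
    lab1 = np.asarray(lab1, dtype=float)
    lab2 = np.asarray(lab2, dtype=float)
    return float(np.linalg.norm(lab1 - lab2))
```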

In the years since the original 1976 specification there have been a number of revisions to the dE equation. The latest is dE-2000, which does a commendable job of treating color shifts equally across the visible range. This is exactly what we want when evaluating profile accuracy.

A valid point that Wilhelm makes about iStar is that dE breaks down for large shifts. My copy of Wyszecki and Stiles (http://www.amazon.com/exec/obidos/ASIN/0471399183/drycreekphoto-20) states that above 10 dE-1994 units, human vision no longer sees color differences linearly (the book was published in 2000, so no word on the limits of dE-2000; both models are primarily concerned with smaller shifts). Wilhelm's fading work requires tracking color shifts of larger magnitudes than this, so a modified metric is essential.

Getting back to printer profiling, having models that are only accurate up to 10 dE is no problem. A 10 Delta E-2000 color shift is not subtle. Make an inkjet print on glossy paper using a matte paper profile and driver settings and you will have an average dE-2000 of around 8. I think we want better profiles than that. Evaluating profiles based on dE-2000 is useful, valuable, and informative. This should be checked both for the profile itself (push numbers through to self-check the output and input sides) and on actual prints. The first check is whether the dE distribution is Gaussian. If so, mean dE and standard deviation are all one needs. If the errors are non-Gaussian, this points to a profile construction problem or, going back to Jonathan's original post, a target that did not provide sufficient resolution into the behavior of the printer. A quick check will highlight the problem areas and a new target can be generated. Damn. Numbers actually are useful.
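The self-check described above (mean, standard deviation, and a look at whether the dE distribution is roughly Gaussian) might be sketched as follows. The skewness cut-off is an arbitrary assumption, and since dE values are non-negative, their distribution is only ever approximately Gaussian:

```python
import numpy as np

def de_summary(de_values, skew_tol=0.5):
    """Summarise per-patch dE values and flag a markedly skewed
    (non-Gaussian) distribution, which may point to a profile
    construction problem or insufficient target coverage.
    `skew_tol` is an arbitrary illustrative threshold."""
    de = np.asarray(de_values, dtype=float)
    mean = de.mean()
    std = de.std()  # population standard deviation
    # Sample skewness: near 0 for a symmetric, Gaussian-like distribution.
    skew = ((de - mean) ** 3).mean() / std ** 3
    return {"mean": mean, "std": std, "skew": skew,
            "roughly_gaussian": abs(skew) < skew_tol}
```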
Title: Spectro Software Feature Request
Post by: digitaldog on May 12, 2009, 04:00:19 pm
Quote
I don't follow you. Wilhelm's iStar is designed to give a numerical value to his permanence ratings, allowing a metric to track how a particular print changes over time.

That's initially why Henry started it, but it's evolved far beyond that. He's working on using it, as I said, as a metric for differences in how we perceive images.

Quote
Saying that Delta E values provide "a very simple measure of difference of a single set of colors" is misleading. They provide a very precise measure of difference over any number of colors.

Solid colors, yes, but it tells us little if anything about color appearance, nor is the color model it's based on, with its various warts, a color appearance model. It's useful, no question. But there are a slew of areas where it shouldn't be used as a definitive point of reference.

Probably one of the best posts ever on these issues is from the late Bruce Fraser. Note too that the work Henry is doing is an attempt to provide better data for imagery:


CIE colorimetry is a reliable tool for predicting whether two given solid colors will match when viewed in very precisely defined conditions. It is not, and was never intended to be, a tool for predicting how those two colors will actually appear to the observer. Rather, the express design goal for CIELab was to provide a color space for the specification of color differences. Anyone who has really compared color appearances under controlled viewing conditions with delta-e values will tell you that it works better in some areas of hue space than others.

When we deal with imagery, rather than matching plastics or paint swatches, a whole host of perceptual phenomena come into play that Lab simply ignores.

Simultaneous contrast, for example, is a cluster of phenomena that cause the same color under the same illuminant to appear differently depending on the background color against which it is viewed. When we're working with color-critical imagery like fashion or cosmetics, we have to address this phenomenon if we want the image to produce the desired result -- a sale -- and Lab can't help us with that.

Lab assumes that hue and luminance can be treated separately -- it assumes that hue can be specified by a wavelength of monochromatic light -- but numerous experimental results indicate that this is not the case. For example, Purdy's 1931 experiments indicate that to match the hue of 650nm monochromatic light at a given luminance would require a 620nm light at one-tenth of that luminance. Lab can't help us with that. (This phenomenon is known as the Bezold-Brücke effect.)

Lab assumes that hue and chroma can be treated separately, but again, numerous experimental results indicate that our perception of hue varies with color purity. Mixing white light with a monochromatic light does not produce a constant hue, but Lab assumes it does -- this is particularly noticeable in Lab modelling of blues, and is the source of the blue-purple shift.

There are a whole slew of other perceptual effects that Lab ignores, but that those of us who work with imagery have to grapple with every day if our work is to produce the desired results.

So while Lab is useful for predicting the degree to which two sets of tristimulus values will match under very precisely defined conditions that never occur in natural images, it is not anywhere close to being an adequate model of human color perception. It works reasonably well as a reference space for colorimetrically defining device spaces, but as a space for image editing, it has some important shortcomings.


Quote
In the years since the original 1976 specification there have been a number of revisions to the dE equation. The latest is dE-2000, which does a commendable job of treating color shifts equally across the visible range. This is exactly what we want when evaluating profile accuracy.

Better, but not exactly what we want. And you know why all the updates....

Title: Spectro Software Feature Request
Post by: MHMG on May 12, 2009, 07:06:59 pm
Quote from: Ethan_Hansen
I don't follow you. Wilhelm's iStar is designed to give a numerical value to his permanence ratings, allowing a metric to track how a particular print changes over time.

The I* metric was developed initially with image permanence research in mind, but it has the capability to score on a percentile basis any loss of color and tonal accuracy between a reference image and a comparison image.  While the reference may be a printed image before fading, and the comparison may be the same printed image after fading or other aging tests, the reference/comparison pair can just as easily be an original print versus a copy or proof print. Or the reference can be the color data of a source digital file while the comparison image is measured as the actual colors and tones posted on an electronic display. Thus, the I* metric has strong applicability to initial image quality studies as well as image permanence studies.

Full chroma weighting and near neighbor image contrast evaluation are two necessary features of a good color and tonal accuracy metric when evaluating real image content rather than just two side-by-side colors which have no contextual significance other than the fact that they are slightly different colors. Delta E and its various flavors (delta E 2000, etc.) possess neither capability. The I* metric is not just about tracking large changes between a reference image and its comparison image that overwhelm the perceptual scaling significance of delta E models. If, for example, spatial information content in an image is recorded by only small L* value differences between neighboring image elements (a shallow tonal gradient), and those subtle L* variations expand or contract when the reference image is reproduced, then the information content contained in that area of the image is also compromised (ah, those darned highlight, midtone, and shadow details).   For an introduction to the I* metric that explains these considerations in greater detail, please visit the documents page of the AaI&A website and download the article entitled "An Introduction to the I* Metric".

http://www.aardenburg-imaging.com/documents.html (http://www.aardenburg-imaging.com/documents.html)


Best regards,

Mark
http://www.aardenburg-imaging.com (http://www.aardenburg-imaging.com)
Title: Spectro Software Feature Request
Post by: digitaldog on May 12, 2009, 08:15:55 pm
Quote from: MHMG
For an introduction to the I* metric that explains these considerations in greater detail, please visit the documents page of the AaI&A website and download the article entitled "An Introduction to the I* Metric".
http://www.aardenburg-imaging.com/documents.html (http://www.aardenburg-imaging.com/documents.html)

Excellent, thanks for the link. I started reading and have to say my initial post ("Far more useful is the new metric Henry Wilhelm has designed called iStar") deserves to be corrected and of course credited to none other than Mark H. McCormick-Goodhart. My apologies.
Title: Spectro Software Feature Request
Post by: MHMG on May 13, 2009, 07:47:59 am
Quote from: digitaldog
Excellent, thanks for the link. I started reading and have to say my initial post ("Far more useful is the new metric Henry Wilhelm has designed called iStar") deserves to be corrected and of course credited to none other than Mark H. McCormick-Goodhart. My apologies.

Andrew,

You are not the one who should apologize. It is perfectly understandable that you would have assumed the I* metric is 100% WIR because the WIR website presents it that way. Giving proper credit where credit is due is a simple code of ethics to follow. It doesn't cost a penny, but speaks volumes about personal integrity.

Like CIELAB itself, the mathematics of the I* metric are open source, i.e. non-proprietary, and were published in November of 2004. The functions are relatively easy to program into a spreadsheet application like Excel, which is how I use the I* metric in my digital print research at AaI&A. That the I* metric is superior to color difference models when evaluating changes in image color and tone is trivial to demonstrate with a few simple image manipulations in LAB mode in Photoshop, although I imagine the color science community may want to hold out for a more formal psychophysical study before giving I* its full blessing. Dedicated software applications like WIR-iStar that can calculate I*color and I*tone scores are a logical step forward in the evolution of the I* metric. I think the I* metric could be a really useful color tool as a plug-in for Photoshop or added to X-Rite's MeasureTool, etc., and I'd be interested in collaborating with any programmers who'd like to explore other possibilities. I'm truly pleased to see that the I* metric is finally beginning to gather some interest in the imaging community.

best regards,

Mark
http://www.aardenburg-imaging.com (http://www.aardenburg-imaging.com)
Title: Spectro Software Feature Request
Post by: Davi Arzika on May 13, 2009, 07:56:02 am
Quote from: Onsight
Have you ever used the 2 step profiling process in Monaco Profiler or ColorMunki? It's fantastic, especially for devices that aren't greyscale balanced. MP has had this available for almost a decade now but it's surprising how it's overlooked.

Scott, what do you mean by the 2-step profiling process in Monaco Profiler? Can you elaborate on this? I have Monaco Profiler but have never tried this process. Shame on me :-)
Title: Spectro Software Feature Request
Post by: digitaldog on May 13, 2009, 09:13:33 am
Quote from: Davi Arzika
Scott, what do you mean by the 2-step profiling process in Monaco Profiler? Can you elaborate on this? I have Monaco Profiler but have never tried this process. Shame on me :-)

There's a linearization-then-profile option, but it is not the same as the iterative process in the ColorMunki. My NDAs don't let me go into more detail, but the chief color scientist at X-Rite has built pretty impressive functionality into this newer process, using a tiny number of patches in comparison to other packages. As for ColorMunki profile quality, while we could look at numbers alone, I can say that profiles I built in early testing were on par, at least once ink hit paper, with an iSis and a 2700-plus-patch target package in ProfileMaker Pro.
Title: Spectro Software Feature Request
Post by: digitaldog on May 13, 2009, 09:14:55 am
Quote from: MHMG
Dedicated software applications like WIR-iStar that can calculate I*color and I*tone scores are a logical step forward in the evolution of the I* metric. I think the I* metric could be a really useful color tool as a plug-in for Photoshop or added to X-Rite's MeasureTool, etc., and I'd be interested in collaborating with any programmers who'd like to explore other possibilities.

Let's talk off-list; I think that would also be useful, and I've got the ears (along with Karl Lang, who says hello) of both teams who could implement this.
Title: Spectro Software Feature Request
Post by: Scott Martin on May 13, 2009, 10:17:42 am
Quote from: Davi Arzika
Scott, what do you mean by the 2-step profiling process in Monaco Profiler? Can you elaborate on this?

MP allows one to print a small "linearization" target, measure it, and it then creates a customized profiling target that takes into consideration the non-linear nature (the native tonal response curve) of that device.

Most MP users use ColorPort for target generation and measurement and simply take the final measurements into MP for profile generation. So, in ColorPort, generate, print, and measure a "Linearization 40 step" target. Then create a new target with one of the "XRite Profile" options that are compatible with MP, check the Customize button, check the Linearization button, and select the measurement file for the linearization target you've just measured. When you hit OK you'll see the generated target color patches change into customized target colors.

This two-step process is unnecessary for well-behaved, grey-balanced processes like today's modern inkjet printers and papers. The process is best suited to processes that aren't as well grey balanced, like silver-halide printers, and solvent and UV-curable printers that use RIPs that don't perform a very good linearization prior to profiling.

I was told I was the only person outside of X-Rite who provided product testing and feedback when the ColorMunki was being developed in early 2007. The Munki's new engine represents an evolution of thought and process from MP's approach. We focused carefully on silver-halide and dye-sub photo printers when testing the new engine, as they are some of the most challenging and poorly behaved devices to try to profile. The Munki proved to work fantastically well, superior to all other packages in this context, and it did so with remarkably few patches. A little more control over Perceptual rendering (like what MP has) would be nice, as would the ability to use high-end spectros.

Anyway, I hope this helps. Davi, what printer are you considering using this process with? It's important not to go through the extra hassle if you don't have to. In fact, 2 step profiling with MP can perform worse than a 1 step profile in the wrong context.

Title: Spectro Software Feature Request
Post by: Ethan_Hansen on May 13, 2009, 03:23:12 pm
Quote from: Onsight
Anyway, I hope this helps. Davi, what printer are you considering using this process with? It's important not to go through the extra hassle if you don't have to. In fact, 2 step profiling with MP can perform worse than a 1 step profile in the wrong context.

This is an important point. MP's linearization works, with some caveats, for CMYK profiling. If you are building RGB profiles, the multiple steps required in the translation can indeed backfire. Generally speaking, the more an RGB-driven printer needs linearization, the less well MP's lin-tool works. Also, if the paper substrate contains significant OBA levels, fugeddaboutit.
Title: Spectro Software Feature Request
Post by: Scott Martin on May 13, 2009, 04:09:13 pm
Quote from: Ethan_Hansen
Generally speaking, the more an RGB-driven printer needs linearization, the less well MP's lin-tool works.
Hmm, I'm not sure if I understand you. If a device is well linearized then 1 step profiling works great. If a device doesn't have a good linearization then a 2 step profile will really help overcome a poor linearization. Perhaps that is what you are saying too.

Quote from: Ethan_Hansen
Also, if the paper substrate contains significant OBA levels, fugeddaboutit.
UV-filtered devices solve any problems with OBAs and, surprisingly, actually solve problems with some papers that don't contain them.
Title: Spectro Software Feature Request
Post by: tived on May 13, 2009, 11:51:37 pm
Quote from: Onsight
Hmm, I'm not sure if I understand you. If a device is well linearized then 1 step profiling works great. If a device doesn't have a good linearization then a 2 step profile will really help overcome a poor linearization. Perhaps that is what you are saying too.


UV-filtered devices solve any problems with OBAs and, surprisingly, actually solve problems with some papers that don't contain them.

Hi guys,

What an interesting thread! I don't have anything to contribute, but I do have a question.

In order to achieve the best possible profiles, what would be the recommended hardware and software? I have been told by others that the i1 Pro has issues with specular highlights reflecting off textured paper. Can anyone confirm or deny this?

Where would one start, without having to spend $10k+, but still get very good profiles? Yes, it is probably a color management 101 question. I have just finished preparing images in CMYK for a press in a different country. The hard part for me was not knowing my end points/target...such as how black it would print and how much the colors would shift. Maybe this is just something that comes with experience?

Yes, they did supply a profile (FOGRA 29L, uncoated), but being a visual person in a technical world, it helps me when I actually have a visual; luckily I had gotten it right.

But I would like to be able to better predict what the outcome is going to look like, from the comfort of my own office; not that I mind flying :-)

Thanks very much. Keep this thread going, it's very interesting! Thanks all.

Henrik
Title: Spectro Software Feature Request
Post by: Scott Martin on May 14, 2009, 11:37:41 am
Quote from: tived
In order to achieve the best possible profiles, what would be the recommended hardware and software?
Hard to say without knowing more about your equipment, usage and demands for quality. My gut feeling is that some onsite color management work with color-by-the-numbers prepress workflow training might not only be beneficial but could cost less than getting into professional profiling equipment. You might look into what options you have for this in your area.
Title: Spectro Software Feature Request
Post by: papa v2.0 on May 15, 2009, 06:49:25 am
Hi

I* sounds quite interesting.
So how would it be used in the real world? Is it a proposed metric for image difference between an original and its reproduction?

I have read the published paper but am still a bit confused.




Title: Spectro Software Feature Request
Post by: MHMG on May 15, 2009, 09:39:07 am
Quote from: papa v2.0
Hi

I* sounds quite interesting.
So how would it be used in the real world? Is it a proposed metric for image difference between an original and its reproduction?

The I* metric was originally conceived as an algorithm to objectively evaluate and score color and tonal accuracy for imaging systems that exhibit, or have to contend with, large shifts in color between the source image and a second image of the same scene which is to be visually compared to the source image. This happens almost all the time when trying to reproduce full-dynamic-range digital images containing rich highlight and shadow details onto a reflection print material that by its physical nature cannot support as large a color and tonal range. Strong tonal compression and color remapping must be invoked to make a visually coherent "translation," or rendering, of the image onto the paper. Color difference models work really well when source and output have similar gamuts and a near-perfect match can be achieved, but they aren't very useful in the situation I just described, where significant color remapping must take place. That's where a metric like I* is needed.

A couple of real-world examples: 1) we might want to judge initial appearance and aged appearance in image permanence studies, where the source image is the print in its original condition and the comparison image is the print in greatly faded condition. 2) A printing company may want to evaluate two different papers for color and tonal output quality on a particular printer and be able to share the results objectively with its clients. By scoring on a percentile basis, the I* metric gives even non color geeks a fighting chance of understanding what the scores mean. If a printmaker tells the average customer "the print I just made deviated from your digital camera file on average by 20 delta E", that means nothing to most people and is in fact outside the useful range of perceptual scaling significance even to color scientists. If I said, "the print I just made retains 90% color accuracy (hue and chroma) and 85% tonal accuracy (lightness and contrast) compared to your digital image file", most people will intuitively have a feel for what the percentile rank scores mean. They could probably guess that 100% would have been a perfect score, whereas most customers would have no idea that zero delta E would have been a perfect score.

Note that the "perfect match" case is the one point in color reproduction terms where both delta E models and the I* metric score the reproduction with equal success.  As the errors get larger, the scoring advantage increasingly favors the I* metric.

Is I* a proposed metric?  I guess so. The I* metric was published as a technical paper in the NIP20 conference proceedings of the Society for Imaging Science and Technology in November 2004.  Other researchers can try it out and use it if they think it has value.  The color science community hasn't given it a "blessing" yet, but researchers, especially in the museum and archives community dealing with works of art on display, are beginning to take serious interest in it.  Also, both WIR and AaI&A use it, but WIR so far has not switched its image permanence testing services over to I*.  There are undoubtedly legacy issues to face whenever companies move from one test standard to a newer one.  AaI&A is a new company with no such legacy issues, so all of its testing is I* based.  Because I invented the I* metric, I'm obviously biased about its usefulness, but after five years of throwing lots of visual appearance problems in image reproduction at it to see if I can trick it into generating nonsensical scores, I haven't been able to do it. I can demonstrate nonsensical delta E results with incredible ease. And as for densitometric test models, let's not even go there!

If you'd like to see I* in action with a simple, well-known image target (Macbeth ColorChecker colors) in light fading studies of systems that exhibit significant color changes and catalytic fading problems as they are exposed to light, visit my website and download some of the examples for dye-based systems like the Epson 1270 (blue-colored links have public access).

http://www.aardenburg-imaging.com/acceleratedagingtests.html

Follow the link for Light Fade Test Results

cheers,

Mark
http://www.aardenburg-imaging.com
Title: Spectro Software Feature Request
Post by: papa v2.0 on May 15, 2009, 10:09:23 am
Hi
thanks for the reply

Yes I agree we need a way of telling how accurate the reproduction is as a whole compared to the original.

Do you include some sort of viewing standard when calculating the I* value?

I see that the system is image dependent. Is there a mechanism for reporting which elements of the image are in error? If, for example, I* reported 85% colour accuracy, which colours are out? (Is it the sky, or the model's jacket, or the product colour?)

If at the end of the day the reproduction goal is to produce a 'pleasing' reproduction (for argument's sake) of the scene and not a colorimetric reproduction, how would I* fit in?
Title: Spectro Software Feature Request
Post by: MHMG on May 15, 2009, 01:17:03 pm
Quote from: papa v2.0
Hi
 
Do you include some sort of viewing standard when calculating the I* value?

I see that the system is image dependent. Is there a mechanism for reporting which elements of the image are in error? If, for example, I* reported 85% colour accuracy, which colours are out? (Is it the sky, or the model's jacket, or the product colour?)

If at the end of the day the reproduction goal is to produce a 'pleasing' reproduction (for argument's sake) of the scene and not a colorimetric reproduction, how would I* fit in?

The I* metric uses the CIELAB color model as its underlying architecture, so viewing standards are handled by the illuminant assumption you make when measuring the LAB values before processing the I* math.
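As a concrete illustration of where that illuminant assumption enters, here is a minimal sketch using only the standard CIE conversion formulas (nothing I*-specific; the white points are the usual published values), showing how measured XYZ tristimulus values land at LAB coordinates that depend on the assumed illuminant:

```python
# Minimal sketch: CIE XYZ -> CIELAB conversion relative to a chosen
# illuminant white point.  Standard CIE formulas; not part of the I* math
# itself, just the step where the viewing/illuminant assumption is fixed.

D50 = (0.9642, 1.0000, 0.8249)  # common ICC profile connection space illuminant
D65 = (0.9504, 1.0000, 1.0888)  # common display illuminant

def xyz_to_lab(X, Y, Z, white=D50):
    """Convert XYZ tristimulus values to (L*, a*, b*) for the given white point."""
    Xn, Yn, Zn = white
    def f(t):
        # CIE piecewise function: cube root above the small linear segment
        if t > (6.0 / 29.0) ** 3:
            return t ** (1.0 / 3.0)
        return t / (3.0 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)
```

The illuminant's own white point maps to LAB 100, 0, 0 by construction, which is why the illuminant assumed at measurement time has to be fixed before any I* (or delta E) numbers are compared.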

We still don't have really great artificial intelligence algorithms that will take an original scene and capture and process it for color and tone in a way that pleases everyone (though many digital camera companies do have proprietary ways to produce "pleasing" color that they think the majority of their customers will like "out of the box"). In fact, it's obviously an impossible task to please everyone. Long live custom layer edits in Photoshop!  You may like skintones warm in a particular scene, for example, while I may prefer them cooler.  Anyway, I* enters the workflow at the point where you have decided what a good source image should be. That image becomes your reference image for the I* calculations.  In image permanence testing, for example, the assumption is that the original print (whether it's the most pleasing print or not) contains the color and tonal qualities you are trying to preserve. One could find a situation, for example, where a print that is too dark fades and lightens in a way that becomes more pleasing to most observers over time, before it fades too far and becomes less pleasing than the original. The I* metric would not score it as getting "better" and then getting worse.  The reference print was dark to begin with, so accurate color and tone scores mean it should stay that way.

Thus, the basic assumption with I* is that you now want to bolt down your preferred color and tonal relationships and reproduce them with as much colorimetric accuracy as possible.  Once you've got your aimpoint reproduction in mind and have edited your preferences into your digital image file, then I* can take it from there and tell you downstream how the subsequent reproduction choices are stacking up in terms of retaining the chosen color and tonal quality. In other words, once you've created your "pleasing" image, the whole process at that point becomes one of accurate colorimetric matching to the extent that it is possible. There's the rub. It is often not possible, so I* can tell you how far you have strayed and, as you suggest, even which localized parts of the image are suffering more in the reproduction than others. The I* analysis is all about color and tone distributions in an image. Specific images have specific colors. They get sampled as an ordered array of locations (i.e., spatial frequency analysis) within the image and then summed and averaged by the I* method to produce the overall score.
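That sample-and-average structure can be sketched with ordinary CIE76 delta E standing in for the published I* scoring math. This is not the I* formula, just an illustration of the spatial bookkeeping, using a toy 2x2 image:

```python
import math

def delta_e76(lab1, lab2):
    """Plain CIE76 difference between two (L*, a*, b*) triples (illustration only)."""
    return math.dist(lab1, lab2)

# Toy 2x2 "image": each entry is an (L*, a*, b*) triple.  The reproduction
# is 5 L* units darker than the reference at every sampled location.
ref = [[(50.0, 0.0, 0.0), (50.0, 0.0, 0.0)],
       [(50.0, 0.0, 0.0), (50.0, 0.0, 0.0)]]
out = [[(45.0, 0.0, 0.0), (45.0, 0.0, 0.0)],
       [(45.0, 0.0, 0.0), (45.0, 0.0, 0.0)]]

# Sample an ordered array of locations, then sum and average.
de_map = [[delta_e76(r, o) for r, o in zip(ref_row, out_row)]
          for ref_row, out_row in zip(ref, out)]
overall = sum(sum(row) for row in de_map) / 4   # whole-image score
region = sum(de_map[0]) / 2                     # a localized region of interest
```

Because every location is scored before averaging, a localized region (the sky, a jacket) can be reported separately from the whole-image number, which is how image-specific error tracking falls out of the same machinery.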

The whole "pleasing versus accurate" endeavor is why, for example, many printmakers build a "master" file with carefully chosen edits that gives them their "ideal" color and tone, then turn on softproofing in Photoshop and add final edit layers to try to "pop" the profile-translated color and tone back into better visual alignment with the source image before committing to printer output.  An I* plug-in in Photoshop, for example, could help to objectively guide your visual edits by telling you when you are getting closer or farther away. It's possible that the I* metric could even give profiling applications some feedback that could help produce a "smart CMM".  There's a lot of research potential for further development of the I* metric. We've just scratched the surface.

Finally, just to make sure I've fully answered your question about tracking specific regions of interest in the image with the I* metric: a robust I* software application can do what you are suggesting and track specific colors or even localized areas within the image and give you selective I* scores for those specialized regions. Some "colorgeek" apps have this image-specific tracking capability now, but only using delta E.  The WIR iStar comparative image analysis software was designed to perform this type of evaluation using the I* math, as well as having the conventional delta E methods available to the user.

best regards,

Mark


Title: Spectro Software Feature Request
Post by: papa v2.0 on May 15, 2009, 03:54:06 pm
Hi Mark

How does it take into account an image that is in one colour space and printed to another colour space using perceptual as the rendering intent?
Would that not give poor I* values, as all the colorimetric values could be changed?
Title: Spectro Software Feature Request
Post by: MHMG on May 15, 2009, 05:43:46 pm
Quote from: papa v2.0
Hi Mark

How does it take into account an image that is in one colour space and printed to another colour space using rel col as the rendering intent?
Would that not give poor I* values, as all the colorimetric values could be changed?


Ideally, in a perfect reproduction the measured LAB values for the reference image in one colorspace match the LAB values of the comparison image in another colorspace, because under both colorspace viewing conditions the assumption is that you have fully adapted to the illuminant. LAB theory says, for example, that neutral gray under one illuminant still looks like the same neutral gray under another illuminant, unless you have not adapted to each illuminant when you viewed the samples. The I* metric is indeed comparing reference image to output image in absolute colorimetric terms. That is by definition what colorimetric accuracy is all about.  The metric is not trying to compensate for relative colorimetric translations. On the contrary, it is intended to show you how much accuracy loss occurred as you made the translation.

For example, all working colorspace models (sRGB, Adobe RGB, ProPhoto, etc.) and all monitor display profiles map monitor white to LAB 100, 0, 0, which of course no reflection print can attain, and monitor black to LAB 0, 0, 0, which again no reflection print media can attain. That's why the need for an I* metric.  So, when you render monitor white relative to the paper white, you get no colorant being formed, and your actual paper white LAB value may be very different: not only won't L* = 100, but the color may not have a* = 0 or b* = 0 either. Yet the I* metric is looking at your reference image aimpoint (LAB 100, 0, 0). You didn't get there, so there is an error and perfect I* scores aren't obtained. That's reality. So yes, the more your media deviates from neutral white and is darker than L* = 100, the more the max black in the system deviates from LAB 0, 0, 0, and the more compressed your color gamut is, the more constrained the system is and the more translation must take place in terms of printing your digital file to paper, especially if your digital file contains perfect whites and perfect blacks. I* scores do indeed go down.
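To put a number on that unavoidable error: the gap between the LAB 100, 0, 0 aimpoint and a real paper white is just a color difference. A minimal sketch with a hypothetical paper-white measurement (plain CIE76 math, not the I* scoring):

```python
import math

def delta_e76(lab1, lab2):
    """Plain CIE76 color difference (illustration only; I* scoring differs)."""
    return math.dist(lab1, lab2)

ideal_white = (100.0, 0.0, 0.0)  # monitor / working-space white aimpoint
paper_white = (96.0, 1.0, -4.0)  # hypothetical reflection-print paper white

# No rendering choice can remove this floor: the paper simply cannot
# reach the reference aimpoint, so a perfect score is unattainable.
unavoidable = delta_e76(ideal_white, paper_white)
```

With these made-up numbers the floor is about 5.7 delta E at the white point alone, before any gamut compression elsewhere in the image is counted.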
That said, a good ICC profile will do a better job than a bad one even under relative colorimetric mapping translations, so the better your relative colorimetric rendering, the higher the I* score will be.

There are also other ways to analyze a system with the I* metric. You could, for example, convert the source image to the printer profile color space using relative colorimetric rendering, then convert to LAB using absolute colorimetric rendering. This would produce a digital file containing the predicted LAB values of the actual print (including predicted paper white and max black values). Use that as the reference image data for I*, and now you have a theoretical shot at obtaining 100% scores when you measure the comparison image (i.e., the actual print), whereas in the previous example there is no possibility of a perfect match between reference and comparison images. In this latter example, the more the color and tonal accuracy scores as computed by I* drop from 100%, the more your printer is not printing the way the ICC profile is predicting. This is a really great way to validate ICC profile performance.
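Once the profile-predicted LAB values are in hand, that validation step reduces to a patch-by-patch comparison of predicted versus measured values. A hedged sketch with hypothetical numbers (plain delta E standing in for the I* math; the profile conversions themselves would be done in your CMM or imaging application):

```python
import math

def profile_validation_error(predicted_lab, measured_lab):
    """Mean and max per-patch difference between the LAB values the ICC
    profile predicts for the print and the values actually measured.
    Large numbers mean the printer is drifting from the profile's prediction."""
    de = [math.dist(p, m) for p, m in zip(predicted_lab, measured_lab)]
    return sum(de) / len(de), max(de)

# Hypothetical 3-patch check: profile-predicted vs. spectro-measured LAB.
predicted = [(96.0, 1.0, -4.0), (50.0, 0.0, 0.0), (20.0, 5.0, -30.0)]
measured = [(95.5, 1.2, -4.5), (50.0, 1.0, 0.0), (21.0, 5.0, -30.0)]

mean_de, max_de = profile_validation_error(predicted, measured)
```

Because the reference here is the profile's own prediction (paper white and max black included), 100% accuracy is at least theoretically reachable, so any drop really does point at printer or profile drift rather than physical gamut limits.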

I know we are in a color management discussion thread, but we are also starting to push hard into color geek territory. You've raised very good questions. I hope my answers make sense. It took me about three years to develop the I* metric, so it's understandable that a first read through the published I* papers is going to leave you with questions.

cheers,
Mark
Title: Spectro Software Feature Request
Post by: digitaldog on May 16, 2009, 09:00:50 am
Quote from: MHMG
Thus, the basic assumption with I* is that you now want to bolt down your preferred color and tonal relationships and reproduce them with as much colorimetric accuracy as possible.  Once you've got your aimpoint reproduction in mind and have edited your preferences into your digital image file, then I* can take it from there and tell you downstream how the subsequent reproduction choices are stacking up in terms of retaining the chosen color and tonal quality. In other words, once you've created your "pleasing" image, the whole process at that point becomes one of accurate colorimetric matching to the extent that it is possible. There's the rub. It is often not possible, so I* can tell you how far you have strayed and, as you suggest, even which localized parts of the image are suffering more in the reproduction than others.


This is what we've been trying to do with iStar, Mark. We have a group of hero images that we output on an Epson 3800, including the wonderful Roman 16s (http://www.roman16.com/en/), our preferred images, and even synthetics. The final output is to a digital press. We know that there's a huge difference in gamut and so forth between the two, but the Epson output represents our idealized, preferred output. It's what the client looks at and says "that is our preferred output from these images." With iStar, I would expect that as we tweak our output profiles and press behavior, we now have a metric that tells us, and the client, how much closer we are getting to the idealized output (the Epson reference prints). Using delta E would show huge differences due to the vast differences in the two devices and, of course, would not be weighted to imagery but to lots of solid colored patches. Does this seem like a reasonable use of the iStar technology?

Once we get as close as we can to our goal using iStar, we can also use it as a trending target.
Title: Spectro Software Feature Request
Post by: Jonathan Wienke on May 16, 2009, 09:30:32 am
So what are the chances of I* being incorporated into EyeOne Match anytime soon?
Title: Spectro Software Feature Request
Post by: MHMG on May 16, 2009, 12:27:41 pm
Quote from: digitaldog
This is what we've been trying to do with iStar, Mark. We have a group of hero images that we output on an Epson 3800, including the wonderful Roman 16s (http://www.roman16.com/en/), our preferred images, and even synthetics. The final output is to a digital press. We know that there's a huge difference in gamut and so forth between the two, but the Epson output represents our idealized, preferred output. It's what the client looks at and says "that is our preferred output from these images." With iStar, I would expect that as we tweak our output profiles and press behavior, we now have a metric that tells us, and the client, how much closer we are getting to the idealized output (the Epson reference prints). Using delta E would show huge differences due to the vast differences in the two devices and, of course, would not be weighted to imagery but to lots of solid colored patches. Does this seem like a reasonable use of the iStar technology?

Once we get as close as we can to our goal using iStar, we can also use it as a trending target.

Yes, what you describe is a perfect situation for the I* metric. It should work extremely well. Thanks for sharing.

Mark
Title: Spectro Software Feature Request
Post by: MHMG on May 16, 2009, 12:39:43 pm
Quote from: Jonathan Wienke
So what are the chances of I* being incorporated into EyeOne Match anytime soon?

Probably not good at the moment. I've been trying to get an answer to a simple question for some time now about X-Rite's Net Profiler software: does it or doesn't it support the Spectrolino?  No follow-through from anyone I've spoken to at X-Rite so far.  I really don't know the right people to talk to about the I* metric, so it seems even more challenging to open a dialogue about that.  I think it would make a nice fit with Measure Tool. That said, Digital Dog just gave me a good contact name, so you never know, it could happen.