Two problems here. One is that the spectrophotometer is not "accurate". In your one-off test it appears to be very consistent, but a device can be consistently inaccurate, if you know what I mean: like a ruler with an inaccurate scale. The i1 Pro varies from device to device by 0.4 dE2000 on average according to X-Rite, but in the real world it is more like 5 times that amount (or even more), according to tests I have seen. Also, measurement errors occur more frequently than you might expect.
Yes, you're entirely right, I was using sloppy language. I should have said consistent and not accurate. But it comes to the same thing in this test because the paper is profiled using the same i1Pro2 that is then used to scan the test target.
The second is that I don't know what you mean by imperfections in the print. If you ask the printer to print the same thing twice in a row, I would say it is extremely consistent, visually and measurably, even when studying the dither pattern under high magnification. I have done this test many times before.
Again, sloppy language on my part. What I mean is that the whole process of making the profile and then using it (via Photoshop, the CMM, the Print Plug-in) to print a target that is then scanned has inevitable inaccuracies: in the printer, the spectro, and the software (for example, to print a spot color on the test target the data will likely need to be interpolated, and the new color may not print exactly as predicted), as well as in the physical media (ink, paper), and perhaps even in things like room temperature and humidity.
So again, if you print a target 5 times one after the other and measure the spot colors you may find that there is little variation between the prints ... so there is good repeatability, but not necessarily good accuracy.
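To make the repeatability-versus-accuracy distinction concrete, here's a minimal Python sketch (the Lab readings are made up for illustration) that computes the average CIE76 dE between two prints of the same patches. A low number shows good repeatability between the prints, but says nothing about accuracy against the true target values:

```python
import math

def delta_e76(lab1, lab2):
    """Plain CIE76 colour difference: Euclidean distance in Lab space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical Lab readings of the same patches from two successive prints
print1 = [(52.1, 18.3, -4.2), (80.5, -2.1, 10.7), (31.0, 0.5, -22.8)]
print2 = [(52.3, 18.1, -4.0), (80.2, -2.4, 10.9), (31.2, 0.4, -22.5)]

diffs = [delta_e76(p1, p2) for p1, p2 in zip(print1, print2)]
print(f"average dE76 between the two prints: {sum(diffs) / len(diffs):.2f}")
```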
What I am trying to measure is exactly that: the accuracy of the print, post rendering.
[As a second function, the test can be used to see if there is a drift over time (so, for example, the measurements may show an average dE of 0.5 today, but if I print the same target in a month's time I may get an average dE of 1.4, which would show that the printer calibration has drifted for this particular paper/ink combination)].
Again, I strongly suspect that the interpolation and prediction of what colors a simulated i1 Pro might derive from a simulated print converted through a printer profile is less than ideal. Mostly because the simulated i1 Pro "sees" color differently from yours: device lamp spectrum differences, calibration, variances in the sensor, aperture, grating, etc.
Your diagram shows exactly the round-trip conversion I am talking about: RGB to Lab to RGB. The final RGB should theoretically be the same source data used to make 1. the printed target and 2. the data for the measurement simulation. The unknown to me is what happens to the RGB data after the profile conversion to the simulated Lab values in fakeread. Although ArgyllCMS is open source, I'm not knowledgeable enough to understand the code yet.
What I mean by a round trip would be to go through the profile in the forward and then the reverse direction. In this case the profile is only used once, in the forward direction, for both the print and the simulation. So we have an RGB image in the workspace that is converted to RGB colors for the printer (by the CMM/profile); and we have the exact same RGB image that is converted to RGB by fakeread through the same profile, and then converted to D50 Lab so that the spectrometer readings can be compared directly by colprof. (It could be that fakeread doesn't convert to RGB at all but goes straight from the image RGB to Lab; if it does convert to RGB, I don't know how it then does the conversion to Lab. I'll try to find out. Looking at the code, it seems to be doing a conversion to RGB and then back to Lab using a conversion matrix and a white-point adjustment, but the code is complicated and I don't fully understand what it's doing.)
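For what it's worth, the kind of matrix-plus-white-point conversion I think the code is doing can be sketched like this in Python. This is my own reconstruction, not Argyll's actual code: I'm assuming the standard ICC D50-adapted sRGB matrix and the D50 white point, whereas a real printer profile would use its own LUTs or matrices:

```python
def srgb_to_lab_d50(r, g, b):
    """Sketch of encoded RGB -> XYZ (3x3 matrix) -> D50 Lab.
    Assumes sRGB encoding and the ICC D50-adapted sRGB matrix."""
    # 1. Undo the sRGB gamma encoding
    def linearize(c):
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)

    # 2. Linear RGB -> XYZ via a 3x3 matrix (D50-adapted sRGB primaries)
    x = 0.4361 * rl + 0.3851 * gl + 0.1431 * bl
    y = 0.2225 * rl + 0.7169 * gl + 0.0606 * bl
    z = 0.0139 * rl + 0.0971 * gl + 0.7142 * bl

    # 3. Normalize by the D50 white point, then apply the CIE Lab function
    xn, yn, zn = 0.9642, 1.0000, 0.8249
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

White (1, 1, 1) comes out very close to L*=100, a*=b*=0, which is the sort of sanity check I'd apply to whatever fakeread actually does.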
Of course the simulation is unlikely to be perfect: for example, the print used the Microsoft CMM whereas fakeread will use its own internal conversion algorithms, and there are bound to be differences there; and there could be programming errors in the Argyll code.
Like you I don't know enough about the internals to be able to gauge the simulation errors; but I would be pretty confident in the Argyll code as it's been out there for a long time and it's very widely used. Also, whatever error is introduced by the simulation should be consistent: so say the maximum simulation error is a dE of 1.0 ... well then you can take the test results as being correct to +/- dE of 1.0, which is still very good.
I think that perhaps the most useful thing is not the absolute accuracy of the test, but that it can highlight problem areas. For example, if most results have a dE of 1.0 or better, but 10 results have a dE greater than 5 or 10 (or maybe much bigger), then the chances are that there is something seriously wrong with your profile or your printer (a nozzle clog, say). You can then do some tests to see what the problem is.
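A trivial way to automate that "highlight the outliers" step (patch names, dE numbers and the threshold here are all hypothetical):

```python
def flag_outliers(results, threshold=5.0):
    """Return the patches whose dE exceeds the threshold. A handful of big
    outliers among otherwise good results usually points at a localized
    problem (profile hole, clogged nozzle) rather than general inaccuracy."""
    return [(name, de) for name, de in results if de > threshold]

# Hypothetical (patch, dE) pairs parsed from a verification report
results = [("A01", 0.4), ("A02", 0.8), ("B07", 7.3), ("C12", 11.9), ("D03", 0.6)]
for name, de in flag_outliers(results):
    print(f"{name}: dE {de:.1f} -- investigate")
```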
It would be interesting to know what level of consistency you can achieve with your i1 Pro 2 for handheld measurements on a day-to-day basis. Your first test was excellent: better than anything I have achieved, or seen others achieve, in scan mode on large-patch targets. Even X-Rite's tech support in Switzerland was unable to better my best average results, even when I had the target wrongly set up (nothing to do with wrong patch size input).
I'll try again in a week or so and let you know. It is a new instrument and perhaps I have a good one by luck. Also I am very careful in making sure the prints are well and truly dry and I scan very carefully ... slow and steady. Argyll uses lines between the spot colors and these may also help.
If you like doing it this way, by all means. No one can tell you what to do! But please do not come to the wrong conclusions, like saying common calibration is ok because you had low dE variances.
I do not think it is a useful test for me personally because it cannot help me isolate a problem if there is one. The effort and time to print a target and measure it is the same either way, so I would much prefer to compare against an actual measurement I made previously than against a simulated one. I can derive far more useful information from that kind of test, and it saves me time.
It compares against the Lab reference values of the target. That way you can tell if the profile is doing a good job of gamut mapping colors sampled from all over the RGB space, and where it might need to do a better job.
I'm not hung up on this test at all. What I'm trying to do at the moment is find a way of verifying my print system to try to make sure that it is as solid as I can make it. When I say that common calibration is OK, what I mean is that using common calibration followed by profiling (which takes out the calibration errors) appears to be producing good results on my printer. The test is one measure, but of course I'm also looking at prints visually.
Here is another test that compares two prints of a target:
===================================================================
rem Profcompare.bat iccprofile
targen -v -d2 -G -f100 ProfCompare1
copy ProfCompare1.ti1 ProfCompare2.ti1
printtarg -v -r -ii1 -a1.0 -T300 -M6 -pA4 ProfCompare1
printtarg -v -r -ii1 -a1.0 -T300 -M6 -pA4 ProfCompare2
cctiff -v -ir -e %1 ProfCompare1.tif ProfCompare1O.tif
move /Y ProfCompare1O.tif ProfCompare1.tif
cctiff -v -ir -e %1 ProfCompare2.tif ProfCompare2O.tif
move /Y ProfCompare2O.tif ProfCompare2.tif
echo Print ProfCompare1.tif and ProfCompare2.tif with no color management
pause
echo Scan ProfCompare1
pause
chartread ProfCompare1
echo Scan ProfCompare2
pause
chartread ProfCompare2
echo The test results will be in ProfCompare.txt
pause
colverify -v2 -N -k -s -w -x ProfCompare1.ti3 ProfCompare2.ti3 > ProfCompare.txt
==================================================================
This test can do two things: show the repeatability of your instrument (you can scan the same print twice and colverify will then give you the scan differences); show the drift over time of the printer calibration or print issues like head clogs (of course you would need to run the test in two goes, saving the first set of results so you can compare them to the second test).
I'm looking into a test to compare the image Lab values to the scanned Lab values, but although I think this would be useful, it would need to be used with care because the profile/CMM will change the data (that is its job, after all). It would certainly show the extent to which the profile had shifted the values ... and if you saw some very large differences, particularly if they were clustered around a hue, saturation or lightness range, then it might indicate a profile problem (but this would probably best be found visually using GamutVision or ColorThink).
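As a sketch of what that comparison might look like (the Lab pairs are hypothetical; CIE76 is used for simplicity, and the hue angle is printed so that differences clustered around a hue range would stand out):

```python
import math

def hue_angle(a, b):
    """CIELAB hue angle h(ab) in degrees, 0-360."""
    return math.degrees(math.atan2(b, a)) % 360

# Hypothetical (image Lab, scanned Lab) pairs
pairs = [((50.0, 40.0, 30.0), (48.0, 36.0, 27.0)),
         ((70.0, -30.0, 20.0), (69.5, -29.0, 21.0)),
         ((35.0, 5.0, -50.0), (37.0, 4.0, -44.0))]

for ref, meas in pairs:
    de = math.dist(ref, meas)  # CIE76 = Euclidean distance in Lab
    print(f"hue {hue_angle(ref[1], ref[2]):5.1f} deg: dE {de:.2f}")
```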
Yes, this sort of testing can be done in many different ways. I posted the link because it was well laid out online: a good springboard for coming up with more ideas too. It is interesting to study how these companies design their tests. Google Translate helped me through all the German.
Same here with Google Translate. I don't really understand what values ColorCheck is comparing or how, but I assume that they are comparing the Lab values, and that they have chosen the target colors to be most likely within the printer gamut (they mention Fine Art and the target is an sRGB image). Of course this won't tell you what's happening for out-of-gamut colors, but I think some of their reports are quite interesting, for example the a* and b* plots.
I won't bother starting a new topic as it's more likely to end up in lots of arguments ... I just thought it might be useful to get some other people's input.
Robert