
Author Topic: Upgrading from HPZ3100 to iPF6400  (Read 21158 times)

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #80 on: October 19, 2014, 09:05:31 am »

Thanks Robert, I'm saving notes of all your ArgyllCMS related information.

It is excellent to know that you are seeing values far less than 1 dE. What does the actual dE report show, with a breakdown of the best, worst and average measurements?

Hi Samuel,

I didn't keep the results from the last test, so I redid the test and the results are not quite as good this time, but still very good:


Total errors (CIEDE2000):      peak = 1.049735, avg = 0.528698
Worst 10% errors (CIEDE2000):  peak = 1.049735, avg = 1.053080
Best 90% errors (CIEDE2000):   peak = 0.867472, avg = 0.480072
avg err:                       X  0.003830, Y  0.003735, Z  0.002949
avg err:                       L* 0.476939, a* 0.415635, b* 0.417006


This is a test on an HP Instant Dry Satin paper on the iPF6400.  I wasn't over-careful with the profile (didn't leave much drying time for example) and used only 1200 patches; but it still shows that using Common Calibration is absolutely fine.  I based the paper on the Canon Satin Photo 240.  I'm very satisfied with this.

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: Upgrading from HPZ3100 to iPF6400
« Reply #81 on: October 19, 2014, 11:55:35 am »

Hi Robert,

Just to make sure I understood it correctly, you calibrated your printer using HP instant dry satin paper and the Canon Satin Photo 240 media setting, then built a 1200 patch profile.

Did you:

1. re-calibrate the printer, then print a second 1200-patch target and verify it against the profile by letting ArgyllCMS simulate the measurement, or
2. not re-calibrate, and just print a second 1200-patch target and verify it against the profile?
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #82 on: October 19, 2014, 01:14:18 pm »

Hi Robert,

Just to make sure I understood it correctly, you calibrated your printer using HP instant dry satin paper and the Canon Satin Photo 240 media setting, then built a 1200 patch profile.

Did you:

1. re-calibrate the printer, then print a second 1200-patch target and verify it against the profile by letting ArgyllCMS simulate the measurement, or
2. not re-calibrate, and just print a second 1200-patch target and verify it against the profile?

Hi Samuel,

I know it's a bit confusing, but I'll try to explain it as well as I can  :).  

First of all, I calibrated the printer using the Canon HW Coated paper (it's the only Canon paper I currently have ... I'm just waiting for some to arrive).  The iPF6400 comes with 5 sheets of A2 HW Coated for printhead alignment (and, I assume, common calibration) ... not a lot!

Then I created a new custom paper using the HP Instant Dry Satin paper and based it on the Canon 240gsm Satin paper.  I did a paper feed adjustment etc.

Then I created an icc profile for the paper from 1292 patches using i1Profiler, on the same roll of HP ID Satin.

I did some test prints and everything seemed fine.

Now on to Argyll.

What I did there (this is in the commands I posted) is create a 100-patch target, convert it through the profile I had created for the paper, and print it using no color management (I could have left the paper with no color profile and printed it with the profile instead of no color management, but I'm not sure what the CMM would do in that situation as it has no source profile to work from).  I scanned the printed target using the i1Pro2 and Argyll: this creates a .ti3 file.

The commands also simulate this process by converting the same 100 patch target to another .ti3 file, through the icc profile (that's the fakeread command).  It's simulating the print and scan and so it represents the ideal situation of a perfect print ... as it's all software and there's no real paper or printer involved.

So by comparing the two .ti3 files we can see how close to ideal the print and scan are.  In the case of the ID Satin it's really very close, with an average dE2000 error of 0.5, which most of us couldn't distinguish (and which may not be far off the resolution of the i1Pro2).

What that proves to me is that calibrating the printer using a common calibration is just fine (as you and Geraldo have said to me) and there's really no need to be worrying about .ac1 files and spectral data etc.  It also shows that i1Profiler did a very good job of profiling the printer (because if it had not then the dE figures would be way out).

What it doesn't say much about is how good the profile is in terms of smoothness, for example.  I would think not very good, because I only used around 1200 patches (Graeme Gill of Argyll recommends 3000+ patches for a high-quality inkjet RGB profile).

The other thing that's useful with this test is that if you run it today and note the results, then run it again in a month's time (or after you've changed the roll for a new one, say, or put in a new ink cartridge), and the results of the new test are significantly worse than the first, it's telling you that you need to reprofile the paper.  It's very easy to automate this, as the text file can be converted to an Excel spreadsheet easily (you just need to cut and paste it into Excel, globally remove all the ':', and convert text to columns with space as the delimiter).  Do this with the first set of values and then with the second set, and you can compare the two very easily.
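If Excel feels clumsy, the same bookkeeping can be done with a short script.  Here's a minimal Python sketch: the regex assumes colverify's "peak = ..., avg = ..." summary format, and the compare_runs helper with its 0.25 dE tolerance is purely illustrative (not part of Argyll):

```python
import re

# Pull the peak/avg numbers out of Argyll colverify summary lines, e.g.
#   Total errors (CIEDE2000): peak = 1.049735, avg = 0.528698
LINE = re.compile(r"peak\s*=\s*([\d.]+),\s*avg\s*=\s*([\d.]+)")

def parse_summary(text):
    """Return a list of (peak, avg) tuples, one per summary line found."""
    return [(float(m.group(1)), float(m.group(2)))
            for m in (LINE.search(line) for line in text.splitlines()) if m]

def compare_runs(first, second, tolerance=0.25):
    """Indices of summary lines whose average dE grew by more than tolerance."""
    return [i for i, ((_, avg1), (_, avg2)) in enumerate(zip(first, second))
            if avg2 - avg1 > tolerance]

today = parse_summary("Total errors (CIEDE2000): peak = 1.049735, avg = 0.528698")
later = parse_summary("Total errors (CIEDE2000): peak = 1.312000, avg = 0.912345")
print(compare_runs(today, later))  # a non-empty list means: time to reprofile
```

Run it on the saved colverify output from each session and you get the month-to-month comparison without any cut-and-paste.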

Argyll seems quite daunting at first because it's all command-line stuff, but once you have some batch files it becomes very easy.  You just need to download the software which you can get for free here: http://www.argyllcms.com/.  It supports most spectros like the ColorMunki, i1Pro etc., as well as most colorimeters.  The documentation is also very good and it's easy to get help.

I don't know how the Argyll-generated profiles stack up compared to i1Profiler ... that's altogether a more difficult thing to establish.  i1Profiler is nice and visual ... Argyll has many advanced options.  My feeling is that probably one is better at some things and the other better at others ... but for me at this stage I would tend to use i1Profiler to make profiles because it's easier, but I would use Argyll for the sort of test that we're talking about and also if I needed a particularly good perceptual rendering for an image, as only Argyll can do an optimised perceptual render.

Robert


 


 
« Last Edit: October 19, 2014, 01:20:16 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: Upgrading from HPZ3100 to iPF6400
« Reply #83 on: October 20, 2014, 10:45:21 pm »

What that proves to me is that calibrating the printer using a common calibration is just fine (as you and Geraldo have said to me) and there's really no need to be worrying about .ac1 files and spectral data etc.  It also shows that i1Profiler did a very good job of profiling the printer (because if it had not then the dE figures would be way out).

Robert, Thank you for detailing your process. I understand now. Yes, common calibration works alright, but unfortunately your test does not prove that common calibration works well - it merely proves that i1Profiler did a good job (correct on that one) and that your measurement by hand is ok.

To prove that common calibration works ok, you should run calibration again, and then compare the measurement data of the first target to the second target - no profile involved at all. No fakeread with Argyll. Just compare the measurement data. The only variable is the repeatability of the i1 Pro 2, which can be reasonably good if your technique is excellent when measuring by hand.

I did just that with the iSis, with multiple measurements and extra-large patch sizes to minimize measurement errors. I had good results, as I reported earlier.

Quote
In the case of the ID Satin it's really very close with an average dE2000 error of 0.5, which most of us couldn't distinguish (and which may not be far off the resolution of the i1Pro2).

If you measure the same target twice and you are seeing these numbers, something would be wrong - I would look into whether measurement technique can be improved, paying attention to issues like your dragging speed, consistency of speed, ruler position, measurement aperture position. Maybe the patches are on the limit of the minimum patch size for the device, increasing the patch width could help. The average dE2K of the i1 Pro 2 is about 0.1 - 0.15 when remeasuring. Because you did not do that it is hard to say what else might be contributing to the errors.

This old topic is a good reference of what you should be seeing with the respective devices.

Quote
What it doesn't say much about is how good the profile is in terms of smoothness, for example.  I would think not very good because I only used around 1200 patches (Graham Gill of Argyll recommends 3000+ patches for a high quality inkjet RGB profile).

Yes, you could certainly go with a larger number of patches. Around 2000 (well-chosen) patches or so is the point of diminishing returns. Having 3000+ patches increases the risk of measurement errors significantly. Try measuring the same target two or three times and you might notice just how different the measurements can be. Part of that is user error, part of it is software limitations in deriving the measurement data.

Quote
Argyll seems quite daunting at first because it's all command-line stuff, but once you have some batch files it becomes very easy.

Yes, I agree! I have used Argyll a number of times. I don't have any experience with batch files and scripting but I must look into it when I have time. Argyll does some things better than i1Profiler, and vice versa. But it is quite hard to beat i1Profiler's perceptual rendering when it is set up optimally. I have made tests which show other profiling solutions offering better results in some cases, but the question is which is the best overall? Won't it be great to combine all the best qualities of each?
« Last Edit: October 20, 2014, 10:50:37 pm by samueljohnchia »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #84 on: October 21, 2014, 04:12:27 am »

Robert, Thank you for detailing your process. I understand now. Yes, common calibration works alright, but unfortunately your test does not prove that common calibration works well - it merely proves that i1Profiler did a good job (correct on that one) and that your measurement by hand is ok.

To prove that common calibration works ok, you should run calibration again, and then compare the measurement data of the first target to the second target - no profile involved at all. No fakeread with Argyll. Just compare the measurement data. The only variable is the repeatability of the i1 Pro 2, which can be reasonably good if your technique is excellent when measuring by hand.

What I meant when I said that Common Calibration works OK is that it is not necessary to have a .am1 media file with spectral data in order to get accurate print results; it's quite enough to use Common Calibration as the profiling handles the less than perfect calibration that one would expect when using Common Calibration.

But yes, it does also say that i1Profiler did a good job of profiling.

Quote
If you measure the same target twice and you are seeing these numbers, something would be wrong - I would look into whether measurement technique can be improved, paying attention to issues like your dragging speed, consistency of speed, ruler position, measurement aperture position. Maybe the patches are on the limit of the minimum patch size for the device, increasing the patch width could help. The average dE2K of the i1 Pro 2 is about 0.1 - 0.15 when remeasuring. Because you did not do that it is hard to say what else might be contributing to the errors.

I haven't done a repeated test with the i1Pro so I'm not sure what the repeatability is.  I'll check that out when I have some time.  But you have to remember that the dE errors I'm showing here are the differences between an actual print and the ideal ... to get an average dE of 0.5 with a max of 1 is pretty damn good!  My comment about the resolution of the i1Pro was not warranted (although, looking at the link you gave me, it would seem that a max dE2000 of around 0.5 with a manual scan might not be unexpected).

Quote
This old topic is a good reference of what you should be seeing with the respective devices.

Yes, you could certainly go with a larger number of patches. Around 2000 (well-chosen) patches or so is the point of diminishing returns. Having 3000+ patches increases the risk of measurement errors significantly. Try measuring the same target two or three times and you might notice just how different the measurements can be. Part of that is user error, part of it is software limitations in deriving the measurement data.

Yes, I agree! I have used Argyll a number of times. I don't have any experience with batch files and scripting but I must look into it when I have time. Argyll does some things better than i1Profiler, and vice versa. But it is quite hard to beat i1Profiler's perceptual rendering when it is set up optimally. I have made tests which show other profiling solutions offering better results in some cases, but the question is which is the best overall? Won't it be great to combine all the best qualities of each?

How do you set up i1Profiler to give an optimal result for a Perceptual profile?

Thanks for the info!

Robert
« Last Edit: October 21, 2014, 04:22:55 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #85 on: October 21, 2014, 09:27:18 am »

Hi Samuel,

I tested the i1Pro2 against a 100-spot wedge (scanned twice, with a calibration in between) and, as you can see below, the dE2000 values are below 0.3, with an average of 0.06, which seems pretty OK.

===========================================================
Total errors (CIEDE2000):      peak = 0.288437, avg = 0.064116
Worst 10% errors (CIEDE2000):  peak = 0.288437, avg = 0.188102
Best 90% errors (CIEDE2000):   peak = 0.115421, avg = 0.051622
avg err:                       X  0.000583, Y  0.000591, Z  0.000440
avg err:                       L* 0.053047, a* 0.042422, b* 0.063142
===========================================================

I also did a repeated spot-test on a single spot and got a peak dEab of 0.12 with an average of 0.04 over 25 readings (I had to calculate the dE values in Excel, and dE2000 is too complicated for me ... it would need a VBA script probably, or a very complicated formula).  The dE2000 values should be much lower than the dEab values, so that's very repeatable with a low error.
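For what it's worth, the dEab arithmetic is simple enough to script rather than build in Excel.  A minimal Python sketch (the readings are hypothetical, and this computes plain CIE76 dEab, not dE2000):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference: straight Euclidean distance in L*a*b*."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def spot_repeatability(readings):
    """Peak and average dEab of each reading against the mean reading."""
    n = len(readings)
    mean = tuple(sum(r[i] for r in readings) / n for i in range(3))
    diffs = [delta_e_ab(r, mean) for r in readings]
    return max(diffs), sum(diffs) / n

# hypothetical repeated readings of one patch
readings = [(50.01, 2.03, -3.98), (50.04, 2.00, -4.02), (49.99, 2.01, -4.00)]
peak, avg = spot_repeatability(readings)
print(f"peak dEab = {peak:.3f}, avg dEab = {avg:.3f}")
```

Paste in the 25 spot readings and it reports the same peak/average figures directly, no VBA needed.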

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: Upgrading from HPZ3100 to iPF6400
« Reply #86 on: October 21, 2014, 08:30:38 pm »

What I meant when I said that Common Calibration works OK is that it is not necessary to have a .am1 media file with spectral data in order to get accurate print results; it's quite enough to use Common Calibration as the profiling handles the less than perfect calibration that one would expect when using Common Calibration.

Yes, it is not necessary to have spectral data and perform unique calibration to get accurate print results. But from your testing procedure, you could not conclusively make this statement.

If you didn't even run common calibration at all (you can turn it off on the printer), you would get equally good profiling results from the printer because it is new. Not that I recommend doing it this way.

Btw I've found your list thread about using fakeread on Argyll. Ben has given you excellent advice, I note.

Quote
the dE errors I'm showing here are the differences between an actual print and the ideal ... to get an average dE of 0.5 with a max of 1 is pretty damn good!  My comment about the resolution of the i1Pro was not warranted (although, looking at the link you gave me, it would seem that a max dE2000 of around 0.5 with a manual scan might not be unexpected).

Quite the opposite, if we are talking about device consistency, not profile accuracy. An average dE of 0.5 is huge, and five times outside the device's specification. A max of 1 is quite bad also. I would run an i1Diagnostics test in this situation and send in my i1 Pro for repair if it fails.

I'm still not convinced that it is the best way to validate profiles. I need to understand the inner workings of it better. I'll be snooping around. I think the error may also have something to do with the way fakeread interpolates the results for sampling points between the discrete points in the profile's CLUT.

Quote
I tested the i1Pro2 against a 100-spot wedge (scanned twice, with a calibration in between) and as you can see below the dE2000 values are below 0.3, with an average of 0.06, which seems pretty OK.

This is not merely ok, it is really excellent! If you can keep up this level of measurement consistency when measuring a 3000-patch profiling target, I bow to you sir.

Quote
I also did a repeated spot-test on a single spot and got a peak dEab of 0.12 with an average of 0.04 over 25 readings...so that's very repeatable with a low error.

Yes that's within the device specifications. All good.

Quote
How do you set up i1Profiler to give an optimal result for a Perceptual profile?

As a starting point, version 2 profile, D50, and zero all the profile settings sliders. Play around with the saturation and neutralize gray sliders depending on the media you are profiling, and what you are printing.
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #87 on: October 22, 2014, 09:36:27 am »

Yes, it is not necessary to have spectral data and perform unique calibration to get accurate print results. But from your testing procedure, you could not conclusively make this statement.

If you didn't even run common calibration at all (you can turn it off on the printer), you would get equally good profiling results from the printer because it is new. Not that I recommend doing it this way.

Btw I've found your list thread about using fakeread on Argyll. Ben has given you excellent advice, I note.

Quite the opposite, if we are talking about device consistency, not profile accuracy. An average dE of 0.5 is huge, and five times outside the device's specification. A max of 1 is quite bad also. I would run an i1Diagnostics test in this situation and send in my i1 Pro for repair if it fails.

I'm still not convinced that it is the best way to validate profiles. I need to understand the inner workings of it better. I'll be snooping around. I think the error may also have something to do with the way fakeread interpolates the results for sampling points between the discrete points in the profile's CLUT.

The test procedure compares an actual print made through the icc profile against a simulated print through the same icc profile.  The simulated print has none of the printer/paper issues; it just takes the print wedge and produces the test data, so you can think of it as printing with a perfect printer.  The comparison of the scanned print to this simulated print then tells you how well the printer/profile has performed.  A max dE2000 of 1 with an average dE of 0.5 is excellent IMO: it means that if you viewed the print against an ideal print you would not be able to tell the difference between them (or maybe JUST, on the few patches at around 1).

The printer calibration is a bit of a red herring because the profile will compensate for a poor calibration.  It's just that it's better to have a calibrated printer so the profile has less of a job to do.  What is very important though is making sure the printer is laying down the correct amount of ink.  Have you found a good way of setting this ... using the media configuration tool, I assume?

So I think the test tells you the following:
- the print is producing Lab values that correspond very closely to the image values (rendered by the profile)
- the profile is OK - because if it was not the print values would be wrong.
- the printer is performing correctly - because if it was not the print values would be wrong.
- there is (or isn't) a current need to reprofile the paper

subject to these being only accurate to the extent that the print wedge is representative of the color gamut of the paper.  This can be improved by using a wedge with more spot colors and also by tuning the targen parameters.

What the test does not tell you, IMO is the following:
- it doesn't tell you if the profile is smooth and was made with enough sample points
- it doesn't tell you how good the profile's rendering algorithm is - for example how well it has brought out-of-gamut colors into gamut and what it has done with the in-gamut color
- whether or not the printer has been properly calibrated
- whether or not the print ink limits are high enough or too high

So I think it's a very useful test, but it doesn't give you the full picture.  To validate the profile fully is another day's work really ... and probably has to be done by examining test prints to see how smooth the rendering is, and so on.

A variation on the test is possible: and that is to compare the print scanned results to the actual image Lab values.  But I think analyzing this would be very difficult because, after all, the profile's job is to alter the image colors to make them fit in a pleasing way into the print gamut, so there should be differences.  But if we do this test making sure that all the image colors are within the print gamut then the values should be very close, so it may tell us something about the profile (would have to try it out to see what!).

Yes, Ben usually does give good advice ... although in this case I wouldn't quite agree with him: I think this is a useful test, easy to do, and one that can certainly point out a few potential problems that could end up saving a lot of time and wasted prints.

Quote
As a starting point, version 2 profile, D50, and zero all the profile settings sliders. Play around with the saturation and neutralize gray sliders depending on the media you are profiling, and what you are printing.

Thanks!
Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #88 on: October 22, 2014, 01:54:33 pm »

The Canon Photoshop Print Plugin has the 300ppi setting grayed out.  Is this because the plugin will resample to 600ppi anyway before printing, even if the image resolution is below that?  If so it seems odd that there should be a 300ppi setting at all (grayed out or not).

Also, with Print Mode set to Highest (Maximum Number of Passes), is the dithering at 1200dpi?

Have you seen any difference between upsizing to 600ppi or just passing the 300ppi image to the plugin?

Thanks

Robert
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: Upgrading from HPZ3100 to iPF6400
« Reply #89 on: October 24, 2014, 11:01:24 am »

Robert, I don't think the test is as useful as you describe. Since the target's values are taken on the round trip through the profile, it is extremely unlikely that your measurements would be any different if you printed the same target right after profiling, like you did. Why should it be? Yes, I more or less agree with your list if your measurements are within 1dE2k of the "ideal" given by Argyll. But what if it is not? How would it tell you what has gone wrong? You would not know if it was a profile issue or a printing issue. Much better would be to print a "reference" target, ensure that you measured it properly, and keep your measurement data and the print. Then re-prints of the target in the future can be compared to this target, visually and measurably.

I think why you are measuring a difference of as large as 1dE and a relatively high average of 0.5 is due to the less than perfect simulation. These numbers are within the i1 Pro's specifications of device to device consistency. It may be that your i1 Pro differs from Argyll's by that much.

You are right that a good calibration is important so the profile does not bear all the heavy lifting. They work much better when the printer is more linear in its native state. Making sure the printer is laying down the right amount of ink should be the paper manufacturer's job - they build the .am1 files which contain this information. Unfortunately I do not find their choices of base media setting and inking always optimal, and I suspect that their spectral data, if created, was not done properly, so I create my own using the Media Configuration Tool. Depending on how fussy you are you can go down this route. Be prepared to spend a lot of time, ink and paper.

Quote
A variation on the test is possible: and that is to compare the print scanned results to the actual image Lab values.

www.colorcheck-online.de/ might be of interest to you.

Quote
The Canon Photoshop Print Plugin has the 300ppi setting grayed out.

This depends on the media setting selected. Try plain paper for example, you will see it available. If 600 ppi is selected, then the plug-in expects images of that resolution, otherwise it will resample it.

Quote
Also, with Print Mode set to Highest (Maximum Number of Passes), is the dithering at 1200dpi?

No, dithering does not occur at a fixed number of dpi. The output has a fixed droplet size, and achieving lighter colors requires fewer dots spaced further apart. 1200 dpi is the printer specification for the nozzle pitch. When max no. of passes is invoked, the ink order and layering change and the output is visibly smoother: 16 passes versus 7 for Highest.

Quote
Have you seen any difference between upsizing to 600ppi or just passing the 300ppi image to the plugin?

Yes of course. As always, hand over to the driver/plug-in images at the requested ppi, sharpened properly.
« Last Edit: October 24, 2014, 11:05:51 am by samueljohnchia »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #90 on: October 24, 2014, 06:44:01 pm »

Robert, I don't think the test is as useful as you describe. Since the target's values are taken on the round trip through the profile, it is extremely unlikely that your measurements would be any different if you printed the same target right after profiling, like you did. Why should it be? Yes, I more or less agree with your list if your measurements are within 1dE2k of the "ideal" given by Argyll. But what if it is not? How would it tell you what has gone wrong? You would not know if it was a profile issue or a printing issue. Much better would be to print a "reference" target, ensure that you measured it properly, and keep your measurement data and the print. Then re-prints of the target in the future can be compared to this target, visually and measurably.

I think why you are measuring a difference of as large as 1dE and a relatively high average of 0.5 is due to the less than perfect simulation. These numbers are within the i1 Pro's specifications of device to device consistency. It may be that your i1 Pro differs from Argyll's by that much.

Hi Samuel,

Thanks for the other info!

I think we should really start another topic for this discussion as it's quite interesting from a profiling/ color management point of view.

Anyway, here is my understanding of what the test is doing:



First of all, the test target is copied, along with the patch definition to make the reference.

The test image is then printed through the profile and scanned to produce the test Lab values (as shown on the left hand path of the image)

The reference image is passed through Fakeread, which renders the image through the profile, just like the printing does, and it then converts the RGB values to Lab. It doesn't do a round-trip conversion, but it simulates the scan ... as it says in the documentation: fakeread ... "Simulates the measurement of a devices response".  In other words it simulates the scan of the print.

The two sets of data are then compared using colverify.

Of course I could well be wrong, but my understanding of this process is that the simulation essentially removes the physical print and the scan with a spectro, so it produces ideal data, as if from a perfect printer and scanner.  The comparison then should show the imperfections in the print/scan.

Most of the imperfection will be in the print, because the spectrometer is very accurate (from my test I'm getting an average dE00 of 0.06 with a peak of better than 0.3).  The comparison of the test v reference I did shows an average dE of about 0.5 with a peak of around 1.  So the error due to the print+scan should be around 0.5 +/- 0.06 average and 1 +/- 0.3 peak.  There may be some small errors in the simulation (fakeread), but these would only be rounding-type errors, which I would think are not significant.
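The comparison step itself can be approximated in a few lines.  A minimal Python sketch of the colverify-style summary: it uses plain dEab via math.dist rather than the CIEDE2000 that colverify actually reports, and the 10% split just mirrors the report layout above:

```python
import math

def de_stats(reference, measured, worst_fraction=0.10):
    """Summarise the dE between paired Lab readings the way colverify does:
    overall peak/avg plus peak/avg of the worst 10% and best 90% patches."""
    # one dE per patch, sorted from best to worst (CIE76 here, not CIEDE2000)
    diffs = sorted(math.dist(r, m) for r, m in zip(reference, measured))
    cut = max(1, round(len(diffs) * worst_fraction))
    worst, best = diffs[-cut:], diffs[:-cut]

    def summarise(values):
        return max(values), sum(values) / len(values)

    return {"total": summarise(diffs),
            "worst": summarise(worst),
            "best": summarise(best)}
```

Feed it the Lab triplets parsed from the two .ti3 files (the real reference, the fakeread simulation) and you get the same three peak/avg pairs the report shows.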

Of course my understanding/interpretation could be wrong.

If we assume that it isn't wrong then what the test tells me is that the print is correct as per the profile.  If I ran the test immediately after profiling the paper and the dE differences were large then it might mean that the profile was bad, or it might mean that something had gone wrong with the printer.  It wouldn't tell me what was wrong but it would tell me that something was wrong ... so I would then have to investigate further.

If the comparison was OK immediately after profiling but was not after some time had passed, then I would know that the problem was most likely with the printer, and most likely with calibration drift.  So then I would re-profile the paper.

Of course we could print & scan, then print & scan at a later stage and compare the two sets of data ... and these would equally well tell us if there was a need to reprofile.

So (again, if I understand the test correctly) it's not a magic bullet or a cure-all, but it's a useful test and it's also very easy to run because all that's required is to print the target and scan it and the commands do the rest.

Robert
« Last Edit: October 24, 2014, 06:46:56 pm by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

Robert Ardill

  • Sr. Member
  • ****
  • Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #91 on: October 25, 2014, 07:18:29 am »


www.colorcheck-online.de/ might be of interest to you.


Thanks - this seems interesting.  It isn't clear to me whether the comparison is between the target printed using the print profile and the Lab values of the reference, or whether the print profile is applied to both, so that the comparison is like the test I have here.  Do you know?

The problem with comparing the printed target through the icc profile to the Lab values of the reference is that the data will necessarily be different, especially for out-of-gamut colors. So how do you know then if the dE differences are normal and correct, or if they are an error?

This sort of testing can also be done with Argyll (supplemented with Excel for the graphs) ... unfortunately Argyll is complicated and has many options, which makes it hard to use (but very powerful if you know how to!); and even though there is help from the freelists mailing list, it's patchy and slow.  On the other hand, ColorCheck-online seems quite easy and has some nice reporting ... but you have to pay for it, you get what they give you, and you have little control over what that is (I assume, not having tried it yet).  The documentation being in German doesn't help (me anyway).

Robert

Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

samueljohnchia

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 498
Re: Upgrading from HPZ3100 to iPF6400
« Reply #92 on: October 26, 2014, 11:53:22 pm »

Hi Robert, feel free to start another topic if you wish, although it seems we are having our own conversation here. I suspect that most of this information is much too esoteric and most cannot be bothered to set up their printing environment to such a particular level of detail.

Quote
Most of the imperfection will be in the print because the spectrometer is very accurate

Two problems here. One is that the spectrophotometer is not "accurate". In your one-off test, it appears to be very consistent. A device can be consistently inaccurate if you know what I mean, like a ruler with an inaccurate scale. The i1 Pro varies from device to device by 0.4 dE2k on average according to X-rite, but in the real world it is more like 5 times (or even more) that amount, according to tests I have seen. Also, measurement errors occur more frequently than you might expect.

Second is I don't know what you mean by imperfections in the print. If you ask the printer to print the same thing twice in a row, I would say it is extremely consistent. Visually and measurably, even when studying the dither pattern under high magnification. I have done this test many times before.

Again I strongly suspect that the interpolation and prediction of what colors a simulated i1 Pro might derive from simulated print converted through a printer profile is less than ideal. Mostly because the simulated i1 Pro "sees" color differently from yours. Device lamp spectrum differences, calibration, variances in the sensor, aperture grating etc.

Quote
It doesn't do a round-trip conversion, but it simulates the scan ...

Your diagram shows exactly the round trip conversion I am talking about: RGB to Lab to RGB. The final RGB should theoretically be the same source data used to make 1. the printed target and 2. the data for measurement simulation. The unknown to me is what happens to the RGB data after profile conversion to the simulated Lab values in fakeread. Although ArgyllCMS is open source, I'm not knowledgeable enough to understand the code yet.

It would be interesting to know what level of consistency you can achieve with your i1 Pro 2 for handheld measurements on a day-to-day basis. Your first test was excellent, better than anything I have seen anyone achieve in scan mode for large patch targets. Even tech support at X-rite Switzerland was unable to better my average best results, even when I had the target wrongly set up (nothing to do with wrong patch size input).

Quote
So (again, if I understand the test correctly) it's not a magic bullet or a cure-all, but it's a useful test and it's also very easy to run because all that's required is to print the target and scan it and the commands do the rest.

If you like doing it this way, by all means. No one can tell you what to do!  :) But please do not come to the wrong conclusions, like saying common calibration is ok because you had low dE variances.

I do not think it is a useful test for me personally because it cannot help me isolate a problem if there is one. The amount of effort and time to make a print of a target and measure it is the same to start with, so I would much prefer to compare it to an actual measurement I made previously than to some simulated one. I am able to derive far more useful information from that kind of test, and it saves me time.

Quote
It isn't clear to me if the comparison is between the target printed using the print profile, against the Lab values of the reference, or whether the print profile is used so that the comparison is like the test I have here (that is, both fed through the profile).  Do you know?

It compares against the Lab reference values of the target. That way you can tell if the profile is doing a good job of gamut mapping colors sampled from all over the RGB space, and where it might need to do a better job.

Yes, this sort of testing can be done in many different ways. I posted the link because it was well laid out online - a good spring board for coming out with more ideas too. It is interesting to study how these companies design their tests. Google translate helped me through all the German.

« Last Edit: October 27, 2014, 02:30:56 am by samueljohnchia »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #93 on: October 27, 2014, 03:36:15 am »

Two problems here. One is that the spectrophotometer is not "accurate". In your one off test, it appears to be very consistent. A device can be consistently inaccurate if you know what I mean, like a ruler with an inaccurate scale. The i1 Pro varies from device to device by 0.4dE2k on average according to X-rite, but in the real world it is more like 5 times (or even more) that amount, according to tests I have seen. Also, measurement errors occur more frequently than you might expect.

Yes, you're entirely right, I was using sloppy language.  I should have said consistent and not accurate.  But it comes to the same thing in this test because the paper is profiled using the same i1Pro2 that is then used to scan the test target.

Quote
Second is I don't know what you mean by imperfections in the print. If you ask the printer to print the same thing twice in a row, I would say it is extremely consistent. Visually and measurably, even when studying the dither pattern under high magnification. I have done this test many times before.

Again, sloppy language on my part.  What I mean is that the whole process of making the profile and then using it (via Photoshop, the CMM, the Print Plug-in) to print a target that is then scanned has inevitable inaccuracies due to the printer, the spectro, the software (for example, to print a spot color on the test target the likelihood is that the data will need to be interpolated and the new color may not print exactly as predicted), the physical media (ink, paper), even perhaps things like the room temperature and humidity, etc.  

So again, if you print a target 5 times one after the other and measure the spot colors you may find that there is little variation between the prints ... so there is good repeatability, but not necessarily good accuracy.  

What I am trying to measure is exactly that: the accuracy of the print, post rendering.  

[As a second function, the test can be used to see if there is a drift over time (so, for example, the measurements may show an average dE of 0.5 today, but if I print the same target in a month's time I may get an average dE of 1.4, which would show that the printer calibration has drifted for this particular paper/ink combination)].

Quote
Again I strongly suspect that the interpolation and prediction of what colors a simulated i1 Pro might derive from simulated print converted through a printer profile is less than ideal. Mostly because the simulated i1 Pro "sees" color differently from yours. Device lamp spectrum differences, calibration, variances in the sensor, aperture grating etc.

Your diagram shows exactly the round trip conversion I am talking about RGB to lab to RGB. The final RGB should theoretically be the same source data used to make 1. the printed target and 2. the data for measurement simulation. The unknown to me is what happens to the RGB data after profile conversion to the simulated Lab values in fakeread? Although ArgyllCMS is open source, I'm not knowledgeable to understand the code yet.

What I mean by a round-trip would be to go through the profile in the forward and then the reverse direction.  In this case the profile is only used once, in the forward direction, for both the print and the simulation.  So we have an RGB image in the workspace that is converted to RGB colors for the printer (by the CMM/profile); and we have the exact same RGB image that is converted to RGB by fakeread through the same profile, and then converted to D50 Lab so that the spectrometer readings can be compared directly by colprof.  (It could be that fakeread doesn't convert to RGB at all but goes straight from the image RGB to Lab; if it does convert to RGB then I don't know how it does the conversion to Lab ... I'll try to find out.  Looking at the code, it seems to be doing a conversion to RGB and then back to Lab using a conversion matrix and a white point adjustment, but the code is complicated and I don't fully understand what it's doing.)
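For what it's worth, the final D50 Lab step itself is not mysterious: it is the standard CIE XYZ to L*a*b* conversion against the D50 white point. A minimal sketch of that textbook formula (this is the standard math, not Argyll's actual code):

```python
# Standard CIE XYZ -> L*a*b* conversion relative to the D50 white point.
# This is what any CMM must do after the profile's forward table produces
# XYZ; it is the textbook formula, not Argyll's implementation.

D50 = (0.9642, 1.0000, 0.8249)  # ICC PCS D50 white point

def xyz_to_lab(x, y, z, white=D50):
    def f(t):
        # cube root above the CIE threshold, linear segment below it
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = f(x / white[0]), f(y / white[1]), f(z / white[2])
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return L, a, b

# The white point itself maps to L* = 100, a* = b* = 0
print(xyz_to_lab(*D50))
```

The white-point division is exactly where a relative vs. absolute rendering makes a difference, which is presumably why the choice of profile table matters to the simulated readings.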

Of course the simulation is unlikely to be perfect: for example, the print used the Microsoft CMM whereas fakeread will use its own internal conversion algorithms, and there are bound to be differences there; and there could be programming errors in the Argyll code.

Like you I don't know enough about the internals to be able to gauge the simulation errors; but I would be pretty confident in the Argyll code as it's been out there for a long time and it's very widely used.  Also, whatever error is introduced by the simulation should be consistent: so say the maximum simulation error is a dE of 1.0 ... well then you can take the test results as being correct to +/- dE of 1.0, which is still very good.

I think that perhaps the most useful thing is not the absolute accuracy of the test, but that it can highlight problem areas.  For example, if you find that most results have a dE of 1.0 or better, but 10 results have a dE greater than 5 or 10 (or maybe much bigger), then the chances are that there is something seriously wrong with your profile or your printer (like a nozzle clog, say).  So you can then do some tests to see what the problem is.
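That sort of screening is easy to automate once you have the per-patch dE list out of the verification step. A hypothetical sketch (the threshold values and the pre-parsed dE list are my own assumptions, not anything Argyll provides):

```python
# Screen a list of per-patch dE values for two different failure modes:
# a few large outliers (suggesting a localized problem such as a clogged
# nozzle or a scan misread) versus a uniformly raised average (suggesting
# calibration drift). Thresholds are illustrative, not a standard.

def screen_patches(de_values, avg_limit=1.0, outlier_limit=5.0):
    avg = sum(de_values) / len(de_values)
    outliers = [(i, de) for i, de in enumerate(de_values) if de > outlier_limit]
    if outliers:
        return "localized problem? %d patches over dE %.1f" % (len(outliers), outlier_limit)
    if avg > avg_limit:
        return "uniform drift? average dE %.2f" % avg
    return "ok: average dE %.2f" % avg

readings = [0.4, 0.6, 0.5, 7.2, 0.3, 6.8, 0.5]
print(screen_patches(readings))
```

The point is just to separate "a handful of bad patches" from "everything shifted a little", since the two call for different follow-up tests.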

Quote
It would be interesting to know what level of consistency you can achieve with your i1 Pro 2 for handheld measurements on a day to day basis. Your first test was excellent, more excellent than anything I have or seen others been able to achieve in scan mode, for large patch targets. Even tech support of X-rite Switzerland was unable to better my average best results, even when I had the target wrongly set up (nothing to do with wrong patch size input).


I'll try again in a week or so and let you know.  It is a new instrument and perhaps I have a good one by luck.  Also I am very careful in making sure the prints are well and truly dry and I scan very carefully ... slow and steady.  Argyll uses lines between the spot colors and these may also help.

Quote
If you like doing it this way, by all means. No one can tell you what to do!  :) But please do not come to the wrong conclusions, like saying common calibration is ok because you had low dE variances.

I do not think it is a useful test for me personally because it cannot help me isolate a problem if there is one. The amount of effort and time to make a print of a target and measure it is the same to start with, so I would much prefer to compare it to an actual measurement I made previously, than some simulated one. I am able to derive far more useful information out of this kind of test, and saves me time.

It compares against the Lab reference values of the target. That way you can tell if the profile is doing a good job of gamut mapping colors sampled from all over the RGB space, and where it might need to do a better job.


I'm not hung up on this test at all.  What I'm trying to do at the moment is find a way of verifying my print system to try to make sure that it is as solid as I can make it.  When I say that common calibration is OK, what I mean is that using common calibration followed by profiling (which takes out the calibration errors) appears to be producing good results on my printer.  The test is one measure, but of course I'm also looking at prints visually.

Here is another test that compares two prints of a target:

===================================================================
rem Profcompare.bat iccprofile

targen -v -d2 -G -f100 ProfCompare1
copy ProfCompare1.ti1 ProfCompare2.ti1

printtarg -v -r -ii1 -a1.0 -T300 -M6 -pA4 ProfCompare1
printtarg -v -r -ii1 -a1.0 -T300 -M6 -pA4 ProfCompare2
cctiff -v -ir -e %1 ProfCompare1.tif ProfCompare1O.tif
move /Y ProfCompare1O.tif ProfCompare1.tif
cctiff -v -ir -e %1 ProfCompare2.tif ProfCompare2O.tif
move /Y ProfCompare2O.tif ProfCompare2.tif

Pause Print ProfCompare1.tif and ProfCompare2.tif with no color management

Pause Scan ProfCompare1
chartread ProfCompare1

Pause Scan ProfCompare2
chartread ProfCompare2

Pause The test results will be in ProfCompare.txt
colverify -v2 -N -k -s -w -x ProfCompare1.ti3 ProfCompare2.ti3 > ProfCompare.txt
==================================================================

This test can do two things: show the repeatability of your instrument (you can scan the same print twice and colverify will then give you the scan differences); and show the drift over time of the printer calibration, or print issues like head clogs (of course you would need to run the test in two goes, saving the first set of results so you can compare them to the second).
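The kind of summary colverify prints (average, peak, worst 10%) is also simple to reproduce if you ever want to post-process the Lab values from two measurement runs yourself. A rough sketch using plain Euclidean dE76 (colverify can also report dE94/dE00, which this does not attempt):

```python
import math

# Compare two measurement runs of the same target patch by patch and
# summarize like colverify does: overall average, peak, and the average
# of the worst 10% of patches. Plain dE76 (Euclidean distance in Lab).

def de76(lab1, lab2):
    return math.dist(lab1, lab2)

def summarize(run1, run2):
    des = sorted(de76(a, b) for a, b in zip(run1, run2))
    worst = des[-max(1, len(des) // 10):]  # worst 10% of patches (at least one)
    return {
        "avg": sum(des) / len(des),
        "peak": des[-1],
        "worst10_avg": sum(worst) / len(worst),
    }

run1 = [(50, 0, 0), (60, 5, -5), (30, 10, 10)]
run2 = [(50.5, 0, 0), (60, 5, -5), (33, 10, 10)]
print(summarize(run1, run2))
```

With real data you would parse the Lab columns out of the two .ti3 files first; the statistics themselves are the trivial part.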

I'm looking into a test to compare the image Lab values to the scanned Lab values, but although I think this would be useful, it would need to be used with care because the profile/CMM will change the data (that is its job, after all).  It would certainly show the extent to which the profile had shifted the values ... and if you saw some very large differences, particularly if they were clustered around a hue, saturation or lightness range, then it might indicate a profile problem (but this would probably best be found visually using GamutVision or ColorThink).

Quote
Yes, this sort of testing can be done in many different ways. I posted the link because it was well laid out online - a good spring board for coming out with more ideas too. It is interesting to study how these companies design their tests. Google translate helped me through all the German.


Same here with Google translate  :).  I don't really understand what values ColorCheck are comparing and how they are doing it, but I assume that they are comparing the Lab values, and that they have chosen the target colors to be most likely within the printer gamut (they mention Fine Art and the target is an sRGB image).  Of course this won't tell you what's happening for out-of-gamut colors, but I think some of their reports are quite interesting, for example the a* and b* plots.

I won't bother starting a new topic as it's more likely to end up in lots of arguments ... I just thought it might be useful to get some other people's input.

Robert
« Last Edit: October 27, 2014, 06:00:52 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana

samueljohnchia

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 498
Re: Upgrading from HPZ3100 to iPF6400
« Reply #94 on: October 30, 2014, 09:07:18 pm »

Hi Robert, sorry for the delayed reply, I was busy.

It is good that we are on the same page regarding consistency.

Quote
What I am trying to measure is exactly that: the accuracy of the print, post rendering.  

This is of course impossible, because you are using an inherently flawed device, not a reference grade spectrophotometer, to make the measurements.

Now that I think about it, fakeread most likely is just using the profile to convert from device RGB (your target patches) to Lab, while printing the target means using spectral data from measuring with your i1 Pro 2 to generate the Lab values. Therefore, fakeread is not based on any reference or perfect i1 Pro. So I think we are sort of in agreement here on this one.

I am also now in agreement with you that the greatest contributor to the dE differences is interpolation error in the various conversion and measurement steps.

Also the accumulation of error, from your first round of measurements to build the profile, and the measurement of a print made with the profile, cannot be a good thing.

I am even more convinced that this is not a useful test. If you get low dEs, everything is fine and dandy, but I know nothing else. If it fails, I would have no idea what is contributing to the failure. I might as well conduct a more useful test that allows me to separate printer and measurement issues from the outset. You may perhaps have had a measurement error when creating the profile in the first place. It would still involve printing and measurement, but what you compare the measurement to is different.

There are better ways to verify whether your profile is performing well or not. If you want to evaluate profile accuracy in terms of "if I send L*50, will my printer print L*50?", this test does not answer that. If you want to know if your profile will render gradients smoothly, free of banding, free of hue shifts etc., it does not answer that either. If you wanted to know if calibration was helping put the printer in a more linear state, you would need to graph the native performance of the printer before and after calibration. If you wanted to know if calibration on OEM Canon paper is better than on third-party papers, or if calibration on glossy is better than on matte, you would also need to graph the native performance of the printer.

Quote
then the chances are that there is something seriously wrong with your profile or your printer (like nozzle clog, say)

Quite honestly, if you had anything that seriously wrong with your printer, you would see it with your naked eye just by studying the print. No need for this laborious test.

And if you have anything as subtly wrong with the dot pattern of the printer as I had been having, this test is not accurate enough to pick it up. You still need to look closely at the actual print outs.

Quote
I'll try again in a week or so and let you know.  It is a new instrument and perhaps I have a good one by luck.  Also I am very careful in making sure the prints are well and truly dry and I scan very carefully ... slow and steady.  Argyll uses lines between the spot colors and these may also help.

Great. 2000+ patches is where it starts to get challenging to measure consistently without errors. I once measured over 100,000 patches in one sitting - not a good idea! Even the much more expensive iSis produces more measurement errors than I would like it to. Indeed, the separation lines between patches do help the software detect the patches. Any i1 Pro 2 that passes the i1Diagnostics test should be in good condition for measurement.

Quote
What I'm trying to do at the moment is find a way of verifying my print system to try to make sure that it is as solid as I can make it.

I'm afraid there is no one way of verifying this. You would have to use a variety of tests and experiments to discover this. And then at some point you will realise that making it solid (to borrow your word) for a particular paper will not work for another paper. Even on the same paper optimizing for dmax or gamut will cause you to sacrifice something else. Getting super accurate skin tones or spot colors may cause inevitable banding in other color gradients. It is all about constantly re-balancing depending on what kind of imagery you are printing.

Quote
Here is another test that compares two prints of a target.

Would it not be easier to just use the QA Analysis module in i1Profiler for this?

Quote
I don't really understand what values ColorCheck are comparing and how they are doing it, but I assume that they are comparing the Lab values

Yes, I think this is what is happening. Having a sampling of colors which will fall outside the gamut boundaries of the printer is not a problem - it is also useful to see how such colors are treated, especially when evaluating the perceptual rendering. Unless you are going to do reproductions, I would be less hung up about the accuracy of image Lab compared to print Lab. Visual assessments are an excellent start.
« Last Edit: October 31, 2014, 03:42:08 am by samueljohnchia »
Logged

Robert Ardill

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 658
    • Images of Ireland
Re: Upgrading from HPZ3100 to iPF6400
« Reply #95 on: November 03, 2014, 07:40:15 am »


"What I am trying to measure is exactly that: the accuracy of the print, post rendering."

This is of course impossible, because you are using an inherently flawed device, not a reference grade spectrophotometer, to make the measurements.

Yes, of course any test that requires an instrument will never be 100% accurate.  But you can say that if you use the same i1Pro2 for profiling and verification, do it under the same conditions as much as possible, allow the same amount of time for the ink to dry, etc., then you can be confident that your comparisons are within a dE00 of x, relative to each other.  We should be able to establish what x is with a reasonable level of certainty by doing several scans of a target and comparing the values.

I know, for example, that if I repeatedly measure the same spot color, that I start off with a dE94 of about 0.005 and that this drifts up to about 0.03 as the lamp heats up.  If I move the device to a different spot on the color patch, the dE94 value can go up to about 0.3 max.  I also know that if I recalibrate the device that the measurement changes by a dE94 of about 0.1.  So the worst case is a dE94 difference of 0.4 between measurements (not taking into account any other error sources, like rounding or interpolation errors).

So straight away I know that the repeatability of the instrument is very good, although it does drift a bit as the lamp heats up; that recalibration introduces a difference of about 0.1; and that with this particular paper (Canson Baryta in this case) there can be a variability of up to 0.3 over a 3cm square patch.  So I can say that if I do a validation, the readings are only good to a dE94 of 0.4 (ignoring any other sources of error).  That's well within the just-noticeable-difference range, so if I can get to this level of accuracy in a test, it's better than I can see by visual inspection.
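Incidentally, summing those sources linearly (0.03 + 0.3 + 0.1) gives the absolute worst case; if the sources are independent, combining them in quadrature gives a more typical combined figure. A quick sketch of both, using the numbers above (the independence assumption is mine and may not strictly hold):

```python
import math

# Combine the quoted i1Pro2 error sources two ways: linear sum (absolute
# worst case, everything misses in the same direction at once) versus
# root-sum-of-squares (typical combined error, assuming independence).

sources = {
    "lamp drift": 0.03,            # drift as the lamp heats up
    "patch non-uniformity": 0.3,   # variability across a 3cm patch
    "recalibration": 0.1,          # difference after recalibrating
}

worst_case = sum(sources.values())
typical = math.sqrt(sum(v * v for v in sources.values()))

print("worst case dE94: %.2f, typical (quadrature): %.2f" % (worst_case, typical))
```

So the realistic expectation sits somewhere nearer 0.3 than 0.4, which is consistent with treating 0.4 as the outer bound.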

Quote
Now that I think about it, fakeread most likely is just using the profile to convert from device RGB (your target patches) to Lab. While printing the target means using spectral data from measuring with your i1 Pro 2 to generate the Lab values. Therefore, Fakeread is not based on any reference or perfect i1 Pro. So I think we are sort of in agreement here on this one.

Yes, I'm trying to pin down exactly what the various commands do, but at this stage I would be reasonably confident in saying that fakeread uses the profile's AtoB1 (colorimetric) table to convert the RGB test values to Lab, with a white-point adjustment when absolute colorimetric is requested.  So the Lab value it produces is only as good as the forward tables.  For this reason, it's important in a test either to eliminate any destination out-of-gamut colors, or to make sure that the colors are all in gamut (the easiest way to do that is to produce the RGB test values and then Assign the destination profile to the image ... which is effectively what both cctiff and fakeread do, if I understand them correctly).

Quote
I am also now in agreement with you that the greatest contributor to the dE differences is interpolation error in the various conversion and measurement steps.

Yes, that's possibly true, especially if the profile was made from few color patches.  To see how large this error might be, what I did was to generate an RGB target and run it through fakeread to get the in-gamut Lab values.  I then converted one of the values back to RGB and then back again to Lab (using xicclu) and compared the two Lab values.  I got a dE94 error of about 0.2.  The profile I used was a pretty good one made from 2600 patches.  So, based on this single test value, we could now say that our validation is only good to a dE94 of 0.6.
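For reference, the dE94 figure quoted here is the standard CIE94 formula (graphic-arts weights, kL = kC = kH = 1); a minimal implementation, which may differ in detail from what Argyll or xicclu computes internally:

```python
import math

# CIE94 color difference with graphic-arts weights (kL = kC = kH = 1).
# Standard published formula; implementations may differ in edge cases.

def de94(lab1, lab2):
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    dL = L1 - L2
    C1 = math.hypot(a1, b1)   # chroma of the reference color
    C2 = math.hypot(a2, b2)
    dC = C1 - C2
    # hue difference squared, clamped against tiny negative float error
    dH2 = max(0.0, (a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2)
    SC = 1 + 0.045 * C1
    SH = 1 + 0.015 * C1
    return math.sqrt(dL ** 2 + (dC / SC) ** 2 + dH2 / SH ** 2)

# Identical colors differ by 0; a pure lightness step of 0.2 gives dE94 of 0.2
print(de94((50, 0, 0), (50, 0, 0)), de94((50.2, 10, 10), (50.0, 10, 10)))
```

Note the chroma-dependent weights SC and SH: the same a*/b* shift counts for less in saturated colors, which is exactly why dE94 (and dE00) track perception better than plain dE76 for this kind of round-trip check.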

My own feeling is that we should not be concerned about validation errors of dE00 less than 1.0.  If the dE00 is over 1.0 then this could well be pointing to a problem somewhere, most likely not due to the instrument or to rounding/interpolation errors.  As a dE00 of 1.0 is just noticeable, I think that's OK (after all, we're not looking for errors that we can't notice  :)).

Quote
Also the accumulation of error, from your first round of measurements to build the profile, and the measurement of a print made with the profile, cannot be a good thing.

Yes, that's true: errors in the profile will certainly cause problems ... not just in a validation test but in the printed image.  So trying to find out if the profile is OK (whether by using a 3D gamut map, doing a visual examination of a test print, or using some sort of validation test like the ones I'm looking at) is surely a useful thing to do.

Quote
I am even more convinced that this is not a useful test. If you get low dEs everything is fine and dandy but I know nothing else. If it fails, I would have no idea what is contributing to this failure. I might as well conduct a more useful test to allow me to immediately separate printer and measurement issues from the onset. You may perhaps have a measurement error when creating the profile in the first place. It would still involve printing and measurement, but what you compare the measurement to is different.

There are better ways to verify if your profile is performing well or not. If you want to evaluate profile accuracy in terms of if I send L*50, will my printer print L*50? - this test does not answer that. If you want to know if your profile will render gradients smoothly, is free of banding, free of hue shifts etc, it does not answer it. If you wanted to know if calibration was helping put the printer in a more linear state, you need to graph the native performance of the printer before and after calibration. If you wanted to know if calibration on OEM Canon paper is better than third party papers, or if calibration on glossy is better than matte, you also need to graph the native performance of the printer.

Sure, of course.  I'm not suggesting that this is a fix-all test.  It's just a test which may or may not be useful.  Personally, the very first thing I do after making a profile is to look at a 3D gamut map to see that the gamut looks OK, doesn't have any holes in it, has the expected gamut volume, and appears smooth; then I look at the black and white density response to see how smooth it is and that I am getting the expected DMax.  Then I check to see how it performs through a Granger rainbow (visually, on the monitor).  Then I compare the profile to a similar profile (say one I've done before, or the manufacturer-supplied profile) to see the differences.  After that I will do a test print and examine it visually to check things like gradient smoothness etc.  THEN I may run the kind of test I'm talking about.

I guess that one very useful outcome of this discussion would be to come up with a procedure to check the print system, whether this is by doing test prints or by using validation tests, or by using inspection software like ColorThink and GamutVision, or by a combination of these.

Just out of interest, have a look at the Canson iPF6400 Baryta profile, perceptual intent: serious garbage!


Quote
"What I'm trying to do at the moment is find a way of verifying my print system to try to make sure that it is as solid as I can make it".

I'm afraid there is no one way of verifying this. You would have to use a variety of tests and experiments to discover this. And then at some point you will realise that making it solid (to borrow your word) for a particular paper will not work for another paper. Even on the same paper optimizing for dmax or gamut will cause you to sacrifice something else. Getting super accurate skin tones or spot colors may cause inevitable banding in other color gradients. It is all about constantly re-balancing depending on what kind of imagery you are printing.

Exactly.  One test may reveal one problem whereas another may reveal another.  For example, if you print with the Canson iPF6400 Baryta profile with Perceptual intent, you will most likely get a pretty bad result (but you might not, depending on the image).  If you look at the profile using a 3D gamut viewer ... you will certainly see that it is totally flawed.  

And, for sure, everything could be fine for one paper and completely bad for another (but in that case it would be reasonable to suspect the profile).

I do agree that the acid test is the print.  But you certainly learn a hell of a lot when you start to look at various tests: not just from the point of view of whether things are right or not, but much more basic things like what happens to an image as it is being translated from input to working space to output, for example.  A few years ago I would happily jump from ProPhoto to Lab to sRGB to print ... and things would be sort of OK; but now when I look at an image I'm immediately drawn to potential problem areas, like banding due to out-of-gamut clipping, for example.  I think I understand what is going on much better than I used to, and I'm less likely to make silly mistakes that result in an inferior print (or web image).

Quote
"Here is another test that compares two prints of a target".

Would it not be easier to just use the QA Analysis module in i1Profiler for this?

No, once you have the batch files Argyll is very easy to use.  It is also very flexible in what you can do.  But then again, I haven't really looked at i1Profiler for quality assurance (I guess you need to either select or make a .pxf CGATs file, print and scan it twice and then use the Data Analysis to compare the results?).

Robert
« Last Edit: November 03, 2014, 07:48:42 am by Robert Ardill »
Logged
Those who cannot remember the past are condemned to repeat it. - George Santayana