
Author Topic: Generating a Kodachrome profile from an IT8 target

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #80 on: May 30, 2011, 12:38:37 am »

We must be inhabiting the same ether space and unconsciously communicating. Last night I was thinking about what happens when you change colour numbers in the profile space and thought: colours must be moving away from what the profile is trying to correct, but couldn't quite convince myself that it was true. Now I know it is. Yet another trap for a novice Kodachrome scanner – don't edit in the profile space, even when testing. It will be interesting to see if editing in another space makes much difference to the shadows.

I processed your Reference 18 Gamma 1 with the linear XYZ profile, converted to ProPhoto RGB, then a simple Exposure adjustment to brighten, adjust contrast, and cut down a little flare. Then Levels to make small adjustments to the Red and Blue channels (for simple alignment of the characteristic curves, as I was advocating in the Kodachrome threads a few months ago). I think this Gamma 1 version compares favorably with your edited version. Check out the color detail in the face. PSD file with Layers.

Quote
QUES
Any idea how profilers average a patch? How much of the patch do they average? Do they discard values exceeding certain limits? Do they correct the average if the histogram shows a bias (it should be bell-shaped, I assume)?

It seems to me that to obtain an optimum profile, you should precondition the RGB data and the darker IT8 patches.

This is from the Argyll documentation of the scanin measurement utility, which of course is also used in Coca:
Quote
Normally scanin computes an average of the pixel values within a sample square, using a "robust" mean that discards pixel values that are too far from the average ("outlier" pixel values). This is done in an attempt to discard values that are due to scanning artefacts such as dust, scratches etc. You can force scanin to return the true mean values for the sample squares, including all the pixel values, by using the -m flag.

Scanin uses the information contained in a cht_format file that specifies the locations of the patches to be read, and what fraction of the patch area to read.

The Scanin command seems to give reasonable measurements. It's completely automatic and produces a measurement file that can be altered with a text editor, so it's not necessary to modify the target scan itself. Then just run the colprof command to make the profile and you're done. Since only 2 or 3 commands are needed to build profiles, it takes only a short time to install and learn to use Argyll.
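For anyone curious what that "robust" mean might look like in practice, here is a minimal sketch in Python (my own illustration of the idea, not Argyll's actual code):

import numpy as np

def robust_patch_mean(pixels, n_sigma=2.5, iterations=3):
    # Average an (N, 3) array of RGB pixel values, repeatedly discarding
    # pixels that sit too far from the current mean (dust, scratches, etc.)
    pixels = np.asarray(pixels, dtype=float)
    keep = np.ones(len(pixels), dtype=bool)
    for _ in range(iterations):
        mean = pixels[keep].mean(axis=0)
        std = pixels[keep].std(axis=0) + 1e-9        # avoid divide-by-zero
        keep = (np.abs(pixels - mean) / std < n_sigma).all(axis=1)
    return pixels[keep].mean(axis=0)

# compare with the plain mean (what the -m flag would give you)
patch = np.vstack([np.random.normal(0.35, 0.005, (500, 3)),
                   [[0.9, 0.9, 0.9]]])               # one bright dust speck
print(patch.mean(axis=0), robust_patch_mean(patch))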
« Last Edit: May 30, 2011, 01:03:09 am by crames »
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #81 on: May 30, 2011, 04:53:54 am »

Since only 2 or 3 commands are needed to build profiles, it takes only a short time to install and learn to use Argyll.

The only problem with that idea is, you are obviously very experienced with this sort of thing and I'm not. I took one look at this instruction … "Making the tools accessible: You should also configure your %PATH% environment variable to give access to the executables from your command line environment" … My goodness. I'd rather learn Ancient Greek than enter a modern programming environment – although I was pretty good at entering a stack of punch cards encoded in FORTRAN into a PDP 11. But that was too many years ago now to contemplate.

Just the intensity of Argyll's documentation puts me off. Pity, but that's the way it is. I'm stuck with Coca.

Some interesting things came of reading about Scanin:

Resolution
It is designed to cope with a variety of resolutions, and will cope with some degree of noise in the scan (due to screening artefacts on the original, or film grain), but it isn't really designed to accept very high resolution input. For anything over 600DPI, you should consider down sampling the scan using a filtering downsample, before submitting the file to scanin.

I'll have to ask the author of Coca whether he scales the image before sending it to Scanin. I've been using 2000 dpi because 4000 dpi didn't work. Maybe even that's overkill.


Preconditioned or not?
Normally scanin has reasonably robust feature recognition, but the default assumption is that the input chart has an approximately even visual distribution of patch values, and has been scanned and converted to a typical gamma 2.2 corrected image, meaning that the average patch pixel value is expected to be about 50%. If this is not the case (for instance if the input chart has been scanned with linear light or "raw" encoding), then it may enhance the image recognition to provide the approximate gamma encoding of the image.

Another question for the author of Coca – has he left the gamma at its default of 2.2? Maybe this explains why gammas higher than one seem to work better for me. It's becoming very confusing. From the above quote it appears there is an assumed preconditioning to gamma 2.2, yet the author of Argyll said in an email to me there was no preconditioning:

Argyll offers a number of models for input profiles (gamma + matrix, shaper + matrix, cLut). None of them use polynomial models to create curves or tables. I suspect that the described "pre-conditioning" approach really doesn't apply to models similar to a gamma+shaper model, since any pre-conditioning gamma will be cancelled out in a simple mathematical fashion. The Argyll shaper+gamma model uses a power curve plus higher-order shaping curves, so once again I'd be surprised if applying some extra gamma would change the result much, since it will be simply counteracted by the gamma curve parameter in the model.

For cLut type profiles, by default Argyll automatically creates a device curve that maximizes the linearity of the fit of the data points. I would guess that this has a similar effect to the "pre-conditioning" idea, but is tailored to the actual data, rather than being assumed. Note that applying an assumed "pre-conditioning" gamma may have a very detrimental effect for some device characteristics, if they don't meet the assumptions. The device values for instance might be linear light, or they could be gamma corrected. The profile has to account for both cases and everything in between.

In optimizing the model fit, Argyll generally uses the CIE94 delta E formula, since it has a better correspondence to visual errors than straight Lab delta E. Naturally a delta in XYZ space is not used, as it has poor visual correlation.
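To check the cancellation claim for myself, here's a quick numeric sketch (Python, purely my own illustration – it has nothing to do with Argyll's internals):

import numpy as np

x = np.linspace(0.01, 1.0, 50)        # "true" device values
y = x ** 2.2                          # measured response to be modelled

def fit_gamma(xv, yv):
    # fit y = x**a by linear regression in log-log space
    return np.polyfit(np.log(xv), np.log(yv), 1)[0]

print(fit_gamma(x, y))                # ~2.20 when fitted to the raw data
x_pre = x ** (1 / 1.6)                # same data preconditioned with gamma 1.6
print(fit_gamma(x_pre, y))            # ~3.52 = 2.2 x 1.6: the exponent absorbs it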


Preconditioning. Guy Burns reckons 1.6 is optimum for his setup; Hutchcolor mentions 3.0, 2.8; Cliff reckons 1.0; LPROF is somewhere in between; others say set GS11 to 100-115. What all this means, of course, is that they all work acceptably. Send a profiler an image with any gamma you like and you'll get a decent profile most of the time. Not necessarily the optimum, but decent.

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #82 on: May 30, 2011, 07:05:00 am »

Just the intensity of Argyll's documentation puts me off. Pity, but that's the way it is. I'm stuck with Coca.

I hear you. No matter, Coca is good. If you want to try something in Argyll, I can run it for you.

Quote
Resolution
It is designed to cope with a variety of resolutions, and will cope with some degree of noise in the scan (due to screening artefacts on the original, or film grain), but it isn't really designed to accept very high resolution input. For anything over 600DPI, you should consider down sampling the scan using a filtering downsample, before submitting the file to scanin.

In testing scanin I used my own 4000 ppi scans with no down-sampling and had no problems. The measured values in the famous GS20+ region are close to what I get with a careful, small selection that avoids artifacts.

Have you checked to see whether gamma affects the averaging, if you're not scaling in a linear space?
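(What I'm getting at: the mean of gamma-encoded values is not the gamma-encoded mean of the linear values. A two-pixel sketch in Python:)

import numpy as np

v_lin = np.array([0.01, 0.04])                 # two linear pixel values in a patch
g = 2.2
print(np.mean(v_lin) ** (1 / g))               # average first, then encode: ~0.187
print(np.mean(v_lin ** (1 / g)))               # encode first, then average: ~0.178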

Quote
Preconditioned or not?
Normally scanin has reasonably robust feature recognition, but the default assumption is that the input chart has an approximately even visual distribution of patch values, and has been scanned and converted to a typical gamma 2.2 corrected image, meaning that the average patch pixel value is expected to be about 50%. If this is not the case (for instance if the input chart has been scanned with linear light or "raw" encoding), then it may enhance the image recognition to provide the approximate gamma encoding of the image.

I haven't seen a problem with the feature recognition in linear gamma or any other.

Quote
Preconditioning. Guy Burns reckons 1.6 is optimum for his setup...

Perhaps only when the scanner profile is used as the editing space?

attached: here is the diagnostic image that scanin generated while reading your Gamma 1 scan, showing the patch recognition and reading areas.
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #83 on: May 30, 2011, 10:14:57 am »

Cliff – I'll go through your last few posts over the next day or two and pick out the gems. I've just about come to the end of my testing, other than seeing the effect of editing in a space other than the profile space. I don't reckon I'll see any improvement, because I based my choice of gamma 1.6 (and earlier, 1.8 ) by just looking at the raw, profiled, unedited scans. After I made the choice of what I thought was the best raw scan, only then did I try editing. I did try editing a couple of gamma 1.0 scans a week or so ago, but couldn't get them to look as good as a gamma 1.8 (this was before I decided 1.6 was even better).


L* Errors Caused By The Surround
Attached is a graph of L* errors for a variety of GS patches, measured with respect to each patch. This graph is not comparing L* with the IT8 value; the graph shows the variation in average L* in each patch as the linear size of the selection changes: 100%, 80%, 64%, 51% and 41% (funny numbers because I scaled by 0.8 each time). As expected, the 100% patch is most affected by the surround, and the effect is most pronounced for darker patches. Starting from a 100% selection (in reality about 95%), as the selection was made smaller for each patch, the errors became smaller. Below ~40%, there was very little change in the average value, so I chose 40% as my reference.

Summary
To avoid L* errors caused by the surround being of a different luminance than the patch itself, the optimum size of the selection should be around ~40%, resulting in a maximum error of less than 0.1 L* units. Errors remain low up to an 80% selection, but anything above that should be avoided, particularly for the darker patches, because the errors become significant.
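For the record, this is roughly how the numbers were produced – a Python sketch of the measurement rather than my actual Photoshop selections (the patch pixels below are made up purely for illustration):

import numpy as np

def center_crop_mean(patch, fraction):
    # mean of a centred crop whose linear size is `fraction` of the patch
    h, w = patch.shape
    ch, cw = max(1, round(h * fraction)), max(1, round(w * fraction))
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return patch[y0:y0 + ch, x0:x0 + cw].mean()

def to_Lstar(Y):
    # CIE Y (0..1) to L*
    return 116 * Y ** (1 / 3) - 16 if Y > (6 / 29) ** 3 else Y * (29 / 3) ** 3

# a fake dark patch (Y ~ 0.005) with lighter, flare-contaminated top and bottom edges
patch = np.full((92, 46), 0.005)
patch[:6, :] = 0.02
patch[-6:, :] = 0.02

for f in (1.0, 0.8, 0.64, 0.51, 0.41):
    print(f, round(to_Lstar(center_crop_mean(patch, f)), 2))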
« Last Edit: May 30, 2011, 10:22:27 am by guyburns »

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #84 on: May 30, 2011, 11:12:59 am »

I don't reckon I'll see any improvement, because I based my choice of gamma 1.6 (and earlier, 1.8 ) by just looking at the raw, profiled, unedited scans. After I made the choice of what I thought was the best raw scan, only then did I try editing.

Hmm. Without cranking up the lightness to evaluate the dark tones?

Let me know what you think of the gamma 1 version I did yesterday of Reference Scan 18. Crops attached.

Quote
L* Errors Caused By The Surround
Attached is a graph of L* errors for a variety of GS patches, measured with respect to each patch. This graph is not comparing L* with the IT8 value; the graph shows the variation in average L* in each patch as the linear size of the selection changes: 100%, 80%, 64%, 51% and 41% (funny numbers because I scaled by 0.8 each time). As expected, the 100% patch is most affected by the surround, and the effect is most pronounced for darker patches. Starting from a 100% selection (in reality about 95%), as the selection was made smaller for each patch, the errors became smaller. Below ~40%, there was very little change in the average value, so I chose 40% as my reference.

Summary
To avoid L* errors caused by the surround being of a different luminance than the patch itself, the optimum size of the selection should be around ~40%, resulting in a maximum error of less than 0.1 L* units. Errors remain low up to an 80% selection, but anything above that should be avoided, particularly for the darker patches, because the errors become significant.

That's interesting. I will play around with scanin later to see if its "robust mean" is similarly affected.
« Last Edit: May 30, 2011, 12:31:34 pm by crames »
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #85 on: May 31, 2011, 08:00:05 am »

Hmm. Without cranking up the lightness to evaluate the dark tones?
When initially comparing the results of IT8 profiling, and in all subsequent comparisons, I simply opened my reference files (scanned at the various gammas), applied the relevant profile, and had a look at the image, whether it be light, dark, or in between. No cranking up the lightness, no bringing down the highlights, I evaluated all images unedited. Ref 18 Gamma 1.0 (for example) immediately struck me as having something wrong with it. When a similar effect was observed in some of the other Gamma 1.0 references, I decided that Gamma 1.0 was not optimum. Editing may have been able to bring it up to look similar to the others, but I haven't tested that yet because at this stage I want to compare the effect of IT8 profiling alone to determine the best starting point. Comparing editing at various gammas is a test yet to come, after I work out how to optimally scan using my setup. But what I have done is to play around with the gamma I consider to be the best starting point, to see what improvements can be had. When I work out how to edit in the most effective, efficient manner, then I'll test that editing method with all gammas, subsuming the IT8 profiling and editing into one overall valid comparison. If Gamma 1.0 comes out on top, I'll be a Gamma 1.0 man. I'm not in love with a certain gamma – I want the best end result.


Problems with an Analytical Approach
One of the problems with the purely analytical approach of evaluating profiles by their calculated errors is the possibility of internal errors being locked into the loop and not revealing themselves. Generating a profile is a closed loop: you assume you have an accurate IT8 target, you scan it, and the profiler compares the scanned colours with the IT8 reference colours and generates a profile to reconcile them. A closed loop works well if everything is reasonably accurate. Errors between target and scan (after profiling) will be reported as being low. If, however, one of the patches has been incorrectly measured by Kodak, say, or has "surround" problems, then a low error may still be reported because there is nothing to tell the profiler that something is wrong. The profiler may be generating a very accurate profile for the target – it corrects for the inaccuracy, not aware of the inaccuracy – but the profile may be inaccurate for real-life slides. I'm reminded of a comment by the author of an Olympus book on SLR cameras from the 1980s in which he explains the use of 18% gray cards, and finishes with: "I can guarantee that if you go around photographing gray cards, you'll have perfect exposure every time". In the case of IT8 targets, the sentiment could be paraphrased: "If you profile IT8 targets and generate error reports, you'll always get near-perfect results." It's a closed loop. Unless there is something wrong with the profiler, you must get good results.


Low Errors, Poor Profile
Let's say that Kodak reckons GS23 has an L* value of 0.5, when in fact the actual patch is 3.0. Let's assume GS23 scans at 3.0 (good scanner), and that the profiler does a good job at profiling. When the errors are calculated, they will therefore be reported as being low. But come to the scan of a real-life slide, which may also contain actual L* = 3.0 values, every time the profile sees L* = 3.0 in the real image it says: "That should really be 0.5". So it alters the colour by darkening, causing a visible problem in real-image shadows, but no obvious problem in the target (the 3.0 patch will be corrected to 0.5 to make the target look like it should).

The above problem happens in practice with my Kodachrome IT8 target (at least that's my explanation). Here is what I think is going on:

1. Kodak manufactures a master IT8 slide using a certain process that will be repeated in the manufacture of subsequent slides.

2. Each patch is measured with a light instrument, and an IT8 data file is created. Now, how is the L* measurement made? I don't know, so I'll take a guess. Does Kodak have a physical mask which sits over the slide and only allows D50 light to come through one entire patch at a time? Maybe, but I think unlikely (remember, I'm just guessing). I reckon they would throw a small circle of light through each patch, measure the colour of the circle of light, and make sure the circle doesn't approach too closely to the sides of the patch. Thus, Kodak measures the colour of only a small portion in the centre of each patch, ignoring the "surround" problem.

3. But, in reality, the surround problem comes back when the slide is scanned and subsequently profiled, for two reasons: (a) the scanner throws flare onto a dark area if it abuts a bright area; (b) the Kodachrome target appears to have been manufactured with a surround problem inbuilt. By looking closely at the top and bottom of GS15-23, through a 15x microscope eyepiece on a bright slide viewer, I think I can see a lighter stripe, top and bottom. If Kodak measured colour with a circle of light centred on the patch, they avoided this lighter area. If a profiler averages the entire patch, it does not avoid this area, and its average will be different from Kodak's.

4. GS23, the blackest, is most affected: a bright white on the left, and three light grays on the other sides. With my scanner, GS23 scans higher than it should by 3 or 4 L* units – and worse, it scans lighter than GS22 (which has the same bright white on the right, and two light grays, top and bottom). Being lighter, GS22 is less affected. GS21 has light grays top and bottom only, and because it is lighter than GS22, is less affected again. GS21-15 are similarly affected top and bottom, but because they are progressively lighter, the "surround" problem is not as prominent.

5. This surround problem is most prominent for GS15-23. It should not cause problems in other areas, but it is probably best that only the central area of each patch be averaged by a profiler.


Explanation in Figures
We might have a situation where Kodak provides an L* number measured in a small central area, but the scanner/profiler combination profiles a different area, which, because of flare and the surround effect (inbuilt and from the scanner), causes the two values to be reconcilable for the target itself, but not reconcilable when applied to other images.

I'd better explain with figures (made up, just for explanation), which refer to three patches, the darkest two of which show an unexpected (but real) reversal in density, as do GS22 and GS23, caused by flare and surround problems. The first figure is an assumed scanned value of L*, the second is the target value. The figure in brackets is what an ideal profiler would do to the scanned value.

10.1  ->  9.3  (sent darker by 0.8 )
 3.6  ->  1.1  (sent darker by 2.5)
 3.7  ->  0.5  (sent darker by 3.2)

The IT8 profile, when applied to the scanned target, works perfectly. The image on screen looks just like the slide. The profiler has accommodated the flare, the scanner errors, the surround problem. Congratulations all around.

But something comes along to spoil the fun: a real-life slide. It also has L* areas of 10.1, 3.7 and 3.6, but they have not been compromised by flare or surround problems, just slight scanning errors. i.e. they scan at close to these values. Let's assume they scan at exactly these values for the sake of explanation. They are a part of a forest scene in shadow, and 10.1 is brighter than 3.7, which is ever so slightly brighter than 3.6. What does the profile do to them? Well, it transforms them by its inbuilt routines, the same as in the example above:

10.1  ->  9.3
3.7  ->  0.5
3.6  ->  1.1

The shadows of the real scene now have problems. The 10.1 shadow is close to what it should be, but the other two shadow details have been reversed in their brightness and made blacker. The profile, by correcting the IT8 target for flare, surround, and scanner errors, will apply those corrections to all other slides even when the first two problems don't exist.


Summary
Colour correction by profiling, in the presence of flare and surround problems, introduces errors into the shadows, errors that are not present in an uncorrected scan. The errors can be minimized by averaging only the central area of each patch (or by replacing the whole patch with a colour value taken from the centre area), and by faking GS23 to be more in line with what it should be (0.51).

Why is the problem not so apparent at higher scan gammas? Something for me to think about.
« Last Edit: May 31, 2011, 11:42:48 am by guyburns »

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #86 on: May 31, 2011, 09:53:15 am »

When initially comparing the results of IT8 profiling, and in all subsequent comparisons, I simply opened my reference files (scanned at the various gammas), applied the relevant profile, and had a look at the image, whether it be light, dark, or in between. No cranking up the lightness, no bringing down the highlights, I evaluated all images unedited. Ref 18 Gamma 1.0 (for example) immediately struck me as having something wrong with it. When a similar effect was observed in some of the other Gamma 1.0 references, I decided that Gamma 1.0 was not optimum.

If this is why you reject Gamma 1.0, I think you should take another look at it. This issue continues to pop up, and I think it would be good for the conversation to get past it once and for all.

My guess is that you are seeing an anomaly in the way Photoshop displays images in a linear gamma space. Photoshop can make linear gamma images appear extremely posterized. It is worst at 50% zoom or less. It should be better at 100% zoom. If I am right, the posterized look will disappear as soon as you convert to a workspace with a gamma greater than 1, or sometimes just "Layer/Flatten Image" will remove it.

So the display of gamma 1.0 images in Photoshop can be misleading, because images that are perfectly smooth can appear heavily posterized. I think it has to do with shortcuts in the way Photoshop quickly sends data to the display.

I raise this point again because, if this is the reason for your rejecting gamma 1.0 profiles out-of-hand, you are not giving them a fair evaluation. I have not been able to demonstrate to myself that gamma >1 profiles are so obviously superior.

My position on gamma 1.0 profiles is that they are at least as good as higher gamma profiles. I am not saying they are the only way to go. I did a lot of comparisons on real images yesterday that confirm my feeling that gamma 1.0 profiles are no worse, and may actually be slightly better in terms of noise visibility in the darkest tones, an advantage visible only after extreme brightening of those tones.

I have been evaluating the profiles in the context of actual image editing. I will describe my method later, and I also want to address some of the many interesting points you made in your post. For now I will say that my preliminary conclusion is that it is possible to make perfectly equivalent images from Argyll/Coca profiles of any gamma (but only if they are of the S+M type).

Here are some images - does the first one show what you are seeing?
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #87 on: May 31, 2011, 12:05:55 pm »

If this is why you reject Gamma 1.0, I think you should take another look at it. This issue continues to pop up, and I think it would be good for the conversation to get past it once and for all…

Here are some images - does the first one show what you are seeing?
No, that's not what I am seeing. What I see is a definite but subtle change in contrast and colour in the shadows that zooming in doesn't fix, and that changing colour space/editing doesn't appear to be able to improve, but I haven't fully tested the latter across a range of images.

I'm not trying to convince you that Gamma 1.0 is inferior. It's simply as I see it on my setup. We'll have to agree to disagree: with my setup, Gamma 1.0 appears to give inferior results compared with higher gammas. On your system – different monitor, different platform – it may differ. I have a mate in a photographic club who is very interested in all of this. He runs a PC, so I'll check it out on his system, and get his opinion without prompting him which image I think is superior.

We'll move on – but I will still call it as I see it on my setup. And if I come to see it differently for whatever reason, I'll be saying so. I'm not anti Gamma 1.0, I'm pro best-image.

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #88 on: June 01, 2011, 09:33:57 am »

Problems with an Analytical Approach
One of the problems with the purely analytical approach of evaluating profiles by their calculated errors is the possibility of internal errors being locked into the loop and not revealing themselves. Generating a profile is a closed loop: you assume you have an accurate IT8 target, you scan it, and the profiler compares the scanned colours with the IT8 reference colours and generates a profile to reconcile them. A closed loop works well if everything is reasonably accurate. Errors between target and scan (after profiling) will be reported as being low. If, however, one of the patches has been incorrectly measured by Kodak, say, or has "surround" problems, then a low error may still be reported because there is nothing to tell the profiler that something is wrong. The profiler may be generating a very accurate profile for the target – it corrects for the inaccuracy, not aware of the inaccuracy – but the profile may be inaccurate for real-life slides. I'm reminded of a comment by the author of an Olympus book on SLR cameras from the 1980s in which he explains the use of 18% gray cards, and finishes with: "I can guarantee that if you go around photographing gray cards, you'll have perfect exposure every time". In the case of IT8 targets, the sentiment could be paraphrased: "If you profile IT8 targets and generate error reports, you'll always get near-perfect results." It's a closed loop. Unless there is something wrong with the profiler, you must get good results.

The analytical approach in the present case seems useful in that it confirms the problem with increased errors in the darkest GS patches.

Quote
Low Errors, Poor Profile
Let's say that Kodak reckons GS23 has an L* value of 0.5, when in fact the actual patch is 3.0. Let's assume GS23 scans at 3.0 (good scanner), and that the profiler does a good job at profiling. When the errors are calculated, they will therefore be reported as being low. But come to the scan of a real-life slide, which may also contain actual L* = 3.0 values, every time the profile sees L* = 3.0 in the real image it says: "That should really be 0.5". So it alters the colour by darkening, causing a visible problem in real-image shadows, but no obvious problem in the target (the 3.0 patch will be corrected to 0.5 to make the target look like it should).

It's a real problem if we can't trust the Kodak target values. Apparently Kodak measured the target with some highly-specialized equipment, as the Q60 data are calculated from spectral measurements. The spectral data is available from Kodak in a QSP file.

Another option is the Silverfast Kodachrome targets. They have hand- or batch-measured targets. The supplied Q60 measurements are quite different from the Kodak K3 data; for example, in one of the Q60 files the Silverfast GS0-GS23 range runs from L* 2.02 to 77.89, while the Kodak range runs from 0.51 to 88.28. The Silverfast targets have a lower dynamic range and perhaps would generate less flare. I don't know whether profiling with Silverfast targets would be better, but it would almost certainly be different. They have that same white line between GS22 and GS23, however.

Quote
The above problem happens in practice with my Kodachrome IT8 target (at least that's my explanation). Here is what I think is going on:

1. Kodak manufactures a master IT8 slide using a certain process that will be repeated in the manufacture of subsequent slides.

2. Each patch is measured with a light instrument, and an IT8 data file is created. Now, how is the L* measurement made? I don't know, so I'll take a guess. Does Kodak have a physical mask which sits over the slide and only allows D50 light to come through one entire patch at a time? Maybe, but I think unlikely (remember, I'm just guessing). I reckon they would throw a small circle of light through each patch, measure the colour of the circle of light, and make sure the circle doesn't approach too closely to the sides of the patch. Thus, Kodak measures the colour of only a small portion in the centre of each patch, ignoring the "surround" problem.

You are probably right about this, although they might have used both a mask and a restricted illumination circle in some kind of microscope arrangement.

Quote
4. GS23, the blackest, is most affected: a bright white on the left, and three light grays on the other sides. With my scanner, GS23 scans higher than it should by 3 or 4 L* units – and worse, it scans lighter than GS22 (which has the same bright white on the right, and two light grays, top and bottom). Being lighter, GS22 is less affected. GS21 has light grays top and bottom only, and because it is lighter than GS22, is less affected again. GS21-15 are similarly affected top and bottom, but because they are progressively lighter, the "surround" problem is not as prominent.

5. This surround problem is most prominent for GS15-23. It should not cause problems in other areas, but it is probably best that only the central area of each patch be averaged by a profiler.

I think a practical approach to the GS23 problem is to scan an unexposed frame, which would provide DMAX with the least possible flare, and substitute that measurement for the GS23 patch. I am going to try to dig out an unexposed frame and try this on my scanner. Then GS22 and the others could be adjusted from GS23 with some interpolation based on the real-world DMAX.
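Something along these lines, perhaps (Python sketch; all the numbers and the one-stop assumption are placeholders, not real measurements):

# Substitute a flare-free DMAX reading (from an unexposed, developed frame)
# for GS23, then pull GS22 into line with the roughly one-stop spacing the
# Q60 data implies (L* 1.07 vs 0.51 is about 2:1 in linear terms).
dmax_rgb  = [0.0210, 0.0055, 0.0120]    # scanner RGB of the unexposed frame (assumed)
gs22_meas = [0.0740, 0.0325, 0.0290]    # GS22 as measured from the target scan (assumed)
gs23_meas = [0.0800, 0.0372, 0.0325]    # GS23 as measured - lighter than GS22, which is wrong

gs23_new = dmax_rgb                      # replace GS23 outright with the flare-free reading
gs22_new = [2.0 * c for c in gs23_new]   # roughly one stop above the new GS23, channel by channel

print("GS23:", gs23_meas, "->", gs23_new)
print("GS22:", gs22_meas, "->", gs22_new)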

Quote
Explanation in Figures
We might have a situation where Kodak provides an L* number measured in a small central area, but the scanner/profiler combination profiles a different area, which, because of flare and the surround effect (inbuilt and from the scanner), causes the two values to be reconcilable for the target itself, but not reconcilable when applied to other images.

I'd better explain with figures (made up, just for explanation), which refer to three patches, the darkest two of which show an unexpected (but real) reversal in density, as do GS22 and GS23, caused by flare and surround problems. The first figure is an assumed scanned value of L*, the second is the target value. The figure in brackets is what an ideal profiler would do to the scanned value.

10.1  ->  9.3  (sent darker by 0.8 )
 3.6  ->  1.1  (sent darker by 2.5)
 3.7  ->  0.5  (sent darker by 3.2)

The IT8 profile, when applied to the scanned target, works perfectly. The image on screen looks just like the slide. The profiler has accommodated the flare, the scanner errors, the surround problem. Congratulations all around.

But something comes along to spoil the fun: a real-life slide. It also has L* areas of 10.1, 3.7 and 3.6, but they have not been compromised by flare or surround problems, just slight scanning errors. i.e. they scan at close to these values. Let's assume they scan at exactly these values for the sake of explanation. They are a part of a forest scene in shadow, and 10.1 is brighter than 3.7, which is ever so slightly brighter than 3.6. What does the profile do to them? Well, it transforms them by its inbuilt routines, the same as in the example above:

10.1  ->  9.3
3.7  ->  0.5
3.6  ->  1.1

The shadows of the real scene now have problems. The 10.1 shadow is close to what it should be, but the other two shadow details have been reversed in their brightness and made blacker. The profile, by correcting the IT8 target for flare, surround, and scanner errors, will apply those corrections to all other slides even when the first two problems don't exist.

I think this is a good explanation of the problem. The input data is not monotonic - due to flare, you could get the same Red value from either patch GS20 or GS23, for example. The profiler can't allow this, so has to do some smoothing in the curve fitting process to produce a 1:1 mapping of the scanner RGBs to XYZ or Lab. How well the profiler does this smoothing is the key. It might well be that introducing the gamma preconditioning to the data gives the Argyll algorithms an easier job of curve-fitting.
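A toy illustration of what that forced 1:1 mapping does to the shadow values (Python, using a simple pool-adjacent-violators smoothing, which is not necessarily what Argyll does):

import numpy as np

def pav_increasing(y):
    # pool-adjacent-violators: the smallest change that makes y non-decreasing
    y = list(map(float, y))
    w = [1.0] * len(y)
    i = 0
    while i < len(y) - 1:
        if y[i] > y[i + 1]:                           # violation: pool the pair
            pooled = (w[i] * y[i] + w[i + 1] * y[i + 1]) / (w[i] + w[i + 1])
            y[i:i + 2] = [pooled]
            w[i:i + 2] = [w[i] + w[i + 1]]
            i = max(i - 1, 0)
        else:
            i += 1
    out = []
    for val, wt in zip(y, w):                          # expand pooled blocks back out
        out += [val] * int(wt)
    return np.array(out)

# scanned L* of three patches vs. the target (Q60) L*, using the made-up
# figures from the post above; GS23 scans lighter than GS22
scan   = np.array([3.6, 3.7, 10.1])
target = np.array([1.1, 0.5, 9.3])

print(dict(zip(scan.tolist(), pav_increasing(target).tolist())))
# {3.6: 0.8, 3.7: 0.8, 10.1: 9.3} - distinct shadow tones in a real slide that
# happen to scan at 3.6 and 3.7 now land on the same, darker value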

Quote
Summary
Colour correction by profiling, in the presence of flare and surround problems, introduces errors into the shadows, errors that are not present in an uncorrected scan. The errors can be minimized by averaging only the central area of each patch (or by replacing the whole patch with a colour value taken from the centre area), and by faking GS23 to be more in line with what it should be (0.51).

I did some tests with Argyll by changing the size of the measurement area on the patches. I did this by editing the BOX_SHRINK parameter in the it8.cht file. Coca uses the same it8.cht file, which you can find and change in the Coca/Argyll/ref directory if you're feeling adventurous.

SAMPLE_ID   XYZ_X   XYZ_Y   XYZ_Z   RGB_R   RGB_G   RGB_B

BOX_SHRINK 3.5
GS20   0.50000   0.47000   0.34000   0.79935   0.50474   0.56849   
GS21   0.33000   0.31000   0.22000   0.74026   0.42002   0.42761   
GS22   0.13000   0.12000   0.10000   0.77738   0.34446   0.30561   
GS23   0.06000   0.06000   0.08000   0.79995   0.37243   0.32470

BOX_SHRINK 7.5
GS20   0.50000   0.47000   0.34000   0.78572   0.49316   0.55793      
GS21   0.33000   0.31000   0.22000   0.72382   0.41017   0.41799      
GS22   0.13000   0.12000   0.10000   0.73651   0.32489   0.28964      
GS23   0.06000   0.06000   0.08000   0.75358   0.34378   0.29991

BOX_SHRINK 12.0
GS20   0.50000   0.47000   0.34000   0.77368   0.48801   0.55766   
GS21   0.33000   0.31000   0.22000   0.71336   0.40633   0.41275   
GS22   0.13000   0.12000   0.10000   0.71942   0.31878   0.28413   
GS23   0.06000   0.06000   0.08000   0.73492   0.33332   0.29179   
You can see that as BOX_SHRINK (the amount to exclude around the edges) increases, the average RGB numbers get smaller. BOX_SHRINK is in units relative to the patch size; for your targets the measurement areas are:

Full patch 46 X 92 pixels
Shrink 3.5 34 X 83 pixels (Coca default)
Shrink 7.5 20 X 68 pixels
Shrink 12 4 X 51 pixels

Unfortunately, although some flare appears to be excluded, reducing the size of the measurement area this way still doesn't prevent GS23 from being lighter than GS22.

Scanning an unexposed slide to determine GS23/DMAX could be very helpful. Do you have one?
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #89 on: June 02, 2011, 12:37:57 am »

I did some tests with Argyll by changing the size of the measurement area on the patches. I did this by editing the BOX_SHRINK parameter in the it8.cht file. Coca uses the same it8.cht file, which you can find and change it in the Coca/Argyll/ref directory if you're feeling adventurous… Unfortunately, although it seems some flare appears to be excluded, reducing the size of the measurement area this way still doesn't prevent GS23 from being lighter than GS22.
If I can find the file, I'll certainly change it. Where do you find the patch size that Coca uses? Or is it the same as what I would measure in Photoshop? My graphs in a previous post indicate you have to go down to a 40% patch size before the RGB values stop decreasing. That means a box-shrink of about 28 to fully remove the effect of the gray surround and flare. But it doesn't work for GS23. The only way to fix GS23 would be to replace it with a new artificial patch of suitable value.

What I have done is to manually replace the whole of GS15-22 with the average of a centre 40% patch from each, and completely replace GS23. When I finish testing I'll upload the "improved" IT8 target. Initial testing has shown that it has no effect on anything other than the last few GS patches – looking good!

What would be the best way to generate a fake GS23? I used your Lab generator to generate the actual IT8 value. A better way, I assume, would be to work out the ratio between GS22 and GS23 and do it that way. Is the following technique valid?

Given the L* values of:
GS22 (IT8) = 1.07
GS23 (IT8) = 0.51

These are very close to 2:1, i.e. one exposure stop.

1. From a Gamma 1.0 scan, select 40% of GS22, and average it.
2. Open Colour Picker and set the foreground colour to the average of the 40% patch. Step Backward to undo the averaging on GS22.
3. Make a 100% selection of GS23 and Edit > Fill with the foreground colour.
4. Apply an Exposure correction of -1.0

Result: an artificial GS23 that should correlate well with its closest neighbour.
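Numerically, that procedure boils down to a halving, since an Exposure of -1.0 on a linear (gamma 1.0) file is just a factor of two. A sketch with made-up numbers (Python, not the Photoshop steps themselves):

import numpy as np

# average of the central 40% of GS22 in the gamma 1.0 scan (made-up 16-bit values)
gs22_center = np.array([210.0, 92.0, 78.0])

# GS22 -> GS23 is close to one stop (L* 1.07 vs 0.51), and one stop down in a
# linear scan is simply half the value in every channel
gs23_fake = 0.5 * gs22_center

print(gs23_fake)      # fill the whole GS23 patch with this colour before profiling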

Quote
Scanning an unexposed slide to determine GS23/DMAX could be very helpful. Do you have one?
I'll have a look through my slides. Must be a piece of unexposed Kodachrome somewhere. If not, I know someone in Hobart who has a roll of unexposed Kodachrome. I'll ask him to snip a piece off the end. That would be suitable, wouldn't it?

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #90 on: June 02, 2011, 02:27:54 am »

If I can find the file, I'll certainly change it. Where do you find the patch size that Coca uses? Or is it the same as what I would measure in Photoshop? My graphs in a previous post, indicate you have to go down to 40% patch size before the RGB values stop decreasing. That means a box-shrink of about 28 to fully remove the effect of the gray surround and flare.

I measured it from a diagnostic image that Argyll makes. (I posted an example before.) A BOX_SHRINK of 7.5 will give you a 20 x 68 pixel measuring area from the full 46 x 92 patch, about 32%.

Quote
But it doesn't work for GS23. The only way to fix GS23 would be to replace it with a new artificial patch of suitable value.

I'm trying to replace the GS23 patch with DMAX scanned from unexposed KC. I scanned some unexposed K200 and K64. Haven't found any unexposed K25 yet, but I have two frames that are severely underexposed (uxK25a and b in the following table). Here are the Polaroid scanner 16-bit RGB numbers I measured in PS:

K64       53   10   20
K200      20   2   8
uxK25a   53   16   44
uxK25b   44   17   36
average   42.5   11.25   27

GS23   98   58   78 (from Q60 scan)

I'll have to study this a little more - not sure which to use. I'm trying to splice a patch that has essentially no flare into a series of patches that do have flare.

As you discuss below, I guess you can say that GS23 should be a little less than half the GS22 RGBs. With Red highest and Green lowest, I think it's important to preserve the R:G:B ratio to prevent casts in the shadows.

Notice that despite having the same dyes, the different types of Kodachrome have a different DMAX. You can see this also in the curves in the Kodak publications. I wonder which emulsion the Q60 is on? I might take mine out of its mount to take a look. It puzzles me that the Kodak Q60 is designed to be used with all Kodachrome types, but its DMAX is only going to match one of them.

Quote
What I have done is to manually replace the whole of GS15-22 with the average of a centre 40% patch from each, and completely replace GS23. When I finish testing I'll upload the "improved" IT8 target. Initial testing has shown that it has no effect on anything other than the last few GS patches – looking good!

What would be the best way to generate a fake GS23? I used your Lab generator to generate the actual IT8 value. A better way, I assume, would be to work out the ratio between GS22 and GS23 and do it that way. Is the following technique valid?

Given the L* values of:
GS22 (IT8) = 1.07
GS23 (IT8) = 0.51

These are very close to 2:1, i.e. one exposure stop.

1. From a Gamma 1.0 scan, select 40% of GS22, and average it.
2. Open Colour Picker and set the foreground colour to the average of the 40% patch. Step Backward to undo the averaging on GS22.
3. Make a 100% selection of GS23 and Edit > Fill with the foreground colour.
4. Apply an Exposure correction of -1.0

Result: an artificial GS23 that should correlate well with its closest neighbour.

Looks like a good plan. The only problem I see is working in Lab - the L* values are easy to interpolate but what do you do about the a* and b* values?

Quote

I'll have a look through my slides. Must be a piece of unexposed Kodachrome somewhere. If not, I know someone in Hobart who has a roll of unexposed Kodachrome. I'll ask him to snip a piece off the end. That would be suitable, wouldn't it?

Unexposed but developed I hope!
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #91 on: June 02, 2011, 07:16:06 am »

The only problem I see is working in Lab - the L* values are easy to interpolate but what do you do about the a* and b* values?

Well, since "a" and "b" and I don't get on, I'm going to ignore them! More thought is needed, but I think that as GS23 is at the end of the range, it may as well be faked as the exact IT8 value. Since all its GS neighbours scan higher than they should anyway (because of flare and surround), a lower GS23 than would otherwise appear on the slide will have the effect of pulling the others into line when the profiler tries to regress them.

Some initial results from testing profiles built from my modified IT8 target (I modified GS15-22 to reduce flare and surround effects, and faked GS23).

Conditions
1. Scanned the IT8 target at gamma 1.0, 16-bit, 4000dpi.
2. Modified GS15-22 as previously described, and replaced GS23 with its expected value.
3. Applied a linear sRGB profile. Saved it as sRGB 1.0.
4. Converted sRGB 1.0 to a gamma 1.6 sRGB profile. Saved it as sRGB 1.6.
5. Converted sRGB 1.0 to a true sRGB profile. Saved it as sRGB 2.2. (Steps 3-5 are sketched in code after this list.)
6. Downsampled the three versions to 1000 dpi and profiled each, using Lab and S+M.
7. Applied the new profiles to the relevant image.
8. For each image, compared Lab and S+M and chose the best, then compared the three images against each other and against my earlier optimum of Gamma 1.8 (with original profile).
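In case it helps anyone follow steps 3-5: converting the same linear data into the three versions amounts to re-encoding the pixel values with different transfer curves. A rough sketch (Python; my own illustration of the encodings, not Photoshop's actual conversion code):

import numpy as np

def srgb_encode(v):
    # the piecewise sRGB transfer curve (the "true sRGB" case, step 5)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

lin = np.array([0.001, 0.01, 0.1, 0.5, 1.0])   # linear values, as in sRGB 1.0 (step 3)
g16 = lin ** (1 / 1.6)                         # gamma 1.6 encoding (step 4)
srgb = srgb_encode(lin)                        # sRGB encoding (step 5)

for row in zip(lin, g16, srgb):
    print(["%.4f" % v for v in row])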


Results
Very very close, using Ref 18 as the test image. Much closer than previously. Gammas 1.0 and 1.6 have a very slight wash over them in the darkest areas compared to sRGB and Gamma 1.8, the latter having the densest blacks (very slightly denser than sRGB). However, the dense blacks of 1.8 come about by complete saturation in the red channel, whereas the sRGB blacks are about 1 RGB unit away from saturation. 1.8 also has more contrast in the dark areas, but in some way it appears unnatural, as if the profile has gone too far in pushing blacks downwards. And since I don't like saturation, this relegates 1.8 to being behind sRGB.

In addition, there is a reversal of effect for S+M and Lab profiles. For Gamma 1.0, S+M gave the best result with slightly less haze than Lab. For G1.6 and sRGB, the reverse applies – Lab gave the slightly better result.

Using Reference 18 and applying to it an Lab profile generated from a modified IT8 target, sRGB comes out on top. But there is not much in it (just the slightest extra haze in the darkest areas of the other contenders), and the result may change when I apply the profiles to my other reference images.


Effect of Modified IT8 target on L*
Unless I've made an error somewhere, the results of using a modified IT8 target show an improvement over an unmodified target. Attached are the L* values for the darker GS patches for Gamma 1.0, 1.6 and 2.2 (sRGB). The figures confirm what I see under close visual inspection of the dark areas of Ref 18: that for the modified IT8 target, preconditioning to sRGB gives the best results; and for the unmodified target, Gamma 1.6 gives the best result. But there is not much difference, and all the gammas give very acceptable results, with any difference probably being swamped in the editing.


Gamut Warnings for Gamma 1.0, 1.6, and sRGB
When using unmodified profiles, gamut warnings are displayed for the three images, with approximately equal distribution across the three. These warnings disappear when converted to sRGB. For modified profiles, the warnings show a slight increase for Gamma 1.6 and sRGB as compared to Gamma 1.0, but again they disappear when converted.


QUES: Does this mean that sRGB has a wider gamut than the generated profile?

Also, when viewing Ref 18, I noticed that the funny-looking areas that occur at low magnification exactly matched the out-of-gamut areas. I thought it was a PS problem, but it might be because the colours were out of gamut.

guyburns

Re: Generating a Kodachrome profile from an IT8 target
« Reply #92 on: June 02, 2011, 07:37:59 am »

Regarding Box Shrink – I found it so I'll give it a go. Windows said it couldn't open the text file, so I opened it with Notepad. Since I don't like playing around with such things unless I am reasonably sure of what I am doing:

Q1: Will Coca still recognise the data file if I save it from Notepad?

Q2: Is this IT8.cht file newly created each time Coca is run, or is it a permanent fixture which Coca originally installs and then just looks at?

Q3: What's the mathematical relationship between Box Shrink and the size of the box?

Q4: If I change the image resolution, I assume the Box size changes, so I could end up in strife if I have a Box Shrink that is too big. Where is the Box size stored so that I can check the size?

Thinking about it – and there'll be more questions for sure – it might be easier if I just replace the suspect patches with a 40% average. That, I can do, and I know it works.




crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #93 on: June 02, 2011, 08:57:45 am »

Regarding Box Shrink – I found it so I'll give it a go. Windows said it couldn't open the text file, so I opened it with Notepad. Since I don't like playing around with such things unless I am reasonably sure of what I am doing:

Q1: Will Coca still recognise the data file if I save it from Notepad?

Yes. With Notepad, if you "Save" after changing the BOX_SHRINK number, it should be fine.

Beware that if you "Save As" instead of "Save", Notepad will fight you and try to append a ".txt" suffix to the file name, which is no good. If you do a "Save As", to save a backup version for example, change the "save as type" to "all files" and it will not append ".txt" to the end of the name.

Quote
Q2: Is this IT8.cht file newly created each time Coca is run, or is it a permanent fixture which Coca originally installs and then just looks at?

It's permanent, and will only change if you edit it.

Quote
Q3: What's the mathematical relationship between Box Shrink and the size of the box?

I was afraid you were going to ask that. It's not so clear to me. There are other lines in the IT8.CHT file that define the BOX size. The gory details are in the Argyll documentation for the CHT File Format:

Quote
The physical units used for boxes and edge lists are arbitrary units (i.e. pixels as generated by scanin -g, but could be mm, inches etc. if created  some other way), the only requirement is that the sample box definitions need to agree with the X/YLIST definitions. Typically if a scanned chart is used to build the reference, the units will be pixels of the scanned chart.

The BOXES keyword introduces the list of diagnostic and sample boxes. The integer following this keyword must be the total number of diagnostic and sample boxes, but not including any fiducial marks. The lines following the BOXES keyword must then contain the fiducial mark, diagnostic or sample box definitions. Each box definition line consists of 11 space separated parameters, and can generate a grid of sample or diagnostic boxes:

    kl lxs lxe lys lye w h xo yo xi yi

-SNIP-

 w, h are the width and height of each box in the array.

-SNIP-

The keyword BOX_SHRINK marks the definition of how much each sample box should be shrunk on each edge before sampling the pixel values. This allows the sample boxes to be defined at their edges, while allowing a safety margin for the pixels actually sampled. The units are the same arbitrary units used for the sample box definitions.

Here is the beginning of the IT8.CHT where this stuff is specified:

BOXES 290
  F _ _ 1 1  616.0 1.5  615.5 358  1 358.5
  D ALL ALL _ _ 615 409 1 1 0 0
  D MARK MARK _ _ 14 14 1 1 0 0
  Y 01 22 A L 25.625 25.625 26.625 26.625 25.625 25.625
  X GS00 GS23 _ _ 25.625 51.25 0.0 358.75 25.625 0.0

BOX_SHRINK 3.5

I think the x and y dimensions of the GS patches are defined on the line beginning with "X", as 25.625 x 51.25 units. The number for BOX_SHRINK would be relative to these. I guess it's somehow scaled to the actual pixel dimensions of the scan. Somehow a BOX_SHRINK of 3.5 arbitrary units results in a 46 x 92 pixel patch being shrunk to about 34 x 83, if I've measured correctly.
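If the shrink really is applied in those chart units and then scaled to the scan, the arithmetic would go something like this (a guess, not a statement of how scanin actually works):

# patch size in .cht units (from the "X GS00 GS23 ..." line) and in pixels of the scan
w_units, h_units = 25.625, 51.25
w_px, h_px = 46, 92

scale = w_px / w_units                  # ~1.8 pixels per chart unit (assumed uniform)

def sampled_box(box_shrink):
    # BOX_SHRINK units are removed from every edge
    return (w_px - 2 * box_shrink * scale, h_px - 2 * box_shrink * scale)

for bs in (3.5, 7.5, 12.0):
    print(bs, [round(v) for v in sampled_box(bs)])
# gives roughly 33 x 79, 19 x 65 and 3 x 49 - close to, but not exactly, the
# areas I measured above, so the real scaling probably differs in the details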

I'm sorry but my eyes have completely glazed over at this point. What was that you said about the Argyll documentation? ???

Quote

Q4: If I change the image resolution, I assume the Box size changes, so I could end up in strife if I have a Box Shrink that is too big.

Later I will resize a target scan and check the scanin diagnostic image to see what happens.

Quote
Where is the Box size stored so that I can check the size?

See above, I think it's in the line starting with "X".

Quote
Thinking about it – and there'll be more questions for sure – it might be easier if I just replace the sus patches with a 40% average. That, I can do, and I know it works.

It would be useful to compare doing it both ways. If the automated results are good enough, it could save a lot of trouble for anyone who wants to duplicate your results.
Cliff

crames
Re: Generating a Kodachrome profile from an IT8 target
« Reply #94 on: June 02, 2011, 09:04:50 am »

Wow, you've been busy! It will take me a while to digest this. Can you package up some of your new profiles so that I can take a look?
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #95 on: June 02, 2011, 05:32:45 pm »

You wouldn't think something as simple as scaling could be made so difficult. If what you said about Box Shrink (BS) and the patch sizes after shrinking is correct, BS and F, the size of the box after shrinking as a fraction of the original size (FH horizontally, FV vertically), are related by:

BS = 12.81 (1-FH)

I want to scale the boxes to 40%, hence BSH = 12.8 (1-0.4) = 7.7.

The problem is, the vertical scaling is different, given by:

BS = 25.63 (1-FV)

The difference is caused by having a constant amount subtracted from different-sized edges. A problem arises when trying to make the box small enough (40%) to get away from the "surround" flare at top and bottom, the major component of the flare. You end up with a tall, narrow box given by this relationship:

FH = 2FV - 1.0

It is not possible using Box Shrink to scale vertically to 0.4, because the lower limit (when the box turns into a vertical line with no horizontal dimension) is 0.5. So, because of the way Argyll implements scaling, a figure higher than 50% has to be chosen, thus incurring additional L* errors. Therefore let's choose FH = 0.4, which gives FV = 0.7. My hand-drawn graph of a few posts back shows that for GS22 (GS23 will be faked) for a 70% patch, the error will be between these limits:

64% Patch … 0.1
80% patch … 0.2

So the optimum Box Shrink would appear to be 25.63 (1 - 0.7 ) = 7.7. This keeps the sample box away from the top and bottom surround, but still has a good horizontal spread to sample as many points as possible. But it won't be as good as using a true 40% box. On the other hand, the scaling will apply to all patches (and not just a few of the GS patches as I did manually), so overall the problem of flare and contamination may improve.

Note: the dimensions of each patch (25.625 x 51.25) appear to be the number of pixels when the image is scanned at 600 dpi, the figure that the Argyll documentation recommends. At 2000 dpi, I measured the dimensions as 85 x 182.
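Putting those relationships into a few lines of code, just to double-check the arithmetic (assuming, as Cliff guessed, that the shrink is taken off every edge in chart units):

W, H = 25.625, 51.25                    # patch size in .cht units, from the "X" line

def fractions(bs):
    # fraction of each dimension left after shrinking bs units off every edge
    fh = (W - 2 * bs) / W               # horizontal: BS = 12.81 (1 - FH)
    fv = (H - 2 * bs) / H               # vertical:   BS = 25.63 (1 - FV)
    return fh, fv

for bs in (3.5, 5.1, 7.7):
    print(bs, [round(f, 2) for f in fractions(bs)])
# 3.5 -> 0.73, 0.86;  5.1 -> 0.60, 0.80;  7.7 -> 0.40, 0.70
# which confirms FH = 2 FV - 1, and that a 0.4 vertical fraction is unreachable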
« Last Edit: June 02, 2011, 10:55:26 pm by guyburns »

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #96 on: June 02, 2011, 07:58:20 pm »

For a 64% vertical patch, the horizontal scale will be 0.64-0.5 = 0.14. Too narrow. For 80%, it will be 0.3, which should be okay. So the optimum Box Shrink would appear to be 25.63 (1 - 0.8 ) = 5.1, say 5. This minimizes the L* error caused by the surround, and maximises the horizontal box dimension. But it won't be as good as using a true 40% box. On the other hand, the scaling will apply to all patches (and not just a few of the GS patches as I did manually), so overall the problem of flare and contamination may improve.

I ran your "Kodachrome IT8 Gamma 1.0.tif" scan through scanin with BOX_SHRINK set to 5.1. Here are the Argyll IT8 measurement file and the diagnostic image so that you can see the measurement areas. The boxes are a little off center; I wonder if there is a way to fix that?

Maybe a change in the box dimensions can accomplish the vertical scaling you need?

There is a huge mailing list archive for Argyll that might have relevant info.
 
Cliff

guyburns
Re: Generating a Kodachrome profile from an IT8 target
« Reply #97 on: June 02, 2011, 11:03:03 pm »

I made an error in the formula relating FH and FV, now corrected, which means the optimum BS is 7.7 (not 5.5). Not much difference.

GS Boxes off centre? How come all the others aren't off centre? Maybe it's best to keep BS at 3.5 if Argyll does things like shift boxes around.

crames

Re: Generating a Kodachrome profile from an IT8 target
« Reply #98 on: June 03, 2011, 03:39:37 am »

I made an error in the formula relating FH and FV, now corrected, which means the optimum BS is 7.7 (not 5.5). Not much difference.

GS Boxes off centre? How come all the others aren't off centre? Maybe it's best to keep BS at 3.5 if Argyll does things like shift boxes around.

I updated the files. Looks better.

diagnostic image 7.7
measurements 7.7
Cliff

crames
Re: Generating a Kodachrome profile from an IT8 target
« Reply #99 on: June 03, 2011, 06:30:35 pm »

There are a couple of interesting threads on the Argyll Mailing List about making scanner profiles.

Mentioned are two options for the colprof command that might be worth trying: -u and -r.

About the -u option:
http://www.freelists.org/post/argyllcms/Number-of-patches-well-behaved-printer,22

Quote
The next release will by default add some extrapolation patches up to the device min/max values along the neutral axis when -u is used with input profiles, to overcome the sometimes unexpected default extrapolation behaviour. You can always override this with extra patches though, if you don't like what it does.

cheers,
        Graeme Gill.

from colprof doc:

Quote
-u: cLUT style input profiles will normally be created such that the white point of the test chart, will be mapped to perfect white when used with any of the non-absolute colorimetric intents. This is the expected behaviour for input profiles. If such a profile is then used with a sample that has a lighter color than the original test chart, the profile will clip the value, since it cannot be represented in the lut table. Using the -u flag causes the lut based input profile to be constructed so that the lut table contains absolute color values, and the white of the test chart will map to its absolute value, and any values whiter than that, will not be clipped by the profile, with values outside the range of the test chart being extrapolated. The profile effectively operates in an absolute intent mode,  irrespective of what intent is selected when it is used. This flag can be useful when an input profile is needed for using a scanner as a "poor mans" colorimeter, or if the white point of the test chart doesn't represent the white points of media that will be used in practice, and that white point adjustment will be done individually in some downstream application.

-un: By default a cLUT input profile with the -u flag set will extrapolate values beyond the test chart white and black points, and to improve the plausibility of the extrapolation, a special matrix model will be created that is used to add a perfect device white and perfect device black test point to the set of test patches.  Selecting -un disables the addition of these extra extrapolated white and black patches.

I'm curious to see how -u will extrapolate down into the dark regions.

About -r option:
http://www.freelists.org/post/argyllcms/Verifying-profile-quality-of-LUTbased-scanner-and-printer-profiles,1

Quote
[argyllcms] Re: Verifying profile quality of LUT-based scanner and printer profiles
With the currently available release of Argyll, it is probably advisable to specify a fairly high level of smoothing, by using -r 1.0 or so. The next version will have better defaults in this regard, and shouldn't usually need a -r parameter. This should result in a smoother profile with a higher self fit dE.


    But is there any way to verify the 'smoothness' of the profile? In particular, I'm thinking about the discontinuities that might exist in the LUT tables.


The interpolation algorithm doesn't really allow discontinuities, but it can have "overshoot" or "ringing".

The -r parameter specifies the average deviation of device+instrument readings from the perfect, noiseless values as a percentage. Knowing the uncertainty in the reproduction and test patch reading can allow the profiling process to be optimized in determining the behaviour of the underlying system. The lower the uncertainty, the more each individual test reading can be relied on to infer the underlying systems color behaviour at that point in the device space. Conversely, the higher the uncertainty, the less the individual readings can be relied upon, and the more the collective response will have to be used. In effect, the higher the uncertainty, the more the input test patch values will be smoothed in determining the devices response. If the perfect, noiseless test patch values had a uniformly distributed error of +/- 1.0% added to them, then this would be an average deviation of 0.5%. If the perfect, noiseless test patch values had a normally distributed  error with a standard deviation of 1% added to them, then this would correspond to an average deviation of 0.564%. For a lower quality instrument (less than say a Gretag Spectrolino or Xrite DTP41), or a more variable device (such as a xerographic print engine, rather than a good quality inkjet), then you might be advised to increase the -r parameter above its default value (double or perhaps 4x would be good starting values.)

Smoothing by -r might help get rid of the reversal at GS21-23?

I'm going to give these options a try this weekend.
« Last Edit: June 03, 2011, 06:34:11 pm by crames »
Cliff