... extracting the truly raw RGB values read from the scanner and using them to build a profile. Doing so often produces better results than starting from gamma-corrected values.
Profiles are, by their nature, more suited to performing color adjustments than to the gross tonal shifts required to shove linear-gamma input into gamma 2.2 output.
By the way, does Nikon Scan include an option to use the scanner’s native color space (i.e., no CM) and include a corresponding profile that you can assign in PS? Just curious.
There does not appear to be a profile for Scanner RGB, but I'll hunt around and see what I can find.
If what I've read is correct, maybe Coca was seeing that incorrect sRGB tag and it caused your faulty profile. I'm reasonably certain that Coca ignores embedded profiles because, as a test, I generated two profiles from a scanned target, one with and one without an embedded profile, and the Coca profiles came back the same (as far as I could tell by applying them to an image).
Now, when you open an IT-8 scan in PS and apply the profile you created, do you get a scan without major color casts?
Given that all seven had no perceptible difference, I think I'm safe in assuming that my profiling of this scanner is about as good as it is likely to get.
Bug in NikonScan?
Finally, when I turn CM on and select Scanner RGB in NikonScan, there is no profile applied when I open the image in Photoshop, both on the Mac and PC. Must be a bug in NikonScan. For all other colour-managed spaces, the image arrives in PS with the appropriate profile embedded.
A Question
I've been trying to install profiles onto the PC I've been given. I've tried right-clicking then selecting Install, but Windows comes back with an error message "This is not a valid profile". So I manually put them in this folder: \Windows\system32\spool\drivers\color, and then they can be applied. Any idea why right-click > Install won't work?
My Velvia 50 IT-8 slide is also different from your Kodachrome IT-8 slide. So, in my comparison I focused on the grey scales.
I expect that few if any of the grey scale patches are perfectly grey, with each of the RGB values being equal, but I assume that they should be close.
Kodachrome GS5: 69.79, -8.84, 3.65
Velvia GS5: 65.89, -1.54, 2.07
When I typed those numbers into PS ...
GS5 COLOUR NUMBERS
The figures you quoted for GS5 colour numbers for my Kodachrome scan and your Velvia scan may not be truly representative of the actual colour numbers of Patch 5. Two reasons:
1. There is quite a bit of variation within each patch. If you move around inside a patch, the colour numbers change significantly. This is best seen by looking at the histogram of a patch, which will show a spread of values.
KODACHROME BLUE
Hutchcolor may be correct about grain and profiles, but he's wrong about "Kodachrome blue":
On most scanners, Kodachrome® transparencies produce a strong blue or blue-magenta cast, because the yellow dye used in Kodachrome film emulsions appears weaker through typical scanner filters than it does to the human eye.
Wrong. RWG Hunt makes it clear (p229, Reproduction of Colour) why all scanners will see Kodachrome with a blue cast ...
...
Even people who should know their stuff appear ill-informed when it comes to scanning Kodachrome. I'm not trying to belittle their comments; I'm just trying to understand the scanning of Kodachrome (and slides in general) and how it is best done, and misinformation like the above doesn't help.
I have just come across an article about scanning and gamma: http://photo.bragit.com/scanning/it8tests.shtml
How do you type these numbers into PS? I'm using PS CS4, 64 bit, and it allows me to enter whole numbers only.
I disagree with your criticism of Hutchcolor. It appears that your criticism is because he said "most scanners", not all scanners. That difference seems inconsequential. Moreover, unless someone has tested every single model of scanner ever produced, they'd be safer in saying "most scanners", not all scanners. So, IMO, Hutchcolor's use of the words "most scanners" is a positive sign, and certainly doesn't warrant criticism.
When he describes setting up his scanner and "Optimizing lamp lightness", he wrote: "The exact lamp setting for my unit is: +8 units overall, plus an additional +6 and +4 units respectively for green and blue." This approach is contrary to the usual recommendation and contrary to my practice. Is it wise to make color corrections before profiling? I'll need to read his article more carefully, but a lot of what I've glanced at raises questions.
Crames – just a quick reply. From memory, the scans from which I generated the profiles were 2000 dpi 16-bit. The scans I uploaded were probably reduced in size to make them smaller. I'll upload the full-resolution scans today.
P.S. I finally finished Hunt. Did you see my reply to the thread where you asked about the effect of dim surrounds when editing in dark surrounds (or vice versa)?
I have uploaded the 16-bit scans to a folder called Kodachrome IT8 Scans 16-bit, 57 MB, which can be accessed here: http://www.mediafire.com/?thogddgfozxi2
I was hoping to see the ref 05 scans in 16 bit, but the IT8 Scans 16-bit are definitely much smoother.
I'll upload some of the reference scans in 16-bit if you want to play around.
Try as I might, I cannot get a Gamma 1.0 scan of a high-contrast slide to look as good in the shadows as a Gamma 1.8 scan. And in my experience, it's the shadow area that is very important for a slide to look good on screen. Whether it's my scanner or Argyll I don't know. Will post the results for all to see as soon as I am confident of the results. There's no particular reason I chose gamma 1.8. It was just one of a range of values I was experimenting with.
I've been wondering if there is an optimum gamma (other than 1), so I've been playing around with some equations to work out how closely a gamma curve matches the L* curve. A close match would mean that when profiling the scan in Lab coordinates, the errors would be minimized. I have attached a plot showing the gamma 2.2 and the L* function, and a third curve showing the errors between the two. By plugging in a range of gamma values, it turns out that the optimum match between gamma and L* occurs around gamma 2.5.
There is validity to that approach, but I'm not sure how it applies for different kinds of Argyll/Coca-generated profiles. It might not help a measurement-based 3D look-up-table type of profile, but it could help a matrix-shaper profile. See the discussion in the Hardeberg thesis, Acquisition and reproduction of colour images (http://pastel.enst.fr/26/00/Jon_HARDEBERG.pdf), Chapter 3 (3.2.3.5), where he reduced perceptual error by using an exponent of 1/3 on the scanner values.
Cliff: if you keep giving me articles to read, I'll never finish testing.
The Hardeberg thesis is very informative. He modified the linear RGB values from the scanner by applying an exponent of 0.33, to approximate the relationship between L* and Y/Yn, which has the same exponent. But a closer correlation comes at ~0.4 (gamma ≈ 2.5), if the curves I have generated are correct (see attachment). Maybe there are good reasons why he chose 3.0 instead of 2.5.
If you go to the trouble of mathematically finding the "best" gamma, you will find that its value depends on your definition of "best."
If best means minimizing the largest error, then the best gamma is 2.1723.
If best means minimizing the RMS (root mean square) error, then the best gamma is 2.3243.
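For anyone wanting to reproduce this kind of calculation, here is a rough Python sketch (not necessarily the method used above) that searches for the pure-gamma curve closest to CIE L* under both definitions of "best". The sample spacing and the decision to include the linear toe of L* are my assumptions, so the optima it finds may differ slightly from the figures quoted.

[code]
# Sketch: search for the pure-gamma curve that best matches CIE L*
# (both normalised to 0..1), under two definitions of "best".
import numpy as np
from scipy.optimize import minimize_scalar

EPS, KAPPA = 216 / 24389, 24389 / 27          # exact CIE constants

def lstar(y):
    """CIE L* (0..100) from relative luminance Y/Yn (0..1)."""
    y = np.asarray(y, dtype=float)
    return np.where(y > EPS, 116 * np.cbrt(y) - 16, KAPPA * y)

y = np.linspace(0.0, 1.0, 100001)
target = lstar(y) / 100.0                      # normalise L* to 0..1

def max_err(g):
    return np.max(np.abs(y ** (1.0 / g) - target))

def rms_err(g):
    return np.sqrt(np.mean((y ** (1.0 / g) - target) ** 2))

print(minimize_scalar(max_err, bounds=(1.5, 3.5), method="bounded").x)  # minimax-best gamma
print(minimize_scalar(rms_err, bounds=(1.5, 3.5), method="bounded").x)  # RMS-best gamma
[/code]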
For my own reference I have summarised Hardeberg's Chapter 3, in which he explains how he went about generating a profile from an IT8 target. Page references are to Hardeberg, but in what follows there are no quotations.
the XYZ colour space is very poorly correlated to visual colour differences
Another link to varying gamma while scanning a target: http://lprof.sourceforge.net/help/lprof-help.html. Apart from the hyperbole "huge loss of details in shadows", it seems as if the author knows his stuff.
Gamma: On most scanners you can select the gamma to be used for scanning the image. In general you should use a gamma between 2.2 and 3.0. A gamma of 2.2 has the additional benefit of being close to the sRGB gamma, and this means the uncorrected image will "look nice" on an "average" monitor. It is also near to perceptual gamma. Gamma 2.4 has the additional benefit of being closest to perceptual space, and this is a very good reason to use this value. Less than 2.2 (and of course the infamous 1.0) can generate huge loss of detail in shadows, only to give a slight bettering of highlights. Don't use this unless you are using 16 bits per sample, and even in such a case, don't do it unless you know what you are doing! Gammas around 2.4 are best for flat-bed scanners and film scanners with limited dynamic range. With high dynamic range film scanners values closer to 3.0 may be best. Hutch Color, for example, recommends a gamma of 2.8 for high dynamic range scanners. But for flat-bed scanners more than 2.4 (up to 3.0) loses some highlight detail with no gains in shadow detail.
Cliff: I like the Bruce Lindbloom site. Wonderful stuff. I can't imagine any colour site more authoritative than his. I was wondering how Bruce RGB got its name (it's one of the options in my NikonScan software), and I've also been wondering why there was a discontinuity in the L* function at L* = 8, which I noticed when I was plotting derivatives. As Bruce explains at length (http://www.brucelindbloom.com/index.html?LContinuity.html), it's because the figure of 903.3 (which I took from Hunt) should actually be 24389/27.
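To see that discontinuity numerically, here is a small sketch (assuming nothing beyond the two constants themselves) that evaluates both branches of L* at the junction Y/Yn = 216/24389, once with the rounded 903.3 and once with the exact 24389/27:

[code]
# Evaluate both branches of L* at the junction Y/Yn = 216/24389 (where the
# cube-root branch gives exactly L* = 8) for the rounded and exact constants.
eps = 216 / 24389

for kappa, label in ((903.3, "rounded 903.3"), (24389 / 27, "exact 24389/27")):
    upper = 116 * eps ** (1 / 3) - 16     # cube-root branch
    lower = kappa * eps                   # linear branch
    print(f"{label}: upper = {upper:.6f}, lower = {lower:.6f}, gap = {upper - lower:+.6f}")
[/code]

With 903.3 the two branches don't quite meet; with 24389/27 the gap vanishes.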
I'm also wondering (I do a lot of wondering) whether sRGB is a better fit to the L* function than a pure gamma, but Bruce's calculator doesn't allow me to plug in sRGB.
Talking about sRGB, since all my Kodachrome scans will only be viewed digitally (no printing envisaged) via BluRay at home or the local cinema, I'm assuming the best colour space for editing would be sRGB – so as not to run into the problem of out-of-gamut colours.
Lindbloom's color space is BetaRGB. BruceRGB is a different space, created by Bruce Fraser (http://www.brucefraserlegacy.com/).
There are too many Bruces in the colour world.
Attached is a comparison of sRGB and L*, using the exact specifications for each. Of all the gamma-type curves, sRGB has the best approximation of L* in the shadows.
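As a rough numerical check of the same comparison (a sketch using the published sRGB and CIE constants, not the data behind the attached plot), the shadow end looks like this:

[code]
# Compare normalised L*, the sRGB transfer curve and a pure gamma 2.2 curve
# at a few dark relative-luminance values.
def srgb_encode(lin):
    return 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055

def lstar_norm(y):
    return (116 * y ** (1 / 3) - 16) / 100 if y > 216 / 24389 else (24389 / 27) * y / 100

for y in (0.0005, 0.001, 0.002, 0.005, 0.01):
    print(f"Y = {y:.4f}: L*/100 = {lstar_norm(y):.4f}, "
          f"sRGB = {srgb_encode(y):.4f}, gamma 2.2 = {y ** (1 / 2.2):.4f}")
[/code]

The sRGB values sit much closer to L* than the pure power law does in this region, because of sRGB's linear segment near black.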
There is a small problem though - in order to use the profile you first have to apply the sRGB TRC to the scan. Maybe Argyll, with its almost infinite options, has a way to incorporate the preconditioning within the profile itself.
Wouldn't scanning into sRGB space achieve the same thing?
QUES 1: Where did all those multi-decimal numbers in your post come from? Data analysis from within LPROF?
It's good to see the improvement where I was hoping it would be – in the shadows – but I now think the reason my shadows had poor detail has less to do with the profiling and more to do with the density range of my scanner or the quality of my target. For Gamma 1, there is barely a difference between GS20-23 (i.e. those patches are virtually "blocked"), but there is a reasonable difference when scanned at Gamma 1.8 and above. I can't explain that, because I assumed that for the higher gammas the scanner would derive them from the Gamma 1 figures. These are the mean RGB figures, measured in a small area of each patch in Photoshop > Histogram, Gamma 1 first, then Gamma 1.8:
GS19 … 2.25, 18.19
GS20 … 1.43, 14.82
GS21 … 1.32, 13.44
GS22 … 1.33, 12.24
GS23 … 1.33, 12.42
What intrigues me is – for my scanner, the Gamma 1.8 figures are not derived directly from Gamma 1.0. If they were, GS22 would be higher than GS21.
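To make the check concrete, here is a quick sketch (assuming the scanner would use a plain 1/1.8 power law on 8-bit values, which may well not be how the firmware actually works) of what the Gamma 1.8 figures would look like if they were derived from the Gamma 1.0 figures above:

[code]
# Predicted 8-bit gamma 1.8 values if they were derived from the measured
# gamma 1.0 means by a plain power law (an assumption about the scanner).
gamma10 = {"GS19": 2.25, "GS20": 1.43, "GS21": 1.32, "GS22": 1.33, "GS23": 1.33}

for patch, v in gamma10.items():
    predicted = 255 * (v / 255) ** (1 / 1.8)
    print(f"{patch}: gamma 1.0 = {v:5.2f} -> predicted gamma 1.8 = {predicted:5.2f}")
[/code]

Comparing these predictions with the measured Gamma 1.8 column shows where the scanner departs from a simple derivation.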
This leads me to ask two more questions:
QUES 2: What scanner do you use?
QUES 3: What are the RGB values of GS20-23 patches for one of your IT8 scans using Gamma 1.0?
• For GS20-23 scanned with low gammas (1.0 -?), my scanning/profiling process is not accurately transforming the scanner RGB values into IT8 L* values. I suspect there is an optimum value of gamma for my setup of around 1.8. To test this, I'll have to scan the target at various gammas to see which gives the closest match.
That white streak between GS22 and 23 has been annoying me for a while. To fix the problem of the streak casting a flare on GS22 and 23, would there be any problem with working out the average L* in a representative small area in the centre of each, and then filling the whole of both patches with that average value?
It was your target scan that was used in the above post (http://www.luminous-landscape.com/forum/index.php?topic=54040.msg444279#msg444279) that showed error results for those patches. With the sRGB TRC, respectable DE errors of 2 or less are achievable in those patches, with your equipment. Are you still using the 8-bit target scans to make profiles?
Then I thought: Well, I can't be sure it's Argyll; the error might be occurring somewhere else in the entire scanning/profiling process. So I changed the wording.
And a note to myself and anyone else reading all previous posts: figures given for average RGB values may be in error by as much as 0.5 RGB units because of the incorrect way Photoshop calculates Histogram > Average – it uses 8-bits instead of the maximum available.
Now for a rant. Don't you just love the way PS reports 16-bit L* values as 0-32768 when L* should range from 0-100? Can you imagine someone wanting a more accurate percentage reading of some measurement, say, 67%, so they are told a 16-bit accurate figure is 21953%. Ridiculous. And another thing about 16-bit L* and Photoshop: the values are actually 15-bit, not 16-bit.
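For reference, converting between Photoshop's Lightness readout and the conventional scale is trivial; a tiny sketch, using the 32768 maximum quoted above:

[code]
# Convert Photoshop's "16-bit" Lightness readout (really 15-bit, 0..32768)
# to the conventional 0..100 L* scale, and back.
def ps_to_lstar(v):
    return v / 32768 * 100

def lstar_to_ps(lstar):
    return lstar / 100 * 32768

print(ps_to_lstar(21953))   # ~67.0, the example above
print(lstar_to_ps(67.0))    # ~21954
[/code]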
Q1: When I finally decide on a scanning method, would you be able to generate what you think is the best possible profile from my IT8 scan using LPROF, and then generate error reports for your profile and my profile? I want to compare LPROF and Coca/Argyll. I would upload the scan at 2000 dpi, 16-bit Tiff with embedded profile.
Q2. Any idea why the IT8 designers put that white line between GS22 and GS23? It's in the worst possible place – bright white next to the darkest grays.
Q3. I've asked the lads over at the Photoshop forum about this next one, but nothing useful came of it. After a bit of a lead in about scanning IT8 targets, I said:
"I've been working for years with my screen set to a certain profile (My Books) set to colour temperature D50 and gamma 1.8 from memory, that quite accurately mimics what I see from a Xerox iGen printer with what I see on screen. Now, however, the only destination for my images will be my monitor at home, friend's High-definition TVs, or the digital projector at the local cinema, which most likely all have an image space of Rec. 709 or its virtual equivalent, sRGB, both of which use a gamma of 2.2 (or thereabouts) and D65. So I changed my monitor setting to sRGB with the result that the images that I have edited in PS (with monitor set to My Books) now look different. And I can't figure out how to make images in PS with monitor set to sRGB, look the same as when monitor is set to My Books."
Is it possible to mimic a monitor space from within another monitor space? I've tried soft-proofing, applying and converting to profiles; nothing seems to give the same look within sRGB as when looking at the image in My Books, which gives a yellow glow to the images, a bit like when viewing a slide.
Yes. Why not scan at the 4000dpi maximum?
Coca doesn't accept 4000 dpi files, at least on my setup. Plus they would be 100 MB files. Time consuming to upload, but possible.
Did you see Mark Segal's review of the Plustek 7600 (http://www.luminous-landscape.com/images-105/Plustek-7600.pdf) here on LuLa, where he carefully evaluates Silverfast profiles for Kodachrome and Fujichrome on several scanners, including the Nikon 5000?
[re white line] Don't know. I agree, it's a big problem.
I was thinking of drawing over it with a felt pen, but the line is only 0.1 mm wide. Could be tricky.
I provided a profile for your scanner that has a tungsten white point. The idea is similar - when you convert using Absolute intent the tungsten yellowness is carried over to the target color space. It could turn your digital projector into a tungsten-bulb slide projector. Did you try it?
Tried it, and had a horrible feeling it is an accurate representation of how slides look compared to digital – washed out, fuzzy, and dim. What I'm going to do is to get a scanned image on screen, and project the same image onto a mini slide-screen I've built which sits on the desk next to my monitor. Then I'll take a photo of both, side by side, and post.
Shoot Out
Might be useful, but from what I've learned so far, there is so much optimising that has to be done, you may never be able to fairly compare various devices – unless each has been exhaustively optimised. I was hoping to put together a PDF, The Art and Science of Scanning Kodachrome, that would work for any setup, but I'm not sure that will be possible now. I'll still put it together, but the title might have to change. Something like Kodachrome and Black Magic – How Scanning and Satanic Practices Intertwine. Forget the science, it's mostly black magic in my experience.
Another horrible Feeling
After all this time spent profiling, I find I'm ending up with mediocre images that require significant editing in PS – though I do have a technique that is semi-automatic and only takes 5-10 minutes per slide. I look at the editing required and I think: Why bother profiling? So that's another set of tests: to compare the same editing approach to profiled and unprofiled images. The former had better give better results, or I'll start practising satanic rites on my IT8 targets – starting with surgery to remove that white streak.
Next step - faking a GS23 patch
1. I need to determine the maximum density that the Coolscan can measure by scanning a dense black from another slide, to make sure it is the same (or darker) than GS23.
2. When I have established that the density of GS23 is within the Coolscan's capabilities, I can alter the Lab value of the scanned GS23 patch in Photoshop so that it approximates the real figure.
3. Then get rid of some of the flare on the GS22 side of the white streak (again in PS), and reprofile. This should allow the profile to more accurately model the densest blacks because there won't be the unexpected density reversal of GS22 and GS23.
Do you see any difference in GS17-23 as you tab from one to the next? I certainly do on my system. As I tab from Gamma 1.6 -> 1.4 -> 1.0 the blacks become lighter and a haze is cast over them – the same effect I see in the darkest area of some slides.
I have uploaded 4 sets of reference images to: http://www.mediafire.com/?35wnw6cxd7vm1
...
Happy shadow looking!
QUES
That means Nikon sRGB and sRGB are identical, doesn't it?
I'm betting on the latter, as it only involves one slight modification to one patch. And I'm betting that as long as GS23 is lower than GS22, it won't make much difference what value it is. I don't think any of this will make much difference anyway, but it's a good way of learning about profiling.
If you think it is worthwhile trying both methods for the profile shootout, I'll give it a go. Looking at the graphs, how much lowering do you reckon? My idea is to remove GS23 from the graph, then insert its "X" value into the calculator (0.51), and see what "Y" value pops out – which will become the new L value for my fake GS23.
So we have been evaluating profiles quite differently! I always convert to a working space (either Prophoto RGB or a linear variant of it) before anything else.
We must be inhabiting the same ether space and unconsciously communicating. Last night I was thinking about what happens when you change colour numbers in the profile space and thought: colours must be moving away from what the profile is trying to correct, but couldn't quite convince myself that it was true. Now I know it is. Yet another trap for a novice Kodachrome scanner – don't edit in the profile space, even when testing. It will be interesting to see if editing in another space makes much difference to the shadows.
QUES
Any idea how profilers average a patch? How much of the patch do they average? Do they discard values exceeding certain limits? Do they correct the average if the histogram shows a bias (it should be bell-shaped, I assume)?
It seems to me that to obtain an optimum profile, you should precondition the RGB data and the darker IT8 patches.
Normally scanin computes an average of the pixel values within a sample square, using a "robust" mean that discards pixel values that are too far from the average ("outlier" pixel values). This is done in an attempt to discard values that are due to scanning artefacts such as dust, scratches etc. You can force scanin to return the true mean values for the sample squares that includes all the pixel values, by using the -m flag.
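In the spirit of that description, a minimal sketch of an outlier-discarding patch average might look like this (the actual Argyll thresholds and iteration scheme aren't documented here, so the cutoff and number of passes are assumptions):

[code]
# A minimal outlier-discarding ("robust") patch average. The k-sigma cutoff
# and the number of passes are assumptions, not Argyll's actual algorithm.
import numpy as np

def robust_mean(pixels, k=2.5, passes=3):
    pixels = np.asarray(pixels, dtype=float)
    keep = np.ones(pixels.shape, dtype=bool)
    for _ in range(passes):
        m, s = pixels[keep].mean(), pixels[keep].std()
        if s == 0:
            break
        keep = np.abs(pixels - m) <= k * s
    return pixels[keep].mean()

# Example: a dark patch with a couple of bright dust specks
patch = np.concatenate([np.random.normal(12.0, 0.5, 5000), [200.0, 180.0]])
print(robust_mean(patch), patch.mean())   # robust mean ignores the specks
[/code]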
Since only 2 or 3 commands are needed to build profiles, it takes only a short time to install and learn to use Argyll.
Just the intensity of Argyll's documentation puts me off. Pity, but that's the way it is. I'm stuck with Coca.
Resolution
It is designed to cope with a variety of resolutions, and will cope with some degree of noise in the scan (due to screening artefacts on the original, or film grain), but it isn't really designed to accept very high resolution input. For anything over 600DPI, you should consider down sampling the scan using a filtering downsample, before submitting the file to scanin.
Preconditioned or not?
Normally scanin has reasonably robust feature recognition, but the default assumption is that the input chart has an approximately even visual distribution of patch values, and has been scanned and converted to a typical gamma 2.2 corrected image, meaning that the average patch pixel value is expected to be about 50%. If this is not the case (for instance if the input chart has been scanned with linear light or "raw" encoding), then it may enhance the image recognition to provide the approximate gamma encoding of the image.
Preconditioning. Guy Burns reckons 1.6 is optimum for his setup...
I don't reckon I'll see any improvement, because I based my choice of gamma 1.6 (and earlier, 1.8) by just looking at the raw, profiled, unedited scans. After I made the choice of what I thought was the best raw scan, only then did I try editing.
L* Errors Caused By The Surround
Attached is a graph of L* errors for a variety of GS patches, measured with respect to each patch. This graph is not comparing L* with the IT8 value; the graph shows the variation in average L* in each patch as the linear size of the selection changes: 100%, 80%, 64%, 51% and 41% (funny numbers because I scaled by 0.8 each time). As expected, the 100% patch is most affected by the surround, and the effect is most pronounced for darker patches. Starting from a 100% selection (in reality about 95%), as the selection was made smaller for each patch, the errors became smaller. Below ~40%, there was very little change in the average value, so I chose 40% as my reference.
Summary
To avoid L* errors caused by the surround being of a different luminance than the patch itself, the optimum size of the selection should be around ~40%, resulting in a maximum error of less than 0.1 L* units. Errors remain low up to an 80% selection, but anything above that should be avoided, particularly for the darker patches, because the errors become significant.
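The measurement itself is easy to reproduce; here is a rough sketch (with an invented patch, purely to illustrate the idea of averaging only a centred fraction):

[code]
# Average only a centred fraction (linear size) of a patch. The patch here is
# invented: a dark patch whose outer 5 pixels are contaminated by flare.
import numpy as np

def central_mean(patch, frac=0.4):
    h, w = patch.shape
    dh = int(round(h * (1 - frac) / 2))
    dw = int(round(w * (1 - frac) / 2))
    return patch[dh:h - dh, dw:w - dw].mean()

patch = np.full((100, 100), 3.0)
patch[:5, :] = patch[-5:, :] = patch[:, :5] = patch[:, -5:] = 20.0
for f in (1.0, 0.8, 0.64, 0.51, 0.41):
    print(f"{int(f * 100)}% selection: mean = {central_mean(patch, f):.2f}")
[/code]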
Hmm. Without cranking up the lightness to evaluate the dark tones?
When initially comparing the results of IT8 profiling, and in all subsequent comparisons, I simply opened my reference files (scanned at the various gammas), applied the relevant profile, and had a look at the image, whether it be light, dark, or in between. No cranking up the lightness, no bringing down the highlights, I evaluated all images unedited. Ref 18 Gamma 1.0 (for example) immediately struck me as having something wrong with it. When a similar effect was observed in some of the other Gamma 1.0 references, I decided that Gamma 1.0 was not optimum. Editing may have been able to bring it up to look similar to the others, but I haven't tested that yet because at this stage I want to compare the effect of IT8 profiling alone to determine the best starting point. Comparing editing at various gammas is a test yet to come, after I work out how to optimally scan using my setup. But what I have done is to play around with the gamma I consider to be the best starting point, to see what improvements can be had. When I work out how to edit in the most effective, efficient manner, then I'll test that editing method with all gammas, subsuming the IT8 profiling and editing into one overall valid comparison. If Gamma 1.0 comes out on top, I'll be a Gamma 1.0 man. I'm not in love with a certain gamma – I want the best end result.
When initially comparing the results of IT8 profiling, and in all subsequent comparisons, I simply opened my reference files (scanned at the various gammas), applied the relevant profile, and had a look at the image, whether it be light, dark, or in between. No cranking up the lightness, no bringing down the highlights, I evaluated all images unedited. Ref 18 Gamma 1.0 (for example) immediately struck me as having something wrong with it. When a similar effect was observed in some of the other Gamma 1.0 references, I decided that Gamma 1.0 was not optimum.
If this is why you reject Gamma 1.0, I think you should take another look at it. This issue continues to pop up, and I think it would be good for the conversation to get past it once and for all…
No, that's not what I am seeing. What I see is an obvious, but subtle, change in contrast and colour in the shadows that zooming-in doesn't fix, and that changing colour space/editing doesn't appear to be able to improve, but I haven't fully tested the latter across a range of images.
Here are some images - does the first one show what you are seeing?
Problems with an Analytical Approach
One of the problems with the purely analytical approach of evaluating profiles by their calculated errors is the possibility of internal errors being locked into the loop and not revealing themselves. Generating a profile is a closed loop: you assume you have an accurate IT8 target, you scan it, the profiler compares the scanned colours with the IT8 reference colours, and it generates a profile. A closed loop works well if everything is reasonably accurate. Errors between target and scan (after profiling) will be reported as being low. If, however, one of the patches has been incorrectly measured by Kodak, say, or has "surround" problems, then a low error may still be reported because there is nothing to tell the profiler that something is wrong. The profiler may be generating a very accurate profile for the target – it corrects for the inaccuracy, not aware of the inaccuracy – but the profile may be inaccurate for real-life slides. I'm reminded of a comment by the author of an Olympus book on SLR cameras from the 1980s in which he explains the use of 18% gray cards and finishes with: "I can guarantee that if you go around photographing gray cards, you'll have perfect exposure every time." In the case of IT8 targets, the sentiment could be paraphrased: "If you profile IT8 targets and generate error reports, you'll always get near-perfect results." It's a closed loop: unless there is something wrong with the profiler, you must get good results.
Low Errors, Poor Profile
Let's say that Kodak reckons GS23 has an L* value of 0.5, when in fact the actual patch is 3.0. Let's assume GS23 scans at 3.0 (good scanner), and that the profiler does a good job of profiling. When the errors are calculated, they will therefore be reported as being low. But then along comes the scan of a real-life slide, which may also contain actual L* = 3.0 values. Every time the profile sees L* = 3.0 in the real image it says: "That should really be 0.5." So it alters the colour by darkening, causing a visible problem in real-image shadows but no obvious problem in the target (the 3.0 patch will be corrected to 0.5 to make the target look like it should).
The above problem happens in practice with my Kodachrome IT8 target (at least, that's my explanation). Here is what I think is going on:
1. Kodak manufactures a master IT8 slide using a certain process that will be repeated in the manufacture of subsequent slides.
2. Each patch is measured with a light instrument, and an IT8 data file is created. Now, how is the L* measurement made? I don't know, so I'll take a guess. Does Kodak have a physical mask which sits over the slide and only allows D50 light to come through one entire patch at a time? Maybe, but I think unlikely (remember, I'm just guessing). I reckon they would throw a small circle of light through each patch, measure the colour of the circle of light, and make sure the circle doesn't approach too closely to the sides of the patch. Thus, Kodak measures the colour of only a small portion in the centre of each patch, ignoring the "surround" problem.
4. GS23, the blackest, is most affected: a bright white on the left, and three light grays on the other sides. With my scanner, GS23 scans higher than it should by 3 or 4 L* units – and worse, it scans lighter than GS22 (which has the same bright white on the right, and two light grays, top and bottom). Being lighter, GS22 is less affected. GS21 has light grays top and bottom only, and because it is lighter than GS22, is less affected again. GS21-15 are similarly affected top and bottom, but because they are progressively lighter, the "surround" problem is not as prominent.
5. This surround problem is most prominent for GS15-23. It should not cause problems in other areas, but it is probably best that only the central area of each patch be averaged by a profiler.
Explanation in Figures
We might have a situation where Kodak provides an L* number measured in a small central area, but the scanner/profiler combination profiles a different area, which, because of flare and the surround effect (inbuilt and from the scanner), causes the two values to be reconcilable for the target itself, but not reconcilable when applied to other images.
I'd better explain with figures (made up, just for explanation), which refer to three patches, the darkest two of which show an unexpected (but real) reversal in density, as do GS22 and GS23, caused by flare and surround problems. The first figure is an assumed scanned value of L*, the second is the target value. The figure in brackets is what an ideal profiler would do to the scanned value.
10.1 -> 9.3 (sent darker by 0.8)
3.6 -> 1.1 (sent darker by 2.5)
3.7 -> 0.5 (sent darker by 3.2)
The IT8 profile, when applied to the scanned target, works perfectly. The image on screen looks just like the slide. The profiler has accommodated the flare, the scanner errors, the surround problem. Congratulations all around.
But something comes along to spoil the fun: a real-life slide. It also has L* areas of 10.1, 3.7 and 3.6, but they have not been compromised by flare or surround problems, just slight scanning errors. i.e. they scan at close to these values. Let's assume they scan at exactly these values for the sake of explanation. They are a part of a forest scene in shadow, and 10.1 is brighter than 3.7, which is ever so slightly brighter than 3.6. What does the profile do to them? Well, it transforms them by its inbuilt routines, the same as in the example above:
10.1 -> 9.3
3.7 -> 0.5
3.6 -> 1.1
The shadows of the real scene now have problems. The 10.1 shadow is close to what it should be, but the other two shadow details have been reversed in their brightness and made blacker. The profile, by correcting the IT8 target for flare, surround, and scanner errors, will apply those corrections to all other slides even when the first two problems don't exist.
Summary
Colour correction by profiling, in the presence of flare and surround problems, introduces errors into the shadows, errors that are not present in an uncorrected scan. The errors can be minimized by averaging only the central area of each patch (or by replacing the whole patch with a colour value taken from the centre area), and by faking GS23 to be more in line with what it should be (0.51).
SAMPLE_ID   XYZ_X     XYZ_Y     XYZ_Z     RGB_R     RGB_G     RGB_B

BOX_SHRINK 3.5
GS20        0.50000   0.47000   0.34000   0.79935   0.50474   0.56849
GS21        0.33000   0.31000   0.22000   0.74026   0.42002   0.42761
GS22        0.13000   0.12000   0.10000   0.77738   0.34446   0.30561
GS23        0.06000   0.06000   0.08000   0.79995   0.37243   0.32470

BOX_SHRINK 7.5
GS20        0.50000   0.47000   0.34000   0.78572   0.49316   0.55793
GS21        0.33000   0.31000   0.22000   0.72382   0.41017   0.41799
GS22        0.13000   0.12000   0.10000   0.73651   0.32489   0.28964
GS23        0.06000   0.06000   0.08000   0.75358   0.34378   0.29991

BOX_SHRINK 12.0
GS20        0.50000   0.47000   0.34000   0.77368   0.48801   0.55766
GS21        0.33000   0.31000   0.22000   0.71336   0.40633   0.41275
GS22        0.13000   0.12000   0.10000   0.71942   0.31878   0.28413
GS23        0.06000   0.06000   0.08000   0.73492   0.33332   0.29179
I did some tests with Argyll by changing the size of the measurement area on the patches. I did this by editing the BOX_SHRINK parameter in the it8.cht file. Coca uses the same it8.cht file, which you can find and change in the Coca/Argyll/ref directory if you're feeling adventurous… Unfortunately, although some flare appears to be excluded, reducing the size of the measurement area this way still doesn't prevent GS23 from being lighter than GS22.
If I can find the file, I'll certainly change it. Where do you find the patch size that Coca uses? Or is it the same as what I would measure in Photoshop? My graphs in a previous post indicate you have to go down to 40% patch size before the RGB values stop decreasing. That means a box shrink of about 28 to fully remove the effect of the gray surround and flare. But it doesn't work for GS23. The only way to fix GS23 would be to replace it with a new artificial patch of suitable value.
Scanning an unexposed slide to determine GS23/DMAX could be very helpful. Do you have one?
I'll have a look through my slides. Must be a piece of unexposed Kodachrome somewhere. If not, I know someone in Hobart who has a roll of unexposed Kodachrome. I'll ask him to snip a piece off the end. That would be suitable, wouldn't it?
What I have done is to manually replace the whole of GS15-22 with the average of a centre 40% patch from each, and completely replace GS23. When I finish testing I'll upload the "improved" IT8 target. Initial testing has shown that it has no effect on anything other than the last few GS patches – looking good!
What would be the best way to generate a fake GS23? I used your Lab generator to generate the actual IT8 value. A better way, I assume, would be to work out the ratio between GS22 and GS23 and do it that way. Is the following technique valid?
Given the L* values of:
GS22 (IT8) = 1.07
GS23 (IT8) = 0.51
These are very close to 2:1, i.e. one exposure stop.
1. From a Gamma 1.0 scan, select 40% of GS22, and average it.
2. Open Colour Picker and set the foreground colour to the average of the 40% patch. Step Backward to undo the averaging on GS22.
3. Make a 100% selection of GS23 and Edit > Fill with the foreground colour.
4. Apply an Exposure correction of -1.0
Result: an artificial GS23 that should correlate well with its closest neighbour.
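As a sanity check on the 2:1 assumption, here is a small sketch that converts GS22's published L* to linear luminance, halves it (one stop), and converts back; it only deals with L*, so the a*/b* question raised below still stands.

[code]
# Check that one stop down in linear luminance from GS22 (L* = 1.07) lands
# near the published GS23 value (L* = 0.51). Only L* is handled here.
EPS, KAPPA = 216 / 24389, 24389 / 27

def lstar_to_y(lstar):
    return ((lstar + 16) / 116) ** 3 if lstar > KAPPA * EPS else lstar / KAPPA

def y_to_lstar(y):
    return 116 * y ** (1 / 3) - 16 if y > EPS else KAPPA * y

gs22 = 1.07
print(y_to_lstar(lstar_to_y(gs22) / 2))   # ~0.535, close to the IT8 value of 0.51
[/code]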
The only problem I see is working in Lab - the L* values are easy to interpolate but what do you do about the a* and b* values?
Regarding Box Shrink – I found it so I'll give it a go. Windows said it couldn't open the text file, so I opened it with Notepad. Since I don't like playing around with such things unless I am reasonably sure of what I am doing:
Q1: Will Coca still recognise the data file if I save it from Notepad?
Q2: Is this IT8.cht file newly created each time Coca is run, or is it a permanent fixture which Coca originally installs and then just looks at?
Q3: What's the mathematical relationship between Box Shrink and the size of the box?
The physical units used for boxes and edge lists are arbitrary units (i.e. pixels as generated by scanin -g, but could be mm, inches etc. if created some other way), the only requirement is that the sample box definitions need to agree with the X/YLIST definitions. Typically if a scanned chart is used to build the reference, the units will be pixels of the scanned chart.
The BOXES keyword introduces the list of diagnostic and sample boxes. The integer following this keyword must be the total number of diagnostic and sample boxes, but not including any fiducial marks. The lines following the BOXES keyword must then contain the fiducial mark, diagnostic or sample box definitions. Each box definition line consists of 11 space separated parameters, and can generate a grid of sample or diagnostic boxes:
kl lxs lxe lys lye w h xo yo xi yi
-SNIP-
w, h are the width and height of each box in the array.
-SNIP-
The keyword BOX_SHRINK marks the definition of how much each sample box should be shrunk on each edge before sampling the pixel values. This allows the sample boxes to be defined at their edges, while allowing a safety margin for the pixels actually sampled. The units are the same arbitrary units used for the sample box definitions.
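So, for Q3, the relationship appears to be simply that each edge of the sample box is pulled in by BOX_SHRINK, in the same (arbitrary, typically pixel) units as the box definitions. A tiny sketch, with a made-up box size just to illustrate:

[code]
# Sampled fraction left after BOX_SHRINK pulls every edge in. The 51 x 26
# box size is made up for illustration; real values come from the BOXES
# definitions in the .cht file (same arbitrary units, typically pixels).
def sampled_fraction(w, h, box_shrink):
    sw, sh = w - 2 * box_shrink, h - 2 * box_shrink
    if sw <= 0 or sh <= 0:
        raise ValueError("BOX_SHRINK is larger than half the box dimension")
    return sw / w, sh / h, (sw * sh) / (w * h)

print(sampled_fraction(51, 26, 3.5))    # linear fractions (w, h) and area fraction
[/code]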
Q4: If I change the image resolution, I assume the Box size changes, so I could end up in strife if I have a Box Shrink that is too big.
Where is the Box size stored so that I can check the size?
Thinking about it – and there'll be more questions for sure – it might be easier if I just replace the sus patches with a 40% average. That, I can do, and I know it works.
For a 64% vertical patch, the horizontal scale will be 0.64 - 0.5 = 0.14. Too narrow. For 80%, it will be 0.3, which should be okay. So the optimum Box Shrink would appear to be 25.63 × (1 - 0.8) = 5.1, say 5. This minimizes the L* error caused by the surround, and maximises the horizontal box dimension. But it won't be as good as using a true 40% box. On the other hand, the scaling will apply to all patches (and not just a few of the GS patches as I did manually), so overall the problem of flare and contamination may improve.
I made an error in the formula relating FH and FV, now corrected, which means the optimum BS is 7.7 (not 5.5). Not much difference.
GS Boxes off centre? How come all the others aren't off centre? Maybe it's best to keep BS at 3.5 if Argyll does things like shift boxes around.
The next release will by default add some extrapolation patches up to the device min/max values along the neutral axis when -u is used with input profiles, to overcome the sometimes unexpected default extrapolation behaviour. You can always override this with extra patches though, if you don't like what it does.
cheers,
Graeme Gill.
-u: cLUT style input profiles will normally be created such that the white point of the test chart, will be mapped to perfect white when used with any of the non-absolute colorimetric intents. This is the expected behaviour for input profiles. If such a profile is then used with a sample that has a lighter color than the original test chart, the profile will clip the value, since it cannot be represented in the lut table. Using the -u flag causes the lut based input profile to be constructed so that the lut table contains absolute color values, and the white of the test chart will map to its absolute value, and any values whiter than that, will not be clipped by the profile, with values outside the range of the test chart being extrapolated. The profile effectively operates in an absolute intent mode, irrespective of what intent is selected when it is used. This flag can be useful when an input profile is needed for using a scanner as a "poor mans" colorimeter, or if the white point of the test chart doesn't represent the white points of media that will be used in practice, and that white point adjustment will be done individually in some downstream application.
-un: By default a cLUT input profile with the -u flag set will extrapolate values beyond the test chart white and black points, and to improve the plausibility of the extrapolation, a special matrix model will be created that is used to add a perfect device white and perfect device black test point to the set of test patches. Selecting -un disables the addition of these extra extrapolated white and black patches.
[argyllcms] Re: Verifying profile quality of LUT-based scanner and printer profiles
With the currently available release of Argyll, it is probably advisable to specify a fairly high level of smoothing, by using -r 1.0 or so. The next version will have better defaults in this regard, and shouldn't usually need a -r parameter. This should result in a smoother profile with a higher self fit dE.
But is there any way to verify the 'smoothness' of the profile? In particular, I'm thinking about the discontinuities that might exist in the LUT tables.
The interpolation algorithm doesn't really allow discontinuities, but it can have "overshoot" or "ringing".
The -r parameter specifies the average deviation of device+instrument readings from the perfect, noiseless values as a percentage. Knowing the uncertainty in the reproduction and test patch reading can allow the profiling process to be optimized in determining the behaviour of the underlying system. The lower the uncertainty, the more each individual test reading can be relied on to infer the underlying systems color behaviour at that point in the device space. Conversely, the higher the uncertainty, the less the individual readings can be relied upon, and the more the collective response will have to be used. In effect, the higher the uncertainty, the more the input test patch values will be smoothed in determining the devices response. If the perfect, noiseless test patch values had a uniformly distributed error of +/- 1.0% added to them, then this would be an average deviation of 0.5%. If the perfect, noiseless test patch values had a normally distributed error with a standard deviation of 1% added to them, then this would correspond to an average deviation of 0.564%. For a lower quality instrument (less than say a Gretag Spectrolino or Xrite DTP41), or a more variable device (such as a xerographic print engine, rather than a good quality inkjet), then you might be advised to increase the -r parameter above its default value (double or perhaps 4x would be good starting values.)
Mentioned are two options for the colprof command that might be worth trying: -u and -r.
Thanks for the interesting info. I have purposely avoided the Argyll mailing list, assuming it would be way beyond my understanding. But filtered through you, there might be something of interest. Are you able to tell what defaults Coca uses for -u and -r when it calls on Argyll? Can they be changed by me by altering a file, similar to the way I can alter Box Shrink? Do you know whether Coca uses absolute intent? I was hoping the profiled highlights would not be showing any significant errors, but from what you've quoted, highlights on a real slide that are brighter than GS0 may be clipped. I don't want that. Maybe I should be faking GS0 as well.
Smoothing by -r might help get rid of the reversal at GS21-23?
Apparently Kodak measured the target with some highly-specialized equipment, as the Q60 data are calculated from spectral measurements. The spectral data is available from Kodak in a QSP file.
Thanks for that suggestion. I already had a copy of the QSP file, but I didn't take much notice of it. I've just had a detailed look. So, the numbers are spectral data! I could sit down and plot some spectrums, like the ones that appear in Hunt's book. I might just do that, for interest.
Another option is the Silverfast Kodachrome targets.
When I purchased my non-Kodachrome targets from a particular supplier (I probably shouldn't include the name), I asked about Kodachrome. This is the reply:
I updated the files. Looks better.
The numbers in the measurement files: I assume the XYZ values come from the IT8 data file, and the RGB values (scaled 0-100) come from the 16-bit scan. Is that correct?
diagnostic image 7.7
measurements 7.7
Are you able to tell what defaults Coca uses for -u and -r when it calls on Argyll? Can they be changed by me by altering a file, similar to the way I can alter Box Shrink?
Do you know whether Coca uses absolute intent? I was hoping the profiled highlights would not be showing any significant errors, but from what you've quoted, highlights on a real slide that are brighter than GS0 may be clipped. I don't want that. Maybe I should be faking GS0 as well.
I read somewhere in a Hutchcolor document that he recommended manually inserting a pure black patch to improve the profile. I'll see if I can find it again.
The only method I'm comfortable with for getting rid of the GS22/23 reversal is faking GS23 at its target value, or at a value that can be demonstrated to be reasonable. Unless my scanner is reading the value of GS23 incorrectly, it seems to me that GS23 has been compromised beyond being useful.
Re your editing of Ref 18 at gamma 1.0 and gamma 1.8: A difficult slide to digitise accurately. The original is nowhere near as dark as it scans, but because it is underexposed, Kodachrome sent the lower exposures (most of the slide) into darkness – Hunt's 1.5 gamma thing – and the detail is hard to retrieve. The original slide shows obvious detail in the shadow on the tree; the pack is not solid black, it has a definite gradation to dark gray on the top half, and the red of the pack is not mottled with black as in the gamma 1.0 version, but more like the gamma 1.8 version. On a bright lightbox under a loupe this is a very pleasant-looking, moody slide. When projected, it loses a bit; and when scanned and presented on a monitor, it needs a lot of work before it appears as it should.
4. The numbers in the measurement files: I assume the XYZ values come from the IT8 data file, and the RGB values (scaled 0-100) come from the 16-bit scan. Is that correct?
The Hutch Color RGB Scanning Guide (http://www.hutchcolor.com/PDF/Scanning_Guide.pdf) - there's a wealth of info there about dealing with scanner flare, modifying patches, etc. I need to read it again.
There is another program like CoCa called Argyll CMS GUI. In contrast to CoCa it enables most, if not all Argyll options.
No good for me. It requires OSX 10.5 running on an Intel Mac. I'm still on OSX 10.4.
I feel that editing can make them equivalent. Which is the better starting point for editing? Which is easier to edit?
After several days of comparing editing techniques, I'm coming to the conclusion that the overall problem of getting the best result from a scanned slide lies not with the selection of the best profiling technique, but in the editing itself. Most of my reference scans require serious editing, even given perfect profiling, and I'm finding that sometimes I cannot obtain similarly good results if I select a different scan gamma to start with, or if I change the colour space. i.e. if after editing I choose a particular image as the optimum (to be used as a reference), most of the time I cannot reach that optimum with a different scan technique or in a different colour space.
Like I mentioned before, I think the >1 gammas can be noisier. Is the extra red noise visible?
I've seen lots of noise if I try and bring the shadow detail out, and I'll take your results as definite – that Gamma 1.0 has the least noise, as would be expected. Although, I have found that extra noise in the darkest shadows, as long as it is not obviously coloured, improves the image because the noise appears to be detail within the black. i.e. solid blacks are not desirable in most images. If they are broken up with noise, it can give the impression of detail, even though it is false detail.
It seems that the deep shadows can be made very neutral, which would make them easier to edit, compared to the standard-preparation profiles that almost all go red in the dark tones.
I have a technique that easily removes the colour cast in deep shadows, but it would be easier to edit if it wasn't there.
I found that even better is to adjust each R,G,B channel individually with its own value. Adjusting the R,G,Bs individually allows you to control the tint in the dark tones to achieve neutrality.
Do you think that will work for a variety of slide images? The shadows on my slides show a range of colour casts because of the lighting conditions: red (sunsets or bright red shirts), green (forest), blue (sky). They have to be edited away for a more natural look, and such editing may swamp the small improvements available by adjusting the profile. It's good to have the most accurate profile, so I'll give your new plugin a go to see the effect.
When the profile with this built-in flare compensation is used on an image with less flare, the darker tones can be clipped, making ugly artifacts.
I'm seeing some of this clipping, I think. It is difficult to edit away, and if the original slide had a lot of shadow detail, it makes for an unsatisfactory image. I am still finding that editing gamma 1.8 scans sometimes gives the best end-result. Not all the time. Gamma 1.0 comes out on top here and there, and so does sRGB. Which is rather annoying. I was hoping for a single method to give optimum results for all slides.
Clever Kodak?
What are the chances that Kodak purposely designed those flaws into GS18-23 so that when profiled, such a target gave the most pleasing shadow result? Monitors, for example, only have 256 levels. If the end result is to be displayed on a digital screen (in my case, eventually, BluRay via a monitor or projector), there must come a point when you have to say: for the best image when viewed, those extra 16-bit levels have to be compressed in a certain way for optimal results on an 8-bit display. I need convincing that all my attempts at better profiling have not resulted in a degraded 8-bit image when viewed – because blacks on the digital image are now closer to the slide-blacks. Deeper blacks are difficult to reproduce digitally (obtaining rich blacks is the Holy Grail of all digital projectors), and having an excess of them, as does Kodachrome, can only cause problems.
Given a perfect profile applied to a perfect scan of a Kodachrome slide, you will still end up with a very poor digital image because Kodachrome has been optimised for projection, not digitising. Kodachrome scans will always require significant editing because of the way they are, and I'm hoping that such editing doesn't render profiling unnecessary. It's been a good learning experience, but I hope it hasn't all been wasted.
What's happening in my testing (a couple of hours a day for six months), is that I'm moving away from the "science" of scanning Kodachrome, and into the "art" aspect. And that may pose more serious difficulties than any of the science.
I have a present for you, Cliff: Gamma 1.0 scans of two separate unexposed Kodachrome 64 slides: http://www.mediafire.com/?d38ugsiwkc8a7cm
If you're game, a discussion on editing Kodachrome scans would be great.
Good idea. I'll start it soon. But this thread has a little distance to run yet.
              Gamma 1.8             Gamma 1.0
              R      G      B       R      G      B
raw RGB       1437   993    891
Argyll XYZ    77     72     182     1274   1008   867
Argyll Lab    167    60     632     1156   1278   1417
SCARSE.4      1570   1348   1102    1633   1452   1141
LPROF         1399   1010   673     1477   1032   661
inCamera3.1   311    217    316     673    714    652
The average RGBs of those scans are:
#01 141, 63, 53
#02 141, 62, 52
The only explanation must be that the Coolscan is throwing a red cast, which profiling should remove. These dense blacks are a very touchy area.
The red cast in the raw scans of deep shadows is an interesting phenomenon. It could be a part of the scene itself, or it could be a problem with the scanner. How is it that a Kodachrome slide taken in near perfect blackness (my two "black" slides of a previous post) shows a significant red cast when scanned?
Removing the non-linearity of the D-H Curve
My next series of tests is to try and remove the non-linear response of the Kodachrome D-H curve, by applying a correction curve in Photoshop's Curves. This would have to be applied after profiling (and the scan would have to be a linear scan for this to work), and before editing. Here I am not trying to correct colour, I am trying to remove the non-linear density characteristic of Kodachrome. This is my thinking:
1. The D (density) axis corresponds to what the scanner sees on the slide. For the red channel of Kodachrome 25, the density range, for example, ranges from about 0.2 to 3.8. These values could be scaled and converted to linear relative-luminance, and would become the horizontal axis (the "input" axis) for Photoshop's Curves, ranging from 0-255.
2. For any value of D on the slide, the actual scene "brightness" for that particular D could be derived from the D-H curve. I am assuming that "brightness" and luminance are directly related to exposure. The problem I have is that there is no upper limit to real-life exposure, whereas there is a limit to Photoshop's 8-bit "brightness" levels: 255. How do you relate the two? My solution is to set the minimum exposure from the D-H curve as being equivalent to Photoshop's brightness of 0, and the maximum exposure from the D-H curve as being equivalent to Photoshop's 255.
Neglecting the complications of dim and dark surrounds and so on, and assuming that the scanned image will be a faithful reproduction of the scene if the relative luminances between scene and image are linear, then the above figures should allow a curve to be set up in Curves to correct for Kodachrome's non-linear density characteristic.
Can you see any problems with this approach?
Another problem: I could go through all the bother of correcting for the D-H characteristic, only to find at projection time (dark surround) that I should have kept the original Kodachrome D-H characteristic as optimum for dark surrounds – assuming that as far as dark surrounds and optimum gamma are concerned, there is no difference between projecting digitally and projecting by a slide projector.
Did you convert from Density to Transmittance by the formula Transmittance = 10^(-Density), then scale Transmittance by the maximum RGB value? Likewise convert LogE to linear Exposure? The curves will have a completely different shape in a linear-linear plot compared to the log-log characteristic curve.
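For what it's worth, here's a minimal sketch of that conversion in Python. The (log exposure, density) pairs are invented for illustration; they are not read off the published Kodachrome curves, but the density-to-transmittance and LogE-to-exposure steps are the ones described above.

# Invented (log exposure, density) samples standing in for one channel's D-H curve.
dh_samples = [
    (-2.6, 3.8),   # deep shadow: low exposure, high density
    (-2.0, 3.2),
    (-1.4, 2.2),
    (-0.8, 1.2),
    (-0.2, 0.5),
    ( 0.2, 0.2),   # highlight: high exposure, density near dmin
]

# Density -> transmittance (T = 10^-D), scaled so the lightest sample maps to 255.
transmittances = [10 ** (-d) for _, d in dh_samples]
t_max = max(transmittances)
curve_inputs = [round(255 * t / t_max) for t in transmittances]

# LogE -> linear exposure (E = 10^LogE), scaled so the largest exposure maps to 255.
exposures = [10 ** log_e for log_e, _ in dh_samples]
e_max = max(exposures)
curve_outputs = [round(255 * e / e_max) for e in exposures]

for i, o in zip(curve_inputs, curve_outputs):
    print(f"Curves input {i:3d} -> output {o:3d}")

Plotting curve_inputs against curve_outputs makes the point about shape: the linear-linear curve looks nothing like the familiar log-log characteristic curve.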
Does Curves have enough resolution to linearize the dark end of the tone scale?
Not if you want to do it accurately – but it may not need to be done accurately. No point worrying too much about that aspect until I prove that the idea works.
If the idea works and I need to generate accurate Curves in Photoshop, would you be able to write a script that accepts Curves "input" and "output" values and passes those values to Curves so that it can draw smooth curves through the values? Could a polynomial equation be sent to Curves?
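Not presuming to answer for Cliff, but one route around Curves' limited number of points (a sketch only, assuming NumPy is available; the input/output values below are illustrative): fit a polynomial through the sparse points and bake it into a full 256-entry lookup table, which can then be sampled for as many Curves points as needed, or applied to the image data outside Photoshop altogether.

import numpy as np

# Sparse input/output points of the kind typed into Curves (illustrative values only).
inputs  = np.array([4, 30, 60, 100, 157, 255], dtype=float)
outputs = np.array([7, 20, 55, 110, 157, 255], dtype=float)

# Fit a low-order polynomial through the points...
poly = np.poly1d(np.polyfit(inputs, outputs, deg=3))

# ...and bake it into a 256-entry lookup table.
lut = np.clip(np.rint(poly(np.arange(256))), 0, 255).astype(np.uint8)

# Read evenly spaced entries back off as Curves points, or apply the table
# directly to 8-bit image data held in a NumPy array: corrected = lut[image_8bit]
print(lut[::32])

Whether a cubic is the right order, and whether applying a lookup outside Photoshop suits the workflow, are open questions.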
– for example, the D-H curves show a minimum density of ~0.21 (0.62 relative luminance), yet my scanned slides go much lighter than that.
Below are the Input/Output figures I tried to put into Curves. The first column is the input, the second is Green output, and the third is Red output. Because PS doesn't allow an input of less than 4 into Curves, I combined the first four settings into one for green and red: 4,7, then rounded the others. Note how the differences disappear at Input = 157 (equal to density of 0.21, the lightest shown on the D-H curves). So the corrections only apply to half the brightness range of the slide. Didn't seem right when I calculated the figures, and it certainly didn't remove the blue cast.
You shouldn't get lighter than dmin if you Convert-to-Profile from the scanner profile using Absolute. Are you using the same exposure that you used when scanning the profile target?
Guy, are you still with me?
Yes, still here playing around, though I can see an end in sight. At some stage I have to move away from testing and start actual scanning. I've been generating profiles for Agfa, Velvia and Ektachrome and testing them based on the Kodachrome results. A well-exposed Velvia RVP50 slide is so easy to scan and edit compared to Kodachrome. In some cases (see Gil 06: http://www.mediafire.com/?o1y5c7cpstedj3n, one of several Velvia slides I've borrowed from a photographer mate), the unedited profiled scan is superior to my attempt at editing it. I've never come across that when editing Kodachrome, which always seems to need editing.
Cliff, thanks for the four curves. I'll play around with Ektachrome and the Hutch method to see what comes of it.
And maybe I should choose Lab clut as my preferred profile, instead of XYZ (see Knockout Rounds, below).
For a Gamma 1.0, 16-bit scan (see Ref 03, in the clouds and snow), the raw RGB values are R > 28,000 and G & B > 30,000, well above what the D-H curves indicate I should be getting as the brightest scan from Kodachrome. And when the profile is applied, those values increase. I'm not sure anything sensible can be gained by a person of my limited colour knowledge trying to correct a scan by using the Kodachrome D-H curves, because I don't know how those curves relate to an actual scan.
Preliminary Overall Results
Targets tested: Kodachrome, Ektachrome, Fuji (Velvia, Sensia, Provia, Astia), Agfa. I scanned all my IT8 targets at G1.8 with the Coolscan V ED scanner, then made a second "corrected" copy of each target by averaging 40% of certain GS patches and applying that average to the whole patch. For Kodachrome I altered GS15-GS23. Alterations for the other films varied, depending on how much flare from the surrounds was present. All films except Kodachrome showed an increase in density from GS22 to GS23; only Kodachrome showed a slight reversal. Because of this reversal, for the corrected version of Kodachrome I replaced GS23 with the colour value of unexposed Kodachrome (Lab 56, 21, 12); for the uncorrected version, I replaced GS23 with a copy of GS22 (so that the "uncorrected" scan wasn't corrected very much).
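In case anyone wants to repeat the "corrected" target step, here is a rough sketch of the patch-flattening in Python with NumPy. Two assumptions to flag: I am reading "averaging 40% of a patch" as the central ~40% of its area, and the patch coordinates are whatever you measure off your own target scan.

import numpy as np

def flatten_patch(img, top, left, height, width, area_fraction=0.4):
    # img: NumPy array of the scanned target, shape (rows, cols, 3).
    # top/left/height/width: the patch rectangle in pixels.
    # area_fraction: portion of the patch area to average (0.4 here).

    # Side lengths of a centred sub-rectangle covering ~area_fraction of the patch.
    sub_h = max(1, int(round(height * area_fraction ** 0.5)))
    sub_w = max(1, int(round(width * area_fraction ** 0.5)))
    y0 = top + (height - sub_h) // 2
    x0 = left + (width - sub_w) // 2

    # Average the central region per channel, then paint the whole patch with it.
    mean = img[y0:y0 + sub_h, x0:x0 + sub_w].mean(axis=(0, 1))
    img[top:top + height, left:left + width] = np.round(mean)
    return mean

Run it once per GS patch to be flattened (GS15-GS23 in my Kodachrome case), then save the result as the "corrected" copy of the target.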
Knockout Rounds
All S+M profiles, when applied to the target, showed colour changes in certain colour patches compared to Lab and XYZ (which appeared identical), so S+M was knocked out at round 1. For both XYZ and Lab, the difference between "uncorrected" and "corrected" was minimal, in most cases undetectable, the only difference being a lightening of the darkest GS patches. Because the "corrected" versions should theoretically give better profiles, and because the differences between "uncorrected" and "corrected" were minimal, the "uncorrected" versions were knocked out in round 2.
That left the "corrected" versions of XYZ and Lab to play off in the final. The difference came down in XYZ's favour because of the way it retained the contrast in certain grainy patches (typically GS17-19), i.e. the XYZ profile kept the grain intact whereas Lab smoothed it out. Originally I chose Lab because of this, but after further thought I realised that Lab had the lower contrast in the darkest regions (thus smoothing the grain), so I opted for XYZ as the best.
5. Editing will be in the IT8 profile space with 2-4 Curve layers applied. There are several reasons for not converting to a wider gamut space. I have arrived at this tentative decision after a few hundred test edits, but the reasons are not yet final:
(a) Testing seems to indicate that editing in a wider-gamut space makes editing more difficult. I'm not convinced that this is a real phenomenon (i.e. a change in editing procedure might fix the problem), and until I work out why this might be the case, this finding is open to change.
(b) Editing in the IT8 profiled space by applying Curve layers is non-destructive. Converting to another profile alters the colour numbers, and the process can't be exactly reversed. By staying in the IT8 profile space and editing only with Adjustment Layers, the colour numbers are always only one step removed (the gamma 1.8 step) from what the scanner sees on the slide. This is also a significant space saving when archiving, because I won't have to archive the original scan separately; it is non-destructively incorporated in the edited file.
(c) All my scans are destined for Rec 709 output (effectively sRGB) on a digital projector. I don't require a wide gamut.
Additional Tests
1. One of my long-standing photographer mates wants to learn how to scan his Kodachrome slides, so he sent me some slides to play around with and comment on. I asked for "difficult" sides, and he complied. Check out my thoughts at: http://www.mediafire.com/?ymh90cvds5c3w2j
I thought this thread was long buried. Well, at least I was hoping it was.
Successful profiling assumes a good quality scanner and an accurate target. The HCT target you mention may well be superior to the other targets, but if the scanner is sus, what's the point? And as reluctant as I am to say this, the Coolscans have problems (and I'll include the 5000 and 9000 here, in addition to my V ED, although that may change once I see the test scans from the upmarket models).
You ever had to change your mind about something that you had always assumed? It can be a long, slow process to come to a different viewpoint. I'm that way with my two Coolscan V EDs. Give me another month or so and I'll probably come around to accepting that a flatbed scanner without profiling (the Epson V700) – to me the idea used to be anathema – can give as pleasing an image as a dedicated slide scanner with profiling. Often the image is more pleasing. I still shake my head about that.
The Science & (Black) Art of Scanning Kodachrome is going to be a good read when I finish it.
Are you sure? I'm getting results that seem decent enough to me with a Coolscan 9000 and scanning K64 (using Silverfast calibrated with a K64 IT8 slide).
I'm as certain as I can be that, taken overall (sharpness, contrast, colour, lack of flare, and general "feel"), the Epson flatbed unprofiled gives results that are as good as a profiled Coolscan V ED in most cases. Sometimes better, sometimes inferior. I have come to that conclusion after detailed side-by-side comparison, results of which I will make available when my testing is finished.