
Author Topic: Starting a search for a smaller, but more optimal, profile patch set  (Read 8033 times)

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197

I was evaluating the Lab distances of a 3,500 patch set and noticed that a very large number of patches had Lab values very close to some other patch in the set. This seems pointless and may even lead to anomalies around colors with different RGB values but nearly identical Lab values when printed. The maximum nearest-neighbor distance is dE 12.0, while the minimum is 0.08.

So I'm currently, and recursively, eliminating the patches with the smallest spacing. I've reduced the RGB patch set to about 2,000 patches, each at least dE 5 from all the others, while the maximum nearest-neighbor distance has remained at dE 12.
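A minimal sketch of this kind of greedy pruning (illustrative only, not the actual script; plain Euclidean dE76 in Lab and random stand-in data):

Code: [Select]
import numpy as np

def prune_patches(lab, min_de=5.0):
    """lab: (N, 3) array of L*a*b* values. Keep a patch only if it is at
    least min_de away from every patch already kept (greedy, order dependent)."""
    kept = [0]
    for i in range(1, len(lab)):
        nearest = np.linalg.norm(lab[kept] - lab[i], axis=1).min()
        if nearest >= min_de:
            kept.append(i)
    return np.asarray(kept)

# Stand-in for a measured 3500 patch target
rng = np.random.default_rng(0)
lab = np.column_stack([rng.uniform(0, 100, 3500),
                       rng.uniform(-80, 80, 3500),
                       rng.uniform(-80, 80, 3500)])
print(len(prune_patches(lab)), "patches survive at dE >= 5")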

This will create a patch set with "holes", but holes that don't provide much useful info anyway. The question is: can the profiling software handle these?

If so, is the quality of the profiles generated as good as that from the full patch set, or at least very close? If it is, I will expand the set to 10,000 or so patches and apply the same removal process. Hopefully that yields a much smaller patch set, with a lower maximum nearest-neighbor distance, that is as good as the extreme patch set. Once I have such a patch set, it should be good for creating great profiles on that printer for any future paper with a lot less paper waste.

But it really depends on how well the profiler s/w handles patch sets with "holes."

Anyone ever try this?
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 608
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #1 on: November 12, 2017, 02:15:49 am »

This will create a patch set with "holes", but holes that don't provide much useful info anyway. The question is: can the profiling software handle these?
I'd imagine that many can. ArgyllCMS certainly.
Quote
If it is, I will expand the set to 10,000 or so patches and apply the same removal process. Hopefully that yields a much smaller patch set, with a lower maximum nearest-neighbor distance, that is as good as the extreme patch set.
...
Anyone ever try this?
Well, ArgyllCMS's default ofps algorithm attempts to do the opposite: generate a patch set with well-distributed patches. The default criterion, when a previously generated profile is supplied, is more subtle though: it takes curvature into account, increasing the density of patches where the device response is more curved and reducing it where it is flatter. There is also a (tweakable) bias towards more patches near neutral, mimicking the way CIE DE94/2000/DIN99 compensate for the increased visibility of near-neutral inaccuracy.
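(Purely as a toy illustration of the curvature idea above, and not how ofps actually works: a one-dimensional Python sketch that places more samples where a response curves more.)

Code: [Select]
import numpy as np

def curvature_weighted_samples(response, n_samples=32):
    """Place n_samples device values in [0, 1], denser where the response has a
    larger |second derivative|, with a floor so flat regions still get patches."""
    x = np.linspace(0.0, 1.0, 1001)
    y = response(x)
    curvature = np.abs(np.gradient(np.gradient(y, x), x))
    weight = curvature + 0.05 * curvature.max()
    cdf = np.cumsum(weight)
    cdf /= cdf[-1]
    # equal steps in cumulative weight become unequal steps in device space
    return np.interp(np.linspace(0.0, 1.0, n_samples), cdf, x)

print(np.round(curvature_weighted_samples(lambda v: v ** 2.2), 3))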
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #2 on: November 12, 2017, 03:19:09 am »

Ethan Hansen published very useful research on optimal patch set sizes (including their underlying configuration) in this forum. I tested the 1877-patch set against the 2033-patch set (the former "optimal", the latter less so according to his information) and determined that he was largely correct. Instead of starting from scratch re-inventing patch sets, which may or may not be useful, I decided the most practical and time-efficient approach would be to test what's out there and, if satisfactory, just use it. But then again, you may come up with something better in some respect; it's hard to know without trying, so if you do, I look forward to your results. To answer your question: no, I haven't tried re-configuring patch sets, but I remain interested in results from others who are doing so.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Alan Goldhammer

  • Sr. Member
  • ****
  • Offline
  • Posts: 4344
    • A Goldhammer Photography
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #3 on: November 12, 2017, 07:46:12 am »

I'd imagine that many can. ArgyllCMS certainly.
Well, ArgyllCMS's default ofps algorithm attempts to do the opposite: generate a patch set with well-distributed patches. The default criterion, when a previously generated profile is supplied, is more subtle though: it takes curvature into account, increasing the density of patches where the device response is more curved and reducing it where it is flatter. There is also a (tweakable) bias towards more patches near neutral, mimicking the way CIE DE94/2000/DIN99 compensate for the increased visibility of near-neutral inaccuracy.
My approach with ArgyllCMS is to use a small patch set to generate the first profile (which is generally quite good) and then a slightly larger patch set, incorporating both a B/W patch set and the near-neutral tweak mentioned by Graeme, for the final profile. It takes an extra day to do this (two sets of paper targets to let dry), but the profiles are quite good for my use.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #4 on: November 12, 2017, 02:49:33 pm »

My approach with ArgyllCMS is to use a small patch set to generate the first profile (which is generally quite good) and then a slightly larger patch set, incorporating both a B/W patch set and the near-neutral tweak mentioned by Graeme, for the final profile. It takes an extra day to do this (two sets of paper targets to let dry), but the profiles are quite good for my use.
I suspect this may be effective. There is a great deal of fairly sophisticated stuff going on in Graeme's s/w. I'm also curious about the impact of adjusting the instrument-plus-device uncertainty option he has. The iSis has proven highly repeatable compared to the i1Pro 2 scans I previously did. However, that repeatability has identified printer-related errors on both my 9500 and 9800. The errors are of similar magnitude but differ in nature. The 9800's greatest variation correlates with horizontal position across the paper, but it is quite stable, varying little between consecutive prints or on a first print after a 24-hour lag. The 9500 II, OTOH, is most sensitive to the colors of the other patches in the same row; for instance, printing successive black patches results in a shift in L*, something I've not observed with the 9800. It also exhibits some shift between the first and consecutive prints. But then the 9500 II also has a much less lumpy device response.

So I'm focusing on the 9800 now because it is more repeatable.

As an aside, I've done a bit of testing of drying time characteristics.

"Dry" is a relative term. The ink on the paper carries two volatile components: water and some sort of water-soluble glycol used to suspend the pigments. While a print appears dry immediately, it isn't; measuring it immediately after printing yields an average dE76 of about 1 compared to a print dried for a few days. However, by measuring the paper with a thermal imager with 0.05 C resolution, an estimate can be made of the amount of residual moisture, since to first order the temperature delta is proportional to the evaporation rate. Immediately after printing, the surface is about 2 C below ambient due to evaporative heat transfer. After about 10 minutes the temperature delta drops to 1 C, gradually decreasing over the next 20 minutes to under 0.05 C. The glycol, however, evaporates much more slowly and doesn't show up with the imager.

There is another effect related to humidity: I've noticed small shifts when the paper is dried for days at low humidity (10% RH), with dEs around 0.2 compared to 50% RH. Returning it to normal humidity also results in small color changes back towards those of the 50% samples. This occurs within an hour or so, and one can see the adsorption of water from the air by noting the weight increase on an analytical scale.
Logged

Alan Goldhammer

  • Sr. Member
  • ****
  • Offline
  • Posts: 4344
    • A Goldhammer Photography
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #5 on: November 12, 2017, 04:30:09 pm »

I suspect this may be effective. There is a great deal of fairly sophisticated stuff going on in Graeme's s/w. I'm also curious about the impact of adjusting the instrument-plus-device uncertainty option he has. The iSis has proven highly repeatable compared to the i1Pro 2 scans I previously did.
I'm using an i1Pro to do the readings with the manual chart reader.  I do duplicate readings and use the average utility before running the profile program.  I've always downloaded the readings into Excel and looked for outliers; occasionally there will be one, but overall the relative dEs from the 'profcheck' utility are quite low (I don't have any profile-making or checking software other than ArgyllCMS).  Several months ago I profiled Moab Entrada Natural on my Epson 3880, a paper that I really like.  The average error was 0.18, which is the lowest I've seen.

Quote
As an aside, I've done a bit of testing of drying time characteristics.

"Dry" is a relative term. The ink on the paper carries two volatile components: water and some sort of water-soluble glycol used to suspend the pigments. While a print appears dry immediately, it isn't; measuring it immediately after printing yields an average dE76 of about 1 compared to a print dried for a few days. However, by measuring the paper with a thermal imager with 0.05 C resolution, an estimate can be made of the amount of residual moisture, since to first order the temperature delta is proportional to the evaporation rate. Immediately after printing, the surface is about 2 C below ambient due to evaporative heat transfer. After about 10 minutes the temperature delta drops to 1 C, gradually decreasing over the next 20 minutes to under 0.05 C. The glycol, however, evaporates much more slowly and doesn't show up with the imager.

There is another effect related to humidity: I've noticed small shifts when the paper is dried for days at low humidity (10% RH), with dEs around 0.2 compared to 50% RH. Returning it to normal humidity also results in small color changes back towards those of the 50% samples. This occurs within an hour or so, and one can see the adsorption of water from the air by noting the weight increase on an analytical scale.
The drying characteristics are also paper dependent.  I've not done anything to the extent that you have; I only looked at differences at 6, 24, and 48 hours and was pretty satisfied that 24 hours was just fine for my purposes.

The glycol evaporation rate depends on which glycols are in the ink mixtures, and Epson doesn't give much of a clue here.  A couple of years ago I went back to look at the Material Safety Data Sheets, and if I recall correctly they only said it was a mixture of proprietary glycols.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #6 on: November 12, 2017, 07:03:09 pm »

I'm using an i1Pro to do the readings with the manual chart reader.  I do duplicate readings and use the average utility before running the profile program.  I've always downloaded the readings into Excel and looked for outliers; occasionally there will be one, but overall the relative dEs from the 'profcheck' utility are quite low (I don't have any profile-making or checking software other than ArgyllCMS).  Several months ago I profiled Moab Entrada Natural on my Epson 3880, a paper that I really like.  The average error was 0.18, which is the lowest I've seen.
I just ran profcheck on the Argyll profile and it reported exactly the same dE00 as the data shown in the histograms; they are calculated the same way. Don't rely on those numbers too much, though: they only describe the accuracy of the AtoB table. While that's the most accurate table, the BtoA tables are the ones used for printing, and they tend not to be as good. That's why I run an additional check: printing an in-gamut image set in Abs. Col. and measuring it with a spectro for match stats.

What I did with the i1Pro 2 was similar: multiple scans tended to identify the outliers. Oddly, the i1iSis tends not to, because its outliers repeat and can be quite large, even though the average dE00 between sequential scans is almost always <= 0.05. Careful positioning on the patches with an i1Pro 2 confirms that the outlier is real.  Because the i1iSis reads the same location on the patches to within about 0.25 mm, outliers are "sticky" and tend to remain across multiple scans more than they do with the i1Pro 2.
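A sketch of that duplicate-scan comparison (dE76 between two reads rather than dE00, and hypothetical array names):

Code: [Select]
import numpy as np

def flag_read_outliers(scan_a, scan_b, threshold=1.0):
    """scan_a, scan_b: (N, 3) Lab arrays from two reads of the same chart.
    Returns indices of patches whose read-to-read difference is suspicious."""
    de = np.linalg.norm(np.asarray(scan_a) - np.asarray(scan_b), axis=1)
    print(f"mean dE between reads: {de.mean():.3f}  max: {de.max():.2f}")
    return np.where(de > threshold)[0]

# suspects = flag_read_outliers(lab_read_1, lab_read_2)  # re-measure or drop these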

One of the tricks I used with the i1Pro was to make the patches slightly taller (10 mm) and then make 2 or 3 scans, shifting the alignment ruler to the top, center, and bottom of the patches. There is no way to do this with the i1iSis, as it tracks the center of the patch regardless of height. So to identify outliers you have to print at least 2 charts, or one chart with duplicate colors.

Sticky outliers aside, the i1Pro 2 is not as consistent as the i1iSis! I would be lucky to get repeat reads of charts within 0.2 dE00 average with the Pro. That's both bad and good: the bad part is instrument variation from the tungsten lamp; the good part is that you are reading different parts of the patches, so you can identify outliers and average results effectively without duplicating. A mixed blessing, but really both are excellent instruments.

As for the i1iSis, changing the patch width from 6 mm to 12 mm reduces the size of the outliers, but I find it better to just use additional patches of the same color and check their differences to find outliers. This applies most to the 9800 because of its horizontal position affecting patch color.
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #7 on: November 13, 2017, 02:44:07 am »

Then again, dE values of 0.2 will be undetectable to the naked eye. There is a limit to how fussy and precise one needs to be about this stuff, and that limit depends on the nature of the photo, the colours being measured and whether there are close comparisons at hand when viewing the version of primary interest.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #8 on: November 13, 2017, 10:34:41 am »

Then again, dE values of 0.2 will be undetectable to the naked eye. There is a limit to how fussy and precise one needs to be about this stuff, and that limit depends on the nature of the photo, the colours being measured and whether there are close comparisons at hand when viewing the version of primary interest.
Certainly true. The outliers do need to be located, though, because a single, large enough outlier can create visible ripples in a gradient if that gradient happens to be in the printed image.  Still, it's quite rare to see that.

I probably should make clear that much of this is just trying to get the most out of my setup for producing specialized charts. That comes from some prior work setting up QA processes for lenses and imager subassemblies, where optimizing buys margin and improves process control.

Pretty much no in-gamut prints I've made in quite a long time from patch sets of 1,500 or more can be visually distinguished from each other without instrumentation.

Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #9 on: November 15, 2017, 11:07:35 pm »

Trimming patches with small dEs was a bust. The trimmed sets worked in Argyll but not in i1Profiler, which just says it can't make a profile. And even in Argyll the benefits were minimal and only appeared with about 10% pruning.

So I examined the problem more closely. It turns out the 9800 has a very different problem than the 9500: its color variation is correlated with the patch's location on the paper, and it is color dependent. By printing a large number of duplicate patches of each color and then examining their statistics, it turns out that the colors exhibiting the largest variation are the more saturated greens and, to a lesser degree, the oranges.

Specifically, out of a set of 70 distributed colors, each repeated 13 times in random locations, the mean dE00 of the worst-case color, a fairly saturated green, was 0.38. The median color's set of 13 patches, however, came in at 0.17, and 90% of the 70 color sets had a mean dE00 of 0.26 or less.
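Per-color statistics of that sort can be computed with something like the following sketch (dE76 from each color's own average, purely illustrative; the figures above are dE00):

Code: [Select]
import numpy as np

def per_color_spread(reads):
    """reads: (n_colors, n_repeats, 3) Lab array of repeated measurements."""
    centers = reads.mean(axis=1, keepdims=True)       # per-color average Lab
    de = np.linalg.norm(reads - centers, axis=2)      # (n_colors, n_repeats)
    per_color_mean = de.mean(axis=1)
    return {"worst": per_color_mean.max(),
            "median": np.median(per_color_mean),
            "90th pct": np.percentile(per_color_mean, 90)}

# e.g. stats = per_color_spread(measured_lab.reshape(70, 13, 3))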

To mitigate this, one can create a patch set where the colors exhibiting the greatest variation are repeated N times, as required to reduce the variance of their mean to roughly the level of, say, just over the median color set. For instance, 3 duplicates of colors closest to the worst-case 10% of color sets, and 2 duplicates of colors in the next 20%, should significantly reduce the errors that get baked into a profile.
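Since the variance of an average of N reads falls roughly as 1/N, one rough way to pick the duplicate count is the square of a color's spread relative to the spread you are willing to accept (treating the mean repeat dE as a crude proxy for sigma; a sketch, not the rule used above):

Code: [Select]
import numpy as np

def duplicates_needed(color_spread, target_spread=0.17):
    """Copies per color so that its averaged value approaches target_spread."""
    n = np.ceil((np.asarray(color_spread) / target_spread) ** 2)
    return np.clip(n.astype(int), 1, None)

print(duplicates_needed([0.38, 0.26, 0.17, 0.10]))   # -> [5 3 1 1]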

These numbers may seem small, but they are average dE00s; the variation across the 13 repeats of a color typically maxes out around 3 times larger. So baking fewer errors into the profile, which then compound with the unavoidable intrinsic error when printing, seems desirable.

Another approach could be just printing two sets of charts with the colors scrambled differently; then, after drying and scanning them, identifying the deviant patches and printing a third set of just those, perhaps duplicated once again, and averaging the whole lot. Simple, but a lot of paper.
« Last Edit: November 15, 2017, 11:11:44 pm by Doug Gray »
Logged

GrahamBy

  • Sr. Member
  • ****
  • Offline
  • Posts: 1813
    • Some of my photos
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #10 on: November 16, 2017, 07:31:00 am »

I wonder if there might be some value in doing some L*a*b*-space smoothing?

My experience of this is in a completely different domain: software for automatic engine tuning, where you measure exhaust-gas oxygen and use it to apply corrections to a 2-d map of injector duration as a function of rpm and throttle opening. Because of the measurement errors, you can end up with some pretty implausible maps, but these can be improved if you apply a Gaussian smooth to the corrections (which are already averaged for each cell... ideally you could imagine a regression that would take into account the number of observations and their variance, but I didn't go there for various reasons).

One problem is knowing what is measurement variance and what is real structure, since if the original map is "rough", a perfect set of corrections must also be rough... I had the luxury of iteration and damping (i.e. making only half the suggested change at each step) without the cost and hassle of making repeated prints.
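(In Python terms, the two ingredients described here, smoothing the per-cell corrections and applying only part of the suggested change, might look like this toy sketch.)

Code: [Select]
from scipy.ndimage import gaussian_filter

def damped_smooth_update(current_map, raw_corrections, sigma=1.0, damping=0.5):
    """current_map, raw_corrections: 2-D arrays (e.g. rpm x throttle cells)."""
    smoothed = gaussian_filter(raw_corrections, sigma=sigma)
    return current_map + damping * smoothed   # take only half the suggested step

# new_map = damped_smooth_update(injector_map, measured_corrections)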
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 608
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #11 on: November 16, 2017, 07:56:06 pm »

I wonder if there might be some value in doing some L*a*b*-space smoothing?
I can't speak for how Profile Maker works (the news that it failed when points were deleted from the chart is interesting), but ArgyllCMS uses a scattered data interpolation routine to turn the patch data into an A2B table (plus per-channel curves, which is a tweak with a different story). There is a smoothing magic number involved, and the "-r" parameter allows indirect control of that smoothness/accuracy balance. It's this smoothness balance that prevents "over-fitting". Using a scattered data routine makes it fairly agnostic about the chart point distribution.

[ I had a clever idea a while ago for auto tuning this smoothing factor in a gamut-local way, and got as far as proving to myself that it would probably work, but turning that sketch into working code is still on my "nice to do sometime" list, as it is not simple, and would mean revisiting every use of the scattered fitting function in the whole code base. ]
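(Not Argyll's code either, but the general scattered-data-plus-smoothing idea can be toyed with in a few lines: smoothing=0 reproduces the noisy samples exactly, i.e. over-fits, while larger values trade accuracy for smoothness, loosely analogous to what -r controls.)

Code: [Select]
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
rgb = rng.uniform(0, 1, (500, 3))                # scattered device-space samples
lab = np.column_stack([100 * rgb.mean(axis=1),   # crude stand-in "device response"
                       50 * (rgb[:, 0] - rgb[:, 1]),
                       50 * (rgb[:, 1] - rgb[:, 2])])
lab += rng.normal(0, 0.3, lab.shape)             # measurement noise

fit = RBFInterpolator(rgb, lab, kernel='thin_plate_spline', smoothing=1.0)
grid = np.stack(np.meshgrid(*[np.linspace(0, 1, 9)] * 3, indexing='ij'),
                axis=-1).reshape(-1, 3)
a2b_like = fit(grid)                             # coarse A2B-style lookup table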
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #12 on: November 17, 2017, 12:36:45 am »

I can't speak for how Profile Maker works (the news that it failed when points were deleted from the chart is interesting), but ArgyllCMS uses a scattered data interpolation routine to turn the patch data into an A2B table (plus per-channel curves, which is a tweak with a different story). There is a smoothing magic number involved, and the "-r" parameter allows indirect control of that smoothness/accuracy balance. It's this smoothness balance that prevents "over-fitting". Using a scattered data routine makes it fairly agnostic about the chart point distribution.

[ I had a clever idea a while ago for auto tuning this smoothing factor in a gamut-local way, and got as far as proving to myself that it would probably work, but turning that sketch into working code is still on my "nice to do sometime" list, as it is not simple, and would mean revisiting every use of the scattered fitting function in the whole code base. ]
Hi Graeme,
Actually, it was i1Profiler that failed; I didn't try PM5.

Indeed, I played a bit with the -r option. Reducing it from the default of 0.5% to 0.1% grouped the A2B errors much more tightly, though still with a wider distribution than i1Profiler's.

I think i1Profiler by default overfits in the broader gamut but appears to treat the near neutrals, which were at twice the density, differently. Overfitting isn't good if there is significant variance in the individual color patches, as is the case for the 9800. However, I got good results with a 2,500 patch set duplicated, re-randomized, and then averaged. This reduced the impact of overfitting, which would be more problematic with a 5,000 patch set without duplicates.

Further pursuing the notion of increasing patch duplication only for colors that exhibit high variance looks very promising. I did a near-color check and, indeed, patches with high color variance also had closest neighbors with high variance.

Therefore, using a patch set where the expected variance for a particular color is reduced by N-fold duplication, chosen by a simple lookup of the closest color I have good statistics on, should improve the overall color variance without duplicating all the patches, the majority of which have only a small variance.
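A sketch of that lookup (hypothetical names; the duplicate counts could come from repeat statistics like those earlier in the thread):

Code: [Select]
import numpy as np
from scipy.spatial import cKDTree

def expand_with_duplicates(target_lab, known_lab, known_dup_count):
    """target_lab: (N, 3) patches for the new chart; known_lab: (M, 3) colors with
    repeat statistics; known_dup_count: (M,) copies wanted for each known color."""
    target_lab = np.asarray(target_lab)
    _, nearest = cKDTree(known_lab).query(target_lab)   # closest characterized color
    counts = np.asarray(known_dup_count)[nearest]
    return np.repeat(target_lab, counts, axis=0)        # only noisy colors get extra copies

# chart_lab = expand_with_duplicates(new_target_lab, measured_lab, dup_counts)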
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 608
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #13 on: November 17, 2017, 02:27:00 am »

Therefore, using a patch set where the expected variance for a particular color is reduced by N-fold duplication, chosen by a simple lookup of the closest color I have good statistics on, should improve the overall color variance without duplicating all the patches, the majority of which have only a small variance.
I've always wondered whether such an approach is optimal though - if the smoothing is at the appropriate level, it should be better to use the extra patch budget for more unique samples, thereby getting more detail and averaging out variance at the same time.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #14 on: November 17, 2017, 10:55:32 am »

I've always wondered whether such an approach is optimal though - if the smoothing is at the appropriate level, it should be better to use the extra patch budget for more unique samples, thereby getting more detail and averaging out variance at the same time.
Hi Graeme,
I think you're right, and that should provide somewhat better results, at least if the profiling software can handle the variation and smooth reasonably. It might be worse if the s/w overfits like i1Profiler seems to do.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #15 on: November 17, 2017, 01:08:47 pm »

I wonder if there might be some value in doing some L*a*b*-space smoothing?

My experience of this is in a completely different domain: software for automatic engine tuning, where you measure exhaust-gas oxygen and use it to apply corrections to a 2-d map of injector duration as a function of rpm and throttle opening. Because of the measurement errors, you can end up with some pretty implausible maps, but these can be improved if you apply a Gaussian smooth to the corrections (which are already averaged for each cell... ideally you could imagine a regression that would take into account the number of observations and their variance, but I didn't go there for various reasons).

One problem is knowing what is measurement variance and what is real structure, since if the original map is "rough", a perfect set of corrections must also be rough... I had the luxury of iteration and damping (i.e. making only half the suggested change at each step) without the cost and hassle of making repeated prints.

As Graeme's comment notes, the -r option in Argyll does apply degrees of smoothing to the device-space measurements. The issue with printer targets is that there are occasional large outliers in addition to the bulk of the variation, which is more consistent with a normal distribution. The outliers are more than 5 standard deviations out and appear to be mostly due to variations in the paper surface itself rather than the printer; for instance, with Canson rag matte the outliers are close to non-existent. I'm not sure why they are more frequent on glossy, and worse on semigloss/luster, but they are.  I'm also not sure whether just excluding them at some nominal threshold or doing a Gaussian smoothing would be better. Interesting idea, though.
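For the "exclude at some nominal threshold" option, one simple sketch is a robust cut based on the median and MAD of the duplicate-read dEs, so the outliers don't inflate their own threshold:

Code: [Select]
import numpy as np

def outlier_mask(de, k=5.0):
    """de: per-patch dE between duplicate reads. True where a patch looks like an outlier."""
    med = np.median(de)
    mad = np.median(np.abs(de - med))
    sigma = 1.4826 * mad            # MAD -> sigma equivalent for a normal distribution
    return de > med + k * sigma

# bad = outlier_mask(de_between_charts)   # reprint or exclude those patches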
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #16 on: November 17, 2017, 01:45:31 pm »

Whatever patch set you test, be sure to also include Bill Atkinson's 1728-patch target!
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #17 on: November 17, 2017, 01:47:56 pm »

My approach with ArgyllCMS is to use a small patch set to generate the first profile (which is generally quite good) and then a slightly larger patch set, incorporating both a B/W patch set and the near-neutral tweak mentioned by Graeme, for the final profile. It takes an extra day to do this (two sets of paper targets to let dry), but the profiles are quite good for my use.
I do the same in i1Profiler, where my optimization target is nearly twice the size of the original (Bill Atkinson's 1728-patch target). And I've seen it produce better results on some papers/printers, both in the soft proof and in the output, in some areas of color space. It's like the later patches 'fill in the holes' in some cases, not all. So I just always do a two-step profile process.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2197
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #18 on: November 18, 2017, 11:12:20 pm »

I created a patch set with the following (a rough generation sketch follows below):
1. 15x15x15 grid points, evenly spaced.
2. 30x3x3 neutrals and near neutrals.
3. About 2,400 duplicate patches (2 or more copies) of colors with high variance.
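A rough sketch of generating the base set (RGB 0-255; reading "30x3x3" as 30 gray levels with small assumed +/- offsets on two channels):

Code: [Select]
import numpy as np

axis15 = np.linspace(0, 255, 15)
grid = np.stack(np.meshgrid(axis15, axis15, axis15, indexing='ij'),
                axis=-1).reshape(-1, 3)              # 15*15*15 = 3375 patches

grays = np.linspace(0, 255, 30)                      # 30 steps along the neutral axis
offsets = np.array([-8, 0, 8])                       # small offsets around neutral (assumed)
neutrals = np.array([[g + dr, g + dg, g] for g in grays
                     for dr in offsets for dg in offsets])
neutrals = np.clip(neutrals, 0, 255)                 # 30*3*3 = 270 near-neutral patches

patches = np.vstack([grid, neutrals])
print(len(patches), "base patches before adding duplicates of high-variance colors")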

Measured average dE00 against the reference Lab colors: 0.40.

This compares to about 0.45 for the duplicated/averaged 2,553-patch set. Both require 6 US letter-size iSis target prints.

Included is a histogram comparing the reference Lab colors to the measured prints of the same, as well as the CGATS file, optimized for the Epson 9800, suitable for loading into i1Profiler. It uses up 6 pages but goes pretty fast with M2 on an iSis.

The largest dE00 values are associated with Lab values of L*=95, a*=0, b*=0, which is just outside the paper/printer gamut: the profile is trying to bring a* and b* in to 0 while the paper's white point is already at L*=95. Also, the Epson 9800 response is lumpy, with a particularly strong deviation in b* from -1 to -4 centered around L*=90, from roughly L*=85 to 93, and this exceeds what the 3D LUTs can deal with. Outside of those regions the average dE00 is about 0.38.

This lumpy anomaly in the 9800 is intrinsic. It can be seen by running the A2B tables of the canned profiles from 2005, so it's not a function of wear and tear. What's weird is that Advanced B&W mode is quite well behaved, transitioning smoothly along the entire range from 0 to 255 with only gradual, smallish changes in a* and b*. Apparently it's just something intrinsic to how the microweave color algorithms are done.

While it's not smaller, it does perform better than anything else I've made from targets of 6 US letter pages or fewer.

« Last Edit: November 19, 2017, 12:17:40 am by Doug Gray »
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 608
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: Starting a search for a smaller, but more optimal, profile patch set
« Reply #19 on: November 19, 2017, 03:55:53 am »

This lumpy anomaly in the 9800 is intrinsic. It can be seen by running the A2B tables of the canned profiles from 2005, so it's not a function of wear and tear. What's weird is that Advanced B&W mode is quite well behaved, transitioning smoothly along the entire range from 0 to 255 with only gradual, smallish changes in a* and b*. Apparently it's just something intrinsic to how the microweave color algorithms are done.
Presumably you're driving it in RGB, so such weirdness could just be a limitation of the profiling used to create its RGB->CMYKLCLMLLK lookup table.
Logged