
Author Topic: The ultimate Linearization: my take  (Read 23328 times)

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
The ultimate Linearization: my take
« on: May 08, 2013, 03:37:13 am »

Hello to all the friends here.

NOTE: This topic was previously posted by me on the dpreview printing forum, but I decided to mirror it here because this place may be better suited to this kind of discussion.

I bought an Epson R3000 this winter, and a ColorMunki Photo in January.

After some test prints in B&W on several top-quality fine art papers, I realized that even using the special ABW mode the printed output was not perfectly linear, that there was no easy way to correct this, and, in addition, no easy way to get a perfectly neutral (or sepia/split) tone either.

Using ICC profiles and printing in color was no better, even if potentially more flexible.

I tried the QTR linearization tool (following Keith Cooper's very useful article on the Northlight Images site), but I found that solution a little cumbersome and the results not very good or consistent. In addition, I wanted something able to neutralize the gray tone (and/or produce sepia/split tones) in an automated, consistent, repeatable way.

So I started thinking about how to find a "final" solution to this issue; I learned a LOT of things, began developing a tool for the purpose, and then tested, printed, measured, and learned again and again...

Just to give you a short indication of the insane amount of work involved, here is a list of the things I needed to develop or do to reach the goal:

- creating 4 B&W test strip sets of 18, 34, 52 and 68 patches (1, 2, 3 and 4 strips of 18 patches each, with redundant black and white patches), optimized for the ColorMunki, in 16-bit TIFF.

- printing the first strip, measuring it with X-Rite ColorPicker, and exporting the proprietary X-Rite ".cxf" file of the measured values.

- averaging 6 readings of each strip, taken as 3 forward-backward passes, to minimize measurement errors.

- developing/coding a ".xml" filter to import the ".cxf" file in OpenOffice Calc for data manipulation/graph and ".csv" conversion (ColorPicker for windows does not directly export to ".csv")

- developing/coding some ".m" scripts in Octave (an open source Matlab equivalent) for a complex adaptive multi-step data processing algorithm providing the required linearization curves (including Adobe RGB and sRGB to/from CIELab conversion thanks to Bruce Lindbloom wonderful site).

- learning to create a DeviceLink and/or Abstract ICC profile (Adobe RGB and sRGB supported) based on the previously calculated linearization curves (interpolated up to 4096 points). I developed some custom batch files using ICCxmltool for this task.

- applying the linearization ICC profile and printing the second strip. A DeviceLink ICC profile is like a Curves adjustment on steroids, with each RGB channel calculated automatically from the strip measurements. No more empirical trial and error.

- repeating to improve the linearization if needed (yes, this is a multi-step-capable system). Typically the first pass (18 patches) corrects roughly 80%-90% of the non-linearity; a second step (34 or 52 patches) can reach near-perfect results. For reference/paranoid results you can add a third step of 68 patches. At each step it is possible to include, if desired, gray-tone neutralization for ICC color prints.
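
For reference, the core of the colorspace math is small. Here is a minimal Octave sketch of the gray-value-to-L* conversion for sRGB, following the formulas published on Bruce Lindbloom's site (my actual scripts also handle Adobe RGB, full color, and the inverse direction):

    % sRGB gray value (0-255) -> CIE L* (for a neutral gray, a* = b* = 0)
    v = 128;                                  % example input value
    c = v / 255;                              % normalize to 0..1
    if c <= 0.04045                           % sRGB piecewise decoding
      y = c / 12.92;
    else
      y = ((c + 0.055) / 1.055) ^ 2.4;
    end
    % for a neutral gray, the relative luminance Y/Yn equals y
    e = 216 / 24389;  k = 24389 / 27;         % exact CIE constants
    if y > e
      L = 116 * y^(1/3) - 16;                 % cube-root branch
    else
      L = k * y;                              % linear branch (L* below 8)
    end
    disp(L)                                   % prints about 53.6 for v = 128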

The great thing about this system is that all the calculations are made in the CIELab colorspace, allowing separate compensation of Lightness and color for every kind of print: ABW B&W and ICC color.

- For B&W prints using ABW and Gray Gamma 2.2 grayscale, you can fully linearize the Lightness (the color toning is not under external control in ABW mode).

- For B&W prints using RGB ICC profiles (custom or canned), you can choose to linearize the Lightness AND, optionally, neutralize the gray tone (or produce a custom/split-tone curve) while still maintaining the Lightness linearization.

- For color prints using RGB ICC profiles (custom or canned), you can choose to linearize the Lightness AND, optionally, neutralize the gray axis too (compensating for paper/profile color casts) while still maintaining the Lightness linearization.

As you can easily imagine, I have spent more than 4 full months learning, coding, testing, re-learning, thinking, recoding and retesting, but today I can share with you some early results of this project.

Attached you can find a jpg with 4 graphs. The paper used in this example is Epson Hot Press Bright 330 Signature Worthy (matte black ink).

The first two graphs relate to an ABW B&W print (neutral, dark). The two graphs in the second row relate to a B&W print made using ICC (the Epson canned profile).

Black shows the Lightness (L*), and green/blue show the a* and b* values. The X axis is gray expressed in L* (from black to white). The Y axis is measured Lightness (0-100, left scale) and -5/+5 for the measured a* and b* (right scale).

- As you can see in "ABW - BEFORE" (first graph, 18 patches) you have a typical dark print, reaching more than -8 L* units error in the middle when compared to the theoretical straight line from black to white (dotted line). Gray tone is not perfect but not too bad. Look at the yellowish tone of the paper too (blue line).

- The "ABW - AFTER linearization" is the results after 2 step of the linearization process (reaching 52 measured patches). The Lightness is perfectly straight now. Gray tone obviously is not affected here and remains near the same (neutral dark).

- The "ICC (canned) - BEFORE" print (second row) is related to the same 18 patches strip as the ABW one, but printed using the canned ICC (relative colorimetric, black point compensation on). There is a slightly weaker black value and some more gray tone deviance, but the overall Lightness non linearity is similar to the ABW print.

- In the "ICC (canned) - AFTER linearization AND tone neutralize" graph you can see the effect after 3 steps (up to 68 patches) of the combined action of Lightness linearization AND gray tone compensation. Here the black point is comparable to the ABW one, the Lightness is perfectly straight and the gray tone is perfectly gray. The roll-off to the paper yellow color is allowed and specifically designed to prevent an abrupt transition from full neutral gray to the yellow paper tone (we cannot change the paper white tone).

Keep in mind that tagging this whole project as "alpha" is not wrong: using the toolchain is currently very complex, because it involves many steps across different applications and custom scripts, plus human supervision. I don't know if it will ever become a single, straightforward application; that would surely require a lot of time, and I don't know whether there would be real-world interest in something like this, apart from some crazy people like me. :)

Let me know what you think; every comment/suggestion/question is welcome.

Thanks for your patience.

Ciao :)
Logged

Ernst Dinkla

  • Sr. Member
  • ****
  • Offline
  • Posts: 4005
Re: The ultimate Linearization: my take
« Reply #1 on: May 08, 2013, 09:51:23 am »

Linear and perceptually correct are different choices. QTR does the latter with its calibration plus profiling.

Ernst, typed on a tablet.
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #2 on: May 08, 2013, 01:03:25 pm »

Hello Ernst,
thanks for your comment.
However, it is not completely clear to me what you really mean.

My linearization is performed in L*, so it is the best way to achieve what you might consider "perceptually correct".

In any case, this is still not the real point.
If a file "orders" a gray patch whose L* value is 50 and, after measuring the actual printed value with an instrument, you get L* = 40, you have an error.
This is simply wrong and has nothing to do with being "perceptually correct".
My linearization does exactly this: if the gray value in the file is L* = X, then you get exactly L* = X in the print (as measured by a spectrophotometer).

Now, please, any other comment is welcome; let me know if I'm missing something, in your opinion.

And let me thank you for your precious "Spectrumviz": I have found it simply the most relevant source of real-world paper information, and it helped me a lot in learning about and choosing papers to test.

Ciao :)


Logged

MHMG

  • Sr. Member
  • ****
  • Offline
  • Posts: 1285
Re: The ultimate Linearization: my take
« Reply #3 on: May 08, 2013, 01:55:31 pm »



Quote from: NeroMetalliko
My linearization is performed in L*, so it is the best way to achieve what you might consider "perceptually correct".
Now, please, any other comment is welcome; let me know if I'm missing something, in your opinion.

You've got the right basic idea, but you are overlooking the perceptually desired gamma = 1 relationship between L* input and L* output that most people prefer to see in a reflection print, particularly in the midtones. All the digital working spaces, i.e., sRGB, Adobe RGB, ProPhoto RGB, and even gamma 2.2 grayscale, cover an L* range of 100: pure white is defined as L* = 100, while pure black is defined as L* = 0. No reflection print can cover that range, so some gamma compression is going to be required. Your choice to linearize L* over the total input/output range means you are forcing the loss of contrast, in a visually linear way, evenly across the highlights, midtones, and shadows. However, most people will view that print as too "flat", i.e., lacking in good overall visual contrast.

Now take a look at Epson's ABW ramp. That ramp keeps the L* input/output gamma for the midtones and highlights closer to gamma = 1.0 by allowing more compression in the shadows, thereby picking up contrast in the midrange and highlights. That's a better starting point for most people. One could also use a gentle "S" curve to hit midtone gamma = 1 and spread the necessary compression over both the shadow and highlight zones. Ultimately, one needs to pick a good starting point, because no one method of compression will be good for all images.

Anyway, for a great print, after applying the chosen global curve you will still have to go back and dodge and burn local areas to help restore the visual appearance of good contrast even further. With the Epson curve, I'd have to concentrate on pulling up shadow detail, but the midtones and highlights would already be in pretty good shape. With your fully linearized ramp, I'd first have to add a correction curve to boost midtone contrast, initially pushing the result closer to where Epson started, or closer to a global S-shape, depending on the image content. No free lunch.

That said, a linearized L* ramp does make a wise starting point when initially linearizing a printer's ink ramps prior to building the output profile. As Ernst noted, some RIPs do use the L* linearization methodology to dial in the printer's initial ink ramps. But the subsequent profiling process then usually adds more "secret sauce" contrast scaling to the perceptual rendering tag, in order to achieve what has been discussed above.
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #4 on: May 08, 2013, 05:05:37 pm »

Hello,
many thanks for your comment; I really appreciated it.

First of all, I want to highlight that being able to share opinions with people like you, Ernst, and many others here who have literally a lifetime of experience and terrific knowledge of these subjects is truly invaluable for me, and it is exactly why I "dared" to post my humble thoughts and early test results here.

After reading your answer I also better understand Ernst's reply, and I want to apologize for an imprecise example I used there, when I stated that my linearization produces a measured L* value identical to the one contained in the image file. This is obviously not the case, as you have correctly pointed out.

So let me clarify a few things, because I think it could be useful.
In a printed image we are always limited to a minimum L* value, mainly related to the maximum amount of ink the paper coating can accept before saturating, and to a maximum L* value given by the paper white. In my example these limits were around 13 and 96.
A full image L* ramp spans 0 to 100 linearly, and we could, for the moment, consider this perceptually correct enough.

When I started this project I made some initial assumptions/decisions that let me set things up (from zero) in the simplest way, in order to develop the system and see whether it worked correctly or not.
As noted, the first approach chosen was a simple "linear" L* rule constrained between the minimum and maximum L* values obtained from the given paper/printer/ink combination (13 and 96 in the example).
This is my current "take"; it is what I showed in my "early results", and it was mainly intended to demonstrate that the system works, because I was able to reach my target linear L* rule perfectly (a target we can obviously discuss), both in ABW and in ICC color prints.
In addition, I achieved the really interesting result of a fully neutral (or custom-colored) gray tone, compensating for the a*, b* paper/ink deviations too, which is not bad at all (and I don't know whether QTR, for example, does this automatically).

So, the project is clearly in the "alpha" stage: all this stuff is based on free, open-source tools, and the spirit of my post was to share the work I have done and the first results I got, confirming that I can control these variables in a repeatable (measurement-based) way.
For what it's worth, I can assure you that, in my humble opinion, the prints made with this first simple "linear" L* target match the displayed image on a calibrated screen far better overall, that in the shadow details there is simply no contest, and that the potential "flat" side effect you rightly pointed out is perfectly acceptable given the shadow advantage (no free lunch). But here we are talking about subjective taste, of course.

The good news is that I can develop a differently shaped, "S"-type L* target rule too. I could call it "quasi-absolute": it would aim to mimic a 45-degree slope in the midtones while slightly compressing the deep shadows and highlights to roll off toward the real achievable ink/paper limits. This could be really interesting and preferred by some; see the sketch below for one possible construction.
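
To give an idea of the shape, here is one possible construction in Octave (just a sketch: the knee positions are illustrative placeholders, not a final rule):

    % "quasi-absolute" target: ~unit slope in the midtones, smooth roll-off
    % into the achievable paper limits (Lmin/Lmax come from measurement)
    Lmin = 13;  Lmax = 96;                  % example paper/ink limits
    Lin  = 0:100;                           % theoretical input L* ramp
    knots_in  = [0    25  75  100];         % illustrative knee positions
    knots_out = [Lmin 25  75  Lmax];        % ~45-degree segment between knees
    Lout = interp1(knots_in, knots_out, Lin, 'pchip');  % smooth and monotone
    plot(Lin, Lout, Lin, Lin, ':');         % target vs. ideal 45-degree line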

Let me know what you think. In addition, if someone is willing to try some of my linearized ICC profiles, I can consider providing a first 18-patch test strip file to be printed and measured; by sending me the exported "cxf" measurement file, you would get back a measurement graph and a correction profile for personal evaluation.

Any further comment/suggestion is always welcome,
thanks again.

Ciao :)
Logged

Alan Goldhammer

  • Sr. Member
  • ****
  • Offline
  • Posts: 4344
    • A Goldhammer Photography
Re: The ultimate Linearization: my take
« Reply #5 on: May 08, 2013, 06:34:13 pm »

I use the QTR profile-making tool for my Epson 3880 and have produced ABW profiles that give a linear result as per the spectral measurements of a 21-step B/W wedge. Going to a 51-step B/W patch set did not produce any measurable difference in my testing. I use ArgyllCMS to generate and read the patches and the QTR tool for making the profiles. Of course, such profiles can only be used on Windows machines these days.
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #6 on: May 09, 2013, 06:22:30 am »

Quote from: Alan Goldhammer
I use the QTR profile-making tool for my Epson 3880 and have produced ABW profiles that give a linear result as per the spectral measurements of a 21-step B/W wedge. Going to a 51-step B/W patch set did not produce any measurable difference in my testing. I use ArgyllCMS to generate and read the patches and the QTR tool for making the profiles.

Hello,
thanks for your comment; it gives me the opportunity to explain a few additional things.

The workflow you describe is exactly the one I started with at first, before deciding to jump to my self-developed solution.
Let me note that, this way, you get results similar to my "linear" L* rule discussed above, which I still consider a very reasonable choice and a very good overall tradeoff.

That said, the main problem with creating and reading the strips with Argyll is that, if you want a particular custom set of patches (adding some redundant patches, or getting L*-equally-spaced wedges, for example), you still have to manually edit the generated TIFF files and modify some other text configuration files accordingly.

In addition, the 16-bit TIFF files generated by Argyll and then edited (or created) with other popular editing software (like Photoshop) are not truly full 16-bit.
I don't want to flame or dwell too much on this point, but if you look at them with a hex editor you will find some small deviations from the theoretically perfect 0-65535 full range. It is not noticeable in a real-world scenario, of course, but it is something I wanted to avoid and address anyway.

For these reasons I wrote a script in Octave to mathematically generate/requantize perfect 16-bit values in my strip files, as sketched below.
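
The idea of the generation script is trivial; here is a simplified Octave sketch (the layout, resolution tagging and redundant patches of my real strips are omitted, and writing 16-bit TIFFs requires an Octave image backend built with 16-bit support):

    % mathematically exact 16-bit gray ramp: n patches from 0 to 65535
    n     = 18;
    vals  = round(linspace(0, 65535, n));           % exact full-range endpoints
    px    = 198;                                    % ~14 mm per patch at 360 ppi
    strip = uint16(kron(vals, ones(px)));           % one row of square patches
    imwrite(repmat(strip, [1 1 3]), 'strip18.tif'); % save as 16-bit RGB TIFF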

This way I have the flexibility to choose any custom set and, even if it is less convenient, I ended up preferring to read the strips with ColorPicker: I can perform multiple readings in forward-backward sequence to average out measurement errors, I can repeat/combine the final palettes if something goes wrong during measurement, and I can export a single cxf file for data processing without additional manual edits.

If you consider that, in my experience, Argyll cannot create correction profiles based only on gray patches, forcing me to perform the math and the profile creation externally in any case, you can easily understand why I abandoned this route.

Regarding QTR, let me note that I think it is a truly glorious piece of software: full of powerful features, highly effective, and almost mandatory if you want to go the Piezography route, for example.
Kudos to Roy Harrington for his wonderful work.

Unfortunately, after a few tests and a short mail exchange with Roy, there were some little things I didn't like about printing through QTR instead of the Epson driver.
Even if you can tag this as pure marketing vaporware, QTR did not seem to me to support the full maximum resolution of the R3000.
I experienced some little quirks with the trays, and the process of getting a fully profiled setup is not so immediate, including the curve setup, the ink-limit values, and so on.
In addition, I prefer to have a single workflow for printing color and B&W, using my preferred application and the Epson drivers, with the most interchangeable, limited set of correction profiles for both (color and B&W), generated in the least empirical, most automated, measurement-based way possible.

This led me to abandon QTR as a printer driver (but this is only my personal choice) and eventually to consider only its linearization-ICC-making tool on its own.
Please correct me if I'm wrong here, but after some tests, as far as I can tell, the QTR RGB ICC-making tool alone is based on the same linear L* approach I have implemented in my scripts (described above).

But there was another thing I wanted to address: gray-tone compensation.
Again, correct me if I'm wrong, but I don't think the QTR tool can do this.
Even though it creates an RGB ICC profile (because of compatibility problems between the latest CS6 and grayscale profiles), it is a Lightness-only correction profile, because it was mainly intended to work with grayscale images.
If you print in ABW there is no further possibility to neutralize the gray tone: you can only dial in the tone settings somehow and apply an empirical trial-and-error procedure to taste.

But if you print in ICC RGB color this becomes really possible, and my system creates a fully neutral gray-tone correction automatically and precisely. This gray-tone correction is something I can manage independently of the Lightness (linear L*) correction, integrating the two or keeping them separate as preferred.

It is useful to add that, in the same way, I can create a "toning" profile with any mathematically generated tone curve you can imagine; a toy example follows.
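
As a toy example of what I mean, here is a sketch in Octave of a split-tone target with warm shadows fading to neutral highlights (the numbers are purely illustrative):

    % target a*/b* as a function of L*: the linearization then aims the
    % print at these values instead of a* = b* = 0
    L = 0:100;
    w = (1 - L/100) .^ 2;            % weight: strongest in shadows, 0 at white
    a_target = 4 * w;                % slight red component
    b_target = 8 * w;                % stronger yellow component -> warm shadows
    plot(L, a_target, L, b_target);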

The nice thing is that my correction curves, being DeviceLink ICC profiles, can even compensate external chemical-based printing services, which usually accept only jpg files with an embedded sRGB color space.
If they allow you to print in "native" mode (without automatic corrections), in 1 or 2 steps you can create a correction curve for Lightness and/or color cast perfectly matched to the external service.

An additional (under development) feature is the ability to handle the "over-inking" issue automatically, by shifting the correction profile accordingly. This is something the QTR tool alone simply cannot handle (it gives you an error if the measured L* ramp is not monotonic); to address it you have to use the QTR driver and set all the ink limits correctly, a procedure that is neither trivial nor immediate.

Not to mention that, using my scripts in Octave, all the math is done in floating point, with virtually zero error across the colorspace conversions (Adobe RGB, sRGB and CIELab) and with very sophisticated interpolation algorithms.

Just out of curiosity, try opening a full grayscale ramp of patches in Adobe RGB in Photoshop CS6, converting it to Lab and then back to Adobe RGB. Compare the histograms before and after, and look at what happens to the patches 13,13,13, 14,14,14 and 15,15,15 during this theoretically neutral trip. :)
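
For those curious about why this happens, here is a numeric sketch in Octave of that round trip for a neutral gray. It assumes Photoshop's 8-bit Lab encoding (L stored as round(2.55 * L*)) and a gamma 2.2 slope limit of 32, which is my reading of the Adobe behaviour discussed further down in this thread; for a gray, only the L channel matters:

    g = 2.2;  s = 32;  x0 = s^(-1/(g-1));      % linear/gamma crossover (~0.0557)
    dec = @(c) (c <  x0)   .* (c/s) + (c >= x0)   .* (c.^g);     % decode RGB
    enc = @(y) (y <  x0/s) .* (y*s) + (y >= x0/s) .* (y.^(1/g)); % encode RGB
    Lst = @(y) (y >  216/24389) .* (116*y.^(1/3) - 16) + ...
               (y <= 216/24389) .* (24389/27 * y);               % Y -> L*
    Yof = @(L) (L > 8) .* ((L+16)/116).^3 + (L <= 8) .* (L*27/24389); % L* -> Y
    for v = 13:15
      L8 = round(2.55 * Lst(dec(v/255)));      % quantized 8-bit Lab L value
      v2 = round(255 * enc(Yof(L8/2.55)));     % back to 8-bit Adobe RGB
      printf('%d -> Lab L = %d -> %d\n', v, L8, v2);
    end

With these assumptions, 13 and 14 land on the same Lab value and both come back as 14, while 15 comes back as 16: the theoretically neutral trip is not lossless in 8 bits.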

For all these reasons I slowly proceeded with my humble work, whose early results I showed in my first post.
I know there is a lot left to do, mainly in simplifying the process (which is really important and not yet fully achieved). I will maybe even try to develop an "S"-shaped L* target, as suggested.
 
Sorry for the long post.
Any further comment/suggestion is still welcome and appreciated.
Many thanks again.

Ciao :)
Logged

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: The ultimate Linearization: my take
« Reply #7 on: May 09, 2013, 09:43:49 am »

Hi, thanks for sharing. Very interesting work. So if my source file for printing has L* values from 0 to 13, and from 96 to 100, based on your Lmin of 13 and Lmax of 96, will the source values get clipped, using your first proposed method?
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #8 on: May 09, 2013, 11:08:36 am »

Hi, thanks for sharing. Very interesting work. So if my source file for printing has L* values from 0 to 13, and from 96 to 100, based on your Lmin of 13 and Lmax of 96, will the source values get clipped, using your first proposed method?

Hello,
the Lmin and Lmax values (13 and 96 in this example) are not something we can choose or avoid: they are the real-world limits of each paper/ink combination.
Every time you print an image file with the full (0-100) L* range, you are forced to accept a "compression" according to the media/ink limits.
So the only option you have is to choose how to handle this range compression.
A "linear" approach like the one I showed means an equally distributed (perceptually uniform in L*) compression of 0-100 over the entire available range (13-96).

In this case the best answer I can give you is that the 0-13 portion of the source image, as visible in the linearized graph, is remapped linearly over an approximate 13-24 range; the 96-100 range is remapped approximately to 93-96. The central zone is compressed linearly accordingly. No clipping occurs.
If you choose a different (not fully linear) remapping approach, you get different results based on the shape of the remapping curve, with advantages and disadvantages in some zones versus others, as highlighted in the posts above.
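
In Octave terms, the whole "linear" remap is a one-liner (numbers from the example above):

    Lmin = 13;  Lmax = 96;                        % measured paper/ink limits
    remap = @(L) Lmin + (Lmax - Lmin) * L / 100;  % linear L* compression
    remap([0 13 50 96 100])                       % -> 13.0 23.8 54.5 92.7 96.0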

I hope it will help.

Ciao :)
Logged

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: The ultimate Linearization: my take
« Reply #9 on: May 09, 2013, 11:24:53 am »

Quote from: NeroMetalliko
Hello,
the Lmin and Lmax values (13 and 96 in this example) are not something we can choose or avoid: they are the real-world limits of each paper/ink combination.
Every time you print an image file with the full (0-100) L* range, you are forced to accept a "compression" according to the media/ink limits.
So the only option you have is to choose how to handle this range compression.
A "linear" approach like the one I showed means an equally distributed (perceptually uniform in L*) compression of 0-100 over the entire available range (13-96).

Thanks for the clarification. So L* 0 to 100 will be linearly compressed into the dynamic range of the paper-ink combination, with equal steps for every unit increase in luminosity. That seems to be exactly the approach Eric Chan took for his Epson ABW profiles. BTW I liked it so much I made linearization curves to apply to images in Photoshop before printing, based on the same approach.

Quote
If you "order" from a file to print a gray patch which L* value is 50 and then, after measuring the real printed value with an instrument you get L* = 40 you have an error.

But that would mean that L* 50 in the source will not be exactly L* 50 as measured from the print... though it will be close. Is that correct?
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #10 on: May 09, 2013, 11:30:08 am »

But that would mean that L* 50 in the source will not be exactly L* 50 as measured from the print... though it will be close. Is that correct?

Yes, you are right; sorry, that was an unfortunate example I wrote in my first response to Ernst.
I have already apologized in my second answer for the confusion that statement may have created.

Ciao :)
Logged

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: The ultimate Linearization: my take
« Reply #11 on: May 09, 2013, 12:17:59 pm »

Quote from: NeroMetalliko
Yes, you are right; sorry, that was an unfortunate example I wrote in my first response to Ernst.
I have already apologized in my second answer for the confusion that statement may have created.

Ciao :)

No worries, I just wished to clarify the issue, because your reply wasn't clear to me. This is great work.

Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #12 on: May 09, 2013, 12:59:25 pm »

Quote from: samueljohnchia
Thanks for the clarification. So L* 0 to 100 will be linearly compressed into the dynamic range of the paper-ink combination, with equal steps for every unit increase in luminosity. That seems to be exactly the approach Eric Chan took for his Epson ABW profiles. BTW I liked it so much I made linearization curves to apply to images in Photoshop before printing, based on the same approach.

Thanks, this is nice to know.

In fact I agree with you:
every time I compare a "linearized" print with the uncorrected one, including ABW prints, I like the result very much.
The big difference is really noticeable between the on-screen image, in which you see a lot of shadow detail (on a gamma 2.2, 120 cd/m2 calibrated monitor, which, yes, may be a little too bright for print comparison, I know), and the two prints.
The non-linearized print always has a slightly dark overall appearance, and the shadows are often too dark and compressed compared to the screen.
The linearized print, even if not exactly like the screen (a print being by nature a reflected-light image), is definitely closer to the screen appearance, with subtly increased but clearly discernible shadow detail, and without appearing "flat" at all (in my opinion, at least). The total range from black to white is obviously the same, but the linearized print is more balanced and pleasant to my taste.
I have compared Piezography prints too and, from this point of view, I dare say I prefer my linearized appearance.
In any case, my ICC profiles could even be used in a Piezography workflow (which I don't have), if desired, with all the advantages of 7 black inks.

In conclusion, I prefer the linear approach also because I want the best possible match between image and print: it lets me edit the image on screen to get the look I want, knowing that the print will be as accurate as possible, without having to deal with more or less hidden secret sauces applied in the printing process to make it more or less captivating, which I cannot fully preview or control before launching the print.

It is funny that I started all of this exactly in the manner you describe: by manually building compensation curves (in an empirical, trial-and-error way) in the image-editing software and applying them before printing the image.

Ciao :)
« Last Edit: May 09, 2013, 01:02:06 pm by NeroMetalliko »
Logged

Ernst Dinkla

  • Sr. Member
  • ****
  • Offline
  • Posts: 4005
Re: The ultimate Linearization: my take
« Reply #13 on: May 13, 2013, 04:53:11 am »

Sorry, I was on a trip and not able to write a more specific message; I shouldn't have made such a cryptic comment at that point. Meanwhile the thread has developed into far more than what I could have written, and has added the nuances to what is an interesting approach in any case. A tool like that will be welcome. I wonder whether it could be linked to the HP Z's spectrophotometer with appropriate targets; I had been working on that for QTR's tools but dropped it at some point.

--
With kind regards, Ernst

http://www.pigment-print.com/spectralplots/spectrumviz_1.htm
December 2012, 500+ inkjet media white spectral plots.
Logged

MHMG

  • Sr. Member
  • ****
  • Offline
  • Posts: 1285
Re: The ultimate Linearization: my take
« Reply #14 on: May 13, 2013, 08:25:30 am »

I should also have mentioned that relative rendering with black point compensation (Photoshop's BPC feature) essentially produces the relative L* linearization from Lmin to Lmax that the OP prefers, while relative rendering without BPC follows a gamma = 1 ramp normalized to Lmax but causes clipping in the shadows when the digital image records lower L* values than the print media can achieve. Perceptual rendering intents from different vendors tend to lift the midtones to a gamma = 1 ramp, thus rolling off in the shadows and sometimes in the highlights as well.

Hence, the reason all of these rendering intents are offered is that each one is well suited to some images but not always the best choice for others. And, of course, when approaching a unique monochrome workflow where ICC profiles no longer apply, one does need to find a way to recreate these various rendering intents via a different calibration method.
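
In idealized form, the two relative-intent behaviours look like this (an Octave sketch of the L* mappings described above; real CMMs and profiles add their own refinements):

    % idealized tone mappings for a paper with measured Lmin/Lmax limits
    Lmin = 13;  Lmax = 96;
    L = 0:100;
    rel_bpc   = Lmin + (Lmax - Lmin) * L / 100;  % relative + BPC: linear remap
    rel_nobpc = max(Lmin, L * Lmax / 100);       % relative, no BPC: shadow clipping
    plot(L, rel_bpc, L, rel_nobpc, L, L, ':');   % vs. the ideal 45-degree ramp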

Nice work.

best,
Mark
http://www.aardenburg-imaging.com
« Last Edit: May 13, 2013, 08:32:23 am by MHMG »
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #15 on: May 13, 2013, 10:59:25 am »

A tool like that will be welcome. I wonder whether it could be linked to the HP Z's spectrophotometer with appropriate targets.

Hello Ernst,
many thanks for the comment.
Apart from the time needed to do it, there are virtually no limits on adapting the tool to anyone's needs, at least in this alpha stage.

I created an importer for cxf files because that was the only choice I had with the ColorMunki, but if your spectrophotometer can export a file in any format (csv or plain text, for example) containing a table of measured L*, a*, b* values for a given target, that's all you need; see the sketch below.
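
For example, if the instrument exports plain text, feeding the tool is straightforward (a sketch; the file names are hypothetical, and the averaging mirrors the forward-backward scheme from my first post):

    % each export file: one row per patch, columns L*, a*, b*
    fwd = csvread('pass_forward.csv');     % hypothetical file names
    bwd = csvread('pass_backward.csv');
    lab = (fwd + flipud(bwd)) / 2;         % the backward pass is read in
                                           % reverse patch order, so flip it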

Currently my strip sets are optimized for the ColorMunki (14x14 mm patch size) at 360 ppi (Epson), in 16-bit, arranged to fit well on A4 (long side) and A3/A3+ (short side) paper sheets. This was decided because it matched my typical needs.

It is obviously possible to create strips arranged differently, optimized for other printers (300 or 600 ppi for Canon, 360 or 720 ppi for Epson; I don't know about HP) and with patch sizes matching the requirements of other spectrophotometers.
Optimizing the strips for the best practical fit on the most-used paper sizes is another thing to keep in mind.

So, all that is needed to test the system is:
- a spectrophotometer able to export a file of measured L*, a*, b* values (cxf for X-Rite, csv or text for others)
- a strip set defined by print resolution (ppi), patch size (mm) and, optionally, paper size for practical arrangement
- a printing environment able to handle 16-bit TIFF and to apply DeviceLink ICC profiles, like Photoshop (an 8-bit workflow is possible but not recommended)
- the use of Adobe RGB or sRGB as the working colorspace (I have not yet developed the scripts for ProPhoto support)

Let me know if you have additional questions.

Ciao :)
 
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #16 on: May 13, 2013, 11:43:03 am »

I should also have mentioned that relative rendering with black point compensation (Photoshop's BPC feature) essentially produces the relative L* linearization from Lmin to Lmax that the OP prefers, while relative rendering without BPC follows a gamma = 1 ramp normalized to Lmax but causes clipping in the shadows when the digital image records lower L* values than the print media can achieve. Perceptual rendering intents from different vendors tend to lift the midtones to a gamma = 1 ramp, thus rolling off in the shadows and sometimes in the highlights as well. Hence, the reason all of these rendering intents are offered is that each one is well suited to some images but not always the best choice for others. And, of course, when approaching a unique monochrome workflow where ICC profiles no longer apply, one does need to find a way to recreate these various rendering intents via a different calibration method.

Hello Mark,
many thanks for your contribution, I really appreciate it.

Feel free to correct me if you think I have it wrong: in my opinion, the linearized L* approach I have developed so far is comparable in concept to a "perceptual" rendering intent, in the sense that the whole theoretical 0-100 L* image range is evenly compressed into the real output range allowed by the paper/ink combination.
A potential "S"-shaped approach would be comparable in concept to a "relative colorimetric" intent, trying to match the true 45-degree slope of the theoretical 0-100 ramp in the midtones while somewhat clipping the shadows and highlights to fit the achievable paper/ink values.

I agree with you that being able to mimic those two basic behaviours in a B&W route where ICC profiles are not allowed (as with ABW) is indeed useful; it is something I realized when I decided to start this work.

And please let me add that your Aardenburg database is literally priceless: I consider it simply the best real-world lightfastness information source available. Many thanks for all your efforts and the truly excellent work.

Ciao :)
« Last Edit: May 13, 2013, 11:45:25 am by NeroMetalliko »
Logged

samueljohnchia

  • Sr. Member
  • ****
  • Offline
  • Posts: 498
Re: The ultimate Linearization: my take
« Reply #17 on: May 14, 2013, 05:18:43 am »

I should also have mentioned that relative rendering with black point compensation (Photoshop's BPC feature) essentially produces the relative L* linearization from Lmin to Lmax that the OP prefers, while relative rendering without BPC follows a gamma = 1 ramp normalized to Lmax but causes clipping in the shadows when the digital image records lower L* values than the print media can achieve.

Mark, with all due respect, I don't think Adobe's black point compensation can replicate NeroMetalliko's linearization technique. The algorithm, if I haven't interpreted it wrongly, has to estimate the destination black point, and it uses a different mapping method for 0 ≤ L* ≤ 8 than for L* > 8. Nero's method depends on actual measurements of how the grayscale steps are reproduced when printing, whether with a profile for color images or, without one, using the printer driver's monochrome mode. BPC is actually less linear overall when it is on, compared to when it is off; take a look at GamutVision's graphs.

Moreover, BPC can cause unwanted shifts in color.

If anything, Nero's work could replace BPC completely and achieve equally spaced separation throughout the tonal scale. Whether it can avoid the large chroma shifts caused by BPC remains to be tested.

I feel it is better to start with a profile that separates equally spaced grayscale tones uniformly from darkest to lightest. If one prefers additional contrast for that extra punch, or wants to maintain "perceptual contrast", profile-building tools already offer an option to increase the profile's contrast, or one can apply a Curves adjustment in Photoshop/Lightroom. Starting the other way around, it would be very difficult to tease out separation without some workarounds and iterative measurements/corrections.
« Last Edit: May 14, 2013, 09:01:20 pm by samueljohnchia »
Logged

NeroMetalliko

  • Jr. Member
  • **
  • Offline Offline
  • Posts: 78
Re: The ultimate Linearization: my take
« Reply #18 on: May 17, 2013, 04:48:39 am »

Hello,
I'm happy to share with you some new UPDATES regarding the Linearization development.

 - I have developed my ColorMunki-optimized 16-bit strip set (14x14 mm patch size) at 300 ppi too, in order to better match the Canon/HP native driver resolutions
 - I have derived a reduced-size strip set (10x10 mm patch size) to better fit the capabilities of the i1 Pro spectrophotometer, still 16-bit, for both 360 and 300 ppi
Please note that I don't own an i1 Pro: these derived strips currently differ only in patch size and are arranged the same way as the ColorMunki ones, so their true practical efficacy in real-world use is still to be tested (and probably improved).
 
 - I have added ProPhoto RGB to the supported colorspaces! :)
OK, I don't use ProPhoto RGB as my working space, but I know a lot of professional users have adopted it, so I think it is a good thing to support (currently supported colorspaces: sRGB, Adobe RGB and ProPhoto RGB).

Tech Note:
All my colorspace conversion scripts match the Adobe ACE CMM slope limit, which Adobe's CMM applies to ALL gamma-based colorspace conversions, such as Adobe RGB and ProPhoto RGB. Other CMMs could use different slope limits (introduced for numerical reasons). My implementation is currently tested and verified to match Adobe's, as per the Adobe RGB specification
http://www.adobe.com/digitalimag/pdfs/AdobeRGB1998.pdf
(see Annex C, page 20).

This slope limit is confirmed to be applied by Adobe ACE (so Photoshop and Camera Raw) to all gamma-modeled colorspaces (such as ProPhoto). I found confirmation in the Adobe forums, per Chris Cox's statements here:
http://forums.adobe.com/message/1657509

Finally, I numerically tested and verified it in Octave during my internal development tests.

Note that the affected (slope-limited) ranges are RGB 0-14 for Adobe RGB and RGB 0-4 for ProPhoto RGB; any small differences potentially arising from a different CMM are confined to those ranges. A sketch of the slope-limited curve follows below.

sRGB already includes a linear segment in its gamma curve inside the profile by definition, so it is not affected.
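
In code form, the slope-limited decoding looks like this (an Octave sketch assuming a slope limit of 32, which reproduces the affected ranges quoted above; the spec and the CMM remain authoritative, not this snippet):

    % gamma decoding with a slope limit: near black, the pure power curve
    % is replaced by a straight line of slope 1/s
    g  = 2.2;  s = 32;                        % Adobe RGB case
    x0 = s ^ (-1 / (g - 1));                  % crossover where c/s = c^g
    c  = (0:255) / 255;
    y  = (c < x0) .* (c / s) + (c >= x0) .* (c .^ g);
    % crossover in 8-bit terms: 255 * 32^(-1/1.2) = ~14 for Adobe RGB (g = 2.2)
    % and 255 * 32^(-1/0.8) = ~3.3 for ProPhoto RGB (g = 1.8)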


That said, in order to test the ProPhoto linearization I chose to try a new and "hot" paper, Ilford Gold Cotton Smooth 330 matte, using the Ilford-provided ICC profile (CS6 manages colors; relative colorimetric intent; black point compensation ON).

I used an Epson R3000 (with original Epson inks) and Matte Black ink.
Ultra Smooth Fine Art Paper was the media setting.
All settings were at maximum quality (my standard setup for all papers):
SuperPhoto - 5760x1440 dpi; MicroWeave ON; High Speed OFF; Edge Smoothing OFF; Finest Detail N/A (grayed out)

Attached you can find the final results after a 3-step linearization procedure (18 -> 34 -> 68 -> 68 wedges), including a "neutral smooth" gray-tone correction.

Please read my first post at the top of this thread to learn how to interpret the graphs.

As a side note:
there is a thread started by Ernst regarding this paper and its LL review here on the forum:
http://www.luminous-landscape.com/forum/index.php?topic=78352.0
I can confirm that the measured Black, White and Dmax values are nearly the same as the ones I published in that thread (which were derived from a previous linearization I did in Adobe RGB).

White: L*=96.36; a*=-0.19 ; b*=2.38
Black: L*=17.95; a*=1.87 ; b*=0.94
Dmax=1.58; Dmin=0.05


Ok, that's all for now,
I hope it will be appreciated.
Any comment/suggestion/question is welcome.

Ciao :)
« Last Edit: May 18, 2013, 01:23:59 am by NeroMetalliko »
Logged

Ernst Dinkla

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 4005
Re: The ultimate Linearization: my take
« Reply #19 on: May 17, 2013, 06:59:29 am »

Sounds good, all the more with the iterative increase of target patches toward the end.

For B&W (and most color) I would stay with Adobe RGB and Gamma 2.2. Images that need/can use ProPhoto for their color are not the ones I associate with B&W output.

I'll have to read more later on.


--
With kind regards, Ernst

http://www.pigment-print.com/spectralplots/spectrumviz_1.htm
December 2012, 500+ inkjet media white spectral plots.
Logged