Luminous Landscape Forum

Raw & Post Processing, Printing => Colour Management => Topic started by: Stephen Ray on May 02, 2016, 01:17:08 am

Title: The terms "linearization" vs "calibration"
Post by: Stephen Ray on May 02, 2016, 01:17:08 am
I’m researching printer calibration. As I pore over books, manuals, and guidelines from various manufacturers of RIP software and printing machines, I’m finding many seem to use the terms “calibration” and “linearization” interchangeably.

Do the experts use these terms as entirely interchangeable nowadays or are they differentiated somehow?

Comments, clarifications anyone?
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 02, 2016, 04:12:00 am
I’m researching printer calibration. As I pore over books, manuals, and guidelines from various manufacturers of RIP software and printing machines, I’m finding many seem to use the terms “calibration” and “linearization” interchangeably.

Do the experts use these terms as entirely interchangeable nowadays or are they differentiated somehow?

Comments, clarifications anyone?

Hi Stephen,

Calibration establishes the relationship between input and output. It then allows one to convert from one (the reference or standard) to the other (the unit under test), or vice versa.

The meaning of linearization, on the other hand, depends on context, IMHO (I'm not an expert though). It modifies the relationship found by calibration so that the intervals become uniform (numerically, or perceptually, or ...). That requires a modification of the weighting parameters, especially when interpolating between known reference values.

Cheers,
Bart

P.S. Although analogies are often shaky, maybe something like this will clarify.
Input  Output
0 <--> 0
1 <--> 1
2 <--> 4
3 <--> 9
That reveals a power of 2 calibration. To linearize the output, one takes the square root of the output.
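Bart's analogy can be sketched in a few lines of Python (the table and the square-root inverse are exactly as in the post; this is only an illustration of the idea, not a real calibration workflow):

```python
import math

# The calibration table from the analogy: output = input ** 2
calibration = {0: 0, 1: 1, 2: 4, 3: 9}

def linearize(measured):
    """Invert the power-of-2 response by taking the square root."""
    return math.sqrt(measured)

# After linearization, each output maps back onto its input:
for inp, out in calibration.items():
    assert linearize(out) == inp
```

The calibration step discovers the relationship (squaring); the linearization step applies its inverse so that equal input steps produce equal corrected-output steps.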
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 02, 2016, 09:15:14 am
I’m finding many seem to use the terms “calibration” and “linearization” interchangeably.

"Calibration" has some agreement as to what it means - adjusting the behavior of the device. "Linearisation" is typically more of a hand-waving term for "making the device be better behaved". Most print processes use screening to convert continuous tone images into discrete dots of ink, and screening introduces dot gain. Some dot gain can be beneficial in giving the printing process a more perceptual response (that's where the Mac's 1.8 display gamma came from), but too much (because of a high DPI and/or lots of dot overlap) can make it very perceptually non-linear, and hard to control. So applying suitable per-channel calibration curves "to linearize the response" can tame this.
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 09:58:42 am
Calibration might contain a step that one would call linearization and it might not. Some devices can be linearized, some can't (due to the driver sending data to the printer, an Epson with the Epson driver comes to mind and yes, it could use better linearization). Calibration is an attempt to put the device in a desirable condition so it's a broad term where linearization is far more specific.
Title: Re: The terms "linearization" vs "calibration"
Post by: Slobodan Blagojevic on May 02, 2016, 10:08:50 am
Does anyone speak English around here?
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 02, 2016, 10:14:25 am
I’m researching printer calibration. As I pore over books, manuals, and guidelines from various manufacturers of RIP software and printing machines, I’m finding many seem to use the terms “calibration” and “linearization” interchangeably.

Do the experts use these terms as entirely interchangeable nowadays or are they differentiated somehow?

Comments, clarifications anyone?

Based on a very succinct explanation of linearization given by Harald Johnson on page 134 of his "Mastering Digital Printing", linearization is a particular type of printer calibration designed to align input values and output values by controlling the density of ink the printer lays on the paper. With perfect alignment, the printer lays down exactly the ink density required: it would output 25%, 50%, 75% Black if the Input (the reference values) were each 25%, 50%, 75% Black (i.e. a straight line relationship between the value required and the value printed). Dot gain and printer performance drifting over time can cause these output values to diverge from their input or required values, in which case the printer's behaviour is non-linear. Linearization is the process that brings them back into convergence.

As explained by Amadou Diallo on page 93 of his "Mastering Digital Black and White", linearization aims to eliminate abrupt shifts in density and produce a smooth gradation of tones by adjusting the printer's ink output to compensate for non-linear printer behaviour. We linearize the printer to assure its basic linear behaviour before creating specific paper/printer profiles, since the profiling of papers should start from a printer that can lay down the right amount of ink to align input with output values.
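Mark's 25%/50%/75% example can be illustrated with a small sketch. The "printed" values below are made up for illustration (a midtone-heavy response typical of dot gain); a real linearization measures a printed wedge with an instrument:

```python
# Hypothetical measured response of one ink channel (requested % -> printed %),
# showing dot gain: midtones print darker than requested.
requested = [0, 25, 50, 75, 100]
printed   = [0, 38, 64, 84, 100]

def correction(target):
    """Invert the measured curve by linear interpolation: find the input
    value that actually prints the requested tone."""
    for i in range(len(printed) - 1):
        if printed[i] <= target <= printed[i + 1]:
            t = (target - printed[i]) / (printed[i + 1] - printed[i])
            return requested[i] + t * (requested[i + 1] - requested[i])
    return float(target)
```

With these numbers, to get a true 50% tone the linearized channel sends roughly 36.5% instead of 50%, restoring the straight-line relationship between the value required and the value printed.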
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 10:41:27 am
Does anyone speak English around here?
That question begs another: do you understand English? IF one of the answers isn't clear, ask for clarity and specifics. Otherwise, your post adds nothing to a conversation that, so far, only you appear to have difficulty understanding.


http://digitaldog.net/files/LinearityandGamma.pdf
Title: Re: The terms "linearization" vs "calibration"
Post by: Slobodan Blagojevic on May 02, 2016, 11:59:07 am
That question begs another; do you understand English?...

If I don't, that would be understandable.

Before Mark posted a response in simple English (thanks, Mark):

"...it would output 25%, 50%, 75% Black if the Input (the reference values) were each 25%, 50%, 75% Black (i.e. a straight line relationship between the value required and the value printed)."


I heard the following "definitions":

- "...uniform intervals (in the context of numerical, or perceptual, or ...)... a modification in the weighting parameters, especially when interpolating between known reference values."

- "...a hand-waving term for 'making the device be better behaved'."

- "...Some devices can be linearized, some can't."
- Wow, deep! Never mind that it fails to define the term.


Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 12:06:46 pm
If I don't, that would be understandable.
So your post was to troll?
It is pretty obvious by your original post, you don't understand this topic, fine.
It is pretty obvious by your original post, you could have asked for clarification from any of the posts previous; you didn't.
It is pretty obvious by your original post, you were not posting to attempt your typical OT (and sometimes comical) side.
So what isn't obvious is why you posted what you did. But I guess we can thank Mark for clearing up your confusions and can move on.
Quote
I heard the following "definitions":


- "...uniform intervals (in the context of numerical, or perceptual, or ...)... a modification in the weighting parameters, especially when interpolating between known reference values."


- "...a hand-waving term for 'making the device be better behaved'."


- "...Some devices can be linearized, some can't." - Wow, deep! Never mind that it fails to define the term.
You 'heard' but apparently didn't understand, which is fine; worse, you didn't ask for clarification. That would have been the more sensible thing to do.


At this point, a quote from this film seems appropriate:

"What is your major malfunction, numbnuts? Didn't Mommy and Daddy show you enough attention when you were a child?"   ;D
-Gunnery Sergeant Hartman, Full Metal Jacket (1987)
Title: Re: The terms "linearization" vs "calibration"
Post by: Slobodan Blagojevic on May 02, 2016, 12:13:38 pm
...you didn't ask for clarification...

If those were "definitions," I shudder to think what "clarifications" might sound like.

On a related note, I had to learn English. You, however, were born with it, but seem to squander that gift.

Since you brought mommy and daddy into the discussion, I'll mention grandma too: "If you can't explain something to your grandma, you do not understand it fully yourself."
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 12:17:07 pm
If those were "definitions," I shudder to think what "clarifications" might sound like.
Since, for some reason, you were unable to ask for any, let alone specific clarifications, that's now moot.
But you got your 15 seconds of attention today so hopefully we can move on, or you can attempt to further aid the OP as everyone else here attempted but you.

Title: Re: The terms "linearization" vs "calibration"
Post by: Slobodan Blagojevic on May 02, 2016, 12:28:17 pm
... or you can attempt to further aid the OP as everyone else here attempted but you.

Posts in a forum are not there just as a dialog between the OP and experts, they serve to educate others as well. I was simply trying to learn something about the topic myself. The first step in learning is a proper definition of terms used.
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 12:38:10 pm
Posts in a forum are not there just as a dialog between the OP and experts, they serve to educate others as well. I was simply trying to learn something about the topic myself. The first step in learning is a proper definition of terms used.
IF this were a forum on languages, your original post might have some weight to it. It isn't such a forum. Your original post was simply a call for attention. If you wish to be educated, ask a question that has some relevance to the topic. You didn't.


He who asks a question is a fool for five minutes; he who does not ask a question remains a fool forever. -Chinese Proverb
So what's your question about THIS topic?
Title: Re: The terms "linearization" vs "calibration"
Post by: Slobodan Blagojevic on May 02, 2016, 12:54:10 pm
... So what's your question about THIS topic?

My question is the same as the OP's:

Quote
terms “calibration” and “linearization”... are they differentiated somehow?

That differentiation can only be achieved if each term is defined separately and then the difference pointed out. Plain English helps the rest of us understand experts.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 02, 2016, 01:08:24 pm
Calibration might contain a step that one would call linearization and it might not. Some devices can be linearized, some can't (due to the driver sending data to the printer, an Epson with the Epson driver comes to mind and yes, it could use better linearization). Calibration is an attempt to put the device in a desirable condition so it's a broad term where linearization is far more specific.

From Wikipedia:

The formal definition of calibration by the International Bureau of Weights and Measures is the following: "Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication."

Does that clarify the definition?  :)
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 01:08:44 pm
My question is the same as the OP's:
That differentiation can only be achieved if each term is defined separately and then the difference pointed out. Plain English helps the rest of us understand experts.


Geez, like pulling teeth to get you to ask a simple question.... As for the rest of us, you mean (thus far) you! Best speak for yourself.



Calibration is a process whereby a device is placed into some predetermined condition or behavior. An example is when a user calibrates a display. We want to set some parameters that can be controlled, such as the display's white point, brightness, contrast, and TRC gamma. This calibration of the device creates a condition that can be standardized and is repeatable, allowing similar devices in multiple locations to behave the same way.

Since a device such as a display is in a state of flux over time, it is necessary to recalibrate it; this returns the device to the original aim point. Calibration is something we need to do on a regular basis, particularly with devices such as displays, which vary considerably over time. In conventional photographic processes, this would be similar to running control strips and adjusting chemistry to achieve target densities.

If we understand that a profile describes the behavior of a device, we should be aware that if the device changes its behavior, the profile is no longer valid. Calibration returns the device to the original condition, maintaining the integrity of the profile. If the device can no longer reach the original aim point, a new aim point within the capability of the device needs to be created.


Linearization
Some profile-building products offer an optional step they usually call prelinearization. The idea is to output a target with a small subset of patches, usually CMY and K in various steps from light to very dark as seen in Fig. 6-9. This linearization target is measured and the software uses that information to produce an optimal target for profiling based upon the information gathered from the linearization step. This means that the profiling process becomes a two-step procedure. Some devices are quite nonlinear in how they reproduce color. The linearization step can aid in producing quality profiles from such devices. At the very least, linearization allows a good profile to be generated with an initially smaller number of patches. This is possible because the secondary patches generated from the linearization data are better optimized for the printer.
Some products support this prelinearization process and some do not. Products that do support prelinearization usually ask the user if they wish to use this option. If you know the printing is very nonlinear, it's worth testing. On the accompanying CD is a TIFF file called InkDensityTest.tif, which can be useful for visually evaluating if the output device is nonlinear. If most of the steps block up in color and don't show good tonal separation, the printer is exhibiting this nonlinear behavior. You can try a different driver setting if available. However, this can often alter the resulting color gamut of the printer. Alternatively, you can try conducting a prelinearization process, but even this can't produce miracles. A profile can do only so much to overcome the limitations of poor printer behavior (see the sidebar, "Printer Drivers and Their Effect on Quality and Color Gamut").
Some packages also offer a postlinearization process. This can be quite useful for devices that change their behavior. After an initial profile is built, the user prints a special postlinearization target. This target is measured and the profile can be updated to account for some changes in the output device. This postlinearization process can’t work miracles on output devices whose behavior has greatly shifted. In such a case, building a new profile from a standard target is the only viable solution. However, for some devices where the drift of a printer isn’t large, this postlinearization process is a useful feature. Some users running commercial labs print out a postlinearization target every day and update their existing profile to account for slight device drift.



And DO try to read the materials provided already!
http://digitaldog.net/files/LinearityandGamma.pdf
Paragraph 2 for example.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 02, 2016, 02:58:52 pm
It's probably worth noting, for those with a technical background, that "linearization" in printing (or monitors, for that matter) has nothing to do with "linear" as used in math/engineering. Visual perception is highly non-linear, so linearization in photography-related topics typically refers to perceptually linear. This is particularly true for printer profiles, where the lookup table conversions are based on L*a*b* (Lab hereafter). Taking just the neutral scale (L), this is actually a power-of-3 curve tacked onto a small linear (power of 1) deep-shadow front end. For instance, L=50 has 18% of the reflected photon density of an L=100 patch. It would be 12.5% from a pure power of 3, but is 18% because of the uplift from the linear front end of the L curve.

It turns out that, for B&W, the closer a printer driver matches the L response curve the more accurate the 3D LUTs that are inside profiles can be made. So achieving perceptual linearization coincides with producing good profiles.
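Doug's numbers above can be checked against the standard CIE conversion from L* to relative luminance; a minimal sketch (the constants are the standard CIE ones, with the linear segment taking over below L* of about 8):

```python
def L_to_Y(L):
    """Relative luminance Y (Y = 1.0 at L* = 100) from CIE L*."""
    if L > 8:
        return ((L + 16) / 116) ** 3   # cube law above the knee
    return L * 27 / 24389              # linear segment in the deep shadows

# L* = 50 reflects about 18.4% of the light of an L* = 100 patch,
# versus 12.5% for a pure power-of-3 curve (0.5 ** 3 = 0.125).
```

Evaluating `L_to_Y(50)` gives roughly 0.184, matching the 18% figure in the post.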
Title: Re: The terms "linearization" vs "calibration"
Post by: Lundberg02 on May 02, 2016, 07:17:56 pm
What the hell is wrong with the top grey scale in the bottom image. I have never seen "fluting" that bad. Are the boundaries being differentiated by a high pass?
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 02, 2016, 07:20:08 pm
What the hell is wrong with the top grey scale in the bottom image. I have never seen "fluting" that bad. Are the boundaries being differentiated by a high pass?
Not at all visible on this end; NEC SpectraView PA272W, fully high bit display path.
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 02, 2016, 08:08:41 pm
A perfect alignment would exactly print the same ink density as that required. For example, it would output 25%, 50%, 75% Black if the Input (the reference values) were each 25%, 50%, 75% Black (i.e. a straight line relationship between the value required and the value printed).
A direct relationship between input value and resulting reflectance value is never likely to be desirable, simply because it's perceptually non-linear. A more linear relationship between input values and perceptually linear results is typically a better goal. This translates to something like linear density or L* response.
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 02, 2016, 08:24:27 pm
Fair enough.
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 03, 2016, 04:33:17 am
From Wikipedia:

The formal definition of calibration by the International Bureau of Weights and Measures is the following: "Operation that, under specified conditions, in a first step, establishes a relation between the quantity values with measurement uncertainties provided by measurement standards and corresponding indications with associated measurement uncertainties (of the calibrated instrument or secondary standard) and, in a second step, uses this information to establish a relation for obtaining a measurement result from an indication."

Does that clarify the definition?  :)

Hi Bill,

You're right, that doesn't really help. I knew about it, and therefore decided not to refer to it.

Maybe the ISO definition of the term "Calibration" is a bit more helpful (although essentially it says the same as the above definition).

This is from the standard: ISO/DIS 13655(en)
Graphic technology — Spectral measurement and colorimetric computation for graphic arts images


Quote
3.3 calibration
set of operations that establish, under specified conditions, the relationship between values of quantities indicated by a measuring instrument or measuring system, or values represented by a material measure or a reference material, and the corresponding values realized by standards
[SOURCE: ISO/IEC Guide 99 (VIM)]
Note 1 to entry: Contrary to a common usage, calibration is not the process of adjusting a measurement system such that it produces values that are believed to be correct. Calibration permits either the assignment of values of measurands to the indications (creating a reference table) or the decision to reset or adjust the device. Following the resetting or adjusting of the device, a calibration needs to be verified to ensure that the new device setting(s) provide indications within the accepted values.

Emphasis in bold is mine.

So Calibration establishes the relationship between the measured values and the values as realised by standards. It's a reference table between input and output in specific units.

'Realised by standards' means calculated by methods described in standards (say, a definition of the method of determining luminance, or chrominance, or density, and the units or coordinate system to use for those). Sometimes new (non-SI) units are created, like SFR (Spatial Frequency Response) instead of MTF, because they are not 100% identical, or different ISO speed ratings.

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: hjulenissen on May 03, 2016, 04:46:49 am
I’m researching printer calibration. As I pore over books, manuals, and guidelines from various manufacturers of RIP software and printing machines, I’m finding many seem to use the terms “calibration” and “linearization” interchangeably.

Do the experts use these terms as entirely interchangeable nowadays or are they differentiated somehow?

Comments, clarifications anyone?
I don't know this stuff.

Linearization sounds like something one would do to each channel individually (e.g. "gamma" or "gamma compensation"), while calibration might be the inter-channel part or the whole thing?

-h
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 03, 2016, 07:54:35 am
So Calibration establishes the relationship between the measured values and the values as realised by standards. It's a reference table between input and output in specific units.
That would be a non-typical usage, more aligned with profiling, although it might come down to whether it is per-channel or the whole color response.

The usual meaning is the second one in your quote "or the decision to reset or adjust the device".
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 03, 2016, 09:52:33 am
That would be a non-typical usage, more aligned with profiling, although it might come down to whether it is per-channel or the whole color response.

The usual meaning is the second one in your quote "or the decision to reset or adjust the device".

Hi Graeme,

I interpret the latter as an act based on the measurement result being far enough out of tolerance to warrant an adjustment: a decision. This could be the result of measuring a reference tile which has known characteristics (which change very little over time).

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 03, 2016, 12:45:36 pm
So Calibration establishes the relationship between the measured values and the values as realised by standards. It's a reference table between input and output in specific units.
Not buying that either. We calibrate a display right? If there's a 'standard', it is as likely to work as not work because of the huge variables between display technologies and the goal of the end user (match a print? Under what illuminant?). I'd prefer to say calibration is the attempt to put a device into a desirable, well behaved condition, and to be able to recalibrate to that aim point should the device alter its behavior. Whatever the 'standard' for a display, unless it's a WP of CCT 5150K, 150cd/m2, 300:1 contrast ratio, that 'standard' will not produce the display behavior I specifically need to match a print next to my display, using a specific viewing condition. And I'll often alter the CCT and contrast ratio based on the papers I'm viewing, again, next to that display. That's why one size doesn't fit all needs, and why SpectraView and the very few similar products that allow us to build multiple calibrations for multiple needs are so darn useful.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 03, 2016, 01:37:49 pm
Not buying that either. We calibrate a display right? If there's a 'standard', it is as likely to work as not work because of the huge variables between display technologies, the goal of the end user (match a print? Under what illuminant?). I'd prefer to say calibration is the attempt to put a device into a desirable, well behaved condition and to be able to recalibrate to that aim point should the device alter it's behavior. Whatever the 'standard' for a display, unless it's a WP of CCT 5150K, 150cd/m2, 300:1 contrast ratio, that 'standard' will not produce the display behavior I specifically need to match a print next to my display! Using a specific viewing condition. And I'll often alter the CCT and contrast ratio based on the papers I'm viewing, again, next to that display. That's why one size doesn't fit all needs, and why SpectraView and the very few, similar products that allow us to build multiple calibrations for multiple needs are so darn useful.

Yep. This is the beauty of products like SpectraView and ColorNavigator. They combine a kind of calibration (i.e., setting up the monitor so that it is optimized for profiling) with the actual profiling. Lesser software/hardware requires setting up max luminance, CCT, and black point behavior manually or semi-manually, then profiling to that. That's adequate if you are only using one setup, one light booth setting, and papers that are sufficiently close to each other in white point and OB behavior.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 03, 2016, 02:03:31 pm
Hi Bill,

You're right, that doesn't really help. I knew about it, and therefore decided not to refer to it.

Maybe the ISO definition of the term "Calibration" is a bit more helpful (although essentially it says the same as the above definition).

This is from the standard: ISO/DIS 13655(en)
Graphic technology — Spectral measurement and colorimetric computation for graphic arts images


Emphasis in bold is mine.

So Calibration establishes the relationship between the measured values and the values as realised by standards. It's a reference table between input and output in specific units.

'Realised by standards' means, calculated by methods described in standards, say a definition of the method of determining luminance, or chrominance, or density, and the units or coordinate system to use for those). Sometimes new (non-SI) units are created, like SFR (Spatial Frequency Response) instead of MTF, because they are not 100% identical, or different ISO speed ratings.

Cheers,
Bart

Bart,

The wording of these definitions is dense, and this indicates that the process of calibration is more complicated than a lay person might think. The process involves comparison with a standard, which might be a primary standard or a secondary standard that is traceable to the primary standard. The decision making process involves consideration of precision and accuracy of both the instrument being calibrated and that of the standard itself.

If I calibrate my NEC monitor with Spectraview, the process is abbreviated. There is no luminance or chromaticity standard to check the calibration of the Spectraview device before each use. Presumably, the device was calibrated at the factory to a specified accuracy and we hope that there has been no drift since the calibration. After the calibration is complete, the instrument reports some delta-Es, but there is no opportunity to address accuracy and precision of the results.

Cheers,

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 03, 2016, 03:38:19 pm
Presumably, the device was calibrated at the factory to a specified accuracy and we hope that there has been no drift since the calibration. After the calibration is complete, the instrument reports some delta-Es, but there is no opportunity to address accuracy and precision of the results.
Calibrated for what? I hear folks talk about factory calibration and various displays. Again, targeted, calibrated for what? To achieve what goal? Now I can select sRGB emulation in SpectraView, that's clearly calibrating to that 'standard' aim as best it can (without P22 phosphors?) and to be honest, I've never measured what the sRGB calibration provides and how close it really is to sRGB as specified. And accurate in what way? What's the reference values, how many, and what's the measured values and the dE produced? Without those vital pieces of data, the use of the term color accuracy isn't worth the paper it's printed on.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 03, 2016, 05:36:37 pm
Calibrated for what? I hear folks talk about factory calibration and various displays. Again, targeted, calibrated for what? To achieve what goal? Now I can select sRGB emulation in SpectraView, that's clearly calibrating to that 'standard' aim as best it can (without P22 phosphors?) and to be honest, I've never measured what the sRGB calibration provides and how close it really is to sRGB as specified. And accurate in what way? What's the reference values, how many, and what's the measured values and the dE produced? Without those vital pieces of data, the use of the term color accuracy isn't worth the paper it's printed on.

That was the point of my post and I don't understand your criticism. By calibration, I was referring to the Spectraview puck. It is accurate if the luminance presented to it is the value it reports. In other words, in terms of accuracy, if the standard is 120 cd/m^2 and the instrument reports 120 cd/m^2, it is accurate to the nearest integer value. For a proper calibration, one would have to repeat the process 10 or 20 times, comparing the reported value with the standard. The mean difference of the reported values from the standard would represent the accuracy, and the standard deviation would indicate the precision.

Bill
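Bill's procedure above (repeat the measurement, compare the mean to the standard, take the standard deviation as precision) can be sketched with made-up readings; the numbers are purely illustrative:

```python
import statistics

standard = 120.0   # cd/m^2, the reference luminance
# Ten hypothetical repeated readings of the same standard:
readings = [119.6, 120.3, 119.9, 120.1, 119.8,
            120.4, 119.7, 120.2, 120.0, 119.9]

accuracy_error = statistics.mean(readings) - standard  # systematic offset
precision = statistics.stdev(readings)                 # repeatability (spread)
```

With these readings the mean error is near zero (good accuracy) while the sample standard deviation of about 0.26 cd/m^2 characterizes the instrument's precision.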
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 03, 2016, 05:43:11 pm
That was the point of my post and I don't understand your criticism.
NO criticism, simply a question. Calibrated to what, for what? I thought we were talking about 'factory calibration'. Some displays have this 'promise' hence the question.
Title: Re: The terms "linearization" vs "calibration"
Post by: Erland on May 04, 2016, 04:28:01 pm
In my world, the digital printing world, linearization and calibration are two different things. A linearization is something we do every week. To some people "calibration" means linearization; for some, like me, calibration means profiling combined with a fresh linearization, done only once per paper.

Linearizing means measuring 21 patches of cyan, 21 patches of magenta, 21 of yellow, and 21 of black. This is to see the maximum and minimum density, creating a linear curve from zero to the densest possible patch within each color. This is later put into our RIPs, which calculate how to produce as even an output as possible, meaning a CMY "gray" that is as neutral as possible and consistent from dark to light.

However, linearization does not care about what shade our cyan is, or any other color for that matter. This is where profiling comes in. After a linearization, we profile the paper, as you do with a "normal" inkjet printer as well. Here, the shade of cyan, magenta, the greens, and all the other producible colors count! Cyan in Photoshop's Fogra 39 colorspace is not the same shade as our cyan, and if you were to keep pure primaries in your output, it would differ from what you see on your screen. An often-made complaint from old offset printers is that there is magenta in their "pure" cyan, even though they "calibrated" (meaning linearized). I then have to explain that the cyan from our printer simply does not have the same shade as our competitors' or their old ink.
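Erland's weekly linearization (21 steps per channel, aiming for a straight density ramp from paper white to Dmax) could be sketched like this. The measured densities here are a synthetic stand-in for real spectrophotometer readings, and the Dmax value is an assumption for illustration:

```python
import numpy as np

nominal = np.linspace(0, 100, 21)    # 21 steps, 0..100% in 5% increments
dmax = 1.6                            # assumed maximum density for one ink

# Stand-in for measured densities of the printed wedge (real data comes
# from an instrument; this synthetic curve prints darker in the midtones):
measured = dmax * (nominal / 100) ** 0.8

# Target: density rising in a straight line from paper white to Dmax.
target = np.linspace(0, dmax, 21)

# Linearization curve: the value the RIP should send at each nominal step
# so that the printed density lands on the straight-line target.
lin_curve = np.interp(target, measured, nominal)
```

With this synthetic response, the 50% step would be sent at roughly 42% to hit the linear density target; a real RIP stores one such curve per channel and applies it before screening.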
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 04, 2016, 08:05:12 pm
However, linearization does not care about what shade our Cyan is, or any other color for that matter. This is where profiling comes in.
Profiling is not Calibration though. Calibration changes the device behavior, profiling doesn't.
Title: Re: The terms "linearization" vs "calibration"
Post by: Stephen Ray on May 04, 2016, 11:03:29 pm
a linearization is something we do every week

Erland,

I'm curious as to what brand of software (RIP?) you are using to do this linearization. Does it have a button that says, "linearize" and / or does it have a button that says "calibrate?"
Title: Re: The terms "linearization" vs "calibration"
Post by: Erland on May 05, 2016, 05:06:59 pm
I mostly use EFI's Fiery RIPs, with Color Workstation as well as Color Profiler Suite; Creo RIPs; and our own newly developed Totalflow RIP. With the latter you use X-Rite's i1Profiler to do a linearization, and with the former the built-in utility.
Title: Re: The terms "linearization" vs "calibration"
Post by: Erland on May 05, 2016, 05:10:40 pm
Profiling is not Calibration though. Calibration changes the device behavior, profiling doesn't.

Sure enough. The only thing that actually changes our hardware's behaviour is the built-in process control: patches of gradually increasing density of each toner are placed on the ITB (image transfer belt) and read by a sensor, which measures the density of each patch. Neither linearization, calibration, nor profiling changes the behavior of the hardware.
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 05, 2016, 07:51:37 pm
Neither LInearization, calibration, nor profiling is changing the behavior of the hardware.
It's not helpful to say this based on pure technicalities.

Technically few modern devices "modify the hardware" - they all change behavior with electronic control modifications. That's not relevant. What's relevant as a conceptual anchor point is that everything that changes the device behavior in such a way that you must create a new ICC profile (or allows you to avoid creating a new profile in the face of unwanted changes in the output), is calibration.

Title: Re: The terms "linearization" vs "calibration"
Post by: Erland on May 06, 2016, 02:37:50 am
While I agree, maybe it is a matter of translation between languages: "calibration" in Swedish actually means something else, hence the different views on what calibration is. I cannot develop this further, just wanted to give my view on this interesting discussion.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 06, 2016, 05:50:02 am
So Calibration establishes the relationship between the measured values and the values as realised by standards. It's a reference table between input and output in specific units.

'Realised by standards' means calculated by methods described in standards (say, a definition of the method of determining luminance, chrominance, or density, and the units or coordinate system to use for those). Sometimes new (non-SI) units are created, like SFR (Spatial Frequency Response) instead of MTF, because they are not 100% identical, or different ISO speed ratings.

Not buying that either. We calibrate a display, right? If there's a 'standard', it is as likely to work as not work because of the huge variables between display technologies and the goals of the end user (match a print? Under what illuminant?). I'd prefer to say calibration is the attempt to put a device into a desirable, well-behaved condition and to be able to recalibrate to that aim point should the device alter its behavior.

The DigitalDog thinks of calibration as applied to digital photography where the output of the device being calibrated can be adjusted electronically. However, in the scientific community there are devices where the output of the device can not be adjusted in the calibration process, and calibration merely establishes the relationship between the measured values and the values as realised by standards as Bart suggests. It's a reference table between input, and output in specific units.

An example of such a calibration exists with high-precision mercury thermometers. The readings are taken with reference to the etching marks on the glass, and these can not be changed. The calibration is established by comparing the output of the thermometer with a thermometer traceable to NIST (formerly the NBS, National Bureau of Standards). The calibration may be carried out at 4 points on the main scale, and the user is provided with a table indicating the results of the calibration. An example is here (http://www.kesslerusa.com/page5.htm).
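The table-plus-interpolation idea can be sketched as follows; the calibration values are hypothetical, not taken from the linked Kessler table:

```python
# Hypothetical calibration table: (instrument reading degC, NIST-traceable value degC)
cal_table = [
    (0.0, 0.05),
    (25.0, 25.12),
    (50.0, 50.08),
    (100.0, 99.95),
]

def corrected(reading):
    """Map a raw reading to the true temperature by linear interpolation
    between the calibrated points; the thermometer itself is unchanged."""
    for (r0, t0), (r1, t1) in zip(cal_table, cal_table[1:]):
        if r0 <= reading <= r1:
            frac = (reading - r0) / (r1 - r0)
            return t0 + frac * (t1 - t0)
    raise ValueError("reading outside calibrated range")

print(round(corrected(37.5), 2))  # interpolated between the 25 and 50 degree points
```

Note that nothing about the device changes: the calibration only establishes the reading-to-reference relationship, which is then applied at read-out time.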

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 06, 2016, 08:33:58 am
The DigitalDog thinks of calibration as applied to digital photography where the output of the device being calibrated can be adjusted electronically. However, in the scientific community there are devices where the output of the device can not be adjusted in the calibration process, and calibration merely establishes the relationship between the measured values and the values as realised by standards as Bart suggests. It's a reference table between input, and output in specific units.

Absolutely agree with that summary. And even in the case of calibration with an adjustable device, the values are within a given tolerance. In order to increase the reliability, linearization can help and also provide some statistical confidence output.

Quote
An example of such a calibration exists with high precision mercury thermometers.

Yes, a good example. Even with the possibility to calibrate at 0 Celsius / 32 Fahrenheit in melting ice, and at 100 Celsius / 212 Fahrenheit in steam from boiling water at a standard 1013.25 hPa barometric pressure, there may be minute differences in the diameter of the glass column that introduce non-linearities in the intermediate read-outs (interpolations).

So additional precaution is taken (in your example of reference thermometers) to add two more intermediate reference points, thus reducing the interpolation error and also supplying some sense of how (non)linear the known physical expansion actually is, due to external influences (the glass tube diameter). This is the linearization part.
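Bart's point about intermediate reference points can be illustrated numerically; the "true" response below is an invented, mildly nonlinear function standing in for the real physical behavior:

```python
def true_response(x):
    """Hypothetical, mildly nonlinear instrument response over 0..100."""
    return x + 0.8 * x * (100 - x) / 2500.0

def interpolated(cal_points, x):
    """Estimate the response by linear interpolation between points where
    the true response was measured during calibration."""
    pts = [(p, true_response(p)) for p in cal_points]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def max_error(cal_points):
    return max(abs(interpolated(cal_points, x) - true_response(x))
               for x in range(101))

two_point = max_error([0, 100])               # endpoints only
five_point = max_error([0, 25, 50, 75, 100])  # plus intermediate references
print(two_point, five_point)
```

With only the two endpoints the worst interpolation error sits at mid-scale; the extra intermediate points shrink it by more than an order of magnitude, and comparing the two also reveals how nonlinear the response is.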

As for spectrometers, they are usually adjustable (by updating either voltages, or lookup tables in firmware), so that the specific output units are within a given tolerance/precision range and in units that can be converted to other units.

Additional issues that affect accuracy are electronic drift (long term due to aging of electronic components, but also short term due to temperature changes, which hopefully is compensated for) and changes in the light source. Multiple short spot measurements with a device that has a halogen bulb as light source will e.g. introduce drift due to tungsten filament deposits on the inside of the quartz bulb, because it doesn't become hot enough. Prolonged measurements that heat the bulb enough will then allow the deposit to redeposit onto the filament, in a way mechanically resetting the bulb's influence to a known, more stable state.

It's an interesting topic.

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 06, 2016, 09:18:51 am
..............

As for spectrometers, they are usually adjustable (by updating either voltages, or lookup tables in firmware), so that the specific output units are within a given tolerance/precision range and in units that can be converted to other units.

Additional issues that affect accuracy are electronic drift (long term due to aging of electronic components, but also short term due to temperature changes, which hopefully is compensated for) and changes in the light source. Multiple short spot measurements with a device that has a halogen bulb as light source will e.g. introduce drift due to tungsten filament deposits on the inside of the quartz bulb, because it doesn't become hot enough. Prolonged measurements that heat the bulb enough will then allow the deposit to redeposit onto the filament, in a way mechanically resetting the bulb's influence to a known, more stable state.

It's an interesting topic.

Cheers,
Bart

Hi Bart, yes it is. I've been wondering why my old, but previously very reliable, X-Rite DTP-20 spectrophotometer suddenly stopped passing the calibration test. As X-Rite pulled all repair support from this device roughly a couple of years ago (only nine years after its introduction), I did what the victims of X-Rite's forced-obsolescence strategies are expected to do and bought an i1Pro2, hoping this one lasts a good long time. I was using the DTP-20 periodically for making new profiles as the need arose, but also doing lots of spot measurements, so based on what you report here I'm now wondering whether the cause of the DTP-20's demise could have been an imbalance between prolonged and instantaneous readings. Of course, when it died it was about ten years old, so aging could have been at play, but the usage over that lifespan was not industrial-style heavy; quite periodic, in fact.
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 06, 2016, 10:10:07 am
The DigitalDog thinks of calibration as applied to digital photography where the output of the device being calibrated can be adjusted electronically. However, in the scientific community there are devices where the output of the device can not be adjusted in the calibration process, and calibration merely establishes the relationship between the measured values and the values as realised by standards as Bart suggests. It's a reference table between input, and output in specific units.
So you're stating: when a device cannot be calibrated, calibration is not possible. I'll add: when a device cannot be calibrated, it can be profiled (establishing the relationship between the measured values and the values as realised by a standards reference). If and when you have specific standards for the reference, by all means, specify that.
Oh, and what about a device that can't be calibrated, whose measured values do not produce the values realized by standards?
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 06, 2016, 11:00:22 am
Hi Bart, yes it is. I've been wondering why my old, but previously very reliable, X-Rite DTP-20 spectrophotometer suddenly stopped passing the calibration test. As XRite pulled all repair support from this device roughly a couple of years ago (being only 9 years after its introduction), I did what the victims of XRite's forced obsolescence strategies are expected to do and bought an i1Pro2, now hoping this one lasts a good long time.

Yes, built-in obsolescence sucks, big time. In your case, the intermittent use could hardly have worn out the electronic components by heating them significantly multiple times. There might be components that need disconnecting and reconnecting to clean the contacts. The only culprits I can think of are filters that could bleach over time, or dichroic materials that get dirty from the atmosphere. I do not think there are that many components that age/corrode besides circuit-board solder or poorly coated copper traces, especially when not used.

Halogen bulb deposits might also play a role to push the response outside the expected calibration range, but that could be improved by letting the bulb continuously burn for a while. Maybe the bulb can be replaced (if your unit uses that) to see if that triggers something.

One can only hope they didn't include a timer that had to be reset during maintenance service while it was available. Chipped inkjet cartridges show that it would not be unthinkable, although I think it's unlikely, because it is a more recent practice for many devices. Traditional tungsten light bulbs did have that obsolescence built in (cheaper filaments, and the bulb blackened on the inside anyway), cars became less durable because they lasted too long, and a perfectly functional phone can be tossed away when the battery can't be replaced. Progress comes at a price (for the environment and our wallets).

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 06, 2016, 11:05:11 am
Indeed, and one can often legitimately ask how much progress, because it isn't always self-evident, apart from the sales figures of the relevant corporations.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 06, 2016, 08:48:24 pm
So you're stating: when a device cannot be calibrated, calibration is not possible. I'll add: when a device cannot be calibrated, it can be profiled (establishing the relationship between the measured values and the values as realised by a standards reference). If and when you have specific standards for the reference, by all means, specify that.
Oh, and what about a device that can't be calibrated, whose measured values do not produce the values realized by standards?

No, that is not what I said, but that's how you interpreted it. You distinguish between calibration and profiling, but that is not how the standards organizations define calibration. The example of the thermometer is considered calibration by NIST, since the process establishes a relationship between the temperature readings and the standard, and this relationship is expressed in a lookup table which is read out manually.

When I calibrate my NEC monitor with Spectraview, the results are recorded as a profile. The luminance is likely adjusted to the specified cd/m^2, but I think the tone curve and color values are encoded in a lookup table which is loaded into the electronics of the monitor at startup. This is analogous to the case of the thermometer, except that the lookup is done by the computer rather than manually.
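A rough sketch of the analogy: a display calibration LUT is just a per-channel lookup table applied to pixel values, like reading a correction table, except the computer does the lookup. The gamma values below are hypothetical; SpectraView's actual processing is more involved:

```python
# Hypothetical: the panel natively behaves like gamma 2.4, calibration aim is 2.2.
native_gamma = 2.4
target_gamma = 2.2

# Per-channel LUT: for each input v, the drive level that makes the panel's
# native response land on the target curve. Loaded once, applied to every pixel.
lut = [round(255 * (v / 255) ** (target_gamma / native_gamma)) for v in range(256)]

# After the LUT, the panel's effective response follows the target:
#   (lut[v] / 255) ** native_gamma  ~=  (v / 255) ** target_gamma
```

The residual difference between the two sides of that approximation is the quantization error from rounding to 8-bit LUT entries.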

Regards,

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 06, 2016, 08:56:52 pm
No, that is not what I said, but that's how you interpreted it.
OK, sorry....
Quote
You distinguish between calibration and profiling, but that is not how the standards organizations define calibration.
OK fine, so considering this is the color management forum on a photo site, can you define exactly what standards you're referring to? I have asked more than once  ;D .
Quote
The example of the thermometer is considered calibration by NIST, since the process establishes a relationship between the temperature readings and the standard, and this relationship is expressed in a lookup table which is read out manually.
Understood. So if the thermometer doesn't follow its reference, what temp really exists? Now what?
Quote
When I calibrate my NEC monitor with Spectraview, the results are recorded as a profile.
That much I understand, but the part you're not specifying is the part about calibration! Did you set the target for the calibration by flipping a coin, using the default, following what someone recommends online, OR to produce a desired behavior from the device, as I suggested earlier? I would hope and suspect the latter.
Quote
This is analogous to case of the thermometer, except that the lookup is done by the computer rather than manually.
I don't understand your analogy, sorry. The thermometer is supposed to report some metric based on facts (this water is 112 degrees, when it really is 112 degrees or thereabouts). The display you calibrate is or isn't aimed at some desired behavior? Like matching a print, for example.
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 07, 2016, 09:21:18 am
OK fine, so considering this is the color management forum on a photo site, can you define exactly what standards you're referring to?

Take your pick... (http://www.iso.org/iso/home/store/catalogue_ics/catalogue_ics_browse.htm?ICS1=17&ICS2=180&ICS3=20&) , mostly from the first 8 references.

This (http://www.xrite.com/product_overview.aspx?ID=975&Action=support&SupportID=3539) might also come in handy as a basis for discussion, to differentiate between calibration and profiling.

And this (http://blog.xrite.com/the-importance-of-calibration/).

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 07, 2016, 10:32:14 am
Take your pick... (http://www.iso.org/iso/home/store/catalogue_ics/catalogue_ics_browse.htm?ICS1=17&ICS2=180&ICS3=20&) , mostly from the first 8 references.
OK, which one requires a user to calibrate towards a goal, other than specifying a reference?


Let's take ISO 3664:2009 which specifies a standard for a display luminance from 75 cd/m2 to 100 cd/m2 with a monitor CCT of 6500K. IF I use that for calibration because as you suggest, it's a standard, the result is a mismatch of my display and print viewed next to the display. That serves no benefit as a target for calibration.


Calibration to CCT 5150 at 150 cd/m2 using a specific piece of software (SpectraView), a specific colorimeter (i1Display), next to a specific illuminant (GTI SOFV-1e) with a specific paper (Epson Luster), produces a match because that's the calibration that produces a match for me. Again: calibration in this context, in this forum, for this audience is simple: putting a device into a desired condition to achieve a desired result! The standard calibration aim point of ISO 3664:2009 doesn't do that, not by a long shot. It's useless in that respect. It might be useful as a calibration aim point for other users so all their individual displays match (or, more than likely, mismatch) their goal. That's not a useful 'standard' for calibration in that context!
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 07, 2016, 11:50:09 am
Let's take ISO 3664:2009 which specifies a standard for a display luminance from 75 cd/m2 to 100 cd/m2 with a monitor CCT of 6500K. IF I use that for calibration because as you suggest, it's a standard, the result is a mismatch of my display and print viewed next to the display. That serves no benefit as a target for calibration.

To be fair, ISO 3664:2009 specifically disclaims use of this standard for comparing images displayed on monitors to hard copy light booths. I presume you are aware of this. Others may not be.

The techniques of matching monitor displays to hard copy are beyond the scope of ISO 3664:2009.
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 07, 2016, 12:02:25 pm
My point continues to be that standards absolutely do not match my display to my hard copy prints in my light booth. YMMV, and that's WHY products aimed at the calibration of displays offer many such options. The more, the better (as an example, NEC SpectraView). One size doesn't fit all when it comes to display calibration. Or are people here suggesting that the process by which we select targets shouldn't be called calibration?
Quote
The techniques of matching monitor displays to hard copy are beyond the scope of ISO 3664:2009.
But it's a 'standard' the standards proponents will suggest.  :P
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 07, 2016, 12:09:54 pm

http://www.gtilite.com/gti-faqs/
ISO 3664 specifies that in the imaging industry – graphic arts, photography, and graphic design – 5000K lighting, a.k.a. D5000 or D50 as it is commonly referred to, is the best light source to use. It has equal amounts of red, green, and blue light energy, so it will not accentuate or subdue any colors. In imaging applications, many colors are viewed at the same time (e.g. a photograph), so all colors need to be represented evenly. This is why an equal-energy light source is so important to this application.

D5000 should be the primary source, but a secondary source can also be used. In packaging applications, using the source used in the store would be useful. This is typically a high efficiency fluorescent lamp.

Old color evaluation booths also offered 7500K lighting. This light source was only used to help the pressman see the yellow ink on the press sheet. This was dropped from the standard (ISO 3664) over ten years ago. D50 is the only ISO standardized source now used in the imaging industry.

http://info.gtilite.com/what-is-iso-3664/#.Vy4St2P7BT4:


A summary of the graphic arts and photography standards.

ISO 3664 is the international color viewing standard for the graphic technology and photography industries. In order to improve the quality and consistency of light sources and color viewing, ISO 3664 outlines the minimum criteria necessary for all color viewing systems to meet. This set of specifications enables lighting engineers and manufacturers to design, test, and certify their color viewing systems to the industry standards, and challenges them to enhance the performance of their products. ISO 3664:2009 is the newest version of the international color viewing standard, revised to reflect tighter quality control guidelines in order to reduce miscommunications and other errors in color reproduction.

Color temperature – Per the ISO 3664:2009 standard, the color temperature of a light source should be relative to a phase of natural daylight with a correlated color temperature of about 5000K.


And a CCT value of about 5000K doesn't produce a match for me.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 07, 2016, 12:15:25 pm
My point continues to be that standards absolutely do not match my display to my hard copy prints in my light booth. YMMV, and that's WHY products aimed at the calibration of displays offer many such options. The more, the better (as an example, NEC SpectraView). One size doesn't fit all when it comes to display calibration.

Of course. I was just pointing out that people can't expect a standard to be useful outside of its specific domain, and that this ISO standard specifically points that out in regard to display matching of prints in light booths.

ISO:
Quote
However, it is important to note that adherence to these specifications does not ensure that the monitor will match the hardcopy without provision of a defined colour transformation to the displayed image or use of proper colour management. This aspect of matching is outside the scope of this International Standard.
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 07, 2016, 12:20:31 pm
I think what all this boils down to is that one should "calibrate" in context, to achieve specific objectives. If your objective is to replicate the ISO standard, for whatever reason, you do that. If your objective is to equilibrate what comes out of your printer with what you see on your display, you may need to change the calibration parameters that the monitor and printer software provide for doing this. By calibration we mean that we set those several parameters to fixed values that are known to serve the purpose at hand, and whatever other colour management we need, we do through profiling, which is predicated in part on the calibration parameters. As for linearization of printers, I remain of the view that Harald Johnson's explanation of it is the most straightforward and plausible, while contrast curves should be handled through image editing and profiling.
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 07, 2016, 12:21:37 pm
It is my impression that the non-standard use of the word 'Standard', a methodology with recommendations for statistically solid procedures, causes some confusion. A 'Standard' is not a reference value.

In a similar vein, Calibration is not the same as Profiling. And Linearization is not the same as Calibration (although it may be part of a Calibration process to improve interpolation accuracy).

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 07, 2016, 12:25:11 pm
It is my impression that the non-standard use of the word 'Standard', a methodology with recommendations for statistically solid procedures, causes some confusion. A 'Standard' is not a reference value.
In a similar vein, Calibration is not the same as Profiling. And Linearization is not the same as Calibration (although it may be part of a Calibration process to improve interpolation accuracy).
A standard doesn't ever define a reference, a set of values to target? And who here said linearization is the same as calibration (cause it's different)?
Title: Re: The terms "linearization" vs "calibration"
Post by: Bart_van_der_Wolf on May 07, 2016, 07:43:40 pm
A standard doesn't ever define a reference, a set of values to target?

Hi Andrew,

I wouldn't know about 'never', but the common pattern of a Standard is a description of procedures to follow, and metrics to be calculated, in order to allow independently conducted tests to produce the same results (within a margin), IOW to produce repeatable and consistent observations. I do recommend getting hold of some of them, despite the cost, because they usually (after a long process of becoming an actual Standard) make a lot of sense and often explain the considerations that went into them before achieving a state of general consensus.

There may be some recommended settings for the tests to be conducted, but they are usually explained, e.g. to produce statistically more robust, or perceptually more relevant, results.

Quote
And who here said linearization is the same as calibration (cause it's different)?

Just stipulating the difference, given the OP's question, trying to get back on topic instead of hiding behind smoke screens.

Cheers,
Bart
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 07, 2016, 09:30:36 pm
Getting back to the question of calibration v linearization. My personal use of the terms is thus:

Calibration: Things that change inking. In particular, setting the highest density points of CMYK.
Linearization: Establishing the density of each ink at various points, which will be entered in the RIP.

These are both done for a specific media.

After doing these steps profiling is done and should provide fairly smooth transitions over the 3D LUT space yielding high quality profiles.

Many others appear to include linearization as a part of calibration. I won't argue with that. It's the process that matters.
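A toy sketch of that split, with invented numbers (no real RIP's data format or workflow is implied):

```python
def density(ink_pct):
    """Hypothetical press response: density saturates at heavy inking."""
    return 2.0 * (1 - (1 - ink_pct / 100.0) ** 2)

# "Calibration" in this usage: pick the maximum inking (ink limit) per channel.
ink_limit = 85.0  # hypothetical choice where extra ink stops gaining density

# "Linearization": record the density produced at steps up to that limit;
# these (ink %, density) pairs are what gets entered into the RIP.
steps = [ink_limit * i / 10 for i in range(11)]
lin_table = [(round(s, 1), round(density(s), 3)) for s in steps]
print(lin_table[0], lin_table[-1])
```

The calibration step changes inking behavior (the limit); the linearization step merely characterizes what the limited channel does, so the profile built afterwards starts from a smooth, known response.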
Title: Re: The terms "linearization" vs "calibration"
Post by: Stephen Ray on May 07, 2016, 10:18:17 pm
Getting back to the question of calibration v linearization. My personal use of the terms is thus:

Calibration: Things that change inking. In particular, setting the highest density points of CMYK.
Linearization: Establishing the density of each ink at various points, which will be entered in the RIP.


Doug,

Regarding Linearization and "establishing the density of each ink," are you determining the ink values to create a smooth gradation of the said ink channels or are you determining the values to arrive at a gray balance, thus ultimately creating a "linear curve" which would be a contradiction in terms? (I've found "linear curve" mentioned in reference material, that's why I even bring it up.)

I'm in the process of compiling my research and will share my findings ASAP.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 07, 2016, 11:06:40 pm
Doug,

Regarding Linearization and "establishing the density of each ink," are you determining the ink values to create a smooth gradation of the said ink channels or are you determining the values to arrive at a gray balance, thus ultimately creating a "linear curve" which would be a contradiction in terms? (I've found "linear curve" mentioned in reference material, that's why I even bring it up.)

I'm in the process of compiling my research and will share my findings ASAP.

IMO "linearization" is an inaccurate term of art. Spectrally, reflectance at a specific wavelength, or CIE XYZ values, are linear. But most uses of the term come after a mapping that corresponds more closely to perception. Profiles use ICCLAB, which is neither linear nor logarithmic (as in density). In fact, even taking the simple case of neutrals only (K, LK, LLK), the last thing one would want to do with linearization of printers is to create an actual "linear" mapping of the profile outputs. More important is that whatever process is used prior to profiling creates a smooth response to the L*a*b* inputs. Lumpiness is hard, or impossible, to fix in profiling, as the profile size expands rapidly the more you segment it to accommodate gradient changes.

No idea what a "linear curve" even means, but most likely the author meant some sort of smoothness or limitation on gradient change rates (caps on the second derivatives).

Also, there are different goals at work sometimes. Ink use reduction, and large gamut, amongst others.
Title: Re: The terms "linearization" vs "calibration"
Post by: Peter_DL on May 08, 2016, 06:57:16 am
IMO "linearization" is an inaccurate term of art.
...
No idea what a "linear curve" even means ...

Doug,

With a monitor's output in cd/m2, Linearization is achieved not by Calibration to whatever gamma but by the application of exactly the inverse function in color management. The calibrated gamma is counter-balanced by the 1/gamma-encoding upon conversion to the monitor profile.

The net result is a linear relationship (linear curve) between x: the RGB numbers of a grayscale in a linear gamma space, and y: the output luminance in cd/m2.

To a first order, the calibrated gamma, whether 1.8, 2.2 or e.g. the L* trc, is simply irrelevant. Only at a second order can there be "bit precision effects", or let’s call it "smoothness". For example, a regular 2.2 gamma, with its steep take-off in the deep shadows, is not a good idea once 8 bits come into play.
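Peter's first-order point, that the calibrated gamma cancels against the profile conversion's inverse encoding, can be checked numerically:

```python
def chain(linear_value, calibrated_gamma):
    """Linear RGB -> profile's 1/gamma encoding -> calibrated display response."""
    encoded = linear_value ** (1 / calibrated_gamma)  # conversion to the monitor profile
    emitted = encoded ** calibrated_gamma             # the display's calibrated gamma
    return emitted                                    # equals the linear input again

for g in (1.8, 2.2, 3.0):
    assert abs(chain(0.25, g) - 0.25) < 1e-12  # net result is linear regardless of g
```

The second-order effects he mentions (quantization at 8 bits) appear once `encoded` is rounded to integer code values, which this idealized sketch omits.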


Now let’s think about printer again.
IMHO, the above described "net linearity" should finally be valid as well, now with y = the Reflectance along the printed grayscale.

Again, there can be second-order effects which may make it desirable to calibrate the printed grayscale not exactly to this numerical linearity, but to a brighter state with a more perceptual distribution of tones. However, this is finally captured in the profile and therefore should get counter-balanced and eliminated in the course of color management, in a way that the net linearity is met again and there is no net addition of brightness.


Peter
--
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 08, 2016, 09:11:02 am

No idea what a "linear curve" even means, but most likely the author meant some sort of smoothness or limitation on gradient change rates (caps on the second derivatives).


Doug, it's not clear what the source or context of your uncertainty is here, but as you most probably know, in general a set of multi-point relationships joined up in a line makes that line a curve; a straight line is one such curve and is called a linear curve.

L* is a non-linear curve; so when we hear discussions of linearity with respect to L*, my understanding is that we are talking about departures from L* that have a non-linear pattern; the more linear the pattern of input-output departures from that curve, the more correctly the L* curve will be respected by the output device. Whether that kind of calibration is appropriate for the linearization of a printer is another matter.

Back to the O/P's concern: linearization is a specific step in calibration. The "linearity" of a printer means "the degree to which changes in the control signals produce proportional changes in the printed color", and "linearization" is "the act of making a device linear (which is a specific form of calibration)", where the definition of "linear" is: "a simple relationship between stimulus and response, whereby, for example, doubling the stimulus produces double the response" (cf. Real World Color Management, Second Edition, by Fraser, Murphy and Bunting, pages 178, 179 and 545).
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 08, 2016, 09:33:10 am
L* is a non-linear curve; so when we hear discussions of linearity with respect to L*, my understanding is that we are talking about departures from L* that have a non-linear pattern; the more linear the pattern of input-output departures from that curve, the more correctly the L* curve will be respected by the output device. Whether that kind of calibration is appropriate for the linearization of a printer is another matter.

If you plot L* versus luminance, the response is nonlinear, but if you plot L* vs perceived brightness, the response is linear. L* compensates for the nonlinearity of human perception. Similarly, gamma encoding is nonlinear. The gamma encoding is performed at capture, and the inverse gamma function is applied on printing or display on the screen, such that the overall result is a linear representation of scene luminance.

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: Mark D Segal on May 08, 2016, 09:45:33 am
If you plot L* versus luminance, the response is nonlinear, but if you plot L* vs perceived brightness, the response is linear. L* compensates for the nonlinearity of human perception. Similarly, gamma encoding is nonlinear. The gamma encoding is performed at capture, and the inverse gamma function is applied on printing or display on the screen, such that the overall result is a linear representation of scene luminance.

Bill

I agree with the first two sentences but I wonder about the third. It is my understanding that a digital camera sensor works in a linear manner; it is when we demosaic the data that we give it a non-linear gamma to make the photo correspond with human visual perception.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 08, 2016, 10:00:58 am
Doug,

With a monitor's output in cd/m2, linearization is achieved not by calibration to whatever gamma but by the application of exactly the inverse function in color management. The calibrated gamma is counter-balanced by the 1/gamma encoding upon conversion to the monitor profile.

The net result is a linear relationship (linear curve) between x: the RGB numbers of a grayscale in a linear gamma space, and y: the output luminance in cd/m2.

To a first order, the calibrated gamma, whether 1.8, 2.2 or e.g. the L* TRC, is simply irrelevant. Only at second order can there be "bit precision effects", or let's call it "smoothness". For example, a regular 2.2 gamma, with its steep take-off in the deep shadows, is not a good idea once 8 bits come into play.


Now let's think about the printer again.
IMHO, the above-described "net linearity" should ultimately hold here as well, now with y = the reflectance along the printed grayscale.

Again, there can be second-order effects which may make it desirable to calibrate the printed grayscale not exactly to this numerical linearity but to a brighter state with a more perceptual distribution of tones. However, that adjustment is ultimately captured in the profile and should therefore be counter-balanced and eliminated in the course of color management, so that the net linearity is met again and there is no net addition of brightness.


Peter
--

I agree with everything you just said. I should have restricted "linearization" to printer-specific operations involving inking steps, which are anything but linear. However, the net result of calibration, "linearization," and profiling is indeed linear. At least in the colorimetric path.

Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 08, 2016, 10:35:32 am
I agree with the first two sentences but I wonder about the third. It is my understanding that a digital camera sensor works in a linear manner; it is when we demosaic the data that we give it a non-linear gamma to make the photo correspond with human visual perception.

The demosaic process is linear. The subsequent process of gamma encoding provides an increase in dynamic range when mapping to a smaller bit space. If one has 16 bits or more of resolution one can encode strictly in linear space.

However, there is a non-linear mapping that is done when transforming from scene referenced to output referenced form regardless of encoding gamma. This is the norm for photographs except for reproduction where the goal is colorimetric response in which case the scene referenced form is retained.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 08, 2016, 11:25:46 am
I agree with the first two sentences but I wonder about the third. It is my understanding that a digital camera sensor works in a linear manner; it is when we demosaic the data that we give it a non-linear gamma to make the photo correspond with human visual perception.

As I understand it, the gamma encoding is not to account for the non-linearity of vision but to improve gradation in the shadows. The gamma encoding is reversed on printing or viewing so that the luminances in the reproduction are the same as in the scene. This is necessary for the reproduction to be successful.

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 08, 2016, 01:27:37 pm
As I understand it, the gamma encoding is not to account for the non-linearity of vision but to improve gradation in the shadows. The gamma encoding is reversed on printing or viewing so that the luminances in the reproduction are the same as in the scene. This is necessary for the reproduction to be successful.

Bill
It's actually both. Perception is more sensitive to absolute changes in luminosity at low levels and less sensitive at high levels.

For instance a nit (cd/m^2) change of 90 to 91 is not perceptible but a nit change of 5 to 6 is quite visible. A gamma encoding is one way to mitigate this when one only has 8 bits available.  If one has 16 bits then a gamma of 1 (linear) would be just fine.
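Doug's 8-bit point can be made concrete with a rough sketch (the 100 cd/m² display peak and the code values below are my own assumptions, not from the thread), comparing the per-code luminance step of a linear encoding against a gamma-2.2 encoding:

```python
# Luminance step between adjacent 8-bit codes on a display peaking at
# 100 cd/m^2, for linear vs gamma-2.2 encoding.
peak = 100.0

def linear_step(code):
    # Linear 8-bit: every step is the same size in cd/m^2.
    return peak * ((code + 1) / 255 - code / 255)

def gamma_step(code, g=2.2):
    # Gamma-encoded 8-bit: step size grows with level.
    return peak * (((code + 1) / 255) ** g - (code / 255) ** g)

print(round(linear_step(10), 4))  # 0.3922 cd/m^2, same at every level
print(gamma_step(10))             # far smaller near black, where vision
                                  # is most sensitive to absolute change
```

Near black, the gamma-2.2 encoding spends its codes where they are needed; a linear 8-bit encoding wastes precision in the highlights, which is why 16-bit linear is fine but 8-bit linear is not.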
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 08, 2016, 03:45:45 pm
It's actually both. Perception is more sensitive to absolute changes in luminosity at low levels and less sensitive at high levels.

For instance a nit (cd/m^2) change of 90 to 91 is not perceptible but a nit change of 5 to 6 is quite visible. A gamma encoding is one way to mitigate this when one only has 8 bits available.  If one has 16 bits then a gamma of 1 (linear) would be just fine.

That is what I meant by saying gamma encoding is needed to improve gradation in the shadows. According to the Weber-Fechner law, human vision is approximately logarithmic and a 1% difference in luminance is perceptible. A change from 5 to 6 is a 20% difference and shadow tones would exhibit banding, whereas a change from 90 to 91 is about a 1% difference and gradation would be smooth. Since the gamma encoding is reversed by the inverse gamma function for viewing or printing, the original scene luminance is restored (provided that no tone mapping is performed) and presented to the observer. Even if vision were linear, more bits would be needed to encode the shadows, and gamma encoding reduces the needed precision for a given dynamic range. For HDR, linear encoding with floating point is employed and gamma is not needed, as you recognized for low DR with 16 bits.

With limited precision of encoding a gamma curve helps satisfy the Weber-Fechner requirement, and to this extent it does address the nonlinearity of human vision. However, to state that non-linear encoding is necessary to compensate for non-linearity of vision is misleading. If the gamma encoding were not removed for viewing, double compression of luminances would occur: once in encoding and another time with viewing.

Does this make any sense?

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 08, 2016, 07:46:08 pm
No, that is not what I said, but that's how you interpreted it. You distinguish between calibration and profiling, but that is not how the standards organizations define calibration. The example of the thermometer is considered calibration by NIST, since the process establishes a relationship between the temperature readings and the standard, and this relationship is expressed in a lookup table which is read out manually.
Although both are derived from the same concepts, the word "calibrate" is actually a contraction of slightly different things in the scientific and color-device worlds. In the science world it implies "establish something's correspondence with a reference, and possibly as a result create a correction table or physically adjust it to conform to the standard". In the color-device world it means "adjust the device so that its response conforms to a target response", i.e. it implies an integrated process of establishing something's response, creating a correction for it, and applying that correction so that the device now responds as intended.

One way of distinguishing between color calibration and profiling is how they scale. If you have M devices and N desired responses, then with calibration you need M x N calibration tables, but you only need M + N profiles.
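The M x N versus M + N point is easy to see with invented numbers (mine, purely for illustration):

```python
# Say a shop runs 5 devices and wants 4 desired responses (e.g. papers).
M, N = 5, 4
calibration_tables = M * N  # one table per device/response pairing
profiles = M + N            # one profile per device plus one per response
print(calibration_tables, profiles)  # 20 9
```

The gap widens quickly as devices and targets are added, which is the practical argument for profiling.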

Another difference between the world of science and color is that in science there is typically only one reference, while in color there are typically many different desired responses.
Quote
When I calibrate my NEC monitor with Spectraview, the results are recorded as a profile.
Recorded in the profile, as a supplemental tag. The calibration is along for the ride - nothing in ICC profiling knows anything about the calibration.
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 08, 2016, 07:49:23 pm
Whether that kind of calibration is appropriate for the linearization of a printer is another matter.
Linearizing printer channel response in L*a*b* space works pretty well.
Title: Re: The terms "linearization" vs "calibration"
Post by: GWGill on May 08, 2016, 07:53:41 pm
As I understand it, the gamma encoding is not to account for the non-linearity of vision but to improve gradation in the shadows.
Which is exactly the same thing. Because our vision is basically ratiometric (i.e. non-linear), we are more sensitive to changes in the shadows than the highlights. An encoding that spreads gradation errors evenly is one that is also close to perceptually uniform.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 08, 2016, 10:49:36 pm
Linearizing printer channel response in L*a*b* space works pretty well.

It does. Not only because L*a*b* is reasonably close to human perception, but also because it is the input side of ICC printer profile 3DLUTs and non-matrix display profiles.
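As a sketch of what linearizing one ink channel in L* can mean in practice: measure L* down a step wedge, then build a correction curve so that equal device steps produce equal L* steps. The measurements below are invented for illustration; a real run would come from a spectrophotometer:

```python
import numpy as np

# Hypothetical measured L* for a single ink channel at 0..100% coverage.
device = np.linspace(0, 100, 6)                   # % ink laid down
measured_L = np.array([96, 78, 62, 50, 42, 38])   # L* darkens with ink

# Target: L* spaced evenly between paper white and full ink.
target_L = np.linspace(measured_L[0], measured_L[-1], 6)

# For each target L*, find the ink % that produced it. L* decreases
# with ink, so flip the arrays for np.interp (which wants ascending x).
lin_curve = np.interp(target_L[::-1], measured_L[::-1], device[::-1])[::-1]
print(np.round(lin_curve, 1))  # [0. 12.9 26.5 41.3 61. 100.]
```

The resulting curve asks for disproportionately more ink in the light end, where this (invented) channel darkens fastest, which is the typical shape of a printer linearization.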
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 09, 2016, 08:40:43 am
Which is exactly the same thing. Because our vision is basically ratiometric (i.e. non-linear), we are more sensitive to changes in the shadows than the highlights. An encoding that spreads gradation errors evenly is one that is also close to perceptually uniform.

I am not getting my point across. Relative error is an essential parameter in any measurement system, whether the quantity being measured follows a linear, power, or log function. That is why we use CV (coefficient of variation, standard deviation/mean) rather than the standard deviation when discussing relative error. Weight is a linear function. When weighing a 100 kg football player a scale accurate to the nearest 0.5 kg is adequate, but this scale would not be appropriate for weighing a 2 kg premature infant. The relative errors would be 0.5% and 25% respectively. We need finer gradation at the low end even with a linear function.

Gamma, a power function, was originally introduced in electronic imaging to account for the nonlinearity of cathode ray tubes and not for the non-linearity of the perception of luminance, which is approximately logarithmic (not a power function), but a side effect was that gamma improved gradation in the shadows. However, gamma fails at low luminances, where the slope approaches infinity as luminance approaches zero. For this reason, gamma encodings use a linear ramp at very low luminances.

Gamma encoding also fails when one is dealing with HDR imaging, where a log encoding yields constant relative error (see encoding (http://www.anyhere.com/gward/hdrenc/hdr_encodings.html) by Greg Ward). One can also improve relative error at the low end by brute force (using more significant digits), but this can be wasteful since the greater precision is not needed at the high end, or by using floating point notation.

Perceptual uniformity is useful in image editing so that a given increment in the control will produce the same proportional change at the low end as at the high end. Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.

When dealing with a wide range of luminances (e.g. HDR), gamma is abandoned and one goes over to log or linear floating point encoding as discussed by Ward in the quoted article.

Regards,

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 09, 2016, 10:35:04 am
Perceptual uniformity is useful in image editing so that a given increment in the control will produce the same proportional change at the low end as at the high end. Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.
And this is why I've kept this fine post from Lars on the controversial subject of this kind of calibration (which, from the time it was posted to this day, hasn't been addressed by the Lstar proponents):


Quote

Re: [Icc_users] L* workingspaces and L* output calibration
Tuesday, March 11, 2008 8:46:58 PM
From:Lars Borg <borg@adobe.com>

L* is great if you're making copies. However, in most other
scenarios, L* out is vastly different from L* in.  And when L* out is
different from L* in, an L* encoding is very inappropriate as
illustrated below.


Let me provide an example for video. Let's say you have a Macbeth
chart. On set, the six gray patches would measure around  L* 96, 81,
66, 51, 36, 21.


Assuming the camera is Rec.709 compliant, using a 16-235 digital
encoding, and the camera is set for the exposure of the Macbeth
chart, the video RGB values would be 224,183,145,109,76,46.


On a reference HD TV monitor they should reproduce at L* 95.5, 78.7,
62.2, 45.8, 29.6, 13.6.
If say 2% flare is present on the monitor (for example at home), the
projected values would be different again, here: 96.3, 79.9, 63.8,
48.4, 34.1, 22.5.


As you can see, L* out is clearly not the same as L* in.
Except for copiers, a system gamma greater than 1 is a required
feature for image reproduction systems aiming to please human eyes.
For example, film still photography has a much higher system gamma
than video.


Now, if you want an L* encoding for the video, which set of values
would you use:
96, 81, 66, 51, 36, 21 or
95.5, 78.7, 62.2, 45.8, 29.6, 13.6?
Either is wrong, when used in the wrong context.
If I need to restore the scene colorimetry for visual effects work, I
need 96, 81, 66, 51, 36, 21.
If I need to re-encode the HD TV monitor image for another device,
say a DVD, I need 95.5, 78.7, 62.2, 45.8, 29.6, 13.6.


In this context, using an L* encoding would be utterly confusing due
to the lack of common values for the same patches.  (Like using US
Dollars in Canada.)
Video solves this by not encoding in L*. (Admittedly, video encoding
is still somewhat confusing. Ask Charles Poynton.)


When cameras, video encoders, DVDs, computer displays, TV monitors,
DLPs, printers, etc., are not used for making exact copies, but
rather for the more common purpose of pleasing rendering, the L*
encoding is inappropriate as it will be a main source of confusion.


Are you planning to encode CMYK in L*, too?


Lars
Lab attempts to be perceptually uniform but it's really not....
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 09, 2016, 10:37:52 am
I am not getting my point across. Relative error is an essential parameter in any measurement system, whether the quantity being measured follows a linear, power, or log function. That is why we use CV (coefficient of variation, standard deviation/mean) rather than the standard deviation when discussing relative error. Weight is a linear function. When weighing a 100 kg football player a scale accurate to the nearest 0.5 kg is adequate, but this scale would not be appropriate for weighing a 2 kg premature infant. The relative errors would be 0.5% and 25% respectively. We need finer gradation at the low end even with a linear function.

Gamma, a power function, was originally introduced in electronic imaging to account for the nonlinearity of cathode ray tubes and not for the non-linearity of the perception of luminance, which is approximately logarithmic (not a power function), but a side effect was that gamma improved gradation in the shadows. However, gamma fails at low luminances, where the slope approaches infinity as luminance approaches zero. For this reason, gamma encodings use a linear ramp at very low luminances.

A power function may have a slope that goes to infinity at zero but a log response is worse. The value itself goes to -infinity at zero.

Quote

Gamma encoding also fails when one is dealing with HDR imaging, where a log encoding yields constant relative error (see encoding (http://www.anyhere.com/gward/hdrenc/hdr_encodings.html) by Greg Ward). One can also improve relative error at the low end by brute force (using more significant digits), but this can be wasteful since the greater precision is not needed at the high end, or by using floating point notation.
Sensitivity to relative error decreases rapidly at low luminance so a log response is not ideal there either. A power function may have infinite slope at 0 but at least it has a value, unlike a log response.

Quote

Perceptual uniformity is useful in image editing so that a given increment in the control will produce the same proportional change at the low end as at the high end. Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.

L*a*b* is much closer to a gamma encode like Adobe RGB than it is to a log encode. It's even more similar to sRGB, which has a significant linear lead-in ramp, though L*a*b* has both a higher gamma (3.0) and a larger lead-in ramp than sRGB.

Quote
When dealing with a wide range of luminances (e.g. HDR), gamma is abandoned and one goes over to log or linear floating point encoding as discussed by Ward in the quoted article.

Regards,

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 09, 2016, 10:56:41 am
As an aside, L*a*b* suffers from color shifts with scaling; a pure gamma encoding does not. One can move the curves slider in Photoshop to compensate for exposure without changing color in scene-referenced images (necessary for repro work). This is not the case with L*a*b*, and it is the main reason I do not prefer L*a*b* as a working space.
Title: Re: The terms "linearization" vs "calibration"
Post by: GrahamBy on May 09, 2016, 11:08:20 am
If you plot L* versus luminance, the response is nonlinear, but if you plot L* vs perceived brightness, the response is linear. L* compensates for the nonlinearity of human perception. Similarly, gamma encoding is nonlinear. The gamma encoding is performed at capture, and the inverse gamma function is applied on printing or display on the screen, such that the overall result is a linear representation of scene luminance.

Bill

Thank you. I was wondering if I'd lost my mind for a moment there.
Title: Re: The terms "linearization" vs "calibration"
Post by: GrahamBy on May 09, 2016, 11:20:44 am
A power function may have a slope that goes to infinity at zero but a log response is worse. The value itself goes to -infinity at zero.

Which is fine if you allow that zero illumination, like zero reflectance or zero absolute temperature, is never achieved. They only occur when people start making artificial choices of zero-points. That's why no one has a problem with using dB (20 * log_10 of sound pressure over a reference pressure) as a measure of acoustic level. And why, in the same field, everyone is happy to measure distortion as deviations from a linear relation of input to output, despite the logarithmic physiological response.
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 09, 2016, 11:34:30 am
Which is fine if you allow that zero illumination, like zero reflectance or zero absolute temperature, is never achieved. They only occur when people start making artificial choices of zero-points. That's why no one has a problem with using dB (20 * log_10 of sound pressure over a reference pressure) as a measure of acoustic level. And why, in the same field, everyone is happy to measure distortion as deviations from a linear relation of input to output, despite the logarithmic physiological response.

Sure, but zero illumination does exist. It just can't be represented on a log scale. Gamma (power) scales have no problem with it. Zero reflectance doesn't exist of course but a gamma function handles it just fine as does a log linear or hybrid (L*, sRGB) scale.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 09, 2016, 11:57:36 am
A power function may have a slope that goes to infinity at zero but a log response is worse. The value itself goes to -infinity at zero.
Sensitivity to relative error decreases rapidly at low luminance so a log response is not ideal there either. A power function may have infinite slope at 0 but at least it has a value, unlike a log response.

L*a*b* is much closer to a gamma encode like Adobe RGB than it is to a log encode. It's even more similar to sRGB, which has a significant linear lead-in ramp, though L*a*b* has both a higher gamma (3.0) and a larger lead-in ramp than sRGB.

I have already noted the limitations of gamma and L*a*b encodings as documented below, and you are quoting me out of context in an attempt to prove your point. The same limitation applies to a log encoding, but then zero luminance rarely occurs in practical photographic situations and the minimum value possible in a log encoding is sufficiently close to zero for practical use. Log encodings are successfully used for HDR along with floating point. Did you take the trouble to read the article by Greg Ward?

Regards,

Bill

Quote
However, gamma fails at low luminances where the slope approaches infinity as luminance approaches zero. For this reason, gamma encodings use a linear ramp at very low luminances.

Quote
Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 09, 2016, 12:31:01 pm
And this is why I've kept this fine post from Lars on the controversial subject of this kind of calibration (which at the time it was posted, to this day, hasn't been addressed by the Lstar proponents):

Lab attempts to be perceptually uniform but it's really not....


Quite true, but neither is a gamma function with an exponent of 1/2.2 or 1/2.5. Light contamination (flare) is present whether your monitor is calibrated to 1/2.2 or 1/2.5 as well as with L*a*b. In fact, if the images are viewed in a bright surround, Poynton suggests that an exponent of 1.1 or 1.2 would be more appropriate (Gamma FAQ). AdobeRGB uses 1/2.2 and ProPhotoRGB uses 1/1.8. The ProPhotoRGB spec assumes 0.5 to 1% viewing flare, and the sRGB spec stipulates 1% viewing flare and a typical ambient illumination of 200 lux. I don't know what L*a*b assumes, if anything.

In developing his BetaRGB space, Bruce Lindbloom concluded, " 'What value for gamma gives a companding function that most closely represents the CIE L* function (i.e. a uniform perceptual scale)?' We can explore this question by setting the input to L* and the output to Gamma 1 (or Gamma 2). A perfect match would be a straight line drawn on the diagonal. You can see that a gamma value of 2.2 is not too bad of a compromise because it roughly follows the diagonal:"

In practice I don't think it makes much difference whether you calibrate your monitor to L*a*b or 1/2.2. 1/2.2 is fine with me and is what I use, since that is what Spectraview uses.

Regards,

Bill
Title: Re: The terms "linearization" vs "calibration"
Post by: digitaldog on May 09, 2016, 12:33:29 pm
Quite true, but neither is a gamma function with an exponent of 1/2.2 or 1/2.5.
It doesn't promise to be AFAIK.
Quote
In practice I don't think it makes much difference whether you calibrate your monitor to L*a*b or 1/2.2. 1/2.2 is fine with me and is what I use, since that is what Spectraview uses.
Exactly but the Lstar calibration proponents seem to disagree without proving it.
Title: Re: The terms "linearization" vs "calibration"
Post by: bjanes on May 09, 2016, 01:56:03 pm
Quote
L*a*b* is much closer to a gamma encode like Adobe RGB than it is to a log encode. It's even more similar to sRGB, which has a significant linear lead-in ramp, though L*a*b* has both a higher gamma (3.0) and a larger lead-in ramp than sRGB.

L*a*b approximated by gamma = 3. I don't think so. According to Bruce Lindbloom it is closer to 2.2

Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 09, 2016, 04:42:53 pm
Quote
L*a*b* is much closer to a gamma encode like Adobe RGB than it is to a log encode. It's even more similar to sRGB, which has a significant linear lead-in ramp, though L*a*b* has both a higher gamma (3.0) and a larger lead-in ramp than sRGB.


L*a*b approximated by gamma = 3. I don't think so. According to Bruce Lindbloom it is closer to 2.2

And Bruce is right that L* approximates a pure gamma 2.2.

L* is a linear ramp tacked onto a gamma-3 scale. It has a bigger linear ramp than sRGB, which is why its approximate gamma, compared against a gamma-only curve, is close to 2.2. In much the same way, sRGB more closely matches a gamma 2.2 curve in spite of actually being a gamma 2.4 tacked onto a linear ramp; sRGB isn't as affected as L* because its linear ramp is smaller.
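This is easy to check numerically. A sketch comparing CIE L* (a cube root, i.e. gamma 3, with a linear toe, both rescaled to 0..1) against a pure 2.2 power curve:

```python
import numpy as np

def L_star(Y):
    # CIE L* rescaled to 0..1: cube root (gamma 3) above a small
    # threshold, with a linear ramp tacked on below it.
    Y = np.asarray(Y, dtype=float)
    return np.where(Y > 0.008856, 1.16 * np.cbrt(Y) - 0.16, 9.033 * Y)

Y = np.linspace(0.0, 1.0, 101)
diff = np.abs(L_star(Y) - Y ** (1 / 2.2))
# Despite the gamma-3 segment, the curve tracks a pure 2.2 power law
# to within a few L* units (on the 0..100 scale) everywhere.
print(round(float(diff.max()) * 100, 1), "L* units max gap")
```

So Lindbloom's "closest to 2.2" observation and the "gamma 3 plus ramp" description are both correct; they describe the same curve from different angles.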
Title: Re: The terms "linearization" vs "calibration"
Post by: Doug Gray on May 09, 2016, 06:38:54 pm
I have already noted the limitations of gamma and L*a*b encodings as documented below, and you are quoting me out of context in an attempt to prove your point. The same limitation applies to a log encoding, but then zero luminance rarely occurs in practical photographic situations and the minimum value possible in a log encoding is sufficiently close to zero for practical use. Log encodings are successfully used for HDR along with floating point. Did you take the trouble to read the article by Greg Ward?

Regards,

Bill

I'm not uncomfortable with log scaling. It's the basis for floating point representation in essentially all computer architectures. And it does provide consistent, relative error performance. I just don't believe it is necessary in photography or printing.

You were the one who pointed out that the slope of gamma curves becomes infinite at 0, as if that were significant. I was just noting that log scales have no value, let alone slope, at 0. At some point you have to truncate (or clip) a log scale. Neither of these facts impairs the ability of either to function. It would be interesting to construct an analysis of a log scale (say, using 16-bit floats) vs. 16-bit gamma 4 over possible HDR ranges. I think both would serve quite well.

And yes, gamma curves do not have a constant relative error per step change. Certainly 8-bit discrete gamma encoding is not going to work for HDR work, either in synthetic images or real image captures. But 16 bits does, with or without a linear front-end ramp; though I do not like linear front-end ramps on a gamma scale, as a simple scale-factor change can alter colors, which is not the case with pure gamma encoding.

Constant relative error is a useful property but it does not represent actual light physics. Shot-noise magnitude, for instance, tracks the square root of luminance and thus intrinsically will not produce constant relative error. Multi-exposure HDR techniques can produce more constant relative error, which allows wider adjustments in post. Still, it would be a rare HDR image that could not be encoded in 16-bit gamma with errors below the physical shot-noise limit. Unlike L* or sRGB scales, which are mixed linear/gamma, both log and gamma scales provide easy luminance scaling without shifting color: gamma scales by simply multiplying RGBs by a factor, while log scales accomplish the same thing by adding a factor.
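The scaling point can be verified in a couple of lines: in a pure-gamma encoding, an exposure change by a factor k becomes a single per-channel multiply by k^(1/γ), and the decoded channel ratios (hence color) come back untouched. The RGB values below are invented for illustration:

```python
# Pure gamma-2.2 encode of an arbitrary scene-referred color.
g = 2.2
rgb_linear = (0.4, 0.2, 0.1)
encoded = tuple(c ** (1 / g) for c in rgb_linear)

# Halving exposure in the encoded space is one multiply per channel...
k = 0.5
scaled = tuple(c * k ** (1 / g) for c in encoded)

# ...and decoding shows every channel returned at exactly k times its
# original linear value, so channel ratios (color) are preserved.
decoded = tuple(c ** g for c in scaled)
print([round(d / c, 6) for d, c in zip(decoded, rgb_linear)])  # [0.5, 0.5, 0.5]
```

A piecewise curve such as sRGB or L* breaks this as soon as any channel crosses between its linear toe and its power segment, which is the color-shift-with-scaling complaint above.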