
Author Topic: The terms "linearization" vs "calibration"  (Read 30921 times)

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: The terms "linearization" vs "calibration"
« Reply #60 on: May 08, 2016, 09:11:02 am »


No idea what a "linear curve" even means, but most likely the author meant some sort of smoothness or limitation on gradient change rates (caps on the second derivatives).


Doug, it's not clear what the source or context of your uncertainty is here, but as you most probably know, in general a set of multi-point relationships joined up in a line makes that line a curve - a straight line is one such curve, and it is called a linear curve.

L* is a non-linear curve; so when we hear discussions of linearity with respect to L*, my understanding is that we are talking about departures from L* that have a non-linear pattern; the more linear the pattern of input-output departures from that curve, the more correctly the L* curve will be respected by the output device. Whether that kind of calibration is appropriate for the linearization of a printer is another matter.

Back to the O/P's concern: linearization is a specific step in calibration. The "linearity" of a printer means "the degree to which changes in the control signals produce proportional changes in the printed color", and "linearization" is "the act of making a device linear (which is a specific form of calibration)", where the definition of "linear" is: "A simple relationship between stimulus and response, whereby, for example, doubling the stimulus produces double the response." (cf. Real World Color Management, Second Edition, by Fraser, Murphy and Bunting, pages 178, 179 and 545.)
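
Putting that definition into concrete terms, here is a minimal sketch (Python, with made-up measurement values purely for illustration) of what a linearity check looks like:

Code:
# Hypothetical control signals and measured outputs (e.g. luminance in cd/m^2).
controls = [0.1, 0.2, 0.4, 0.8]
measured = [12.0, 24.1, 47.8, 96.5]   # invented readings, not real data

# "Linear" means output/input is (nearly) constant:
# doubling the stimulus should double the response.
ratios = [m / c for c, m in zip(controls, measured)]
print(ratios)                      # all close to ~120
print(max(ratios) / min(ratios))   # close to 1.0 -> close to linear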
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: The terms "linearization" vs "calibration"
« Reply #61 on: May 08, 2016, 09:33:10 am »

L* is a non-linear curve; so when we hear discussions of linearity with respect to L*, my understanding is that we are talking about departures from L* that have a non-linear pattern; the more linear the pattern of input-output departures from that curve, the more correctly the L* curve will be respected by the output device. Whether that kind of calibration is appropriate for the linearization of a printer is another matter.

If you plot L* versus luminance, the response is nonlinear, but if you plot L* vs perceived brightness, the response is linear. The L* curve compensates for the nonlinearity of human perception. Similarly, gamma encoding is nonlinear. The gamma encoding is performed at capture, and the inverse gamma function is applied on printing or display on the screen, such that the overall result is a linear representation of scene luminance.
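
A minimal numeric sketch of that round trip (Python; a simple power function is assumed here rather than any particular camera's actual tone curve):

Code:
scene = [0.01, 0.18, 0.5, 1.0]   # hypothetical relative scene luminances

gamma = 2.2                           # assumed encoding gamma
encode = lambda y: y ** (1 / gamma)   # applied when the image is encoded
decode = lambda v: v ** gamma         # inverse applied on display/printing

reproduced = [decode(encode(y)) for y in scene]
print(reproduced)   # equals scene (within float rounding): the net result is linear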

Bill
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Re: The terms "linearization" vs "calibration"
« Reply #62 on: May 08, 2016, 09:45:33 am »

If you plot L* versus luminance, the response is nonlinear, but if you plot L* vs perceived brightness, the response is linear. The L* curve compensates for the nonlinearity of human perception. Similarly, gamma encoding is nonlinear. The gamma encoding is performed at capture, and the inverse gamma function is applied on printing or display on the screen, such that the overall result is a linear representation of scene luminance.

Bill

I agree with the first two sentences but I wonder about the third. It is my understanding that a digital camera sensor works in a linear manner; it is when we demosaic the data that we give it a non-linear gamma to make the photo correspond with human visual perception.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #63 on: May 08, 2016, 10:00:58 am »

Doug,

With a monitor's output in cd/m2, Linearization is achieved not by Calibration to whatever gamma but by the application of exactly the inverse function in color management. The calibrated gamma is counter-balanced by the 1/gamma-encoding upon conversion to the monitor profile.

The net result is a linear relationship (linear curve) between x: the RGB numbers of a grayscale in a linear gamma space, and y: the output luminance in cd/m2.

To a first order the calibrated gamma, whether 1.8, 2.2 or e.g. the L* TRC, is simply irrelevant. Only at second order can there be "bit precision effects", or let's call it "smoothness". For example, a regular 2.2 gamma, with its steep take-off in the deep shadows, is not a good idea once 8 bits come into play.


Now let's think about the printer again.
IMHO, the above-described "net linearity" should ultimately hold as well, now with y = the reflectance along the printed grayscale.

Again, there can be second-order effects which may make it desirable to calibrate the printed grayscale not strictly to this numerical linearity but to a brighter state with a more perceptual distribution of tones. However, that state is ultimately captured in the profile and therefore should be counter-balanced and eliminated in the course of color management – in a way that the net linearity is met again, and there is no net addition of brightness.


Peter
--

I agree with everything you just said. I should have restricted "linearization" to printer-specific operations involving steps in inking, which are anything but linear. However, the net result of calibration, "linearization," and profiling is indeed linear, at least in the colorimetric path.
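
A small sketch of that "net linearity" (Python; an idealized monitor with no quantization, and an assumed 160 cd/m2 white, purely for illustration): whatever gamma the display is calibrated to cancels out once color management applies the inverse on conversion to the monitor profile.

Code:
white_cdm2 = 160.0     # assumed calibrated white luminance
calib_gamma = 2.2      # could equally be 1.8 or an L*-type curve

# The calibrated display: video-card value v (0..1) -> luminance in cd/m^2.
display = lambda v: white_cdm2 * v ** calib_gamma

# Color management: converting a linear-gamma grayscale to the monitor
# space applies the inverse of the calibrated response.
to_monitor = lambda lin: lin ** (1 / calib_gamma)

for lin in [0.05, 0.25, 0.5, 1.0]:
    print(lin, display(to_monitor(lin)))   # always 160 * lin -> net linear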

Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #64 on: May 08, 2016, 10:35:32 am »

I agree with the first two sentences but I wonder about the third. It is my understanding that a digital camera sensor works in a linear manner; it is when we demosaic the data that we give it a non-linear gamma to make the photo correspond with human visual perception.

The demosaic process is linear. The subsequent process of gamma encoding provides an increase in dynamic range when mapping to a smaller bit space. If one has 16 bits or more of resolution one can encode strictly in linear space.
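
As a rough illustration of the bit-depth point (Python; an idealized pure 2.2 gamma is assumed, not any particular camera encoding): near a 1% grey, one code step of an 8-bit linear encoding is a huge relative jump, an 8-bit gamma encoding is much finer, and 16-bit linear is finer still.

Code:
def rel_step_near(y_target, bits, gamma):
    """Relative luminance jump between adjacent codes nearest y_target."""
    levels = 2 ** bits - 1
    code = round((y_target ** (1 / gamma)) * levels)
    y = lambda c: (c / levels) ** gamma
    return (y(code + 1) - y(code)) / y(code)

# One code step near a 1% grey (deep shadow):
print(rel_step_near(0.01, bits=8,  gamma=1.0))   # ~0.33 -> a 33% jump, bands badly
print(rel_step_near(0.01, bits=8,  gamma=2.2))   # ~0.07 -> a 7% jump
print(rel_step_near(0.01, bits=16, gamma=1.0))   # ~0.0015 -> well below visibility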

However, there is a non-linear mapping done when transforming from scene-referenced to output-referenced form, regardless of encoding gamma. This is the norm for photographs, except for reproduction work where the goal is a colorimetric response, in which case the scene-referenced form is retained.
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: The terms "linearization" vs "calibration"
« Reply #65 on: May 08, 2016, 11:25:46 am »

I agree with the first two sentences but I wonder about the third. It is my understanding that a digital camera sensor works in a linear manner; it is when we demosaic the data that we give it a non-linear gamma to make the photo correspond with human visual perception.

As I understand it, the gamma encoding is not to account for the non-linearity of vision but to improve gradation in the shadows. The gamma encoding is reversed on printing or viewing so that the luminances in the reproduction are the same as in the scene. This is necessary for the reproduction to be successful.

Bill
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #66 on: May 08, 2016, 01:27:37 pm »

As I understand it, the gamma encoding is not to account for the non-linearity of vision but to improve gradation in the shadows. The gamma encoding is reversed on printing or viewing so that the luminances in the reproduction are the same as in the scene. This is necessary for the reproduction to be successful.

Bill
It's actually both. Perception is more sensitive to absolute changes in luminosity at low levels and less sensitive at high levels.

For instance, a nit (cd/m^2) change from 90 to 91 is not perceptible, but a nit change from 5 to 6 is quite visible. A gamma encoding is one way to mitigate this when one only has 8 bits available. If one has 16 bits, then a gamma of 1 (linear) would be just fine.
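
Roughly the same comparison in perceptual terms (Python; a 100 cd/m2 white is assumed so the nit values map directly onto the 0-100 L* scale - just an illustrative assumption):

Code:
def Lstar(Y, Yw=100.0):
    """CIE 1976 lightness from luminance Y, given white luminance Yw."""
    t = Y / Yw
    return 116 * t ** (1/3) - 16 if t > (6/29) ** 3 else 903.3 * t

# One-nit steps at the bright and dark ends of a 100 cd/m^2 display:
print(Lstar(91) - Lstar(90))   # ~0.4 L* -> below a comfortable visibility threshold
print(Lstar(6)  - Lstar(5))    # ~2.7 L* -> an obvious step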
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: The terms "linearization" vs "calibration"
« Reply #67 on: May 08, 2016, 03:45:45 pm »

It's actually both. Perception is more sensitive to absolute changes in luminosity at low levels and less sensitive at high levels.

For instance, a nit (cd/m^2) change from 90 to 91 is not perceptible, but a nit change from 5 to 6 is quite visible. A gamma encoding is one way to mitigate this when one only has 8 bits available. If one has 16 bits, then a gamma of 1 (linear) would be just fine.

That is what I meant when I said gamma encoding is needed to improve gradation in the shadows. According to the Weber-Fechner law, human vision is approximately logarithmic and a roughly 1% difference in luminance is perceptible. A change from 5 to 6 is a 20% difference and shadow tones would exhibit banding, whereas a change from 90 to 91 is about a 1% difference and gradation would be smooth. Since the gamma encoding is reversed by the inverse gamma function for viewing or printing, the original scene luminance is restored (provided that no tone mapping is performed) and presented to the observer. Even if vision were linear, more bits would be needed to encode the shadows, and gamma encoding reduces the precision needed for a given dynamic range. For HDR, linear encoding with floating point is employed and gamma is not needed, just as you noted that linear is fine at 16 bits for low dynamic range.

With limited encoding precision, a gamma curve helps satisfy the Weber-Fechner requirement, and to this extent it does address the nonlinearity of human vision. However, to state that non-linear encoding is necessary to compensate for the non-linearity of vision is misleading. If the gamma encoding were not removed for viewing, the luminances would be compressed twice: once in encoding and again on viewing.

Does this make any sense?

Bill
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 609
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: The terms "linearization" vs "calibration"
« Reply #68 on: May 08, 2016, 07:46:08 pm »

No, that is not what I said, but that's how you interpreted it. You distinguish between calibration and profiling, but that is not how the standards organizations define calibration. The example of the thermometer is considered calibration by NIST, since the process establishes a relationship between the temperature readings and the standard, and this relationship is expressed in a lookup table which is read out manually.
Although both are derived from the same concepts, the word "calibrate" is actually a contraction of slightly different things in the scientific and color-device worlds. In the science world it implies "establish something's correspondence with a reference, and possibly as a result create a correction table or physically adjust it to conform to the standard". In the color-device world it is "adjust the device so that its response conforms to a target response", i.e. it implies an integrated process of establishing something's response, creating a correction for it, and applying that correction so that the device now responds as it is intended to.

One way of distinguishing between color calibration and profiling is how each scales: if you have M devices and N desired responses, then with calibration you need M x N calibration tables, but you only need M + N profiles (for example, 3 devices and 4 desired responses means 12 calibration tables versus 7 profiles).

Another difference between the world of science and color is that in science there is typically only one reference, while in color there are typically many different desired responses.
Quote
When I calibrate my NEC monitor with Spectraview, the results are recorded as a profile.
Recorded in the profile, as a supplemental tag. The calibration is along for the ride - nothing in ICC profiling knows anything about the calibration.
« Last Edit: May 08, 2016, 07:55:47 pm by GWGill »
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 609
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: The terms "linearization" vs "calibration"
« Reply #69 on: May 08, 2016, 07:49:23 pm »

Whether that kind of calibration is appropriate for the linearization of a printer is another matter.
Linearizing printer channel response in L*a*b* space works pretty well.
Logged

GWGill

  • Sr. Member
  • ****
  • Offline
  • Posts: 609
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: The terms "linearization" vs "calibration"
« Reply #70 on: May 08, 2016, 07:53:41 pm »

As I understand it, the gamma encoding is not to account for the non-linearity of vision but to improve gradation in the shadows.
Which is exactly the same thing. Because our vision is basically ratiometric (i.e. non-linear), we are more sensitive to changes in the shadows than the highlights. An encoding that spreads gradation errors evenly is one that is also close to perceptually uniform.
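
One way to put a number on that equivalence (Python sketch; idealized transfer curves, display white taken as L* 100): compare the worst-case lightness jump between adjacent 8-bit codes when the codes are spaced linearly in luminance versus spaced by a gamma curve.

Code:
def Lstar(t):
    """CIE lightness from relative luminance t (0..1)."""
    return 116 * t ** (1/3) - 16 if t > (6/29) ** 3 else 903.3 * t

def worst_step(gamma):
    Y = [(c / 255) ** gamma for c in range(256)]
    return max(Lstar(b) - Lstar(a) for a, b in zip(Y, Y[1:]))

print(worst_step(1.0))   # ~3.5 L*, between the darkest codes -> visible banding
print(worst_step(2.2))   # ~0.6 L*, in the deep shadows -> far more evenly spread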
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #71 on: May 08, 2016, 10:49:36 pm »

Linearizing printer channel response in L*a*b* space works pretty well.

It does. Not only because it is reasonably close to human perception but also because it is the input side of ICC printer profile 3DLUTs and non-matrix display profiles.
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: The terms "linearization" vs "calibration"
« Reply #72 on: May 09, 2016, 08:40:43 am »

Which is exactly the same thing. Because our vision is basically ratiometric (i.e. non-linear), we are more sensitive to changes in the shadows than the highlights. An encoding that spreads gradation errors evenly is one that is also close to perceptually uniform.

I am not getting my point across. Relative error is an essential parameter in any measurement system, whether the quantity being measured follows a linear, power, or log function. That is why we use CV (coefficient of variation, standard deviation/mean) rather than the standard deviation when discussing relative error. Weight is a linear function. When weighing a 100 kg football player a scale accurate to the nearest 0.5 kg is adequate, but this scale would not be appropriate for weighing a 2 kg premature infant. The relative errors would be 0.5% and 25% respectively. We need finer gradation at the low end even with a linear function.

Gamma, a power function, was originally introduced in electronic imaging to account for the nonlinearity of cathode ray tubes, and not to account for the non-linearity of the perception of luminance, which is approximately logarithmic (not a power function), but a side effect was that gamma improved gradation in the shadows. However, gamma fails at low luminances where the slope approaches infinity as luminance approaches zero. For this reason, gamma encodings use a linear ramp at very low luminances.
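
sRGB is the familiar example of that linear ramp; a sketch of its encoding (Python, standard published constants) shows the finite slope at zero:

Code:
def srgb_encode(y):
    """sRGB transfer curve: linear ramp near black, offset power curve above."""
    if y <= 0.0031308:
        return 12.92 * y                      # finite slope (12.92) at zero
    return 1.055 * y ** (1 / 2.4) - 0.055     # roughly gamma 2.2 overall

print(srgb_encode(0.0))     # 0.0, with a well-defined slope, unlike a pure power law
print(srgb_encode(0.002))   # on the linear ramp
print(srgb_encode(0.18))    # ~0.46, mid grey on the power-curve section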

Gamma encoding also fails when one is dealing with HDR imaging, where a log encoding yields constant relative error (see the encoding by Greg Ward). One can also improve relative error at the low end by brute force (using more significant digits, which can be wasteful since the greater precision is not needed at the high end) or by using floating-point notation.

Perceptual uniformity is useful in image editing so that a given increment in the control will produce the same proportional change at the low end as at the high end. Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.

When dealing with a wide range of luminances (e.g. HDR), gamma is abandoned and one goes over to log or linear floating point encoding as discussed by Ward in the quoted article.

Regards,

Bill
« Last Edit: May 09, 2016, 08:46:23 am by bjanes »
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20893
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: The terms "linearization" vs "calibration"
« Reply #73 on: May 09, 2016, 10:35:04 am »

Perceptual uniformity is useful in image editing so that a given increment in the control will produce the same proportional change at the low end as at the high end. Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.
And this is why I've kept this fine post from Lars on the controversial subject of this kind of calibration (which, from the time it was posted to this day, hasn't been addressed by the Lstar proponents):


Quote

Re: [Icc_users] L* workingspaces and L* output calibration
Tuesday, March 11, 2008 8:46:58 PM
From:Lars Borg <borg@adobe.com>

L* is great if you're making copies. However, in most other
scenarios, L* out is vastly different from L* in.  And when L* out is
different from L* in, an L* encoding is very inappropriate as
illustrated below.


Let me provide an example for video. Let's say you have a Macbeth
chart. On set, the six gray patches would measure around  L* 96, 81,
66, 51, 36, 21.


Assuming the camera is Rec.709 compliant, using a 16-235 digital
encoding, and the camera is set for the exposure of the Macbeth
chart, the video RGB values would be 224,183,145,109,76,46.


On a reference HD TV monitor they should reproduce at L* 95.5, 78.7,
62.2, 45.8, 29.6, 13.6.
If say 2% flare is present on the monitor (for example at home), the
projected values would be different again, here: 96.3, 79.9, 63.8,
48.4, 34.1, 22.5.


As you can see, L* out is clearly not the same as L* in.
Except for copiers, a system gamma greater than 1 is a required
feature for image reproduction systems aiming to please human eyes.
For example, film still photography has a much higher system gamma
than video.


Now, if you want an L* encoding for the video, which set of values
would you use:
96, 81, 66, 51, 36, 21 or
95.5, 78.7, 62.2, 45.8, 29.6, 13.6?
Either is wrong, when used in the wrong context.
If I need to restore the scene colorimetry for visual effects work, I
need 96, 81, 66, 51, 36, 21.
If I need to re-encode the HD TV monitor image for another device,
say a DVD, I need 95.5, 78.7, 62.2, 45.8, 29.6, 13.6.


In this context, using an L* encoding would be utterly confusing due
to the lack of common values for the same patches.  (Like using US
Dollars in Canada.)
Video solves this by not encoding in L*. (Admittedly, video encoding
is still somewhat confusing. Ask Charles Poynton.)


When cameras, video encoders, DVDs, computer displays, TV monitors,
DLPs, printers, etc., are not used for making exact copies, but
rather for the more common purpose of pleasing rendering, the L*
encoding is inappropriate as it will be a main source of confusion.


Are you planning to encode CMYK in L*, too?


Lars
Lab attempts to be perceptually uniform but it's really not....
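
Lars's numbers are easy to verify; a rough sketch (Python, cube-root branch of L* only, which is fine for these patches) that maps scene L* to luminance and then through the Rec.709 transfer function to 16-235 code values reproduces his figures:

Code:
def lstar_to_Y(L):
    """Relative luminance from L* (cube-root branch; valid for these patch values)."""
    return ((L + 16) / 116) ** 3

def rec709_oetf(Y):
    """Rec.709 camera transfer function."""
    return 4.5 * Y if Y < 0.018 else 1.099 * Y ** 0.45 - 0.099

for L in [96, 81, 66, 51, 36, 21]:              # Macbeth grey patches, scene L*
    code = round(16 + 219 * rec709_oetf(lstar_to_Y(L)))
    print(L, code)   # 224, 183, 145, 109, 76, 46 - matching Lars's values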
« Last Edit: May 09, 2016, 10:39:07 am by digitaldog »
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #74 on: May 09, 2016, 10:37:52 am »

I am not getting my point across. Relative error is an essential parameter in any measurement system, whether the quantity being measured follows a linear, power, or log function. That is why we use CV (coefficient of variation, standard deviation/mean) rather than the standard deviation when discussing relative error. Weight is a linear function. When weighing a 100 kg football player a scale accurate to the nearest 0.5 kg is adequate, but this scale would not be appropriate for weighing a 2 kg premature infant. The relative errors would be 0.5% and 25% respectively. We need finer gradation at the low end even with a linear function.

Gamma, a power function, was originally introduced in electronic imaging to account for the nonlinearity of cathode ray tubes, and not to account for the non-linearity of the perception of luminance, which is approximately logarithmic (not a power function), but a side effect was that gamma improved gradation in the shadows. However, gamma fails at low luminances where the slope approaches infinity as luminance approaches zero. For this reason, gamma encodings use a linear ramp at very low luminances.

A power function may have a slope that goes to infinity at zero but a log response is worse. The value itself goes to -infinity at zero.

Quote

Gamma encoding also fails when one is dealing with HDR imaging, where a log encoding yields constant relative error (see the encoding by Greg Ward). One can also improve relative error at the low end by brute force (using more significant digits, which can be wasteful since the greater precision is not needed at the high end) or by using floating-point notation.
Sensitivity to relative error decreases rapidly at low luminance so a log response is not ideal there either. A power function may have infinite slope at 0 but at least it has a value, unlike a log response.

Quote

Perceptual uniformity is useful in image editing so that a given increment in the control will produce the same proportional change at the low end as at the high end. Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.

L*a*b* is much closer to a gamma encode like Adobe RGB than it is to a log encode. It's even more similar to sRGB, which has a significant linear lead-in ramp, though L*a*b* has both a higher gamma (3.0) and a larger lead-in ramp than sRGB.
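
A quick side-by-side of the two transfer curves (Python sketch; "gamma" here meaning the decode exponent applied to relative luminance) shows both the similar shapes and the different linear-ramp break points:

Code:
# L*: linear below Y = (6/29)^3 (~0.0089), cube root (decode exponent 3.0) above.
# sRGB: linear below Y = 0.0031308, exponent 2.4 with an offset above (~2.2 overall).
def lstar(Y):
    return 903.3 * Y if Y <= (6/29) ** 3 else 116 * Y ** (1/3) - 16

def srgb(Y):
    return 12.92 * Y if Y <= 0.0031308 else 1.055 * Y ** (1/2.4) - 0.055

print((6/29) ** 3, 0.0031308)   # linear ramps end at ~0.0089 (L*) vs ~0.0031 (sRGB)
for Y in [0.001, 0.01, 0.18, 0.5, 1.0]:
    print(Y, round(lstar(Y) / 100, 3), round(srgb(Y), 3))   # similar values throughout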

Quote
When dealing with a wide range of luminances (e.g. HDR), gamma is abandoned and one goes over to log or linear floating point encoding as discussed by Ward in the quoted article.

Regards,

Bill
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #75 on: May 09, 2016, 10:56:41 am »

As an aside, L*a*b* suffers from color shifts with scaling; a pure gamma encoding does not. One can move the curves slider in Photoshop to compensate for exposure without changing color in scene-referenced images (necessary for repro work). This is not the case with L*a*b*, and it is the main reason I do not prefer L*a*b* as a working space.
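
A small sketch of that scaling argument (Python; a pure 2.2 gamma versus the CIE L* curve, not tied to any particular working space): halving exposure multiplies every pure-gamma value by the same constant, so a single curves move undoes it exactly, whereas L* values move by different ratios at different tone levels.

Code:
def lstar(Y):
    return 903.3 * Y if Y <= (6/29) ** 3 else 116 * Y ** (1/3) - 16

gamma = 2.2
k = 0.5                            # a one-stop exposure drop
for Y in [0.02, 0.18, 0.5, 0.9]:   # relative scene luminances
    g_ratio = (k * Y) ** (1 / gamma) / Y ** (1 / gamma)   # constant ~0.73 for all tones
    l_ratio = lstar(k * Y) / lstar(Y)                     # ~0.58 ... 0.76, varies by tone
    print(round(g_ratio, 3), round(l_ratio, 3))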
Logged

GrahamBy

  • Sr. Member
  • ****
  • Offline
  • Posts: 1813
    • Some of my photos
Re: The terms "linearization" vs "calibration"
« Reply #76 on: May 09, 2016, 11:08:20 am »

If you plot L* versus luminance, the response is nonlinear, but if you plot L* vs perceived brightness, the response is linear. The L* curve compensates for the nonlinearity of human perception. Similarly, gamma encoding is nonlinear. The gamma encoding is performed at capture, and the inverse gamma function is applied on printing or display on the screen, such that the overall result is a linear representation of scene luminance.

Bill

Thank you. I was wondering if I'd lost my mind for a moment there.
Logged

GrahamBy

  • Sr. Member
  • ****
  • Offline
  • Posts: 1813
    • Some of my photos
Re: The terms "linearization" vs "calibration"
« Reply #77 on: May 09, 2016, 11:20:44 am »

A power function may have a slope that goes to infinity at zero but a log response is worse. The value itself goes to -infinity at zero.

Which is fine if you allow that zero illumination, like zero reflectance or zero absolute temperature, is never achieved. Such zeros only appear when people start making artificial choices of zero-points. That's why no one has a problem with using dB (20 * log_10(sound pressure in Pascals / reference pressure), equivalently 10 * log_10 of the intensity ratio) as a measure of acoustic level. And why, in the same field, everyone is happy to measure distortion as deviations from a linear relation of input to output, despite the logarithmic physiological response.
Logged

Doug Gray

  • Sr. Member
  • ****
  • Offline
  • Posts: 2205
Re: The terms "linearization" vs "calibration"
« Reply #78 on: May 09, 2016, 11:34:30 am »

Which is fine if you allow that zero illumination, like zero reflectance or zero absolute temperature, is never achieved. Such zeros only appear when people start making artificial choices of zero-points. That's why no one has a problem with using dB (20 * log_10(sound pressure in Pascals / reference pressure), equivalently 10 * log_10 of the intensity ratio) as a measure of acoustic level. And why, in the same field, everyone is happy to measure distortion as deviations from a linear relation of input to output, despite the logarithmic physiological response.

Sure, but zero illumination does exist; it just can't be represented on a log scale. Gamma (power) scales have no problem with it. Zero reflectance doesn't exist of course, but a gamma function handles it just fine, as does a log-linear or hybrid (L*, sRGB) scale.
« Last Edit: May 09, 2016, 06:29:21 pm by Doug Gray »
Logged

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Re: The terms "linearization" vs "calibration"
« Reply #79 on: May 09, 2016, 11:57:36 am »

A power function may have a slope that goes to infinity at zero but a log response is worse. The value itself goes to -infinity at zero.
Sensitivity to relative error decreases rapidly at low luminance so a log response is not ideal there either. A power function may have infinite slope at 0 but at least it has a value, unlike a log response.

L*a*b* is much closer to a gamma encode like Adobe RGB than it is to a log encode. It's even more similar to sRGB, which has a significant linear lead-in ramp, though L*a*b* has both a higher gamma (3.0) and a larger lead-in ramp than sRGB.

I have already noted the limitations of gamma and L*a*b encodings as documented below, and you are quoting me out of context in an attempt to prove your point. The same limitation applies to a log encoding, but then zero luminance rarely occurs in practical photographic situations and the minimum value possible in a log encoding is sufficiently close to zero for practical use. Log encodings are successfully used for HDR along with floating point. Did you take the trouble to read the article by Greg Ward?

Regards,

Bill

Quote
However, gamma fails at low luminances where the slope approaches infinity as luminance approaches zero. For this reason, gamma encodings use a linear ramp at very low luminances.

Quote
Many critical users calibrate their monitors to L*a*b rather than gamma, since L*a*b is designed to be perceptually uniform. However, a linear ramp is still needed at low luminances.
Logged