
Author Topic: Does a raw file have a color space?  (Read 190222 times)

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #180 on: January 28, 2008, 10:13:20 am »

Quote
Since the CIE XYZ space lacks a white point, ...

I thought white was X=Y=Z; or is there a distinction between the terms "white" and "white point"?

And BTW Bill, I don't have any special qualifications or claim to authority for this discussion.  Just eager to learn.
emil

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Does a raw file have a color space?
« Reply #181 on: January 28, 2008, 10:40:17 am »

Quote
I thought white was X=Y=Z; or is there a distinction between the terms "white" and "white point"?


Emil,

I do not think X=Y=Z=white. The XYZ space is not perceptually uniform, unlike L*a*b and the common matrix spaces. See Wikipedia: http://en.wikipedia.org/wiki/CIE_1931_color_space

I don't know the answer to your question about "white" and "white point".

Quote
And BTW Bill, I don't have any special qualifications or claim to authority for this discussion.  Just eager to learn.

I think you are being too modest. A professor of physics at the University of Chicago is more likely to have a grasp of scientific concepts than the average photographer on this forum. You can explain the science as few others in this thread can, and others can fill in on color theory, which is not your specialty.

Bill

Let the readers judge for themselves:
[a href=\"http://theory.uchicago.edu/~ejm/]Emil Martinec[/url]
Sensor Analysis

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Does a raw file have a color space?
« Reply #182 on: January 28, 2008, 11:00:17 am »

If in CIE colour space we choose 2 colours, any colour that is on the straight line connecting those 2 colours can be obtained by mixing them. No colour that is off that line can be obtained by mixing those 2 colours.
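A quick numeric check (the XYZ triples are made up purely for illustration): additive mixture is just addition in XYZ, and every mixture's chromaticity lands on the straight segment joining the two endpoint chromaticities.

```python
import numpy as np

def xy(XYZ):
    # CIE chromaticity coordinates: x = X/(X+Y+Z), y = Y/(X+Y+Z)
    return XYZ[:2] / XYZ.sum()

c1 = np.array([20.0, 30.0, 50.0])   # made-up XYZ colour 1
c2 = np.array([60.0, 40.0, 10.0])   # made-up XYZ colour 2

# Additive mixing is addition in XYZ; each mixture's chromaticity
# falls on the straight segment between xy(c1) and xy(c2).
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(xy((1 - t) * c1 + t * c2))
```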

Another thing here:
To start with, please look at the spectral sensitivity characteristics of a sensor. For a numeric example we will use the curves of the SONY ICX285AQ; see p.7 of the SONY publication
http://products.sel.sony.com/semi/PDF/ICX285AQ.pdf

Digitizing the curves on page 7, we will have a table similar to:
400nm 450nm 500nm 550nm 600nm 650nm 700nm
R 0.03 0.02 0.04 0.07 0.96 0.94 0.82
G 0.03 0.13 0.56 0.9 0.31 0.04 0.15
B 0.22 0.66 0.55 0.04 0 0.01 0.01

Now let's take 4 wavelengths, 450, 500, 550, and 600nm, and play with them a little. Can we find a source that emits at 450 and 550nm that will provide a response equivalent to another source of light emitting at 500 and 600nm? Solving a set of simultaneous linear equations, we see that the sensor will respond to the first source, emitting 65.4mW at 450nm and 41.6mW at 550nm, exactly the same way as to the second source, emitting 81.5mW at 500nm and 1mW at 600nm.
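For those who want to check the arithmetic, a small numpy sketch of that solve, using the digitized sensitivities from the table above (the 600nm power is fixed at 1mW; the other three powers are the unknowns):

```python
import numpy as np

# Digitized ICX285AQ sensitivities at the four wavelengths used here
#             450nm  500nm  550nm  600nm
S = np.array([[0.02, 0.04, 0.07, 0.96],   # R
              [0.13, 0.56, 0.90, 0.31],   # G
              [0.66, 0.55, 0.04, 0.00]])  # B

# Source 1 emits at 450 and 550nm, source 2 at 500 and 600nm. Fix the
# 600nm power at 1mW and require equal R, G, B responses:
#   p450*S[:,0] + p550*S[:,2] - p500*S[:,1] = 1.0*S[:,3]
A = np.column_stack([S[:, 0], S[:, 2], -S[:, 1]])
p450, p550, p500 = np.linalg.solve(A, S[:, 3])
print(p450, p550, p500)   # ~65.4, 41.6, 81.5 mW, as quoted above
```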

Further, it is easy to see that we can find an infinite number of mixtures of those 4 wavelengths that will produce exactly the same sensor response. That means that for the ICX285AQ a lot of colours between cyanish blue and cyanish green will trigger the same sensor response.

Reproduction of many shades of red is even more challenging than that. With the series of wavelengths 400, 500, 600, and 700nm, this sensor gives the same response for orange, brown, and even some shades of green.

The above problem is accentuated when shooting conditions are far from the sensor's native colour temperature, and with "wrong" exposure. Yes, the resulting colour depends not only on the white balance, but on exposure too. Speaking of ETTR...

It is important to note that this kind of metamerism is very far from the way human perception tends to interpret colours. The example above uses colours where humans see distinctly different colours. To prove this, one can compute the tristimulus values as per CIE.
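A sketch of that computation (the colour-matching values are approximate CIE 1931 2-degree observer entries at the four wavelengths; real work would use the full tabulated functions). The two sources that are indistinguishable to the sensor come out far apart for the CIE observer:

```python
import numpy as np

# Approximate CIE 1931 observer values at 450, 500, 550, 600nm
cmf = np.array([[0.3362, 0.0049, 0.4334, 1.0622],   # xbar
                [0.0380, 0.3230, 0.9950, 0.6310],   # ybar
                [1.7721, 0.2720, 0.0087, 0.0008]])  # zbar

src1 = np.array([65.4, 0.0, 41.6, 0.0])   # mW at 450 and 550nm
src2 = np.array([0.0, 81.5, 0.0, 1.0])    # mW at 500 and 600nm

# Identical for the ICX285AQ, yet very different tristimulus values:
print(cmf @ src1)   # ~[ 40.0, 43.9, 116.3 ]
print(cmf @ src2)   # ~[  1.5, 27.0,  22.2 ]
```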

Here are two examples of the rendition of the ambiguous colours mentioned above - this particular sensor will render them into the same numbers, and demosaicing will further interpret them as the same colour:

cyanish blue to cyanish green:


orange-brown-green:


You can have a better view in a colour-savvy application. Images are in sRGB colour space.

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Does a raw file have a color space?
« Reply #183 on: January 28, 2008, 11:04:25 am »

In Lab, L is perceptually more or less uniform, especially with corrected conversion constants. "a" and "b" are not - you can see that when changing the saturation. More on the issues:
http://brucelindbloom.com/LContinuity.html
http://brucelindbloom.com/UPLab.html

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
Does a raw file have a color space?
« Reply #184 on: January 28, 2008, 11:14:42 am »

Quote
I thought white was X=Y=Z; or is there a distinction between the terms "white" and "white point"?

One could say that CIE has a "white line" and a "white region". The curve with the numbers on it in the attached image represents the Planckian locus, the locus of colour temperatures of black bodies. Other light sources are in the "white" region, characterized not solely by temperature but by tint as well.
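As an aside, McCamy's approximation recovers a correlated colour temperature from 1931 chromaticity coordinates, i.e. it maps a point in that "white region" back to (roughly) the nearest spot on the Planckian locus; a minimal sketch:

```python
def mccamy_cct(x, y):
    # Correlated colour temperature (K) from CIE 1931 chromaticity,
    # per McCamy's cubic approximation
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

print(mccamy_cct(0.3127, 0.3290))   # D65 white -> ~6500K
```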
Gabor

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #185 on: January 28, 2008, 11:35:09 am »

Quote
I suppose what I am trying to get at is that the camera raw data is color data that is no more or less valid than CIE color data insofar as it is related to the color data of human vision; it is just less standardized.  Because the three numbers comprising that color data represent averages over SPD's convolved with SRF's, one cannot map one set of color data uniquely to the other.   Constructing a map amounts to choosing a convention rather than deriving a rigorous relation; one tries to construct the map so that the map is roughly accurate with respect to a wide variety of SPD's (the aforementioned optimization problem).

As I see it, the distinction between RAW data and CIE color data (I'm assuming you mean LAB or defined RGB spaces like ProPhoto, sRGB, etc) is the white balance issue. With ProPhoto RGB, we can unambiguously plot the coordinates of a given pixel's color value in LAB space. But with RAW data, we cannot, until we choose a WB value to put the RAW RGB data in the proper context (distinguishing between a white wall lit with orange light and an orange wall lit with white light).

RAW data requires 3 data sets to map a specific pixel value to a point in LAB space: the spectral response of the camera sensor and filter array, the spectral composition of the lighting, and the RAW data itself. In contrast, only 2 data sets are needed for sRGB or ProPhoto data: the ICC profile, and the RGB image data.
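A toy illustration of that ambiguity, with made-up numbers rather than any real camera's data:

```python
import numpy as np

raw = np.array([0.80, 0.55, 0.30])   # one hypothetical raw R, G, B triple

# White balance is a per-channel scaling expressing an assumption about
# the light; the same raw numbers read as a white wall under an orange
# lamp, or as an orange wall under a white lamp.
orange_light = np.array([0.80, 0.55, 0.30])  # camera response to the lamp
white_light  = np.array([1.00, 1.00, 1.00])

print(raw / orange_light)   # [1.0, 1.0, 1.0]   -> white wall, orange lamp
print(raw / white_light)    # [0.8, 0.55, 0.3]  -> orange wall, white lamp
```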

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Does a raw file have a color space?
« Reply #186 on: January 28, 2008, 12:04:11 pm »

Quote
As I see it, the distinction between RAW data and CIE color data (I'm assuming you mean LAB or defined RGB spaces like ProPhoto, sRGB, etc) is the white balance issue.

White balance is per-channel exposure. You can't restore values with white balance if some parts of the spectrum are not recorded, due to the nature of the capture process. And that is always the case.

Raw data also can't be unambiguously mapped to Lab because of metamerism. Good conversion needs to take into account true exposure (light values, independent of ISO trickery), as it affects the mapping.
« Last Edit: January 28, 2008, 12:04:53 pm by Iliah »

papa v2.0

  • Full Member
  • ***
  • Offline
  • Posts: 206
Does a raw file have a color space?
« Reply #187 on: January 28, 2008, 12:05:33 pm »

Quote
RAW data requires 3 data sets to map a specific pixel value to a point in LAB space: the spectral response of the camera sensor and filter array, the spectral composition of the lighting, and the RAW data itself. In contrast, only 2 data sets are needed for sRGB or ProPhoto data: the ICC profile, and the RGB image data.

Mmm, what do you mean by the "spectral composition of the lighting"?

This is not intrinsically available from sensor data. This is one of the problems in digital imaging: illuminant estimation.

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Does a raw file have a color space?
« Reply #188 on: January 28, 2008, 12:15:31 pm »

Quote
In Lab, L is perceptually more or less uniform, especially with corrected conversion constants. "a" and "b" are not - you can see that when changing the saturation. More on the issues:

http://brucelindbloom.com/LContinuity.html
http://brucelindbloom.com/UPLab.html

Iliah, thanks for the links and your input.


[a href=\"http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html]Poynton[/url] discusses perceptual uniformity in his Color FAQ and L*a*b was an attempt to achieve perceptual uniformity. As discussed previously in this thread, mathematical models often are less than perfect in describing the behavior of a device, much less human perception.

"Finding a transformation of XYZ into a reasonably perceptually-uniform space consumed a decade or more at the CIE and in the end no single system could be agreed. So the CIE standardized two systems, L*u*v* and L*a*b*, sometimes written CIELUV and CIELAB. (The u and v are unrelated to video U and V.) Both L*u*v* and L*a*b* improve the 80:1 or so perceptual nonuniformity of XYZ to about 6:1."

Nonetheless, when one is editing in L*a*b, a color with a* = b* = 0 is taken to be neutral. This is not the case with CIE XYZ.

Bill

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #189 on: January 28, 2008, 12:37:53 pm »

Quote
As I see it, the distinction between RAW data and CIE color data (I'm assuming you mean LAB or defined RGB spaces like ProPhoto, sRGB, etc) is the white balance issue. With ProPhoto RGB, we can unambiguously plot the coordinates of a given pixel's color value in LAB space. But with RAW data, we cannot, until we choose a WB value to put the RAW RGB data in the proper context (distinguishing between a white wall lit with orange light and an orange wall lit with white light).

RAW data requires 3 data sets to map a specific pixel value to a point in LAB space: the spectral response of the camera sensor and filter array, the spectral composition of the lighting, and the RAW data itself. In contrast, only 2 data sets are needed for sRGB or ProPhoto data: the ICC profile, and the RGB image data.

No, I meant XYZ color space.  After reading a bit more, I think I see what you are getting at; the transformation between XYZ and LAB color coordinates requires not just the XYZ coordinates of the sampled light, but some reference coordinates X', Y', Z' of the light source used to illuminate the scene.  

I have to say the latter seems to me not well-defined. I sit here in a room with incandescent sources, there is the light emitted by my computer screen, and sunlight is coming in through the window; suppose I also used flash to take a picture of the scene. The light coming into different points on the camera sensor, or different parts of my retina as I view the scene, will have come from quite different superpositions of these disparate sources with different SPD's, and there is nothing I would call "the" XYZ values of "the" light source.

But putting this quibble aside, transforming XYZ values to LAB does indeed involve another piece of data, the XYZ coordinates of "the" light source. That would be true even for a hypothetical device whose spectral response was precisely the CIE's spectral response functions that define X, Y, and Z; such a device would still require an X'Y'Z' of "the light source" to map the X, Y, Z that it records to a set of L, A, B values. And yet XYZ is called a color space. Various RGB color spaces incorporate this data implicitly by defining themselves with respect to a reference light source (e.g. D65 or D50); so they don't eliminate this piece of information, they incorporate it into their very definition. But before introducing this extra complication, the hypothetical CIE-sensor records the scene directly in XYZ color space. The need to define and use the X'Y'Z' of the light source only comes later, when you want to start processing this color data in a manner that is consistent with the way the brain processes its tristimulus data.
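To make that concrete, a minimal sketch of the standard XYZ-to-L*a*b* mapping; the `white` argument is exactly the extra X'Y'Z' datum under discussion, and the same XYZ triple lands on different Lab values depending on which white is assumed:

```python
import numpy as np

def xyz_to_lab(XYZ, white):
    # CIE L*a*b* from XYZ; `white` holds the X'Y'Z' of the reference
    # light source -- the extra piece of data the mapping requires
    eps, kappa = 216 / 24389, 24389 / 27   # standard CIE constants
    def f(t):
        return np.cbrt(t) if t > eps else (kappa * t + 16) / 116
    fx, fy, fz = (f(t) for t in XYZ / white)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

xyz = np.array([0.3, 0.4, 0.2])                            # some sample
print(xyz_to_lab(xyz, np.array([0.95047, 1.0, 1.08883])))  # D65 white
print(xyz_to_lab(xyz, np.array([0.96422, 1.0, 0.82521])))  # D50 white
```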

And the non-CIE sensor in a DSLR seems no different in principle.    

The important issue seems to me that of the difference in spectral response functions among human vision, CIE, and camera sensor.  Iliah's examples illustrate that beautifully.
« Last Edit: January 28, 2008, 12:47:03 pm by ejmartin »
emil

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #190 on: January 28, 2008, 03:31:42 pm »

Quote
This is not intrinsically available from sensor data. This is one of the problems in digital imaging: illuminant estimation.

My point exactly. In order to convert a RAW, we must estimate the characteristics of the lighting before we can meaningfully convert the RAW data to LAB or a standard RGB color space.
« Last Edit: January 28, 2008, 03:35:11 pm by Jonathan Wienke »

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #191 on: January 29, 2008, 06:17:48 am »

Quote
No, I meant XYZ color space.  After reading a bit more, I think I see what you are getting at; the transformation between XYZ and LAB color coordinates requires not just the XYZ coordinates of the sampled light, but some reference coordinates X', Y', Z' of the light source used to illuminate the scene. 

I have to say the latter seems to me not well-defined. I sit here in a room with incandescent sources, there is the light emitted by my computer screen, and sunlight is coming in through the window; suppose I also used flash to take a picture of the scene. The light coming into different points on the camera sensor, or different parts of my retina as I view the scene, will have come from quite different superpositions of these disparate sources with different SPD's, and there is nothing I would call "the" XYZ values of "the" light source.

I've made exteriors of buildings under mixed light sources with wildly varying SPDs - late-evening twilight, fluorescent, incandescent, sodium vapor, and tiki torches all in the same frame. The only way to get "natural looking" color under such circumstances is to do a RAW conversion balanced specifically for each light source, and then manually blend each conversion together into a single final image.

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
Does a raw file have a color space?
« Reply #192 on: January 29, 2008, 07:26:26 pm »

Quote
I've made exteriors of buildings under mixed light sources with wildly varying SPDs - late-evening twilight, fluorescent, incandescent, sodium vapor, and tiki torches all in the same frame. The only way to get "natural looking" color under such circumstances is to do a RAW conversion balanced specifically for each light source, and then manually blend each conversion together into a single final image.

Only if you can't turn each light on independently. If you have the power to control the lights, you can take a separate exposure for each light, balance it the way you want it, and then add them together.

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #193 on: January 29, 2008, 10:40:41 pm »

Quote
I've made exteriors of buildings under mixed light sources with wildly varying SPDs - late-evening twilight, fluorescent, incandescent, sodium vapor, and tiki torches all in the same frame. The only way to get "natural looking" color under such circumstances is to do a RAW conversion balanced specifically for each light source, and then manually blend each conversion together into a single final image.


This example illustrates the importance of separating the photometry of the scene from the image processing to be undertaken later. In vision (perhaps I'm oversimplifying here), the eye is the sensor (photometer) and the brain is the post-processing. IMO it seems unreasonable to ask the DSLR sensor to be both eye and brain; rather we should only ask it to be the eye, and thus carry out the photometric task and forget about "the vision thing", leaving post-processing in Photoshop (such as in your example) to supplant the role of the brain in interpreting the scene. The camera sensor should hardly be blamed for any shortcomings of post-processing software in rendering the spectral data in a way that parallels human vision.

This demarcation is mimicked (albeit in cartoon form) by the distinction between the XYZ and LAB color spaces. XYZ is more photometric -- an average of the spectral power distribution of the light signal against the spectral response functions of the color receptors, with no judgments about the light source required. LAB is obtained from XYZ through a function of X/X', Y/Y', Z/Z', where X'Y'Z' are the color coordinates of the light source; this already brings in elements of post-processing, and a rough attempt to mimic the way the brain processes the tristimulus data that the eye presents to it, by incorporating the way the brain responds to ambient lighting and contextual data.

It seems to me more important (for the purpose of color accuracy) that the DSLR sensor spectral response parallel as closely as possible that of the eye** (for instance having roughly similar degeneracies, i.e. metamers), and less important to worry about how the sensor is supposed to infer the structure of the light source from this data.

With similar responses, the camera sensor matches the tristimulus data of the eye well, and becomes a color space closely approximating XYZ.  Having that in hand, future improvements in post-processing software, fed by advances in the science of vision, can translate that data into a close facsimile of the way the brain interprets the data it receives from the eye, so that we can reproduce it accurately for our eyes to enjoy.





** though I'm sure that devotees of IR and astro photography will heartily disagree!
« Last Edit: January 29, 2008, 10:41:35 pm by ejmartin »
emil

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Does a raw file have a color space?
« Reply #194 on: January 30, 2008, 08:41:15 am »

Quote
This example illustrates the importance of separating the photometry of the scene from the image processing to be undertaken later. In vision (perhaps I'm oversimplifying here), the eye is the sensor (photometer) and the brain is the post-processing. IMO it seems unreasonable to ask the DSLR sensor to be both eye and brain; rather we should only ask it to be the eye, and thus carry out the photometric task and forget about "the vision thing", leaving post-processing in Photoshop (such as in your example) to supplant the role of the brain in interpreting the scene. The camera sensor should hardly be blamed for any shortcomings of post-processing software in rendering the spectral data in a way that parallels human vision.

This demarcation is mimicked (albeit in cartoon form) by the distinction between the XYZ and LAB color spaces. XYZ is more photometric -- an average of the spectral power distribution of the light signal against the spectral response functions of the color receptors, with no judgments about the light source required. LAB is obtained from XYZ through a function of X/X', Y/Y', Z/Z', where X'Y'Z' are the color coordinates of the light source; this already brings in elements of post-processing, and a rough attempt to mimic the way the brain processes the tristimulus data that the eye presents to it, by incorporating the way the brain responds to ambient lighting and contextual data.

It seems to me more important (for the purpose of color accuracy) that the DSLR sensor spectral response parallel as closely as possible that of the eye** (for instance having roughly similar degeneracies, i.e. metamers), and less important to worry about how the sensor is supposed to infer the structure of the light source from this data.

With similar responses, the camera sensor matches the tristimulus data of the eye well, and becomes a color space closely approximating XYZ.  Having that in hand, future improvements in post-processing software, fed by advances in the science of vision, can translate that data into a close facsimile of the way the brain interprets the data it receives from the eye, so that we can reproduce it accurately for our eyes to enjoy.
** though I'm sure that devotees of IR and astro photography will heartily disagree!


Even with colorimetric rendering, where the camera XYZ closely approximates the tristimulus response of the eye, Wandell et al (http://www.athle.com/asp.net/main.medias/display.aspx?mediaid=17272&section=52&day=0&month=0&year=0&mode=) note that the color reproduction will be accurate only when the original and reproduction are viewed under similar conditions, including surround, ambient lighting, and field of view. Since the actual scene is being reproduced, presumably "white balance" considerations are eliminated, as in Emil's model.

If the white point needs adjustment, there are further complications, as Bruce Lindbloom outlines in the article on chromatic adaptation on his web site. He concludes, "...You can see from the table that Bradford is superior to von Kries, which in turn is superior to XYZ Scaling. You can also see that the adaptation is only an approximation to the true value, and that this approximation is worse when the two reference illuminants are very different from each other. Adaptation also becomes progressively less perfect as the color is farther away from neutral."
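For the curious, a minimal sketch of the Bradford adaptation Lindbloom describes (the cone matrix is the one tabulated on his site; swapping it for the identity matrix gives the inferior XYZ Scaling he mentions):

```python
import numpy as np

# Bradford cone-response matrix, as tabulated by Lindbloom
M = np.array([[ 0.8951,  0.2664, -0.1614],
              [-0.7502,  1.7135,  0.0367],
              [ 0.0389, -0.0685,  1.0296]])

def adapt(XYZ, src_white, dst_white):
    # Scale in cone space by the ratio of destination to source white,
    # then transform back to XYZ (a von Kries-style adaptation)
    gain = (M @ dst_white) / (M @ src_white)
    return np.linalg.inv(M) @ (gain * (M @ XYZ))

D65 = np.array([0.95047, 1.0, 1.08883])
D50 = np.array([0.96422, 1.0, 0.82521])
print(adapt(np.array([0.3, 0.4, 0.2]), D65, D50))
```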

Wandell also discusses non-colorimetric sensors, where a 3x3 matrix conversion will give only an approximate result. He mentions simple linear transformations that vary smoothly with the input data (interpolation -- presumably what is done with the ICC camera profiles used by some raw converters). He also mentions non-linear polynomial functions, and methods based on simple neural networks. Another approach is to use memory colors (e.g. flesh tones, foliage, blue sky) as a reference for further adjustment. Some of these methods are proprietary.

Bill

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #195 on: January 30, 2008, 08:54:15 am »

Quote
Only if you can't turn each light on independently. If you have the power to control the lights, you can take a separate exposure for each light, balance it the way you want it, and then add them together.

That wasn't even remotely possible, as two of the 5 light sources were ambient outdoor light and sodium-vapor streetlights, and I was shooting the restaurant with actual customers dining...



Your suggestion doesn't really save any time, as the most time-consuming part is manually blending all of the pieces together either way.

John Sheehy

  • Sr. Member
  • ****
  • Offline
  • Posts: 838
Does a raw file have a color space?
« Reply #196 on: January 30, 2008, 09:09:44 am »

Quote
That wasn't even remotely possible,

but sometimes it is, so it is good to keep it in mind.

Quote
Your suggestion doesn't really save any time, as the most time-consuming part is manually blending all of the pieces together either way.

You can never get a blend with all the lights in one scene exactly right. Every surface has a complex blend of the lighting. Exposing each light color separately guarantees the natural complexity.

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #197 on: January 30, 2008, 09:41:37 am »

I stand by what I said. In my example image, the upstairs windows have incandescent illumination, except for the rightmost two by the grill, which are lit primarily by fluorescent fixtures. I had to blend/fade the opacity of the incandescent-WB and fluorescent-WB layers along the transition area between the fluorescent-lit and incandescent-lit areas. I would have had to do exactly the same thing if I had shot each light source separately, so your suggestion has a net time savings of exactly zero. In addition, using a single RAW capture as the source (processed multiple times with different WB settings) guarantees zero registration issues when blending the layers together. That can be a big issue when there are ceiling fans, fountains, trees, or foliage along the boundary between one lighting type and another.
« Last Edit: January 30, 2008, 09:43:24 am by Jonathan Wienke »

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #198 on: January 30, 2008, 09:00:18 pm »

Just for fun, I did a little exercise. Noting that the sensor referred to in Iliah's post has a substantial response in the IR, it seems important to use the SRF of an actual DSLR with its IR filter in place; otherwise the SRFs are overweighted toward the IR and have no hope of matching human vision.

I found a measurement of the Nikon D70 SRF at

http://scien.stanford.edu/class/psych221/p...h/spectral.html



The sensor data are linear in the inputs, and so it seems after all that the best one can attempt is to fit the CIE XYZ spectral response functions to a linear combination of those of the camera. I discretized the above at 10nm intervals and did a least squares fit for the best linear transform, and the result is



The lighter lines are the CIE XYZ spectral response functions, the heavier lines are the best fit linear combinations of the D70 SRF's.  As is clearly evident, the best fit red channel can't capture the double hump character of the CIE X channel SRF, and the best fit green channel has a cleft in the maximum where it wants to reproduce the smooth maximum of the Y channel.  Overall the fit is decent but not spectacular.
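In code the fit is a small least squares problem; a sketch with stand-in data (real use would load the digitized D70 SRFs and the CIE 1931 table on a common 10nm grid):

```python
import numpy as np

def best_fit_transform(S, C):
    # Least squares 3x3 matrix M minimizing ||S @ M - C||^2, where S
    # holds the camera SRFs (one column per channel, one row per
    # wavelength sample) and C holds the CIE xbar/ybar/zbar likewise
    M, *_ = np.linalg.lstsq(S, C, rcond=None)
    return M

rng = np.random.default_rng(0)
S = rng.random((36, 3))     # stand-in camera SRFs, e.g. 380-730nm
C = rng.random((36, 3))     # stand-in CIE response functions
M = best_fit_transform(S, C)
fitted = S @ M              # the "best fit linear combinations" plotted
print(np.linalg.norm(fitted - C))
```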

I suspect it is this sort of linear transform that ACR applies when mapping demosaiced camera data to XYZ.
« Last Edit: January 30, 2008, 09:09:21 pm by ejmartin »
emil

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Does a raw file have a color space?
« Reply #199 on: January 30, 2008, 10:17:47 pm »

Quote
Noting that the sensor referred to in Iliah's post has a substantial response in the IR

All of them do, unfortunately. That is why I used wavelengths that are rather far from the IR. Experimentally, it is very interesting to shoot a rainbow formed by a prism (or formed in some other controlled way - keeping in mind that AA filters sometimes have a polarizing effect).