
Author Topic: Does a raw file have a color space?  (Read 190212 times)

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
Does a raw file have a color space?
« Reply #160 on: January 26, 2008, 09:26:13 pm »

Quote
Four? That those profiles only have a colorimetric table. Absolute will likely look funky. RelCol is going to be what you get with Saturation or Perceptual selected in Photoshop since those tables don't exist.

I remember reading that not all of them have been implemented, but I did not remember which ones. Now I have compared each against the others.

First, the Adobe engine:

- saturation shows tiny differences against absolute and more against relative, but none against perceptual.

- perceptual shows tiny differences against absolute and more against relative.

- absolute shows much difference against relative.

According to the above, perceptual and saturation are identical. Absolute is close to these two, but not completely identical - what can cause the difference?

With the Microsoft engine:

- saturation against perceptual: small but clear, against absolute: huge, against relative: small but clear

- perceptual against absolute: huge, against relative: none

- absolute against relative: huge

So, perceptual and absolute are the same.

Anyway, none of that changes the fact that the transformation from ProPhoto to sRGB is ambiguous.
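
For anyone who wants to repeat the comparison with a third engine, here is a minimal sketch using Pillow's ImageCms module (a LittleCMS wrapper); the image and profile file names are placeholders:

import numpy as np
from PIL import Image, ImageCms

im = Image.open("prophoto_test.tif")              # assumed ProPhoto-encoded image
src = ImageCms.getOpenProfile("ProPhoto.icm")     # placeholder profile path
dst = ImageCms.createProfile("sRGB")

intents = {
    "perceptual": ImageCms.INTENT_PERCEPTUAL,
    "relative":   ImageCms.INTENT_RELATIVE_COLORIMETRIC,
    "saturation": ImageCms.INTENT_SATURATION,
    "absolute":   ImageCms.INTENT_ABSOLUTE_COLORIMETRIC,
}
out = {name: np.asarray(ImageCms.profileToProfile(im, src, dst, renderingIntent=i))
       for name, i in intents.items()}

# Pairwise maximum channel difference; identical tables give all zeros.
names = list(out)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        d = np.abs(out[names[i]].astype(int) - out[names[j]].astype(int)).max()
        print(names[i], "vs", names[j], "max diff:", d)

Since both profiles here are matrix profiles, one would expect perceptual, saturation and relative to come out identical, with absolute differing by the white-point scaling.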
Logged
Gabor

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #161 on: January 26, 2008, 09:28:08 pm »

Quote
Not so. How do you think the projection from a larger set on a smaller one can be unambiguous?

First of all, you need to pay closer attention to what you read. I'm starting from sRGB in my example, which means there cannot be any out-of-gamut colors when converting to ProPhoto or LAB.

Second, your argument is totally irrelevant. If the source color is outside the destination space, defining its location in the destination space is not ambiguous, it's impossible. The best we can do is attempt to select an alternate color within the destination space that is as close as possible to the original color. The process we use to select alternate in-gamut colors may be ambiguous, but that does NOT mean we don't know exactly what the source color is. When the source colors fall within the destination space, we can convert back and forth at will with no ambiguity at all. sRGB (215, 133, 37) = ProPhoto (161, 124, 51) = LAB (63, 27, 61).
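
The round trip Jonathan describes is pure arithmetic once the spaces are pinned down. A minimal sketch (sRGB with its standard D65 matrix and no chromatic-adaptation subtleties); the result lands within a couple of units of the figures quoted above, the residue being rounding and white-point details:

import numpy as np

M_SRGB_TO_XYZ = np.array([          # linear sRGB (D65) -> XYZ
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])

def srgb_to_linear(c8):
    c = np.asarray(c8) / 255.0      # undo the sRGB transfer curve
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def xyz_to_lab(xyz):
    t = xyz / WHITE_D65
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])])

xyz = M_SRGB_TO_XYZ @ srgb_to_linear([215, 133, 37])
print(np.round(xyz_to_lab(xyz)))    # roughly [63, 25, 60]

Running the steps in reverse recovers (215, 133, 37) exactly (up to floating-point rounding), which is the in-gamut unambiguity being claimed.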

Contrast that to a Foveon RAW pixel with RGB value 215, 133, 37 (scaled to 8 bits). That RAW RGB value could represent anything from orange to white, depending on the lighting illuminating the subject. Even if we have perfect knowledge of the sensor's color behavior, we still do not know what color that RAW RGB value represents until we choose a white balance setting. The white balance setting chosen can make that pixel almost any color at all after RAW conversion. That is what I mean by ambiguity. The RAW data by itself, even if tagged with detailed camera color response data, cannot distinguish between an orange object shot in white light and a white object shot in orange light. The correct interpretation can be found only when white balance is set properly. A properly converted RGB image does not have this issue; if a sRGB pixel has the value 215, 133, 37 we know unequivocally that pixel is a specific shade of orange. There are no other factors that affect how one should interpret that color.
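
A toy illustration of that ambiguity, with invented per-channel multipliers (real cameras store comparable gains as metadata): the same raw triple becomes neutral under one plausible white balance and stays orange under another.

import numpy as np

raw = np.array([215.0, 133.0, 37.0])   # hypothetical linear raw pixel, 8-bit scaled

wb_as_white_light  = np.array([1.00, 1.00, 1.00])           # light assumed white
wb_as_orange_light = np.array([133 / 215, 1.00, 133 / 37])  # light assumed orange

print(raw * wb_as_white_light)    # [215, 133,  37] -> an orange pixel
print(raw * wb_as_orange_light)   # [133, 133, 133] -> a neutral white/gray pixel

# Same raw data, two defensible interpretations: an orange object in white
# light, or a white object in orange light. Nothing in the raw file decides.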
Logged

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Does a raw file have a color space?
« Reply #162 on: January 26, 2008, 09:32:10 pm »

Quote
So, perceptual and absolute are the same.

Anyway, none of that changes the fact that the transformation from ProPhoto to sRGB is ambiguous.

I'm not discussing ambiguity; you guys can go at it.

There's only ONE table in simple matrix profiles (all working spaces including ProPhoto RGB): Colorimetric. It doesn't matter what you select with which engine. You can't get Perceptual; you get either RelCol or Absolute. At some point, V4 profiles should change this (the only V4 working space profile I know of is a specialized version of sRGB).

This is one reason why, as you report, the differences are so small.

Output profiles (complex LUT-based profiles) have two additional tables: Perceptual and Saturation.
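
If you want to check which tables a given profile actually contains, the tag table at the start of an ICC file tells you: per the ICC spec, a 128-byte header is followed by a 4-byte big-endian tag count and then 12-byte entries (signature, offset, size). A minimal sketch; the profile path is a placeholder:

import struct

def list_icc_tags(path):
    with open(path, "rb") as f:
        data = f.read()
    (count,) = struct.unpack_from(">I", data, 128)    # tag count follows the header
    tags = []
    for i in range(count):
        sig, _, _ = struct.unpack_from(">4sII", data, 132 + 12 * i)
        tags.append(sig.decode("ascii", "replace"))
    return tags

print(list_icc_tags("ProPhoto.icm"))

A matrix working-space profile lists rXYZ/gXYZ/bXYZ plus TRC tags and no A2Bn entries; a LUT-based output profile lists A2B0/A2B1/A2B2, which are the perceptual, colorimetric, and saturation tables respectively.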
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #163 on: January 27, 2008, 12:59:24 pm »

Quote
No it isn't. Your statement is equivalent to saying that exposure is defined by shutter speed irrespective of aperture and ISO. The camera sensor/CFA characteristics are only half of the equation. The spectral characteristics of the lighting are equally pertinent to the RAW values that are recorded, which is why every RAW converter has a White Balance setting. If RAW data was unambiguously colorimetric like LAB or ProPhoto RGB, then a white balance setting would be unnecessary.

Quote
Contrast that to a Foveon RAW pixel with RGB value 215, 133, 37 (scaled to 8 bits). That RAW RGB value could represent anything from orange to white, depending on the lighting illuminating the subject. Even if we have perfect knowledge of the sensor's color behavior, we still do not know what color that RAW RGB value represents until we choose a white balance setting. The white balance setting chosen can make that pixel almost any color at all after RAW conversion. That is what I mean by ambiguity. The RAW data by itself, even if tagged with detailed camera color response data, cannot distinguish between an orange object shot in white light and a white object shot in orange light. The correct interpretation can be found only when white balance is set properly. A properly converted RGB image does not have this issue; if a sRGB pixel has the value 215, 133, 37 we know unequivocally that pixel is a specific shade of orange. There are no other factors that affect how one should interpret that color.

I have to say I'm puzzled by this line of reasoning.  The spectral characteristics of the lighting are what they are; the sensor doesn't change its physical properties according to what the light source is.  Indoor light simply has less blue, and that's why the raw data under such lighting will have a very underexposed blue channel.  The sensor records that fact (filtered by the spectral transmissivity of the CFA filters); why is that not colorimetric?

An orange object shot in white light and white object shot in orange light both look orange to my eyes; and they will both be recorded as orange by a camera sensor.  If you want to make the white object look white after the fact by monkeying with the controls in Photoshop, that's up to you but that's not the spectral composition of the light just after it reflected off the object at the scene.

Just thinking out loud here, isn't the need for white balance related to the gamma correction applied to the raw data?  If the illuminant had one EV less blue than green and red, then applying a gamma curve will amplify the green and red 2^gamma more than the blue and throw the colors out of whack relative to what they were at the scene.  This suggests to me that the white balance is needed due to some historical artifact -- that the common display device used to be a CRT with gamma around two or so.  If the standard were changed and LCDs used a linear encoding instead of trying to match the old CRT display's gamma of about 2, then white balance correction shouldn't be needed.  An output device with linear gamma would reconstruct precisely (leaving aside issues of spatial interpolation) the colors of the scene from the linear raw data.
« Last Edit: January 27, 2008, 01:09:34 pm by ejmartin »
Logged
emil

Graeme Nattress

  • Sr. Member
  • ****
  • Offline
  • Posts: 584
    • http://www.nattress.com
Does a raw file have a color space?
« Reply #164 on: January 27, 2008, 01:10:46 pm »

You still need white balance on linear-light data too. Also, remember that the gamma curve of the CRT is approximately the inverse of the encoding gamma curve.

Tungsten light is blue-deficient, and silicon sensors are less sensitive to blue too, which compounds the problem.

Graeme
Logged

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #165 on: January 27, 2008, 01:21:25 pm »

Quote
An orange object shot in white light and white object shot in orange light both look orange to my eyes; and they will both be recorded as orange by a camera sensor.  If you want to make the white object look white after the fact by monkeying with the controls in Photoshop, that's up to you but that's not the spectral composition of the light just after it reflected off the object at the scene.

Hogwash. A white tablecloth looks white to the eye under incandescent lighting, but if you process a RAW shot of said tablecloth with daylight WB, it's going to look orange. The human eye does a pretty good job of adapting to ambient lighting conditions, but no camera does any such thing.
Logged

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
Does a raw file have a color space?
« Reply #166 on: January 27, 2008, 05:54:27 pm »

Quote
If the source color is outside the destination space, defining its location in the destination space is not ambiguous, it's impossible

It is not impossible; it just needs to be defined what happens to such colors. See rendering intents.

Quote
The process we use to select alternate in-gamut colors may be ambiguous, but that does NOT mean we don't know exactly what the source color is. When the source colors fall within the destination space, we can convert back and forth at will with no ambiguity at all. sRGB (215, 133, 37) = ProPhoto (161, 124, 51) = LAB (63, 27, 61)

If one of the color spaces is larger than the other one, then an unambiguous projection between them is impossible. You are trying to prove your point with sRGB vs. raw, while the same problem exists between ProPhoto and sRGB as well.

Quote
Even if we have perfect knowledge of the sensor's color behavior, we still do not know what color that RAW RGB value represents until we choose a white balance setting. The white balance setting chosen can make that pixel almost any color at all after RAW conversion. That is what I mean by ambiguity

We are back at the beginning: you can make your own definition of what a color space is. If there are ten definitions, then why could there not be one more?

However, the fact is that there is no generally accepted definition of "color space" that includes the conditions you would like to include.

You stated at the beginning of this sub-thread:

a full-fledged color space has one and only one unambiguous numeric designation for a color that falls within its gamut

You expanded this with the requirement of an unambiguous projection from that color space to another (which is nonsense), and have now added the condition that setting the WB not be necessary.

You can expand your definition with "the documentation is written in Chinese and published in PDF format" as well. The issue is that others may not agree with that definition. Luckily, it does not matter: the camera's whatever (what you don't call a color space) can be transformed into other whatevers, even if ambiguously.
« Last Edit: January 27, 2008, 05:55:54 pm by Panopeeper »
Logged
Gabor

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
Does a raw file have a color space?
« Reply #167 on: January 27, 2008, 06:00:36 pm »

Quote
You still need white balance on linear-light data too

I for one need white balance only on the linear data.

Quote
remember that the gamma curve of the CRT is approximately the inverse of the encoding gamma curve

So what? It has nothing to do with virtually anything, except the misguided notion of a "gamma curve".
Logged
Gabor

Iliah

  • Sr. Member
  • ****
  • Offline
  • Posts: 770
Does a raw file have a color space?
« Reply #168 on: January 27, 2008, 06:47:59 pm »

"a full-fledged color space has one and only one unambiguous numeric designation for a color that falls within its gamut"

It may be worth mentioning here that cameras do not have gamuts.

For example, quoting from the F.A.Q. on the Munsell Color Science Laboratory (Rochester Institute of Technology) web site:

"Digital image sensors (such as those used in digital cameras) use red, green, blue ink-based color filters to generate color. Do they therefore have a color gamut that limits the range of colors that they can detect?

Let's start with the short answer to your question; there is no such thing as a camera, or scanner, gamut. A gamut is defined as the range of colors that a given imaging device can display. To say that a camera had a gamut would be to imply that you could put a color in front of it that it could not possibly respond to. While it is certainly possible that two colors that are visually distinct might be mapped into the same color signals by a camera, that does not mean that the camera could not detect those colors. It just couldn't discriminate them. For example, a monochrome sensor will map all colors into a grayscale image and encode it as such. Certainly the encoding has a gamut (in this case a lightness range with no chroma information), but the camera responded to all the colors put before it. It is the encoding that imposed the gamut. In the color world, encoding is based on some explicit or implied display. For example, sRGB is a description of a display and therefore defines a gamut (but only if the sRGB values are limited in range). If a camera encodes an image in sRGB, that doesn't mean that the range of colors the camera detected are only from within the sRGB display gamut, but it means the camera data have been transformed to best use that sRGB encoding. As long as a camera has three or more sensors that span the visual spectrum, then it will respond to all the same stimuli as our visual system. Whether the camera can discriminate colors as well as the human visual system will depend on the encoding of the camera signals, quantization, and the details of the camera responsivities. (To return to the black and white system, that camera encodes all the colors into a gray scale. They could then be displayed as any color within a given display, but many colors from the original scene would be mapped to the same values.)

Since there is no such thing as a gamut for an input device, then there is no way to compute it or calculate a figure of merit. Generally, the accuracy of color capture devices is assessed through the accuracy of the output values for known inputs in terms of color differences. Also, sensors are sometimes evaluated in terms of their ability to mimic human visual responses (and therefore be accurate) using quantities with names like colorimetric quality factor, that measure how close the camera responsivities are to linear transformations of the human color matching functions. Doing an internet search on "colorimetric quality factor" will lead you in the right direction."


http://www.cis.rit.edu/mcsl/outreach/faq.php?catnum=0#255

It should be noted that pretty much every camera registers far beyond visible light, and in any case the recording is neither uniform nor invariant to changing conditions, especially heat.
« Last Edit: January 27, 2008, 06:48:53 pm by Iliah »
Logged

Jonathan Wienke

  • Sr. Member
  • ****
  • Offline
  • Posts: 5829
    • http://visual-vacations.com/
Does a raw file have a color space?
« Reply #169 on: January 27, 2008, 07:06:42 pm »

Quote
It is not impossible; it just needs to be defined what happens to such colors. See rendering intents.

Bullcrap. If a color is outside the destination gamut then there is no possibility to put the exact color inside the destination space. All you can do is select an in-gamut alternate color, which is what rendering intents try to do. But no matter what you do, you cannot fit ProPhoto 0, 255, 0 into sRGB. It is out of sRGB's gamut and always will be. Different rendering intents will select various in-sRGB-gamut alternate colors, but none of them will be the same color as ProPhoto 0, 255, 0.
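
The arithmetic bears this out. A sketch using the published ProPhoto(D50)-to-XYZ and XYZ-to-sRGB(D65) matrices, deliberately glossing over the D50/D65 chromatic adaptation for brevity (it shifts the numbers slightly but does not bring the green primary into gamut):

import numpy as np

M_PROPHOTO_TO_XYZ = np.array([     # linear ProPhoto RGB -> XYZ, D50 white
    [0.7976749, 0.1351917, 0.0313534],
    [0.2880402, 0.7118741, 0.0000857],
    [0.0000000, 0.0000000, 0.8252100],
])
M_XYZ_TO_SRGB = np.array([         # XYZ -> linear sRGB, D65 white
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

prophoto_green = np.array([0.0, 1.0, 0.0])   # ProPhoto (0, 255, 0), linearized
srgb_linear = M_XYZ_TO_SRGB @ (M_PROPHOTO_TO_XYZ @ prophoto_green)
print(srgb_linear)                 # roughly [-0.66, 1.20, -0.14]

# Negative components mean no non-negative mix of the sRGB primaries can
# produce this color; every rendering intent must substitute an in-gamut one.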

Quote
You are trying to prove your point with sRGB vs. raw, while the same problem exists between ProPhoto and sRGB as well.

Again, bullcrap. In a ProPhoto-tagged TIFF file, we have everything we need to unambiguously determine the color of every pixel in the file. With RAW, we do not, even if we know the exact spectral response characteristics of the camera's sensor and CFA. It is not possible to look at an out-of-focus shot of a flat surface and say exactly what color that surface is. We can make that flat surface be any color at all depending on the WB setting we choose. And if we only know the RAW data and camera characteristics, any WB setting is just as likely to be correct as any other.

In real-life images we can look at objects and make reasonably accurate, but subjective, judgments with regard to correct WB and colorimetric rendering of RAW data. But this is still a judgment process, not a precise colorimetric definition. If you give a RAW image to 10 different people to convert, it is unlikely that any two of them will select the exact same WB setting.

The difference between RAW and ProPhoto is that ProPhoto has a precise colorimetric definition for each pixel. RAW does not; it requires interpretation and informed judgment to even get close.
« Last Edit: January 27, 2008, 07:26:21 pm by Jonathan Wienke »
Logged

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
Does a raw file have a color space?
« Reply #170 on: January 27, 2008, 07:48:26 pm »

Some of the definitions of color gamut:

Quote
- The range of colors possible with any color system. For example, the original picture, a specific color monitor or a specific paper/ink/press combination

- Range of colors that can be formed by all combinations of a given set of light sources or colorants of a color reproduction system

- The range of colors a device can capture

- The entire range of hues possible to reproduce using a specific device, such as a computer screen, or system, such as four-color process printing

- The subset of colors which can be accurately represented in a given circumstance, such as within a given color space or by a certain output device

- The complete set of colors found within an image at a given time

- A range of colors that can be displayed on a digital TV, or seen by the human eye

- The complete range of hues and strengths of colors that can be achieved with a given set of colorants such as cyan, magenta, yellow, and black inks on a specific substrate

- Complete subset of colors

I don't see any reason to accept such an idiotic definition, which would exclude, among others, ProPhoto from having a gamut.
Logged
Gabor

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Does a raw file have a color space?
« Reply #171 on: January 27, 2008, 10:07:56 pm »

Quote
Raw has no color space. And it doesn't matter what you set your camera to (you're shooting Raw).

According to Andrew, raw does not have a color space, but he confirms that a digital camera does indeed have a color space. See his Color Space White Paper (http://www.adobe.com/digitalimag/pdfs/phscs2ip_colspace.pdf) on the Adobe web site, which states:

Classes of Color Spaces
There are classes of color spaces that define the behavior of a capture device like a scanner or digital camera.

According to the above, the camera does have a color space, but its output in the form of a raw file does not. This contradicts one of Andrew's quoted experts, Chris Murphy, who stated, "So yes a camera (and thus a Raw file) has a color space." Since a digital camera does not have a gamut, one can conclude by extension that not all color spaces have a gamut. Interesting.
« Last Edit: January 27, 2008, 10:10:06 pm by bjanes »
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #172 on: January 27, 2008, 10:19:10 pm »

OK, still puzzled though, and trying to understand the chain of custody of color as it makes its way from the original scene to output device (monitor or print), and its implications for color rendering.  Let us ignore the output side of the problem, and concentrate on capture.  For that aspect of the problem, one has the following:

The original scene has some light source with some spectral power distribution (SPD), and contains some objects with pigments of given spectral reflectivity.  Light bounces around, comes through the lens, and arrives at the sensor with a given spectral power distribution -- the intensities of the various constituent frequency components of the light.  Let us for present purposes set aside issues of demosaicing, e.g. we have a scene without fine detail at the pixel level which would distinguish different interpolation algorithms.

The color filters in the CFA have a transmissivity dependent on frequency.  Underneath the filter, the sensel responds to the transmitted light, and so integrates over the incident spectral power distribution convolved with the filter's spectral transmissivity and the spectral response of the photodetector.  The integration averages over spectral information and therefore data is lost.
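
That integration step can be mocked up numerically. A toy sketch with invented box-filter sensitivities: each channel is a sum over wavelength of the SPD times the channel sensitivity, and because the sum discards spectral detail, two different SPDs can produce identical (R_c, G_c, B_c) -- metamers for this "camera":

import numpy as np

wl = np.arange(400, 701, 10)      # wavelength grid, nm

def box(lo, hi):                  # invented box-shaped channel sensitivity
    return ((wl >= lo) & (wl < hi)).astype(float)

sens = {"R": box(580, 660), "G": box(490, 580), "B": box(420, 490)}

def response(spd):
    # integrate SPD against each channel sensitivity (rectangle rule)
    return {k: float(np.sum(spd * s) * 10) for k, s in sens.items()}

flat = np.ones_like(wl, dtype=float)        # spectrally flat SPD
spiky = flat.copy()
spiky[(wl >= 500) & (wl < 540)] += 0.5      # move energy around...
spiky[(wl >= 540) & (wl < 580)] -= 0.5      # ...within the G band only

print(response(flat))
print(response(spiky))            # identical responses from two different spectra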

1. Because I am assuming that one can set aside interpolation issues, one has color data for each pixel in the form of raw values from the three different types of pixel in the CFA.  Let us call this camera color data, the three numbers R_c, G_c, B_c.  As some have said in this thread, the camera defines its own notion of color space (or if that is too charged a terminology, call it "color data") through the sensor response to various SPD's.  We now wish to use that color data to reconstruct "color" according to some other measure(s).

2. Human vision has its own set of spectral response functions (SRF's) of the cones in the eye, somewhat quantitatively measured by researchers of decades past.  An observer looking through the camera viewfinder sees colors that are characterized by three numbers, again the SPD integrated against these SRF's of human vision.  Again information is lost, since the true SPD of various components of the scene is a function of wavelength; a generic function of one variable cannot be fully characterized by the three numbers, call them R_e, G_e, B_e, that one gets by integrating the SPD against the SRF's of the three kinds of receptor.  Nevertheless our brain takes this information and somehow interprets it as color.  Let us call this eye color data (ECD).

3. Yet another characterization of the scene are the tristimulus values XYZ, which are the SPD integrated against the CIE's color-matching functions (distinct from the cone response functions of human vision, but meant to model them).  Let us call this CIE color data, the values XYZ.

Because three numbers do not specify a function, neither the camera color data, the eye color data, nor the CIE color data are sufficient to reconstruct the original SPD of the original scene.  That, in a nutshell, seems to be the source of the problem of mapping between the three sorts of color data laid out above.  Of course, ideally such a reconstruction is what one would like, so that one can convey to someone elsewhere, later, the experience of the scene where and when one records an image.  The best one can hope for is to reconstruct a reasonable approximation to the SPD using three numbers.  Unless of course the SRF's of the camera are the same as those of human vision or the CIE, in which case the corresponding sets of color data will correspond for any input SPD.

There is by now a whole industry built around the CIE convention using XYZ.  It seems reasonable to use that as a starting point, and not concern ourselves with the map between CIE color data and eye color data (leaving that to the CIE to refine; though since the SRF's of the CIE standard for XYZ are not the same as the SRF's of human vision, the relation between CIE color data and eye color data is as fraught with ambiguity as the relation between camera color data and CIE color data, or between camera color data and eye color data).

Setting aside the relation to visual perception, and concentrating on relating the camera's color data to CIE conventions, one wants a map from camera data R_c, G_c, B_c to X, Y, Z that is bijective (maps in both directions unambiguously).  Of course, that is a bit of a non-starter, since for example the degeneracies (metamers) of the two sets of data -- the sets of SPD functions that yield the same XYZ or R_c, G_c, B_c -- are in general quite different.  However, it seems reasonable that one could set up an optimization problem, averaging over various SPD's, to make an "optimal" map between different representations of color data.

I suppose what I am trying to get at is that the camera raw data is color data that is no more or less valid than CIE color data insofar as it is related to the color data of human vision; it is just less standardized.  Because the three numbers comprising that color data represent averages over SPD's convolved with SRF's, one cannot map one set of color data uniquely to the other.  Constructing a map amounts to choosing a convention rather than deriving a rigorous relation; one tries to construct the map so that it is roughly accurate with respect to a wide variety of SPD's (the aforementioned optimization problem).  There is absolutely no reason why the map need be a linear (matrix) transform; a linear map is simply the crudest and simplest approximation one could make, and a lookup table is a more general way of encoding such a map.
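
In its simplest (linear) form, that optimization is an ordinary least-squares problem: given camera responses and measured XYZ values for a set of training patches, find the 3x3 matrix that minimizes the residual. A sketch with synthetic placeholder data standing in for a shot of a chart of known patches:

import numpy as np

rng = np.random.default_rng(0)
camera_rgb = rng.uniform(0.05, 1.0, size=(24, 3))     # R_c, G_c, B_c per patch
true_M = np.array([[0.60, 0.30, 0.10],
                   [0.25, 0.70, 0.05],
                   [0.02, 0.12, 0.90]])               # invented "ground truth"
xyz = camera_rgb @ true_M.T + rng.normal(0, 0.002, (24, 3))  # noisy measurements

# Solve min || camera_rgb @ M.T - xyz || over M.
M_fit, _, _, _ = np.linalg.lstsq(camera_rgb, xyz, rcond=None)
print(np.round(M_fit.T, 3))       # close to true_M

A LUT-based map generalizes this: instead of one global matrix, it stores the (possibly nonlinear) transform sampled on a grid of input values.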

Anyway, a long-winded exposition of a few thoughts about which I'd be happy to hear comments.
« Last Edit: January 28, 2008, 12:59:49 am by ejmartin »
Logged
emil

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #173 on: January 27, 2008, 10:30:28 pm »

Quote
You still need white balance on linear-light data too. Also, remember that the gamma curve of the CRT is approximately the inverse of the encoding gamma curve.

Tungsten light is blue-deficient, and silicon sensors are less sensitive to blue too, which compounds the problem.

Graeme

Quite right, gamma is a red herring; never mind.  

But I'm still puzzled as to why white balance should be necessary.  The sensor is a passive device, simply responding to the input given it.  To the extent that one can use that data to direct a display device to emit the same tristimulus values, white balance seems superfluous.  What is the white balance accomplishing in terms of matching the display's SPD to that of the scene when it was recorded by the camera?
« Last Edit: January 27, 2008, 10:31:45 pm by ejmartin »
Logged
emil

Graeme Nattress

  • Sr. Member
  • ****
  • Offline
  • Posts: 584
    • http://www.nattress.com
Does a raw file have a color space?
« Reply #174 on: January 27, 2008, 10:39:09 pm »

You just need to look at the graph showing an unfiltered silicon-based sensor's sensitivity to the colours of light. Its sensitivity to blue is a lot less than its sensitivity to green or red. So the only white light that will appear white without any white-balance processing is a light that is very strong in blue. So you basically end up having to gain up blue a lot under the vast majority of lighting conditions.

Once you've done that, you're still not going to get white output looking white, because in the real scene the eye/brain adapts to the changing colour temperature of the light - it effectively has a very powerful auto-white-balance circuit. The camera doesn't, so you have to either guess (and they do a pretty good job these days) or pick a white with a white-balance picker. Once the computer knows what colour the white is recorded as in the raw data, you can adjust accordingly.

So, if your display had the inverse characteristic to the sensor, and filled enough of your field of vision that it fooled your eye into adapting the white balance automatically, you'd probably be OK, but that's practically never the case.
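
In code, the "pick a white" step reduces to computing per-channel gains that map the picked patch to neutral. A minimal sketch with a hypothetical raw reading of a white card under tungsten light:

import numpy as np

def wb_gains_from_white_patch(patch_rgb):
    # gains, normalized to green, that make the picked patch neutral
    patch = np.asarray(patch_rgb, dtype=float)
    return patch[1] / patch

def apply_wb(raw_rgb, gains):
    return np.asarray(raw_rgb, dtype=float) * gains

white_patch = [180.0, 120.0, 45.0]               # invented raw values
gains = wb_gains_from_white_patch(white_patch)   # ~[0.67, 1.0, 2.67]
print(apply_wb(white_patch, gains))              # [120, 120, 120] -> neutral

Note the large gain on blue, exactly the "gain up blue a lot" described above.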

Graeme
Logged

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #175 on: January 27, 2008, 11:09:35 pm »

Quote
You just need to look at the graph showing an unfiltered silicon-based sensor's sensitivity to the colours of light. Its sensitivity to blue is a lot less than its sensitivity to green or red. So the only white light that will appear white without any white-balance processing is a light that is very strong in blue. So you basically end up having to gain up blue a lot under the vast majority of lighting conditions.

Once you've done that, you're still not going to get white output looking white, because in the real scene the eye/brain adapts to the changing colour temperature of the light - it effectively has a very powerful auto-white-balance circuit. The camera doesn't, so you have to either guess (and they do a pretty good job these days) or pick a white with a white-balance picker. Once the computer knows what colour the white is recorded as in the raw data, you can adjust accordingly.

So, if your display had the inverse characteristic to the sensor, and filled enough of your field of vision that it fooled your eye into adapting the white balance automatically, you'd probably be OK, but that's practically never the case.

Graeme

OK, but lack of blue sensitivity is a fixed property of the sensor.  Why won't a fixed choice of relative gain among channels compensate for it (depending on camera model of course)?  Why do we need a slider that changes from image to image?  

Perhaps the problem is that I'm not understanding the phrase "the only white light that will appear white without any white balance processing is a light that is very strong in blue".   Is the point that any two "opposite" colors can be combined to make white, and as the higher-frequency member of the pair tends toward blue, the lack of response of the sensor in blues leads to sensor color data that "seems" less and less white?  In other words, is this one aspect of different SPD's leading to the same perceived color (in this case white)?
Logged
emil

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Does a raw file have a color space?
« Reply #176 on: January 27, 2008, 11:09:42 pm »

Quote
Quite right, gamma is a red herring; never mind. 

But I'm still puzzled as to why white balance should be necessary.  The sensor is a passive device, simply responding to the input given it.  To the extent that one can use that data to direct a display device to emit the same tristimulus values, white balance seems superfluous.  What is the white balance accomplishing in terms of matching the display's SPD to that of the scene when it was recorded by the camera?

If you merely want to record the characteristics of the light falling on the sensor and are not interested in correlating the sensor response with the perceived color appearance of the target under the existing viewing conditions, white balance does seem superfluous. Indeed, according to Thomas Knoll (http://www.adobeforums.com/webx?14@@.3bc2e802/0), the CIE XYZ space does not have a white point (if that is the same as white balance).

BTW, your previous essay is very helpful for the intelligent layman, since it is from the viewpoint of a physicist rather than a color scientist; the specialized jargon of the latter sometimes confuses the issue. (Note to readers: Emil is a professor of physics at the University of Chicago.)

Bill
Logged

Panopeeper

  • Sr. Member
  • ****
  • Offline
  • Posts: 1805
Does a raw file have a color space?
« Reply #177 on: January 28, 2008, 12:06:53 am »

Quote
Why won't a fixed choice of relative gain among channels compensate for it (depending on camera model of course)?  Why do we need a slider that changes from image to image?

A constant compensation would not account for the effect of a change in the light source.
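
A toy numerical version of this point, using Planck blackbody spectra for a daylight-like 6500 K source and a tungsten-like 2850 K source, integrated against invented box-filter channel bands: a spectrally flat white patch comes out with different channel ratios under each source, so no single fixed gain triple can neutralize both.

import numpy as np

wl = np.arange(400e-9, 701e-9, 10e-9)       # wavelengths in meters
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(T):
    # relative blackbody spectral radiance (units cancel in the ratios)
    s = 1.0 / (wl**5 * (np.exp(h * c / (wl * kB * T)) - 1.0))
    return s / s.max()

def band(lo_nm, hi_nm):
    nm = wl * 1e9
    return (nm >= lo_nm) & (nm < hi_nm)

def raw_ratios(spd):
    r, g, b = (spd[band(*bb)].sum() for bb in [(580, 660), (490, 580), (420, 490)])
    return np.round(np.array([r, g, b]) / g, 2)   # normalized to green

print(raw_ratios(planck(6500)))   # roughly [0.8, 1.0, 0.8]: fairly balanced
print(raw_ratios(planck(2850)))   # roughly [1.6, 1.0, 0.3]: red-heavy, blue-starved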

You do realize that different cameras (with different microfilters) respond differently to the captured light. This means that if you capture the very same scene lit by the very same light source, two cameras will produce different pixel values, even if the sensors' spectral responses are compensated for. So, without a further adjustment, we would get a camera-dependent image.

There is a second point, "visio-psychological" (I think I just made up this term, but who knows).

Even using the very same camera, the recorded pixel values vary depending on the light source. One could say that this does not matter (or matters only for larger variations), because the eyes adjust to the actual lighting.

This is true in real life, but not in a picture. I think the analogy is apt: when you are in a city with tall buildings and look upwards, you don't notice that the buildings appear much narrower at the very top. However, when you see that scenery in a picture, you can't ignore the effect (or at least I can't): what looks perfect in real life is abhorrent when ripped out of reality and presented in a totally strange, unnatural setting, namely in a picture.

I think the very same phenomenon causes us to notice the "incorrect" color in a picture, while ignoring (not even noticing) it in real life.

(Sorry, no citations, I can't blame anyone for the above idea.)
« Last Edit: January 28, 2008, 12:12:09 am by Panopeeper »
Logged
Gabor

ejmartin

  • Sr. Member
  • ****
  • Offline
  • Posts: 575
Does a raw file have a color space?
« Reply #178 on: January 28, 2008, 01:38:59 am »

Let me ask the question a bit differently.  Suppose we were to construct a device that records color data directly as CIE XYZ values, that has the CIE spectral response functions built into it.  Would the color data need to be "white balanced" when we go to display data recorded under various lighting choices with different color temperatures?

If yes, then the issue of white balance is separate from the issue of using camera color data to define a color space, since something that all agree should be called a color space has the same issue.  If no, then what is different about the camera sensor?  Naively it's just a different trio of spectral response functions.
« Last Edit: January 28, 2008, 01:40:22 am by ejmartin »
Logged
emil

bjanes

  • Sr. Member
  • ****
  • Offline
  • Posts: 3387
Does a raw file have a color space?
« Reply #179 on: January 28, 2008, 07:41:59 am »

Quote
Let me ask the question a bit differently.  Suppose we were to construct a device that records color data directly as CIE XYZ values, that has the CIE spectral response functions built into it.  Would the color data need to be "white balanced" when we go to display data recorded under various lighting choices with different color temperatures?

If yes, then the issue of white balance is separate from the issue of using camera color data to define a color space, since something that all agree should be called a color space has the same issue.  If no, then what is different about the camera sensor?  Naively it's just a different trio of spectral response functions.

Since the CIE XYZ space lacks a white point, information tagged in the raw file must be used when converting from CIE XYZ to the output space, which does have a white point. As you pointed out so eloquently, "The color filters in the CFA have a transmissivity dependent on frequency. Underneath the filter, the sensel responds to the transmitted light, and so integrates over the incident spectral power distribution convolved with the filter's spectral transmissivity and the spectral response of the photodetector. The integration averages over spectral information and therefore data is lost." No white balancing seems to be involved at this stage, and it would be meaningless anyway, since the CIE XYZ space lacks a white point.

As Jonathan has pointed out, the sensor would have no way to tell the difference between an orange target illuminated by white light and a white target illuminated by orange light. The sensor output would be the same, and the white balance would have to be accomplished later in processing.
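
In the usual machinery, that later stage is a chromatic adaptation transform: scale the data in a cone-like space by the ratio of destination to source white. A sketch of the common Bradford version (matrix and white-point values from the standard literature), adapting from illuminant A (tungsten-like) to D65:

import numpy as np

M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])
WHITE_A   = np.array([1.09850, 1.00000, 0.35585])   # illuminant A
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])

def adapt(xyz, src_white, dst_white):
    # von Kries-style scaling in the Bradford cone space
    cone = M_BRADFORD @ np.asarray(xyz, dtype=float)
    scale = (M_BRADFORD @ dst_white) / (M_BRADFORD @ src_white)
    return np.linalg.inv(M_BRADFORD) @ (cone * scale)

# A white object under illuminant A measures as the source white itself;
# after adaptation it lands exactly on the D65 white, i.e. it renders neutral.
print(adapt(WHITE_A, WHITE_A, WHITE_D65))   # ~[0.95047, 1.0, 1.08883]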
« Last Edit: January 28, 2008, 07:45:57 am by bjanes »
Logged