Author Topic: Does a raw file have a color space?  (Read 132581 times)
mrgalleta
« Reply #220 on: March 25, 2008, 11:58:51 AM »

Thanks a lot for your replies,

What I don't really understand is how to make use of those matrices to make this kind of conversion. I mean, if you take two pictures of exactly the same scene at different shutter speeds, then a small area of the picture would have the same XYZ values, but the camera's RAW data would be different, so the matrix should only be useful for a specific shutter speed, shouldn't it?

Thanks again, and excuse me if I say anything that makes no sense; I'm just learning about this extremely interesting subject.

Regards,

Carlos.
madmanchan
« Reply #221 on: March 25, 2008, 01:20:51 PM »

Quote
What I don't really understand is how to make use of those matrices to make this kind of conversion. I mean, if you take two pictures of exactly the same scene at different shutter speeds, then a small area of the picture would have the same XYZ values, but the camera's RAW data would be different, so the matrix should only be useful for a specific shutter speed, shouldn't it?

No, the XYZ values would not be the same in the two cases. Why should they be? Both the XYZ (D50) and camera RGB values would be higher in the case where the exposure time is longer (i.e., slower shutter speed).

Eric

madmanchan
« Reply #222 on: March 25, 2008, 01:24:19 PM »

Quote
But yes, the matrix is very likely a best fit linear map from the camera's spectral response curves to those of the XYZ color matching functions (minimizing the difference between the mapped curves and the XYZ cmf's), along the lines that I posted above in this thread.

More precisely, the matrices map (non-white-balanced) linear camera coordinates to XYZ with a D50 white point.

mrgalleta
« Reply #223 on: March 25, 2008, 01:28:37 PM »

Quote
No, the XYZ values would not be the same in the two cases. Why should they be? Both the XYZ (D50) and camera RGB values would be higher in the case where the exposure time is longer (i.e., slower shutter speed).

Eric

Sorry, maybe I didn't explain myself properly. Let's say that I have a spectroradiometer and I point it at a spot whose illumination doesn't change; the spectroradiometer gives me the XYZ values. Then I put a camera in the same place, pointing in the same direction, and I take the Bayer-array values at the spot where the spectroradiometer was pointing. If I take two pictures at different exposures, is there a way of getting, with the camera, approximately the same XYZ values obtained with the spectroradiometer, independently of the exposure?

Thanks a lot.
madmanchan
« Reply #224 on: March 25, 2008, 01:30:50 PM »

Quote
I thought white was X=Y=Z; or is there a distinction between the terms "white" and "white point"?

And BTW Bill, I don't have any special qualifications or claim to authority for this discussion.  Just eager to learn.

Just in case it wasn't already answered: the white coordinates in XYZ depend on the illuminant. Y is 1, but X and Z generally are not. For example, the D50 white point is about X = 0.9642, Y = 1.0, Z = 0.82521. See this page for the XYZ coordinates of various common white points:

http://www.brucelindbloom.com/Eqn_ChromAdapt.html

ejmartin
« Reply #225 on: March 25, 2008, 01:49:08 PM »

Quote
Thanks a lot for your replies,

What I don't really understand is how to make use of those matrices to make this kind of conversion. I mean, if you take two pictures of exactly the same scene at different shutter speeds, then a small area of the picture would have the same XYZ values, but the camera's RAW data would be different, so the matrix should only be useful for a specific shutter speed, shouldn't it?

Changing the exposure time by a factor X linearly rescales the raw data by X.  In order not to screw up this linearity, any attempt to fit camera raw values to XYZ values should be linear (XYZ values are also linear wrt change of luminosity).   So the general map is

"X" = a1 R +a2 G + a3 B
"Y" = b1 R +b2 G + b3 B
"Z" = c1 R +c2 G + c3 B

where "X", "Y", and "Z" are the best approximation the camera can make to the XYZ values of the recorded scene; RGB are the camera raw values for the three color channels (suitably interpolated, and ignoring any issues stemming from such interpolation); and the coefficients ai, bi, and ci (i = 1, 2, 3) are determined by fitting the spectral response functions of the camera to the XYZ spectral response functions as well as possible via a linear map.
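In code, that general map is just a 3x3 matrix multiply. A minimal sketch (the matrix coefficients below are made up for illustration, not measured values for any real camera), which also shows why exposure scaling passes straight through the transform:

```python
import numpy as np

# Hypothetical camera-to-XYZ matrix; illustrative coefficients only.
M = np.array([
    [0.6, 0.3, 0.1],   # a1 a2 a3
    [0.2, 0.7, 0.1],   # b1 b2 b3
    [0.0, 0.1, 0.9],   # c1 c2 c3
])

def camera_to_xyz(rgb):
    """Map linear (demosaiced) camera RGB to approximate XYZ."""
    return M @ np.asarray(rgb, dtype=float)

rgb = np.array([0.25, 0.50, 0.10])   # raw values for one pixel
xyz = camera_to_xyz(rgb)

# Doubling the exposure doubles the raw values; because the map is
# linear, the estimated XYZ values double as well.
assert np.allclose(camera_to_xyz(2 * rgb), 2 * xyz)
```

This is the linearity point in the post above: rescaling the raw data by X rescales the estimated XYZ by X, so the same matrix works at any shutter speed.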

emil
madmanchan
« Reply #226 on: March 25, 2008, 07:42:02 PM »

Yes, it's just a linear scaling, and hence will be preserved by a linear transform such as Adobe's camera-to-XYZ matrices.

zcream
« Reply #227 on: March 16, 2014, 07:38:14 PM »

I am trying to convert RAW image data with no header info into a usable DNG. At the moment, I am using this color matrix:
Color Matrix values 1 to 3...: 1.0048828125 -0.27294921875 -0.04931640625 
Color Matrix values 4 to 6...: -0.56689453125 1.34936523437 0.20776367188 
Color Matrix values 7 to 9...: -0.2919921875 0.34936523438 0.63916015625 

I am getting gaps in the picture after conversion. Stippling?

https://dl.dropboxusercontent.com/u/9906333/fc2_save-0000.jpg
I used a JPEG here, but I can see the same issue in the DNG file in Preview or IrfanView. There seem to be multicolor bands when zooming in on the darker areas.

I was wondering if this could be due to the color matrix?

I saw other matrices here - http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html
bjanes
« Reply #228 on: April 27, 2015, 12:29:58 PM »

Quote
I am trying to convert RAW image data with no header info into a usable DNG. At the moment, I am using this color matrix:
Color Matrix values 1 to 3...: 1.0048828125 -0.27294921875 -0.04931640625
Color Matrix values 4 to 6...: -0.56689453125 1.34936523437 0.20776367188
Color Matrix values 7 to 9...: -0.2919921875 0.34936523438 0.63916015625

I am getting gaps in the picture after conversion. Stippling?

https://dl.dropboxusercontent.com/u/9906333/fc2_save-0000.jpg
I used a JPEG here, but I can see the same issue in the DNG file in Preview or IrfanView. There seem to be multicolor bands when zooming in on the darker areas.

I was wondering if this could be due to the color matrix?

I saw other matrices here - http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html

This is an old thread, but it contains much useful information. Many arguments against considering a camera as having a color space are dispelled by this post from the developers of Fast Raw Viewer. In the end, what counts as a color space is partly a matter of semantics.

Bill
digitaldog
« Reply #229 on: April 27, 2015, 12:52:28 PM »

Quote
This is an old thread, but it contains much useful information. Many arguments against considering a camera as having a color space are dispelled by this post from the developers of Fast Raw Viewer. In the end, what counts as a color space is partly a matter of semantics.
Argument 1, "Viewing RAW is impossible", seems silly to me. I've always found BayerDemosaic by Marc Rochkind to be a great teaching tool to illustrate this. For example:
http://www.digitaldog.net/files/raw.jpg


Argument 2, "The color on RAW data is wrong", isn't the same as saying raw has no color space, IMHO. I think we came to agreement here or elsewhere about this. Raw image data is in some native camera color space, but it is not a colorimetric color space, and it has no single "correct" relationship to colorimetry. The piece you outline states:
Quote
A RAW image is recorded in the sensor's "color space." So nothing is really new here. As usual, all we need to do is assign proper "color space" to the data.
Ah, so what is that color space?

Argument 3, "RAW images are very dark", is of course untrue, and the article discusses why. For many years, when teaching classes on why we need proper profiles assigned to image data, I showed a capture from an old Kodak DCS camera, which offered gamma 1.0 capture as an option. I would place Info palette sample points on the image, which at the time was incorrectly assumed to be sRGB. Using the Assign Profile command, I would select the ICC profile that actually defined the data. The numbers didn't change, but the image got lighter and appeared correct. So yes, these images are not dark; they are being previewed with the wrong profile.
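The "raw looks dark" effect falls out of the transfer curve alone. A quick sketch using the standard sRGB encoding function (the 0.18 mid-grey input is just an illustrative linear value):

```python
import numpy as np

def srgb_encode(v):
    """Standard sRGB transfer curve for linear values in [0, 1]."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308,
                    12.92 * v,
                    1.055 * v ** (1 / 2.4) - 0.055)

# A linear mid-grey of 0.18 encodes to roughly 0.46 for an sRGB
# display. So gamma 1.0 (linear) data previewed as if it were already
# sRGB-encoded looks far too dark -- a wrong assumed profile, not
# wrong data.
print(srgb_encode(0.18))
```

Assigning the correct profile changes the interpretation (the preview), not the numbers, which is exactly the Kodak DCS demonstration described above.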

Interestingly, we see the same silly assumptions about Adobe RGB (1998) looking 'dull' compared with sRGB when it is previewed improperly. The same false ideas about Adobe RGB (1998) and these raw files populate the web. There's nothing wrong with the data; there's something wrong with how the data is interpreted: an incorrectly assumed profile.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/

bjanes
« Reply #230 on: April 27, 2015, 05:55:03 PM »

Quote
Argument 2, "The color on RAW data is wrong", isn't the same as saying raw has no color space, IMHO. Raw image data is in some native camera color space, but it is not a colorimetric color space, and it has no single "correct" relationship to colorimetry. The piece you outline states: Ah, so what is that color space?

Yes, I think we did come to agreement on that matter after an interesting discussion. The camera does not have a colorimetrically defined space. However, we can convert from the "camera space" to a colorimetric space using an experimentally derived 3×3 matrix. As I understand it, the coefficients of that matrix are often derived from a least-squares approximation that minimizes the error. Different workers can derive different coefficients, but useful results can be obtained via this approach. See my thoughts on this matter in a post to Iliah regarding Fast Raw Viewer and his response on how this could be done in FRV. Your input would be appreciated.

Regards,

Bill
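For what it's worth, the least-squares fit described above is a few lines of linear algebra. A sketch with synthetic patch data (the "true" matrix and noise level are invented for illustration; real inputs would come from a target shot plus reference colorimeter measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" camera-to-XYZ matrix (illustrative only).
M_true = np.array([[0.70, 0.20, 0.10],
                   [0.30, 0.60, 0.10],
                   [0.05, 0.10, 0.85]])

# 24 patches: linear camera RGB, and slightly noisy reference XYZ.
rgb = rng.uniform(0.05, 0.95, size=(24, 3))
xyz = rgb @ M_true.T + rng.normal(0.0, 0.002, size=(24, 3))

# Least squares: find M minimizing ||rgb @ M.T - xyz||^2.
M_fit, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
M_fit = M_fit.T

# The fitted matrix recovers the true one to within the noise,
# illustrating why different workers get slightly different but
# equally usable coefficients.
assert np.allclose(M_fit, M_true, atol=0.05)
```

Different target shots (different patches, different noise) yield slightly different coefficients, which is exactly the "different workers derive different matrices" point.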

hjulenissen
« Reply #231 on: April 28, 2015, 03:25:12 AM »

Quote
Yes, I think we did come to agreement on that matter after an interesting discussion. The camera does not have a colorimetrically defined space. However, we can convert from the "camera space" to a colorimetric space using an experimentally derived 3×3 matrix. As I understand it, the coefficients of that matrix are often derived from a least-squares approximation that minimizes the error. Different workers can derive different coefficients, but useful results can be obtained via this approach. See my thoughts on this matter in a post to Iliah regarding Fast Raw Viewer and his response on how this could be done in FRV. Your input would be appreciated.

Regards,

Bill
The 3 channels of the camera are measurements of intensity within 3 spectral bands. I guess the color-science quantities are also intensities, at either single wavelengths or some other spectral bands. Then mapping one to the other by a linear 3x3 transform lets you approximate the true camera response with that of some idealized reference, generalizing the response of one particular camera to a reference response (with some errors)? I am an engineer, not a scientist. Color science makes me dizzy.

I have been frustrated by camera color profiles in Lightroom. Are you aware of any tools that can be used to visualize the difference between such profiles? I have a distinct subjective perception that the Adobe profiles for my Canon camera are over-saturated, with the wrong hue in deep reds, compared with my ColorChecker Passport profiles. But having a graph would make the discussion more concrete...

-h
Erland
« Reply #232 on: April 28, 2015, 08:38:46 AM »

I read about this yesterday. I won't go into the discussion; I just thought these links could help you:
https://sites.google.com/site/chromasoft/Profiles
and
http://chromasoft.blogspot.se/2009/02/visualizing-dng-camera-profiles-part-1.html

Service Technician Digital Printers and Peripherals.
Epson Stylus Photo 1400.
AlterEgo
« Reply #233 on: April 28, 2015, 09:46:10 AM »

Quote
The 3 channels of the camera are measurements of intensity within 3 spectral bands. I guess the color-science quantities are also intensities, at either single wavelengths or some other spectral bands. Then mapping one to the other by a linear 3x3 transform lets you approximate the true camera response with that of some idealized reference, generalizing the response of one particular camera to a reference response (with some errors)? I am an engineer, not a scientist. Color science makes me dizzy.

I have been frustrated by camera color profiles in Lightroom. Are you aware of any tools that can be used to visualize the difference between such profiles? I have a distinct subjective perception that the Adobe profiles for my Canon camera are over-saturated, with the wrong hue in deep reds, compared with my ColorChecker Passport profiles. But having a graph would make the discussion more concrete...

-h

Maybe something like the dcp2icc tool will help somewhat? It is easier to find tools that visualize color transforms stored in ICC containers... just my $0.00000000002
BartvanderWolf
« Reply #234 on: April 28, 2015, 09:49:41 AM »

Quote
Yes, I think we did come to agreement on that matter after an interesting discussion. The camera does not have a colorimetrically defined space. However, we can convert from the "camera space" to a colorimetric space using an experimentally derived 3×3 matrix.

That's how I also see it, although we can do even better than a simple 3x3 matrix. With RawDigger, we can take the result of a controlled shot of a ColorChecker-type target (the larger Digital SG version is better suited than the simple 24-patch version) and output a CGATS text file directly from the Raw data, even before demosaicing.

That CGATS data file can then be used by, e.g., Argyll CMS to build a profile for the camera and illuminant used. There is even a 'simple' wrapper by Iliah Borg, called MakeInputICC, for the specific Argyll application that creates the ICC profile; I believe Windows and Mac versions are available. It allows one to create a simple matrix-type profile, but also LUT versions. The colorspace primaries will be part of the profile.

Quote
As I understand it, the coefficients of that matrix are often derived from a least-squares approximation that minimizes the error. Different workers can derive different coefficients, but useful results can be obtained via this approach.

Yes; either via the matrix coefficients or via the CGATS-derived profile primaries, one can do further work based on that data.

Cheers,
Bart

== If you do what you did, you'll get what you got. ==
AlterEgo
« Reply #235 on: April 28, 2015, 10:11:56 AM »

Quote
That's how I also see it, although we can do even better than a simple 3x3 matrix.

in some discussions there was a reference to

http://www.cis.rit.edu/jwgu/research/camspec/

The idea is to shoot a proper target and try to find a good approximation of the CFA transmission curves (in the absence of a monochromator setup to measure them directly); then it is pure math: with those curves and the illumination spectrum you can create "virtual" targets with as many patches of whatever reflectance is required, and calculate LUT profiles that are arguably better than before... somebody with a good grasp of MATLAB and the math can try that. The link has some MATLAB code supplied to do the first part.
AlterEgo
« Reply #236 on: April 28, 2015, 10:14:15 AM »

Quote
but also LUT versions.
It seems that the old Gretag ProfileMaker can teach Argyll a lesson when it comes to creating "usable" LUT profiles from targets with a limited number of patches.
joofa
« Reply #237 on: April 28, 2015, 11:32:47 AM »

Quote
That's how I also see it, although we can do even better than a simple 3x3 matrix.

Cheers,
Bart

Of course one can do better than a 3x3 matrix, by using a higher-dimensional matrix. For example, in the link below it was stated that a Quattro camera, which uses a Foveon sensor, could not match the human luminous response (CIE Y) if a 3x3 transformation of the Foveon sensor 'primaries' was used:

http://www.dpreview.com/forums/post/54071772

However, in a 9-dim setting, which still yields a linear CCM, I got the following response, which is quite close. The interesting thing to note here is that the extra 6 parameters were derived from the camera/sensor R, G, B. Hence, no change to the camera hardware needs to be made.

http://www.dpreview.com/forums/post/54094414

The reason is that while the extra dimensions so obtained are correlated with the original 3, they are linearly independent of them, and adding extra linearly independent terms to a linear model can move the fit closer to the desired target. There can be a noise penalty, though, when such extra terms are added.

More info below:

http://www.dpreview.com/forums/post/54094878
http://www.dpreview.com/forums/post/54095986


Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
AlterEgo
« Reply #238 on: April 28, 2015, 11:54:19 AM »

Quote
However, in a 9-dim setting, which still yields a linear CCM,

Isn't that actually already a non-linear transform hidden behind some "matrix"?
joofa
« Reply #239 on: April 28, 2015, 11:58:15 AM »

Quote
Isn't that actually already a non-linear transform hidden behind some "matrix"?

No. The transform is linear, of the y = Ax variety, where y and x are vectors and A is the matrix. The coefficients of A are, however, applied to terms derived non-linearly from R, G, and B, so the basis functions are non-linear, which gives the model more power than linear-only basis functions. But that doesn't change the nature of the transformation, which remains linear.
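A sketch of what such an extended linear model looks like, on synthetic data (the particular 9-term basis of products and squares is one common choice, used here hypothetically, and the target matrix and non-linear component are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def basis9(rgb):
    """Expand each RGB triple into 9 terms: R, G, B plus products and
    squares. The terms are non-linear functions of RGB, but the model
    y = A @ basis9(rgb) is still linear in the 3x9 matrix A."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b], axis=-1)

rgb = rng.uniform(0.0, 1.0, size=(50, 3))

# Synthetic target response with a non-linear (R*G) component that no
# plain 3x3 matrix can reproduce exactly.
M = np.array([[0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.05, 0.1, 0.85]])
y = rgb @ M.T + 0.3 * rgb[:, [0]] * rgb[:, [1]]

# Least-squares fit of a 3x3 model vs a 3x9 (extended basis) model.
_, res3, *_ = np.linalg.lstsq(rgb, y, rcond=None)
_, res9, *_ = np.linalg.lstsq(basis9(rgb), y, rcond=None)

# The 9-term model fits essentially exactly; the 3-term one cannot.
print(res3.sum(), res9.sum())
```

Because R*G is itself one of the nine basis terms, the extended model captures the target exactly, while the best 3x3 fit leaves a residual — the same mechanism behind the Foveon/Quattro luminance example above.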
« Last Edit: April 28, 2015, 12:00:20 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins