
Author Topic: Why do digital cameras capture the full visible spectrum?  (Read 1406 times)

Tulloch

  • Newbie
  • Offline
  • Posts: 7
Why do digital cameras capture the full visible spectrum?
« on: May 02, 2021, 08:08:20 am »

Maybe this is a strange question, but I'm wondering why digital colour cameras measure the entire rgb spectrum if they are only going to be displayed using the sRGB primaries? There seems to be a lot of focus on reducing the amount of colour bleed across the filters over the sensor pixels, and making sure that they fill the visible spectrum and cross over at particular values arbitrarily defined as "red", "green" and "blue", when it may actually be easier and more accurate to simply capture narrow-band data corresponding to the sRGB primaries, 612nm for red, 549nm for green and 464nm for blue.

Why capture the full visible spectrum if the signal will simply be converted to one wavelength per colour for display purposes?

Andrew

JRSmit

  • Sr. Member
  • Offline
  • Posts: 922
    • Jan R. Smit Fine Art Printing Specialist
Re: Why do digital cameras capture the full visible spectrum?
« Reply #1 on: May 02, 2021, 09:32:49 am »

Maybe this is a strange question, but I'm wondering why digital colour cameras measure the entire rgb spectrum if they are only going to be displayed using the sRGB primaries?

...
What makes you state that the display of the captured result is only sRGB?
Fine art photography: janrsmit.com
Fine Art Printing Specialist: www.fineartprintingspecialist.nl


Jan R. Smit

mcbroomf

  • Sr. Member
  • Offline
  • Posts: 1538
    • Mike Broomfield
Re: Why do digital cameras capture the full visible spectrum?
« Reply #2 on: May 02, 2021, 10:14:16 am »

What makes you state that the display of the captured result is only sRGB?

+1 .... I certainly want more than sRGB. 

Also, if you only capture very narrow wavelengths per colour, which I think is what you are asking at the end of your post, you will get terrible banding as you move from one colour to the next. Think of a sky. It's not one wavelength of blue.

digitaldog

  • Sr. Member
  • Online
  • Posts: 20646
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Why do digital cameras capture the full visible spectrum?
« Reply #3 on: May 02, 2021, 12:27:59 pm »

Digital raw capture has nothing to do with sRGB, which is based on a theoretical emissive display (a CRT) circa 1994 or so. Digital cameras don't even have a color gamut! Digital cameras have a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength. Cameras don't have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries. Two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values (if the spectral power distributions of the primaries are different).

A camera can capture and encode some colors, as values unique from all others, that are imaginary (not visible) to us. There are also colors we can see that the camera can't capture; those are imaginary to it. Most of the colors the camera can "see" we can see as well. Some cameras can even "see" (capture) colors outside the spectral locus, though every attempt is usually made to filter those out. Most important is the fact that cameras "see" colors inside the spectral locus differently than humans do. I know of no shipping camera that meets the Luther-Ives condition. This means that cameras exhibit significant observer metameric failure compared to humans.

The camera color space differs from a more common working color space in that it does not have a unique one-to-one transform to and from CIE XYZ. This is because the camera has different color filters than the human eye, and thus "sees" colors differently. Any translation from camera color space to CIE XYZ space is therefore an approximation.

The point is that if you think of camera primaries you can come to many incorrect conclusions because cameras capture spectrally. On the other hand, displays create colors using primaries. Primaries are defined colorimetrically so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. Therefore, the measured pixel values don't even produce a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.

Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry. The same thing could be said about a color film negative.
Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like when we scan film).

Do raw files have a color space? Fundamentally, they do, but we or those handling this data in a converter may not know what that color space is. The image was recorded through a set of camera spectral sensitivities which defines the intrinsic colorimetric characteristics of the image. One simple way to think of this is that the image was recorded through a set of "primaries" and these primaries define the color space of the image.

If we had spectral sensitivities for the camera, that would make the job of mapping to CIE XYZ better and easier, but we'd still have decisions on what to do with the colors the camera encodes, that are imaginary to us.
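To make that last point concrete, here is a minimal sketch of deriving a camera-to-XYZ matrix from spectral sensitivities by least squares. Everything here is made up for illustration: the camera channels are hypothetical Gaussians, and the CIE 1931 colour-matching functions are crude Gaussian stand-ins, not the real tabulated data.

```python
import numpy as np

wl = np.arange(380, 781, 5, dtype=float)  # wavelength grid, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Hypothetical camera: three broad, overlapping channels (Bayer-like)
cam = np.stack([gauss(600, 45), gauss(540, 45), gauss(465, 35)])

# Crude stand-ins for the CIE 1931 x-bar, y-bar, z-bar functions
# (x-bar gets a small secondary blue lobe)
cie = np.stack([
    1.06 * gauss(599, 38) + 0.36 * gauss(446, 19),
    1.00 * gauss(556, 42),
    1.78 * gauss(449, 23),
])

# Least-squares 3x3 matrix M minimising ||M @ cam - cie||: the closest
# any single matrix can get to making this camera colorimetric. The
# residual that remains is the camera's observer metameric failure.
M = cie @ cam.T @ np.linalg.inv(cam @ cam.T)

# For a test spectrum, compare true XYZ with the camera's estimate
s = gauss(520, 30)
xyz_true = cie @ s
xyz_est = M @ (cam @ s)
print(np.round(xyz_true, 2), np.round(xyz_est, 2))
```

The residual of that fit is exactly why any translation from camera space to CIE XYZ is an approximation: no 3x3 matrix can drive it to zero unless the camera happens to satisfy the Luther-Ives condition.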


http://www.digitaldog.net/
Author "Color Management for Photographers".

Tulloch

  • Newbie
  • Offline
  • Posts: 7
Re: Why do digital cameras capture the full visible spectrum?
« Reply #4 on: May 02, 2021, 05:16:09 pm »

What makes you state that the display of the captured result is only sRGB?

OK, I understand, but 99% of people view digital images on a monitor with an sRGB gamut. This is a bit of a thought experiment; I'm just trying to understand the process.

+1 .... I certainly want more than sRGB. 

Also, if you only capture very narrow wavelengths per colour which I think is what you are asking at the end of the post, you will get terrible banding as you move from one colour to the next.  Think of a sky.  It's not one wavelength of blue.

Thanks, but aren't 99% of pictures of the sky shown on a monitor with an sRGB gamut? Banding is not usually an issue, is it?

Digital raw capture has nothing to do with sRGB which is based on a theoretical emissive display (CRT) circa 1994 or so. Digital cameras don't even have a color gamut!

...

If we had spectral sensitivities for the camera, that would make the job of mapping to CIE XYZ better and easier, but we'd still have decisions on what to do with the colors the camera encodes, that are imaginary to us.

Thanks for this detailed information. I guess my question is at the level of "how do cameras convert the linear sensor data, captured over the wavebands defined as red, green or blue, into a single colour primary for display purposes?" I naively thought that the linear sensor data was combined into a bin that then used a CIE 1931 conversion, where the intensity of each primary was assigned the linear output for the particular waveband, and then XYZ, Lab and RGB were calculated according to the colour matrix.

You said that the image was kind of captured with a set of camera "primaries" (which I assume refers to the rgb filters and sensitivity of the camera pixels), so why not start with the narrowband wavelengths for the display? Why is the overall waveband intensity captured when only a specific wavelength is used for the final image?

For the cameras I am using, I know the spectral sensitivities across the visible band, but how would that help me convert to XYZ space?
http://astronomy-imaging-camera.com/wp-content/uploads/QE-ASI224.jpg
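The naive pipeline described above can be sketched as follows. The camera-to-XYZ matrix here is a made-up placeholder (a real one would come from profiling the specific camera); the XYZ-to-sRGB matrix and transfer function are the standard sRGB (IEC 61966-2-1) ones.

```python
import numpy as np

# Made-up placeholder: a real camera-to-XYZ matrix comes from
# profiling the specific camera (chart shots or sensitivities)
CAM_TO_XYZ = np.array([
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
])

# Standard XYZ -> linear sRGB matrix (D65 white point)
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def srgb_encode(linear):
    """Piecewise sRGB transfer function ("gamma")."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308, 12.92 * linear,
                    1.055 * linear ** (1 / 2.4) - 0.055)

def camera_rgb_to_srgb(cam_rgb):
    xyz = CAM_TO_XYZ @ cam_rgb   # camera space -> XYZ (an approximation)
    lin = XYZ_TO_SRGB @ xyz      # XYZ -> linear sRGB
    return srgb_encode(lin)      # linear -> display-encoded sRGB

print(camera_rgb_to_srgb(np.array([0.5, 0.5, 0.5])))
```

The first matrix is the step with no single "correct" answer; the second matrix and the curve are fixed by the sRGB standard.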

Thanks again, Andrew

digitaldog

  • Sr. Member
  • Online
  • Posts: 20646
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Why do digital cameras capture the full visible spectrum?
« Reply #5 on: May 02, 2021, 05:30:25 pm »

OK, I understand, but 99% of people view digital images on a monitor with an sRGB gamut. This is a bit of a thought experiment; I'm just trying to understand the process.
There's no reason to make up such figures.
Further, every iPhone since version 6, iPads around the same time, many iMacs, are wide gamut devices (DCI-P3) and then there are all the other wide gamut panels many of us go out of our way to purchase. sRGB is rather useless for anything other than posting to the web and mobile devices without color management. And without color management, sRGB is a meaningless concept anyway**
Quote
Thanks, but aren't 99% of pictures of the sky shown on a monitor with an sRGB gamut? Banding is not usually an issue is it?
No.
Quote
I guess my question is at the level of "how do cameras convert the linear sensor data captured over the wavebands defined as red, green or blue into a single colour primary for display purposes?"
Uniquely (depending on the raw converter) and in a proprietary fashion.
Quote
so why not start with the narrowband wavelengths for the display?
Why would anyone do this? A camera isn't a display. All displays differ; that's why we calibrate and profile them.
Quote
For the cameras I am using, I know the spectral sensitivities across the visible band, but how would that help me convert to XYZ space?
It would when you start writing your own raw converter.  ;D

**sRGB urban legend & myths Part 2

In this 17 minute video, I'll discuss some more sRGB misinformation and cover:
When to use sRGB and what to expect on the web and mobile devices
How sRGB doesn't ensure a visual match without color management, and how to check
The downsides of an all sRGB workflow
sRGB's color gamut vs. "professional" output devices
The future of sRGB and wide gamut display technology
Photo print labs that demand sRGB for output

High resolution: http://digitaldog.net/files/sRGBMythsPart2.mp4
Low resolution on YouTube: https://www.youtube.com/watch?v=WyvVUL1gWVs
http://www.digitaldog.net/
Author "Color Management for Photographers".

GWGill

  • Sr. Member
  • Offline
  • Posts: 608
  • Author of ArgyllCMS & ArgyllPRO ColorMeter
    • ArgyllCMS
Re: Why do digital cameras capture the full visible spectrum?
« Reply #6 on: May 02, 2021, 06:12:01 pm »

Why capture the full visible spectrum if the signal will simply be converted to one wavelength per colour for display purposes?

The simple answer is "because that's not how human eyes work".

Light reflected from objects can be characterised by its spectral light distribution.
When we see something, our eyes are integrating different wavelengths into 3 components that we interpret as color.
Different sensors can use different integration functions (AKA Color Matching Functions), but this means that they will see the world differently to how we see it. This isn't a useful trait in something intended to capture images for human consumption, irrespective of how they are displayed.

Extreme example, let's say you use your 612nm, 549nm and 464nm narrow band camera to capture a scene containing an energised 504nm LED. To your eyes the LED is clearly visible and on. To your narrow band camera, the LED looks like it is off. Not useful.
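Putting rough numbers on that example (all filter shapes below are made-up Gaussians, chosen only to contrast narrow bands against broad, overlapping ones):

```python
import numpy as np

wl = np.arange(380, 781, 1, dtype=float)  # wavelength grid, nm

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

led = band(504, 5)  # narrowband 504 nm LED emission spectrum

# Narrowband camera at the sRGB primary wavelengths vs broad,
# overlapping channels like a real colour filter array
narrow_cam = np.stack([band(c, 2) for c in (612, 549, 464)])
broad_cam = np.stack([band(c, 45) for c in (600, 540, 465)])

narrow_rgb = narrow_cam @ led   # ~ [0, 0, 0]: the LED is invisible
broad_rgb = broad_cam @ led     # clearly nonzero: the LED is seen

print("narrow:", np.round(narrow_rgb, 4))
print("broad: ", np.round(broad_rgb, 2))
```

The broad channels record the LED as a cyan-ish mix of green and blue, much as our eyes do; the narrow-band camera records essentially nothing.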

Tulloch

  • Newbie
  • Offline
  • Posts: 7
Re: Why do digital cameras capture the full visible spectrum?
« Reply #7 on: May 02, 2021, 06:41:42 pm »

There's no reason to make up such figures.
Further, every iPhone since version 6, iPads around the same time, many iMacs, are wide gamut devices (DCI-P3) and then there are all the other wide gamut panels many of us go out of our way to purchase.
OK, I guess I was a little flippant with my 99% figure :)

sRGB is rather useless for anything other than posting to the web and mobile devices without color management. And without color management, sRGB is a meaningless concept anyway.

This is exactly what I'm trying to understand. In the astro-photography world, all we get is a camera sensor with a linear signal output value that is recorded to disk. No colour management (other than a gross multiplication factor for "red" and "blue" pixels relative to the green ones), no ICC profiles, no colour correction matrix, no transformation to any gamut, just raw linear numbers. These then get converted by default to sRGB without any consideration of an appropriate gamma function, and displayed on a screen.
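That default path can be sketched as follows; the white-balance gains are made-up illustrative numbers, and srgb_encode is the standard piecewise sRGB curve that the default path skips:

```python
import numpy as np

def srgb_encode(v):
    """Standard piecewise sRGB transfer curve (IEC 61966-2-1)."""
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308, 12.92 * v,
                    1.055 * v ** (1 / 2.4) - 0.055)

raw = np.array([0.08, 0.10, 0.06])   # linear R, G, B sensor values
gains = np.array([1.9, 1.0, 1.6])    # crude per-channel WB multipliers

balanced = np.clip(raw * gains, 0.0, 1.0)
encoded = srgb_encode(balanced)

# An sRGB display expects encoded values; feeding it the linear ones
# renders the shadows far too dark.
print("linear:      ", np.round(balanced, 3))
print("sRGB-encoded:", np.round(encoded, 3))
```

For dark, linear astro data the encoding step more than doubles the displayed values, which is why skipping it crushes everything toward black.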

It would when you start writing your own raw converter.  ;D

This is what I have been trying to do, with little success. I've imaged Macbeth colour charts under approximate D50 lighting conditions with my astro-camera, created ICM profiles for it using Argyll and Coca, investigated the colour temperatures for daylight vs starlight conditions, and am still no further along than when I started.

Do you know of a good reference on the web that might help me along this path?

Thanks again, Andrew

Tulloch

  • Newbie
  • Offline
  • Posts: 7
Re: Why do digital cameras capture the full visible spectrum?
« Reply #8 on: May 02, 2021, 06:45:35 pm »

The simple answer is "because that's not how human eyes work".

...

Extreme example, let's say you use your 612nm, 549nm and 464nm narrow band camera to capture a scene containing an energised 504nm LED. To your eyes the LED is clearly visible and on. To your narrow band camera, the LED looks like it is off. Not useful.

Hmmm, good point, nice example, thanks :).

Assuming that we are using our "full visible spectrum" camera to image this LED, would the colour in the image show up as 504nm?

digitaldog

  • Sr. Member
  • Online
  • Posts: 20646
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: Why do digital cameras capture the full visible spectrum?
« Reply #9 on: May 02, 2021, 07:02:30 pm »

Do you know of a good reference on the web that might help me along this path?
For what? Creating your own raw processor? Sorry, no.
http://www.digitaldog.net/
Author "Color Management for Photographers".

EricV

  • Sr. Member
  • Offline
  • Posts: 270
Re: Why do digital cameras capture the full visible spectrum?
« Reply #10 on: May 02, 2021, 07:57:26 pm »

In addition to the other answers, capturing only narrow bands of the spectrum would waste most of the incoming light, so the camera would be very insensitive.
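A back-of-the-envelope version of that point, integrating an idealised flat spectrum through a broad versus a narrow Gaussian passband (made-up shapes, illustration only):

```python
import numpy as np

wl = np.arange(380, 781, 1, dtype=float)  # wavelength grid, nm
flat = np.ones_like(wl)                   # idealised flat illuminant

def band(center, sigma):
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

broad_signal = band(540, 45) @ flat   # typical CFA-like green channel
narrow_signal = band(549, 2) @ flat   # narrow band at the sRGB green

print(f"the narrow filter passes ~{broad_signal / narrow_signal:.0f}x "
      "less light than the broad one")
```

An order of magnitude or two of lost light translates directly into longer exposures or much noisier images.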

Tulloch

  • Newbie
  • Offline
  • Posts: 7
Re: Why do digital cameras capture the full visible spectrum?
« Reply #11 on: May 02, 2021, 09:14:41 pm »

For what? Creating your own raw processor, sorry; no.

OK, thanks for your replies to my off-beat questions :)

Andrew