
Author Topic: State of the art camera profiling software?  (Read 43183 times)

TRANTOR

  • Newbie
  • Posts: 24
    • Nuclear Light | Home of Color
Re: State of the art camera profiling software?
« Reply #80 on: March 12, 2015, 05:56:44 pm »

It will also give some indication of the gamut capture capability of the camera.
A sensor gamut doesn't have a "gamut capture capability", only the shape of its convex hull. And you cannot reach that convex hull without capturing all the "border" (high-chromaticity) spectra.

For example, the convex hull of the Canon 6D sensor gamut (vs. the human vision gamut):

Tim Lookingbill

  • Sr. Member
  • Posts: 2436
Re: State of the art camera profiling software?
« Reply #81 on: March 12, 2015, 10:09:44 pm »

A sensor gamut doesn't have a "gamut capture capability", only the shape of its convex hull. And you cannot reach that convex hull without capturing all the "border" (high-chromaticity) spectra.

For example, the convex hull of the Canon 6D sensor gamut (vs. the human vision gamut):

My understanding of how sensors record RGGB-filtered photons, on a per-pixel-site luminance scale defined as voltage readings and converted to 1's & 0's by the A/D converter, tells me there is no way to assert any 3D color gamut model of a sensor until those voltage charges, as correlated to 1's & 0's, are defined as color after demosaicing in an image processor.

From my understanding you'd have to measure how many luminance-level variations exist between zero charge and full saturation at each pixel site, which determines how many possible levels of RGGB intensity, and compare each site against the pixel site right next to it, its nearest neighbor. These levels of variation between pixel sites (excluding noise) would be the only way to define how many combinations of color a sensor can record, but defining those combinations requires reconstruction in an image processor on the computer.

How and at what stage are the voltage variants at each pixel site defined by a 3D color gamut model?
« Last Edit: March 12, 2015, 10:12:07 pm by Tim Lookingbill »
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #82 on: March 13, 2015, 05:06:03 am »

How and at what stage are the voltage variants at each pixel site defined by a 3D color gamut model?

You don't need to demosaic to get RGB responses: a test patch covers a bunch of pixels, so you get plenty of each of R, G and B. When you've measured the RGB filters like Trantor, you don't need to shoot test targets; you can feed the camera model with "virtual test patches", including single wavelengths, so you can trace the spectral locus for example (if I understand correctly, Trantor's plots above are the result of spectral locus tracing). Being able to work with virtual test patches is necessary to have a gamut discussion, as a reflective test target can never cover any extreme colors.
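As a rough illustration of the virtual-test-patch idea, here is a Python sketch. The Gaussian sensitivity curves are made-up stand-ins (real curves come from monochromator measurements), and feeding one monochromatic spike per band traces the spectral locus:

```python
import numpy as np

# Hypothetical spectral sensitivity curves, sampled 380-730 nm at 5 nm
# steps (71 bands). These Gaussians are placeholders, not measured data.
wl = np.arange(380, 731, 5)  # wavelengths in nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

sens = np.stack([gaussian(600, 40),   # "red" channel
                 gaussian(540, 40),   # "green" channel
                 gaussian(460, 30)])  # "blue" channel

def camera_response(spectrum):
    """Raw-linear RGB triplet for a virtual test patch spectrum."""
    return sens @ spectrum  # discrete approximation of the integral

# A "virtual patch" can be any spectrum, including a single wavelength:
# tracing the spectral locus means feeding one monochromatic spike per band.
locus = np.array([camera_response(np.eye(len(wl))[i]) for i in range(len(wl))])
print(locus.shape)  # one RGB triplet per wavelength: (71, 3)
```

The same `camera_response` function can then be fed reflective-target spectra or any synthetic spectrum, which is what makes the gamut discussion possible at all.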

Printer+paper gamuts are simple: we can print all the colors a printer can reproduce onto a number of papers and accurately measure them, so we get an accurate gamut.

The printer is an output device, so the question is "which colors can it reproduce?"; with the camera the question is "which colors can it register?".

With the camera, a problem is that we can't make a test chart that covers all the colors we can see. If we could, the camera's gamut could be defined as that covered by all test patches that give us a unique RGB value; in other words, all the colors the camera can differentiate.

With the virtual method we can generate any test patch spectrum; however, the variations are infinite, and I don't know if there is a good method to generate only the ones we need to appropriately cover all the colors the eye can see. Maybe there is, Trantor may know. If there is, we could do the above, i.e. feed the measured camera response curves with all humanly detectable spectra and see how many of them yield a different RGB value. Doing it all inside Matlab or other software, we can test millions of virtual patches.

The next step, however, is that RGB values need to be mapped to *correct* XYZ positions. This is the job of the profile. A matrix-only profile will typically only succeed in placing low-saturation colors reasonably correctly, and the high-saturation colors will be way off, possibly in out-of-human-gamut positions. A LUT profile can make the best of it, but there may be reasons not to optimize solely for XYZ accuracy, as many extreme colors are never seen in a real scene.
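A matrix-only profile boils down to a 3x3 least-squares fit from camera RGB to reference XYZ over the training patches. A minimal sketch with invented patch data (the numbers are illustrative, not measured):

```python
import numpy as np

# Made-up training data: camera RGB values and reference XYZ coordinates
# for a handful of test patches (purely illustrative numbers).
rgb = np.array([[0.9, 0.1, 0.05],
                [0.2, 0.8, 0.10],
                [0.1, 0.2, 0.85],
                [0.5, 0.5, 0.50],
                [0.3, 0.6, 0.20]])
xyz = np.array([[0.41, 0.21, 0.02],
                [0.36, 0.72, 0.12],
                [0.18, 0.07, 0.95],
                [0.48, 0.50, 0.54],
                [0.33, 0.52, 0.23]])

# Solve for the 3x3 matrix M minimizing ||rgb @ M.T - xyz||^2 over patches.
M, residuals, rank, _ = np.linalg.lstsq(rgb, xyz, rcond=None)
M = M.T

predicted = rgb @ M.T
# Low-saturation patches tend to fit well; saturated ones are where a
# matrix profile typically falls apart, which is what the LUT is for.
print(np.abs(predicted - xyz).max())
```

A LUT profile would then add per-region corrections on top of (or instead of) this single global matrix.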

When you have a profile that translates only a subset of RGB combinations to correct XYZ coordinates, and many end up in grossly incorrect positions or even outside the human gamut, what then is the gamut of the camera+profile combination? There is no clear definition, and I think one should then rephrase the question to something like: how large is the gamut within which this camera+profile combination can produce colors with a Delta E smaller than X (where X is quite large, say 10)?

Probably it's wise to optimize a profile to make a good color match within, say, Pointer's gamut, and relax the matching of extreme colors.

It's also worth noting that a camera may also be able to separate some spectra that the eye can't. Using the human eye as reference, I guess we should consider those colors invalid, and a LUT profile could merge them to the same XYZ coordinate, but for artistic reasons we may want them to be kept separate anyway.

All these issues are the reason some say "cameras have no gamut". With a printer you just need a high-quality ICC profile and a profile viewer and you'll see which colors it can correctly reproduce. Looking at a camera ICC or DNG profile, you cannot see which colors it can accurately capture.
« Last Edit: March 13, 2015, 06:01:23 am by torger »
AlterEgo

  • Sr. Member
  • Posts: 1995
Re: State of the art camera profiling software?
« Reply #83 on: March 13, 2015, 09:28:09 am »

> You don't need to demosaic to get RGB responses: a test patch covers a bunch of pixels, so you get plenty of each of R, G and B.

By the way, what are the nuances for RG1BG2, where G1 != G2, a common thing in many cameras (not talking about CYGM or RGBE)? We need to pay attention to how the demosaic that converts the camera's RG1BG2 into the camera's RGB will handle that, no?
AlterEgo

  • Sr. Member
  • Posts: 1995
Re: State of the art camera profiling software?
« Reply #84 on: March 13, 2015, 09:32:49 am »

It's also worth noting that a camera may also be able to separate some spectra that the eye can't.

Or how do we deal with metameric failures, where the camera (its model represented by those measured curves) can't distinguish two colors that we can? How do we assign them? They might be sufficiently different.
digitaldog

  • Sr. Member
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: State of the art camera profiling software?
« Reply #85 on: March 13, 2015, 10:40:28 am »

TL;DR
And no chance to make profiles that are colorimetrically accurate in general, because sensors do not satisfy the Luther-Ives condition.
So true, and yet that single important fact seems to have been ignored.
http://www.digitaldog.net/
Author "Color Management for Photographers".

digitaldog

  • Sr. Member
  • Posts: 20630
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: State of the art camera profiling software?
« Reply #86 on: March 13, 2015, 11:08:56 am »

Digital cameras don't have a gamut, but rather a color mixing function. Basically, a color mixing function is a mathematical representation of a measured color as a function of the three standard monochromatic RGB primaries needed to duplicate a monochromatic observed color at its measured wavelength.

Cameras don't have primaries, they have spectral sensitivities, and the difference is important because a camera can capture all sorts of different primaries. Two different primaries may be captured as the same values by a camera, and the same primary may be captured as two different values by a camera (if the spectral power distributions of the primaries are different).

A camera has colors it can capture and encode as unique values compared to others, that are imaginary (not visible) to us. There are colors we can see, but the camera can't capture, that are imaginary to it. Most of the colors the camera can "see" we can see as well. Yet some cameras can "see colors" outside the spectral locus, however every attempt is usually made to filter those out.

Most important is the fact that cameras "see colors" inside the spectral locus differently than humans. No shipping camera that I know of meets the Luther-Ives condition. This means that cameras exhibit significant observer metamerism with respect to humans. The camera color space differs from a more common working color space in that it does not have a unique one-to-one transform to and from CIE XYZ. This is because the camera has different color filters than the human eye, and thus "sees" colors differently. Any translation from camera color space to CIE XYZ space is therefore an approximation.

The point is that if you think of camera primaries you can come to many incorrect conclusions because cameras capture spectrally. On the other hand, displays create colors using primaries. Primaries are defined colorimetrically so any color space defined using primaries is colorimetric. Native (raw) camera color spaces are almost never colorimetric, and therefore cannot be defined using primaries. Therefore, the measured pixel values don't even produce a gamut until they're mapped into a particular RGB space. Before then, *all* colors are (by definition) possible.

Raw image data is in some native camera color space, but it is not a colorimetric color space, and has no single “correct” relationship to colorimetry. The same thing could be said about a color film negative. Someone has to make a choice of how to convert values in non-colorimetric color spaces to colorimetric ones. There are better and worse choices, but no single correct conversion (unless the “scene” you are photographing has only three independent colorants, like when we scan film).

Do raw files have a color space? Fundamentally, they do, but we or those handling this data in a converter may not know what that color space is. The image was recorded through a set of camera spectral sensitivities which defines the intrinsic colorimetric characteristics of the image. One simple way to think of this is that the image was recorded through a set of "primaries" and these primaries define the color space of the image.

If we had spectral sensitivities for the camera, that would make the job of mapping to CIE XYZ better and easier, but we'd still have decisions on what to do with the colors the camera encodes, that are imaginary to us.
TRANTOR

  • Newbie
  • Posts: 24
    • Nuclear Light | Home of Color
Re: State of the art camera profiling software?
« Reply #87 on: March 13, 2015, 11:22:12 am »

I don't know if there is a good method to generate only the ones we need to appropriately cover all colors the eye can see.
I multiply the spectral data of the Munsell Book of Color by random numbers in each band. Not brilliant, but it's the usual approach.
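A sketch of that randomization in Python; the "Munsell" reflectances below are random placeholders, since the real dataset (e.g. the measured Munsell spectra published by the Joensuu/UEF spectral color research group) isn't bundled here:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(380, 731, 5)

# Placeholder for the Munsell Book of Color reflectance data: in practice
# you would load the measured spectra. Clipped random curves stand in here.
munsell = np.clip(rng.normal(0.5, 0.2, size=(1600, len(wl))), 0.0, 1.0)

def virtual_patches(base, n_variants=10):
    """Multiply each base spectrum by random per-band factors, spreading
    the sample set toward higher-chromaticity spectra."""
    factors = rng.uniform(0.0, 2.0, size=(n_variants,) + base.shape)
    return np.clip(base[None] * factors, 0.0, None).reshape(-1, base.shape[1])

patches = virtual_patches(munsell)
print(patches.shape)  # (16000, 71): ten randomized variants per base spectrum
```

Each generated spectrum can then be fed to the measured camera model as a virtual test patch.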
« Last Edit: March 13, 2015, 11:49:05 am by TRANTOR »
TRANTOR

  • Newbie
  • Posts: 24
    • Nuclear Light | Home of Color
Re: State of the art camera profiling software?
« Reply #88 on: March 13, 2015, 11:48:14 am »

Just for clarification.

 Any translation from camera color space to CIE XYZ space is therefore an approximation.
I think free-form deformation (FFD) is a more useful technique than "clean" approximation. Approximations behave less predictably in some cases.

https://youtu.be/7Pe-RPLMeDI

but no single correct conversion
No single linear conversion.
« Last Edit: March 13, 2015, 11:59:56 am by TRANTOR »
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #89 on: March 13, 2015, 11:50:53 am »

Or how do we deal with metameric failures, where the camera (its model represented by those measured curves) can't distinguish two colors that we can? How do we assign them? They might be sufficiently different.

I think that one is easy: if the RGB values are the same, then the XYZ output will be the same. You could of course argue that that XYZ coordinate should correspond to the "center" of the local space the camera can't differentiate, rather than to just one of the test patches' XYZ coordinates, and finding that center will not be easy. I think it's a minor problem; I'd probably not search for any center but just place the coordinate so as to maximize smoothness.
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #90 on: March 13, 2015, 11:55:09 am »

> You don't need to demosaic to get RGB responses: a test patch covers a bunch of pixels, so you get plenty of each of R, G and B.

By the way, what are the nuances for RG1BG2, where G1 != G2, a common thing in many cameras (not talking about CYGM or RGBE)? We need to pay attention to how the demosaic that converts the camera's RG1BG2 into the camera's RGB will handle that, no?

G1 != G2 is a manufacturing limitation, where you have some blue tint in the green filter on one row and red tint on the next. The difference is about 1%; monochromator measurement errors will probably be larger. In any case we would average over lots of pixels, which the demosaicer will also do, i.e. no meaningful difference.
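Reading out the four CFA planes directly, without demosaicing, is just strided slicing. A sketch, assuming an RGGB layout and a black-level-subtracted mosaic:

```python
import numpy as np

def patch_means(mosaic, pattern="RGGB"):
    """Per-filter mean raw values over a Bayer mosaic patch crop.
    Assumes the two greens sit in positions 1 and 2 of the pattern.
    G1 and G2 are reported separately so their (usually ~1%) difference
    can be inspected before deciding to average them."""
    planes = {pattern[0]:       mosaic[0::2, 0::2],
              pattern[1] + "1": mosaic[0::2, 1::2],
              pattern[2] + "2": mosaic[1::2, 0::2],
              pattern[3]:       mosaic[1::2, 1::2]}
    return {k: float(v.mean()) for k, v in planes.items()}

# Synthetic patch crop standing in for a real raw readout.
rng = np.random.default_rng(1)
mosaic = rng.integers(1000, 1100, size=(64, 64)).astype(float)
print(patch_means(mosaic))  # {'R': ..., 'G1': ..., 'G2': ..., 'B': ...}
```

Averaging many pixels per plane is exactly what makes the G1/G2 difference (and photon noise) wash out of the measurement.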

There will be errors in many places of the profiling process, one challenge is to figure out which errors are large and which are small, which we need to take into account and which we can ignore.
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #91 on: March 13, 2015, 12:01:59 pm »

A profile can skip the matrixing approximation altogether and jump directly to a LUT solution. Phase One's ICC profiles go from RGB directly to Lab, so they incorporate chromatic adaptation in the profile too.

With DNG profiles you can't skip the matrix, but you don't really need it to produce a sane result; you can just drag colors into the desired position with the LUT.

To get a deeper understanding of the limitations of camera color, though, it's a good start to look at pure linear matrix profiles.

Tim Lookingbill

  • Sr. Member
  • Posts: 2436
Re: State of the art camera profiling software?
« Reply #92 on: March 13, 2015, 02:30:52 pm »

So, anyone want to answer, or take a guess at, how many photons each pixel site of a sensor can contain between zero charge and max charge, to present a specific but variable charge so the A/D converter can assign a luminance number to define 1's & 0's?

How many variants of charge per pixel translate into a voltage reading that can be assigned a color? 255 levels, as in digital language? Millivolts, as in a million levels of luminance variation?

Torger quote:
Quote
When you've measured the RGB filters like Trantor, you don't need to shoot test targets; you can feed the camera model with "virtual test patches", including single wavelengths, so you can trace the spectral locus for example

How can you measure a sensor's RGGB filter? Do you pull it off the sensor, subject it to transmissive light, and record what the CIE Lab numbers off the spectro indicate? What are those Lab numbers with the RGGB filter backlit at full saturation, vs. half luminance, down to just barely visible luminance?

A 2D spectral locus doesn't include luminance variation, the point I keep making about the pixel-site voltage variances. As you can see from the rose example, the luminance interpretation at the pixel site missed the mark and required a compression curve to correct for it. A 3D model will have to be employed to effectively and consistently characterize a sensor's response to real-world lit objects that subject the sensor to near full saturation.
« Last Edit: March 13, 2015, 02:32:57 pm by Tim Lookingbill »
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #93 on: March 16, 2015, 03:41:32 am »

So, anyone want to answer, or take a guess at, how many photons each pixel site of a sensor can contain between zero charge and max charge, to present a specific but variable charge so the A/D converter can assign a luminance number to define 1's & 0's?

How many variants of charge per pixel translate into a voltage reading that can be assigned a color? 255 levels, as in digital language? Millivolts, as in a million levels of luminance variation?

Torger quote:
How can you measure a sensor's RGGB filter? Do you pull it off the sensor, subject it to transmissive light, and record what the CIE Lab numbers off the spectro indicate? What are those Lab numbers with the RGGB filter backlit at full saturation, vs. half luminance, down to just barely visible luminance?

A 2D spectral locus doesn't include luminance variation, the point I keep making about the pixel-site voltage variances. As you can see from the rose example, the luminance interpretation at the pixel site missed the mark and required a compression curve to correct for it. A 3D model will have to be employed to effectively and consistently characterize a sensor's response to real-world lit objects that subject the sensor to near full saturation.

There is a commercial product called "camSpecs" which can be used to measure the filters, but you can also build your own setup using a stable full-spectrum lamp (a halogen lamp), a monochromator, the camera, and some cloth and duct tape for shading :). With the monochromator you can create a single-wavelength light, so you step through from 380 to 730nm at 5nm intervals, for example, and shoot one frame for each. Then you use RawDigger manually, or preferably write your own custom software to parse the 90 or so files, to get a raw value readout for each band and color, averaged over a number of pixels to reduce noise. That will give you curves; note that you need to scale them for the lamp's spectrum (which you can measure with a spectrometer). Here's one setup described, a bit more complex and automated:

https://spectralestimation.wordpress.com/data/

Due to the linear behavior of the camera's pixels (just linear photon counters) you don't need to vary luminance in the measurement.
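The per-band readout-and-scaling loop could be sketched like this; `load_frame` is a hypothetical stand-in for whatever decodes each raw file (in practice something like LibRaw/rawpy), and an RGGB layout with black level already known is assumed:

```python
import numpy as np

wavelengths = np.arange(380, 731, 5)  # one captured frame per 5 nm step

def sensitivity_curves(load_frame, lamp_spectrum, black_level=0.0):
    """Average a patch of each CFA plane per wavelength, then normalize by
    the lamp's measured spectrum to get relative filter responses.
    `load_frame(wl)` must return the raw Bayer mosaic crop for that band."""
    curves = []
    for wl in wavelengths:
        m = load_frame(wl) - black_level        # black-level subtraction
        r = m[0::2, 0::2].mean()                # R plane
        g = (m[0::2, 1::2].mean() + m[1::2, 0::2].mean()) / 2.0  # G1+G2
        b = m[1::2, 1::2].mean()                # B plane
        curves.append([r, g, b])
    curves = np.array(curves) / lamp_spectrum[:, None]  # lamp correction
    return curves / curves.max()                # relative response, peak = 1
```

The camera's linearity is what lets a single exposure per band suffice; only the lamp spectrum correction and black level need separate measurement.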

Concerning how many photons are counted per pixel, it varies between sensors; the number is called "full well capacity". Those who work with astrophotography are often interested in this number, so there are measurements out there. It can be 40000 or so per pixel; larger pixels usually mean larger full well capacity. Regarding how many unique steps you can get from it, I think DxOMark's "tonal range" measurement is a good indication.

When you have measured the filter response you can then calculate which signal (raw value) the camera will produce for any type of spectrum; that is, you can then perform the profiling and generate a profile mathematically, like Trantor has done.

I'm not sure if Trantor has measured the cameras or if he has used any public measurement data. The link above has measurement data for the Nikon D5100 for example so anyone can try to make a profile from that.

There will be some measurement errors when obtaining the curves, of course. I think that for reproduction photography the best method will be the traditional test-target method (likely a smaller measurement error for the targeted colors), but when making a profile for generic use it can be an advantage to know the camera's behavior also for saturated colors that cannot be reproduced with a test target. The two methods could be combined, of course. But as said, there is no software available to consumers to do this; vendors have their own custom software, and researchers their own (often Matlab scripts).
« Last Edit: March 16, 2015, 05:24:21 am by torger »
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #94 on: March 16, 2015, 08:30:03 am »

the point I keep making about the pixel-site voltage variances. As you can see from the rose example, the luminance interpretation at the pixel site missed the mark and required a compression curve to correct for it. A 3D model will have to be employed to effectively and consistently characterize a sensor's response to real-world lit objects that subject the sensor to near full saturation.

I think you're mixing up the gamut limits of screen and print with recording correct color. Even if the camera+profile is able to record a correct XYZ coordinate, it may be outside the screen's gamut, and then the color will be clipped or compressed in some way, which is what happens in the red rose example. That's a separate problem, which Trantor has chosen to solve with a special gamut-map profile, but you can also solve it with manual compression by reducing the saturation or value of the problematic color. I personally prefer to do it manually, and like the camera+profile to capture coordinates as colorimetrically correct as the hardware+LUT profile allows.
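The manual compression mentioned here amounts to pulling an out-of-gamut color toward neutral until it fits. A toy version in linear RGB (real gamut mapping works in a perceptual space with smooth compression curves, not this naive blend):

```python
import numpy as np

def desaturate_into_gamut(rgb, steps=100):
    """Blend a linear-RGB color toward its own mean (a crude gray/luminance
    stand-in) just far enough that all channels land in [0, 1].
    A toy gamut map for illustration only."""
    rgb = np.asarray(rgb, dtype=float)
    gray = np.full(3, rgb.mean())
    for t in np.linspace(0.0, 1.0, steps + 1):
        candidate = (1 - t) * rgb + t * gray
        if candidate.min() >= 0.0 and candidate.max() <= 1.0:
            return candidate  # smallest desaturation that fits the gamut
    return gray

print(desaturate_into_gamut([1.3, 0.2, -0.1]))  # pulled inside [0, 1]
print(desaturate_into_gamut([0.5, 0.4, 0.3]))   # already in gamut: unchanged
```

An in-gamut color passes through untouched (the t = 0 candidate already fits), which is the behavior you want from any gamut map.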
Bart_van_der_Wolf

  • Sr. Member
  • Posts: 8913
Re: State of the art camera profiling software?
« Reply #95 on: March 16, 2015, 08:33:11 am »

So anyone want to answer or take a guess at how many photons each pixel site of a sensor can contain between zero charge to max charge to present a specific but variable charge so the A/D converter can assign a luminance number to define 1's & 0's?

Tim, it's irrelevant, because we mortals can only access the data that comes out of the ADC and gets written to a Raw file. But if you insist: the sensor well can hold more electrons than the ADC will use. From what I've read, something like 70% of the true full well is used by the ADC, because above that level the sensor response gets too non-linear.

Quote
How many variants of charge per pixel translates into a voltage reading that can be assigned a color? 255 levels as in digital language? Millivolts as in million levels of luminance variation?

Depends on the camera, and on the ISO setting, which may influence the ADC gain that is used. A 14-bit ADC can typically output some 16384 levels of intensity per color plane, although some 1024 may be subtracted for read noise, leaving some 15360 individual levels. At unity-gain settings this translates to 1 digital number per converted photon; at lower ISO settings that would be e.g. 4 photons per DN at ISO 100, or thereabouts. All this is at the linear-gamma Raw level we need for color calculations. Final gamma pre-compensation for the output modality will reduce the remaining integer levels if we do not stay in a floating-point number representation.

The reason for your asking escapes me a bit, because we have to deal with what the sensor electronics reveal to us, and there may be other mechanisms involved, like white-balance pre-compensation (typically multiplying the Red and Blue color readouts by a factor before writing to Raw), and/or (lossy) compression, and/or non-linear tone curves. Anyway, the short story is that we could get some 13.9 b/ch of precision from a Raw file (after managing photon shot noise, read noise, dark current, pixel response non-uniformity, and pattern noise), in other words some 3.6x10^12 possible coordinate positions, of which only a part is utilized for humanly discernible differences, AKA colors.
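Bart's arithmetic is easy to verify:

```python
import math

# Restating the numbers above: a 14-bit ADC gives 2**14 levels per color
# plane; subtracting ~1024 for read noise leaves ~15360 usable levels, and
# three planes give the coordinate count quoted ("some 3.6x10^12").
adc_levels = 2 ** 14                 # 16384
usable = adc_levels - 1024           # 15360
coordinates = usable ** 3            # possible R,G,B coordinate positions
bits = math.log2(usable)             # effective bits per channel

print(coordinates)                   # 3623878656000, i.e. ~3.6e12
print(round(bits, 1))                # 13.9 b/ch
```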

Cheers,
Bart
« Last Edit: March 16, 2015, 10:39:37 am by BartvanderWolf »
== If you do what you did, you'll get what you got. ==

AlterEgo

  • Sr. Member
  • Posts: 1995
Re: State of the art camera profiling software?
« Reply #96 on: March 16, 2015, 10:24:16 am »

Tim, it's irrelevant, because we mortals can only access the data that comes out of the ADC and gets written to a Raw file.
You need to change that to just "written to a Raw file", because mortals can't do anything about what the firmware does in between... yes, there are people who (can) write firmware for cameras, but I don't see any of them here.
Bart_van_der_Wolf

  • Sr. Member
  • Posts: 8913
Re: State of the art camera profiling software?
« Reply #97 on: March 16, 2015, 10:35:43 am »

You need to change that to just "written to a Raw file", because mortals can't do anything about what the firmware does in between... yes, there are people who (can) write firmware for cameras, but I don't see any of them here.

Hi,

I'm quite sure there are mortals who can intercept the data before it gets written to Raw (e.g. the Magic Lantern crew), but most of us simpler mortals indeed can't... ;).

Cheers,
Bart
torger

  • Sr. Member
  • Posts: 3267
Re: State of the art camera profiling software?
« Reply #98 on: March 16, 2015, 10:41:45 am »

You need to change that to just "written to a Raw file", because mortals can't do anything about what the firmware does in between... yes, there are people who (can) write firmware for cameras, but I don't see any of them here.

Firmware doesn't do much in most cameras in terms of changing the ADC readout, but there are exceptions (often followed by loud criticism from users, as users tend to dislike "cooked" raws). In terms of camera profiling we don't need to worry about it, though. We can safely treat the camera as a device with linear pixels whose data we get in the raw file. We may need to perform black-level subtraction before we can make filter response measurements.
AlterEgo

  • Sr. Member
  • Posts: 1995
Re: State of the art camera profiling software?
« Reply #99 on: March 16, 2015, 10:57:46 am »

Hi,

I'm quite sure there are mortals who can intercept the data before it gets written to Raw (e.g. the Magic Lantern crew), but most of us simpler mortals indeed can't... ;).

Cheers,
Bart

True, but then who knows how many layers of firmware Canon cameras have... not being an expert in ML at all, I might still suggest that ML runs in an outer layer while Canon still has an inner layer, no? Like, for example, in an OS there might be kernel-level/mode drivers and non-kernel-level/mode (user space) drivers.