
Author Topic: Camera profiling target evaluation  (Read 9967 times)

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Camera profiling target evaluation
« on: May 24, 2015, 04:25:15 pm »

Using DCamProf (http://www.ludd.ltu.se/~torger/dcamprof.html) I've virtually evaluated a number of real camera profiling targets via spectral simulation. It works like this: given measured spectral reflectance of the target patches, the spectral sensitivity functions of the camera, and the light source, the whole profiling process can be completed without a single shot taken. The generated profiles can then be evaluated against real spectra to see how well they match.
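The calculation behind a virtual target shot can be sketched in a few lines (a minimal illustration with made-up toy spectra, not DCamProf's actual code): the raw RGB response is just a weighted sum over wavelength of reflectance, illuminant and sensitivity.

```python
# Sketch of how a virtual target shot works: given the camera's spectral
# sensitivity functions (SSFs), the illuminant spectrum and a patch's
# spectral reflectance, the raw RGB response is a weighted sum over
# wavelength -- no photograph needed.

def camera_rgb(reflectance, illuminant, ssf_r, ssf_g, ssf_b):
    """All inputs are lists sampled at the same wavelengths."""
    def integrate(ssf):
        return sum(p * i * s for p, i, s in zip(reflectance, illuminant, ssf))
    return (integrate(ssf_r), integrate(ssf_g), integrate(ssf_b))

# Toy 5-sample spectra (roughly 400-700 nm), purely for illustration:
illuminant = [0.8, 1.0, 1.0, 0.9, 0.8]   # vaguely daylight-shaped
patch      = [0.1, 0.2, 0.6, 0.8, 0.7]   # a warm-toned patch
ssf_r = [0.0, 0.0, 0.1, 0.8, 0.6]
ssf_g = [0.1, 0.6, 0.8, 0.2, 0.0]
ssf_b = [0.8, 0.5, 0.1, 0.0, 0.0]

r, g, b = camera_rgb(patch, illuminant, ssf_r, ssf_g, ssf_b)
```

Doing this for every patch of a target gives the same RGB values a real shot of the target would give under that light, assuming the SSF measurement is good.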

The following targets have been evaluated:

* cc24: X-Rite Colorchecker 24
* ccsg: X-Rite Colorchecker Digital SG
* it8: Reflective IT-8 on photo paper by Wolf Faust http://www.targets.coloraid.de/
* qpc202: QP-Card 202
* pixma: homemade target printed with a Canon Pixma Pro-1 pigment inkjet on OBA-free semi-gloss baryta paper

The following real spectral data has been used for evaluating performance:
* nordic-nature: leaves, flowers and other natural colors from nature in Sweden/Finland
* lippmann2000: skin, hair, lips

Cameras used: Canon 5Dmk2 and Nikon D3x
Reference light: D50 (ie 5000K daylight)

Cameras have limited color matching ability, so for reference I've generated a profile using lippmann2000 + Munsell colors to fill out, and the same for nordic-nature. These serve as a baseline reference showing a limit of how good it can get. For both cameras I get below 1.0 DE at p90, with a max below 2 DE for nature and a max below 1 for humans, ie a very good match for an ideally designed profile. The human color set is somewhat simpler to match as it does not contain any high saturation colors, while the nature set has some saturated flower colors in it.

Comments about the target spectra:

The cc24/ccsg patches have smooth spectra without any obvious colorant limitations. The it8, while a photographic target, has quite smooth and varied spectra and covers a large gamut; the spectral variation looks better than I thought a photographic target would manage. The qpc202 spectra are smooth but more similar to an inkjet print. The pixma print has the largest gamut but also some limitations in spectral variation due to the limited number of colorants, though most likely considerably better than an ordinary CMYK print as there are more inks.

On to matching performance:

Both cameras show very similar results after profiling (which is expected), so to shorten the presentation I've only used numbers from the 5D mark II. The matching is with a 2.5D LUT. The numbers are average, median, 90th percentile and max (ie worst); DE values are CIEDE2000, and LCh is the error split into Luminance, Chroma (=saturation) and Hue. Generally speaking, hue error is the worst and luminance error the least.
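The four summary statistics reported per target can be computed like this (a plain-Python sketch; the DE values in the example are invented, not taken from the tables below):

```python
# avg / mdn / p90 / max of a list of per-patch DE values, matching the
# four rows reported per target in the tables below.
import statistics

def summarize(de_values):
    s = sorted(de_values)
    # simple nearest-rank style 90th percentile
    p90 = s[min(len(s) - 1, int(round(0.9 * (len(s) - 1))))]
    return {
        "avg": statistics.mean(s),
        "mdn": statistics.median(s),
        "p90": p90,
        "max": s[-1],
    }

stats = summarize([0.4, 0.6, 0.7, 0.9, 1.2, 3.5])
```

Note how one bad outlier (3.5 here) dominates the max while barely moving the median, which is why both numbers are worth reporting.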

NORDIC-NATURE matching:


cc24
avg DE 0.75, DE LCh 0.36 0.39 0.41
mdn DE 0.68, DE LCh 0.34 0.35 0.27
p90 DE 1.27, DE LCh 0.69 0.74 0.92
max DE 3.59, DE LCh 1.58 1.85 3.28

ccsg
avg DE 0.64, DE LCh 0.25 0.32 0.40
mdn DE 0.51, DE LCh 0.19 0.28 0.27
p90 DE 1.26, DE LCh 0.51 0.63 1.03
max DE 2.95, DE LCh 1.99 1.68 2.21

it8
avg DE 0.91, DE LCh 0.35 0.51 0.54
mdn DE 0.86, DE LCh 0.31 0.47 0.33
p90 DE 1.59, DE LCh 0.75 1.00 1.20
max DE 4.32, DE LCh 1.85 2.44 3.43

qpc202
avg DE 0.71, DE LCh 0.29 0.32 0.46
mdn DE 0.62, DE LCh 0.23 0.25 0.35
p90 DE 1.23, DE LCh 0.63 0.70 1.00
max DE 3.72, DE LCh 1.72 1.49 3.35

pixma
avg DE 1.22, DE LCh 0.54 0.79 0.53
mdn DE 1.16, DE LCh 0.53 0.72 0.36
p90 DE 2.10, DE LCh 1.05 1.43 1.24
max DE 5.51, DE LCh 1.87 3.63 3.95


Here we see that the pixma and it8 targets have some problems compared to the others, but not by much, and looking deeper into the stats we see that it's the highly saturated colors; both have trouble with a specific light vivid orange patch.

All the others, including the seemingly simplistic cc24, perform at an equal level.

LIPPMANN2000 (skin/hair/lips) matching:


cc24
avg DE 0.52, DE LCh 0.10 0.22 0.41
mdn DE 0.53, DE LCh 0.07 0.19 0.41
p90 DE 0.77, DE LCh 0.21 0.44 0.73
max DE 1.25, DE LCh 0.60 0.76 1.09

ccsg
avg DE 0.71, DE LCh 0.28 0.24 0.58
mdn DE 0.77, DE LCh 0.28 0.20 0.63
p90 DE 0.97, DE LCh 0.49 0.43 0.83
max DE 1.26, DE LCh 0.84 0.74 1.11

it8
avg DE 0.58, DE LCh 0.18 0.19 0.48
mdn DE 0.61, DE LCh 0.16 0.16 0.52
p90 DE 0.83, DE LCh 0.37 0.40 0.72
max DE 1.19, DE LCh 0.76 0.61 1.09

qpc202
avg DE 1.11, DE LCh 0.40 0.18 1.00
mdn DE 1.27, DE LCh 0.43 0.17 1.18
p90 DE 1.66, DE LCh 0.70 0.34 1.48
max DE 2.01, DE LCh 1.02 0.52 1.75

pixma
avg DE 0.97, DE LCh 0.17 0.55 0.74
mdn DE 1.10, DE LCh 0.12 0.55 0.82
p90 DE 1.22, DE LCh 0.40 0.84 1.04
max DE 1.50, DE LCh 0.91 1.03 1.35


Matching the human set, all targets do well. The ColorChecker SG has many "skintone" patches (foundation-tone patches I suppose) but it still doesn't perform better than the cc24, and really, when it's already below 1.0 you can't make any real improvement. As long as the target is able to orient the camera into the skintone zone, the relative color differences within it are handled well.

From this it looks like a cc24 is all you need, but aren't the supersaturated colors of the pixma target good for something? I haven't yet investigated thoroughly due to lack of spectral data, but I do suspect that it adds some stability when dealing with subjects with super-saturated colors, for example what I see in running apparel when I shoot running competitions.

To give some indication I generated artificial spectra along the Pointer gamut border, that is, a set of very saturated colors with smooth spectra:

POINTER BORDER GENERATED matching:


cc24
avg DE 2.30, DE LCh 1.53 0.84 1.25
mdn DE 2.26, DE LCh 1.33 0.59 1.36
p90 DE 4.20, DE LCh 3.51 1.97 2.16
max DE 5.34, DE LCh 4.33 2.35 2.86

ccsg
avg DE 1.94, DE LCh 1.13 0.66 1.16
mdn DE 1.84, DE LCh 0.88 0.44 1.17
p90 DE 3.17, DE LCh 2.63 1.64 2.05
max DE 4.30, DE LCh 4.25 2.24 2.65

it8
avg DE 1.87, DE LCh 0.82 0.64 1.32
mdn DE 1.65, DE LCh 0.76 0.32 1.29
p90 DE 3.17, DE LCh 1.63 1.72 2.53
max DE 6.52, DE LCh 3.00 2.68 5.79

qpc202
avg DE 2.01, DE LCh 1.46 0.58 1.05
mdn DE 1.59, DE LCh 0.99 0.38 0.86
p90 DE 4.34, DE LCh 3.80 1.39 1.91
max DE 5.57, DE LCh 4.60 2.17 2.74

pixma
avg DE 1.70, DE LCh 0.90 0.78 0.88
mdn DE 1.20, DE LCh 0.87 0.22 0.54
p90 DE 4.25, DE LCh 1.50 2.72 2.63
max DE 7.12, DE LCh 1.97 5.75 4.20

combo cc24 + pixma
avg DE 1.26, DE LCh 0.75 0.39 0.70
mdn DE 1.12, DE LCh 0.77 0.18 0.46
p90 DE 2.22, DE LCh 1.52 1.25 1.90
max DE 4.60, DE LCh 1.68 2.56 3.75


Here we see that while the pixma target has a quite bad max (which is not surprising given its colorant limitations), it has the best mean. The last one, "combo", is a special case where the cc24 is the primary target and saturated colors are then filled out with the pixma target. This is possible to do in a regular DCamProf workflow: just make sure you have a stable setup and light, shoot one shot with the cc24 and one with the pixma, then merge the two, letting the cc24 have priority (similar colors from the pixma are then excluded, but the supersaturated colors the cc24 lacks are included). A profile generated this way performs like the cc24 with some added boost on saturated colors.
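The merge-with-priority step can be sketched roughly like this (a hypothetical helper, not DCamProf's actual merge code; the Lab values and the 5.0 distance threshold are made up for illustration):

```python
# Merge two patch sets, letting the primary target (cc24) win: a
# secondary (pixma) patch is only kept if it is not too close in Lab
# space to any primary patch, so duplicates are excluded while the
# supersaturated colors the primary lacks are included.

def merge_targets(primary, secondary, min_dist=5.0):
    """primary/secondary: lists of (name, (L, a, b)) tuples."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    merged = list(primary)
    for name, lab in secondary:
        if all(dist(lab, plab) >= min_dist for _, plab in primary):
            merged.append((name, lab))
    return merged

cc24 = [("red", (45.0, 60.0, 35.0)), ("neutral5", (50.0, 0.0, 0.0))]
pixma = [("red_dup", (46.0, 59.0, 36.0)),        # near cc24 red -> dropped
         ("vivid_violet", (25.0, 45.0, -60.0))]  # saturated -> kept
combo = merge_targets(cc24, pixma)
```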

I will personally test that combination more; getting the glossy ColorChecker Digital SG seems to be a waste of money if, like me, you already have a printer and a spectrometer. And if you just have a cc24 and make profiles with that, I think these simulations show that you should not really need to worry about the quality of those profiles, except possibly for supersaturated colors; the cc24 is weakest in deep violet.

Note that this evaluation does not evaluate the performance of the bundled software; all profiles have been generated by DCamProf. The most interesting software to test, I think, would be QP-Card's, which is supposed to use some novel profiling technique, patented and all. The QP-Card does not contain skin color patches and such, just patches which are supposed to be good at profiling the camera filters. I'm mildly skeptical, but without trying the software I don't know if it can make a more accurate profile with the qpc202 chart than DCamProf can.
« Last Edit: May 25, 2015, 01:43:50 am by torger »
Logged

hugowolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1001
Re: Camera profiling target evaluation
« Reply #1 on: May 24, 2015, 08:22:03 pm »

Average vs mean?

The word average usually refers to the arithmetic mean. Median would be something different, ie the middle value, or for even-numbered data sets, the average of the two values closest to the middle.

Brian A
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #2 on: May 25, 2015, 01:25:05 am »

Language confusion: mean should be median, the middle value. Median is called "median" in Swedish. All those years I have believed the translation to English is "mean"; thanks for letting me know. Editing the post...

Reading up I got a bit confused about "average" vs "mean", but it seems like "average" is the casual-language name for "arithmetic mean", and the scientific statistical short name for arithmetic mean is just "mean"; that is, "mean" = "average".

I'm a bit unsure though if it's better to use "mean, median, 90th percentile" or "average, median, 90th percentile" in texts like this which are not mainly about statistics, what do you think?
« Last Edit: May 25, 2015, 02:01:09 am by torger »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 8914
Re: Camera profiling target evaluation
« Reply #3 on: May 25, 2015, 11:23:21 am »

I'm a bit unsure though if it's better to use "mean, median, 90th percentile" or "average, median, 90th percentile" in texts like this which are not mainly about statistics, what do you think?

Hi Anders,

Thanks for the report.

FWIW,  I'd say it's best to use "mean, median, 90th percentile", since the resulting numbers are the result of math/stats, and thus avoid potential confusion.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

hugowolf

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1001
Re: Camera profiling target evaluation
« Reply #4 on: May 25, 2015, 01:50:03 pm »

I'm a bit unsure though if it's better to use "mean, median, 90th percentile" or "average, median, 90th percentile" in texts like this which are not mainly about statistics, what do you think?

I don't think it matters either way. But there are different means: arithmetic mean, geometric mean, and harmonic mean, so average may be better. In Microsoft Excel, the arithmetic mean function is called =average().

Brian A
Logged

keithcooper

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 417
    • Northlight Images
Re: Camera profiling target evaluation
« Reply #5 on: May 29, 2015, 08:48:00 am »

Can you comment on the overall relevance and reliability of a simulated set of measurements?

The virtual testing is quickly mentioned at the start - why is this approach to be trusted?

In particular, it seems to rest on measurements of the spectral response of cameras.

How is this done and why is it a valid approach?
What sources of error have you identified?

Just curious... ;-)
I tried following the recent thread on the software, but once it starts going into stuff about compilers I'm lost
« Last Edit: May 29, 2015, 08:54:50 am by keithcooper »
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #6 on: May 29, 2015, 09:35:07 am »

Can you comment on the overall relevance and reliability of a simulated set of measurements?

The virtual testing is quickly mentioned at the start - why is this approach to be trusted?

In particular, it seems to rest on measurements of the spectral response of cameras.

How is this done and why is it a valid approach?
What sources of error have you identified?

Just curious... ;-)
I tried following the recent thread on the software, but once it starts going into stuff about compilers I'm lost

Good questions. Let's start off with the fact that I'm a software engineer, and although I know a thing or two about color science and have been in the academic world some time, it's still a learning experience, so I can sometimes make the wrong assumptions; don't consider me an authority on the subject. I try my best not to say incorrect things though :)

The data is most relevant if you have a Canon 5Dmk2, as it is based on a 5Dmk2 SSF, but having tested other cameras I've noted a strong similarity in results, so I assume the data is relevant for other cameras too.

"Virtual" testing may sound like it's worse than "real" testing, but as long as the SSF is measured well, and I have no reason to doubt that it is (it comes from an academic database of SSFs, http://www.cis.rit.edu/jwgu/research/camspec/db.php), it's actually more reliable than real testing, as any mistakes in measurements are taken out of the equation. However, you of course miss out on the fact that glossy targets with many patches are a lot more likely to suffer measurement errors than a small matte 24 patch target. That is, the virtual test assumes that the real tester makes no measurement mistakes.

What the virtual test actually does: since you know the spectral sensitivity of the camera, the spectrum of the light and the spectra of the patches, you can calculate what RGB values the camera will output. This is a well-established method of profiling, but as it requires high end equipment to get the SSF, normally only manufacturers and researchers use it, so it's not so well-known among photographers in general.

I wouldn't call them sources of error, but possibilities to reach other conclusions are plentiful, which means that the test should only be used as an indication, not a definitive answer. A few of them: only two cameras tested, only one light tested, only one profiling software tested (my own), aggressiveness in profile stretching not investigated. And of course only two sets of reference data were tested against, nordic nature and a set of skin colors, if I remember correctly without foundation (the X-Rite patches are probably made to match foundation skintones, not skin without makeup; they still do naked skin well though).

Testing against real spectra is a more valid approach than testing matching against some artificial target; this is also a well-established method in research circles. To do so you need the SSF though, so it's not so easy for a layman unless you have got the SSF from somewhere. The CIE has even standardized sets of spectra to test against; I haven't got those though as they cost money :)

The general conclusion from experimenting with the software and various targets is that it seems like you can't do that much magic with targets. That's probably the reason the Macbeth cc24 from the 1970s is still the number one target (together with the fact that it's quite difficult to shoot large glossy targets correctly): you don't need many patches to get most colors as accurate as they can be, and the accuracy for a camera is less than the accuracy you can expect in a printing workflow, as the camera has a much tougher task to deal with.

The reason I made these experiments was to check which target I should buy, if any. I ended up getting a large cc24, as my small ColorChecker Passport has such tiny patches that I can't measure them with my bulky spectrometer. I complement that with a high saturation printed target and am currently working with this combination. Will report results later.

(seems like we're past the compiling stuff in the DCamProf thread by the way... at least for now :) )
« Last Edit: May 29, 2015, 09:49:10 am by torger »
Logged

keithcooper

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 417
    • Northlight Images
Re: Camera profiling target evaluation
« Reply #7 on: May 29, 2015, 09:47:10 am »

Thanks for that, the question came up in a discussion elsewhere as to whether X-rite's software with the ColorChecker Passport should support some other (larger) targets, especially since several people had SG Cards, which include patches around the edge to allow checks for uniformity of lighting (as was used with i1Match as I recall).
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #8 on: May 29, 2015, 09:53:59 am »

I've added uniformity-of-lighting correction in DCamProf ("flatfield correction"); the next version will contain a generalized flatfield algorithm so you don't need white patches scattered about, you then just shoot an extra reference shot with a large white card in place of the target.

I don't do any linearity correction though (ie I make no use of grayscale steppings); I'm not 100% sure yet if I should. Linearity errors occur not due to the camera but due to glare on the target. Currently I have assumed that if you have glare on your target it's ruined anyway, so there is no value in trying to correct. But I'm not really sure if it's realistic to shoot glare-free, even with matte targets. I need more experimentation to clarify that.
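The white-card variant of flatfield correction can be sketched like this (an illustrative fragment assuming a simple per-position division, not DCamProf's actual implementation):

```python
# Flatfield correction from a separate white-card shot: the white card
# reveals the light falloff across the frame, and dividing each target
# patch reading by the (normalized) white reading at the same position
# removes the unevenness.

def flatfield(patch_values, white_values):
    """patch_values/white_values: per-patch readings taken at the same
    positions in the target shot and the white-card shot."""
    peak = max(white_values)
    return [p / (w / peak) for p, w in zip(patch_values, white_values)]

# A patch row lit 20% darker toward the right side:
patches = [0.50, 0.45, 0.40]
white   = [1.00, 0.90, 0.80]
corrected = flatfield(patches, white)
```

After correction all three readings come out equal, as they should for identical patches under even light.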
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #9 on: May 29, 2015, 01:52:59 pm »

I've added uniformity of lighting correction in DCamProf ("flatfield correction")
do you need all the information from the .ti1 for dcamprof testchart-ff, or can a stripped down version be used (for the ColorChecker SG)? I mean, you just need to see from the .ti1 which fields are white... As dcamprof make-testchart crashes I can't see what info dcamprof puts there; I assume not the full set like Argyll's targen might generate?
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #10 on: May 29, 2015, 02:13:30 pm »

do you need all the information from the .ti1 for dcamprof testchart-ff, or can a stripped down version be used (for the ColorChecker SG)? I mean, you just need to see from the .ti1 which fields are white... As dcamprof make-testchart crashes I can't see what info dcamprof puts there; I assume not the full set like Argyll's targen might generate?

Add -l, then it won't crash (will fix the crash bug in the next release). A stripped down version of the .ti1 works; it only needs to know where the white patches are, and it searches for RGB 100 100 100 entries. The actual layout comes from the command line parameters.

You can also wait for my next release if you like; then you can do it with a white card too and do flatfield on the TIFF, but then you need to take two shots of course.
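The white-patch search can be illustrated with a minimal CGATS reader (a sketch, not DCamProf's parser; it only handles the parts needed here):

```python
# Minimal CGATS .ti1 reader that only extracts which patches are white
# (RGB 100 100 100), which is all the flatfield step needs to know.

def white_patch_ids(ti1_text):
    ids, in_data = [], False
    for line in ti1_text.splitlines():
        line = line.strip()
        if line == "BEGIN_DATA":
            in_data = True
        elif line == "END_DATA":
            in_data = False
        elif in_data and line:
            fields = line.split()
            # SAMPLE_ID RGB_R RGB_G RGB_B ... : check the RGB triplet
            if fields[1:4] == ["100", "100", "100"]:
                ids.append(fields[0])
    return ids

sample = """BEGIN_DATA
1   100 100 100 96.422 100 82.521
2   0 0 0 0 0 0
3   100 100 100 96.422 100 82.521
END_DATA"""
whites = white_patch_ids(sample)
```

This is why the stripped-down .ti1 posted further down in this thread works: only the white entries need real values.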
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #11 on: May 29, 2015, 02:25:37 pm »

Add -l, then it won't crash (will fix the crash bug in the next release). A stripped down version of the .ti1 works; it only needs to know where the white patches are, and it searches for RGB 100 100 100 entries. The actual layout comes from the command line parameters.

You can also wait for my next release if you like; then you can do it with a white card too and do flatfield on the TIFF, but then you need to take two shots of course.

thank you, but I was trying to experiment with cameras that I don't have, so there will be no flatfield shot available; that's exactly why I need to be able to flatfield the SG by those white patches along the edges...
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #12 on: May 29, 2015, 02:47:03 pm »

thank you, but I was trying to experiment with cameras that I don't have, so there will be no flatfield shot available; that's exactly why I need to be able to flatfield the SG by those white patches along the edges...

Ah... ok. I haven't tried myself with the SG, but it should work.
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #13 on: May 29, 2015, 03:32:46 pm »

here is the .ti1 for FF correction of the SG, for posterity (whites, and the rest are black; that shall work, right?):

Quote
CTI1   

DESCRIPTOR "Argyll Calibration Target chart information for ColorChecker SG"
ORIGINATOR "DCamProf"
CREATED "Fri May 29 14:28:09 2015"

NUMBER_OF_FIELDS 7
BEGIN_DATA_FORMAT
SAMPLE_ID RGB_R RGB_G RGB_B XYZ_X XYZ_Y XYZ_Z
END_DATA_FORMAT

NUMBER_OF_SETS 140
BEGIN_DATA
1   100 100 100 96.422 100 82.521   
2   0 0 0 0 0 0
3   0 0 0 0 0 0
4   100 100 100 96.422 100 82.521   
5   0 0 0 0 0 0
6   0 0 0 0 0 0
7   100 100 100 96.422 100 82.521
8   0 0 0 0 0 0
9   0 0 0 0 0 0
10   100 100 100 96.422 100 82.521   
11   0 0 0 0 0 0
12   0 0 0 0 0 0
13   0 0 0 0 0 0
14   0 0 0 0 0 0
15   0 0 0 0 0 0
16   0 0 0 0 0 0
17   0 0 0 0 0 0
18   0 0 0 0 0 0
19   0 0 0 0 0 0
20   0 0 0 0 0 0
21   0 0 0 0 0 0
22   0 0 0 0 0 0
23   0 0 0 0 0 0
24   0 0 0 0 0 0
25   0 0 0 0 0 0
26   0 0 0 0 0 0
27   0 0 0 0 0 0
28   0 0 0 0 0 0
29   0 0 0 0 0 0
30   0 0 0 0 0 0
31   100 100 100 96.422 100 82.521
32   0 0 0 0 0 0
33   0 0 0 0 0 0
34   0 0 0 0 0 0
35   0 0 0 0 0 0
36   0 0 0 0 0 0
37   0 0 0 0 0 0
38   0 0 0 0 0 0
39   0 0 0 0 0 0
40   100 100 100 96.422 100 82.521
41   0 0 0 0 0 0
42   0 0 0 0 0 0
43   0 0 0 0 0 0
44   0 0 0 0 0 0
45   100 100 100 96.422 100 82.521
46   0 0 0 0 0 0
47   0 0 0 0 0 0
48   0 0 0 0 0 0
49   0 0 0 0 0 0
50   0 0 0 0 0 0
51   0 0 0 0 0 0
52   0 0 0 0 0 0
53   0 0 0 0 0 0
54   0 0 0 0 0 0
55   0 0 0 0 0 0
56   0 0 0 0 0 0
57   0 0 0 0 0 0
58   0 0 0 0 0 0
59   0 0 0 0 0 0
60   0 0 0 0 0 0
61   100 100 100 96.422 100 82.521
62   0 0 0 0 0 0
63   0 0 0 0 0 0
64   0 0 0 0 0 0
65   0 0 0 0 0 0
66   0 0 0 0 0 0
67   0 0 0 0 0 0
68   0 0 0 0 0 0
69   0 0 0 0 0 0
70   100 100 100 96.422 100 82.521
71   0 0 0 0 0 0
72   0 0 0 0 0 0
73   0 0 0 0 0 0
74   0 0 0 0 0 0
75   0 0 0 0 0 0
76   0 0 0 0 0 0
77   0 0 0 0 0 0
78   0 0 0 0 0 0
79   0 0 0 0 0 0
80   0 0 0 0 0 0
81   0 0 0 0 0 0
82   0 0 0 0 0 0
83   0 0 0 0 0 0
84   0 0 0 0 0 0
85   0 0 0 0 0 0
86   0 0 0 0 0 0
87   0 0 0 0 0 0
88   0 0 0 0 0 0
89   0 0 0 0 0 0
90   0 0 0 0 0 0
91   100 100 100 96.422 100 82.521
92   0 0 0 0 0 0
93   0 0 0 0 0 0
94   0 0 0 0 0 0
95   0 0 0 0 0 0
96   0 0 0 0 0 0
97   0 0 0 0 0 0
98   0 0 0 0 0 0
99   0 0 0 0 0 0
100   100 100 100 96.422 100 82.521
101   0 0 0 0 0 0
102   0 0 0 0 0 0
103   0 0 0 0 0 0
104   0 0 0 0 0 0
105   0 0 0 0 0 0
106   0 0 0 0 0 0
107   0 0 0 0 0 0
108   0 0 0 0 0 0
109   0 0 0 0 0 0
110   0 0 0 0 0 0
111   0 0 0 0 0 0
112   0 0 0 0 0 0
113   0 0 0 0 0 0
114   0 0 0 0 0 0
115   0 0 0 0 0 0
116   0 0 0 0 0 0
117   0 0 0 0 0 0
118   0 0 0 0 0 0
119   0 0 0 0 0 0
120   0 0 0 0 0 0
121   0 0 0 0 0 0
122   0 0 0 0 0 0
123   0 0 0 0 0 0
124   0 0 0 0 0 0
125   0 0 0 0 0 0
126   0 0 0 0 0 0
127   0 0 0 0 0 0
128   0 0 0 0 0 0
129   0 0 0 0 0 0
130   0 0 0 0 0 0
131   100 100 100 96.422 100 82.521
132   0 0 0 0 0 0
133   0 0 0 0 0 0
134   100 100 100 96.422 100 82.521
135   0 0 0 0 0 0
136   0 0 0 0 0 0
137   100 100 100 96.422 100 82.521
138   0 0 0 0 0 0
139   0 0 0 0 0 0
140   100 100 100 96.422 100 82.521
END_DATA
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #14 on: May 29, 2015, 03:51:14 pm »

here is a quick test, nothing scientific

I-R raw for the A7: extracted CGATS with RawDigger PE for the CCSG, merged it with my spectral data (which naturally is not the same as the target that I-R has; note that) into a .ti3,
flatfielded that .ti3 with the above .ti1 using dcamprof testchart-ff into a new .ti3, then ran dcamprof make-profile -i D50 on this .ti3 into a json profile and dcamprof make-dcp on that json profile. Then from the same I-R raw I converted a different target, a CC24, using this new .dcp profile, and here is what I have vs the CC24 spectral data from BabelColor (again, not the actual measurements of the CC24 that I-R has):



so we have a good match for all except 2 patches! See the attached PatchTool Compare file



the difference might be just because the spectral data for my CCSG copy is different; I remember that for 1-2 patches it was very different from the spectral data I got with makeinputicc from Iliah Borg... I will try to redo it with that one later.

PS: no, the difference does not seem to be relevant to the issue... so what affected the 2 patches then?
« Last Edit: May 29, 2015, 04:24:07 pm by AlterEgo »
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #15 on: May 29, 2015, 04:45:33 pm »

also, did flatfielding of the SG using the white patches along the edge (and one inside) help? Not much in this specific case: the CCSG in that shot was illuminated sufficiently evenly (as RawDigger shows), and like IS with the camera/lens mounted on a tripod, in some cases it may make things worse or not change anything... I need to find or make an unevenly illuminated SG shot then
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #16 on: May 29, 2015, 04:53:10 pm »

also, did flatfielding of the SG using the white patches along the edge (and one inside) help? Not much in this specific case: the CCSG in that shot was illuminated sufficiently evenly (as RawDigger shows), and like IS with the camera/lens mounted on a tripod, in some cases it may make things worse or not change anything... I need to find or make an unevenly illuminated SG shot then

My own tests show that you need quite uneven light before it makes a significant difference to the end result. If there is just, say, a 10% difference between brightest and darkest it will probably not matter much, but at 50%, sure. The SG doesn't have many white patches inside the target area, but I think it should work quite fine anyway unless the light has some very strange shape.
Logged

torger

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 3267
Re: Camera profiling target evaluation
« Reply #17 on: May 29, 2015, 05:00:35 pm »

The large error on the two patches is, I would guess, due to a mismatch in reference data; those problems usually look like that. If something is "totally wrong" (failed scan etc) then all patches are usually wrong; here, with only two off, it does look like those two patches have bad reference data.

You can try to exclude those two patches (-x <textfile with patch names to exclude, one per line>) when running make-profile and see if it becomes better.
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #18 on: May 29, 2015, 06:14:45 pm »

The large error on the two patches is, I would guess, due to a mismatch in reference data; those problems usually look like that. If something is "totally wrong" (failed scan etc) then all patches are usually wrong; here, with only two off, it does look like those two patches have bad reference data.

You can try to exclude those two patches (-x <textfile with patch names to exclude, one per line>) when running make-profile and see if it becomes better.

I got somewhat better results by excluding, as suggested, the white/grey patches along the border (I had to remember that; it was noted before, but I forgot... need to write that down and put it on the wall). That got the 2 problem patches from dE2K 9.x down to dE2K 6.x.
Logged

AlterEgo

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1995
Re: Camera profiling target evaluation
« Reply #19 on: May 29, 2015, 06:32:14 pm »

or just use a matrix profile - no LUT, no glut  :D



PatchTool Compare file is attached

« Last Edit: May 29, 2015, 06:34:56 pm by AlterEgo »
Logged