
Author Topic: How to model lens imperfections for camera profiling?  (Read 6947 times)

torger

How to model lens imperfections for camera profiling?
« on: May 21, 2015, 09:02:03 am »

Hello camera experts, armchair-bound or not;

I'm developing new technical camera profiling software, already usable at http://www.ludd.ltu.se/~torger/dcamprof.html , and I'm currently looking into improving measurement accuracy by modeling how the lens (or other factors) distorts the photo of the ColorChecker test chart.

I've already made a flatfield algorithm for targets speckled with white patches so one can even out lens vignetting and uneven light (not yet released).

Now I'm considering contrast loss, and possibly other lens factors. So far I've assumed that contrast loss is negligible, that is, that you get a linear reading without contrast loss when you look at the raw file. I've therefore assumed that the grayscale steps found in most test targets are a waste, useful only for calibrating non-linear systems.

If that's true I'm fine with it, but I'm not really sure it is. A quick comparison between camera raw RGB values and XYZ reference values indicates that I get roughly 7% contrast loss with the lens (comparing G against Y); that is, the span between the darkest and the lightest neutral patch is smaller in the camera data than in the spectrometer-measured reference values. There could be some other error, though.
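To be concrete about how I arrive at that figure, the check is roughly this; a small sketch with made-up patch values chosen to land near 7% (the real numbers differ, and how to normalize camera against reference is of course debatable):

Code:
# Placeholder linear readings for the lightest and darkest neutral patches,
# with the camera G channel scaled so the white patches match the reference Y.
ref_Y = {"white": 0.900, "black": 0.0300}
cam_G = {"white": 0.900, "black": 0.0323}

ref_span = ref_Y["white"] / ref_Y["black"]   # lightest/darkest in the reference
cam_span = cam_G["white"] / cam_G["black"]   # same span as the camera sees it

contrast_loss = 1.0 - cam_span / ref_span
print(f"reference span {ref_span:.1f}, camera span {cam_span:.1f}, "
      f"contrast loss {contrast_loss:.1%}")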

Lens vignetting is well understood and documented, but when it comes to contrast I don't really know where to look. MTF charts only show contrast for fine detail, and there are no small details here; this is about global contrast. If I correct for contrast, can I assume that the reproduction between darkest and lightest is linear, and can I assume all three RGB channels are affected equally, or are there nasty non-linear effects to consider?
Logged

eronald

Re: How to model lens imperfections for camera profiling?
« Reply #1 on: May 21, 2015, 10:10:42 am »

Presumably flare is situation- and image-related, and not quite repeatable.

Quote from: torger on May 21, 2015, 09:02:03 am
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

AlterEgo

Re: How to model lens imperfections for camera profiling?
« Reply #2 on: May 21, 2015, 10:39:25 am »

I've already made a flatfield algorithm for targets speckled with white patches so one can even out lens vignetting and uneven light (not yet released).
Speckled as in custom-made targets (where you can spread the white patches more or less evenly across the target), ColorCheckerSG-like (where really you only have the borders), or QPCard-like (where the material between patches can be used)? Or is it actually possible to also use the non-white/grey patches of the measured target for flatfielding, i.e. try to detect a trend in how even the illumination is even though those patches are far from neutral in chromaticity?

The people behind RawDigger were absolutely against any such approach, taking the position that if you go to that much trouble you can do real flatfielding instead: use a proper uniform panel covering the whole area of the target, take a second shot with the same light, camera position and stand position... and then use that second shot's data.
« Last Edit: May 21, 2015, 10:43:32 am by AlterEgo »
Logged

NancyP

Re: How to model lens imperfections for camera profiling?
« Reply #3 on: May 21, 2015, 10:52:34 am »

Torger, consult some astrophotographers with higher-level processing skills than mine. I have done a vignetting profile of the Samyang 14mm for use in the Nebulosity program, but it didn't involve contrast. The vignetting profile was: get a large sheet of thin white plexiglass/perspex/whatever, position the lens so it is pointing straight at the sun in a cloudless sky with no obstructions in the field of view, hold the perspex parallel to the image plane so it fills the field of view, and snap 10 or so frames at each f-stop you may wish to use. Each f-stop set gets averaged and used as a lens correction template.

I did it this way because I was also loading sets of dark frames taken at various ambient temperatures into this program. I am sure there are better ways. I assume contrast correction would involve a different type of target but the same physical procedure, then a different mathematical algorithm (about which I have no clue - that's why you need the experienced astrophotographer).
Logged

eronald

Re: How to model lens imperfections for camera profiling?
« Reply #4 on: May 21, 2015, 11:03:15 am »

Of course, I have very little understanding of color theory, but when people sent me images, the issues they thought were bad profiles often turned out to be in-camera flare. A follow-up thought: any profiling target image, just like any real-world image, will be subject to flare, whereas profiles made directly from abstract data or monochromator work will not be subject to flare error. And flare is very much image dependent.

Edmund
« Last Edit: May 21, 2015, 11:07:15 am by eronald »
Logged
If you appreciate my blog posts help me by following on https://instagram.com/edmundronald

Rainer SLP

Re: How to model lens imperfections for camera profiling?
« Reply #5 on: May 21, 2015, 11:05:22 am »

Quote from: NancyP on May 21, 2015, 10:52:34 am

That is what we call FLAT frames. The described way is correct, except for pointing the lens into the sun: you can use any part of the sky, but you have to take care not to overfill the wells, and that depends on the full-well depth of the chip. If you do not know the values (most DSLR makers do not publish this data, because for this type of photography it is quite irrelevant), just aim for a medium grey frame with a median around 128. It should have a nice bell-shaped histogram.

Let me see if I find some. I am going today to my observatory and can dig in my files.

When doing FLATS, do not move the focus position. In astrophotography, on a telescope, it is normally infinity  ;D

You can also make a FLAT using a white T-shirt over the lens and shooting towards the north (when in the northern hemisphere) or the south (when in the southern hemisphere).
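In code terms the flat-frame procedure boils down to something like this; a minimal numpy sketch, assuming dark-subtracted linear raw planes and a placeholder loader function:

Code:
import numpy as np

def build_master_flat(flat_frames):
    """Average one f-stop's worth of flat frames and normalize to mean 1.0."""
    master = np.mean(np.stack(flat_frames, axis=0), axis=0)
    return master / master.mean()

def apply_flat(image, master_flat):
    """Divide out the vignetting/illumination falloff held in the master flat."""
    return image / master_flat

# Usage sketch (load_raw_plane is a placeholder for your raw loader):
# flats = [load_raw_plane(f"flat_f8_{i}.cr2") for i in range(10)]
# corrected = apply_flat(load_raw_plane("target_f8.cr2"), build_master_flat(flats))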
Logged
Thanks and regards Rainer
 I am here for

Alexey.Danilchenko

Re: How to model lens imperfections for camera profiling?
« Reply #6 on: May 21, 2015, 11:12:33 am »

Quote from: torger on May 21, 2015, 09:02:03 am

Have you taken flare into consideration?
Logged

NancyP

Re: How to model lens imperfections for camera profiling?
« Reply #7 on: May 21, 2015, 01:04:29 pm »

Thanks, Rainer, I had temporarily forgotten the "flats" term. I was shooting in late afternoon with an ultra-wide angle lens (the Samyang 14mm, a nice lens with whopping vignetting wide open), so either dead at or dead away from the sun seemed to be the best option, and since the thin white perspex is shiny I couldn't use the 180-degrees-from-the-sun orientation (reflection and shadow). I did use approximately "infinity" focus on this lens, which has no hard stop, aimed for "18% grey" (0 on the meter), and didn't get clipping.
Logged

torger

Re: How to model lens imperfections for camera profiling?
« Reply #8 on: May 21, 2015, 01:06:00 pm »

Thanks for the replies. Some responses;

Concerning the current flatfield algorithm: speckled whites in custom targets to start with, but it is already possible to use it with the ColorChecker Digital SG. Regardless of what the RawDigger folks think, I consider it a better method than switching in a gray card, as you only need one shot and can thus use it in changing outdoor light. Looking at the difference it makes on a typical well-exposed shot in overcast outdoor light, I'd say it doesn't make much of a real difference; corrections are around 3-4%, and you won't really see a difference in the resulting profile. I haven't tried to push it with really uneven light, though.

Flatfield correction with a white plexi I'm very familiar with, as I shoot with an MFD tech cam (they have color cast issues with wide-angle lenses); I don't think it would add anything beyond the flatfield on speckled whites I do now.

The thing is that I'm seeing unexplained issues with some test charts I'm making: the camera seems to register too low saturation for the colors. When I do a simulation with the same target using the camera's spectral sensitivity functions it works out fine, so I'm thinking I could have some issue with contrast. Flare sounds like a plausible theory, but I don't know how to model it. A simple linear reduction of contrast, measured on black vs white patches? If flare is so badly non-linear that it really can't be modeled, we'll have to live with it.
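If it is linear, the correction I have in mind is about this small; just a sketch of the idea, nothing implemented yet, and the patch values are placeholders:

Code:
import numpy as np

def fit_linear_flare(cam_white, cam_black, ref_white, ref_black):
    """Fit cam = gain * ref + offset from the white and black neutral patches."""
    gain = (cam_white - cam_black) / (ref_white - ref_black)
    offset = cam_black - gain * ref_black
    return gain, offset

def undo_linear_flare(cam_values, gain, offset):
    """Map camera patch readings back onto the reference contrast range."""
    return (np.asarray(cam_values) - offset) / gain

# gain, offset = fit_linear_flare(cam_white=0.90, cam_black=0.032,
#                                 ref_white=0.90, ref_black=0.030)
# corrected = undo_linear_flare([0.90, 0.45, 0.032], gain, offset)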

I've attached how the test shot looks (not a very flare-prone shooting condition, right?), here in a linear rendering with the SSF-generated profile applied (which produces a correct result). The test chart is simply printed on semi-gloss paper with a pigment inkjet; the spectral spread is not super good but not too bad either. I'm investigating the performance of these custom charts: in simulations they do fine, as said, but I get problems in practice. I'll also make a matte chart and see if it works better with less saturated colors. (The chart has only maximally saturated light colors, i.e. no brown, as that is just dark orange; that is also a test I'm doing, since with a 2.5D LUT dark patches should really not be needed.) And yes, the paper is OBA-free.

Most likely a lower-saturation target will work better; it seems that highly saturated colors make the errors worse, which is quite natural, since the more saturated the colors, the less well linear combinations of the camera raw channels work. I do not yet know, however, whether high-saturation patches on a target are impossible due to measurement limitations, or whether they can be made to work. Of course there's also a risk that I've made some bug, but I don't think so, as the profiling works fine if I feed it the same target spectra and the camera's SSF.
« Last Edit: May 21, 2015, 01:13:40 pm by torger »
Logged

torger

Re: How to model lens imperfections for camera profiling?
« Reply #9 on: May 22, 2015, 02:10:54 am »

I've done further testing, and a matte target works much better. I've also looked more closely at the simulations, and there are indeed some problems with matching high-saturation colors, but they are typically smaller than in real shots.

My current theory is that high-saturation colors are difficult because of 1) larger relative errors in the spectral reflectance measurements, due to the limited range of the spectrometer (mine only does 420-730 nm), 2) larger impact of camera measurement errors due to low signal in one or two channels, 3) colors that are inherently harder for the camera to match, and 4) the difficulty of avoiding gloss glare effects in the shot.

That is, I think modeling flare wouldn't help that much, as it's probably a small error compared to the other challenges. I think I will add a contrast-matching adjustment to the flatfield algorithm, though. With inkjet-printed charts you have a good white and a good black, but the grays don't have flat enough spectra (not from the Canon at least, despite multiple gray inks), so I won't support curve matching to start with.
Logged

Bart_van_der_Wolf

Re: How to model lens imperfections for camera profiling?
« Reply #10 on: May 22, 2015, 10:23:09 am »

Quote from: torger on May 22, 2015, 02:10:54 am

Hi Anders,

My experience with shooting test charts (mostly resolution tests, not as many color profiling ones, due to insufficient evaluation tools until now) is that reflective targets do cause false readings: the darker the tone, the larger the added percentage of ambient and specular reflection. So it is also important to make sure that everything within the angle of view of the reflection is neutral in tone or very dark.

Because a semi-gloss or diffuse target surface picks up reflections from a wider, more diffuse angle, the problem of surface reflection is still there; it is just less specular in reflecting the camera and its surroundings. It therefore affects the calibration less through illumination error, but reflected color from around the camera is added more evenly. I don't think the saturation of the patches plays a role in that, unless the luminance component is low. Glossy black is a good reflector of anything bright surrounding the camera position (including a photographer with a colorful shirt).

In that sense it's easier to shoot a uniform surface, like an LCC, to nail the lens shading/vignetting and the illumination irregularities, especially when one has two shots with the camera rotated 180 degrees around its optical axis (as Iliah Borg suggests, to keep the lighting stable and isolate the camera's influence). Averaging two (or more) frames also reduces noise.
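My reading of that two-shot idea, as a minimal sketch (lcc_a is the normal LCC frame, lcc_b the one taken with the camera rotated 180 degrees around its optical axis, both dark-subtracted linear planes):

Code:
import numpy as np

def combine_rotated_lcc(lcc_a, lcc_b):
    """Average the normal and the camera-rotated LCC exposure.

    The lens/sensor shading stays fixed in sensor coordinates, while any
    left/right or top/bottom imbalance of the light flips between the two
    frames, so the asymmetric illumination component largely cancels and
    the averaging also reduces shot noise.
    """
    return 0.5 * (lcc_a + lcc_b)

def flatfield(image, lcc):
    """Divide the target shot by the normalized LCC frame."""
    return image * (lcc.mean() / lcc)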

Quote
That is I think modeling flare wouldn't help that much, as it's probably a small error compared to the other challenges. I think I will make a contrast matching adjustment in the flatfield algorithm though. With inkjet printed charts you have a good white and a good black, but grays don't have flat enough spectra (not from the Canon at least despite multiple grays) so I won't make support for curve matching to start with.

I thought the effect of veiling glare on acquiring a target shot was an interesting subject to investigate, but a specific model for veiling glare is very much lens-design dependent. The glare is most prominent in the direct vicinity of the bright source (thus affecting nearby darker target tiles more than the others), and then drops off very fast to a generic increase of the signal which (as a percentage) affects shadows more than lighter tones. So the simplest model would just add a fixed amount of glare to all patches in linear gamma, maybe with a sharper increase near bright patches (or the arm rests in your test image!). IOW, a deconvolution kernel that is heavily spiked towards the center, with long tails.
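Purely as an illustration of that shape, a forward model could look like the sketch below; the kernel and the veiling fraction are made-up numbers, not measurements, and correcting a shot would mean inverting/deconvolving it rather than applying it:

Code:
import numpy as np
from scipy.signal import fftconvolve

def add_model_glare(img, veil=0.01, tail_strength=0.02, radius=64):
    """Toy forward model: uniform veiling glare plus a long-tailed,
    center-spiked glare spread function applied to a linear image."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    gsf = 1.0 / (1.0 + x * x + y * y)   # spiked at the center, ~1/r^2 tails
    gsf /= gsf.sum()
    local = fftconvolve(img, gsf, mode="same")
    global_veil = veil * img.mean()     # flat offset over the whole frame
    return (1.0 - tail_strength) * img + tail_strength * local + global_veil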

Cheers,
Bart

P.S. This paper describes a generic model for veiling glare in section 4.2. It is not what they propose in the end (they use an occlusion mask), but that section may help to separate localized (spatially variant) effects from generic (full-image) solutions. Deconvolution with the generic Glare Spread Function (GSF) may still be useful.
« Last Edit: May 22, 2015, 10:42:17 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

torger

Re: How to model lens imperfections for camera profiling?
« Reply #11 on: May 22, 2015, 10:40:30 am »

Thanks for the input.

After some thinking, my current theory is that any lens contrast error is negligible, i.e. in this type of shooting condition the response will be almost 100% linear. From my Lumariver HDR coding I remember that halving the exposure almost exactly halves the raw values, so I find it unlikely that lens non-linearity is a significant source of error; I need to search elsewhere.

Vignetting and light non-uniformity are easily seen, but I find it hard to believe you would gain anything significant from a multi-shot method compared to a thin plate spline correction using the spread-out white patches as handles, as I do.
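For the curious, the white-patch handle idea is essentially this; a rough Python sketch of the approach rather than the actual implementation, with placeholder patch coordinates and levels:

Code:
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_gain_map(white_xy, white_levels, height, width):
    """Fit a thin plate spline through the white patch levels and return a
    per-pixel gain map that evens out vignetting and uneven light."""
    spline = RBFInterpolator(np.asarray(white_xy, float),
                             np.asarray(white_levels, float),
                             kernel="thin_plate_spline")
    yy, xx = np.mgrid[0:height, 0:width]
    pts = np.column_stack([xx.ravel(), yy.ravel()]).astype(float)
    surface = spline(pts).reshape(height, width)
    return surface.max() / surface   # gain that lifts every point to the peak

# white_xy = [(x1, y1), (x2, y2), ...]   patch centers, same (x, y) order as pts
# corrected = raw_plane * tps_gain_map(white_xy, white_levels, *raw_plane.shape)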

I always get higher contrast from the ColorMunki spectrometer measurements, but I think it's quite imprecise at measuring dark colors; or rather, the noise is roughly constant regardless of reflectance, so the lower the reflectance the larger the relative error. Calibrating against a black/white pair or a grayscale measured with the spectrometer therefore becomes meaningless. Even with 100% perfect measurements (as I can have in simulations) there will be problems, as the spectrum is not exactly flat; on the black it easily varies 15% depending on where on the curve one looks. I don't want to end up "correcting" a very small error using sources of relatively large error and just make things worse.
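To put a number on that, with an assumed (made-up) constant noise floor the relative error grows fast towards black:

Code:
sigma = 0.005                        # assumed constant measurement noise
for R in (0.90, 0.20, 0.03):         # white, mid gray, glossy black (roughly)
    print(f"reflectance {R:.2f}: relative error ~ {sigma / R:.1%}")
# 0.90 -> ~0.6%,  0.20 -> ~2.5%,  0.03 -> ~16.7%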

Still, the difference between spectrometer and camera is larger than I expect, and various glare effects from the glossy target could be the answer; thanks for the description. I'll try a more careful setup and see how the results improve.

I do suspect, though, that the end result may be that glossy targets aren't worth it, adding more measurement error than the corrections they contribute. A matte target already has a decent gamut, and supersaturated colors are in any case difficult for the camera to match, so a robust profile would compromise a bit on them anyway.

A challenge when building these types of systems is estimating the size and importance of the various error sources. I've seen many people reduce a 2% error to 0.05% with some elaborate technique while skipping over a 15% error. I currently think that lens non-linearity is one of those 2% errors, so I should probably not look into it further and instead try to figure out what the larger error sources are.
« Last Edit: May 22, 2015, 10:46:23 am by torger »
Logged

Bart_van_der_Wolf

Re: How to model lens imperfections for camera profiling?
« Reply #12 on: May 22, 2015, 10:46:21 am »

Still the difference between spectrometer and camera is larger than I expect, and various glare effects of the glossy target could be the answer, thanks for the description. I'll try to make a more careful setup and see how results improve.

I'd first try to quantify the effect of noise on the accuracy of the dark patches by averaging multiple shots. If that can be eliminated as a source of the issue, it becomes easier to focus on something else. In this statement from Iliah Borg he also refers to the veiling glare influence as significant, so I'm not sure which is the larger culprit.
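That check is only a few lines; a sketch where read_patch_means is a placeholder for whatever extracts per-patch raw means from a frame:

Code:
import numpy as np

def patch_noise_report(per_shot_patch_means):
    """per_shot_patch_means: shape (n_shots, n_patches), linear raw means.
    Returns the mean level and the shot-to-shot relative scatter per patch."""
    data = np.asarray(per_shot_patch_means, float)
    mean = data.mean(axis=0)
    rel_std = data.std(axis=0, ddof=1) / mean
    return mean, rel_std   # patches with high rel_std are noise-limited

# means, rel = patch_noise_report([read_patch_means(f) for f in frames])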

Cheers,
Bart
« Last Edit: May 22, 2015, 10:52:28 am by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

torger

Re: How to model lens imperfections for camera profiling?
« Reply #13 on: May 22, 2015, 11:05:33 am »

The range between the darkest and brightest patches on a glossy target is less than 4 stops, and we're not shooting backlit, so I don't think glare should be significant here, and with ETTR the noise level from the camera is relatively low. The spectrometer's precision is less well documented, though, and that mine cuts off at 420 nm is a bit of a problem. I shall try to borrow a higher-end instrument.

I'll do some experiments.
« Last Edit: May 22, 2015, 11:07:44 am by torger »
Logged

torger

Re: How to model lens imperfections for camera profiling?
« Reply #14 on: May 22, 2015, 12:15:01 pm »

Okay, it's definitely some glare issue from measuring outside in relatively bright surroundings. I'd guess glare off the target rather than flare in the lens.

Anyway, I made this quick, still suboptimal shot (attached); note the uneven lighting, from a fluorescent lamp, a high-CRI one, but still fluorescent.

With the uneven light corrected via the TPS flatfield algorithm (over 1 stop of correction here), I rendered a profile and, wow, it is almost indistinguishable from the SSF result. (For those new to camera profiling, SSF = spectral sensitivity functions; that is, you have the camera's filter response, so you can do the profiling process virtually, processing the spectra directly without shooting any target. I have the SSFs for the 5D Mark II used here in my tests and use them as a baseline to compare results against.)

It just bothers me that I don't really know for sure what's wrong with the first outdoor shot. I'm documenting my software and I'd like to write some good advice on how to shoot a glossy target. I was thinking that as long as I don't see any reflection in the target I'm fine, and there is indeed no reflection there. But maybe the bright diffuse light coming in from everywhere makes the gloss brighten up and lowers the contrast of the target?

Comparing parts of the target that have a bright and a dark patch next to each other, I see that where my first outdoor shot has, for example, RawGreenLight/RawGreenDark = 3.95, my indoor shot has 5.46 for the same pair, roughly a 40% difference. Same target, same camera, same lens, different setup.
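As a sanity check on the uniform-glare idea: taking the indoor ratio as the true one and assuming the outdoor shot just adds a constant offset f to both patches, the offset needed to explain the drop works out like this:

Code:
# Solve (L + f) / (D + f) = r_out, given the "true" indoor ratio L/D = r_in.
r_in, r_out = 5.46, 3.95
D = 1.0                      # dark patch level, arbitrary units
L = r_in * D
f = (L - r_out * D) / (r_out - 1.0)
print(f"offset f = {f:.2f} x dark level = {f / L:.1%} of the light patch level")
# -> roughly 0.5 x the dark patch, i.e. about 9% of the light green patch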

Give me your best theory on why the outdoor setup (image a few posts back) worked so badly, and what makes this indoor setup work so much better (after flatfield correction, of course).
« Last Edit: May 22, 2015, 12:17:20 pm by torger »
Logged

Bart_van_der_Wolf

Re: How to model lens imperfections for camera profiling?
« Reply #15 on: May 22, 2015, 12:43:17 pm »

Give me your best theory why the outdoor setup (image a few post back) worked so bad, and what makes this indoor setup work so much better (after flatfield correction of course)

My guess is that when a target lies flat on the ground/chair/table/whatever, the glossy target will specularly reflect the (overcast) sky. The reflection component adds relatively more to the dark patches. A matte target would add neighboring trees and/or houses to the diffuse reflection, not as disproportionately much to the black/dark patches, but more to all patches. Only when we shoot a target in a controlled situation (through a large black panel or curtain) will the reflection of the camera and its surroundings also be black.

Your indoor setup was probably darker, and thus the reflection as seen in the target was also darker.

To give an idea about dark and glossy surfaces: one of the best targets for measuring D-min or for quantifying veiling glare is a hollow cone that is glossy black on the inside. All light is maximally attenuated, and reflected light travels deeper inside, being largely absorbed at each bounce off a wall as it goes. Hardly any light is able to escape back to the aperture. It works much better than black flocking material; it's a light trap. Gloss is a mirror.
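To put rough numbers on the trap, assuming for illustration a 4% specular reflectance for the glossy black walls (about the plain surface reflection of a dielectric):

Code:
r = 0.04   # assumed reflectance per bounce of the glossy black interior
for n in (1, 2, 3, 4):
    print(f"{n} bounce(s): {r ** n:.1e} of the incident light survives")
# 1 -> 4.0e-02, 2 -> 1.6e-03, 3 -> 6.4e-05, 4 -> 2.6e-06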

Cheers,
Bart
« Last Edit: May 22, 2015, 12:45:09 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

ErikKaffehr

Re: How to model lens imperfections for camera profiling?
« Reply #16 on: May 22, 2015, 12:57:48 pm »

Hi Anders,

I have some bias to agree with that statement…

My guess is essentially that proper white balance pretty much eliminates the lens as a major parameter.

Best regards
Erik

Quote from: torger on May 22, 2015, 10:40:30 am
I've seen many do things like reducing a 2% error to 0.05% by some elaborate technique, and skipping over a 15% error.
Logged
Erik Kaffehr
 

torger

Re: How to model lens imperfections for camera profiling?
« Reply #17 on: May 22, 2015, 01:39:40 pm »

Quote from: Bart_van_der_Wolf on May 22, 2015, 12:43:17 pm

Yes, I shot in a dark room with fairly dark surroundings, with only a weak print-viewing lamp. However, wouldn't the dark patch in this case also reflect the light source?

Schematically speaking, the difference is that we have either a very large diffuse light source (the outdoor shot) or a small spot light source (the indoor shot).

I'm probably saying the same thing here, but just to test whether I've understood: could it be that with a spot source off to the side, the "gloss mirror" only reflects that light away, down onto the dark towel that kills it, so the reflection that reaches the camera is only a small diffuse component, while in the outdoor shot light comes from all directions, so you get more of those first-hand (specular) reflections? Sealing off the sides and bottom and only letting in outdoor light from above, or something like that, would then solve the problem, but it would be a very cumbersome setup. So I guess the recommendation will be matte targets outdoors, and glossy targets only for indoor "lab" setups.

I don't think the problem in the outdoor shot was colored reflections; it was really only gray concrete and overcast rain and such. It's just that the large diffuse light source somehow lowers the contrast of the target.
Logged

AlterEgo

Re: How to model lens imperfections for camera profiling?
« Reply #18 on: May 22, 2015, 02:01:08 pm »

I shall try and borrow a higher end instrument.

If budget is an issue or nobody around has an i1Pro(1/2), try the EFI units; they are cheaper rebadged i1Pro(1/2) instruments. Argyll supports the EFI 1000; I'm not sure about the EFI 2000... I tried to use the ColorMunki spectrophotometer, but after several days I was so mad with its usability that I got access to an i1Pro2...
Logged

AlterEgo

Re: How to model lens imperfections for camera profiling?
« Reply #19 on: May 22, 2015, 02:30:24 pm »

The uneven light corrected via the TPS flatfield algorithm, over 1 stop correction here
I think you can easily do better indoors... I finally got myself a proper halogen lamp, 3200K with a 1200W bulb, put an 80A filter on the lens, set the target flat towards the camera with the lamp at an angle to the target (avoiding reflections as much as possible), and got a Lab L* difference of ~1.0 min to 2.0 max across the target field. I then also had a flat-field shot for RawDigger, because the target was mounted with magnets on a uniform surface (steel behind). The target filled only ~1/6 of the frame in the center, which also helped.
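For reference, that uniformity check is just the CIE L* formula applied to the readings; a sketch, assuming the linear values are already normalized so the brightest area equals 1.0:

Code:
import numpy as np

def lab_lightness(Y):
    """CIE L* from relative luminance Y in 0..1."""
    Y = np.asarray(Y, float)
    f = np.where(Y > 216 / 24389, np.cbrt(Y), (24389 / 27 * Y + 16) / 116)
    return 116 * f - 16

# field = patch_levels / patch_levels.max()
# L = lab_lightness(field)
# print("L* spread across the target:", L.max() - L.min())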
Logged