
How to model lens imperfections for camera profiling?


Rainer SLP:

--- Quote from: NancyP on May 21, 2015, 10:52:34 am ---Torger, consult some astrophotographers with higher-level processing skills than mine. I have done a vignetting profile on the Samyang 14mm for use in Nebulosity program, but it didn't involve contrast. Vignetting profile was: get large sheet of thin white plexiglass/perspex/whatever. Position lens so it is pointing straight at sun in cloudless sky, with no obstructions in field of view. Hold the perspex parallel to image plane so it fills field of view. Snap 10 or so frames at each f stop you may wish to use. Each f/stop set gets averaged and used as a lens correction template. I did it this way because I was also loading up sets of dark frames at various ambient temperatures in this program. I am sure that there are better ways. I assume contrast correction would involve a different type of target but the same physical procedure, then a different mathematical algorithm (about which I have no clue - that's why you need the experienced astrophotographer).

--- End quote ---

That is what we call FLAT frames. The described procedure is correct, except that instead of pointing the lens into the sun you can use any part of the sky; you just have to take care not to exceed the sensor's full-well depth. Most DSLR manufacturers do not publish this figure, because for this type of photography it is quite irrelevant, so if you do not know the values, aim for a medium-gray frame with a median around 128. It should have a nice bell-shaped histogram.

Let me see if I can find some. I am going to my observatory today and can dig through my files.

When doing FLATS, do not move the focus position. In astrophotography on a telescope it is normally at infinity  ;D

You can also make a FLAT using a white T-shirt over the lens, shooting toward the north (when in the northern hemisphere) or the south (when in the southern hemisphere).
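Rainer's flat-frame recipe (average ~10 frames per f-stop, then use the result as a correction template) can be sketched roughly as follows; the synthetic vignette, noise level, and frame count below are invented for illustration, and real data would first be dark-subtracted:

```python
import numpy as np

def master_flat(frames):
    """Average a stack of flat frames into one master flat, normalized to mean 1."""
    flat = np.mean([f.astype(np.float64) for f in frames], axis=0)
    return flat / flat.mean()

def flatfield_correct(image, flat):
    """Divide out the falloff pattern captured by the master flat."""
    return image.astype(np.float64) / flat

# Toy demonstration: an evenly lit scene seen through radial light falloff
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
vignette = 1.0 - 0.5 * r2 / r2.max()   # up to one stop darker in the corners
shot = 128.0 * vignette                # what the camera records

# Ten noisy flat frames of the same uniform field, averaged into a master flat
rng = np.random.default_rng(0)
flats = [shot + rng.normal(0.0, 0.5, shot.shape) for _ in range(10)]
corrected = flatfield_correct(shot, master_flat(flats))
```

After division the corrected frame is nearly uniform again; averaging the ten frames is what beats the per-frame noise down.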

Alexey.Danilchenko:

--- Quote from: torger on May 21, 2015, 09:02:03 am ---Lens vignetting is clearly understood and documented, but when it comes to contrast I don't really know where to look. MTF charts only show contrast for details, and there are no small details here, this is about global contrast. If I make a correction for contrast, can I assume that the reproduction between darkest and lightest is linear, and can I assume all three RGB channels are affected equally, or is there some nasty non-linear effects to consider?

--- End quote ---

Have you taken flare into consideration?

NancyP:
Thanks, Rainer, I had temporarily forgotten the "flats" term. I was shooting in late afternoon with an ultra-wide-angle lens (Samyang 14mm, a nice lens with whopping vignetting wide open), so pointing either directly at or directly away from the sun seemed the best option, and since the thin white perspex is shiny, I couldn't use the 180-degrees-from-sun orientation (reflection and shadow). I did use approximately "infinity" focus on this lens (it has no hard stop), aimed for "18% grey" (0 on metering), and didn't get clipping.

torger:
Thanks for the replies. Some responses:

Concerning the current flatfield algorithm: it uses speckled whites in custom targets to start with, but it is already possible to use with the ColorChecker Digital SG. Regardless of what the Rawdigger folks think, I consider it a better method than swapping in a gray card, because you only need one shot and can thus use it in changing outdoor light. Looking at the difference it makes on a typical well-exposed shot in overcast outdoor light, I'd say it doesn't make much real difference; the corrections are around 3-4%, and you won't really see a difference in the resulting profile. I haven't tried to push it with really uneven light, though.
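One way the single-shot idea could work, sketched hypothetically (the patch positions and measured values below are invented, and torger's actual algorithm is not shown here): sample the white speckles at known chart positions, fit a smooth 2-D polynomial surface to the light falloff, and divide it out.

```python
import numpy as np

def fit_falloff(xs, ys, values, order=2):
    """Least-squares fit of a 2-D polynomial illumination surface."""
    terms = lambda x, y: np.stack(
        [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)],
        axis=-1)
    coeffs, *_ = np.linalg.lstsq(terms(xs, ys), values, rcond=None)
    return lambda x, y: terms(x, y) @ coeffs

# White patches at known (normalized) chart positions, with measured linear
# values showing a few percent of uneven illumination -- all made up here
xs = np.array([0.1, 0.5, 0.9, 0.1, 0.9, 0.5])
ys = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.5])
vals = np.array([0.82, 0.88, 0.81, 0.80, 0.79, 0.90])

surface = fit_falloff(xs, ys, vals)
# Per-patch correction factor, normalized to the chart center
correction = surface(0.5, 0.5) / surface(xs, ys)
```

Multiplying each measured patch by its correction factor evens out the illumination, which is consistent with the few-percent corrections described above.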

Flatfield correction with a white plexi I'm very familiar with, as I shoot with an MFD tech cam (they have color-cast issues on wide-angle lenses); I don't think it would add any value beyond the flatfield on speckled whites that I do now.

The thing is that I'm seeing unexplained issues in some test charts I'm doing: the camera seems to register the colors with too low saturation. When I run a simulation with the same target using the camera's spectral sensitivity functions it works out fine, so I'm thinking I could have some issue with contrast. Flare sounds like a plausible theory, but I don't know how to model it; perhaps a simple linear reduction of contrast, measured on black vs. white patches? If flare is so badly non-linear that it really can't be modeled, we'll have to live with it.
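The simple linear model mentioned here, sketched under the assumption that flare acts as a uniform veiling-glare offset: solve for a gain and offset from the black and white patches' measured vs. reference values, then invert that mapping for every patch. All the numbers below are illustrative.

```python
def estimate_flare(black_meas, white_meas, black_ref, white_ref):
    """Fit meas = g * ref + c to the two endpoint patches; c is the flare offset."""
    g = (white_meas - black_meas) / (white_ref - black_ref)
    c = black_meas - g * black_ref
    return g, c

def remove_flare(value, g, c):
    """Invert the linear model to recover the flare-free (contrast-matched) value."""
    return (value - c) / g

# Example: the white patch reads 0.90 and the black 0.05 (linear, normalized),
# but the chart's reference reflectances say they should be 0.89 and 0.03.
g, c = estimate_flare(0.05, 0.90, 0.03, 0.89)
mid = remove_flare(0.46, g, c)   # a mid-gray reading, corrected
```

By construction the corrected black and white land exactly on their reference values; whether real flare behaves this linearly is exactly the open question in the post.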

I've attached how the test shot looks (not a very flare-prone shooting condition, right?); here is a linear rendering with the SSF-generated profile applied (which produces a correct result). The test chart is simply printed on semigloss paper using a pigment inkjet; the spectral spread is not super-good but not too bad either. I'm investigating the performance of these custom charts. In simulations they do fine, as said, but I get problems in practice. I'll also make a matte chart and test whether it works better with less saturated colors. (The chart has only maximally saturated light colors, i.e. no brown, as that is just dark orange; this is also a deliberate test, since with a 2.5D LUT dark patches should really not be needed.) And yes, the paper is OBA-free.
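An illustrative sketch (not the actual profiling code) of why a 2.5D LUT makes dark patches redundant: the lookup is indexed by chromaticity only, so a color and a darker version of the same color (dark orange vs. brown) land on the same LUT entry and receive the same correction.

```python
def chromaticity(r, g, b):
    """Project linear RGB onto the 2-D chromaticity plane (luminance dropped)."""
    s = r + g + b
    return (r / s, g / s)

orange = (0.8, 0.4, 0.1)    # a saturated light orange, linear RGB
brown = (0.4, 0.2, 0.05)    # the same color at half the luminance ("brown")
cx1, cy1 = chromaticity(*orange)
cx2, cy2 = chromaticity(*brown)
```

Both colors reduce to the same chromaticity coordinates, so only one patch per hue/saturation is needed on the chart.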

Most likely a lower-saturation target will work better; it seems high-saturation colors make the errors worse, which is quite natural, as highly saturated colors exercise the less linear combinations of the camera's raw channels. I do not yet know, however, whether high-saturation patches on a target are impossible due to measurement limitations, or whether they can be made to work. Of course there's also a risk that I've made some bug, but I don't think so, as the profiling works fine when I feed it the same target spectra and the camera's SSFs.

torger:
I've done further testing, and a matte target works much better. I've also looked closer at the simulations, and there are indeed some problems with matching high-saturation colors, but they are typically smaller than in real shots.

My current theory is that high-saturation colors are difficult because: 1) larger relative errors in the spectral reflectance measurements, due to the limited range of the spectrometer (mine only covers 420-730 nm); 2) larger impact of camera measurement errors, due to low signal in one or two channels; 3) the colors are harder for the camera to match; 4) it's hard to avoid gloss-glare effects in the shot.

That is, I think modeling flare wouldn't help that much, as it's probably a small error compared to the other challenges. I will, however, add a contrast-matching adjustment to the flatfield algorithm. With inkjet-printed charts you have a good white and a good black, but the grays don't have flat enough spectra (not from the Canon at least, despite multiple gray inks), so I won't support curve matching to start with.
