Luminous Landscape Forum

Equipment & Techniques => Digital Cameras & Shooting Techniques => Topic started by: Jim Kasson on May 08, 2014, 05:23:15 pm

Title: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 08, 2014, 05:23:15 pm
There has been much discussion in this and other forums of how much resolution we need in our sensors. Erik Kaffehr started a thread about how much we can see. I'd like to come at it from another angle.

I have been reading a book by Robert Fiete, entitled Modeling the Imaging Chain of Digital Cameras (http://spie.org/Publications/Book/868276):

There’s a chapter on balancing the resolution of the lens and the sensor, which introduces the concept of system Q, defined as:

Q = 2 * fcs / fco

where fcs is the cutoff frequency of the sampling system (sensor), and fco is the cutoff frequency of the optical system (lens).

An imaging system is in some sense “balanced” when the frequencies are the same, and thus Q=2.

The assumptions of the chapter in the book where Q is discussed are probably appropriate for the kinds of aerial and satellite surveillance systems the author works with, but they are not usually met in the photographic systems that most of us work with. Those assumptions are:

1)   Monochromatic sensors (no CFA)
2)   Diffraction-limited optics
3)   No anti-aliasing filter

Under these assumptions, the cutoff frequency of the sensor is half the inverse of the sensel pitch; we get that from Nyquist.

To get the cutoff frequency of the lens, we need to define the point where diffraction prevents the detection of whether we’re looking at one point or two. Lord Rayleigh came up with this formula in the 19th century:

R = 1.22 * lambda * N, where lambda is the wavelength of the light, and N is the f-stop.

Fiete uses a criterion that makes it harder on the sensor, the rounded Sparrow criterion:

S = lambda * N

Or, in the frequency domain, fco = 1 / (lambda * N)

Thus Q is:

Q = lambda * N / pitch

I figure that some of the finest lenses that we use are close to diffraction-limited at f/8. If that’s true, for 0.5 micrometer light (in the middle of the visible spectrum), a Q of 2 implies:

Pitch = N / 4 micrometers

At f/8 we want a 2-micrometer pixel pitch, finer than any sensor currently available at Micro Four Thirds size or larger. A full-frame sensor with that pitch would have 216 megapixels.
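
If you want to play with the numbers, here is a rough sketch of the arithmetic in plain Python (the function names are just illustrative):

def system_q(wavelength_um, f_number, pitch_um):
    # Fiete's Q = lambda * N / pitch, using the rounded Sparrow cutoff
    return wavelength_um * f_number / pitch_um

def pitch_for_q(wavelength_um, f_number, q=2.0):
    # sensel pitch (um) needed to reach a given Q
    return wavelength_um * f_number / q

def full_frame_megapixels(pitch_um, width_mm=36.0, height_mm=24.0):
    # megapixels of a sensor of the given dimensions at the given pitch
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

lam, n = 0.5, 8.0                                          # 0.5 um light, f/8
p = pitch_for_q(lam, n, q=2.0)                             # 2.0 um
print(p, system_q(lam, n, p), full_frame_megapixels(p))    # 2.0  2.0  216.0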

You can try to come up with a correction to take into account the Bayer array. Depending on the assumptions, the correction should be between 1 and some number greater than 2, but in any case, the pixel pitch should be at least as fine as for a monochromatic sensor.

As an aside, note that you don’t need an AA filter for a system with a Q of 2, since the lens diffraction does the job for you. That’s not true with a Bayer CFA.

I have several questions for anyone who cares to get involved in a discussion:

1)   Is any of this relevant to our photography?
2)   Have I made a math or logical error?
3)   At what aperture do our best lenses become close to being diffraction-limited?
4)   What other questions should I be asking?

For details about the Sparrow criterion, click here (http://blog.kasson.com/?p=5720).
For more details on calculating Q, take a look here (http://blog.kasson.com/?p=5742).
For ruminations on corrections for a Bayer CFA, look at this (http://blog.kasson.com/?p=5752).

Thanks,

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 08, 2014, 06:14:36 pm
Hi,

Thanks for sharing knowledge!

I don't know how relevant this is in this context, but I made a test with my P45+ and had very significant aliasing at f/11 but virtually none at f/16. That sensor has a 6.8 micron pitch. You can find it here (look for Aliasing): http://echophoto.dnsalias.net/ekr/index.php/photoarticles/80-my-mfd-journey-summing-up?start=1

My understanding is that for balanced performance at f/16, a pitch of four microns would be needed.

The lens I used here was my Sonnar 150/4, and I would guess that it reaches maximum performance around f/8. On top of that there is also the amount of defocus to consider.

My Sony Alpha 77 with 3.9 micron pixels is able to show moiré at f/8, but only a little. That sensor probably has an OLPF.

I don't have high end primes, just pretty decent zooms.

Best regards
Erik
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 08, 2014, 09:21:46 pm
R in your Rayleigh formula is the radius of each diffraction spot, so you have to double it to get the distance between two spots matching your pixel grid.

3) It's easy to find out. Do a test of your lens at a variety of aperture settings. A good zoom, probably f/8; a prime, probably f/5.6; a top prime, f/4; the Otus, f/2. Further stopping down will give less detail. Wider than that, lens aberrations damage the output.

Does it matter? It depends on what you need the image for.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 08, 2014, 11:14:24 pm
Hi Jim,

I was considering my results a bit, and I would mention that there is considerable diffusion of light in the pixels. I have the impression the diffusion length of light in silicon is around 2 microns or more, depending on wavelength.

Best regards
Erik
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 08, 2014, 11:25:26 pm
R in your Rayleigh formula is the radius of each diffraction spot, so you have to double it to get the distance between two spots matching your pixel grid.

The radius of the Airy function is not well-defined, since the distance to which it extends depends on the precision used in the calculations. Can you resolve the 3rd ring? The 9th? The 38,465th? If you say that the radius is the distance to the first zero, then the R in the formula is that. However, the Rayleigh criterion is more subtle than that.

Lord Rayleigh's criterion states that if the distance between the center positions of two impulses, or spatial Dirac delta functions (although he never used that term, having predated Paul Dirac), is such that the center of each lies on the first zero of the other, the two points are barely resolvable.

(http://www.kasson.com/ll/rayleigh%20sum%20all.PNG)

Subsequently, astronomers found that they could resolve (in the sense that they could tell if there were one or two stars) points closer than that. Hence the Sparrow criterion.

(http://www.kasson.com/ll/sparrow%20sum%20plus.PNG)
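
If anyone wants to reproduce curves like the ones above, here is a minimal sketch (assuming Python with numpy, scipy, and matplotlib; it just sums two Airy-pattern intensity profiles at the Rayleigh and rounded Sparrow separations):

import numpy as np
import matplotlib.pyplot as plt
from scipy.special import j1

def airy(r_um, lam_um=0.5, n=8.0):
    # Airy pattern intensity at radius r (um), normalized to 1.0 at the center
    x = np.pi * np.asarray(r_um, float) / (lam_um * n)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

lam, n = 0.5, 8.0
r = np.linspace(-12, 12, 2401)
for label, sep in (("Rayleigh", 1.22 * lam * n), ("rounded Sparrow", lam * n)):
    plt.plot(r, airy(r - sep / 2, lam, n) + airy(r + sep / 2, lam, n), label=label)
plt.xlabel("position (um)"); plt.ylabel("summed intensity"); plt.legend(); plt.show()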

Jim



Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 09, 2014, 12:46:26 am
Hi Jim,

Although I have recently focused on what is visible in print, I have also looked into the effects of aliasing with large and small pixels; the best article is probably this one: http://echophoto.dnsalias.net/ekr/index.php/photoarticles/78-aliasing-and-supersampling-why-small-pixels-are-good

I would say that the feather shots in that article are quite interesting.

One thing I have noticed in "the differences at A2 size" article is that the obvious differences in image quality are in areas showing high contrast detail with significant aliasing. So aliasing has a negative effect on image quality. In this case the IQ-180 showed less aliasing, probably because the sensor mostly outresolved the test target.
 
Best regards
Erik
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 09, 2014, 01:13:40 am
The radius of the Airy function is not well-defined, since the distance to which it extends depends on the precision used in the calculations. Can you resolve the 3rd ring? The 9th? The 38,465th? If you say that the radius is the distance to the first zero, then the R in the formula is that. However, the Rayleigh criterion is more subtle than that.

Lord Rayleigh's criterion states that if the distance between the center positions of two impulses, or spatial Dirac delta functions (although he never used that term, having predated Paul Dirac), is such that the center of each lies on the first zero of the other, the two points are barely resolvable.

(http://www.kasson.com/ll/rayleigh%20sum%20all.PNG)

Subsequently, astronomers found that they could resolve (in the sense that they could tell if there were one or two stars) points closer than that. Hence the Sparrow criterion.

(http://www.kasson.com/ll/sparrow%20sum%20plus.PNG)

Jim





That is fine. What does your CFA need to make your raw converter generate 2 distinct spots? The spots may or may not align with your pixels. I would venture that you need peaks at least a diagonal apart.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 09, 2014, 01:29:42 am
To make this conversation more practical for photography, maybe we can ask a new question. If our eyes resolve about 1 minute of arc, we need systems with the same or greater ability to give textures the same look. If we photograph a fabric and it has more of a plastic look than a satin sheen, it may be that the components you refer to are lacking. Or skin tones, which we were discussing in another thread. The surface is somewhat translucent. There are also very fine lines with very different contrast in most light.

So do our lenses always handle 1 minute of arc? I would say wide lenses, no. If we do photograph with enough detail to get the 1 minute, do our screens or prints show the detail while still showing the "forest"? Big prints, yes. Our screens, no. Prints lack the dynamic range of our eyes. Screens are getting good enough. So maybe the weakest link is our screen resolution. We need to get to 8K soon to be able to show our images as they were seen.
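
As a rough back-of-the-envelope check (a sketch in Python; it assumes 1 arc-minute acuity and a viewer about one display diagonal away, so the numbers are illustrative only):

import math

def pixels_for_acuity(width_m, viewing_distance_m, acuity_arcmin=1.0):
    # horizontal pixel count needed so one pixel subtends the given visual acuity
    pixel_m = viewing_distance_m * math.tan(math.radians(acuity_arcmin / 60.0))
    return width_m / pixel_m

diag = 1.5                                    # a 16:9 display with a 1.5 m diagonal
width = diag * 16 / math.hypot(16, 9)
print(round(pixels_for_acuity(width, viewing_distance_m=diag)))   # about 3000 pixels

Viewed from half that distance, the requirement roughly doubles, which is where the case for 8K comes from.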
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: EinstStein on May 09, 2014, 10:05:20 am
I don't think the definition of Q and the balance criterion (=2) is correct.
There is a well-known theorem about the required sampling rate for band-limited signals, which requires

Sampling rate = 2 x BW, where BW is the difference between the minimum frequency and the maximum frequency.

This should hold even if the minimum frequency is not 0, since the original signal can be shifted in the frequency domain by modulating it with a sinusoid at the minimum frequency.

If we focus on the case where the minimum frequency is 0, then a system with a balanced sensor and lens should have

Sensor resolution = lens resolution x 2,
so a better definition of Q should be

Q = lens resolution x 2 / sensor resolution.
The balance criterion should be Q = 1.

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: bjanes on May 09, 2014, 10:05:43 am
There has been much discussion in this and other forums of how much resolution we need in our sensors. Erik Kaffehr started a thread about how much we can see. I'd like to come at it from another angle.

I have several questions for anyone who cares to get involved in a discussion:

1)   Is any of this relevant to our photography?
2)   Have I made a math or logical error?
3)   At what aperture do our best lenses become close to being diffraction-limited?
4)   What other questions should I be asking?

Jim


Jim,

Thanks for an excellent post. With regard to question 1, the contrast at the Rayleigh limit (usually regarded as 9-10%, although Bart van der Wolf has calculated a higher value) is too low to be photographically useful. 50% contrast is often quoted as corresponding most closely to perceived image sharpness and, if one uses that criterion, the equations change. Table 1 of the excellent treatise by Osuna and Garcia, Do Sensors Outresolve Lenses (http://www.luminous-landscape.com/tutorials/resolution.shtml), lists resolutions for 50% and 80% contrast. Using the Nikon D800e for reference, the sensor has a Nyquist of 103 lp/mm. Using the Rayleigh criterion, the sensor is nowhere near outresolving a diffraction-limited lens, but few mass-produced lenses are diffraction limited at their widest apertures. The best lenses are nearly diffraction limited at mid apertures. Using the Rayleigh criterion and the 800e, a diffraction-limited lens will outresolve the sensor until f/16. However, if one uses 50% or 80%, the equation changes. Furthermore, as Osuna and Garcia point out, one may need to sample at more than 2 pixels per lp.

Also, as SQF analysis points out, human perception is most sensitive to high contrast at relatively low frequencies. Depending on the print size, a lens with high contrast at lower frequencies may give better results than a lens with resolution to the Rayleigh limit.

Regards,

Bill

ps edited 16May2014 to correct typo. Contrast at Rayleigh is in terms of percent, not lp/mm.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 09, 2014, 01:18:54 pm
I don't think the definition of Q and the balance criterion (=2) is correct.
There is a well-known theorem about the required sampling rate for band-limited signals, which requires

Sampling rate = 2 x BW, where BW is the difference between the minimum frequency and the maximum frequency.

This should hold even if the minimum frequency is not 0, since the original signal can be shifted in the frequency domain by modulating it with a sinusoid at the minimum frequency.

If we focus on the case where the minimum frequency is 0, then a system with a balanced sensor and lens should have

Sensor resolution = lens resolution x 2,
so a better definition of Q should be

Q = lens resolution x 2 / sensor resolution.
The balance criterion should be Q = 1.



Thanks for pointing out the bandwidth/min frequency nicety. I hadn't thought of that in a long time. Let's assume min freq is zero, though.

I believe your restatement of Q, essentially changing it from frequency to distance, is accurate, if lens resolution is interpreted as half the Sparrow distance. Stated in the frequency domain, Q=2 says that the lens stops delivering contrast (the contrast actually drops to 1%, which is close enough) at one half cycle per pixel, so there can be no aliasing. Let's look at it in distance. Q = 2 says that there are 4.88 samples across the first ring of the Airy function. For two point spread functions at the rounded Sparrow distance, there is one sample between the two peaks, assuming the sampling grid is aligned with the two peaks. That one sample in the middle is necessary to distinguish a drop in signal level as the points move apart. 

If your definition of Q says that the lens resolution is the Sparrow distance, it's a new definition, and yes, I think that in that case, Q should be 1 for a "balanced" system.

Let's work through an example, with an f/8 lens and 0.5 micrometer light. The Sparrow distance is

S = 0.5 * 8 = 4 micrometers. The resolution of the lens is half that, or 2 micrometers. Remember, we're (barely) resolving a point halfway in between the two Airy functions.

Let's say the sensor pitch is 2 micrometers.

Using your formula,  Q = lens resolution x 2 / sensor resolution = 2 * 2 / 2 = 2.

Does that make sense?
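
As a quick numerical check of the two formulations (plain Python, just restating the arithmetic above):

wavelength_um, f_number, pitch_um = 0.5, 8.0, 2.0

sparrow_um = wavelength_um * f_number         # rounded Sparrow distance: 4 um
lens_resolution_um = sparrow_um / 2.0         # half the Sparrow distance: 2 um

q_frequency = wavelength_um * f_number / pitch_um      # Q = lambda * N / pitch
q_distance = lens_resolution_um * 2.0 / pitch_um       # Q = lens resolution x 2 / pitch
print(q_frequency, q_distance)                         # both 2.0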

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 09, 2014, 03:03:29 pm
...the contrast at the Rayleigh limit (usually regarded at 9-10 lp/mm, although Bart van der Wolf has calculated a higher value), is too low to be photographically useful. 50% contrast is often quoted as corresponding most closely to perceived image sharpness and, if one uses that criterion, the equations change. Table 1 of the excellent treatise by Osuna and Garcia, Do Sensors Outresolve Lenses (http://www.luminous-landscape.com/tutorials/resolution.shtml), lists resolutions for 50% and 80% contrast. Using the Nikon D800e for reference, the sensor has a Nyquist of 103 lp/mm. Using the Rayleigh criterion, the sensor is nowhere up to outresolving a diffraction limited lens, but few mass produced lenses are diffraction limited at their widest apertures. The best lenses are nearly diffraction limited at mid apertures. Using the Rayleigh criterion and the 800e, a diffraction limited lens will outresolve the sensor until f/16. However, if one uses 50% or 80%, the equation changes. Furthermore, as Osuna and Garcia point out, one may need to sample at more than 2 pixels per lp.

Bill, thanks for the perspective, and for the link to the excellent tutorial.

Jack Hogan has pointed out elsewhere that there are ways to do presampling AA filtering that don't affect spatial frequencies under the Nyquist frequency as much as doing AA filtering with diffraction. But there seems to be a trend towards sensors with no AA filtering, so it's probably worthwhile to look at that case.

The rounded Sparrow criterion gives a contrast of 2% ((peak - valley) / peak). My calculations for the Rayleigh criterion put the contrast at (1 - 0.75) / 1 = 0.25. Am I using the wrong definition of contrast? Maybe it should be half those numbers. If we can agree on a definition of contrast, then I can do a plot of contrast versus point spread function separation, and we'll have a way to correct the Q definition for whatever level of contrast we deem crucial, or, probably better, pick a target Q based on our contrast criterion.
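
In the meantime, here is a sketch of the kind of computation I have in mind (Python with numpy and scipy): sweep the separation of two Airy patterns and evaluate both candidate contrast definitions, so a chosen contrast criterion can be mapped back onto Q.

import numpy as np
from scipy.special import j1

def airy(r_um, lam_um=0.5, n=8.0):
    # Airy pattern intensity at radius r (um), normalized to 1.0 at the center
    x = np.pi * np.asarray(r_um, float) / (lam_um * n)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
    return out

def dip_contrasts(sep_um, lam_um=0.5, n=8.0):
    # contrast between the two peaks and the valley midway between them,
    # reported as (peak - valley) / peak and as (peak - valley) / (peak + valley)
    r = np.linspace(-4 * lam_um * n, 4 * lam_um * n, 8001)
    total = airy(r - sep_um / 2, lam_um, n) + airy(r + sep_um / 2, lam_um, n)
    peak, valley = total.max(), total[r.size // 2]
    return (peak - valley) / peak, (peak - valley) / (peak + valley)

lam, n = 0.5, 8.0
for k in np.linspace(0.8, 1.6, 9):            # separations from 0.8x to 1.6x the Sparrow distance
    print(round(k, 1), dip_contrasts(k * lam * n, lam, n))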

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Iluvmycam on May 09, 2014, 03:08:57 pm
Don't know. Flat bed scanned 35mm film = 3 or 4 mp. Everything above that is gravy for me.

http://photographycompared.tumblr.com/
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 09, 2014, 03:48:31 pm
http://photographycompared.tumblr.com/

Nice real-world comparisons. Thanks.

WRT film comparisons, I recall an experience I had at the Kodak research center in Rochester in the early 90s, about a year before they announced the PhotoCD. They handed me two 16x20s, one from a 35mm Ektar 25 negative, and one scanned at what we quaintly called then 6 megapixel resolution (it was 3K by 2K, but since it had separate sensors for the red, green, and blue pixels, we'd call it an 18 MP capture today) from that negative, edited, output to a 5x7 interneg on a film recorder, and printed from that interneg. They challenged me to tell the difference, and were disappointed when I told them which was which. It was close, but I had my secret weapon. I'm nearsighted, and I took off my glasses.

The buzz on the street at that time was that Ektar 25 could resolve 200 lp/mm. It would take a sensor with a 2.5 micrometer pitch to do that. At full frame 35mm size, that would be 138 MP. The Kodak folks never explained why they thought they could digitize the film well with a sensor with a 12 micrometer pitch. I suspect the answer had something to do with MTF50 trumping MTF0.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 09, 2014, 04:15:39 pm
3) It's easy to find out. Do a test of your lens at a variety of aperture settings. A good zoom, probably f/8; a prime, probably f/5.6; a top prime, f/4; the Otus, f/2. Further stopping down will give less detail. Wider than that, lens aberrations damage the output.

I get that it is a necessary condition for a diffraction-limited lens that resolution decreases as f-stops numerically increase. I'm not sure that it is a sufficient condition, and I'd appreciate guidance on that point.

Let's assume that we have a test protocol that holds camera ISO setting and shutter speed constant, and compensates for the exposure differences at various f-stops by varying the amount of light on the test chart or by using a variable-absorption ND filter, or both. (If we use flash we have to use a variable ND filter or be sure that the camera's vibration is not a factor, since varying the output of strobes varies the flash duration.)

If we focus wide open, could focus shift upon stopping down cause a change so great that the DOF wouldn't compensate, and we'd see the resolution decreasing due to focus error? A fix for this would be to focus at the shooting aperture for each exposure, but we'd have to make enough exposures to gather statistics on each group to make sure that focusing error isn't skewing the result.

Are there lens optical defects that increase with numerically-increasing f-stops that could cause us to think we'd found diffraction before it actually occurs?

The best way to tell when a lens is diffraction-limited is to put it on a bench and look for the rings, but I don't have the equipment to do that.

By the way, my admittedly cursory testing of the Otus makes me think that f/2 is not its sharpest f-stop, but now you've got me interested and I'll have to get serious.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: bjanes on May 09, 2014, 05:59:03 pm
Bill, thanks for the perspective, and for the link to the excellent tutorial.

Jack Hogan has pointed out elsewhere that there are ways to do presampling AA filtering that don't affect spatial frequencies under the Nyquist frequency as much as doing AA filtering with diffraction. But there seems to be a trend towards sensors with no AA filtering, so it's probably worthwhile to look at that case.

The rounded Sparrow criterion gives a contrast of 2% ((peak - valley) / peak). My calculations for the Rayleigh criterion put the contrast at (1 - 0.75) / 1 = 0.25. Am I using the wrong definition of contrast? Maybe it should be half those numbers. If we can agree on a definition of contrast, then I can do a plot of contrast versus point spread function separation, and we'll have a way to correct the Q definition for whatever level of contrast we deem crucial, or, probably better, pick a target Q based on our contrast criterion.

Jim

Jim,

I have not seen Jack Hogan's presampling method. Do you have a link?

As to MTF at the Rayleigh criterion, I am no expert and don't know how to do the calculation, but Osuna and Garcia (among others) cite a value of 9%. Bart van der Wolf did some calculations and derived a considerably higher value, but I don't remember the link. I would think that 25% would be usable in a terrestrial photographic situation, which is more demanding than separating point sources (stars) on a dark background.

Bill
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 09, 2014, 06:24:17 pm
I get that it is a necessary condition for a diffraction-limited lens that resolution decreases as f-stops numerically increase. I'm not sure that it is a sufficient condition, and I'd appreciate guidance on that point.

Let's assume that we have a test protocol that holds camera ISO setting and shutter speed constant, and compensates for the exposure differences at various f-stops by varying the amount of light on the test chart or by using a variable-absorption ND filter, or both. (If we use flash we have to use a variable ND filter or be sure that the camera's vibration is not a factor, since varying the output of strobes varies the flash duration.)

If we focus wide open, could focus shift upon stopping down cause a change so great that the DOF wouldn't compensate, and we'd see the resolution decreasing due to focus error? A fix for this would be to focus at the shooting aperture for each exposure, but we'd have to make enough exposures to gather statistics on each group to make sure that focusing error isn't skewing the result.

Are there lens optical defects that increase with numerically-increasing f-stops that could cause us to think we'd found diffraction before it actually occurs?

The best way to tell when a lens is diffraction-limited is to put it on a bench and look for the rings, but I don't have the equipment to do that.

By the way, my admittedly cursory testing of the Otus makes me think that f/2 is not its sharpest f-stop, but now you've got me interested and I'll have to get serious.

Jim


Manual focus with the DoF preview button pressed would take care of focus shift.

To my limited knowledge of lens aberrations, they are strongest over the full set of rays (wide open). My subjective testing of the Art 35 and the Nikon 85 1.8G gave the best results at f/4. My old Minolta primes were best at f/5.6 for the f/2.8 or wider lenses. You can only see the rings on a diffraction-limited lens. Most photography lenses are not diffraction limited. Sure, you can stop them down to make them so; I'm not sure that would work as expected. In amateur astronomy we make artificial stars for testing simply: a bright light source behind a pinhole, off in the distance at night.

My Dob is supposedly diffraction limited at 0.5 seconds of arc. I modified it with a smaller secondary (from about 35% down to about 15-18%), so I don't know what it is now. I think it is now mirror limited at about 1/4 wave.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: EinstStein on May 10, 2014, 11:25:42 am

It's OK to play with the definition of each term in the equation, but it should not affect the result of the following statement:

If the lens can resolve 100 lines in 1 mm, the lens's resolving power has a spatial wavelength of 10 um.
The required sampling on the sensor should have a pixel-to-pixel distance of 5 um.

Note that this theory assumes an ideal, brick-wall band limit. In the real world, the response near the maximum frequency has a long tail, which can let aliasing creep in, so the required sensor resolution would be even higher. The quantification of that effect is an engineering art. I can accept any fudge factor, but that would be a matter of taste ... I would like to know if someone can find the statistically popular taste.





Thanks for pointing out the bandwidth/min frequency nicety. I hadn't thought of that in a long time. Let's assume min freq is zero, though.

I believe your restatement of Q, essentially changing it from frequency to distance, is accurate, if lens resolution is interpreted as half the Sparrow distance. Stated in the frequency domain, Q=2 says that the lens stops delivering contrast (the contrast actually drops to 1%, which is close enough) at one half cycle per pixel, so there can be no aliasing. Let's look at it in distance. Q = 2 says that there are 4.88 samples across the first ring of the Airy function. For two point spread functions at the rounded Sparrow distance, there is one sample between the two peaks, assuming the sampling grid is aligned with the two peaks. That one sample in the middle is necessary to distinguish a drop in signal level as the points move apart. 

If your definition of Q says that the lens resolution is the Sparrow distance, it's a new definition, and yes, I think that in that case, Q should be 1 for a "balanced" system.

Let's work through an example, with an f/8 lens and 0.5 micrometer light. The Sparrow distance is

S = 0.5 * 8 = 4 micrometers. The resolution of the lens is half that, or 2 micrometers. Remember, we're (barely) resolving a point halfway in between the two Airy functions.

Let's say the sensor pitch is 2 micrometers.

Using your formula,  Q = lens resolution x 2 / sensor resolution = 2 * 2 / 2 = 2.

Does that make sense?

Jim

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 10, 2014, 11:36:19 am
Hi,

On the other hand, we probably would like to avoid aliasing and fake detail. It seems that only a little contrast (MTF) at Nyquist is needed to generate significant aliasing.

Best regards
Erik


Jim,

Thanks for an excellent post. With regard to question 1, the contrast at the Rayleigh limit (usually regarded at 9-10 lp/mm, although Bart van der Wolf has calculated a higher value), is too low to be photographically useful. 50% contrast is often quoted as corresponding most closely to perceived image sharpness and, if one uses that criterion, the equations change. Table 1 of the excellent treatise by Osuna and Garcia, Do Sensors Outresolve Lenses (http://www.luminous-landscape.com/tutorials/resolution.shtml), lists resolutions for 50% and 80% contrast. Using the Nikon D800e for reference, the sensor has a Nyquist of 103 lp/mm. Using the Rayleigh criterion, the sensor is nowhere up to outresolving a diffraction limited lens, but few mass produced lenses are diffraction limited at their widest apertures. The best lenses are nearly diffraction limited at mid apertures. Using the Rayleigh criterion and the 800e, a diffraction limited lens will outresolve the sensor until f/16. However, if one uses 50% or 80%, the equation changes. Furthermore, as Osuna and Garcia point out, one may need to sample at more than 2 pixels per lp.

Also, as SQF analysis points out, human perception is most sensitive to high contrast at relatively low frequencies. Depending on the print size, a lens with high contrast at lower frequencies may give better results than a lens with resolution to the Rayleigh limit.

Regards,

Bill
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 10, 2014, 12:09:53 pm
If the lens can resolve 100 lines in 1 mm, the lens's resolving power has a spatial wavelength of 10 um.
The required sampling on the sensor should have a pixel-to-pixel distance of 5 um.

Are you sure you don't mean, "If the lens can resolve 100 line pairs in 1 mm, the lens's resolving power has a spatial wavelength of 10 um"?

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 10, 2014, 12:36:29 pm
http://www.dxo.com/sites/dump.dxo.com/files/dxoimages/ei/sci-publications/Information_Capacity_EI2010.pdf (http://www.dxo.com/sites/dump.dxo.com/files/dxoimages/ei/sci-publications/Information_Capacity_EI2010.pdf)
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 10, 2014, 01:00:59 pm
I have not seen Jack Hogan's presampling method. Do you have a link?

Bill, take a look here:

http://www.dpreview.com/forums/post/53643731

Look at the blue dotted curve in the second figure, which Jack says models the AA filter in the Nikon D610.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: EinstStein on May 11, 2014, 03:47:17 pm
Are you sure you don't mean, "If the lens can resolve 100 line pairs in 1 mm, the lens resolving power has spatial wave length =10um."

Jim

Wavelength is the distance from one peak to the next peak. So if there are 100 lines in 1 mm, the wavelength is 1 mm / 100 = 10 um.
If it's 100 line pairs in 1 mm, that's 200 lines in 1 mm. The wavelength will be 5 um.

For a discrete sampling system, the pixel-to-pixel distance should be half the object's spatial wavelength.
So, for 100 lines/mm, the sensor's pixel-to-pixel spacing should be 5 um, and for 100 line pairs, it would be 2.5 um.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 11, 2014, 03:59:00 pm
I guess you are discussing the length of different waves. Jim means the wavelength of green light.

Best regards
Erik



Wavelength is the distance from one peak to the next peak. So if there are 100 lines in 1 mm, the wavelength is 1 mm / 100 = 10 um.
If it's 100 line pairs in 1 mm, that's 200 lines in 1 mm. The wavelength will be 5 um.

For a discrete sampling system, the pixel-to-pixel distance should be half the object's spatial wavelength.
So, for 100 lines/mm, the sensor's pixel-to-pixel spacing should be 5 um, and for 100 line pairs, it would be 2.5 um.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 11, 2014, 03:59:53 pm
Wavelength is the distance from one peak to the next peak. So if there are 100 lines in 1 mm, the wavelength is 1 mm / 100 = 10 um.
If it's 100 line pairs in 1 mm, that's 200 lines in 1 mm. The wavelength will be 5 um.

For a discrete sampling system, the pixel-to-pixel distance should be half the object's spatial wavelength.
So, for 100 lines/mm, the sensor's pixel-to-pixel spacing should be 5 um, and for 100 line pairs, it would be 2.5 um.

Uh, a line pair is one light line and one dark line. Thus one cycle.
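
To keep the bookkeeping straight, here is the conversion in those terms (a trivial Python sketch):

def pitch_for_lp_per_mm(lp_per_mm):
    # minimum sensel pitch (um) to sample lp_per_mm cycles/mm at two samples per cycle
    return 1000.0 / (2.0 * lp_per_mm)

def nyquist_lp_per_mm(pitch_um):
    # Nyquist frequency (cycles/mm) of a sensor with the given pitch (um)
    return 1000.0 / (2.0 * pitch_um)

print(pitch_for_lp_per_mm(100))    # 100 lp/mm (i.e. 200 lines/mm) -> 5.0 um pitch
print(nyquist_lp_per_mm(4.88))     # a 4.88 um pitch -> about 102 lp/mm at Nyquist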

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 11, 2014, 04:02:15 pm
I guess you are discussing the length of different waves. Jim means the wavelength of green light.

Not in this case, Erik. We're talking about spatial wavelengths and spatial frequencies in test targets.

[Added: and I think we're mostly talking about definitions, not sampling theory. But I could be wrong. We'll get it sorted out.]

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 11, 2014, 04:08:55 pm
OK! Sorry!

Best regards
Erik

Not in this case, Erik. We're talking about spatial wavelengths and spatial frequencies in test targets.

Jim
Title: Pictures!
Post by: Jim Kasson on May 11, 2014, 04:31:10 pm
Here are four images off the camera simulator. ISO 12233 target, with the white point at 200 and the black point at 55 to make sure there’s no clipping. Perfect, diffraction-limited lens set at f/8. 550 nm light used to calculate diffraction, even for the red and blue planes. Kernel used for diffraction convolution 4 times as wide and tall as the Sparrow distance. No vibration, no focus errors, no photon noise. 14 bit perfect ADCs in the camera. 100% fill factor. The camera's primaries are the AdobeRGB primaries. Sensor dimensions are 3840 um x 2560 um. Bilinear interpolation for demosaicing. As Q goes up, the sensels get smaller, and there are more of them.

[Added: RGGB Bayer CFA with the Adobe RGB primaries as the sensel wavelength-sensitivity-corrected dye set.]
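
For anyone curious about the mechanics, here is a heavily stripped-down sketch of the core of such a simulation (Python with numpy and scipy). It is not the simulator itself, just the diffraction convolution and RGGB sampling steps, applied to a hypothetical grayscale target array:

import numpy as np
from scipy.special import j1
from scipy.signal import fftconvolve

def airy_kernel(pitch_um, lam_um=0.55, n=8.0):
    # 2-D Airy-pattern PSF sampled at the sensel pitch, roughly 4x the Sparrow distance across
    half = int(np.ceil(2.0 * lam_um * n / pitch_um))
    y, x = np.mgrid[-half:half + 1, -half:half + 1] * pitch_um
    arg = np.pi * np.hypot(x, y) / (lam_um * n)
    psf = np.ones_like(arg)
    nz = arg != 0
    psf[nz] = (2.0 * j1(arg[nz]) / arg[nz]) ** 2
    return psf / psf.sum()

def bayer_sample(image):
    # keep only the sensel values an RGGB Bayer CFA would actually record
    r = np.zeros_like(image); g = np.zeros_like(image); b = np.zeros_like(image)
    r[0::2, 0::2] = image[0::2, 0::2]
    g[0::2, 1::2] = image[0::2, 1::2]
    g[1::2, 0::2] = image[1::2, 0::2]
    b[1::2, 1::2] = image[1::2, 1::2]
    return r, g, b                 # demosaicing (e.g. bilinear) would interpolate the gaps

target = np.ones((512, 512)); target[:, 256:] = 0.2       # stand-in for a chart, one sample per sensel
blurred = fftconvolve(target, airy_kernel(pitch_um=2.0), mode="same")
r, g, b = bayer_sample(blurred)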

You may want to look at the images at actual size. I'll put links in another post.

Q = 1:

(http://www.kasson.com/ll/ISO_12233DevelopedQ1lc.jpg)

Q = 1.414:

(http://www.kasson.com/ll/ISO_12233DevelopedQ1414lc.jpg)

Q = 2:

(http://www.kasson.com/ll/ISO_12233DevelopedQ2lc.jpg)

Q = 2.828

(http://www.kasson.com/ll/ISO_12233DevelopedQ2828lc.jpg)

The false color artifacts are, not surprisingly, worse when the target values go from 0 to 255.

I'll be doing some slanted edge MTF analysis.

Jim
Title: Links to full-sized ISO 12233 images
Post by: Jim Kasson on May 11, 2014, 04:37:19 pm
Here are links to full sized versions of the images from the preceding post.

Q=1 (http://www.kasson.com/ll/ISO_12233DevelopedQ1lc.jpg)

Q=1.414 (http://www.kasson.com/ll/ISO_12233DevelopedQ1414lc.jpg)

Q=2 (http://www.kasson.com/ll/ISO_12233DevelopedQ2lc.jpg)

Q=2.828 (http://www.kasson.com/ll/ISO_12233DevelopedQ2828lc.jpg)

Jim
Title: Re: Pictures!
Post by: ErikKaffehr on May 11, 2014, 04:38:11 pm
Great Work!

Erik

Here are four images off the camera simulator. ISO 12233 target, with the white point at 200 and the black point at 55 to make sure there’s no clipping. Perfect, diffraction-limited lens set at f/8. 550 nm light used to calculate diffraction, even for the red and blue planes. Kernel used for diffraction convolution 4 times as wide and tall as the Sparrow distance. No vibration, no focus errors, no photon noise. 14 bit perfect ADCs in the camera. 100% fill factor. The camera's primaries are the AdobeRGB primaries. Sensor dimensions are 3840 um x 2560 um. Bilinear interpolation for demosaicing. As Q goes up, the sensels get smaller, and there are more of them.

You may want to look at the images at actual size. I'll put links in another post.

Q = 1:

(http://www.kasson.com/ll/ISO_12233DevelopedQ1lc.jpg)

Q = 1.414:

(http://www.kasson.com/ll/ISO_12233DevelopedQ1414lc.jpg)

Q = 2:

(http://www.kasson.com/ll/ISO_12233DevelopedQ2lc.jpg)

Q = 2.828

(http://www.kasson.com/ll/ISO_12233DevelopedQ2828lc.jpg)

The false color artifacts are, not surprisingly, worse when the target values go from 0 to 255.

I'll be doing some slanted edge MTF analysis.

Jim
Title: Slanted edge results delayed
Post by: Jim Kasson on May 11, 2014, 04:50:43 pm
I got a clipping warning from Imatest (http://www.imatest.com) when I tried to do the slanted edge MTF analysis.

(http://www.kasson.com/ll/clipwarn.PNG)

I guess I'll have to rerun the sim with an even lower-contrast target.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: EinstStein on May 11, 2014, 06:05:25 pm
>> Uh, a line pair is one light line and one dark line. Thus one cycle.

You may have a better handling in the terminology. That's fine. I still stand with my statement that the spatial wavelength is from peak to the next peak.
I hope no one think its the high peak to the low peak. Yes, when said peak to peak, it means high peak to high peak or low peak to low peak.

   

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 11, 2014, 06:43:45 pm
>> Uh, a line pair is one light line and one dark line. Thus one cycle.

You may have a better handling in the terminology. That's fine. I still stand with my statement that the spatial wavelength is from peak to the next peak.
I hope no one think its the high peak to the low peak. Yes, when said peak to peak, it means high peak to high peak or low peak to low peak.

Then we agree.

Jim
Title: Re: Slanted edge results delayed
Post by: Bart_van_der_Wolf on May 12, 2014, 04:30:26 am
I got a clipping warning from Imatest (http://www.imatest.com) when I tried to do the slanted edge MTF analysis.

(http://www.kasson.com/ll/clipwarn.PNG)

I guess I'll have to rerun the sim with an even lower-contrast target.

Hi Jim,

I just started reading this thread, so I won't comment just yet, but I noticed in the Imatest chart that you used a Gamma 2.20, where it is supposed to be 0.454545 (or 0.5, which is also close enough for most comparisons). Just wanted to warn at this stage, before I got through reading everything said so far.

Cheers,
Bart
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: hjulenissen on May 12, 2014, 04:46:49 am
That is fine. What does your CFA need to make your raw converter generate 2 distinct spots? The spots may or may not align with your pixels. I would venture that you need peaks at least a diagonal apart.
To the degree that the sampling system adheres to Nyquist, the exact alignment of the spots does not matter, only the continuous bandwidth of the signal.

Of course, sampling systems do not adhere strictly to Nyquist, but the combination of diffraction, OLPF, sensel active area, camera movement, etc. should make alignment less of an issue.

-h
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Bart_van_der_Wolf on May 12, 2014, 06:04:34 am
To the degree that the sampling system adheres to Nyquist, the exact alignment of the spots does not matter, only the continuous bandwidth of the signal.

Of course, sampling systems do not adhere strictly to Nyquist, but the combination of diffraction, OLPF, sensel active area, camera movement, etc. should make alignment less of an issue.

Hi,

Correct, as can be determined/verified with my Slanted Edge Evaluation tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html). One compares the successive responses of some 10 (in case of a 5.7 degree slant) edge transitions (sample a selection, go 1 line down and 10 positions to the right, resample, ... repeated 10x). This will show at 1/10th of a pixel sub-sampled intervals how the edge transition behaves. A wave indicates phase alignment issues, a more or less level performance indicates insensitivity to sensel alignment (see attached example of my EF 100mm f/2.8L Macro IS).

Cheers,
Bart
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: hjulenissen on May 12, 2014, 06:56:35 am
Correct, as can be determined/verified with my Slanted Edge Evaluation tool (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html). One compares the successive responses of some 10 (in case of a 5.7 degree slant) edge transitions (sample a selection, go 1 line down and 10 positions to the right, resample, ... repeated 10x). This will show at 1/10th of a pixel sub-sampled intervals how the edge transition behaves. A wave indicates phase alignment issues, a more or less level performance indicates insensitivity to sensel alignment (see attached example of my EF 100mm f/2.8L Macro IS).
If we go further down this route:
Would not (something like) the D800/A7r (or D7100), with high sensel density and no OLPF, be better suited for estimating lens response above fs/2, since they do less prefiltering than their OLPF-equipped brethren? By having many (slightly offset, aliased) samples, it ought to be possible to gain information about the MTF of the lens beyond the regular passband of the sensor, thus knowing how (e.g.) your 100mm will perform on a future 54MP FF (or 36MP crop) camera.

Since pixel site area will in itself work as a lowpass filter (and diffraction is inevitable), one might expect the SNR at those spatial frequencies to be low (even for optimal lenses), meaning that a large number of samples might have to be effectively averaged in order to have robust estimates.

I am inspired here by things such as interlacing (where something approaching 480/576 spatial lines can be recreated from a signal that is 240/288 fields, alternately offset by 1/2, provided that there is no other movement). If interlacing had used "Nyquistian perfect" spatial pre-filtering, then there would be no aliasing information from which to recreate the higher-resolution image.

-h
Title: Re: Slanted edge results delayed
Post by: Jim Kasson on May 12, 2014, 11:42:50 am
I just started reading this thread, so I won't comment just yet, but I noticed in the Imatest chart that you used a Gamma 2.20, where it is supposed to be 0.454545 (or 0.5, which is also close enough for most comparisons). Just wanted to warn at this stage, before I got through reading everything said so far.

Good catch, Bart. I've no idea how that got there, but I fixed it and reran Imatest (http://www.imatest.com) on a series of slanted edge pictures with lambda = 550 nm, and a diffraction-limited f/8 lens with Q starting at 1 and increasing by a factor of sqrt(2) to 5.7 (hey, why is it that our lenses aren't labeled f/5.7 instead of f/5.6?).

Here is the same MTF50 and MTF30 data plotted against three ways of looking at sensel density:

First vs Q:

(http://www.kasson.com/ll/MTF3050cyph.PNG)

Then vs picture height in pixels for a 36x24mm sensor:

(http://www.kasson.com/ll/MTF3050cyphvph.PNG)

And vs sensor megapixels for a 36x24mm sensor:

(http://www.kasson.com/ll/MTF3050cyphvmp.PNG)

Here is the Q=1 Imatest plot, showing plenty of aliasing:

(http://www.kasson.com/ll/SlantedEdgeDeveloped.tif1000_YA6_01_sfr.png)

And here's the Q=2 Imatest plot, showing much less as the lens diffraction acts as a pretty good AA filter:

(http://www.kasson.com/ll/SlantedEdgeDeveloped.tif2000_YAL9_01_sfr.png)

Note that the AA filtering isn't perfect, as it would be in a monochromatic sensor, one with no CFA.

I don't know why Imatest shows different contrast in the thumbnails of the two images; I checked the files and the contrast is identical.
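
For reference, the slanted-edge method itself is simple enough to sketch (Python with numpy). This is just the bare ESF-to-LSF-to-FFT chain on an already extracted, linear, monochrome edge crop, not a re-implementation of Imatest, and it assumes the slant is steep enough to populate every oversampled bin:

import numpy as np

def slanted_edge_mtf(crop, oversample=4):
    # crop: linear, monochrome array containing a near-vertical slanted edge
    rows, cols = crop.shape
    x = np.arange(cols)
    # sub-pixel edge location on each row from the centroid of the horizontal gradient
    grad = np.abs(np.diff(crop, axis=1))
    centers = (grad * (x[1:] - 0.5)).sum(axis=1) / grad.sum(axis=1)
    slope, intercept = np.polyfit(np.arange(rows), centers, 1)
    # distance of every pixel from the fitted edge, binned into an oversampled ESF
    dist = x[None, :] - (slope * np.arange(rows)[:, None] + intercept)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.bincount(bins.ravel())
    esf = np.bincount(bins.ravel(), weights=crop.ravel()) / np.maximum(counts, 1)
    lsf = np.diff(esf) * np.hanning(esf.size - 1)          # derivative of the ESF, windowed
    mtf = np.abs(np.fft.rfft(lsf)); mtf /= mtf[0]
    freq = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)   # cycles per original pixel
    return freq, mtf

# usage: freq, mtf = slanted_edge_mtf(edge_crop); mtf50 = freq[np.argmax(mtf < 0.5)]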

Jim
Title: A graphical look at aliasing
Post by: Jim Kasson on May 12, 2014, 12:46:07 pm
Here's a graph of the MTF at the Nyquist frequency for the slanted edge charts shown in the above post:

(http://www.kasson.com/ll/MTFNyquistcypx.PNG)

The two right-most points are suspect (the Q=5.7 one obviously so), since the resolution of the simulated camera exceeds the resolution of my test chart for those.

The curves here seem to correlate the visual results with the ISO 12233 charts; Q = 2.8 is enough to get rid of almost all false color artifacts, although not quite all aliasing artifacts -- look at a blowup of the ISO 12233 chart at Q=2.828:

(http://www.kasson.com/ll/12233Q2828aliasing.JPG)

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 14, 2014, 02:58:25 pm
3) Its easy to find out. Do a test of your lens at a variety of aperture settings. A good zoom, probably f8. A prime probably f5.6, a top prime f4, the Otus f2. Further stopping down will give less detail. Wider lens aberrations damage the output.

I tested the Otus, and, as my informal testing had led me to believe, f/2 is not the best aperture. f/5.6 is. Whether any f-stop except f/16 is diffraction-limited will take some more work, but diffraction is clearly adversely affecting f/11.

(http://www.kasson.com/ll/a7reotusfseries.PNG)

10 shots at each aperture. Moderate contrast target. a7R, ISO 100, 1/10 sec, trailing curtain synch, Paul Buff Einstein illumination (light output varied to equalize exposure), RRS CF tripod, Arca Swiss C1 cube. Developed in Lr, sharpening and noise reduction turned off.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 14, 2014, 06:48:42 pm
Whether any f-stop except f/16 is diffraction-limited will take some more work, but diffraction is clearly adversely affecting f/11.

Here's a comparison of the actual Sony a7R/Otus 55/1.4 slanted edge MTF vs the results of my camera simulator modeling a camera of the same pixel pitch with a diffraction limited perfect lens at 550 nm illumination.

(http://www.kasson.com/ll/diffvsOtus.PNG)

It looks like the Otus is diffraction-limited at f/11, and pretty close at f/8.
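
For comparison, the theoretical MTF of a diffraction-limited lens with a circular aperture is easy to compute (a sketch in Python with numpy; 103 lp/mm is roughly the Nyquist frequency of a 36 MP full-frame sensor, as mentioned earlier in the thread):

import numpy as np

def diffraction_mtf(f_cyc_per_mm, wavelength_mm=0.00055, f_number=8.0):
    # MTF of an ideal (diffraction-limited) lens with a circular aperture
    fc = 1.0 / (wavelength_mm * f_number)                  # cutoff frequency, cycles/mm
    s = np.clip(np.asarray(f_cyc_per_mm, float) / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(s) - s * np.sqrt(1.0 - s * s))

nyquist = 103.0                                            # lp/mm for a ~4.9 um pitch
for n in (4, 5.6, 8, 11, 16):
    print(n, diffraction_mtf(nyquist, f_number=n))         # diffraction MTF at the sensor's Nyquist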

I'll do a complete write-up on this, but I've been programming all afternoon, and just want to get the results up now.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 14, 2014, 09:18:10 pm
Here's a comparison of the actual Sony a7R/Otus 55/1.4 slanted edge MTF vs the results of my camera simulator modeling a camera of the same pixel pitch with a diffraction limited perfect lens at 550 nm illumination.

(http://www.kasson.com/ll/diffvsOtus.PNG)

It looks like the Otus is diffraction-limited at f/11, and pretty close at f/8.

I'll do a complete write-up on this, but I've been programming all afternoon, and just want to get the results up now.

Jim

I don't understand that. Are you saying resolution will not increase at apertures wider than f11?
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 14, 2014, 09:26:17 pm
I don't understand that. Are you saying resolution will not increase at apertures wider than f11?

No, the MTF plots show resolution increasing as the lens is opened  up all the way to f/5.6, but not as much as a diffraction-limited lens at apertures wider than f/11.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 14, 2014, 09:28:49 pm
My understanding of the term diffraction limited is the point at which diffraction reduces resolution. It would be the peak, first derivative = 0. Are you using a different (engineering) definition? The graph of the Otus looks wrong either way. I find it hard to believe it maxes out at f/5.6. I read a review that said f/2.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 14, 2014, 09:33:22 pm
Cross-posted. OK, so you are saying the perfect lens, with only diffraction, would be the higher curve; a real lens, with other components in the PSF, is blending the aberrations with diffraction.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 14, 2014, 11:30:26 pm
My understanding of the term diffraction limited is the point at which diffraction reduces resolution. It would be the peak, first derivative = 0. Are you using a different (engineering) definition?

By diffraction-limited, I mean that the resolution is essentially determined by diffraction. All lens defects (other than diffraction, if you consider that to be a defect), are not material. The traditional way to say that is, "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited."

The graph of the Otus looks wrong either way. I find it hard to believe it maxes out at f/5.6. I read a review that said f/2.

Please cite the review.

Thanks,

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 14, 2014, 11:32:29 pm
Cross-posted. OK, so you are saying the perfect lens, with only diffraction, would be the higher curve; a real lens, with other components in the PSF, is blending the aberrations with diffraction.

That's what I'm saying, although I would not have said it quite that way.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 15, 2014, 12:56:52 am
Hi,

I am pretty sure Otus maxes somewhere around f/4, see figure below from Lensrentals
(http://www.lensrentals.com/blog/media/2014/04/zeissaper.jpg)
DxO-mark shows similar data.

My understanding is that the classic definition is that on a diffraction limited lens the first Airy ring can be clearly seen.

Note that Lensrentals uses LP/IH and not the more usual LW/IH, which gives half the figure and also that they don't use sharpening.

Best regards
Erik


By diffraction-limited, I mean that the resolution is essentially determined by diffraction. All lens defects (other than diffraction, if you consider that to be a defect), are not material. The traditional way to say that is, "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited."

Please cite the review.

Thanks,

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 15, 2014, 10:18:10 am
I am pretty sure Otus maxes somewhere around f/4, see figure below from Lensrentals. DxO-mark shows similar data.

Erik, the LR curve shows on-axis performance of the Otus at f/4 slightly better than f/5.6. My testing shows it slightly worse. The difference is not large, and could be due to:

statistical variation (there are no confidence lines or bars on the LR curves)
different contrast targets (I used a contrast of 4, in Imatest terms)
different lighting spectra (I used a studio strobe)
sample variations in the lens
not using the same shutter speed for all tests (I varied the light level. If someone didn't do that, but compensated for exposure as the f-stop is opened up by increasing shutter speed, that could reduce vibration on the wider-aperture test images)
a different sensor (I'm assuming that the LR test images were made with a D800E, although you didn't say so. The D800E has a strange "non-AA" filter. I believe the a7R has none)
many other things

I don't consider the differences in the two test results to be particularly important.

My understanding is that the classic definition is that on a diffraction limited lens the first Airy ring [you mean the zero, I believe] can be clearly seen.

I don't have an optical bench. If I did, that's the test I would have performed; it's the gold standard. A 36 MP sensor can't clearly resolve the Airy pattern even at f/16. So I used a more indirect way to get the answer. I thought it was pretty clever, but I've always been a big fan of my own ideas. :)

Note that Lensrentals uses LP/IH and not the more usual LW/IH, which gives half the figure and also that they don't use sharpening.

We both use Imatest. I didn't use sharpening either. I took the readouts in cycles/pixel, and converted them to cycles/picture height for the full frame. As has been discussed in this thread, a line pair is a cycle, and line pairs/PH should be close to (exactly?) cycles/PH.  

But the LR numbers are way too low for a 36 MP sensor, which is 4912 pixels high and can theoretically resolve half that many line pairs. I think I know what happened. Imatest (http://www.imatest.com/docs/sfr_instructions2/) reports results in LW/PH based on the height of the sample image it sees. I always feed it a cropped image, and thus any readout that is per picture height is looking at the wrong height. It's possible that the LR folks didn't correct for that, figuring, quite properly, that the absolute numbers weren't important, just the relative ones.
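
For anyone following along, the conversion itself is trivial (plain Python; the key point is that the picture height has to be the full sensor height, not the height of the crop fed to Imatest):

def to_picture_height(mtf50_cy_per_px, picture_height_px=4912):
    # convert an Imatest readout in cycles/pixel to LP/PH and LW/PH
    lp_per_ph = mtf50_cy_per_px * picture_height_px        # line pairs (cycles) per picture height
    lw_per_ph = 2.0 * lp_per_ph                            # line widths per picture height
    return lp_per_ph, lw_per_ph

print(to_picture_height(0.30))      # 0.30 cy/px on a 4912-pixel-high sensor -> (1473.6, 2947.2)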

Erik, have you seen any claims that the Otus is the sharpest at f/2?

Jim

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 15, 2014, 11:14:09 am
Jim,

I guess I responded to the wrong posting… My response was directed more at "Fine_Art".

Anyway, from what I have seen the Otus peaks somewhere between f/4 and f/5.6.

The Lensrentals data is interesting, but the data they present is much lower than the data normally reported. I guess they use DC-raw for conversion. I felt it needed to be pointed out that they use LP (line pairs) and not LW (line widths) which may be more common.

I think there is some variation in Imatest results depending on target and other factors. My guess is that demosaic technique also plays a role. In one case I have seen that LR and Capture One were very close but RawTherapee gave different Imatest results. I also found that another experienced and knowledgeable poster got slightly better results with ACR than what I got in Lightroom on the same image. Maybe there are improvements in ACR that are still not in LR 5.4?

I am thankful you started this thread and also for sharing the findings of your research.

Best regards
Erik



Erik, the LR curve shows on-axis performance of the Otus at f/4 slightly better than f/5.6. My testing shows it slightly worse. The difference is not large, and could be due to:

statistical variation (there are no confidence lines or bars on the LR curves)
different contrast targets (I used a contrast of 4, in Imatest terms)
different lighting spectra (I used a studio strobe)
sample variations in the lens
not using the same shutter speed for all tests (I varied the light level. If someone didn't do that, but compensated for exposure as the f-stop is opened up by increasing shutter speed, that could reduce vibration on the wider-aperture test images)
a different sensor (I'm assuming that the LR test images were made with a D800E, although you didn't say so. The D800E has a strange "non-AA" filter. I believe the a7R has none)
many other things

I don't consider the differences in the two test results to be particularly important.

I don't have an optical bench. If I did, that's the test I would have performed; it's the gold standard. A 36 MP sensor can't clearly resolve the Airy pattern even at f/16. So I used a more indirect way to get the answer. I thought it was pretty clever, but I've always been a big fan of my own ideas. :)

We both use Imatest. I didn't use sharpening either. I took the readouts in cycles/pixel, and converted them to cycles/picture height for the full frame. As has been discussed in this thread, a line pair is a cycle, and line pairs/PH should be close to (exactly?) cycles/PH.  

But the LR numbers are way too low for a 36 MP sensor, which is 4912 pixels high and can theoretically resolve half that many line pairs. I think I know what happened. Imatest (http://www.imatest.com/docs/sfr_instructions2/) reports results in LW/PH based on the height of the sample image it sees. I always feed it a cropped image, and thus any readout that is per picture height is looking at the wrong height. It's possible that the LR folks didn't correct for that, figuring, quite properly, that the absolute numbers weren't important, just the relative ones.

Erik, have you seen any claims that the Otus is the sharpest at f/2?

Jim


Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 15, 2014, 11:45:40 am
The Lensrentals data is interesting, but the data they present is much lower than the data normally reported. I guess they use DC-raw for conversion. I felt it needed to be pointed out that they use LP (line pairs) and not LW (line widths) which may be more common.

I think there is some variation in Imatest results depending on target and other factors. My guess is that demosaic technique also plays a role. In one case I have seen that LR and Capture One were very close but RawTherapee gave different Imatest results. I also found that another experienced and knowledgeable poster got slightly better results with ACR than what I got in Lightroom on the same image. Maybe there are improvements in ACR that are still not in LR 5.4?

Erik, you are absolutely right that the slanted edge MTF results are sensitive to the demosaicing algorithm. Even naming the demosacing program doesn't completely specify the algorithm -- DCRAW offers four algorithms, from bilinear interpolation to adaptive homogeneity-directed interpolation. Some of these algorithms seem to do sharpening, which makes the absolute MTF get larger.

We should remember that the absolute MTF numbers are not the important thing in an f-stop series like this -- there are too many variables that affect them. The relative numbers seem to hold up over a range of demosaicing techniques, for which I am grateful.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 15, 2014, 09:42:28 pm
I did a brief search for the review, could not spot it on the first 2 pages of returns.

Erik's f4 is more likely. I will go with that.

This review has a higher figure
"I used Imatest to check the sharpness of the Otus using a standard SFRPlus test chart. At f/1.4 it’s an outstanding performer, scoring 3,015 lines per picture height on a center-weighted test. That’s much, much higher than the 1,800 lines we require for a photo to be called sharp, and impressively the extreme edges of the frame are just as sharp as the center. It only gets better as you stop down: 3,265 lines at f/2, 3,602 lines at f/2.8, 3,829 lines at f/4, 3,899 lines at f/5.6, and 3,951 lines at f/8. Distortion is a nonissue; the lens shows 0.7 percent barrel distortion, but that’s barely relevant in field conditions. There’s no evidence of falloff at the corners; the vignette in the shot below was added in Lightroom.
Read more at http://www.itreviews.com/zeiss-otus-1-455/#LScdCVIPGVimjE26.99"

It does not really matter in the real world; we will not give up DoF for a few more lines of resolution.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 16, 2014, 02:24:35 am
Hi,

Just as a small point, the reason the Otus is so expensive is that it pushes performance at large apertures. Most decent lenses are pretty sharp at medium apertures.

I include some MTF data from DxO mark, comparing Nikon 50/1.8, Sigma 50/1.4 EX (not the A-series) and Zeiss Otus

The Zeiss really shines at maximum aperture, but stopping down to f/2.8 removes much of the advantage. At f/8 there is not much between the lenses.

It may be argued that buying an Otus for shooting at medium apertures makes little sense. In addition, the Otus has an awful lot of glass, which may make it more sensitive to flare and ghosting.

I would argue for making well-corrected medium-aperture lenses, say f/2.8 designs.

Regarding MTF data, one needs to consider what is measured and how it is presented.

For instance, for lens testing I normally use Lightroom 5.4 with no sharpening, Photozone uses default sharpening, some may use out of camera JPEGs and some may even use two stage sharpening.

For SQF evaluation I normally use Landscape sharpening in Lightroom 5.3.

Best regards
Erik

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 16, 2014, 10:23:54 am
I include some MTF data from DxO mark, comparing Nikon 50/1.8, Sigma 50/1.4 EX (not the A-series) and Zeiss Otus

Erik, it looks to me like the Otus tests were made on a D800 with its AA filter, and the other two lenses were tested on a D800E with no (well, OK, a now-you-see-it-now-you-don't) AA filter.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 16, 2014, 10:31:57 am
It may be argued, that buying an Otus for shooting medium apertures makes little sense.

Erik, that's not the way it appears to me. I find the contrast and drawing amazing even at f/11. If I were you, I'd not pay much attention to that, since I'm not backing it up with numbers. However, you might want to read what a real expert has to say on the subject.

http://www.luminous-landscape.com/forum/index.php?topic=89877.0

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Manoli on May 16, 2014, 12:09:24 pm
Jim,

I know you've compared the LEICA 50 ‘LUX ON THE M240 v SONY 55 FE ON THE SONY A7, but out of interest have you compared the Leica 50'LUX against the Zeiss Otus ?

Manoli
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 16, 2014, 12:24:43 pm
I know you've compared the LEICA 50 ‘LUX ON THE M240 v SONY 55 FE ON THE SONY A7, but out of interest have you compared the Leica 50'LUX against the Zeiss Otus ?

The 50 'lux on the a7R suffers from corner smearing (http://blog.kasson.com/?p=4141), so I lost interest in testing it on that camera. I can't use the Leica lens on the D800E. I suppose I could test both on the M240, but it would take me half a day, and would anybody really use the Otus on the M240?

Jim

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Manoli on May 16, 2014, 12:36:59 pm
I suppose I could test both on the M240, but it would take me half a day, and would anybody really use the Otus on the M240?

No, you're right, although when I (very) briefly tested the 50 on the a7R I didn't find any noticeable corner smearing, unlike many of the wide-angles, which are, IMO, next to unusable, unfortunately. I have a strange hunch, though, that Leica may well produce an M-EVF competitor to the a7 series by Photokina. We'll see ...

All Best,
Manoli

ps
Thanks for the link.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 16, 2014, 12:41:57 pm
Hi,

Thanks for the info. For me it is a bit surprising, but I have no Otus, so it is not personal experience. Thanks for pointing out the issue with the D800/D800E; I missed it entirely.

Have you any idea why the Otus would be less affected by diffraction than other lenses? Of course, it has a better starting point. The other question may be how far a simpler lens is from the diffraction limit at medium apertures.

Best regards
Erik


Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Bart_van_der_Wolf on May 16, 2014, 01:09:13 pm
Hi,

Thanks for the info. For me it is a bit surprising, but I have no Otus, so it is not personal experience. Thanks for pointing out the issue with the D800/D800E; I missed it entirely.

Have you any idea why the Otus would be less affected by diffraction than other lenses? Of course, it has a better starting point. The other question may be how far a simpler lens is from the diffraction limit at medium apertures.

Hi Erik,

It's not that the Otus somehow has less diffraction; it just suffers less from it. One needs to understand that diffraction is one of several sources of blur, residual lens aberrations being another lens-related source. These blurs combine: the PSFs of diffraction and of aberrations convolve, and the corresponding MTFs multiply. That means that the lesser of the two MTFs dominates the resulting lens MTF.

It is not just that: the system MTF also depends on the sensel aperture and the sampling density, and there may be an OLPF involved as well. All of these (plus an IR filter and the sensor cover glass) have individual MTFs that combine into a total system MTF. So the lens MTF will be modified by the rest of the system.

All in all, the better the lens, the better the total performance will be, and the Otus does not leave much to be desired in that equation. That quality will also survive, to a large extent, into the OOF regions, so the out-of-focus rendering may look a bit more pleasing, and the image may be easier to restore with deconvolution sharpening, QED ...
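If it helps to see that multiplication written out, here is a minimal sketch in Matlab. The component models are textbook approximations chosen for illustration (ideal circular-aperture diffraction, a crude Gaussian stand-in for residual aberrations, and a 100% fill-factor sensel aperture), not measurements of any particular lens or sensor.

lambda = 0.55e-6;          % wavelength in meters (green light)
N      = 5.6;              % f-number
pitch  = 4.88e-6;          % sensel pitch in meters
f      = linspace(0, 1/(2*pitch), 200);    % spatial frequency in cycles/meter, up to Nyquist

fc      = 1/(lambda*N);                    % diffraction cutoff frequency
x       = min(f/fc, 1);
mtfDiff = (2/pi) * (acos(x) - x.*sqrt(1 - x.^2));   % ideal circular-aperture diffraction MTF

mtfAberr = exp(-(f/(0.8*fc)).^2);          % crude Gaussian stand-in for residual aberrations

mtfPixel    = abs(sin(pi*pitch*f) ./ (pi*pitch*f)); % 100% fill-factor sensel aperture (sinc)
mtfPixel(1) = 1;                                    % limit at f = 0

mtfSystem = mtfDiff .* mtfAberr .* mtfPixel;        % the component MTFs multiply
plot(f*1e-3, [mtfDiff; mtfAberr; mtfPixel; mtfSystem]);
legend('diffraction', 'aberration (model)', 'pixel aperture', 'system');
xlabel('cycles/mm'); ylabel('MTF');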

Cheers,
Bart
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 24, 2014, 10:42:49 am
Jim,

I have rechecked and you are right. When I compare all tests on the D800E, the Otus still has what may be a significant advantage at medium apertures. Sorry for jumping to conclusions!

Best regards
Erik

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: joneil on May 25, 2014, 11:28:18 am
Can I make a point that's a bit off-tangent here?

I tried out a couple of Zeiss Otus lenses on my D800 this past week, and compared them to my "regular" Zeiss ZF2 lenses on the same body. Came home, pulled them up on screen, looked at them closely. No math or other technical points, just my gut-feeling reaction here:

1) Pros
- build quality is second to none;
- optically, yes, I think better than the "regular" Zeiss lenses;

2) Cons
- while optically better than "regular" Zeiss lenses, that is like saying brand A of sports car is better than brand B because brand A does the quarter mile a tenth of a second faster than brand B.   IMO, the regular Zeiss lenses are in most cases so far ahead of many other lenses, well, it depends on what you are looking for;
- while the build quality is better, they are huge and heavy lenses compared to the "regular" Zeiss lenses. I mean, in real-world use, if I am going for a 2- or 4-hour hike in the bush, hauling around a lens that is twice the weight (or more) of its closest equivalent - I dunno. It is not money alone; there is a reason I now use carbon fibre tripods and monopods as opposed to aluminium ones. I am just saying, do people think about these situations outside the lab?
- cost. I can almost buy three ZF lenses for the price of one Otus. I just don't have that kind of money.

So my apologies if I have offended anyone, especially those who own or plan to own an Otus. Wonderful lens; I am very impressed by it. All I am saying is, no matter how good it is, think about real-world use, not just charts and diagrams.

One more weird thing. I was testing these lenses at the big camera show in Toronto this past weekend, hosted and put on by Henrys. The main "theme" of this year's expo was improving your smartphone photography. Even the local media was in on it. New software for your iPhone, new lens adapters for both telephoto and macro shots on your iPhone, etc., etc. Everything was about how good, how high the quality of the iPhone and other smartphones was.

The Zeiss booth, where I was playing with the Otus lenses, was right beside the area promoting "the art of phonography," as they were calling it.

    Irony, eh?
Title: An Otus simulation with no AA filter
Post by: Jim Kasson on May 25, 2014, 11:58:01 am
I ran a more complete Otus sim versus f-stop and pitch yesterday.

Here's a 3D look:

(http://www.kasson.com/ll/simotus3d.PNG)

Here it is in two dimensions with f-stop as the horizontal axis:

(http://www.kasson.com/ll/otus2d1.PNG)

Now we can see where improving sensor resolution stops helping; there's not much improvement as we go from a pitch of 2.4 um to one of 2 um.

And here it is in two dimensions, this time with sensel pitch as the horizontal axis:

(http://www.kasson.com/ll/otus2d2.PNG)

Making the pitch finer doesn’t help much at all at f/16.

There are curves for diffraction-limited lenses, both with and without a simulated 4-way beam-splitter AA filter, here (http://blog.kasson.com/?p=5887).

Jim

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 25, 2014, 12:16:33 pm
I tried out a couple of Zeiss Otus lenses on my D800 this past week, and compared them to my "regular" Zeiss ZF2 lenses on the same body. Came home, pulled them up on screen, looked at them closely. No math or other technical points, just my gut-feeling reaction here:

1) Pros
- build quality is second to none;
- optically, yes, I think better than the "regular" Zeiss lenses;

2) Cons
- while optically better than "regular" Zeiss lenses, that is like saying brand A of sports car is better than brand B because brand A does the quarter mile a tenth of a second faster than brand B.   IMO, the regular Zeiss lenses are in most cases so far ahead of many other lenses, well, it depends on what you are looking for;
- while the build quality is better, they are huge and heavy lenses compared to the "regular" Zeiss lenses. I mean, in real-world use, if I am going for a 2- or 4-hour hike in the bush, hauling around a lens that is twice the weight (or more) of its closest equivalent - I dunno. It is not money alone; there is a reason I now use carbon fibre tripods and monopods as opposed to aluminium ones. I am just saying, do people think about these situations outside the lab?
- cost. I can almost buy three ZF lenses for the price of one Otus. I just don't have that kind of money.

I know I'll get flamed for this, but I can't help myself. For me, the comparison is the Otus 55 with an 80 or 100 mm medium format lens. OK, it's an old MF camera, but when I use the Otus on the a7R or the D800E, I get sharper results in general than I do with the Hassy 80mm f/2.8 or the 120mm f/4 on an H2D-39 -- essentially the same resolution. The Otus is heavier and bigger than the 80mm, and smaller and lighter than the 120. On the camera, the Hassy is bigger and heavier with the 80mm than with the Otus on either body. The Hassy 80 costs three-quarters of what B&H gets for the Otus, and the Otus is two stops faster.

Let's say that Zeiss comes out with an Otus 85mm and an Otus 35mm, and they both cost about $4K. Let's further say that Sony and/or Nikon introduce 56 MP cameras (full frame at NEX-7 sensel pitch) at about $7K. So now, for under $20K, you can buy a kit that will challenge cameras built around the Sony 33x44mm sensor, as well as the Hassy H5D-40 and H5D-50 and the Phase One equivalents. Yes, there are advantages to the larger sensors, but there are disadvantages to the larger cameras as well. With Pentax's recent announcement, there are signs of price erosion in the MF market, and a bigger line of Otus lenses could provide a different kind of price competition.

It's an exciting time to be a photographer.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Fine_Art on May 25, 2014, 01:30:55 pm
Prices have to come down a lot further for most people to care. An Otus 50 shot will not beat a 4 shot stitch with a good 85 or a 100 macro. Pushing the engineering envelope is great. For a profitable commercial product you need winning value per dollar. At least from people that use the product rather than buy status symbols.
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Telecaster on May 25, 2014, 02:47:21 pm
I agree with Jim that the Otus + D800e/A7r makes for a medium format system competitor. If that's not what you want or need then IMO you're better off with the ZF 50/2 or (on the A7r) the Sony/Zeiss FE 55/1.8. Or maybe even the new Sigma 50/1.4. These are relatively affordable, compact & unobtrusive lenses. I love that the Otus 55mm exists, and I hope there are more of 'em coming, but that doesn't mean I feel obliged to own one.   ;)

-Dave-
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: ErikKaffehr on May 25, 2014, 03:02:25 pm
Hi,

I would agree, mostly.

My guess is that 50+ MP 135 is around the corner.

Erik

Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 25, 2014, 05:41:49 pm
Prices have to come down a lot further for most people to care.

I agree. I'll go even further. Most people won't care no matter how far down the prices come. Most people are satisfied with the images produced by their cellphones. Of the photographers (if you buy a violin, you own a violin; if you buy a camera, you are a photographer) remaining, most will make prints at 4x6 inches or smaller, or just be happy with screen res, so they won't care at any price, either. Of the photographers left, most won't care at any price because they demand autofocus. When you subtract out that group, you're left with a cohort that could easily be turned off by price/size/weight. Take them out of the mix, and there is a group whose members care  passionately about image quality, and are willing, under the right conditions, to dig deep into their wallets.

An Otus 50 [55?] shot will not beat a 4 shot stitch with a good 85 or a 100 macro.

Yep. Stitching is great if your subject and the lighting will hold still long enough. But sometimes you need a single capture. Or you've already committed to multiple captures for HDR or focus stacking, and adding stitching is just too complicated and error prone.


Pushing the engineering envelope is great. For a profitable commercial product you need winning value per dollar. At least from people that use the product rather than buy status symbols.

In photography, like so many other things (audio, video, musical instruments, automobiles, clothing, wine, etc), the amount of performance received for your money is higher at the lower end of the market than the higher. That doesn't mean that companies selling at the higher end of the market can't be commercial successes.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Dave Ellis on May 25, 2014, 06:31:48 pm
Hello Jim

I'm a newcomer to this site but could I say that I really appreciate the technical posts that people like yourself, Erik and Bart provide.

I have a couple of questions relating to your model :


In general terms, how are you modeling the lens aberrations ?

I'm struggling to see how pixel pitch affects mtf. As I understand the basics of sampling theory, the pixel pitch determines the spatial sampling frequency and this must be at least twice the highest frequency component of the signal for accurate reproduction. If not, artefacts will be produced for those frequency components of the signal that are above the Nyquist frequency. But how does this affect the mtf of the lens/camera system? I appreciate that pixel size and fill factor affect mtf as a result of non-point sampling and I assume that de-mosaicing affects mtf also. But are these the only factors or am I missing some basic concept here?




Thanks
Dave
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 25, 2014, 07:05:05 pm
In general terms, how are you modeling the lens aberrations ?

Welcome to Lula, Dave. The short answer to your question is, "Not very well." I'm modeling the Otus 55 mm f/1.4 on-axis performance with a double application of a pillbox (circular) kernel with radius equal to 0.5 um + 8.5 um / f-stop to a target image that's already been blurred by diffraction. This yields a tolerable fit to the performance of the sample of the Otus that I'm testing against. However, there are a lot of uncontrolled moving parts. The real images are demosaiced by Lightroom, and the synthetic ones with bilinear interpolation, and it looks to me like Lr sharpens a bit, which kicks the MTF up. The real images are subject to focus errors and other unmodeled defects. I figure it's not all that important if the task is to get at general concepts relating sensor resolution to lens resolution.
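If it helps, here's a minimal sketch of that kernel construction in Matlab, assuming the Image Processing Toolbox for fspecial and imfilter. The oversampling factor, the file name, and the numbers are illustrative placeholders, not the exact values in my simulator.

fstop      = 5.6;
pitch_um   = 4.77;                         % sensel pitch in micrometers
oversample = 16;                           % simulation pixels per sensel (illustrative)
simPitch   = pitch_um / oversample;        % micrometers per simulation pixel

radius_um = 0.5 + 8.5/fstop;               % pillbox radius in micrometers
radius_px = radius_um / simPitch;          % radius in simulation pixels

k = fspecial('disk', radius_px);           % circular (pillbox) averaging kernel

% Apply it twice to a target that has already been blurred by diffraction
% ('diffractionBlurredTarget.tif' is a hypothetical file name):
target  = im2double(imread('diffractionBlurredTarget.tif'));
blurred = imfilter(imfilter(target, k, 'replicate'), k, 'replicate');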

I'm struggling to see how pixel pitch affects mtf. As I understand the basics of sampling theory, the pixel pitch determines the spatial sampling frequency and this must be at least twice the highest frequency component of the signal for accurate reproduction. If not, artifacts will be produced for those frequency components of the signal that are above the Nyquist frequency. But how does this affect the mtf of the lens/camera system? I appreciate that pixel size and fill factor affect mtf as a result of non-point sampling and I assume that de-mosaicing affects mtf also. But are these the only factors or am I missing some basic concept here?

No, I think you've got it. One subtlety worth exploring is the way that the frequency part of MTF is reported. If you look at cycles/pixel, you'd say that making the sensor pitch lower makes the MTF worse. But I'm looking at a measure that I think is more relevant to photographers, cycles per picture height, which is cycles/pixel times the vertical dimension of the image in pixels. I'm assuming landscape orientation and a 24x36mm sensor.

If you look at my blog, you can see that I'm dancing around the aliasing issue a bit. I'm looking for a single number that I can derive from slanted-edge MTF testing. Some people, including the well-respected Imatest folks, look at the MTF at the Nyquist frequency (half the sampling frequency). I think that's better than nothing. I'm playing around with the sum of all the spatial frequency energy between the Nyquist frequency and the sampling frequency to see if that's a good metric. No conclusions yet, though.
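In case it isn't clear what I mean by that sum, here's a minimal sketch of the candidate metric. The MTF curve below is a made-up placeholder; in practice freq (in cycles/pixel) and mtf would come out of the slanted-edge analysis, and the variable names are mine, not sfrmat3's or Imatest's.

freq = linspace(0, 1, 201);                 % cycles/pixel, out to the sampling frequency
mtf  = exp(-3*freq);                        % placeholder response, for illustration only

nyq  = 0.5;                                 % Nyquist frequency in cycles/pixel
idx  = freq >= nyq;                         % everything from Nyquist up to the sampling frequency
aliasEnergy  = trapz(freq(idx), mtf(idx))   % area under the MTF curve above Nyquist
mtfAtNyquist = interp1(freq, mtf, nyq)      % the simpler Imatest-style number, for comparison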

That help?

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Dave Ellis on May 25, 2014, 07:51:34 pm
Thanks for the welcome and the comments Jim.

I figure it's not all that important if the task is to get at general concepts relating sensor resolution to lens resolution.

No, I think you've got it. One subtlety worth exploring is the way that the frequency part of MTF is reported. If you look at cycles/pixel, you'd say that making the sensor pitch lower makes the MTF worse. But I'm looking at a measure that I think is more relevant to photographers, cycles per picture height, which is cycles/pixel times the vertical dimension of the image in pixels. I'm assuming landscape orientation and a 24x36mm sensor.


I agree, the precise figures don't really matter, it's the demonstration of the concepts that is important with modeling like this (to improve our understanding of the significance of different factors).

I hadn't picked up on the point that you are using cycles per picture height with the definition cycles/pixel x vertical image height in pixels. I'll have another think about this but it probably answers my question.

Dave
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Dave Ellis on May 26, 2014, 12:35:09 am
Jim

I've had a further think about this and come to the conclusion that the cycles/ph concept is not the issue. I think I have simply been under-estimating the mtf contribution from the non-point sampling and demosaicing. I had it in my mind that they were only small contributors but I've never looked at how their contributions vary with pixel size. Also, I guess they become more significant for higher quality lenses at their sharpest aperture.

Dave
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 26, 2014, 01:50:58 pm
I think I have simply been under-estimating the mtf contribution from the non-point sampling and demosaicing. I had it in my mind that they were only small contributors but I've never looked at how their contributions vary with pixel size. Also, I guess they become more significant for higher quality lenses at their sharpest aperture.

Right you are about non-point sampling. Here's an extreme example with 1% and 100% fill factors:

(http://www.kasson.com/ll/mtf100pct477um.PNG)

(http://www.kasson.com/ll/mtf1pct477um.PNG)

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Dave Ellis on May 26, 2014, 04:12:33 pm
Thanks for that Jim. If you take your figures for, say, f/2.8 and 0.4 cycles/pixel, for 100% fill factor the mtf is 0.45 and for 1% fill factor the mtf is 0.6. The ratio of these figures is 0.75, which I think should be the mtf contribution of the non-point sampling. This appears to be consistent with the sinc-function calculations I've seen elsewhere.
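For the record, here's the sinc arithmetic I'm referring to, assuming a square sensel aperture whose width is the linear fill fraction of the pitch (so 1% area fill corresponds to a 10% linear width). The numbers are just the worked example above, nothing more.

f = 0.4;                                       % spatial frequency in cycles/pixel
w = 1.0;                                       % aperture width as a fraction of the pitch (100% fill)
mtfAperture100 = abs(sin(pi*w*f) / (pi*w*f))   % about 0.76 at 0.4 cycles/pixel

w = 0.1;                                       % 1% area fill -> 10% linear width
mtfAperture1   = abs(sin(pi*w*f) / (pi*w*f))   % about 1.0, i.e. nearly a point sample

% The ratio mtfAperture100 / mtfAperture1 is about 0.76, consistent with the 0.75 above.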

Dave
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on May 26, 2014, 05:35:25 pm
Thanks for that Jim. If you take your figures for, say, f/2.8 and 0.4 cycles/pixel, for 100% fill factor the mtf is 0.45 and for 1% fill factor the mtf is 0.6. The ratio of these figures is 0.75, which I think should be the mtf contribution of the non-point sampling. This appears to be consistent with the sinc-function calculations I've seen elsewhere.

Dave

That's good to know, Dave. I'm not working in the frequency domain at all until the MTF numbers come out of the Burns slanted-edge SFR program; I'm doing everything up to then using convolution kernels. Jack Hogan has been doing similar work in the frequency domain. We touch base from time to time, and for the most part our results are consonant.

Jim
Title: A quantitative look at the issue
Post by: Jim Kasson on May 26, 2014, 06:29:17 pm
Here's the result of an attempt to run directly at the topic of this thread. It's a plot of the direction of most rapid improvement in MTF50 ("steepest ascent" is the mathematical jargon from the world of optimum seeking methods), with the length of the arrows proportional to the slope of the MTF50 curve in the direction of steepest ascent, all plotted for a diffraction-limited lens on a Bayer CFA camera with no AA filter for f-stops of f/2.8 to f/16 and sensel pitches of 2 um to 5.7 um:

(http://www.kasson.com/ll/quiverdiffltdnoAA.PNG)

Sensel pitch in micrometers (um) is the vertical axis. F-stop is the horizontal one. You can see that for pixel pitches of 4.7 um and up, except at f/16, the lines of steepest ascent all point in the direction of greater sensor resolution. As the sensor resolution goes up and we get lower on the graph, the direction of the arrows on the left side of the graph begins to point more and more to the left, indicating that the easiest way to gain MTF50 is to open up the (perfect) lens.

The direction of the arrows is unfortunately a function of the scaling chosen, but at least it's a quantitative way to look at the sensor resolution vs lens resolution question.
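For anyone who wants to reproduce this kind of plot, here's a minimal sketch of the quiver construction from a gridded MTF50 surface. The surface below is a made-up placeholder, not my simulation output, and the grid values are illustrative.

fstops  = 2.8 * 2.^((0:5)/2);             % f/2.8 ... f/16 in whole stops (illustrative grid)
pitches = linspace(2, 5.7, 6);            % sensel pitch in micrometers
[F, P]  = meshgrid(fstops, pitches);

mtf50 = 2000 ./ (1 + (F/8).^2) ./ P;      % placeholder MTF50 surface, cycles/PH

[dMdF, dMdP] = gradient(mtf50);           % gradient components along the grid axes
quiver(F, P, dMdF, dMdP);                 % arrows point in the direction of steepest ascent
xlabel('f-stop'); ylabel('sensel pitch (um)');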

More plots here. (http://blog.kasson.com/?p=5920)

Jim
Title: Re: A quantitative look at the issue
Post by: Bart_van_der_Wolf on May 27, 2014, 04:20:42 am
Here's the result of an attempt to run directly at the topic of this thread. It's a plot of the direction of most rapid improvement in MTF50 ("steepest ascent" is the mathematical jargon from the world of optimum seeking methods), with the length of the arrows proportional to the slope of the MTF50 curve in the direction of steepest ascent, all plotted for a diffraction-limited lens on a Bayer CFA camera with no AA filter for f-stops of f/2.8 to f/16 and sensel pitches of 2 um to 5.7 um:

(http://www.kasson.com/ll/quiverdiffltdnoAA.PNG)

Hi Jim,

Nice way of looking at it. While I understand the MTF50 metric of a lens/sensor combination as a general indication of perceived contrast/resolution (although it depends on subsequent magnification), I do have a slight concern with the MTF50-only metric. Since the ISO standard specifies that limiting visual resolution corresponds reasonably well with the spatial frequency at MTF10 (or at the Nyquist frequency, whichever is reached first), I wonder if it would be instructive to also show that. After all, with deconvolution sharpening we are able to boost these lower MTF responses to higher levels, and I always sharpen my images, so MTF50 before sharpening becomes a bit arbitrary ...

I think the general conclusion will stay the same (see attachments): denser sampling is more beneficial unless the image is dominated by diffraction blur, and better lenses combined with a narrower sensel pitch benefit even more. Adding MTF10 would allow us to get a bit closer to real life, where sharpening will always be involved.

Of course, modelling the effect of sharpening on MTF in advance is also not easy, as I know from experience; that's why I usually determine it by analyzing the end result after e.g. regularized Richardson-Lucy or Van Cittert deconvolution, or after FocusMagic did its magic. But knowing the MTF10 before restoration already gives an idea whether there is anything salvageable to begin with.

Cheers,
Bart
Title: Re: A quantitative look at the issue
Post by: Jim Kasson on May 27, 2014, 06:28:23 pm
Nice way of looking at it. While I understand the MTF50 metric of a lens/sensor combination as a general indication of perceived contrast/resolution (although it depends on subsequent magnification), I do have a slight concern with the MTF50-only metric. Since the ISO standard specifies that limiting visual resolution corresponds reasonably well with the spatial frequency at MTF10 (or at the Nyquist frequency, whichever is reached first), I wonder if it would be instructive to also show that. After all, with deconvolution sharpening we are able to boost these lower MTF responses to higher levels, and I always sharpen my images, so MTF50 before sharpening becomes a bit arbitrary ... But knowing the MTF10 before restoration already gives an idea whether there is anything salvageable to begin with.

That makes sense, Bart. However, with diffraction-limited lenses, and even with my simulated Otus, MTF10 occurs above the Nyquist frequency at some f-stops at today's full frame pixel pitches. I'll run some sims and post the results.

OBTW, the sims are taking longer as I do enough of them to get 2D gridded results. I'm doing all my convolutions sequentially. I see no reason why I can't just build one big kernel to use at the resolution of the target by centering and adding the kernels for diffraction, lens defects, focus error, AA filter, fill-factor, etc., and then just applying that kernel to the target before sampling. However, I think I saw, somewhere on the 'net, a warning that you shouldn't do that. I don't understand the warning. It's a linear system at the presampling calculations, isn't it?  Any advice on this point? It's going to be a fair amount of work to write the code to add all the kernels together, since they're all different sizes.

Thanks,

Jim
Title: Re: A quantitative look at the issue
Post by: Bart_van_der_Wolf on May 28, 2014, 05:17:55 am
That makes sense, Bart. However, with diffraction-limited lenses, and even with my simulated Otus, MTF10 occurs above the Nyquist frequency at some f-stops at today's full frame pixel pitches. I'll run some sims and post the results.

In my SFR measurements a modulation of 10% is often achievable for the best apertures before reaching the Nyquist frequency. So one either uses MTF10 or Nyquist, whichever is reached first. Maybe the MTF10 sims will look pretty much the same as the MTF50's, only at higher spatial frequencies or LPPH.

Quote
OBTW, the sims are taking longer as I do enough of them to get 2D gridded results. I'm doing all my convolutions sequentially. I see no reason why I can't just build one big kernel to use at the resolution of the target by centering and adding the kernels for diffraction, lens defects, focus error, AA filter, fill-factor, etc., and then just applying that kernel to the target before sampling.

Sequential/cascaded convolutions can indeed be replaced by one compound convolution, which pretty quickly starts looking like a Gaussian (due to the central limit theorem). Here's some more explanation in pretty simple words (http://www.dspguide.com/ch7/2.htm).
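A quick one-dimensional toy example, if you want to see that central-limit behaviour for yourself:

k = ones(1, 5) / 5;                        % 1-D box kernel
c = k;
for n = 1:4
    c = conv(c, k);                        % cascade the kernel with itself
end
plot(c); title('box kernel convolved with itself four times');   % already very Gaussian-looking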

Quote
However, I think I saw, somewhere on the 'net, a warning that you shouldn't do that. I don't understand the warning. It's a linear system at the presampling calculations, isn't it?  Any advice on this point?

I think you might have remembered seeing the warning at David Jacobson's lens tutorial (http://photo.net/learn/optics/lensTutorial#part5) page, which warns against simple MTF multiplications where negative lobes are involved in the original (COC) signals.

Quote
It's going to be a fair amount of work to write the code to add all the kernels together, since they're all different sizes.

I understand. The cascaded kernel will also grow with each additional convolution, but ultimately it should require fewer multiply-and-add operations, because you end up with a single convolution kernel for all image inputs. Also, given that the resulting kernel may turn out to look pretty simple, it would be possible to truncate or window it to a smaller support size, which would also speed things up once the cascaded kernel is available in floating-point precision. And once you find that you can replace the compound result with a separable Gaussian, things can be sped up hugely.

But the difficulty is in finding the fastest way to cascade the kernels. Maybe a symbolic solver like Mathematica can assist, although having to deal with discrete sampled sensel apertures instead of continuous point samples does make life more difficult.

Cheers,
Bart
Title: Re: A quantitative look at the issue
Post by: Jim Kasson on May 28, 2014, 12:47:52 pm
In my SFR measurements a modulation of 10% is often achievable for the best apertures before reaching the Nyquist frequency. So one either uses MTF10 or Nyquist, whichever is reached first. Maybe the MTF10 sims will look pretty much the same as the MTF50's, only at higher spatial frequencies or LPPH.

OK, here are some MTF10 (or MTFNyquist, whichever is higher) plots for a diffraction limited lens from f/2.8 through f/16 and sensel pitch 2 um through 5.7 um. The MTF units are cycles/picture height, assuming a 24x36mm sensor.

In 3D:

(http://www.kasson.com/ll/mtf10diffLtdSurf.PNG)

As a contour plot:

(http://www.kasson.com/ll/mtf10diffLtdCont.PNG)

As a family of curves in 2D:

(http://www.kasson.com/ll/mtf10diffLtd2D.PNG)

And as a quiver plot:

(http://www.kasson.com/ll/mtf10diffLtdQuiver.PNG)

I'll do a run for the simulated Otus, which won't spend so much time with MTF10>MTFNyquist.

Jim
Title: MTF10 results for a simulated Otus 55mm f/1.4
Post by: Jim Kasson on May 29, 2014, 12:00:52 pm
Here are the MTF10 (or MTFNyquist, whichever is higher) plots for a simulated Otus 55mm f/1.4 lens from f/2.8 through f/16 and sensel pitch 2 um through 5.7 um. The MTF units are cycles/picture height, assuming a 24x36mm sensor.

In 3D:

(http://www.kasson.com/ll/mtf10OtusSurf.PNG)

As a contour plot, with sensel pitch across the bottom and f-stop the vertical axis:

(http://www.kasson.com/ll/mtf10OtusCont.PNG)

As a family of curves in 2D. You can see the places where the MTF10 occurs at a greater spatial frequency than the Nyquist frequency on the two coarsest sensel pitches -- they're the flat spots on the curves:

(http://www.kasson.com/ll/mtf10Otus2D.PNG)

And as a quiver plot:

(http://www.kasson.com/ll/mtf10OtusQuiver.PNG)

Going from MTF50 to MTF10 as the metric makes it look like sensor resolution is relatively more valuable than lens resolution.

Jim
Title: Re: MTF10 results for a simulated Otus 55mm f/1.4
Post by: Bart_van_der_Wolf on May 29, 2014, 12:56:19 pm
Going from MTF50 to MTF10 as the metric makes it look like sensor resolution is relatively more valuable than lens resolution.

Hi Jim,

Thanks for the effort. Indeed, it's relatively easier to achieve more resolution by denser sampling, unless lens quality or diffraction throw a spanner in the works. Interesting to see my observations confirmed that the (steepest ascent) improvements also point towards narrower apertures than with MTF50 as a quality metric.

It also seems to confirm that a diffraction pattern diameter of 1.5 - 2 x the sensel pitch is where diffraction kicks in at the highest spatial frequencies, which was of course already explainable by more sensels joining in to generate resolution, even with perfect phase alignment.

Cheers,
Bart
Title: Re: A quantitative look at the issue
Post by: Jim Kasson on May 29, 2014, 01:05:28 pm
The cascaded kernel will also grow significantly with each additional convolution, but ultimately it should produce fewer multiplication+addition operations because you end up with a single convolution kernel for all image inputs. Also, given that the resulting kernel may turn out looking pretty simple, it would be possible to truncate or filter it to a smaller kernel support size which would also speed up things once the cascaded kernel is available in floating point precision. And once you indeed find that you can replace the compound result with a separable Gaussian, things can be sped up hugely.

But the difficulty is in finding the fastest way to cascade the kernels. Maybe a Symbolic solver like Mathematica can assist, although having to deal with discrete sampled sensel apertures instead of continuous point samples does make life more difficult.

Bart, that helps, but I'm still a bit confused. Associativity applies to convolution, right? So, if I can appropriate "*" as the symbol for convolution, and I is an image, and k1, k2, and k3 are kernels, here's what I'm doing now:

FilteredImage = k3 * (k2 * (k1 * I))

But if I did this, I should get the same result:

FilteredImage = I * (k3 * (k1 * k2))

However, when I use discrete convolution filtering on an image, the outer pixels are not accurate, so to make this work, I need to pad the kernels to larger sizes by adding zeros, and then crop the result back down after each convolution.

So I don't need the symbolic approach.

I must be missing something here. Any help is appreciated.

Jim



Title: Re: A quantitative look at the issue
Post by: Bart_van_der_Wolf on May 29, 2014, 02:09:51 pm
Bart, that helps, but I'm still a bit confused. Associativity applies to convolution, right? So, if I can appropriate "*" as the symbol for convolution, and I is an image, and k1, k2, and k3 are kernels, here's what I'm doing now:

FilteredImage = k3 * (k2 * (k1 * I))

But if I did this, I should get the same result:

FilteredImage = I * (k3 * (k1 * k2))

However, when I use discrete convolution filtering on an image, the outer pixels are not accurate, so to make this work, I need to pad the kernels to larger sizes by adding zeros, and then crop the result back down after each convolution.

Hi Jim,

That's correct: the kernels need to be padded, and the resulting kernel grows with each convolution. I think you only crop the final result, to keep full accuracy. Of course, when the final kernel's edges don't add enough to be significant, that kernel can be cropped, or windowed a bit more smoothly.
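In code terms, a minimal sketch of what I mean; the kernels here are arbitrary box filters, purely for illustration, and the sizes are small so the check runs quickly.

k1 = ones(9) / 81;                         % stand-in blur kernels; contents are arbitrary
k2 = ones(5) / 25;
k3 = ones(3) / 9;

kTotal = conv2(conv2(k1, k2, 'full'), k3, 'full');   % compound kernel; its support grows each step

I  = rand(512);                            % placeholder image plane
Ia = conv2(conv2(conv2(I, k1, 'same'), k2, 'same'), k3, 'same');   % sequential convolutions
Ib = conv2(I, kTotal, 'same');                                     % one pass with the compound kernel

% Away from the borders the two results agree to machine precision:
c = 32;                                    % margin larger than the compound kernel support
maxInteriorDifference = max(max(abs(Ia(c:end-c, c:end-c) - Ib(c:end-c, c:end-c))))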

Quote
So I don't need the symbolic approach.

Indeed, it's not needed, but it sometimes allows one to simplify calculations (like separating a Gaussian). In this case, there are so many variables involved that there may be little simplification possible, unless the final kernel can be approximated relatively accurately by a Gaussian. I'm just hoping for some speed-up to reduce the waiting for results, that's all.

Cheers,
Bart
Title: Re: A quantitative look at the issue
Post by: Jim Kasson on May 29, 2014, 02:19:44 pm
That's correct: the kernels need to be padded, and the resulting kernel grows with each convolution. I think you only crop the final result, to keep full accuracy. Of course, when the final kernel's edges don't add enough to be significant, that kernel can be cropped, or windowed a bit more smoothly.

Indeed, it's not needed, but it sometimes allows one to simplify calculations (like separating a Gaussian). In this case, there are so many variables involved that there may be little simplification possible, unless the final kernel can be approximated relatively accurately by a Gaussian. I'm just hoping for some speed-up to reduce the waiting for results, that's all.

Thanks, Bart. That helps. I don't think I'll have to worry about the time necessary to convolve the kernels. Even if I pad the heck out of them, they'll still be a lot smaller than my 12000x8000 pixel target.

Jim
Title: Re: A quantitative look at the issue
Post by: eronald on May 30, 2014, 06:54:17 am
Thanks, Bart. That helps. I don't think I'll have to worry about the time necessary to convolve the kernels. Even if I pad the heck out of them, they'll still be a lot smaller than my 12000x8000 pixel target.

Jim

I think it's much faster to apply the convolution as a product in the Fourier domain because of the FFT speedup. In other words, if F is the filter and I the image, paradoxically, instead of doing F*I you go much faster doing invFourier( Fourier(F) .* Fourier(I) ).
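Something along these lines in Matlab terms; the sizes are illustrative, and the zero-padding to (image size + kernel size - 1) is what keeps the circular convolution from wrapping around:

I = rand(1024, 1024);                      % placeholder image plane
k = ones(31) / 31^2;                       % placeholder blur kernel (odd side length)

rows = size(I,1) + size(k,1) - 1;
cols = size(I,2) + size(k,2) - 1;

Fi = fft2(I, rows, cols);                  % fft2 zero-pads to the requested size
Fk = fft2(k, rows, cols);
fullConv = real(ifft2(Fi .* Fk));          % same result as conv2(I, k, 'full')

% Crop back to the original image size (the 'same' convolution):
r0 = (size(k,1) - 1) / 2;   c0 = (size(k,2) - 1) / 2;
sameConv = fullConv(1+r0 : r0+size(I,1), 1+c0 : c0+size(I,2));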

Edmund
Title: Re: A quantitative look at the issue
Post by: Bart_van_der_Wolf on May 30, 2014, 08:02:10 am
I think it's much faster to apply the convolution as a product in the Fourier domain because of the FFT speedup. In other words, if F is the filter and I the image, paradoxically, instead of doing F*I you go much faster doing invFourier( Fourier(F) .* Fourier(I) ).

For large images that's correct. Of course there are some potential pitfalls when going from continuous (optical) input signals such as diffraction, to a discrete representation. But once we have a final cascaded set of discrete kernel values, and if the image is large enough to benefit from requiring fewer multiplications with added overhead of conversions to and from the frequency domain, then multiplication in Fourier space is a useful speedup.

It would certainly be an optimization worth considering once the spatial domain approach is working as intended. That would also allow a comparison to see whether errors are made in the conversion algorithms (which may require padding with certain values, and/or windowing), or whether machine precision calculations create issues for values near zero.

Cheers,
Bart
Title: Re: A quantitative look at the issue
Post by: eronald on May 30, 2014, 02:23:38 pm
Jim has 12K by 8K images, the speedup should be huge. But of course he knows this already ;)

Edmund


Title: Re: A quantitative look at the issue
Post by: Jim Kasson on May 31, 2014, 05:45:07 pm
Jim has 12K by 8K images, the speedup should be huge. But of course he knows this already ;)

Edmund and Bart, I had already started coding up the kernel-convolving approach, so I thought I'd finish it. I set up a class for color kernels that could have different values for each plane and knew how to expand and contract themselves under operations like this:

a = aColorKernel.convolveWith(anotherColorKernel);

It took a while to get that coded and debugged. When I ran it, I found that it was slower than the original code, and used one virtual core most of the time versus the original's using at least 12.

So I wrote a little test of the FFT way of doing things:

(http://www.kasson.com/ll/testFFT.PNG)

And when I ran it, I saw this:

(http://www.kasson.com/ll/FFT time.PNG)

So I'm going to re-re-code. And to Edmund I say the three sweetest words in the English language: "You were right."

Jim
Title: Re: A quantitative look at the issue
Post by: eronald on May 31, 2014, 10:07:10 pm
Edmund and Bart, I had already started coding up the kernel-convolving approach, so I thought I'd finish it. I set up a class for color kernels that could have different values for each plane and knew how to expand and contract themselves under operations like this:

a = aColorKernel.convolveWith(anotherColorKernel);

It took a while to get that coded and debugged. When I ran it, I found that it was slower than the original code, and used one virtual core most of the time versus the original's using at least 12.

So I wrote a little test of the FFT way of doing things:

(http://www.kasson.com/ll/testFFT.PNG)

And when I ran it, I saw this:

(http://www.kasson.com/ll/FFT time.PNG)

So I'm going to re-re-code. And to Edmund I say the three sweetest words in the English language: "You were right."

Jim


Jim,

 I'm like a broken clock: precisely right twice a day, and quite exasperating all day long :)
 I must have missed something - what's the speedup factor on a 16Kx16K image ? I'm just seeing an absolute time for the Fourier multiplier method.
 Always have trouble understanding the work that other people do - so I hardly understand what you guys are saying. You're like virtuosos and I'm trying to figure out where the notes are on the piano. But I'm going to be writing some simple code meself. Just wish I could code Matlab fluently like you.

Edmund

PS. I suspect there is some simple optimisation which you can do to reflect the fact that your image data and filter kernel are real,  so there were will be symmetries or adjuncts in fourier space. I suspect you only need to compute the multiplication in one quadrant etc. In fact all of these optimisations are probably done in the image processing toolbox, but by taking a hard look at the equations I believe you should get an immediate speedup of 4.

Edmund
Title: Re: A quantitative look at the issue
Post by: hjulenissen on June 01, 2014, 03:11:11 am
For large images that's correct.
The cost of (1-d) convolution is big-O (N*M), for N samples and M coefficients

The cost of doing this in the DFT-domain is big-O (K*log2(K)), where K is some number >=max(N,M) depending on padding requirements, power-of-two performance etc.

If e.g. N=M=K=512, then N*M > K*log2(K) and FFT convolution seems like the right thing. If M < log2(N) then it might not be worth it to work in the transformed domain. I guess this could be the case for a moderate or large size image and a compact convolution kernel?

For large data sets (16000x16000 pixels x 3 channels x 8 byte), memory might become the bottle-neck. If you work in the spatial domain, you might be able to load tiles of pixels from the source file, process, and save the partial result.

I have an intuitive understanding of how to do padding in the spatial domain: repeat, reflect, etc signal samples so as to reduce discontinuity in the derivative (or 2nd derivative). In the frequency domain the right thing is not so intuitive (for me at least), as the transform does an inherent spatial "wrap-around".

-h
Title: Re: A quantitative look at the issue
Post by: hjulenissen on June 01, 2014, 03:15:37 am
(http://www.kasson.com/ll/testFFT.PNG)
Did you try replacing
var(16384,16384,3) = 0;

With something like:
var = zeros(16384,16384,3,'single');

If done carefully, this should force MATLAB to do its calculations in single-precision float. In principle, the x86 hardware and the FFTW library should do single-precision calculations at approximately twice the speed of double-precision calculations, but it has been my experience that MATLAB often does not behave like one might hope in this respect.

-h
Title: Re: A quantitative look at the issue
Post by: Jim Kasson on June 01, 2014, 11:37:25 am
Did you try replacing
var(16384,16384,3) = 0;

With something like:
var = zeros(16384,16384,3,'single');

The first statement pads var with zeros to the large square matrix. The second fills var with zeros, wiping out the image that's already in var. At least, that's the way it appears to me; maybe I'm missing something.

As to doing the operations in single precision floating point, that's a good idea, but it's down the road for me. My current FFT implementation has some odd results from sfrmat3 in the spatial frequency region approaching twice the Nyquist frequency; there are small ripples in the MTF curve, and I don't want to reduce precision until I'm getting the same results from the FFT and the convolution approaches.

Thanks,

Jim
Title: Re: A quantitative look at the issue
Post by: Jim Kasson on June 01, 2014, 12:03:51 pm
The cost of (1-d) convolution is big-O (N*M), for N samples and M coefficients

The cost of doing this in the DFT-domain is big-O (K*log2(K)), where K is some number >=max(N,M) depending on padding requirements, power-of-two performance etc.

If e.g. N=M=K=512, then N*M > K*log2(K) and FFT convolution seems like the right thing. If M < log2(N) then it might not be worth it to work in the transformed domain. I guess this could be the case for an moderate or large size image and a compact convolution kernel?

I'll run some tests at various sized kernels.

For large data sets (16000x16000 pixels x 3 channels x 8 byte), memory might become the bottle-neck. If you work in the spatial domain, you might be able to load tiles of pixels from the source file, process, and save the partial result.

I'm running this on a 192 GB machine; actually, it's a 256 GB machine, but Win 7 only recognizes the smaller number. The FFT approach is more memory-intensive than convolution, especially when I keep the old FFT'd kernels around, but at no time in this series of tests have I seen it go over 100 GB.

I have an intuitive understanding of how to do padding in the spatial domain: repeat, reflect, etc signal samples so as to reduce discontinuity in the derivative (or 2nd derivative). In the frequency domain the right thing is not so intuitive (for me at least), as the transform does an inherent spatial "wrap-around".

That's a good point; maybe my padding by filling with zeros is a bad idea; it's the approach recommended in the Matlab Image Toolbox support materials. Having the matrix dimensions an integer power of two is also recommended for speed.

Jim
Title: Convolution vs FFT timings
Post by: Jim Kasson on June 01, 2014, 03:54:56 pm
I set up a test of convolution vs multiplying in the frequency domain wrt compute time for a 12000x8000 pixel image.

The convolution program:

(http://www.kasson.com/ll/convtimepgm.PNG)

Yes, I know it would run faster if I just let Matlab do the three color planes together, but I need the ability to have different kernels for each color plane to simulate the wavelength dependency of diffraction. Note that I'm not blowing up the image and mirroring it at the edges, which gives convolution somewhat of an unfair advantage.

The FFT program:

(http://www.kasson.com/ll/ffttimepgm.PNG)

The results:

(http://www.kasson.com/ll/ffttime.PNG)

Jim
Title: Convolution vs FFT results
Post by: Jim Kasson on June 01, 2014, 05:33:06 pm
In trying to figure out why the FFT version of the camera simulator gave me different results than the convolution version, I discovered that I was getting what looked to be differential shifting among the color planes after diffraction with the FFT version, giving rise to color fringing that, after a point, got worse as a diffraction-limited lens was stopped down, causing weird MTF curves above the Nyquist frequency.

So I went back to the images from the timing test program that I posted immediately above and looked at them. They're different from each other, and the differences get greater as the kernel size increases.

Here's the output of convolving the 12000x8000 slanted edge test image with a 1201x1201 pixel averaging kernel:

(http://www.kasson.com/ll/ConvRoundTrip1.jpg)

And here's what you get with the FFT code with the same kernel:

(http://www.kasson.com/ll/FFTroundTrip1.jpg)

Oops!

I'm padding with zeros to the right and down before I do the FFT (http://www.mathworks.com/help/images/fourier-transform.html). Should I be centering the image in the 16384x16384 pixel field? Um, maybe I've got it: the FFT method returns an image that's larger than the input image by the size of the kernel, and shifts it down and to the right. How to make the shift independent of the kernel size?

Jim

Title: An FFT registration plan
Post by: Jim Kasson on June 02, 2014, 03:28:04 pm
I've done enough experimenting to understand that right/lower zero padding of the image and the kernels prior to FFT yields, after multiplication and inverse FFT, a rightward and downward shift of the synthetically convolved image of (kernelSize - 1) / 2 pixels. This would be no problem if the kernels were the same in each color plane of the image, since the output image could be brought back into register by cropping.

However, in my case, since I'm using different kernels for different planes to simulate diffraction, the shifting of the round trip through the FFT causes the planes of the output image to be out of registration with each other, which causes errors in the SFR analysis.

As I see it, I have two choices. I could keep track of the kernel sizes in each plane while I'm multiplying in the frequency domain, and shift the planes of the image after the inverse FFT. I've tested this in isolation, and it works. Alternatively, I could set up the Airy disk kernels so they're all the same pixel size, the size of the red one. The blue one will just have more rings. Right now they all have the same number of rings: 10.

I'm liking the latter approach better, since it seems less susceptible to programming errors.
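For the record, the first (shift-correcting) option amounts to something like this; the names and sizes are illustrative, not my actual code, and kernelSize holds the (odd) kernel side length for each of the R, G, and B planes.

imageRows = 800;  imageCols = 1200;                 % output image size (illustrative)
kernelSize = [41 35 31];                            % per-plane kernel side lengths (illustrative)
padded = zeros(imageRows + 64, imageCols + 64, 3);  % placeholder for the padded inverse-FFT result

registered = zeros(imageRows, imageCols, 3);
for p = 1:3
    s = (kernelSize(p) - 1) / 2;                    % rightward/downward shift for this plane
    registered(:, :, p) = padded(1+s : s+imageRows, 1+s : s+imageCols, p);
end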

I'll report back when I've results.

If anybody thinks I'm  on the wrong track, sing out, please.

Jim
Title: FFT results
Post by: Jim Kasson on June 02, 2014, 05:52:17 pm
As I see it, I have two choices. I could keep track of the kernel sizes in each plane while I'm multiplying in the frequency domain, and shift the planes of the image after the inverse FFT. I've tested this in isolation, and it works. Alternatively, I could set up the Airy disk kernels so they're all the same pixel size, the size of the red one. The blue one will just have more rings. Right now they all have the same number of rings: 10.

I'm liking the latter approach better, since it seems less susceptible to programming errors.

I implemented the second approach, and I now get very similar (but not identical) results for the convolution and FFT models. And the speed? At a target-to-sensor resolution ratio of 32, at f/8 and 4.77 um pixel pitch, I get the same execution times. Narrower f-stops and/or finer pixel pitches favor the FFT approach. With the FFT I'll be able to take the pitches to under 2 um, which was my limit before because of execution times.

Matlab is known to be a rapacious consumer of memory, and the FFT has made it more so:

(http://www.kasson.com/ll/matlabfftmemory.PNG)

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: eronald on June 02, 2014, 09:37:48 pm
what I really would like to have instructions for is how to do some filtering on a synthetic image and then stuff it back in a raw converter.

Edmund
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on June 02, 2014, 10:49:52 pm
what I really would like to have instructions for is how to do some filtering on a synthetic image and then stuff it back in a raw converter.

Me, too. Is the DNG SDK (https://www.adobe.com/support/downloads/dng/dng_sdk.html) the only game in town?

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on June 02, 2014, 10:55:27 pm
I realized -- doh! -- that I only needed to compute the target's FFT once per batch run. Now a 4-way beam-splitter AA-filtered, diffraction-limited lens from f/2.8 to f/16 by half stops, and pitch from 2 um to 5.7 um by the same multiplier, runs in 100 min instead of the previous 360.

Thanks, Edmund.

Jim
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: eronald on June 03, 2014, 07:39:44 am
Jim,
 
 
 We might both find it useful to chat. Can you email me your Skype id?
 My email is edmundronald at gmail dot com

 BTW, you might find it useful to save out and read in the F-transformed image file and kernels. You don't need to recompute them unless they change. I don't know if that is useful at your image size, but it might be worth testing. 

I think Matlab has the ability to save out any variable, and the whole environment.
 
Edmund
Title: Re: How much sensor resolution do we need to match our lenses?
Post by: Jim Kasson on June 03, 2014, 10:34:48 am
BTW, you might find it useful to save out and read in the F-transformed image file and kernels. You don't need to recompute them unless they change. I don't know if that is useful at your image size, but it might be worth testing.

The FFT'd target is 13 GB (16384x16384x3x8x2), so it's probably faster to read in the image, pad it, and compute the FFT than it would be to read in the FFT'd version, although it wouldn't make much difference either way since the FFT'd target is only computed once per batch run now. The kernels change on every iteration, as they are, in general, a function of both the f-stop and the pixel pitch. At roughly 1 minute per iteration, I think it's fast enough now. But thanks for the advice.

Jim