Luminous Landscape Forum

Equipment & Techniques => Cameras, Lenses and Shooting gear => Topic started by: Here to stay on October 18, 2014, 07:05:50 pm

Title: Do Sensors “Outresolve” Lenses?
Post by: Here to stay on October 18, 2014, 07:05:50 pm
I have a question about this article
Under the " Lens resolution basics" block

In this paragraph
"Lens resolution is limited by diffraction, when you close the diaphragm, and by aberrations, which worsen with focal length and the opening of the diaphragm."
 
Are you saying that a lens's resolution is limited by both diffraction and aberrations at the same time?
Would this imply that the greatest resolution from a lens would lie at the point where the blur from diffraction and the blur from aberrations are the same?
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Here to stay on October 18, 2014, 07:12:13 pm
I see now where I should have posted this
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: thierrylegros396 on October 20, 2014, 12:54:00 pm
If you trust DXO Mark, yes!

Most of the time the center of the lens is fairly good, but corners can be really poor (for example, the Sony RX100).

Thierry
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: michael on October 20, 2014, 01:50:52 pm
There are many forms of lens aberration.

Diffraction is a different type of animal. They are not linked. In fact, some aberrations decrease as a lens is stopped down. Most lenses are at their best about 2 – 3 stops from wide open. Less, and aberrations of various sorts may dominate. More, and diffraction starts.

But, as with all things like this, "It depends".

Michael
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: NashvilleMike on October 20, 2014, 03:37:01 pm
Think of it like this:

Diffraction sets the maximum potential resolution at any given aperture, no matter what lens, format, or brand. It's a physical limitation, a ceiling, if you will, on how much you can get out of the lens. So if at F/8 diffraction says with red light you can only resolve X, then X is the maximum you'll ever resolve, even if the lens were perfect. (I don't have the time to do the math to tell you what X is at the moment, but you get the idea). Diffraction limits vary with the color (wavelength) of the light.
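(For anyone who does want to do the math: the standard cutoff formula for a perfect circular aperture is easy to script; the 650 nm value for red light is just an assumed wavelength.)

# Diffraction cutoff for a perfect (aberration-free) circular aperture:
# cutoff in cycles/mm = 1 / (wavelength * f-number), with wavelength in mm.
wavelength_mm = 0.00065   # red light, ~650 nm (assumed)
f_number = 8
cutoff_cy_mm = 1 / (wavelength_mm * f_number)
print(f"F/{f_number} at 650 nm: {cutoff_cy_mm:.0f} cycles/mm")  # ~192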

Lens aberrations vary from lens to lens. Most lenses tend to correct most aberrations better when they are stopped down a bit. Stop down too far, though, and of course you'll be "into" diffraction.

Another way to think of it: imagine you just won the lottery and went out and bought the latest exotic Ferrari (or whatever). The car has a maximum top speed - can't do anything about that - it's as fast as that car will go. That's diffraction. Your skill as a driver and/or the road conditions and/or the kinds of tires you're using and/or the weather are factors that will determine how fast YOU will actually be able to drive the car at any given time. It likely won't be at the top speed.

Having said this, don't get paranoid over diffraction. It's there, but it's not a binary thing where if you venture into the land of diffraction your images immediately suck - it's more a gradual thing, apparent in some subject matter more than others, and can be partially mitigated with careful de-convolution sharpening (focusmagic, etc).

Speaking personally, as a D800E owner who owns top tier glass and is rather picky, I realize there is a tradeoff between apertures that are "not in" the diffraction zone and having sufficient DOF for a scene as well as being at an aperture where the aberrations are well corrected and the corners are sharp. I tend to "know" my lenses, and as a general rule tend to shoot the wide angles between F/7.1 and F/9 if I can, and try to stay away from F/11 or beyond unless I need to go there. If I'm in doubt, I'll aperture bracket a scene so I can cover myself with choices.

-m
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Here to stay on October 21, 2014, 12:26:36 am
There are many forms of lens aberration.

Diffraction is a different type of animal. They are not linked. In fact, some aberrations decrease as a lens is stopped down. Most lenses are at their best about 2 – 3 stops from wide open. Less, and aberrations of various sorts may dominate. More, and diffraction starts.

But, as with all things like this, "It depends".

Michael


What I am getting at is that the sweet spot at which a lens has the highest resolution is the f-stop at which you have the least blur due to aberrations and the least blur from diffraction (the same amount of blur from both).
It would look like this:
Blur from aberrations -> sweet spot, max resolution <- blur from diffraction
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Here to stay on October 21, 2014, 12:28:50 am
Think of it like this:

Diffraction sets the maximum potential resolution at any given aperture, no matter what lens, format, or brand. It's a physical limitation, a ceiling, if you will, on how much you can get out of the lens. So if at F/8 diffraction says with red light you can only resolve X, then X is the maximum you'll ever resolve, even if the lens were perfect. (I don't have the time to do the math to tell you what X is at the moment, but you get the idea). Diffraction limits vary with the color (wavelength) of the light.

Lens aberrations vary from lens to lens. Most lenses tend to correct most aberrations better when they are stopped down a bit. Stop down too far, though, and of course you'll be "into" diffraction.
This is what I mean the sweet spot between aberrations and diffraction
thank you
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 21, 2014, 01:24:14 am
Hi,

First-class optics are diffraction limited at a useful aperture, meaning that the first Airy ring is visible.
(http://upload.wikimedia.org/wikipedia/commons/thumb/1/14/Airy-pattern.svg/220px-Airy-pattern.svg.png)

In the image above the central disc is what is known as the Airy disk. If the lens essentially puts all its energy within the Airy disc, the first ring will be visible. If a significant amount of optical aberration is present, the first ring is obscured. Good microscope lenses and astronomical instruments are often diffraction limited at full aperture.

A 3D plot of the Airy pattern:
(http://upload.wikimedia.org/wikipedia/en/thumb/e/e6/Airy-3d.svg/220px-Airy-3d.svg.png)

A photographic image of the Airy pattern:
(http://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Airy_disk_created_by_laser_beam_through_pinhole.jpg/220px-Airy_disk_created_by_laser_beam_through_pinhole.jpg)

The image from Photozone.de below shows MTF 50 values for a Sigma 50/1.4 Art on a Canon 5DIII, at the center, edge and corners. You can see that the center peaks at f/4 (although it is near maximum at f/2) but edges/corners need something like f/4 to reach maximum. The maximum in the corners is at f/5.6, but the center loses a tiny bit at f/5.6, due to diffraction.
(http://www.photozone.de/images/8Reviews/lenses/sigma_50_14art/mtf.png)

The example below is also from Photozone; it shows the Canon 50/1.8, a simple standard lens. This lens performs best at f/5.6.
(http://www.photozone.de/images/8Reviews/lenses/canon_50_18_5d/mtf.gif)


The third example is an 18.5/1.8 for the Nikon 1, a small-sensor camera. Because of the small sensor it needs an excellent lens. The lens is so well corrected that it runs into the diffraction limit at f/2.8.
(http://www.photozone.de/images/8Reviews/lenses/1_nikkor_185_18_v1/mtf.png)

There are many aspects of lenses. MTF is one of those and is generally used in the design phase, but the engineers calculate dozens of MTF curves describing both in-focus and out-of-focus images. Accurate focus is only possible on a single plane, which is often curved, so much of the subject will be more or less out of focus. Out-of-focus rendition is therefore a very important parameter.

Best regards
Erik



There are many forms of lens aberration.

Diffraction is a different type of animal. They are not linked. In fact, some aberrations decrease as a lens is stopped down. Most lenses are at their best about 2 – 3 stops from wide open. Less, and aberrations of various sorts may dominate. More, and diffraction starts.

But, as with all things like this, "It depends".

Michael

Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Ellis Vener on October 21, 2014, 12:34:44 pm
To answer the question in your headline , Do Sensors “Outresolve” Lenses?

Not to be obvious, but that depends on the lens and the sensor. Using an identical sensor area, the same individual lens at a given aperture may yield fine results on a 12MP sensor, good results on a 24MP camera, and only so-so results on a 36MP sensor. It isn't just a matter of pixel count; it is also down to the differences in sensor assembly (sensor + microlenses + anti-aliasing design).

In general most lenses exhibit their best overall performance 2~3 stops down from wide open - but if the photos you are making one day need f/1.4 to work and the next day's project requires f/16, you accept the slightly lower image quality at those apertures so you can make the photographs you need to make.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Petrus on October 21, 2014, 02:07:19 pm
Would this imply that the greatest resolution from a lens would lie at the point where the blur from diffraction and the blur from aberrations are the same?

Yes.

Aberrations are greatest wide open and diminish when stopped down. Diffraction is minimal wide open and starts to get worse with smaller apertures. Where the two meet is the maximum resolution from this particular lens. Better lenses have this point at larger apertures, worse lenses at smaller apertures (higher f-numbers).
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 21, 2014, 03:04:10 pm
+1,

Petrus is right.

Well, with some exceptions: astronomical telescopes and microscope lenses are normally sharpest fully open and often don't have an adjustable aperture.


Best regards
Erik

Yes.

Aberrations are greatest wide open and diminish when stopped down. Diffraction is minimal wide open and starts to get worse with smaller apertures. Where the two meet is the maximum resolution from this particular lens. Better lenses have this point at larger apertures, worse lenses at smaller apertures (higher f-numbers).
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 21, 2014, 03:29:26 pm
In addition to the aberrations and diffraction, add in focus blur.

The answer to your question can be yes, depending on the lens, sensor and technique.  There is a reason the D8X0 cameras are the only 36MP cameras in 35mm sensor size.  In the film days, Kodak was adamant there was no more than 2400 ppi available on film.

From a technical standpoint, lenses perform best stopped down a couple stops from wide open.  In general, f/5.6-f/11 tends to be the sweet spot of lenses.  Diffraction blur tends to begin, depending on the sensor pixel size, at about f/7.1 for something like the D7100 (DX size, 24MP).  Of course, practical considerations intrude and you make the best image you can by using this information to your best advantage.  The good image you get is always better than the great image you didn't.
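To sketch where a figure like f/7.1 comes from, here's the arithmetic using the common (and admittedly arbitrary) rule of thumb that diffraction starts to show once the Airy disk spans about 2.5 pixels:

# Airy disk diameter = 2.44 * wavelength * f-number (to the first zero ring).
# Solve for the f-number where the disk spans ~2.5 pixels.
wavelength_um = 0.555     # green light (assumed)
pixel_pitch_um = 3.9      # approx. D7100: 23.5 mm sensor width / 6000 px
pixels_spanned = 2.5      # visibility criterion -- a judgment call
f_number = pixels_spanned * pixel_pitch_um / (2.44 * wavelength_um)
print(f"diffraction becomes visible around f/{f_number:.1f}")  # ~f/7.2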

There is a misconception that a lens on a low resolution camera can be a good performer, but on a higher resolution camera a weak performer.  This is UNTRUE.  The lens performs identically on all cameras/sensors and at the same image size will look identical, neglecting the impacts imparted by the sensor, electronics and processing algorithms.  Only if you pixel peep will the per-pixel sharpness differences be visible, and what you are seeing is that the sensor could resolve more than the lens could give.  What this means is that if a lens has a resolving power of x while the sensor could handle a resolving power of 2x, the resulting image will still have a resolution of x and not 2x.  Basically, splitting the resolution across more and more pixels will not increase the resolution.  And the opposite is true.  This is why Kodak thought people scanning film at 9,600ppi were nuts!  What is true is that if you want to get the most out of something like the Nikon D810, you need to use lenses that have better resolving abilities, along with good light and technique.


Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Fine_Art on October 21, 2014, 05:01:02 pm
What I am getting at is that the sweet spot at which a lens has the highest resolution is the f-stop at which you have the least blur due to aberrations and the least blur from diffraction (the same amount of blur from both).
It would look like this:
Blur from aberrations -> sweet spot, max resolution <- blur from diffraction


There is still more to it. The sweet spot you refer to may, if you are using a good lens, be open far enough that part of your subject is out of focus. You cannot just pick the sweet spot and then leave the camera there. Chances are the image you want in a landscape, for example, might need f/4 at infinity, f/5.6 for the far midground, and stacks at f/8-f/11 for the foreground.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Fine_Art on October 21, 2014, 05:12:16 pm
In addition to the aberrations and diffraction, add in focus blur.

The answer to your question can be yes, depending on the lens, sensor and technique.  There is a reason the D8X0 cameras are the only 36MP cameras in 35mm sensor size.  In the film days, Kodak was adamant there was no more than 2400 ppi available on film.

From a technical standpoint, lenses perform best stopped down a couple stops from wide open.  In general, f/5.6-f/11 tends to be the sweet spot of lenses.  Diffraction blur tends to begin, depending on the sensor pixel size, at about f/7.1 for something like the D7100 (DX size, 24MP).  Of course, practical considerations intrude and you make the best image you can by using this information to your best advantage.  The good image you get is always better than the great image you didn't.

There is a misconception that a lens on a low resolution camera can be a good performer, but on a higher resolution camera a weak performer.  This is UNTRUE.  The lens performs identically on all cameras/sensors and at the same image size will look identical, neglecting the impacts imparted by the sensor, electronics and processing algorithms.  Only if you pixel peep will the per-pixel sharpness differences be visible, and what you are seeing is that the sensor could resolve more than the lens could give.  What this means is that if a lens has a resolving power of x while the sensor could handle a resolving power of 2x, the resulting image will still have a resolution of x and not 2x.  Basically, splitting the resolution across more and more pixels will not increase the resolution.  And the opposite is true.  This is why Kodak thought people scanning film at 9,600ppi were nuts!  What is true is that if you want to get the most out of something like the Nikon D810, you need to use lenses that have better resolving abilities, along with good light and technique.




I have to disagree; I think what you see is the limit of the lens, not what the underutilized sensor is not delivering.

In any event, Erik has convinced me, at least, that it is better to have blurry fine pixels than jagged larger ones. Meaning you get more image data even when the pixels are mushy.
Title: It's not binary
Post by: Jim Kasson on October 21, 2014, 06:15:44 pm
The title of your post makes it seem that there is a point, with decreasing pixel pitch, where a sensor "outresolves" a lens, and so no further improvement in resolution is possible as the pitch continues to decrease. In fact, over a broad range of pitches, lenses, and lens apertures, both making the lens sharper and making the pixel pitch finer will improve resolution.

Here's an example, from a simulation of an RGGB Bayer-CFA sensor of variable pitch with a beam-splitting AA filter and a model of the Otus 55mm f/1.4.

(http://www.kasson.com/ll/otuswAA3d.PNG)

MTF50 in cycles per picture height for a FF sensor is the vertical axis, pitch in um is coming towards you, and f-stop is from left to right.

If we look down from the top at a "quiver plot", with the arrows pointing in the direction of greatest improvement and the length of the arrows proportional to the slope, we can see that, over much of the aperture range of the lens, the fastest path towards improvement is finer pixel pitch.


(http://www.kasson.com/ll/quiverotus.PNG)

Details here  (http://blog.kasson.com/?p=5905)and here (http://blog.kasson.com/?p=5920).

Note that some would call a 2 um sensor used with this lens underutilized, since, on a per-pixel level, it is not as sharp as the same lens on a 4 um sensor. However, in cycles per picture height, the finer sensor is sharper.
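If you want a feel for the shape of that surface without running the full simulation, here's a much cruder toy model in Python (a perfect lens limited only by diffraction, times a 100% fill-factor pixel aperture; no aberrations, AA filter, or demosaicing) that still shows MTF50 in cy/ph rising as the pitch shrinks at every aperture:

import numpy as np

# Toy system MTF: diffraction of a perfect circular aperture times the
# pixel-aperture (sinc) response. Much cruder than the simulation above.
def system_mtf(f_cy_mm, f_number, pitch_mm, wavelength_mm=0.000555):
    x = np.clip(f_cy_mm * wavelength_mm * f_number, 0.0, 1.0)
    diffraction = (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))
    pixel = np.abs(np.sinc(f_cy_mm * pitch_mm))  # np.sinc(x) = sin(pi x)/(pi x)
    return diffraction * pixel

def mtf50_cy_ph(f_number, pitch_um, picture_height_mm=24.0):
    f = np.linspace(1, 1000, 10000)              # spatial frequency, cy/mm
    m = system_mtf(f, f_number, pitch_um / 1000)
    return f[np.argmax(m < 0.5)] * picture_height_mm  # first 50% crossing

for pitch_um in (6.0, 4.8, 3.0, 2.0):
    print(pitch_um, "um:", [round(mtf50_cy_ph(N, pitch_um))
                            for N in (2.8, 5.6, 11)])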

Jim
Title: Re: It's not binary
Post by: Here to stay on October 22, 2014, 01:30:51 am
The title of your post makes it seem that there is a point, with decreasing pixel pitch, where a sensor "outresolves" a lens, and so no further improvement in resolution is possible as the pitch continues to decrease. In fact, over a broad range of pitches, lenses, and lens apertures, both making the lens sharper and making the pixel pitch finer will improve resolution.



The thread title was taken from this article

http://www.luminous-landscape.com/tutorials/resolution.shtml
And my question had more to do with a specific paragraph within the article

Thank you for all the work you put into this
Would you have any issues if I used your graphs for future reference, and do you have a site that I can direct these references to?
Again thanks

Title: Re: It's not binary
Post by: Jim Kasson on October 22, 2014, 10:51:14 am
Thank you for all the work you put into this
Would you have any issues if I used your graphs for future reference, and do you have a site that I can direct these references to?

Use them as you wish. If you post them, please credit me and link to my blog.

The two links above are a good place to start people for this topic, although, if you poke around a little, you'll see that's starting at the middle.

Here's the beginning: http://blog.kasson.com/?p=5720

Jim
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 22, 2014, 03:52:36 pm
I have to disagree; I think what you see is the limit of the lens, not what the underutilized sensor is not delivering.

In any event, Erik has convinced me, at least, that it is better to have blurry fine pixels than jagged larger ones. Meaning you get more image data even when the pixels are mushy.

Fine_Art,

Not sure what exactly you are disagreeing with, but my point is a point of fact.  A lens's performance is independent of the camera sensor onto which its image circle shines.  If a lens has resolving power equal to 12MP of data (FF size sensor), then no matter what sensor reads that FF image circle, you get the same data.  Cutting a pie into 36 slices instead of 12 slices doesn't give you any more pie!

What a higher resolution capable sensor will do is show you the limited resolving power of the lens, but the lens has not changed its performance.  And done properly, all else constant, the print or displayed image will be the same from a 12MP or 36MP sensor.  You could 'upsample' in the camera or post processing, but you still started with the same amount of actual data.  The other benefit of a higher resolution sensor is that it can capture all the data from all lenses less than or equal to its data saturation point.  Put a better resolving lens on both those sensors and the 12MP starts throwing away data while the 36MP keeps it.

The 1st question I ask my friends when they want to upgrade their camera is: why?  Usually it is more MPs.  So if they have a 12MP camera, I ask them "Assuming the format of the picture (2x3) stays the same, what is double the resolution of a 12MP camera?"  They are usually dumbfounded to learn it is 48MP!!!
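The arithmetic, for anyone who wants to check it (the frame dimensions are just illustrative):

# Doubling resolution means doubling the pixels on EACH axis, so 4x total.
width, height = 4256, 2832                    # a ~12MP 2x3 frame
print(round(width * height / 1e6, 1))         # ~12.1 MP
print(round(2*width * 2*height / 1e6, 1))     # ~48.2 MP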


Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 22, 2014, 04:22:41 pm
Nice to hear!

Best regards
Erik

I have to disagree; I think what you see is the limit of the lens, not what the underutilized sensor is not delivering.

In any event, Erik has convinced me, at least, that it is better to have blurry fine pixels than jagged larger ones. Meaning you get more image data even when the pixels are mushy.
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: Torbjörn Tapani on October 22, 2014, 08:06:24 pm
Fine_Art,

Not sure what exactly you are disagreeing with, but my point is a point of fact.  A lens's performance is independent of the camera sensor onto which its image circle shines.  If a lens has resolving power equal to 12MP of data (FF size sensor), then no matter what sensor reads that FF image circle, you get the same data.  Cutting a pie into 36 slices instead of 12 slices doesn't give you any more pie!

What a higher resolution capable sensor will do is show you the limited resolving power of the lens, but the lens has not changed its performance.  And done properly, all else constant, the print or displayed image will be the same from a 12MP or 36MP sensor.  You could 'upsample' in the camera or post processing, but you still started with the same amount of actual data.  The other benefit of a higher resolution sensor is that it can capture all the data from all lenses less than or equal to its data saturation point.  Put a better resolving lens on both those sensors and the 12MP starts throwing away data while the 36MP keeps it.

The 1st question I ask my friends when they want to upgrade their camera is: why?  Usually it is more MPs.  So if they have a 12MP camera, I ask them "Assuming the format of the picture (2x3) stays the same, what is double the resolution of a 12MP camera?"  They are usually dumbfounded to learn it is 48MP!!!
If you find a lens that gives 12 mpix on a 12 mpix sensor, you can be certain it will deliver more resolution given a higher-mpix sensor.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Fine_Art on October 23, 2014, 02:43:07 am
Fine_Art,

Not sure what exactly you are disagreeing with, but my point is a point of fact.  A lens's performance is independent of the camera sensor onto which its image circle shines.  If a lens has resolving power equal to 12MP of data (FF size sensor), then no matter what sensor reads that FF image circle, you get the same data.  Cutting a pie into 36 slices instead of 12 slices doesn't give you any more pie!

What a higher resolution capable sensor will do is show you the limited resolving power of the lens, but the lens has not changed its performance.  And done properly, all else constant, the print or displayed image will be the same from a 12MP or 36MP sensor.  You could 'upsample' in the camera or post processing, but you still started with the same amount of actual data.  The other benefit of a higher resolution sensor is that it can capture all the data from all lenses less than or equal to its data saturation point.  Put a better resolving lens on both those sensors and the 12MP starts throwing away data while the 36MP keeps it.

The 1st question I ask my friends when they want to upgrade their camera is: why?  Usually it is more MPs.  So if they have a 12MP camera, I ask them "Assuming the format of the picture (2x3) stays the same, what is double the resolution of a 12MP camera?"  They are usually dumbfounded to learn it is 48MP!!!




I was referring to this: "...what you are seeing is that the sensor could resolve more than the lens could give."

You may infer that, but it is not what you are seeing.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 23, 2014, 06:06:39 am
What a higher resolution capable sensor will do is show you the limited resolving power of the lens, but the lens has not changed its performance.

Hi,

This is where that theory falls apart. Have another look at the chart that Jim posted earlier. A higher sampling density (smaller sampling pitch) will continue to extract more resolution from a lens. While there will be more to be gained from a good lens, it also works that way with a lesser lens. The simple reason is that one needs to combine the MTF functions of both lens and sampling system, and the result will grow closer to the worst of the two contributors if the better one improves, but if the worst of the two is improved then the combination will raise the combined quality even more.

It's rather basic arithmetic: 50% x 50% is 25%, but 50% x 90% is 45% (closer to the worst of the two). Raising the worst of the two to e.g. 75% would give 75% x 90% = 67.5% (again closer to the worst of the two, and a much better combination). You can consider the sampling density as the worst of the two, holding back the combined result most, until they get closer to each other's performance, when improvement will (not stop, but) slow down.

Lens resolution and sensor sampling density are not independent limitations; they work in combination to produce a system MTF.
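Or, as a couple of runnable lines for anyone who wants to play with the numbers:

# System response as the product of component MTFs: improving the weaker
# component moves the combined result the most.
lens, sensor = 0.50, 0.90
print(lens * sensor)      # 0.45  -- close to the worst of the two
print(0.75 * sensor)      # 0.675 -- raising the worst helps a lot
print(lens * 0.99)        # 0.495 -- polishing the best barely helps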
 
Cheers,
Bart
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Petrus on October 23, 2014, 07:57:18 am
Slightly OT, but in connection with hi-fi systems I have tried to get across the idea that the total throughput quality of the system is the product of the quality indices of each component (1 = perfect, 0 = total non-function), just like with lens and sensor as explained above. Cables are practically perfect, as are electronics in digital audio, but speakers and rooms are far from perfect. So if we multiply cable index 0.99 with CD-player index 0.97, amplifier index 0.95, speaker 0.80 and room 0.75, we get 0.55 as the total quality index. Paying thousands or even tens of thousands to try to improve the player or amplifier is futile, when by investing the same money into better speakers or room acoustics a much bigger improvement can be achieved. It's just that playing with thousand-dollar cables and $10000 players is much more "hi-fi" than gluing acoustic materials to walls and building bass traps.
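In numbers (the indices are made up, of course):

from math import prod

# Total throughput = product of per-component quality indices (1 = perfect).
chain = {"cable": 0.99, "player": 0.97, "amp": 0.95,
         "speaker": 0.80, "room": 0.75}
print(round(prod(chain.values()), 2))                       # 0.55
print(round(prod({**chain, "player": 0.99}.values()), 2))   # 0.56 -- futile
print(round(prod({**chain, "speaker": 0.95}.values()), 2))  # 0.65 -- worth it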

Back to original programming...
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: bjanes on October 23, 2014, 08:47:55 am
This is where that theory falls apart. Have another look at the chart that Jim posted earlier. A higher sampling density (smaller sampling pitch) will continue to extract more resolution from a lens. While there will be more to be gained from a good lens, it also works that way with a lesser lens. The simple reason is that one needs to combine the MTF functions of both lens and sampling system, and the result will grow closer to the worst of the two contributors if the better one improves, but if the worst of the two is improved then the combination will raise the combined quality even more.

It's rather basic arithmetic: 50% x 50% is 25%, but 50% x 90% is 45% (closer to the worst of the two). Raising the worst of the two to e.g. 75% would give 75% x 90% = 67.5% (again closer to the worst of the two, and a much better combination). You can consider the sampling density as the worst of the two, holding back the combined result most, until they get closer to each other's performance, when improvement will (not stop, but) slow down.

Lens resolution and sensor sampling density are not independent limitations; they work in combination to produce a system MTF.

Bart,

Well stated. Looking at Jim's charts, we see that the smallest pixel spacing currently available in 135 format cameras is 4.8 um, with the 36MP Sony Exmor chips. That is the second widest pixel spacing that Jim considered. We really need finer pixel spacing to make the best use of the Otus lens, and Jim states that the fastest route to higher MTF is to decrease the pixel spacing of the sensor.

While lenses don't become outdated as rapidly as digital cameras, the question arises for those of us with limited resources, 36MP cameras, and very good rather than excellent lenses: would the best value be obtained by keeping the current camera and upgrading to the Otus lens, or keeping our current optics and upgrading to a higher-MP camera?

Bill
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Manoli on October 23, 2014, 09:30:58 am
While lenses don't become outdated as rapidly as digital cameras,

That's something of an understatement; good quality manual lenses can last more than a lifetime.

... the question arises for those of us with limited resources, 36MP cameras, and very good rather than excellent lenses: would the best value be obtained by keeping the current camera and upgrading to the Otus lens, or keeping our current optics and upgrading to a higher-MP camera?

This is the question that DxO have tried to qualify with their very subjective P-Mpix rating, and we know that this is indeed a difficult issue to quantify, as there are many factors relating to lens selection. But by way of (an extreme) example:

You have an M8 (10MP), combined with the Leica Summilux 50/1.4 - until the arrival of the Otus pretty well universally accepted as the ultimate 50mm, and today displaced by the 50/2 APO-Summicron. Do you upgrade your lens or buy an M240 (24MP)?

From a resolution(only)/best value perspective - upgrade the camera first.

(Note: I do have the M8, and no, I would not upgrade to the M240 - but that is for reasons wholly unconnected to the core topic of this thread.)






Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 23, 2014, 10:36:34 am
From a resolution(only)/best value perspective - upgrade the camera first.

Hi M.,

That's often the case. Sensor sampling density is relatively easier to upgrade than lens resolution, unless the latter is very poor (e.g., and if it matters, in the extreme corners).

We've seen sampling density and thus the limiting resolution (Nyquist frequency) go up from a 6.4 - 7.2 micron pitch to 4 - 4.88 micron, so say 35% in some 7 years. Lens resolution is more complex to catch in a single number, but I think the pace is much slower and less dramatic. The biggest difference was due to analog sensor (film) oriented optical designs being replaced by digital sensor oriented optical designs (taking into account the optical filter stack and cover-glass, which also made rear anti-reflection coating and lens shape more important).

Maybe the Otus jumps a bit further, instead of crawling, but that's mostly for wide-open use, because at smaller apertures things get diffraction limited pretty fast.

This is also kind of consistent with Jim's "quiver plots" which show that even with a given lens, the more significant improvement can be achieved by increasing the sampling density (i.e. reducing the pitch).

Of course there are other ways to improve resolution as well, e.g. shooting with a longer focal length and stitching for the angle of view, or using super-resolution techniques. But resolution alone is not as big a requirement for most, except for those who need to produce large output sizes.

So, changing the camera/sensor is often the faster approach to better image quality (resolution and dynamic range and quantum efficiency), and will also allow you to utilize improved features like live view, tethering, improved autofocus, faster shooting intervals, articulating LCDs, etc.

Modern lenses will last many generations of camera bodies to come, so there is less of a need to upgrade, unless it is to replace a dud. Of course lens manufacturers will think of other features to incorporate in lenses, like autofocus improvements, which will only work together with the newest generation of bodies, but 'built-in obsolescence' or forced upgrading/replacement is a way of survival for those companies.

Cheers,
Bart
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 23, 2014, 11:22:53 am
Hi,

This is where that theory falls apart. Have another look at the chart that Jim posted earlier. A higher sampling density (smaller sampling pitch) will continue to extract more resolution from a lens. While there will be more to be gained from a good lens, it also works that way with a lesser lens. The simple reason is that one needs to combine the MTF functions of both lens and sampling system, and the result will grow closer to the worst of the two contributors if the better one improves, but if the worst of the two is improved then the combination will raise the combined quality even more.

It's rather basic arithmetic: 50% x 50% is 25%, but 50% x 90% is 45% (closer to the worst of the two). Raising the worst of the two to e.g. 75% would give 75% x 90% = 67.5% (again closer to the worst of the two, and a much better combination). You can consider the sampling density as the worst of the two, holding back the combined result most, until they get closer to each other's performance, when improvement will (not stop, but) slow down.

Lens resolution and sensor sampling density are not independent limitations; they work in combination to produce a system MTF.
 
Cheers,
Bart

Most seem to have missed my initial condition on the discussion, which was that 12MP was all the lens had to give.  And yes, each link in the chain impacts the overall output.  But it is a process of subtraction from image quality, which starts at 100%.  The best performance each link in the chain can give is to not detract from image quality, a practical impossibility.  It can never add to it.  My point is not that more MPs is a bad thing, only that it is not necessarily helping, depending on the rest of the chain.  Having extensive experience with military imaging sensors, I can say there are uses for oversampling, but it never gives you more data than you started with.

Oh, and the image circle produced by a lens is ABSOLUTELY independent of the surface upon which that image circle shines!  While the quality of the final image is not, the output of the lens most certainly is.  At the same size, it doesn't matter if the surface is a 12MP 1990s sensor, a 36MP 2014 sensor or a waffle!  To believe otherwise is beyond credibility.

I love theoretical discussions as much as almost anyone.  But that is what they are...theoretical.  In the real world, we deal with practical limitations that make a lot of improvements moot, unless the rest of the chain improves with them.  And finally, from Jack Dykinga (via John Shaw): “Cameras and lenses are simply tools to place our unique vision on film.  Concentrate on equipment and you’ll take technically good photographs. Concentrate on seeing the light’s magic colors and your images will stir the soul.”  The best photographs, in my opinion, are both technically good and stir the soul; technical failures don't interfere with the soul stirring!
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 23, 2014, 11:57:32 am
Most seem to have missed my initial condition on the discussion, which was that 12MP was all the lens had to give.

Hi,

Maybe because 12MP means nothing without further context, such as sampling density or surface area, and even that is only part of the image chain...

Quote
And yes, each link in the chain impacts the overall output.  But it is a process of subtraction from image quality, which starts at 100%.

But that 100% is not the lens, it's the scene we want to image. Each component of the imaging chain offers 100% of its own performance, yet it may be the weaker or the stronger link in the cascade of interactions that follow. It is the weakest contributor that sets the ceiling (not the floor).

Quote
My point is not that more MPs is a bad thing, only that it is not necessarily helping, depending on the rest of the chain.


Yet MPs, if defined as sampling density (for limiting resolution, Nyquist frequency) and number of sensels (for field of view, or the required image magnification factor to cover a certain field of view), usually are the weakest link, not the lens (unless the lens is diffraction limited or severely aberration limited). The proof is that image resolution improves proportionally faster from denser sampling of the projected image of an existing lens than from putting better lenses (assuming normal lens designs) on a sensor that is limiting resolution.

Cheers,
Bart
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Manoli on October 23, 2014, 12:01:14 pm
I love theoretical discussions as much as almost anyone.  

The title of this thread is "Do sensors 'outresolve' lenses ?"
Before we end up going down a warren of rabbit holes, let me answer with a simple truism -

Today the majority of sensors out-resolve most of the lenses currently in production. The incremental gains to be had from upgrading favour the sensor, both from an economic POV and the consequential IQ benefits.

Even as far back as the analog days, that truism held - even in the debate of 35mm v MF. The larger negative had the IQ advantage, no matter that, back then, MF lenses were generally inferior to their 35mm counterparts.


Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 23, 2014, 12:18:18 pm
Hi,

I would say it is the other way around: almost any decent lens outresolves any sensor of today at medium apertures and near the optical axis. Truly great lenses also outresolve any sensor at large, but normally not maximum, apertures over the largest part of the sensor.

That is definitely what I see.

But clearly, there are deviations. It is possible that small pixel cameras (like the Nikon 1) or the Sony RX100 are limited more by lens than sensor.

Consumer zooms, especially superzooms, can be really bad at some focal lengths.

Good evidence of this is the need for OLP filtering on DSLRs and the tendency to yield colour moiré on cameras that lack an OLP filter. Moiré is a sure sign that the lens outresolves the sensor. Or, as I would say, it has significant MTF at the Nyquist frequency.

Best regards
Erik

The title of this thread is "Do sensors 'outresolve' lenses ?"
Before we end up going down a warren of rabbit holes, let me answer with a simple truism -

Today the majority of sensors out-resolve most of the lenses currently in production. The incremental gains to be had from upgrading favour the sensor, both from an economic POV and the consequential IQ benefits.

Even as far back as the analog days, that truism held - even in the debate of 35mm v MF. The larger negative had the IQ advantage, no matter that, back then, MF lenses were generally inferior to their 35mm counterparts.



Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Manoli on October 23, 2014, 12:23:14 pm
I would say it is the other way around ...

and I would say, you're correct - I typed it 'back to front'!
Thanks, Erik.

Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Jim Kasson on October 23, 2014, 12:57:09 pm
Not sure what exactly you are disagreeing with, but my point is a point of fact.  

A dangerous way to get started. It puts any person disagreeing with you in the position of being one who denies facts. I will continue anyway.

A lens's performance is independent of the camera sensor onto which its image circle shines.

I agree with that statement.

If a lens has resolving power equal to 12MP of data (FF size sensor), then no matter what sensor reads that FF image circle, you get the same data.

There's an assumption buried in that statement that the resolving power of a lens can be measured in megapixels (presumably on a Bayer CFA sensor). Can you cite a test protocol that would allow a lens to be characterized in that way?

The only one that I can think of is circular wrt the definition. Take a lens, and make resolution tests with finer and finer pixel pitches until, say, the MTF10 in cy/ph stops changing. Then say that the number of pixels on the sensor just before the MTF10 stopped changing is the pixel resolving power of the lens.

But that's an impossible test to perform, except in simulation. I have performed it in simulation, and the numbers of pixels obtained for even a prime lens that most would consider to be mediocre are very large; beyond what you can buy in a consumer camera today.
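For what it's worth, here's roughly what that protocol looks like when run against a toy diffraction-plus-pixel-aperture model in Python (no aberrations, no CFA, and the 1% stopping tolerance is an arbitrary choice, which is really the point):

import numpy as np

# "Shrink the pitch until MTF10 stops changing", against a toy model:
# diffraction of a perfect circular aperture times the pixel-aperture MTF.
def mtf10_cy_ph(pitch_um, f_number=4.0, wavelength_mm=0.000555, height_mm=24.0):
    f = np.linspace(1, 2000, 20000)              # spatial frequency, cy/mm
    x = np.clip(f * wavelength_mm * f_number, 0.0, 1.0)
    diffraction = (2 / np.pi) * (np.arccos(x) - x * np.sqrt(1 - x**2))
    m = diffraction * np.abs(np.sinc(f * pitch_um / 1000))
    return f[np.argmax(m < 0.1)] * height_mm     # first 10% crossing

pitch = 8.0
previous = mtf10_cy_ph(pitch)
while True:
    pitch /= 2
    current = mtf10_cy_ph(pitch)
    if (current - previous) / previous < 0.01:   # "inconsequential" = 1%
        break
    previous = current
print(f'"converged" at {pitch:.2f} um, MTF10 ~ {current:.0f} cy/ph')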

Jim
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: bjanes on October 23, 2014, 01:02:56 pm
Modern lenses will last many generations of camera bodies to come, so there is less of a need to upgrade, unless it is to replace a dud. Of course lens manufacturers will think of other features to incorporate in lenses, like autofocus improvements, which will only work together with the newest generation of bodies, but 'built-in obsolescence' or forced upgrading/replacement is a way of survival for those companies.

That is true for lenses with brass-helicoid manual focusing mechanisms like Leica and Zeiss, but not necessarily true for autofocusing or vibration reduction (image stabilization) lenses. A friend and I have both experienced US$500 repair bills for our Nikon 70-200 f/2.8 VR1 lenses. Neither was subjected to any impact damage or extraordinary use.

Bill
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 23, 2014, 01:15:42 pm
Hi,

I would say that Bill is right; on the other hand, I have something like 20 lenses, some as old as 1985, all AF, and no real failures on any lens.

But, generally, the more complex something is, the more probable it is that it will break sooner or later. I don't think plastic materials are bad, BTW, if the plastic used is of good quality.

Best regards
Erik

That is true for lenses with brass-helicoid manual focusing mechanisms like Leica and Zeiss, but not necessarily true for autofocusing or vibration reduction (image stabilization) lenses. A friend and I have both experienced US$500 repair bills for our Nikon 70-200 f/2.8 VR1 lenses. Neither was subjected to any impact damage or extraordinary use.

Bill
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Jim Kasson on October 23, 2014, 01:19:34 pm
That is true for lenses with brass-helicoid manual focusing mechanisms like Leica and Zeiss, but not necessarily true for autofocusing or vibration reduction (image stabilization) lenses. A friend and I have both experienced US$500 repair bills for our Nikon 70-200 f/2.8 VR1 lenses. Neither was subjected to any impact damage or extraordinary use.

Good point, Bill. Then there's obsolescence. A good lens stays good judged by the standards of the day it was designed, but standards change over time. I got rid of almost all my Hasselblad V-series lenses when the H-series came out. (I kept the 500, even though it's not very sharp, and the 250 APO, which is pretty sharp.) In fact, aside from view camera lenses and the 50mm f/2 that's on my Nikon S2, those are the oldest lenses I own.

Another thing to consider. When you buy a sharp lens, you've got a sharp lens that will be useful for many years. When you buy a hi-res body, all the lenses you own (except for some zooms) get better.

Jim
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Jim Kasson on October 23, 2014, 01:44:35 pm
Take a lens, and make resolution tests with finer and finer pixel pitches until, say, the MTF10 in cy/ph stops changing. Then say that the number of pixels on the sensor just before the MTF10 stopped changing is the pixel resolving power of the lens.

It occurs to me that, since this method results in asymptotically approaching a certain cy/ph, you could argue that it never actually converges. That would be pedantic. But it certainly would be true to say that the answer you get depends entirely on what you decide is an inconsequential improvement.

Jim
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 23, 2014, 03:58:50 pm
Hi,

As it happened I had MTF curves from three of my lenses on screen.

Top: Zeiss Sonnar 150/4 CF (30 years old?), centre: Minolta 80-200/2.8 (around 30 years), bottom: Sony 70-400/4G (2 years old). All shot on a Sony Alpha SLT 77 with 3.9 micron pixels, and no sharpening. I guess that the SLT 77 has an OLP filter.
(http://echophoto.dnsalias.net/ekr/Articles/MTF/Imatest_small.png)

Best regards
Erik


Good point, Bill. Then there's obsolescence. A good lens stays good judged by the standards of the day it was designed, but standards change over time. I got rid of almost all my Hasselblad V-series lenses when the H-series came out. (I kept the 500, even though it's not very sharp, and the 250 APO, which is pretty sharp.) In fact, aside from view camera lenses and the 50mm f/2 that's on my Nikon S2, those are the oldest lenses I own.

Another thing to consider. When you buy a sharp lens, you've got a sharp lens that will be useful for many years. When you buy a hi-res body, all the lenses you own (except for some zooms) get better.

Jim
Title: Re: It's not binary
Post by: Here to stay on October 24, 2014, 12:30:21 am
Use them as you wish. If you post them, please credit me and link to my blog.

The two links above are a good place to start people for this topic, although, if you poke around a little, you'll see that's starting at the middle.

Here's the beginning: http://blog.kasson.com/?p=5720

Jim
Thank you Jim, & I will surely link to your blog
I have also started to follow some of your other posts at another unnamed site
thank you for all this work
 
Title: Re: It's not binary
Post by: Here to stay on October 24, 2014, 12:44:13 am
The title of your post makes it seem that there is a point, with decreasing pixel pitch, where a sensor "outresolves" a lens, and so no further improvement in resolution is possible as the pitch continues to decrease. In fact, over a broad range of pitches, lenses, and lens apertures, both making the lens sharper and making the pixel pitch finer will improve resolution.

Here's an example, from a simulation of an RGGB Bayer-CFA sensor of variable pitch with a beam-splitting AA filter and a model of the Otus 55mm f/1.4.

(http://www.kasson.com/ll/otuswAA3d.PNG)

MTF50 in cycles per picture height for a FF sensor is the vertical axis, pitch in um is coming towards you, and f-stop is from left to right.

If we look down from the top at a "quiver plot", with the arrows pointing in the direction of greatest improvement and the length of the arrows proportional to the slope, we can see that, over much of the aperture range of the lens, the fastest path towards improvement is finer pixel pitch.


(http://www.kasson.com/ll/quiverotus.PNG)

Details here  (http://blog.kasson.com/?p=5905)and here (http://blog.kasson.com/?p=5920).

Note that some would call a 2 um sensor used with this lens underutilized, since, on a per-pixel level, it is not as sharp as the same lens on a 4 um sensor. However, in cycles per picture height, the finer sensor is sharper.

Jim

Am I correct to assume that the maximum resolution (I really should be calling it contrast) that a lens and sensor can resolve is at the point where the lens projects an Airy disk of a size at which the sensor's pixel pitch can accurately measure the size, brightness and location of that disk in an image?

For example, the D800 is able to accurately locate a smaller Airy disk (wider f-stop), thus for the highest resolution (contrast) it peaks sooner than, let's say, a D700, which would show its greatest resolution (contrast) at a narrower f-stop (and to be specific, with both cameras using the same lens).

The D700 is only able to accurately detect a larger Airy disk, and because of this the D700 peaks at a narrower f-stop than the D800. Is this correct?

To simplify, it would look something like this:

Blur from resolution-limited sensor -> highest resolution (highest Airy disk edge contrast that the sensor can detect) <- blur from diffraction
Title: Re: It's not binary
Post by: ErikKaffehr on October 24, 2014, 02:09:30 am
Hi,

The reason that resolution figures are problematic is that they are totally unrelated to our vision. Resolution figures are very interesting for aerial reconnaissance-type photography, but not for visual observation.

Panavision has a great series explaining this (it is for motion, but also applies to stills):

https://www.youtube.com/watch?feature=player_detailpage&v=iBKDjLeNlsQ

https://www.youtube.com/watch?feature=player_detailpage&v=v96yhEr-DWM

Looking at MTF at different feature sizes (frequencies) is thus much more interesting.

I have run MTF tests on pixel sizes from 9 µm to 3.8 µm, and lens performance essentially always peaks at the same apertures, but with smaller pixels we get more resolution at a given MTF (which is often chosen at 50%).

So what I would say is: the advantage of smaller pixels is better definition of whatever the lens renders, and that applies to any somewhat well corrected lens.

Best regards
Erik


Am I correct to assume that the maximum resolution (I really should be calling it contrast) that a lens and sensor can resolve is at the point where the lens projects an Airy disk of a size at which the sensor's pixel pitch can accurately measure the size, brightness and location of that disk in an image?

For example, the D800 is able to accurately locate a smaller Airy disk (wider f-stop), thus for the highest resolution (contrast) it peaks sooner than, let's say, a D700, which would show its greatest resolution (contrast) at a narrower f-stop (and to be specific, with both cameras using the same lens).

The D700 is only able to accurately detect a larger Airy disk, and because of this the D700 peaks at a narrower f-stop than the D800. Is this correct?

To simplify, it would look something like this:

Blur from resolution-limited sensor -> highest resolution (highest Airy disk edge contrast that the sensor can detect) <- blur from diffraction

Title: Re: It's not binary
Post by: Bart_van_der_Wolf on October 24, 2014, 06:45:26 am
Am I correct to assume that the maximum resolution (I really should be calling it contrast) that a lens and sensor can resolve is at the point where the lens projects an Airy disk of a size at which the sensor's pixel pitch can accurately measure the size, brightness and location of that disk in an image?

Hi,

That's hard to answer for several reasons, one of which is that the diffraction pattern is not of limited/fixed size. We usually refer to its diameter as the diameter of the first zero (ring) of the Airy disk pattern, which represents something like 83.8% of the total intensity of the pattern, and the pattern is not uniform across its diameter (so alignment with the sensel grid also plays a role).

What we do know is at which spatial frequency the diffraction pattern of a perfect circular aperture will reduce image contrast (MTF) to zero amplitude, for 555nm wavelength:
cy/mm = 1 / (wavelength x aperture), e.g. 1 / (0.000555 x 8) = 225.2 cy/mm
and it takes (more than) one full cycle to allow reconstruction of the original waveform (225.2 / 2 = 112.6 cy/mm, i.e. one cycle per 0.00888 mm, an 8.88 micron feature size).

We also know that the sensor array has a limiting resolution of maximum:
Nyquist frequency in cycles/mm = 0.5 / senselpitch, e.g. 0.5  / 0.00488 = 102.5 cy/mm

We can therefore calculate the Aperture at which resolution will be totally eliminated by diffraction, and will prevent all aliasing, by reducing contrast to 0% at the Nyquist frequency:
Aperture = (2 x senselpitch) / wavelength, e.g. (2 x 0.00488) / 0.000555 = f/17.6

However, that is only taking diffraction (of a single wavelength) into account. Diffraction will in practice be combined with the MTFs of residual lens aberrations, a less than perfectly round aperture, defocus (anything not in the thin perfect focus plane), a filter stack with or without AA filter and a sensor cover glass, and a Bayer CFA pattern that needs to be demosaiced. The diffraction pattern size also changes with focus distance, so the above formula is based on infinity focus.

So in practice resolution will be fully limited at wider apertures than diffraction alone would suggest. It is also not simple to calculate, because there are positive and negative wave contributions that will cause interference patterns that may or may not align locally with the sensel grid.

The only thing we do know for certain is that the absolute diffraction limit to resolution will not be exceeded (if even reached). Instead, the overall image will already deteriorate before that limit is reached by stopping down. It is only high contrast detail that will even theoretically reach that limit; lower contrast features will have lost significant modulation long before that. That's why limiting resolution is often set at lower spatial frequencies, e.g. MTF10 or Nyquist, whichever is reached first. It also explains why even lower spatial frequencies, e.g. MTF50, are often used to give an overall impression of average performance for comparisons between different systems.
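The three formulas above, as a runnable sketch with the same example numbers:

# 555 nm light, f/8, 4.88 micron sensel pitch; all lengths in mm.
wavelength = 0.000555
f_number = 8
pitch = 0.00488

cutoff = 1 / (wavelength * f_number)       # diffraction MTF reaches zero
nyquist = 0.5 / pitch                      # sensor limiting resolution
f_no_alias = 2 * pitch / wavelength        # zero contrast at Nyquist

print(f"diffraction cutoff: {cutoff:.1f} cy/mm")        # 225.2
print(f"sensor Nyquist:     {nyquist:.1f} cy/mm")       # 102.5
print(f"aliasing gone by:   f/{f_no_alias:.1f}")        # f/17.6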

Cheers,
Bart

P.S. Using smaller sampling pitches than can be resolved from a diffraction limited image still brings a benefit, because the diffraction pattern is not uniform. Smaller sensels allow one to more accurately sample diffraction patterns that are larger than a single pixel, and thus allow more accurate deconvolution restoration of the original signal.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 12:49:51 pm
Hi,

Maybe because 12MP means nothing without further context, such as sampling density or surface area, and even that is only part of the image chain...

But that 100% is not the lens, it's the scene we want to image. Each component of the imaging chain offers 100% of its own performance, yet it may be the weaker or the stronger link in the cascade of interactions that follow. It is the weakest contributor that sets the ceiling (not the floor).
 

Yet MPs, if defined as sampling density (for limiting resolution, Nyquist frequency) and number of sensels (for field of view, or the required image magnification factor to cover a certain field of view), usually are the weakest link, not the lens (unless the lens is diffraction limited or severely aberration limited). The proof is that image resolution improves proportionally faster from denser sampling of the projected image of an existing lens than from putting better lenses (assuming normal lens designs) on a sensor that is limiting resolution.

Cheers,
Bart

But I did give a density: 12MP at 135 full frame (2x3), comparing same-size sensors and image circles.

I said that each link subtracts from the image, which means a ceiling on image quality.  But once the lens produces the image circle, that is all there is, and it becomes the new ceiling.  The sensor does not have direct access to the scene; it only has access to the image circle produced by the lens.  So we are in agreement.

Again we agree that, in general, sensors have been a limiting factor - that lenses have been able to provide more data than the sensors could properly resolve.  But that wasn't the issue.  The issues were:

1. Can sensors outperform lenses?  The answer to that is yes...depending on the lens and sensor;

2. What are the implications... is it better to have 12MP worth of data fully resolved in 12MP, or to have 12MP of data resolved into 36 'mushy' MPs?  My point being that subdividing the data into smaller and smaller units does not give more data, and any resulting print at equal size from both sensors will look similar, all else being equal.  Only when the lens outperforms a lower resolution sensor, or the enlargement necessary requires more data than a lower resolution sensor can provide, does it matter at all!

3. Does a higher performing sensor make a lens look worse from a resolution standpoint?  Again, the answer is no.  It will look identical.  If you pixel peep, then the per-pixel sharpness of the higher pixel sensor will not necessarily look as sharp as the per-pixel sharpness of the low resolution sensor, but the overall image will look the same.

The holy grail would be a sensor with continuously scalable pixels and a camera able to calculate, based on the image circle given by the lens, just how much data is available, scaling the sensor pixel pitch to match the data provided up to the limit of the sensor.  No wasted pixels or file space.

The real world 'What do I do with this theoretical information' realities are:

1. Until we reach a break-even point with lens resolution, buy the best, highest resolution camera you can afford... within the other boundary constraints like high ISO performance, file size, frame rate, required output size, cost, etc.

2. Understand that when the 'sensor' was independent of the camera (film), it usually was a wiser investment to buy better lenses than better cameras.  Cameras were generally purchased based on functionality and durability, not image quality.  Now that the sensor is integral to the camera, the calculation changes.  But remember, film wasn't all that great either as a sensor.  I accurately predicted to my 'film snob' friends that when digital got to 6MP, film would become an alternative process.  I think I was playing with a 1.2MP Coolpix 900 at the time.  Hell, the Nikon D1 was 2.74MP and pretty much was the seminal moment in the transition to digital.

3. Don't believe that lenses will outperform even a 24MP sensor like the D750 in all situations. Pixel density is just one of many factors affecting the overall sharpness and quality of a final image. The gazillions of posts by D8x0 owners whining that their shots aren't sharp stand in testament.

Title: Re: It's not binary
Post by: Jim Kasson on October 24, 2014, 01:00:26 pm
The only thing we do know for certain is that the absolute diffraction limit to resolution will not be exceeded (if even reached). Instead, the overall image will already deteriorate before that limit is reached by stopping down. It is only high contrast detail that will even theoretically reach that limit; lower contrast features will have lost significant modulation long before that. That's why limiting resolution is often set at lower spatial frequencies, e.g. MTF10 or Nyquist, whichever is reached first. It also explains why even lower spatial frequencies, e.g. MTF50, are often used to give an overall impression of average performance for comparisons between different systems.

Before I get started on the issue quoted above, let me thank you, Bart, not only for saving me the trouble of saying what you've said, but for saying it better than I would have.

Now, to the issue. There are reasons to think that MTF90 is a more important metric than MTF10 or MTF50 in the final image. However, what's most important in the output isn't necessarily what's most important in the input, i.e., the raw file. It's usually easy to sharpen 80% contrast to 90%; there's enough signal that you don't often run into problems even with not-so-good lenses. It's harder to successfully sharpen when the contrast variation is closer to the noise in the image. That's why, although it's a beautiful experience to look through a lens with stellar high-contrast MTF (the Zeiss 135mm f/2 APO lens springs to mind), MTF50 or 30, or even 10 (although I've found that, as Bart hints at above, the MTF10 often occurs past Nyquist, where it's meaningless) is more important for photographers who aren't just going to use the OOC JPEGs or the OOL (out of Lightroom with no knob twisting) raws.

Jim
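To put rough numbers on that sharpening asymmetry: treat sharpening as a simple gain applied at one spatial frequency, riding on additive noise of fixed strength. The noise figure below is invented for illustration.

Code:
noise_sigma = 0.02                      # ~2% RMS noise, illustrative
for modulation in (0.80, 0.10):         # high- vs low-contrast detail
    gain = 0.90 / modulation            # gain needed to reach 90% contrast
    print(f"M={modulation:.2f}: needs {gain:.1f}x gain, "
          f"noise rises from {noise_sigma:.1%} to {gain * noise_sigma:.1%}, "
          f"detail SNR stays {modulation / noise_sigma:.0f}:1")

Lifting 80% to 90% costs almost nothing; lifting 10% to 90% multiplies the noise ninefold, and no amount of linear gain improves the detail's SNR.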
Title: Re: It's not binary
Post by: ErikKaffehr on October 24, 2014, 01:16:30 pm
Jim,

I would say that your words deserve some elaboration…

Anyway, sharpening obviously plays a major role, and I would also add that a clean signal allows for more sharpening. But I would also say that I feel it is better to have a strong MTF from the lens sampled at high frequency than a weak MTF sampled at low frequency, gaining "final MTF" by excessive sharpening.

Or, to put it simply, I prefer a very good lens on a very good sensor. The question may be, which is it preferable to have?


I would say the second option is the better one, because it will give more true detail.

Best regards
Erik

Before I get started on the issue quoted above, let me thank you, Bart, not only for saving me the trouble of saying what you've said, but for saying it better than I would have.

Now, to the issue. There are reasons to think that MTF90 is a more important metric than MTF10 or MTF50 in the final image. However, what's most important in the output isn't necessarily what's most important in the input, i.e., the raw file. It's usually easy to sharpen 80% contrast to 90%; there's enough signal that you don't often run into problems even with not-so-good lenses. It's harder to successfully sharpen when the contrast variation is closer to the noise in the image. That's why, although it's a beautiful experience to look through a lens with stellar high-contrast MTF (the Zeiss 135mm f/2 APO lens springs to mind), MTF50 or 30, or even 10 (although I've found that, as Bart hints at above, the MTF10 often occurs past Nyquist, where it's meaningless) is more important for photographers who aren't just going to use the OOC JPEGs or the OOL (out of Lightroom with no knob twisting) raws.

Jim
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 01:46:34 pm
Bart,

While I certainly concede that MTF is dependent on both the lens and sensor (I never said it wasn't, and I'm not sure what made you think I would believe that overall image quality is not dependent on the lens), the lens performance is still independent of the sensor performance, which was my point in responding to a statement that a high resolution sensor would make a lens look bad.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 24, 2014, 01:57:08 pm
Hi,

Lens and sensor are both parts of an imaging chain.

When you put an image through a lens, some of the image quality is lost. We describe that loss with MTF. When the image is sampled by the sensor we also lose image quality, while we also add false detail. The sensor also has an MTF.

A lens that resolves just 12 MP would have zero MTF at that sensor pitch. The image would be extremely mushy. To get decent image quality at the pixel level, a significant MTF is needed. But if a lens has a decent MTF at 12 MP it will certainly resolve much more detail than a 12 MP sensor can.

A 24 MP sensor can detect that detail, and do it without creating fake detail. Then you can downsize that to 12 MP and you will end up with a much better image than what would be possible at 12 MP.

Best regards
Erik

Bart,

While I certainly concede that MTF is dependent on both the lens and sensor (I never said it wasn't, and I'm not sure what made you think I would believe that overall image quality is not dependent on the lens), the lens performance is still independent of the sensor performance, which was my point in responding to a statement that a high resolution sensor would make a lens look bad.
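A one-dimensional toy of Erik's downsizing argument, assuming an ideal low-pass filter in the resampling step: a tone above the coarse grid's Nyquist limit survives naive decimation at full strength, but at a false (aliased) frequency; sampling densely first and filtering before the downsize removes it instead of inventing detail.

Code:
import numpy as np

n_fine, n_coarse = 512, 256
x = np.linspace(0, 1, n_fine, endpoint=False)
tone = np.sin(2 * np.pi * 180 * x)      # 180 cy: above coarse Nyquist (128)

coarse_direct = tone[::2]               # naive decimation -> aliases to 76 cy
spec = np.fft.rfft(tone)
spec[n_coarse // 2:] = 0                # band-limit below coarse Nyquist
coarse_filtered = np.fft.irfft(spec)[::2]

print("RMS after naive decimation:   ", round(float(np.std(coarse_direct)), 3))
print("RMS after filtered downsample:", round(float(np.std(coarse_filtered)), 3))

The naive path keeps ~0.707 RMS of pure fake detail; the filtered path correctly leaves (nearly) nothing at that frequency.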
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 02:18:38 pm
Hi,

Lens and sensor are both parts of an imaging chain.

When you put an image through a lens, some of the image quality is lost. We describe that loss with MTF. When the image is sampled by the sensor we also lose image quality, while we also add false detail. The sensor also has an MTF.

A lens that resolves just 12 MP would have zero MTF at that sensor pitch. The image would be extremely mushy. To get decent image quality at the pixel level, a significant MTF is needed. But if a lens has a decent MTF at 12 MP it will certainly resolve much more detail than a 12 MP sensor can.

A 24 MP sensor can detect that detail, and do it without creating fake detail. Then you can downsize that to 12 MP and you will end up with a much better image than what would be possible at 12 MP.


Yeah, I'm an engineer, having worked on IIR sensors. Got all this. But the performance of the lens is STILL INDEPENDENT of any surface upon which its image circle shines. I can mount that lens on any camera I want and its performance remains the same, though the resulting image can change dramatically.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 24, 2014, 02:23:28 pm
Yes,

I can agree that the sensor doesn't affect the lens but just the sampled image.

Best regards
Erik

Yeah, I'm an engineer, having worked on IIR sensors. Got all this. But the performance of the lens is STILL INDEPENDENT of any surface upon which its image circle shines. I can mount that lens on any camera I want and its performance remains the same, though the resulting image can change dramatically.
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: Torbjörn Tapani on October 24, 2014, 02:40:51 pm
But I did give a density: 12MP at 135 full frame (2x3), comparing same size sensors and image circles.

I said that each link subtracts from the image, which means a ceiling on image quality. But once the lens produces the image circle, that is all there is, and it becomes the new ceiling. The sensor does not have direct access to the image; it only has access to the image circle produced by the lens. So we are in agreement.

Again, we agree that in general sensors have been a limiting factor, and that lenses have been able to provide more data than the sensors could properly resolve. But that wasn't the issue. The issues were:

1. Can sensors outperform lenses?  The answer to that is yes...depending on the lens and sensor;

2. What are the implications...is it better to have 12 MP worth of data fully resolved in 12 MP, or have 12 MP of data resolved into 36 'mushy' MPs? My point being that subdividing the data into smaller and smaller units does not give more data, and any resulting print at equal size from both sensors will look similar, all else being equal. Only when the lens outperforms a lower resolution sensor, or the enlargement necessary requires more data than a lower resolution sensor can provide, does it matter at all!

3. Does a higher performing sensor make a lens look worse from a resolution standpoint? Again, the answer is no. It will look identical. If you pixel peep, then the per-pixel sharpness of the higher pixel count sensor will not necessarily look as sharp as the per-pixel sharpness of the low resolution sensor, but the overall image will look the same.

The holy grail would be a sensor with continuously scalable pixels, and a camera able to calculate, from the image circle given by the lens, just how much data is available, scaling the sensor pixel pitch to match the data provided up to the limit of the sensor. No wasted pixels or file space.

The real world 'What do I do with this theoretical information' realities are:

1. Until we reach a break-even point with lens resolution, buy the best, highest resolution camera you can afford...within the other boundary constraints like high ISO performance, file size, frame rate, required output size, cost, etc.

2. Understand that when the 'sensor' was independent of the camera (film), it usually was a wiser investment to buy better lenses than better cameras.  Cameras were generally purchased based on functionality and durability, not image quality.  Now that the sensor is integral to the camera the calculation changes.  But remember, film wasn't all that great either as a sensor.  I accurately predicted to my 'film snob' friends that when digital got to 6MPs, film would become an alternative process.  I think I was playing with a 1.2MP Coolpix 900 at the time.  Hell, the Nikon D1 was 2.74MP and pretty much was the seminal moment in the transition to digital.

3. Don't believe that lenses will outperform even a 24MP sensor like the D750 in all situations. Pixel density is just one of many factors affecting the overall sharpness and quality of a final image. The gazillions of posts by D8x0 owners whining that their shots aren't sharp stand in testament.

Bart, Jim and Erik have explained this in great detail.

I will only address point #3. As the others have explained, it will not look identical. It will resolve more detail if you have a finer pixel pitch sensor.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Jim Kasson on October 24, 2014, 02:44:58 pm
But, the performance of the lens is STILL INDEPENDENT of any surface upon which [its] image circle shines.  I can mount that lens on any camera I want and [its] performance remains the same, though the resulting image can change dramatically.

This may be putting too fine a point on it, but that's true only if you consider the sensor stack glass part of the lens. If you move a physical lens to a camera with a different stack thickness, you'll get different performance even if the underlying silicon is the same.

And are the microlenses, if any, part of the sensor or part of the lens? Eliding things like that, you could also say that the performance of the sensor is independent of any lens focusing light on it, especially if the performance of the sensor includes ray-angle effects.

Even if both statements are true, I'm not sure either statement provides much guidance to choosing to improve performance through lens improvements or sensor improvements.

Jim

Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 03:32:12 pm
This may be putting too fine a point on it, but that's true only if you consider the sensor stack glass part of the lens. If you move a physical lens to a camera with a different stack thickness, you'll get different performance even if the underlying silicon is the same.

And are the microlenses, if any, part of the sensor or part of the lens? Eliding things like that, you could also say that the performance of the sensor is independent of any lens focusing light on it, especially if the performance of the sensor includes ray-angle effects.

Even if both statements are true, I'm not sure either statement provides much guidance to choosing to improve performance through lens improvements or sensor improvements.

Jim


Ugh! Yes, you will get different performance out of the back end behind the stack/sensor, or if you change the size of the image circle, but the lens performance will not have changed one iota. What you are arguing is that by changing the surfaces through which the image circle shines, the image circle emerging from the back side of the lens, before the stack/sensor, will be somehow different. Good luck with that!

With respect to your last statement, it is tangentially related in that if you have been happy with a lens on a 16MP camera, for example, you should be just as happy with that lens on a 36MP camera. Most likely it already outperforms the 16MP sensor, so you will reap more data with the 36MP sensor. If we assume a lens underperforms, matches or outperforms a 16MP sensor, then some unique situations can be identified. If it underperforms the 16MP it will underperform the 36MP sensor. If it matches the 16MP sensor it underperforms the 36MP. However, in both these cases, you have not lost anything you already had! If, however, it outperforms the 16MP it can still underperform, match or outperform the 36MP sensor, but in all 3 of these cases you are getting more out of the chain than you were previously, though 1) you might still not be fully exploiting all the imaging chain has to offer, and 2) the lens performance has not changed.

This was all aimed at the people unhappy that their D8x0 camera pixel peeps showed less than stellar sharpness, and who want to blame 1) the lens, which hadn't changed, or 2) the camera. When in reality, it is almost always their technique that is to blame.
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 03:47:13 pm
Bart, Jim and Erik have explained this in great detail.

I will only address point #3. As the others have explained, it will not look identical. It will resolve more detail if you have a finer pixel pitch sensor.

Only if you add the assumptions 1) that the rest of the chain in front is providing data that is lost to a lower pixel pitch sensor, and 2) that all other factors are equal. In that case I would agree. Signal output is one such factor that won't be equal. For a specific sensor technology, the signal generated is a function of the number of photons striking the surface. Smaller pixel size means less signal output per pixel. But let's assume the system has a specific level of noise, and we were to subdivide the sensor into smaller and smaller pixels to the point that the sensor could not generate enough signal to be distinguished from the noise. You really think this super system is going to 'resolve' more 'real' data? Not gonna happen. Again, in theoretical terms, I agree, but when you are trying to put steel on target, real world constraints will bite you in the backside long before you hit these theoretical limits.
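Some back-of-envelope numbers for that signal argument, assuming photon shot noise plus a fixed per-sensel read noise, a flux fixed per unit area (a deep-shadow value, invented for illustration), and perfect quantum efficiency:

Code:
import math

flux = 5.0          # photons per um^2 in a deep shadow, illustrative
read_noise = 3.0    # electrons RMS per sensel, illustrative

for pitch_um in (8.4, 4.88, 2.0, 0.9):
    signal = flux * pitch_um ** 2                   # photons per sensel
    noise = math.sqrt(signal + read_noise ** 2)     # shot + read noise
    print(f"{pitch_um:4.2f} um pitch: {signal:6.1f} e-, "
          f"per-pixel SNR {signal / noise:5.1f}")

At 0.9 um the per-pixel SNR is near 1:1 in this example. Binning small pixels back together recovers most of the shot-noise SNR, but the read noise is paid once per sensel, which is the real-world bite being pointed at here.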
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 24, 2014, 04:39:57 pm
Hi,

I guess that Fine_Art may be referring to these images:

(http://echophoto.dnsalias.net/ekr/Articles/Aliasing2/feather_a.png)(http://echophoto.dnsalias.net/ekr/Articles/Aliasing2/feather_na_small.png)
Both images were taken at about 3.8 meters distance with a 150 mm lens. The left one used a Sonnar 150/4 on a Hasselblad with a P45+ back that has 6.8 micron pixels; the right one was shot with a Sony 70-400/4-5.6 at 150 mm on a Sony Alpha 77 with 3.9 micron pixels. Due to the smaller pixels, the Sony image was larger, but it is downsized to (approximately) the same pixel count as the P45+ image using ImageMagick. In my view the image on the right has better detail quality than the one on the left, although the number of pixels is about the same.

Best regards
Erik

I have to disagree; I think what you see is the limit of the lens, not something the underutilized sensor is failing to deliver.

In any event, Erik has convinced me, at least, that it is better to have blurry fine pixels than jagged larger ones. Meaning you get more image data even when the pixels are mushy.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Fine_Art on October 24, 2014, 07:31:55 pm
Ugh! Yes, you will get different performance out of the back end behind the stack/sensor, or if you change the size of the image circle, but the lens performance will not have changed one iota. What you are arguing is that by changing the surfaces through which the image circle shines, the image circle emerging from the back side of the lens, before the stack/sensor, will be somehow different. Good luck with that!

With respect to your last statement, it is tangentially related in that if you have been happy with a lens on a 16MP camera, for example, you should be just as happy with that lens on a 36MP camera. Most likely it already outperforms the 16MP sensor, so you will reap more data with the 36MP sensor. If we assume a lens underperforms, matches or outperforms a 16MP sensor, then some unique situations can be identified. If it underperforms the 16MP it will underperform the 36MP sensor. If it matches the 16MP sensor it underperforms the 36MP. However, in both these cases, you have not lost anything you already had! If, however, it outperforms the 16MP it can still underperform, match or outperform the 36MP sensor, but in all 3 of these cases you are getting more out of the chain than you were previously, though 1) you might still not be fully exploiting all the imaging chain has to offer, and 2) the lens performance has not changed.

This was all aimed at the people unhappy that their D8x0 camera pixel peeps showed less than stellar sharpness, and who want to blame 1) the lens, which hadn't changed, or 2) the camera. When in reality, it is almost always their technique that is to blame.

I have a lens that is spec'd as diffraction limited. If you put any sensor behind it, you might think you get all you are going to get. I also have a corrective element for it, a coma corrector, that improves the quality of the pixels on APS-C at the expense of the edges on FF. The rim tends to distort a bit. When I use this lens on large pixel FF I tend to not use the corrector. When I use it on finer APS-C I use the corrector.

So it is not necessarily true that the lens always gives you the most with other devices subtracting.
Title: Re:
Post by: Torbjörn Tapani on October 24, 2014, 07:59:12 pm
Someone should take pictures of a straw man with the same lens on an A7s, A7 and A7r.
Title: Re:
Post by: Jim Kasson on October 24, 2014, 08:18:36 pm
Someone should take pictures of a straw man with the same lens on an A7s, A7 and A7r.

Zony 55FE on the a7R: http://blog.kasson.com/?p=4213
Zony 55FE on the a7: http://blog.kasson.com/?p=5019

Handheld a7/a7R comparisons: http://blog.kasson.com/?p=5267
Rangefinder lenses on the a7S: http://blog.kasson.com/?p=6447

General a7R stuff: http://blog.kasson.com/?p=3757
General a7R stuff: http://blog.kasson.com/?p=4888
General a7S stuff: http://blog.kasson.com/?p=6119

Jim
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 08:49:33 pm
Quote
Not to be obvious, but that depends on the lens and the sensor. Using cameras with identical sensor area, the same individual lens at a given aperture may yield fine results on a 12MP sensor, good results on a 24MP camera, and only so-so results on a 36MP sensor.

This is the original statement to which I was responding, and to which I simply explained that the lens would give the same results on all 3 sensors, though the sensors will handle what they are given differently. While the final image is dependent on the lens, any DIFFERENCES in final output will not be caused by the lens, but by the sensor, in-camera processing, post processing, printing or display process. I believe what the original poster was trying to convey was that if you look at 100% pixel size (pixel peep), what you might find is that the per-pixel data isn't what you thought it might be, and the higher pixel density sensor may reveal weaknesses in a lens that looked just fine on a 12MP sensor.

I have a lens that is spec'd as diffraction limited. If you put any sensor behind it, you might think you get all you are going to get. I also have a corrective element for it, a coma corrector, that improves the quality of the pixels on APS-C at the expense of the edges on FF. The rim tends to distort a bit. When I use this lens on large pixel FF I tend to not use the corrector. When I use it on finer APS-C I use the corrector.

So it is not necessarily true that the lens always gives you the most with other devices subtracting.

The fact that the lens performance...what the lens outputs...is independent of anything behind it in the imaging chain is so blindingly self-evident, I almost didn't post it. The image below demonstrates. If you wish to try to refute this, please explain how this works, because it will be amazing. I would love to hear how a sensor or correcting lens gets into the lens and changes its optical characteristics!
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 24, 2014, 09:11:05 pm
I have no idea what minimum pixel size is technically feasible within all the constraints. That is beyond my expertise! But I suspect we will arrive back at the situation where, if you want more pixels, a larger sensor is the technical, if not the practical, answer, and not higher pixel density; there will be other advantages to more total pixels at a lower density than packing higher pixel density into the 135 size sensor. Basically, we will be back to the MF/135 trade-offs and debate. I am only thankful that the performance of the D810 will more than meet my needs from a total pixel count standpoint.
Title: Re:
Post by: Torbjörn Tapani on October 24, 2014, 09:24:49 pm
No one wishes to refute claims about lenses in isolation. That is the straw man you created.
Title: Re:
Post by: dwswager on October 24, 2014, 10:39:55 pm
No one wishes to refute claims about lenses in isolation. That is the straw man you created.

I didn't set it up; everyone else got off on tangents stemming from my simple statement of fact that shouldn't have even caught anyone's attention. The same is true of the fact that more pixels are not necessarily better than fewer; what really matters is the total amount of data being carried by the pixels. When engagement times (detect, track, target, fire, kill) can sometimes be less than 3 seconds, you learn to get all you can, and you don't waste extra time and bandwidth on the irrelevant. If 12MP can carry all the data, then 36MP means 24MP of waste!
Title: Re:
Post by: ErikKaffehr on October 25, 2014, 04:16:00 am
Yes,

But if I have a lens that can deliver just 12 MP worth of data I would rather use a phone cam.

It may be argued that stopping down to f/22 reduces resolution to below 12 MP, but even stopped down to f/22, indications are that a 36 MP camera has better detail than a 24 MP camera. Folks shooting multi-shot (MS) on MFD say that MS (up to 200 MP) is more tolerant of diffraction than 50MP single shot. I don't understand that, but more pixels leave more leeway for sharpening, and that may help a lot.

Best regards
Erik

I didn't set it up; everyone else got off on tangents stemming from my simple statement of fact that shouldn't have even caught anyone's attention. The same is true of the fact that more pixels are not necessarily better than fewer; what really matters is the total amount of data being carried by the pixels. When engagement times (detect, track, target, fire, kill) can sometimes be less than 3 seconds, you learn to get all you can, and you don't waste extra time and bandwidth on the irrelevant. If 12MP can carry all the data, then 36MP means 24MP of waste!
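Some quick arithmetic for the f/22 point above, assuming an otherwise perfect lens and 550 nm light; real lenses can only approach these numbers:

Code:
wavelength_mm = 550e-6
for fnum in (8, 22):
    cutoff = 1 / (wavelength_mm * fnum)            # extinction, cy/mm
    # Pure-diffraction MTF falls to 50% at ~0.404 of the cutoff frequency.
    print(f"f/{fnum}: cutoff {cutoff:.0f} cy/mm, MTF50 ~{0.404 * cutoff:.0f} cy/mm")

for mp, pitch_um in ((24, 5.97), (36, 4.88)):      # approx. full-frame pitches
    print(f"{mp} MP: Nyquist {1000 / (2 * pitch_um):.0f} cy/mm")

At f/22 the diffraction cutoff (~83 cy/mm) sits at or below both sensors' Nyquist frequencies, yet the 36 MP grid still samples the same blur more finely, which leaves more leeway for deconvolution sharpening; that is consistent with the multi-shot reports.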

Title: Re: Do Sensors “Outresolve” Lenses?
Post by: synn on October 25, 2014, 05:12:51 am
As always, I don't care too much for numbers and graphs and feather pictures.

I do know this. I had a 70-200 f/2.8 VR I which did OK on the 12MP Nikon sensors. It wasn't amazing or the sharpest lens out there, but it did OK. There was some corner softness, but not a great deal. But when I mounted it on the D800, the sensor extracted every last bit of performance out of it. There was a lot to extract from the center, and the corners gave up a long time before the center did. The result was images that had a much more obvious sharpness transition between the center and the edges than the 12MP cameras ever put out.


So yeah, I am a believer that sensors do out-resolve lenses, and that a low performance lens on a high performance sensor would amplify the lens's issues.
Title: Re:
Post by: Bart_van_der_Wolf on October 25, 2014, 05:29:18 am
But if I have a lens that can deliver just 12 MP worth of data I would rather use a phone cam.

Indeed, but I still find the concept of 'a lens delivering a number of pixels' detached from reality. A lens is a component in an imaging chain, and it is that entire chain that delivers a given resolution. The lens alone does not produce pixels.

Quote
Folks shooting MS on MFD say that MS (up to 200 MP) is more tolerant of diffraction than 50MP single shot. I don't understand that, but more pixels have more leeway for sharpening and that may help a lot.

There are two aspects to multi-shot (the half-sensel offset kind) images. One is that it doubles the sampling density; not exactly the same as doubling the number of sensels, but it is the same principle of denser sampling that extracts more detail from a given lens. That alone will combine to get higher resolution from a given system. The other aspect is that the higher sampling density will allow a more accurate sampling of the system blur (lens aberrations, diffraction, filter stack, AA filter, sensel aperture, demosaicing), which allows better deconvolution sharpening/restoration. It also allows one to produce output with less magnification, which will also preserve resolution.

Cheers,
Bart
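A 1-D toy of the half-sensel offset, assuming perfect half-pixel shifts: interleaving two offset exposures doubles the sampling density, so a tone above single-shot Nyquist is captured at its true frequency instead of aliasing.

Code:
import numpy as np

cycles = 150                                   # above 128, below 256
x1 = np.arange(256) / 256.0                    # exposure 1 sample positions
x2 = x1 + 0.5 / 256.0                          # exposure 2, half-sensel shift
shot1 = np.sin(2 * np.pi * cycles * x1)
shot2 = np.sin(2 * np.pi * cycles * x2)

combined = np.empty(512)
combined[0::2], combined[1::2] = shot1, shot2  # interleave -> 512 samples

peak = int(np.argmax(np.abs(np.fft.rfft(combined))))
print("single shot aliases the tone to", 256 - cycles, "cy")
print("interleaved pair recovers      ", peak, "cy")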
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: Torbjörn Tapani on October 25, 2014, 08:25:03 am
As always, I don't care too much for numbers and graphs and feather pictures.

I do know this. I had a 70-200 f/2.8 VR I which did OK on the 12MP Nikon sensors. It wasn't amazing or the sharpest lens out there, but it did OK. There was some corner softness, but not a great deal. But when I mounted it on the D800, the sensor extracted every last bit of performance out of it. There was a lot to extract from the center, and the corners gave up a long time before the center did. The result was images that had a much more obvious sharpness transition between the center and the edges than the 12MP cameras ever put out.

So yeah, I am a believer that sensors do out-resolve lenses, and that a low performance lens on a high performance sensor would amplify the lens's issues.

I think it is the other way around. This is an example of a lens that has more to give on a higher res sensor.

Sure you see the limitation of the lens better and you get less out of the sensor relative to its theoretical limit but there is more detail in the image. Even in the corners.

I could see the argument that we have reached a point of diminishing returns with a lens like that. But more and more I start to like the idea that we are reaching the limits of what sensors can produce. Then we are back to analog days, it's all about the lens. And format size. And skill. Not so much about upgrading.
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 25, 2014, 08:51:37 am
Sure you see the limitation of the lens better and you get less out of the sensor relative to its theoretical limit but there is more detail in the image. Even in the corners.

I agree. Everything is better resolved, including the corners. Apparently the center still had a lot of untapped potential, and the corners had less to offer. However, there is now also more restoration potential in those corners because of the denser sampling. Not that it will become perfect, but closer to its limited maximum performance.

Lens design is still one of the limitations to combined total system performance. As the Otus and Art lenses show, there is room for improvement, but even with those lenses (as shown in Jim's simulation), the combination with denser sampling will also pull more resolution out of such lenses.

Cheers,
Bart
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: synn on October 25, 2014, 09:00:13 am
Interesting perspective from both of you, but I don't think that's the case. I have neither that lens nor a 12MP body at my disposal now, but I really don't think any more detail was resolved in the corners by the D800. If I downsampled the D800 images to 12 MP, I would think the corners would be very similar to the 12MP images.

To me, this is a scenario where the lens was out resolved by the 36mp sensor in the corners.
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 25, 2014, 09:00:49 am
Hi,

My take is that it is always better to have a good image that utilises the lens maximally. Clearly, a high resolution sensor may show weaknesses that are not obvious on a lesser sensor.

Now, I would say that a good 12 MP image will be great on an A2 print, because that is my experience. Going to higher resolution like 24 or 54 MP may give little benefit at A2 but will probably be visible in larger sizes like A1 or A0.

Best regards
Erik
 


I think it is the other way around. This is an example of a lens that has more to give on a higher res sensor.

Sure you see the limitation of the lens better and you get less out of the sensor relative to its theoretical limit but there is more detail in the image. Even in the corners.

I could see the argument that we have reached a point of diminishing returns with a lens like that. But more and more I start to like the idea that we are reaching the limits of what sensors can produce. Then we are back to analog days, it's all about the lens. And format size. And skill. Not so much about upgrading.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Manoli on October 25, 2014, 10:00:20 am
You have a Nikon D3 (12MP) with a 50/1.8 lens.
You print A2 or larger - what do you 'upgrade' to (one or the other): a Zeiss Otus or a Nikon D810 (36MP)?



Title: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 25, 2014, 10:15:42 am
D810,

Unless I shoot full aperture…

BR
Erik

You have a Nikon D3 (12MP) with a 50/1.8 lens.
You print A2 or larger - what do you 'upgrade' to (one or the other): a Zeiss Otus or a Nikon D810 (36MP)?




Title: Re: Do Sensors “Outresolve” Lenses?
Post by: synn on October 25, 2014, 10:48:51 am
You have a Nikon D3 (12MP) with a 50/1.8 lens.
You print A2 or larger - what do you 'upgrade' to (one or the other): a Zeiss Otus or a Nikon D810 (36MP)?





If your primary objective is to shoot with a 50mm-equivalent field of view and get the maximum possible quality, sell them both and get a Sigma DP2.
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 25, 2014, 11:11:35 am
You have a Nikon D3 (12MP) with a 50/1.8 lens.
You print A2 or larger - what do you 'upgrade' to (one or the other): a Zeiss Otus or a Nikon D810 (36MP)?

Essentially the same answer as Erik gave.

At 'A2' output size the 12MP resolution would be enough. So if one never prints larger, then perhaps the Otus might be appealing, mostly due to the improved wider aperture performance (the price, weight, and lack of AF are minuses). For wider aperture use, obviously the Otus would be preferable.

The resolution gain will be much larger from upgrading the sensor resolution (unless the corners are very poor). The image from the lower quality lens can be significantly improved by the combination of higher sampling density and proper sharpening, even for larger than 'A2' prints (which will normally also be viewed from a bit further away than the 70 centimetres or 28 inches one would expect for an 'A2' size for matched 55mm lens perspective).

You can roughly simulate the improvement in resolution by shooting with an 85mm focal length and comparing that to same feature size output from a 50mm. One could also compare a copy down-sampled to 58% with the original size version, and output them to the same size, to get an idea of the magnitude of the change. It's not a perfect simulation, but it will show whether the difference is something worth investing in.

Cheers,
Bart
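One way to run the single-image version of that test, as a sketch: the file name is a placeholder, Pillow does the resampling, and 58% comes from 50/85.

Code:
from PIL import Image

img = Image.open("test.tif")                   # hypothetical raw conversion
ratio = 50.0 / 85.0                            # ~0.588, i.e. '58%'

# Downsample to 58% (simulating the lower sampling density), then bring
# both versions to the same output size for a side-by-side comparison.
small = img.resize((round(img.width * ratio), round(img.height * ratio)),
                   Image.LANCZOS)
small.resize(img.size, Image.LANCZOS).save("simulated_lower_density.tif")
img.save("reference_full_density.tif")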
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: dwswager on October 25, 2014, 11:20:15 am
You have a Nikon D3 (12MP) with a 50/1.8 lens.
You print A2 or larger - what do you 'upgrade' to (one or the other): a Zeiss Otus or a Nikon D810 (36MP)?


Nikon D810, assuming you don't require the durability and working speed the D3 brings to the table. While none of the Nikon 50mm lenses (1.4G to the 1.8D) are all that spectacular, in almost all shooting conditions you will get more out of the D810 sensor, especially as the output size increases. Oh, and I suspect other lenses will make their way onto that body, which will similarly give better results. Besides, there are more than just 'resolution' advantages to be gained from the camera upgrade.

Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 25, 2014, 11:31:45 am
Besides, there are more than just 'resolution' advantages to be gained from the camera upgrade.

Indeed, and one may save a tiny bit of money for other/future lens upgrades when the price of the camera body goes down faster than that of the lens.

Cheers,
Bart
Title: Re:
Post by: dwswager on October 25, 2014, 11:34:20 am
But if I have a lens that can deliver just 12 MP worth of data I would rather use a phone cam.

Best regards
Erik


This begs the question, "Does one believe they could make just as good an image (quality of the output file) with their phone as, say, a Nikon D3 or even a D300s?"

Both these cameras have older 12MP sensors (FX and DX), whose data limits the output, so no matter how much more data they might be getting, all they can give is 12MP worth. Most 35mm format film (24mm x 36mm) carried less resolution data than that!
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: Torbjörn Tapani on October 25, 2014, 12:29:27 pm
You have a Nikon D3 (12MP) with a 50/1.8 lens.
You print A2 or larger - what do you 'upgrade' to (one or the other): a Zeiss Otus or a Nikon D810 (36MP)?
I would go with the D810 if that was my only goal and choice. Realistically I would get a D800E and a Sigma 50 Art.
Title: Re:
Post by: ErikKaffehr on October 25, 2014, 12:40:22 pm
Hi,

Nokia had a mobile phone with 41 MP resolution, and it was enthusiastically reported to be quite close to the Canon 5DII in image quality under good light.

What I am saying is that a lens that would deliver just 12 MP would deliver it with very bad contrast (MTF) at fine pixels, as otherwise it would deliver far better resolution than 12 MP. It is sort of not realistic to make a lens that has decent MTF at 12 MP but suddenly drops to nil at an arbitrary limit like 12 MP. A lens delivering zero MTF at 12 MP may deliver say 3-6 MP at 50% MTF. And that bad resolution would push it into phone camera territory.

Any good lens would probably deliver something like 50-200 MP at 0% MTF on full frame, I guess.

Best regards
Erik

This begs the question, "Does one believe they could make just as good an image (quality of the output file) with their phone as, say, a Nikon D3 or even a D300s?"

Both these cameras have older 12MP sensors (FX and DX), whose data limits the output, so no matter how much more data they might be getting, all they can give is 12MP worth. Most 35mm format film (24mm x 36mm) carried less resolution data than that!
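A rough converter between 'MP on 24x36' and spatial frequency helps anchor Erik's figures above; it assumes a 3:2 grid and equates the megapixel count with the Nyquist frequency of that grid.

Code:
import math

def mp_to_cy_per_mm(mp, height_mm=24.0, aspect=1.5):
    pixels_high = math.sqrt(mp * 1e6 / aspect)   # e.g. 12 MP -> ~2828 px high
    return pixels_high / (2 * height_mm)         # Nyquist in cy/mm

for mp in (3, 6, 12, 36):
    print(f"{mp:2d} MP on full frame ~ {mp_to_cy_per_mm(mp):.0f} cy/mm")

So 'zero MTF at 12 MP' means extinction near 59 cy/mm, and '3-6 MP at MTF50' means 50% contrast is already gone by roughly 30-42 cy/mm, which indeed is phone camera territory for a full-frame image circle.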
Title: Re: Sv: Re: Do Sensors “Outresolve” Lenses?
Post by: ErikKaffehr on October 25, 2014, 12:55:32 pm
Hi,

A2 is my normal print size and at that size I see little difference between 39 MP MFD, 24 MP full frame or 16 MP APS-C.

For larger sizes, I would say a D810 or something like it – paired with a decent lens – would be beneficial, and a $400 Sigma 70/2.8 would be just fine. The great strength of the Otus is that it is really good at f/1.4, but I don't feel I need it for any size of print if shooting at medium apertures.

Best regards
Erik



I would go with the D810 if that was my only goal and choice. Realistically I would get a D800E and a Sigma 50 Art.
Title: Re:
Post by: dwswager on October 25, 2014, 03:53:15 pm
Hi,

Nokia had a mobile phone with 41 MP resolution, and it was enthusiastically reported to be quite close to the Canon 5DII in image quality under good light.

What I am saying is that a lens that would deliver just 12 MP would deliver it with very bad contrast (MTF) at fine pixels, as otherwise it would deliver far better resolution than 12 MP. It is sort of not realistic to make a lens that has decent MTF at 12 MP but suddenly drops to nil at an arbitrary limit like 12 MP. A lens delivering zero MTF at 12 MP may deliver say 3-6 MP at 50% MTF. And that bad resolution would push it into phone camera territory.

Any good lens would probably deliver something like 50-200 MP at 0% MTF on full frame, I guess.

Best regards
Erik


Given the unmentioned assumption of the loss of 6-9 MP of data that was the baseline of the discussion, I guess I would still rather have a D3 receiving, digitizing and processing it than my phone electronics. You can still do a lot with 3-6MP!

I don't want to read anything into your posting, but it seems like you are saying 3-6MP isn't worth the effort. I know guys doing more with 12MP than most people with D8x0 cameras might ever do with 36MP. Like you wrote yesterday (paraphrasing): most equipment outperforms its user!

About 12 years ago I took 2 photos of my 2 daughters holding monarch butterflies in the palms of their hands. My mother-in-law loved them and asked for prints. Of course, I had taken these with the 2MP Coolpix 950 I was playing with. But I have to say, after some significant post processing effort, I printed them at 11"x14". About 6 months later she recreated the photo with my niece, the 3rd female grandchild. I shot that with an N90s and 85mm f/1.8D, trying to recreate the framing and perspective. The film was scanned with my Nikon LS-1000 Super Coolscan. There was much more data in this film scan, and I was required to keep the image a little softer than I would have liked to match the other 2. All 3 prints still hang side by side in my in-laws' home, and I can tell you the 2MP images hold up respectably next to the 3rd. You really need to step up close and almost pixel peep to see the additional detail in the 3rd print. At normal viewing distance, it isn't really noticeable.
Title: Re:
Post by: Torbjörn Tapani on October 25, 2014, 03:56:36 pm
I like to think about displays as much as prints. You need an 18MP camera to exceed the resolution on the long axis of the new iMac 5K Retina. 12MP is going to feel very dated soon. Not even a 16MP 3:2 camera is enough to make a background image for it.
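Checking those numbers, assuming the 5120x2880 panel and a 3:2 sensor that must cover the 5120-pixel long axis edge to edge:

Code:
w, h = 5120, 2880
need = w * (w / 1.5)                 # 3:2 frame wide enough for the panel
print(f"panel: {w * h / 1e6:.1f} MP, 3:2 frame covering it: {need / 1e6:.1f} MP")

That comes to about 17.5 MP, so a 16 MP 3:2 camera falls just short of a full-width fit, as stated.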
Title: Re:
Post by: ErikKaffehr on October 25, 2014, 04:13:12 pm
Hi,

I don't really see your point, but I feel that very good prints can be made from small MP files. Early on, 135 on good film was considered to be around 6MP, but it was found that 3 MP digital was actually a good match for 135 film.

Now, my normal print size is A2, and I don't feel that 6 MP is good enough for that. But I don't see a lot of difference between 12 MP and 24 MP at that size. So, I would say that I (personally) need something between 12 MP and 24 MP for a very good print. That difference from 6 MP to 24 MP is worth a journey to Iceland for me.

Personally, I would never buy a D3. I don't shoot high ISO or 10 FPS. I shoot on a tripod, with MLU and at 50 ISO. So with my shooting habits a low MP, high FPS camera simply makes no sense.

Some folks are shooting handheld at high ISOs; that is another game, not about resolution but about getting the image.

Best regards
Erik




Given the unmentioned assumption of the loss of 6-9 MP of data that was the baseline of the discussion, I guess I would still rather have a D3 receiving, digitizing and processing it than my phone electronics. You can still do a lot with 3-6MP!

I don't want to read anything into your posting, but it seems like you are saying 3-6MP isn't worth the effort. I know guys doing more with 12MP than most people with D8x0 cameras might ever do with 36MP. Like you wrote yesterday (paraphrasing): most equipment outperforms its user!

About 12 years ago I took 2 photos of my 2 daughters holding monarch butterflies in the palms of their hands. My mother-in-law loved them and asked for prints. Of course, I had taken these with the 2MP Coolpix 950 I was playing with. But I have to say, after some significant post processing effort, I printed them at 11"x14". About 6 months later she recreated the photo with my niece, the 3rd female grandchild. I shot that with an N90s and 85mm f/1.8D, trying to recreate the framing and perspective. The film was scanned with my Nikon LS-1000 Super Coolscan. There was much more data in this film scan, and I was required to keep the image a little softer than I would have liked to match the other 2. All 3 prints still hang side by side in my in-laws' home, and I can tell you the 2MP images hold up respectably next to the 3rd. You really need to step up close and almost pixel peep to see the additional detail in the 3rd print. At normal viewing distance, it isn't really noticeable.

Title: Re:
Post by: dwswager on October 25, 2014, 04:37:13 pm
Hi,

I don't really see your point, but I feel that very good prints can be made from small MP files. Early on, 135 on good film was considered to be around 6MP, but it was found that 3 MP digital was actually a good match for 135 film.

Now, my normal print size is A2, and I don't feel that 6 MP is good enough for that. But I don't see a lot of difference between 12 MP and 24 MP at that size. So, I would say that I (personally) need something between 12 MP and 24 MP for a very good print. That difference from 6 MP to 24 MP is worth a journey to Iceland for me.

Personally, I would never buy a D3. I don't shoot high ISO or 10 FPS. I shoot on a tripod, with MLU and at 50 ISO. So with my shooting habits a low MP, high FPS camera simply makes no sense.

Some folks are shooting handheld at high ISOs; that is another game, not about resolution but about getting the image.

Best regards
Erik


Oh, so we are in violent agreement! It was the phone cam comment that threw me. I've taken few photos with my phone, as I find the quality, and especially the ability to post-process on the phone, so limiting, at least to me. The only benefit to it is immediacy and ease.

Title: Re: It's not binary
Post by: Here to stay on October 26, 2014, 02:12:34 am


I have run MTF tests with pixel sizes from 9 µm to 3.8 µm, and lens performance essentially always peaks at the same apertures, but with smaller pixels we get more resolution at a given MTF (which is often chosen at 50%).

So what I would say is that the advantage of smaller pixels is better definition of whatever the lens renders, and that applies to any somewhat well corrected lens.

Best regards
Erik
Thank you for the response.

Sorry for the late follow-up; it's been a busy end of the week here.
I first noticed an earlier peak when using MTF Mapper for AF & KatzEye calibration. I hadn't noticed it from generation to generation of camera bodies, but rather going from a 6MP to a 24MP camera. It started with a 300 2.8, and I had noticed it on my 24MP body; normally I would set my focus calibration for f/5.6, my most used range, to help combat focus shift problems with Sigma. I later ran a quick MTF check from f/8 to f/2.8 and found that I peaked somewhat earlier than on my first 6MP cropped body. This peaking, I also noticed, had a greater effect with targets further away and with longer FLs when setting up focus for the common distance I shoot at. For example, with a 50mm 1.4, when using a target 4-7m away there was very little difference; once I started setting the focus targets 8-10m away I started to see this peak.
Now that I shoot mainly FF, it would be interesting to see some data going from 12-36MP and see whether this is a figment of my imagination { blur from resolution-limited sensor -> highest resolution (highest Airy disk edge contrast that the sensor can detect) <- blur from diffraction } or not.

I had a renewed interest in this when I first saw these Lensrentals charts:
(https://i0.wp.com/www.lensrentals.com/blog/media/2012/03/zeiss-100-test.jpg)
(https://i0.wp.com/www.lensrentals.com/blog/media/2013/03/D700N50-2-686x1024.jpg)
(https://i0.wp.com/www.lensrentals.com/blog/media/2013/03/800N50-690x1024.jpg)

When we take a look at the 50 1.8 G on the 5.92 µm pixel D3x
(http://www.photozone.de/images/8Reviews/lenses/nikkor_afs_50_18_d3x/mtf.png)
compared to the 50mm 1.8 G on the 3.39 µm pixel V1
(http://www.photozone.de/images/8Reviews/lenses/nikkor_afs_50_18_v1/mtf.png)



Title: Re: It's not binary
Post by: Jim Kasson on October 26, 2014, 12:17:23 pm
This peaking, I also noticed, had a greater effect with targets further away and with longer FLs when setting up focus for the common distance I shoot at. For example, with a 50mm 1.4, when using a target 4-7m away there was very little difference; once I started setting the focus targets 8-10m away I started to see this peak.

Moving the target further away has the effect of making the slanted edge sharper, and therefore the SFR software capable of finer discrimination. Try the test with a target that is cut from thin mylar rather than printed and see if the effect still occurs.

It could also be that the lens is better corrected for farther subject distance.

Jim
Title: Re: It's not binary
Post by: Fine_Art on October 26, 2014, 03:12:44 pm
Moving the target further away has the effect of making the slanted edge sharper, and therefore the SFR software capable of finer discrimination. Try the test with a target that is cut from thin mylar rather than printed and see if the effect still occurs.

It could also be that the lens is better corrected for farther subject distance.

Jim

Can you explain how that works a bit more? I have the Sigma 35 Art, which I think shows the opposite. It can be incredible within 30ft; it seems average at infinity.
Title: Re: It's not binary
Post by: Jim Kasson on October 26, 2014, 03:42:22 pm
Can you explain how that works a bit more? I have the Sigma 35 Art, which I think shows the opposite. It can be incredible within 30ft; it seems average at infinity.

Are you talking about slanted-edge testing, or in general? If it's the latter, your Sigma might not be well-corrected at infinity, although, in my experience, that's rarer than not being corrected for close distances.

Jim
Title: Re: It's not binary
Post by: Bart_van_der_Wolf on October 26, 2014, 03:44:56 pm
Moving the target further away has the effect of making the slanted edge sharper, and therefore the SFR software capable of finer discrimination. Try the test with a target that is cut from thin mylar rather than printed and see if the effect still occurs.

Hi Jim,

That's correct, but it could also indicate that the target was not printed with a high enough resolution (e.g. 300 or 360 PPI instead of 600 PPI, or 720 PPI with 'finest detail'). Also, shooting from a distance of 50x the focal length would produce an image magnification on the sensor of 1:49, or a factor of 0.02041, which should reduce a printed edge transition to something sharp enough to exceed the resolution of the lens alone.

Quote
It could also be that the lens is better corrected for farther subject distance.

That could also play a role. It is usually recommended to test at something close to the actual shooting distance, although that may create some practical challenges, especially for long focal lengths. I think it's usually safe to test at 25-50x the focal length if the target is of good quality (600, or 720 PPI with 'finest detail'); otherwise 100x FL should suffice (e.g. for C-print targets). Of course accurate focusing is critical, and harder than one may expect.

It would be interesting to see how e.g. the Otus responds to shooting a slanted edge from different distances.

Cheers,
Bart
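The magnification figure above, plus what it does to a printed edge, can be checked with thin-lens geometry; the 600 PPI dot size is the only other assumption.

Code:
for dist_over_f in (25, 50, 100):
    m = 1.0 / (dist_over_f - 1)              # magnification at distance k*f
    dot_mm = 25.4 / 600                      # one 600 PPI printer dot
    print(f"distance {dist_over_f}x f: m = 1:{dist_over_f - 1} ({m:.5f}), "
          f"600 PPI dot -> {dot_mm * m * 1000:.2f} um on sensor")

At 50x the focal length a 600 PPI dot projects to under one micron on the sensor, far below any sensel pitch discussed here, which is why distance makes a decent print behave like an ideal edge.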
Title: Re: It's not binary
Post by: ErikKaffehr on October 26, 2014, 03:45:16 pm
Can it focus correctly at infinity?

Best regards
Erik

Are you talking about slanted-edge testing, or in general? If it's the latter, your Sigma might not be well-corrected at infinity, although, in my experience, that's rarer than not being corrected for close distances.

Jim
Title: Re: It's not binary
Post by: Here to stay on October 27, 2014, 01:29:41 am
Moving the target further away has the effect of making the slanted edge sharper, and therefore the SFR software capable of finer discrimination. Try the test with a target that is cut from thin mylar rather than printed and see if the effect still occurs.

It could also be that the lens is better corrected for farther subject distance.

Jim
What I am using is a thin mylar-like material with adhesive, mounted to a flat white plastic board. I was using a printed chart from the web and found that up close it was the print resolution causing me problems.

If this were due to the lens being better corrected for farther subjects, would the peak not still be held at the same f-stop?
I think part of the problem is that when testing up close the software is least accurate, and that further away the software can detect the peak, as I describe below.

Why I bring this up is that with a 5D Classic several lenses would peak at, let's say, f/7, and then moving over to a higher resolution camera like a 7D they would peak earlier in the f-stop range. This would suggest to me that we indeed see a peak at the point where the resolution of the sensor can more accurately map the location of the Airy disks; not just where the lens peaks, but where the system peaks (lens and pixel size). It's not a large peak, but nevertheless it's there.

 
Title: Re: It's not binary
Post by: bjanes on October 27, 2014, 08:10:35 am
Moving the target further away has the effect of making the slanted edge sharper, and therefore the SFR software capable of finer discrimination. Try the test with a target that is cut from thin mylar rather than printed and see if the effect still occurs.

It could also be that the lens is better corrected for farther subject distance.

Jim

Jim,

That is an interesting suggestion. Where can one obtain this mylar?

Bill
Title: Re: It's not binary
Post by: Fine_Art on October 27, 2014, 11:12:56 am
Can it focus correctly at infinity?

Best regards
Erik


I better test that. It is very close if it is hitting the limits.
Title: Re: It's not binary
Post by: Jim Kasson on October 27, 2014, 11:52:11 am
That is an interesting suggestion. Where can one obtain this mylar?

My suggestion is a piece of completely exposed, then developed B&W 4x5 film (T-Max 400 (TMY) would be my first choice), taped at a slight angle emulsion side out to a piece of white, smooth matte paper, such as the back of a piece of printing paper (Oriental Seagull VC would go with the film, but I use Exhibition Fiber), using black gaffer's tape. Be sure to light the target so that the 7-mil (?) thickness of the film doesn't cast a shadow on the backing paper. Estar is Kodak's name for something very much like Mylar. To keep the film close to the plane of the paper, you can get the gaffer tape close to the edge, but if any fibers stick over the edge, it will confuse the SFR software. You could also use a piece of smooth matte inkjet paper for the backing; if you do that, you can print a Siemens star on it for focusing -- focusing on the slanted edge itself is difficult with an SLR (although easy with the Betterlight scanning back). 120 film would work, too. It's thinner, but it curls more. Developed film curls towards the emulsion side, which is the opposite of what you'd like.

Not developing the film is a possibility. That will reduce the contrast of the edge, but the film will lie flatter, and you probably don't need that much contrast anyway.

As a thinner alternative to photographic film, you might consider the black coated aluminum foil used in studio lighting, or industrial materials such as these:

http://www.tesa.com/industry/electronics/assortment_overview/functional_tapes/light_shading_blocking

The thinner the black material, the less chance of its casting a shadow, but the greater chance you'll bend and crinkle it trying to attach it. Try not to cut your own edge; the cutting equipment used by the material supplier will probably be smoother than anything you can do yourself.

If you have a white backing and a black edge maker, you're going to have a high-contrast target. Keep the exposure down far enough that the demosaicing and other processing doesn't cause clipping, or your sfr program will get confused.

Jim
Title: Re: It's not binary
Post by: Bart_van_der_Wolf on October 27, 2014, 12:07:53 pm
The thinner the black material, the less chance of its casting a shadow, but the greater chance you'll bend and crinkle it trying to attach it. Try not to cut your own edge; the cutting equipment used by the material supplier will probably be smoother than anything you can do yourself.

If you have a white backing and a black edge maker, you're going to have a high-contrast target. Keep the exposure down far enough that the demosaicing and other processing doesn't cause clipping, or your sfr program will get confused.

In the past I've used simple self-adhesive black and self-adhesive white PVC material, the kind one can use to cover shelves. Using the black material at the bottom, with the white on top cut with a sharp knife, will make the black shine through a bit and reduce the extreme contrast somewhat. The material is also rather thin, and won't cast much of a shadow, which would fall on the black layer and be hardly visible anyway. It's also weatherproof, in case one needs to shoot a test outside and can't wait for better weather.

Cheers,
Bart
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Here to stay on October 28, 2014, 12:46:53 am
How accurate would you say this blog is on understanding diffraction?
https://dtmateojr.wordpress.com/2014/10/09/understanding-the-effects-of-diffraction/ (https://dtmateojr.wordpress.com/2014/10/09/understanding-the-effects-of-diffraction/)
And would you recommend the site for beginners  ?
Title: Re: Do Sensors “Outresolve” Lenses?
Post by: Bart_van_der_Wolf on October 28, 2014, 04:01:22 am
And would you recommend the site for beginners  ?

His account on DPReview seems to have been terminated, and I remember some heated debate over there, with some other actually very well informed contributors. The blog doesn't seem worth the time to untangle the correct from the incorrect statements, especially for 'beginners'. Your mileage may vary.

Cheers,
Bart
Title: Re: It's not binary
Post by: Here to stay on October 28, 2014, 06:27:05 pm


I had a renewed interest in this when I first saw these Lensrentals charts:
(https://i0.wp.com/www.lensrentals.com/blog/media/2012/03/zeiss-100-test.jpg)
(https://i0.wp.com/www.lensrentals.com/blog/media/2013/03/D700N50-2-686x1024.jpg)
(https://i0.wp.com/www.lensrentals.com/blog/media/2013/03/800N50-690x1024.jpg)

When we take a look at the 50 1.8 G on the 5.92 µm pixel D3x
(http://www.photozone.de/images/8Reviews/lenses/nikkor_afs_50_18_d3x/mtf.png)
compared to the 50mm 1.8 G on the 3.39 µm pixel V1
(http://www.photozone.de/images/8Reviews/lenses/nikkor_afs_50_18_v1/mtf.png)





I have been told countless times on another site that the work presented by Lensrentals and Photozone is bogus, and I would welcome your insight as to why a higher resolution sensor would show a different f-stop at which the system (sensor and lens) peaks.
Title: Re: It's not binary
Post by: ErikKaffehr on October 28, 2014, 07:00:37 pm
Hi,

Hard to explain why MTF50 reaches its max at smaller apertures, but I would guess it may have to do with focusing. The latest generation Canon lenses and cameras have considerably better AF than older cameras. Exact focusing using LV may be easier on newer cameras.

Another point is that the peak is quite coarsely sampled. The lenses probably peak somewhere between f/5.6 and f/8.

A third observation is that the MTF50 figure measures the frequency where MTF reaches 50%, so higher resolution cameras are measured at a different point of the MTF curve than lower resolution cameras. Were the plot to show MTF at, say, 40 lp/mm, the peaking might be different.

Something that needs to be pointed out is that Lensrentals' measurements use DCRaw without sharpening, while Photozone uses Lightroom with standard sharpening. The Lensrentals values are presented as lp/picture height while Photozone data is LW/picture height, so the data differ by a factor of two simply because of nomenclature.

Best regards
Erik




I have been told countless times on another site that the work presented by Lensrentals and Photozone is bogus, and I would welcome your insight as to why a higher resolution sensor would show a different f-stop at which the system (sensor and lens) peaks.
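The lp/ph vs LW/ph trap Erik mentions above is easy to trip over, so here it is spelled out, assuming a 24 mm picture height; the MTF50 value is illustrative.

Code:
mtf50_lp_per_mm = 50.0                   # an illustrative measurement
height_mm = 24.0
lp_ph = mtf50_lp_per_mm * height_mm      # line pairs per picture height
lw_ph = 2 * lp_ph                        # line widths per picture height
print(f"{mtf50_lp_per_mm:.0f} lp/mm = {lp_ph:.0f} lp/ph = {lw_ph:.0f} LW/ph")

One line pair is two line widths, so the same measurement quoted in the two units differs by exactly 2x.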
Title: Re: It's not binary
Post by: Ian stuart Forsyth on October 29, 2014, 02:40:59 am
I have been told countless times that the work presented by Lensrental and photozone are bogus on another sight and would welcome your insight as to why a higher resolution sensor would show a different Fstop at which the system (sensor and lens) peaks  
I believe what is happening is that we can more precisely detect the point where the blur from aberrations and the blur from diffraction meet. With the D700, the blur from being resolution limited is hiding the point where blur from aberrations starts to limit the contrast in the image. It is kind of like measuring with a meter stick with cm accuracy and then measuring a second time with a stick with 1mm accuracy, and finding that you have 2 different final measurements.

The problem at this point in time is that we don't know what the peak f-stop of a lens is until we can accurately measure every photon strike the lens projects. It's kind of like not knowing the point at which a soft lens stops resolving detail until we hit the limit of light. If we had a true MTF graph that showed the resolution of a lens through different f-stops, what we would see is that as the pixels get smaller and smaller, the data collected would move closer and closer to that line. This manifests itself in an MTF graph as a smaller and smaller f-stop as the pixel size shrinks, moving closer to that absolute line. In other words, the lens peaks the same, but how we measure where the system peaks with that lens differs.
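A toy model can illustrate that last point, assuming Gaussian aberration blur that shrinks as you stop down (floored at 1 um RMS), ideal diffraction at 550 nm, and a square sensel; every constant here is invented for illustration. What it shows is how much a coarse sensel flattens the aperture-to-aperture differences, making the measured peak shallow and easy to misplace, while finer pixels track the lens's own curve more faithfully.

Code:
import numpy as np

f = np.linspace(0.5, 300, 6000)                     # cy/mm
apertures = (2.8, 4.0, 5.6, 8.0, 11.0, 16.0)

def lens_mtf(N):
    sigma_mm = max(1.0, 12.0 / N) * 1e-3            # aberration blur, invented
    aberr = np.exp(-2 * (np.pi * sigma_mm * f) ** 2)
    s = np.clip(f * 550e-6 * N, 0.0, 1.0)           # diffraction at 550 nm
    return aberr * (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s ** 2))

def mtf50(curve):
    return f[np.argmax(curve < 0.5)]                # first 50% crossing

print("aperture | lens alone | 8.4 um px | 4.88 um px   (MTF50, cy/mm)")
for N in apertures:
    lens = lens_mtf(N)
    row = [mtf50(lens)]
    for pitch_mm in (8.4e-3, 4.88e-3):
        row.append(mtf50(lens * np.abs(np.sinc(f * pitch_mm))))
    print(f"  f/{N:<4} | {row[0]:10.1f} | {row[1]:9.1f} | {row[2]:10.1f}")

In this toy the true optimum stays at the same stop; what changes with pixel pitch is how clearly the measurement can see it, which is the meter stick analogy above in numbers.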