
Author Topic: Do Sensors “Outresolve” Lenses?  (Read 24237 times)

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: It's not binary
« Reply #40 on: October 24, 2014, 06:45:26 am »

Am I correct to assume that the maximum resolution (I really should be calling it contrast) that a lens and sensor can resolve is at the point where the lens projects an Airy disk of a size at which the sensor's pixel pitch can accurately measure the size, brightness and location of that disk in an image?

Hi,

That's hard to answer for several reasons, one of which is that the diffraction pattern is not of limited/fixed size. We usually refer to its diameter as the diameter of the first zero (ring) of the Airy disk pattern, which contains something like 83.8% of the total intensity of the pattern, and the intensity is not uniform across that diameter (so alignment with the sensel grid also plays a role).

What we do know is at which spatial frequency the diffraction pattern of a perfect circular aperture will reduce image contrast (MTF) to zero amplitude, for 555nm wavelength:
cy/mm = 1 / (wavelength x aperture) , e.g. 1 / (0.000555 x 8) = 225.2 cy/mm
and it takes (more than) one full cycle to allow reconstruction of the original waveform (225.2 / 2 = 112.6 cy/mm, i.e. 1/0.00888 mm, or an 8.88 micron feature size).

We also know that the sensor array has a limiting resolution of maximum:
Nyquist frequency in cycles/mm = 0.5 / senselpitch, e.g. 0.5  / 0.00488 = 102.5 cy/mm

We can therefore calculate the aperture at which diffraction alone will reduce contrast to 0% at the Nyquist frequency, totally eliminating resolution there and preventing all aliasing:
Aperture = (2 x senselpitch) / wavelength, e.g. (2 x 0.00488) / 0.000555 = f/17.6
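
For anyone who wants to play with the numbers, here is a minimal Python sketch of the three calculations above. It assumes a perfect circular aperture, a single 555 nm wavelength, and the example 4.88 micron sensel pitch; the variable names are just my own.

Code:
# Diffraction cutoff, sensor Nyquist, and the "extinction" aperture from the formulas above.
# Illustrative values only: perfect circular aperture, single wavelength, infinity focus.

wavelength_mm = 0.000555    # 555 nm expressed in mm
f_number = 8.0
sensel_pitch_mm = 0.00488   # 4.88 micron pitch

cutoff_cy_mm = 1.0 / (wavelength_mm * f_number)        # MTF reaches zero here (~225.2 cy/mm at f/8)
nyquist_cy_mm = 0.5 / sensel_pitch_mm                  # sensor limiting resolution (~102.5 cy/mm)
extinction_f_number = (2.0 * sensel_pitch_mm) / wavelength_mm   # diffraction kills Nyquist (~f/17.6)

print(f"Diffraction cutoff at f/{f_number:g}: {cutoff_cy_mm:.1f} cy/mm")
print(f"Sensor Nyquist frequency: {nyquist_cy_mm:.1f} cy/mm")
print(f"Contrast reaches 0% at Nyquist at about f/{extinction_f_number:.1f}")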

However, that only takes diffraction (of a single wavelength) into account. Diffraction will, in practice, be combined with the MTFs of residual lens aberrations, a less than perfectly round aperture, defocus (anything not in the thin perfect focus plane), a filter stack with or without AA-filter and a sensor cover glass, and a Bayer CFA pattern that needs to be demosaiced. The diffraction pattern size also changes with focus distance, so the above formula is based on infinity focus.

So resolution will be totally limited at apertures wider than that for diffraction alone. It is also not simple to calculate, because there are positive and negative wave contributions that will cause interference patterns which may or may not align locally with the sensel grid.

The only thing we do know for certain is that the absolute diffraction limit to resolution will not be exceeded (if it is even reached). Instead, the overall image will already deteriorate before that limit is reached by stopping down. Only high contrast detail will even theoretically reach that limit; lower contrast features will have lost significant modulation long before that. That's why limiting resolution is often set at lower spatial frequencies, e.g. MTF10 or Nyquist, whichever is reached first. It also explains why even lower spatial frequencies, e.g. MTF50, are often used to give an overall impression of average performance for comparisons between different systems.

Cheers,
Bart

P.S. Using smaller sampling pitches than can be resolved from a diffraction limited image still brings a benefit, because the diffraction pattern is not uniform. Smaller sensels allow one to more accurately sample diffraction patterns that are larger than a single pixel, and thus allow more accurate deconvolution restoration of the original signal.
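
To illustrate that P.S. with a sketch (not a measurement): the intensity profile of the Airy pattern for a perfect circular aperture can be written with a Bessel function, and sampling it with a finer pitch simply places more samples across the non-uniform core and rings, which is what makes deconvolution better conditioned. The pitches below are arbitrary example values.

Code:
import numpy as np
from scipy.special import j1

def airy_intensity(r_mm, wavelength_mm=0.000555, f_number=8.0):
    # Normalized Airy pattern intensity at radius r for a perfect circular aperture.
    x = np.pi * r_mm / (wavelength_mm * f_number)
    x = np.where(x == 0.0, 1e-12, x)      # avoid 0/0 at the exact centre
    return (2.0 * j1(x) / x) ** 2

# Sample the same diffraction pattern with a coarse and a fine sensel pitch.
for pitch_um in (6.0, 3.0):
    pitch_mm = pitch_um / 1000.0
    r = np.arange(0.0, 0.015, pitch_mm)   # radial sample positions out to 15 microns
    print(f"{pitch_um:g} um pitch: {len(r)} samples, intensities {np.round(airy_intensity(r), 3)}")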
« Last Edit: October 24, 2014, 12:48:31 pm by BartvanderWolf »
Logged
== If you do what you did, you'll get what you got. ==

dwswager

  • Sr. Member
  • ****
  • Offline
  • Posts: 1375
Re: Do Sensors “Outresolve” Lenses?
« Reply #41 on: October 24, 2014, 12:49:51 pm »

Hi,

Maybe because 12MP means nothing without further context, e.g. sampling density or surface area, and even that is only part of the image chain...

But that 100% is not the lens, it's the scene we want to image. Each component of the imaging chain offers 100% of its own performance, yet it may be the weaker or the stronger link in the cascade of interactions that follow. It is the weakest contributor that sets the ceiling (not the floor).
 

Yet MPs, if defined as sampling density (for limiting resolution, the Nyquist frequency) and number of sensels (for field of view, or the required image magnification factor to cover a certain field of view), usually are the weakest link, not the lens (unless the lens is diffraction limited or severely aberration limited). The proof is that image resolution improves proportionally faster from denser sampling of the projected image of an existing lens than from better lenses (assuming normal lens designs) on a sensor that is limiting resolution.

Cheers,
Bart

But I did give a density: 12MP at 135 full frame (2x3), and I was comparing same-size sensors and image circles.

I said that each link subtracts from the image, which puts a ceiling on image quality.  But once the lens produces the image circle, that is all there is, and it becomes the new ceiling.  The sensor does not have direct access to the scene; it only has access to the image circle produced by the lens.  So we are in agreement.

Again we agree that in general sensors have been a limiting factor.  That lenses have been able to provide more data than the sensors could properly resolve.  But that wasn't the issue.  The issues were:

1. Can sensors outperform lenses?  The answer to that is yes...depending on the lens and sensor;

2. What are the implications... is it better to have 12MP worth of data fully resolved in 12MP, or 12MP of data resolved into 36 'mushy' MP?  My point being that subdividing the data into smaller and smaller units does not give more data, and any resulting print at equal size from both sensors will look similar, all else being equal.  Only when the lens outperforms a lower resolution sensor, or the enlargement necessary requires more data than a lower resolution sensor can provide, does it matter at all!

3. Does a higher performing sensor make a lens look worse from a resolution standpoint?  Again, the answer is no.  It will look identical.  If you pixel peep, the per-pixel sharpness of the higher pixel count sensor will not necessarily look as sharp as the per-pixel sharpness of the low resolution sensor, but the overall image will look the same.

The holy grail would be a sensor with continuously scalable pixels, and a camera able to calculate, from the image circle given by the lens, just how much data is available, scaling the sensor pixel pitch to match the data provided up to the limit of the sensor.  No wasted pixels or file space.

The real world 'What do I do with this theoretical information' realities are:

1. Until we reach a break even point with lens resolution, buy the best, highest resolution camera you can afford...within the other boundary constraints like High ISO performance, file size, frame rate, required output size, cost, etc.

2. Understand that when the 'sensor' was independent of the camera (film), it usually was a wiser investment to buy better lenses than better cameras.  Cameras were generally purchased based on functionality and durability, not image quality.  Now that the sensor is integral to the camera the calculation changes.  But remember, film wasn't all that great either as a sensor.  I accurately predicted to my 'film snob' friends that when digital got to 6MPs, film would become an alternative process.  I think I was playing with a 1.2MP Coolpix 900 at the time.  Hell, the Nikon D1 was 2.74MP and pretty much was the seminal moment in the transition to digital.

3. Don't believe that lenses will outperform even a 24MP sensor like the D750's in all situations.  Pixel density is just one of many factors affecting the overall sharpness and quality of a final image.  The gazillions of posts by D8x0 owners whining that their shots aren't sharp stand in testament.

Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: It's not binary
« Reply #42 on: October 24, 2014, 01:00:26 pm »

The only thing we do know for certain is that the absolute diffraction limit to resolution will not be exceeded (if it is even reached). Instead, the overall image will already deteriorate before that limit is reached by stopping down. Only high contrast detail will even theoretically reach that limit; lower contrast features will have lost significant modulation long before that. That's why limiting resolution is often set at lower spatial frequencies, e.g. MTF10 or Nyquist, whichever is reached first. It also explains why even lower spatial frequencies, e.g. MTF50, are often used to give an overall impression of average performance for comparisons between different systems.

Before I get started on the issue quoted above, let me thank you, Bart, not only for saving me the trouble of saying what you've said, but for saying it better than I would have.

Now, to the issue. There are reasons to think that MTF90 is a more important metric than MTF10 or MTF50 in the final image. However, what's most important in the output isn't necessarily what's most important in the input, i.e. the raw file. It's usually easy to sharpen 80% contrast to 90%; there's enough signal that you don't often run into problems even with not-so-good lenses. It's harder to successfully sharpen when the contrast variation is closer to the noise in the image. That's why, although it's a beautiful experience to look through a lens with stellar high-contrast MTF (the Zeiss 135mm f/2 APO lens springs to mind), MTF50 or 30, or even 10 (although I've found that, as Bart hints at above, MTF10 often occurs past Nyquist, where it's meaningless) is more important for photographers who aren't just going to use the OOC JPEGs or the OOL (out of Lightroom with no knob twisting) raws.
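
To put rough numbers on that (purely illustrative, with a made-up noise level): restoring modulation from a recorded value to a target value needs a gain of roughly target/recorded at that spatial frequency, and the same gain is applied to whatever noise lives at that frequency.

Code:
noise_floor = 0.02     # hypothetical noise modulation at the frequency of interest
target = 0.90          # contrast we would like to see after sharpening

for recorded in (0.80, 0.50, 0.10):
    gain = target / recorded                # boost needed at that spatial frequency
    amplified_noise = noise_floor * gain    # sharpening boosts the noise by the same factor
    print(f"recorded MTF {recorded:.2f}: gain x{gain:.1f}, "
          f"noise modulation {noise_floor:.2f} -> {amplified_noise:.2f}")

Going from 0.80 to 0.90 is roughly a 1.1x boost; going from 0.10 to 0.90 is a 9x boost, which is why detail recorded close to the noise floor cannot be sharpened cleanly.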

Jim
« Last Edit: October 24, 2014, 01:02:29 pm by Jim Kasson »
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: It's not binary
« Reply #43 on: October 24, 2014, 01:16:30 pm »

Jim,

I would say that your words deserve some elaboration…

Anyway, sharpening obviously plays a major role, and I would also add that a clean signal allows for more sharpening. But I would also say that I feel it is better to have a strong MTF from the lens sampled at high frequency than to have a weak MTF sampled at low frequency and to gain "final MTF" by excessive sharpening.

Or, to put it simply, I prefer a very good lens on a very good sensor. The question may be which is preferable:

  • An excellent lens on a very good sensor
  • A very good lens on an excellent sensor

I would say the second option is the better one, because it will give more true detail.

Best regards
Erik

Before I get started on the issue quoted above, let me thank you, Bart, not only for saving me the trouble of saying what you've said, but for saying it better than I would have.

Now, to the issue. There are reasons to think that MTF90 is a more important metric than MTF10 or MTF50 in the final image. However, what's most important in the output isn't necessarily what's most important in the input, i.e. the raw file. It's usually easy to sharpen 80% contrast to 90%; there's enough signal that you don't often run into problems even with not-so-good lenses. It's harder to successfully sharpen when the contrast variation is closer to the noise in the image. That's why, although it's a beautiful experience to look through a lens with stellar high-contrast MTF (the Zeiss 135mm f/2 APO lens springs to mind), MTF50 or 30, or even 10 (although I've found that, as Bart hints at above, MTF10 often occurs past Nyquist, where it's meaningless) is more important for photographers who aren't just going to use the OOC JPEGs or the OOL (out of Lightroom with no knob twisting) raws.

Jim
« Last Edit: October 24, 2014, 01:24:30 pm by ErikKaffehr »
Logged
Erik Kaffehr
 

dwswager

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1375
Re: Do Sensors “Outresolve” Lenses?
« Reply #44 on: October 24, 2014, 01:46:34 pm »

Bart,

While I certainly concede that MTF is dependent on both the lens and sensor (I never said it wasn't, and I'm not sure what made you think I would believe that overall image quality is not dependent on the lens), the lens performance is still independent of the sensor performance, which was my point in responding to a statement that a high resolution sensor would make a lens look bad.
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Do Sensors “Outresolve” Lenses?
« Reply #45 on: October 24, 2014, 01:57:08 pm »

Hi,

Lens and sensor are both parts of an imaging chain.

When you put an image through a lens, some of the image quality is lost. We describe that with MTF. When the image is sampled by the sensor we also lose image quality, while also adding false detail. The sensor also has an MTF.

A lens that just resolves 12 MP would have zero MTF at that sensor's pixel pitch. The image would be extremely mushy. To get decent image quality at the pixel level, a significant MTF is needed. But if a lens has a decent MTF at 12 MP, it will certainly resolve much more detail than a 12 MP sensor can.

A 24 MP sensor can detect that detail, and do it without creating fake detail. Then you can downsize that to 12 MP, and you will end up with a much better image than what would be possible at 12 MP.
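
A rough way to put numbers on this (a sketch under simplifying assumptions, not a claim about any specific camera): model the system MTF as the product of a diffraction-limited lens MTF and the sensel aperture MTF of a 100% fill-factor square pixel, and compare two pitches at the same spatial frequency. The pitches and f-number below are illustrative.

Code:
import numpy as np

def diffraction_mtf(f_cy_mm, wavelength_mm=0.000555, f_number=8.0):
    # MTF of a perfect (diffraction-limited) circular aperture.
    fc = 1.0 / (wavelength_mm * f_number)
    v = np.clip(f_cy_mm / fc, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(v) - v * np.sqrt(1.0 - v * v))

def sensel_mtf(f_cy_mm, pitch_mm):
    # Sensel aperture MTF for a 100% fill-factor square pixel; np.sinc(x) = sin(pi x)/(pi x).
    return np.abs(np.sinc(f_cy_mm * pitch_mm))

f = 59.0   # roughly the Nyquist frequency of a 12 MP full-frame sensor, in cy/mm
for mp, pitch_um in ((12, 8.5), (24, 6.0)):
    pitch_mm = pitch_um / 1000.0
    system = diffraction_mtf(f) * sensel_mtf(f, pitch_mm)
    print(f"{mp} MP (~{pitch_um} um pitch): system MTF at {f:.0f} cy/mm ~ {system:.2f}, "
          f"Nyquist {0.5 / pitch_mm:.0f} cy/mm")

The finer pitch keeps more contrast at the same spatial frequency and also samples out to a higher Nyquist, which is why the downsized higher-MP capture tends to look better than a native low-MP one.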

Best regards
Erik

Bart,

While I certainly concede that MTF is dependent on both the lens and sensor (never said it wasn't and not sure what made you think I would believe that overall image quality is not dependent on the lens), but the lens performance is still independent of the sensor performance which was my point in responding to a statement that a high resolution sensor would make a lens look bad.
Logged
Erik Kaffehr
 

dwswager

  • Sr. Member
  • ****
  • Offline
  • Posts: 1375
Re: Do Sensors “Outresolve” Lenses?
« Reply #46 on: October 24, 2014, 02:18:38 pm »

Hi,

Lens and sensor are both parts of an imaging chain.

When you put an image through a lens, some of the image quality is lost. We describe that with MTF. When the image is sampled by the sensor we also lose image quality, while also adding false detail. The sensor also has an MTF.

A lens that just resolves 12 MP would have zero MTF at that sensor's pixel pitch. The image would be extremely mushy. To get decent image quality at the pixel level, a significant MTF is needed. But if a lens has a decent MTF at 12 MP, it will certainly resolve much more detail than a 12 MP sensor can.

A 24 MP sensor can detect that detail, and do it without creating fake detail. Then you can downsize that to 12 MP, and you will end up with a much better image than what would be possible at 12 MP.


Yeah, I'm an engineer who has worked on IIR sensors.  Got all this.  But the performance of the lens is STILL INDEPENDENT of any surface upon which its image circle shines.  I can mount that lens on any camera I want and its performance remains the same, though the resulting image can change dramatically.
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Do Sensors “Outresolve” Lenses?
« Reply #47 on: October 24, 2014, 02:23:28 pm »

Yes,

I can agree that the sensor doesn't affect the lens but just the sampled image.

Best regards
Erik

Yeah, I'm an engineer who has worked on IIR sensors.  Got all this.  But the performance of the lens is STILL INDEPENDENT of any surface upon which its image circle shines.  I can mount that lens on any camera I want and its performance remains the same, though the resulting image can change dramatically.
Logged
Erik Kaffehr
 

Torbjörn Tapani

  • Sr. Member
  • ****
  • Offline
  • Posts: 319
Re: Sv: Re: Do Sensors “Outresolve” Lenses?
« Reply #48 on: October 24, 2014, 02:40:51 pm »

But I did give a density: 12MP at 135 full frame (2x3), and I was comparing same-size sensors and image circles.

I said that each link subtracts from the image, which puts a ceiling on image quality.  But once the lens produces the image circle, that is all there is, and it becomes the new ceiling.  The sensor does not have direct access to the scene; it only has access to the image circle produced by the lens.  So we are in agreement.

Again we agree that in general sensors have been a limiting factor.  That lenses have been able to provide more data than the sensors could properly resolve.  But that wasn't the issue.  The issues were:

1. Can sensors outperform lenses?  The answer to that is yes...depending on the lens and sensor;

2. What are the implications... is it better to have 12MP worth of data fully resolved in 12MP, or 12MP of data resolved into 36 'mushy' MP?  My point being that subdividing the data into smaller and smaller units does not give more data, and any resulting print at equal size from both sensors will look similar, all else being equal.  Only when the lens outperforms a lower resolution sensor, or the enlargement necessary requires more data than a lower resolution sensor can provide, does it matter at all!

3. Does a higher performing sensor make a lens look worse from a resolution standpoint?  Again, the answer is no.  It will look identical.  If you pixel peep, the per-pixel sharpness of the higher pixel count sensor will not necessarily look as sharp as the per-pixel sharpness of the low resolution sensor, but the overall image will look the same.

The holy grail would be a sensor with continuously scalable pixels, and a camera able to calculate, from the image circle given by the lens, just how much data is available, scaling the sensor pixel pitch to match the data provided up to the limit of the sensor.  No wasted pixels or file space.

The real world 'What do I do with this theoretical information' realities are:

1. Until we reach a break even point with lens resolution, buy the best, highest resolution camera you can afford...within the other boundary constraints like High ISO performance, file size, frame rate, required output size, cost, etc.

2. Understand that when the 'sensor' was independent of the camera (film), it usually was a wiser investment to buy better lenses than better cameras.  Cameras were generally purchased based on functionality and durability, not image quality.  Now that the sensor is integral to the camera the calculation changes.  But remember, film wasn't all that great either as a sensor.  I accurately predicted to my 'film snob' friends that when digital got to 6MPs, film would become an alternative process.  I think I was playing with a 1.2MP Coolpix 900 at the time.  Hell, the Nikon D1 was 2.74MP and pretty much was the seminal moment in the transition to digital.

3. Don't believe that lenses will outperform even a 24MP sensor like the D750's in all situations.  Pixel density is just one of many factors affecting the overall sharpness and quality of a final image.  The gazillions of posts by D8x0 owners whining that their shots aren't sharp stand in testament.

Bart, Jim and Eric have explained this in great detail.

I will only address point #3. As the others have explained, it will not look identical. A finer pixel pitch sensor will resolve more detail.
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Do Sensors “Outresolve” Lenses?
« Reply #49 on: October 24, 2014, 02:44:58 pm »

But, the performance of the lens is STILL INDEPENDENT of any surface upon which [its] image circle shines.  I can mount that lens on any camera I want and [its] performance remains the same, though the resulting image can change dramatically.

This may be putting too fine a point on it, but that's true only if you consider the sensor stack glass part of the lens. If you move a physical lens to a camera with a different stack thickness, you'll get different performance even if the underlying silicon is the same.

And are the microlenses, if any, part of the sensor or part of the lens? Eliding things like that, you could also say that the performance of the sensor is independent of any lens focusing light on it, especially if the performance of the sensor includes ray-angle effects.

Even if both statements are true, I'm not sure either statement provides much guidance to choosing to improve performance through lens improvements or sensor improvements.

Jim

dwswager

  • Sr. Member
  • ****
  • Offline
  • Posts: 1375
Re: Do Sensors “Outresolve” Lenses?
« Reply #50 on: October 24, 2014, 03:32:12 pm »

This may be putting too fine a point on it, but that's true only if you consider the sensor stack glass part of the lens. If you move a physical lens to a camera with a different stack thickness, you'll get different performance even if the underlying silicon is the same.

And are the microlenses, if any, part of the sensor or part of the lens? Eliding things like that, you could also say that the performance of the sensor is independent of any lens focusing light on it, especially if the performance of the sensor includes ray-angle effects.

Even if both statements are true, I'm not sure either statement provides much guidance to choosing to improve performance through lens improvements or sensor improvements.

Jim


Ugh!  Yes, you will get different performance out the back end behind the stack/sensor, or if you change the size of the image circle, but the lens performance will not have changed one iota.  What you are arguing is that by changing the surfaces through which the image circle shines, the image circle emerging from the back side of the lens, before the stack/sensor, will somehow be different.  Good luck with that!

With respect to your last statement, it is tangentially related in that, if you have been happy with a lens on a 16MP camera, for example, you should be just as happy with that lens on a 36MP camera.  Most likely it already outperforms the 16MP sensor, so you will reap more data with the 36MP sensor.  If we assume a lens underperforms, matches or outperforms a 16MP sensor, then some unique situations can be identified.  If it underperforms the 16MP sensor, it will underperform the 36MP sensor.  If it matches the 16MP sensor, it underperforms the 36MP.  However, in both these cases, you have not lost anything you already had!  If, however, it outperforms the 16MP sensor, it can still underperform, match or outperform the 36MP sensor, but in all three of these cases you are getting more out of the chain than you were previously, though 1) you might still not be fully exploiting all the imaging chain has to offer, and 2) the lens performance has not changed.

This was all aimed at the people unhappy that pixel peeping their D8x0 files showed less than stellar sharpness and who want to blame 1) the lens, which hadn't changed, or 2) the camera, when in reality it is almost always their technique that is to blame.
Logged

dwswager

  • Sr. Member
  • ****
  • Offline
  • Posts: 1375
Re: Sv: Re: Do Sensors “Outresolve” Lenses?
« Reply #51 on: October 24, 2014, 03:47:13 pm »

Bart, Jim and Eric have explained this in great detail.

I will only adress point #3. As the others have explained it will not look identical. It will resolve more detail if you have a finer pixel pitch sensor.

Only if you add the assumptions 1) that the rest of the chain in front is providing data that is lost to a lower pixel pitch sensor, and 2) that all other factors are equal.  In that case I would agree.  Signal output is one such factor that won't be equal.  For a specific sensor technology, the signal generated is a function of the number of photons striking the surface; smaller pixel size means less signal output per pixel.  But let's assume the system has a specific level of noise, and that we subdivide the sensor into smaller and smaller pixels to the point that the sensor cannot generate enough signal to be distinguished from the noise.  You really think this super system is going to 'resolve' more 'real' data?  Not gonna happen.  Again, in theoretical terms, I agree, but when you are trying to put steel on target, real world constraints will bite you in the backside long before you hit these theoretical limits.
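
For the shot-noise part of that argument, a crude sketch (ignoring read noise, full-well limits and microlens losses, and using a made-up exposure level) shows how per-pixel signal and SNR fall as the pixels shrink:

Code:
import math

photons_per_sq_um = 1000.0    # hypothetical photons collected per square micron of sensor

for pitch_um in (8.5, 6.0, 4.3, 1.0):
    photons = photons_per_sq_um * pitch_um ** 2
    snr = math.sqrt(photons)                  # shot-noise-limited SNR per pixel
    print(f"{pitch_um:>4} um pixel: {photons:8.0f} photons, per-pixel SNR ~{snr:.0f}:1")

Per-pixel SNR scales with pitch, which supports the point above; the counter-argument is that binning or downsampling the small pixels back to the coarse grid recovers most of the per-area SNR in the shot-noise-limited case.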
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 11311
    • Echophoto
Re: Do Sensors “Outresolve” Lenses?
« Reply #52 on: October 24, 2014, 04:39:57 pm »

Hi,

I would guess that Fine_Art may be referring to these images:

Both images were taken at about 3.8 metres distance with a 150 mm lens. The left one used a Sonnar 150/4 on a Hasselblad with a P45+ back that has 6.8 micron pixels; the right one was shot with a Sony 70-400/4-5.6 at 150 mm on a Sony Alpha 77 with 3.9 micron pixels. Due to the smaller pixels, the Sony image was larger, but it has been downsized to (approximately) the same pixel count as the P45+ image using ImageMagick. In my view the image on the right has better detail quality than the one on the left, although the number of pixels is about the same.

Best regards
Erik

I have to disagree, I think what you see is the limit of the lens, not what the underutilized sensor is not delivering.

In any event, Erik has convinced me at least, that it is better to have blurry fine pixels than jagged larger ones. Meaning you get more image data even when the pixels are mushy.
« Last Edit: October 24, 2014, 04:45:04 pm by ErikKaffehr »
Logged
Erik Kaffehr
 

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Do Sensors “Outresolve” Lenses?
« Reply #53 on: October 24, 2014, 07:31:55 pm »

Ugh!  Yes, you will get different performance out the back end behind the stack/sensor, or if you change the size of the image circle, but the lens performance will not have changed one iota.  What you are arguing is that by changing the surfaces through which the image circle shines, the image circle emerging from the back side of the lens, before the stack/sensor, will somehow be different.  Good luck with that!

With respect to your last statement, it is tangentially related in that, if you have been happy with a lens on a 16MP camera, for example, you should be just as happy with that lens on a 36MP camera.  Most likely it already outperforms the 16MP sensor, so you will reap more data with the 36MP sensor.  If we assume a lens underperforms, matches or outperforms a 16MP sensor, then some unique situations can be identified.  If it underperforms the 16MP sensor, it will underperform the 36MP sensor.  If it matches the 16MP sensor, it underperforms the 36MP.  However, in both these cases, you have not lost anything you already had!  If, however, it outperforms the 16MP sensor, it can still underperform, match or outperform the 36MP sensor, but in all three of these cases you are getting more out of the chain than you were previously, though 1) you might still not be fully exploiting all the imaging chain has to offer, and 2) the lens performance has not changed.

This was all aimed at the people unhappy that pixel peeping their D8x0 files showed less than stellar sharpness and who want to blame 1) the lens, which hadn't changed, or 2) the camera, when in reality it is almost always their technique that is to blame.

I have a lens that is spec'd as diffraction limited. If you put any sensor behind it, you would think you get all you are going to get. I also have a corrective element for it, a coma corrector, that improves the quality of the pixels on APS-C at the expense of the edges on FF. The rim tends to distort a bit. When I use this lens on large-pixel FF I tend not to use the corrector. When I use it on finer-pitched APS-C I use the corrector.

So it is not necessarily true that the lens always gives you the most, with the other devices only subtracting.
Logged

Torbjörn Tapani

  • Sr. Member
  • ****
  • Offline
  • Posts: 319
Re:
« Reply #54 on: October 24, 2014, 07:59:12 pm »

Someone should take pictures of a straw man with the same lens on a A7s, A7 and A7r.
Logged

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re:
« Reply #55 on: October 24, 2014, 08:18:36 pm »

Someone should take pictures of a straw man with the same lens on a A7s, A7 and A7r.

Zony 55FE on the a7R: http://blog.kasson.com/?p=4213
Zony 55FE on the a7: http://blog.kasson.com/?p=5019

Handheld a7/a7R comparisons: http://blog.kasson.com/?p=5267
Rangefinder lenses on the a7S: http://blog.kasson.com/?p=6447

General a7R stuff: http://blog.kasson.com/?p=3757
General a7R stuff: http://blog.kasson.com/?p=4888
General a7S stuff: http://blog.kasson.com/?p=6119

Jim

dwswager

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1375
Re: Do Sensors “Outresolve” Lenses?
« Reply #56 on: October 24, 2014, 08:49:33 pm »

Quote
Not to be obvious, but that depends on the lens and the sensor. Using cameras with identical sensor area, the same individual lens at a given aperture may yield fine results on a 12MP sensor, good results on a 24MP camera, and only so-so results on a 36MP sensor.

This is the original statement to which I was responding.  To which I simply explained that the lens would give the same results on all 3 sensors, though the sensors will handle what they are given differently.  While the final image is dependent on the lens, any DIFFERENCES in final output will not be caused by the lens, but by the sensor, in-camera processing, post processing, printing or display process.  I believe what the original poster was trying to convey was that if you look at 100% pixel size (pixel peep), what you might find is that the per-pixel data isn't what you thought it might be, and the higher pixel density sensor may reveal weaknesses in a lens that looked just fine on a 12MP sensor.

I have a lens that is spec'd as diffraction limited. If you put any sensor behind it, you would think you get all you are going to get. I also have a corrective element for it, a coma corrector, that improves the quality of the pixels on APS-C at the expense of the edges on FF. The rim tends to distort a bit. When I use this lens on large-pixel FF I tend not to use the corrector. When I use it on finer-pitched APS-C I use the corrector.

So it is not necessarily true that the lens always gives you the most, with the other devices only subtracting.

The fact that the lens performance, i.e. what the lens outputs, is independent of anything behind it in the imaging chain is so blindingly self-evident, I almost didn't post it.  The image below demonstrates it.  If you wish to try to refute this, please explain how this works, because it will be amazing.  I would love to hear how a sensor or correcting lens gets inside the lens and changes its optical characteristics!
Logged

dwswager

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1375
Re: Do Sensors “Outresolve” Lenses?
« Reply #57 on: October 24, 2014, 09:11:05 pm »

I have no idea what minimum pixel size is technically feasible within all the constraints.  That is beyond my expertise!  But I suspect we will arrive back at the situation where, if you want more pixels, a larger sensor is the technical, if not the practical, answer rather than higher pixel density, and that there will be other advantages to more total pixels at a lower density than to packing higher pixel density into the 135-size sensor.  Basically, we will be back to the MF/135 trade-offs and debate.  I am only thankful that the performance of the D810 will more than meet my needs from a total pixel count standpoint.
Logged

Torbjörn Tapani

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 319
Re:
« Reply #58 on: October 24, 2014, 09:24:49 pm »

No one wishes to refute claims about lenses in isolation. That is the straw man you created.
Logged

dwswager

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1375
Re:
« Reply #59 on: October 24, 2014, 10:39:55 pm »

No one wishes to refute claims about lenses in isolation. That is the straw man you created.

I didn't set it up; everyone else got off on tangents stemming from my simple statement of fact, which shouldn't even have got anyone's attention.  The same is true of the fact that more pixels are not necessarily better than fewer; what really matters is the total amount of data being carried by the pixels.  When engagement times (detect, track, target, fire, kill) can sometimes be less than 3 seconds, you learn to get all you can and you don't waste extra time and bandwidth on the irrelevant.  If 12MP can carry all the data, then 36MP means 24MP of waste!
Logged