
Author Topic: Want – Need – Afford  (Read 42883 times)

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #120 on: August 29, 2009, 08:42:55 pm »

Quote from: Wayne Fox
so you make an incorrect statement in defense of your personal viewpoint on stitching, and then you try and pass it off as though you knew it was incorrect when you made it, but you did it as some type of test?

gimme a break ...

I'm afraid you are not as alert today, Wayne. The issue is not specifically related to stitching but to pixel density. It so happens that the pixel density of the D3X and 5D2 is very similar to the pixel density of the P65+, so any attempt to get the same pixel count over the same FOV of the scene from the same position will involve using approximately the same focal length of lens, and hence the same DOF at the same f-stop.

If I were to attempt to get such a stitched image using the D3 or 5D, I would have to use a slightly longer focal length than I would use with the P65+ for a single shot of the same FOV, and consequently the DOF of the final stitched image would be slightly less at the same f stop.

On the other hand, if I were to use a 12mp Olympus 4/3rds system for stitching, which might be a better choice than a D3X when hiking up a steep hill, the final stitched image, with the same pixel count as the P65 single shot, would have greater DOF at the same f stop.

If I were to make the stitch from a Canon G10, then the DOF would be very much greater at the same f stop, but no doubt at the sacrifice of DR, SNR etc.

All these factors are related.

The obvious advantage of the 5D2 and D3X in this situation is their better performance above base ISO, in all departments. For example, if I need to use ISO 200 with the P65 to get a shutter speed fast enough to freeze the slight motion of the foliage, then in the same circumstances I could stop down more than one stop with the 5D2 for greater DOF, yet still use the same shutter speed, assuming the DXO data are reliable.

How come? The P65 ISO 200 is actually ISO 89 and the DR 10.55 EV. The 5D2 ISO 400 is actually ISO 285 and the DR slightly greater at 10.92 EV. All the other parameters addressed in the DXO tests - tonal range, color sensitivity, SNR - are also either as good or slightly better for the 5D2 at ISO 285, compared with the P65 at ISO 89.

However, when all is said and done, the final result will also depend on lens performance at the apertures chosen for each system. As a general rule, lenses with a smaller image circle designed for the smaller format camera tend to be sharper, but not necessarily at small apertures where diffraction takes its toll.

Diffraction seems to be the great equalizer.
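To put rough numbers on that, here is a small Python sketch comparing the Airy disk diameter with pixel pitch for the cameras mentioned. The pitches are my own approximations, not manufacturer specs, and the "two pixels" threshold is a crude rule of thumb:

```python
# Rough pixel pitches in microns assumed for the cameras discussed
# (approximations, not manufacturer specs).
PITCHES = {"P65+": 6.0, "5D2": 6.4, "Canon G10": 1.7}

def airy_disk_um(f_number, wavelength_um=0.55):
    """Approximate Airy disk diameter in microns for green light."""
    return 2.44 * f_number * wavelength_um

for name, pitch in PITCHES.items():
    for f in (5.6, 8, 11, 16):
        disk = airy_disk_um(f)
        # Crude threshold: diffraction starts to dominate once the Airy
        # disk spans more than about two pixels.
        verdict = "diffraction-limited" if disk > 2 * pitch else "pixel-limited"
        print(f"{name:10s} f/{f:<4} Airy {disk:5.1f} um  pitch {pitch} um  {verdict}")
```

The tiny-pixel G10 is already diffraction-limited at wide apertures, while the big-pixel backs only get there around f/11 and beyond - which is the "equalizer" effect.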

Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #121 on: August 29, 2009, 09:02:07 pm »

Quote from: Christopher
Ray, once again, they don't test what ISO the cameras claim to be at; they test what the real ISO is. I can only urge you to look at the H3D-50 again. You would see that all ISOs are really ISO 50, even 400. That still does not change that ISO 400 on an H3D acts like ISO 400, even though it is not real. The same goes for Phase.


All digital cameras have only one ISO, described as the base ISO. What differs is the way the camera handles the underexposure. I get the impression it makes little difference with DBs whether one underexposes 3 stops at, say, ISO 50 or uses the same shutter speed at ISO 400.

This is not the case with Canon and Nikon DSLRs where a correct exposure at ISO 800 will be significantly better than a 3 stop underexposure at ISO 100.
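A toy noise model can illustrate why the two behaviours differ: if little noise is added after the amplifier (as with many CCD back designs), pushing in software costs almost nothing; if a lot is added after it, early analogue gain wins. All the numbers below are purely illustrative assumptions, not measurements of any real camera:

```python
# Toy read-noise model (illustrative numbers only, not measurements of any
# real camera) of in-camera ISO gain versus pushing in raw conversion.
signal = 100.0           # photoelectrons captured at a fixed shutter/aperture
noise_pre_amp = 3.0      # read noise added before the ISO amplifier (e-)
noise_post_amp = 12.0    # read noise added after the amplifier, at the ADC (e-)
gain = 8.0               # 3 stops of analogue gain

# In-camera high ISO: post-amp noise, referred back to the input,
# is divided by the gain, so it barely matters.
snr_in_camera = signal / (noise_pre_amp**2 + (noise_post_amp / gain)**2) ** 0.5

# Underexpose at base ISO and push 3 stops in software: the post-amp
# noise hits the signal at full strength.
snr_pushed = signal / (noise_pre_amp**2 + noise_post_amp**2) ** 0.5

print(f"in-camera gain SNR ~ {snr_in_camera:.1f}")   # ~29.8
print(f"software push SNR ~ {snr_pushed:.1f}")       # ~8.1
```

Set noise_post_amp near zero and the two routes converge - which would match the "makes little difference with DBs" impression.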
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #122 on: August 29, 2009, 09:08:39 pm »

Quote from: bjanes
Mark,

A lot of ball heads have a panning axis, but for them to work properly one first has to adjust the legs so as to make the base of the ball head parallel to the ground. This can be a hassle. RRS does make the PCL-1, which one attaches to the ball head and then uses the ball head to level the PCL-1. The panning is done with the PCL-1. Does the RRS ball head to which you are referring have a feature that obviates the PCL-1?

No - that's why I referred to "other materials". By the time you buy the head and the pano package you're into about USD 700+, so depending on how much of this stuff one does, either put up with the hassle of leveling the tripod, or get a PCL-1. The package also includes a plate for aligning the camera according to the optical center of the lens to minimize parallax problems.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #123 on: August 29, 2009, 09:41:49 pm »

Quote from: MarkDS
As for the panning technique, if you get a Really Right Stuff ballhead, you can swivel horizontally without any other movement. They also have other materials and instructions on their website for assuring the best possible fit when preparing a series of pan images.

Mark,
After that experience, I got myself a new travelling tripod: the carbon fibre, 4-section-leg Manfrotto 190CXPRO4 with built-in level and pan & tilt Manfrotto 460MG head. It's a bit on the heavy side at 1.8 kg, but at least I don't have to crouch down when peering through the viewfinder, as I had to with my other ball-head aluminium tripod which was only 5 ft high.

One series of tests I've yet to do is compare stitches of hand-held shots with properly levelled shots on a tripod, of the same scene using Autopano Pro.
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #124 on: August 29, 2009, 09:46:13 pm »

Quote from: Ray
All digital cameras have only one ISO, described as the base ISO. What differs is the way the camera handles the underexposure. I get the impression it makes little difference with DBs whether one underexposes 3 stops at, say, ISO 50 or uses the same shutter speed at ISO 400.

This is not the case with Canon and Nikon DSLRs where a correct exposure at ISO 800 will be significantly better than a 3 stop underexposure at ISO 100.

Ray, re the part where "you get the impression" - not clear to me why the sensor physics should differ that much between a high-end DSLR and an MFDB. Could you explain?
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Bill VN

  • Newbie
  • *
  • Offline
  • Posts: 35
Want – Need – Afford
« Reply #125 on: August 29, 2009, 10:33:24 pm »

Quote from: MarkDS
Ray, re the part where "you get the impression" - not clear to me why the sensor physics should differ that much between a high-end DSLR and an MFDB. Could you explain?

And are we talking ISO measurements per Japanese standards or per North American standards? They are different, which is why many photographers would meter Kodak films at half their ASA/ISO and develop for more time than recommended, à la Ansel Adams. Fujifilm and Ilford films always worked right at their stated ASA/ISO ratings.

The big divide between digital sensors is that some are CCDs and others CMOS. Individual photosite size makes a difference, as does color bit depth.
« Last Edit: August 29, 2009, 10:35:18 pm by Bill VN »
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #126 on: August 29, 2009, 10:55:54 pm »

Quote from: Bill VN
And are we talking ISO measurements per Japanese standards or per North American standards? They are different, which is why many photographers would meter Kodak films at half their ASA/ISO and develop for more time than recommended, à la Ansel Adams. Fujifilm and Ilford films always worked right at their stated ASA/ISO ratings.

The big divide between digital sensors is that some are CCDs and others CMOS. Individual photosite size makes a difference, as does color bit depth.

... and the differences are more complex than all of that. You can read more about the P65 sensor design on this website and in the Phase literature. Discussion reaches a certain limit, beyond which actual results that test the limits of each kind of system, complemented by DxO-type laboratory measurements, contribute more to understanding the value-added side of the equation than cursory knowledge of sensor differences can.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Wayne Fox

  • Sr. Member
  • ****
  • Offline
  • Posts: 4237
    • waynefox.com
Want – Need – Afford
« Reply #127 on: August 30, 2009, 01:33:55 am »

Quote from: Ray
I'm afraid you are not as alert today, Wayne. The issue is not specifically related to stitching but to pixel density. It so happens that the pixel density of the D3X and 5D2 is very similar to the pixel density of the P65+, so any attempt to get the same pixel count over the same FOV of the scene from the same position will involve using approximately the same focal length of lens, and hence the same DOF at the same f-stop.

If I were to attempt to get such a stitched image using the D3 or 5D, I would have to use a slightly longer focal length than I would use with the P65+ for a single shot of the same FOV, and consequently the DOF of the final stitched image would be slightly less at the same f stop.


I'm not even sure what your point is anymore.  This was a discussion about using stitching instead of buying a higher resolution camera.  Your first paragraph doesn't make any sense to me.  The only way I can take a scene with a dSLR that contains the same FoV and results in a file with the same pixel count as my P65 is to stitch.  How is the pixel density of the sensor even relevant?  Its only effect is how many captures it will take to stitch the final file so I can match the pixel resolution of the P65 file.

To accomplish this I will have to use a longer lens in relation to the sensor size to capture the same information at the same resolution, then stitch the resulting files together.  The end result is that my depth of field will be quite similar with either approach, and in fact the stitched file could have less DoF, which you leveled as a criticism of the P65.   If you try to do this from the same location I'm not sure you can get the exact same scene including foreground and background, but then again I've never tried it, and I'm not sure why I ever would.
Logged

Dick Roadnight

  • Sr. Member
  • ****
  • Offline
  • Posts: 1730
Want – Need – Afford
« Reply #128 on: August 30, 2009, 04:04:43 am »

Quote from: Schewe
Get a BetterLight scanning back. Less hassle (not as cheap though).
$15,000 for a slow scan back with less res than an H3DII-60 (or P65+)... are you kidding?

9,000 x 12,000 pixels would give a nice 24 x 35" print @ 360 ppi, and be useful for some landscapes, and save stitching 2 shots.
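Checking the print arithmetic (360 ppi as stated):

```python
# Print size from a 9,000 x 12,000 pixel scan-back file at 360 ppi.
width_px, height_px = 9000, 12000
ppi = 360
print(width_px / ppi, height_px / ppi)  # 25.0 x ~33.3 inches, near the 24 x 35" quoted
```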
Logged
Hasselblad H4, Sinar P3 monorail view camera, Schneider Apo-digitar lenses

BernardLanguillier

  • Sr. Member
  • ****
  • Offline
  • Posts: 13983
    • http://www.flickr.com/photos/bernardlanguillier/sets/
Want – Need – Afford
« Reply #129 on: August 30, 2009, 06:18:22 am »

Quote from: Dick Roadnight
$15,000 for a slow scan back with less res than an H3DII-60 (or P65+)... are you kidding?

9,000 x 12,000 pixels would give a nice 24 x 35" print @ 360 ppi, and be useful for some landscapes, and save stitching 2 shots.

Except that those are true RGB pixels and not the result of some funky Bayer interpolation.

Cheers,
Bernard

Dick Roadnight

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 1730
Want – Need – Afford
« Reply #130 on: August 30, 2009, 07:13:31 am »

Quote from: BernardLanguillier
Except that those are true RGB pixels and not the result of some funky Bayer interpolation.

Cheers,
Bernard
... so, using real pixels, if there is no movement (waves, trees, people, clouds) in the shot, would you get as good a result @ 240 original camera pixels per print inch as you would using a flash-compatible camera (Bayer interpolated) at 360 ppi?

I have been thinking about getting the 160 Mpx Seitz 617 rapid scan back for the Sinar (when they ship it) or the Red 617.
Logged
Hasselblad H4, Sinar P3 monorail view camera, Schneider Apo-digitar lenses

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #131 on: August 30, 2009, 09:46:20 am »

Quote from: Wayne Fox
I'm not even sure what your point is anymore.  This was a discussion about using stitching instead of buying a higher resolution camera.  Your first paragraph doesn't make any sense to me.  The only way I can take a scene with a dSLR that contains the same FoV and results in a file with the same pixel count as my P65 is to stitch.  How is the pixel density of the sensor even relevant?  Its only effect is how many captures it will take to stitch the final file so I can match the pixel resolution of the P65 file.


When stitching with a smaller sensor to emulate the result from a larger sensor, the pixel density of the smaller sensor will determine the focal length of lens needed. The greater the pixel density, the shorter the lens needed and the greater the DoF of the stitched result, at a given f-stop.

For example, if we ever get a 5D3 or 5D4 with a 60mp sensor, then you wouldn't have to stitch to get the same FOV image and same pixel count as the P65+ would provide from the same position. However, if you were to use a 50mm lens with the 5D4, you would need a 75mm lens with the P65+.

If you were to use F5.6 with the 50mm lens on the 5DMkIV, you would need to use F8 or F9 with the 75mm lens on the P65+ to get the same DoF (allowing for minor discrepancies due to the different aspect ratios of the cameras being compared, and allowing for any DoF peculiarities due to lens design and/or exceptionally close distance to subject).

If you imagine dividing the 60mp 35mm sensor into 4 parts, you would get something close to a 15mp Olympus 4/3rds sensor. To get your P65 equivalent by stitching images with a 15mp Oly, you would use the same focal length of 50mm because the pixel density is the same as the 5D4, and the DoF of the stitched result would be greater than that of the P65+ at the same F stop. Clear?
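The focal length and f-stop scaling in the example above can be put in numbers, assuming a ~36mm-wide full-frame sensor and the ~53.9mm-wide P65+ sensor (assumed widths; the hypothetical "5D4" settings are from the post, not a real camera):

```python
# Assumed sensor widths: ~36mm full frame, ~53.9mm for the P65+.
ff_width, p65_width = 36.0, 53.9
crop = p65_width / ff_width          # ~1.50

focal_ff, f_stop_ff = 50.0, 5.6      # the hypothetical 60mp "5D4" settings
print(f"P65+ equivalent: {crop * focal_ff:.0f}mm at f/{crop * f_stop_ff:.1f}")
# -> roughly 75mm at about f/8.4, i.e. between the f/8 and f/9 quoted above
```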
Logged

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #132 on: August 30, 2009, 10:16:09 am »

Quote from: MarkDS
Ray, re the part where "you get the impression" - not clear to me why the sensor physics should differ that much between a high-end DSLR and an MFDB. Could you explain?

Mark,
I think we need someone like Emil Martinec to explain that, but I imagine it has something to do with the differences between CCD and CMOS design. I understand that in Canon cameras, at ISOs higher than base, the analogue signal from the sensor is amplified at an early stage to reduce the effects of noise later in the processing chain.

If we refer to the DXOMark comparison of the P65+ and Canon 5D2, we can see that for each doubling of ISO for the P65+, there's approximately one stop loss of DR. At base ISO of 100 (or 44), DR is 11.51 EV. At ISO 800 (or 360), DR is 8.56 EV, a loss of 3 stops.

If we look at the same progression for the 5D2, we have a DR of 11.16 EV at base ISO (73) and a DR of 10.66 EV at ISO 800 (or 564), a loss of just half a stop.
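The per-stop arithmetic from those DxOMark figures, exactly as quoted in this post:

```python
# DR figures as quoted from DxOMark above (nominal ISO -> EV); the
# corresponding measured ISOs are 44/360 for the P65+ and 73/564 for the 5D2.
p65 = {100: 11.51, 800: 8.56}
c5d2 = {100: 11.16, 800: 10.66}

for name, dr in (("P65+", p65), ("5D2", c5d2)):
    loss = dr[100] - dr[800]
    print(f"{name}: {loss:.2f} EV of DR lost over 3 stops of ISO")
# P65+ loses ~2.95 EV (about one stop per doubling); the 5D2 only ~0.50 EV
```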

Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #133 on: August 30, 2009, 11:59:41 am »

Quote from: Wayne Fox
I'm not even sure what your point is anymore.  This was a discussion about using stitching instead of buying a higher resolution camera.  Your first paragraph doesn't make any sense to me.  The only way I can take a scene with a dSLR that contains the same FoV and results in a file with the same pixel count as my P65 is to stitch.  How is the pixel density of the sensor even relevant?  Its only effect is how many captures it will take to stitch the final file so I can match the pixel resolution of the P65 file.

To accomplish this I will have to use a longer lens in relation to the sensor size to capture the same information at the same resolution, then stitch the resulting files together.  The end result is that my depth of field will be quite similar with either approach, and in fact the stitched file could have less DoF, which you leveled as a criticism of the P65.   If you try to do this from the same location I'm not sure you can get the exact same scene including foreground and background, but then again I've never tried it, and I'm not sure why I ever would.

Ray, I must say I too was befuddled by this paragraph and also couldn't see (and still don't) the relevance of pixel count to FOV and DOF. You can have a sensor of any pixel count you want and the FOV and DOF will still be determined by the focal length of the lens, the f-stop and the lens-to-subject distance, whatever the resolving power of the sensor. Stitching DSLR images comes into play where you want higher resolution to cover the same FOV and you're not using a P65. And you'd need a longer lens to capture roughly the same FOV on an MFDB than on a DSLR - if I recall from the film era, you needed an 80mm lens on a Rolleiflex to get roughly the FOV of a 50mm lens on a Leica (aspect ratio excepted). No?
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #134 on: August 30, 2009, 12:04:20 pm »

Quote from: BernardLanguillier
Except that those are true RGB pixels and not the result of some funky Bayer interpolation.

Cheers,
Bernard

Bernard, "funky" Bayer matrices have been serving us well, just to judge from the superb quality of your own results with your cameras embodying that technology, and not to start another battle - so far Foveon hasn't demonstrated any superiority - so I'm not sure I understand what this "funky-business" is. But how does one of these scanning backs deal with RGB interpretation? Is it Foveon-type technology? What do they do to capture different wavelengths of light and encode it as data to be interpreted as colour?
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

BernardLanguillier

  • Sr. Member
  • ****
  • Offline
  • Posts: 13983
    • http://www.flickr.com/photos/bernardlanguillier/sets/
Want – Need – Afford
« Reply #135 on: August 30, 2009, 07:20:33 pm »

Quote from: MarkDS
Bernard, "funky" Bayer matrices have been serving us well, just to judge from the superb quality of your own results with your cameras embodying that technology, and not to start another battle - so far Foveon hasn't demonstrated any superiority - so I'm not sure I understand what this "funky-business" is. But how does one of these scanning backs deal with RGB interpretation? Is it Foveon-type technology? What do they do to capture different wavelengths of light and encode it as data to be interpreted as colour?

Granted, Bayer works very well. Now, it should be the case that the colors delivered by a true RGB device are significantly better, but it is true that we do not have a good metric for this, nor evidence that our brain is able to actually perceive the difference at the conscious level.

It would be interesting to blind test this and see if people feel a difference between equal resolution images shot with a BetterLight and a Bayer back. My bet is that they would feel a difference, but be totally unable to tell us what the difference is.

The backs simply work by moving an array with three lines of sensors, one filtered for each of R, G and B, so that R, G and B pass over the same point of the scene one after the other. Obviously artifacts are introduced if there is the slightest movement of either the back or of objects in the scene. I am not saying that this is a practical solution for outdoor work...

Cheers,
Bernard

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #136 on: August 30, 2009, 07:29:51 pm »

Quote from: BernardLanguillier
The backs simply work by moving an array with three lines of sensors, one filtered for each of R, G and B, so that R, G and B pass over the same point of the scene one after the other. Obviously artifacts are introduced if there is the slightest movement of either the back or of objects in the scene. I am not saying that this is a practical solution for outdoor work...

Cheers,
Bernard

Yeah, well if you had a Thermos of cappuccino and an iPod full of Mahler symphonies you could sip coffee and listen to a symphony in the great outdoors while each shot processes...............

Anyhow, thanks for the insight on how they work. I can imagine it being ideal for indoor repro work of inanimate objects where the photog is paid by the hour.
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #137 on: August 30, 2009, 07:44:45 pm »

Quote from: MarkDS
Ray, I must say I too was befuddled with this paragraph and also couldn't see (and still don't) the relevance of pixel count to FOV and DOF. You can have a sensor of any pixel count you want and the FOV and DOFwill still be determined by the focal length of the lens, the F-Stop and the lens to subject distance, whatever the resolving power of the sensor. Stitching DSLR images comes into play where you want higher resolution to cover the same FOV and you're not using a P65. And you'd need a longer lens to capture roughly the same FOV on a MFDB than on a DSLR with the same aspect ratio - if I recall from the film era when you needed an 80mm lens on a Rolleiflex to get roughly the FOV of a 50mm lens on a Leica (aspect ratio excepted). No?


Mark,
You seem to be confusing the role of pixel count with pixel density in this context. When emulating the result of a single P65+ shot by stitching images from a smaller sensor, it's the pixel density of the smaller sensor that determines the choice of focal length needed for both equal FOV and equal pixel count in the final stitch.

It is the pixel count of the smaller sensor that will determine the number of images required to be taken for stitching. Different size sensors of equal pixel density will of course have a different pixel count.

For example, the Canon 1Ds3 has the same pixel density as the cropped-format 20D and almost the same pixel density as the P65+; therefore, whether I stitch with the 1Ds3 or 20D, I will use the same focal length of lens, which will also be approximately the same focal length as that used for the single P65 shot.

I will of course need to stitch a greater number of images using the 20D instead of the 1Ds3.

However, if I were to substitute the 15mp Canon 50D for the 8mp 20D and use the same lens, I could certainly get the same FoV in the final stitch from the same number of images stitched in exactly the same way, but the final stitch would have almost twice the pixel count of the single P65 shot.

In order to achieve the goal of equal pixel count, which is the purpose of the stitching exercise, I would need to use a shorter focal length of lens with the 50D.

Of course, if I don't care what the pixel count of the final stitch will be because I intend to either upsample or downsample the stitched image according to print size, then I could use any focal length I liked. The longer the focal length, the more images I would need for stitching purposes and the greater the pixel count of the final stitch.

However, in this exercise we do care about pixel count, don't we?  
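For a rough feel of the frame counts involved, a crude one-dimensional estimate (the 25% overlap figure and the idea that each frame contributes about its non-overlapped pixels are simplifying assumptions):

```python
import math

def frames_needed(target_mp, frame_mp, overlap=0.25):
    """Crude estimate: each frame contributes ~ (1 - overlap) of its pixels."""
    return math.ceil(target_mp / (frame_mp * (1 - overlap)))

print(frames_needed(60, 8))   # 20D (8mp): 10 frames
print(frames_needed(60, 15))  # 50D (15mp): 6 frames
```

Real pano layouts waste more pixels to cropping, so the "perhaps 12" figure mentioned later in the thread for an 8mp body is in the same ballpark.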
Logged

Mark D Segal

  • Contributor
  • Sr. Member
  • *
  • Offline
  • Posts: 12512
    • http://www.markdsegal.com
Want – Need – Afford
« Reply #138 on: August 30, 2009, 09:12:02 pm »

Ray, not clear to me what I'm confusing... otherwise there would be clarity, eh?   Do we understand the same things by these terms?

Pixel density and pixel count: Density is about how many pixels you cram into a given space. Count is the number of pixels.  For a sensor of fixed dimensions, pixel density will be higher the higher the pixel count. A very large sensor with a large number of pixels could have higher or lower density than a DSLR with a smaller number of pixels. It all depends on pixel count relative to sensor size. BUT none of this pixel density and count as defined here has anything to do directly with FOV and DOF. Those are determined by sensor size, focal length and lens to subject distance.
Mark
Logged
Mark D Segal (formerly MarkDS)
Author: "Scanning Workflows with SilverFast 8....."

Ray

  • Sr. Member
  • ****
  • Offline
  • Posts: 10365
Want – Need – Afford
« Reply #139 on: August 30, 2009, 09:58:20 pm »

Quote from: MarkDS
Ray, not clear to me what I'm confusing... otherwise there would be clarity, eh?   Do we understand the same things by these terms?

Pixel density and pixel count: Density is about how many pixels you cram into a given space. Count is the number of pixels.  For a sensor of fixed dimensions, pixel density will be higher the higher the pixel count. A very large sensor with a large number of pixels could have higher or lower density than a DSLR with a smaller number of pixels. It all depends on pixel count relative to sensor size. BUT none of this pixel density and count as defined here has anything to do directly with FOV and DOF. Those are determined by sensor size, focal length and lens to subject distance.
Mark


I can't see the difficulty here, Mark. As Jeff Schewe has admitted, the reason one might need a P65 is to make big prints of high quality. If you want a 60mp stitch using a small sensor, say a Canon 20D, you will need to stitch a certain number of images, perhaps 12 allowing for overlap. You'll need a lens of around the same focal length as you would use with the P65 because the pixel density of the 20D is about the same as that of the P65+ (in fact it's slightly less).

If you use a 50D instead of a 20D, you'll need a shorter lens, even though the sensor is the same size, and consequently you need to stitch fewer images to get your 60mp final stitch with the same FoV. If you don't use a shorter lens, the final stitch will have a significantly higher pixel count. What's so difficult to understand?

However, it's true that in practice, if I were to carry a 50D up Poon Hill instead of a P65+, with a view to stitching a few images to get a file size suitable for the same size print I could make from a single P65 shot with, say, a 24mm lens, I wouldn't be too worried about getting an even higher resolution image from the stitching process.

So let's say I use the same 24mm focal length with the 50D that I would have used with the P65+ (or 20D) to capture the same scene from the same position. If I use the same f-stop as I would have used with the P65+, I can expect a sharper result, since system resolution is always a product of lens resolution and sensor resolution, and the 50D, with its higher pixel density, should deliver higher resolution than the P65+ from the same quality of lens (excluding the issue of the AA filter; one would need to see real-world results to make a judgement on this).

If we assume that the 50D would be capable of higher resolution at the same f stop, then one can trade such higher resolution for greater DoF by stopping down, say, one stop.

However, I would be prepared to accept that the P65's advantage of a lack of AA filter may affect such theoretical predictions, if someone were to show me some real-world stitches demonstrating such a comparison.

My own tests comparing the 50D with the 40D have demonstrated that a 50D image at F11 has about equal resolution to the 40D at F8 (and I expect greater resolution than the 20D at F8 although I haven't done the comparison).

This is a factor which is often overlooked by those who complain that lenses are not good enough for cameras with ever-increasing pixel counts. When the 50D was announced, there was much discussion about the usefulness of such a high-pixel-density sensor - perhaps no purpose would be served by stopping down beyond F8. Well, the purpose is increased DoF with no loss of resolution at the plane of focus, compared with a 40D or 20D.
« Last Edit: August 30, 2009, 10:07:32 pm by Ray »
Logged