Luminous Landscape Forum
Equipment & Techniques => Cameras, Lenses and Shooting gear => Topic started by: eronald on May 21, 2014, 03:54:18 pm
-
The point of a low pass filter is to cut off at a certain max frequency. Instead, it seems the D800 filter just chops 25% or so off the measured resolution (whatever that means) of each lens.
http://www.dxomark.com/Reviews/Best-lenses-for-the-Nikon-D800E-The-sharpest-full-frame-camera-ever-measured/Sharpness-analysis-D800E-vs.-D800
Edmund
-
I guess I'll add a D800S to my line up. :)
The breakthrough Nikon achieved 2 years ago continues to be revealed it seems.
Cheers,
Bernard
-
+1. But I do hope to see native wireless on the D800s.
Paul
-
I saw that, but does it contradict some of the other information around on the resolution differences between the two models?
Anyway, it's an amazing camera the E.. 8)
A newer model with a bit of an update would be welcome here..
-
The point of a low pass filter is to cut off at a certain max frequency. Instead, it seems the D800 filter just chops 25% or so off the measured resolution (whatever that means) of each lens.
AFAIK, we don't have anything near a brick wall AA filter. I think the only way to get there is to oversample with no AA filter and res back down, adding the AA in with digital filtering.
However, to say that current AA filters are possibly a joke without talking about aliasing is to focus on the disadvantage (loss of energy at spatial frequencies below the Nyquist frequency) while ignoring the advantage (reduction of energy at spatial frequencies above the Nyquist frequency).
Take a look at this:
http://blog.kasson.com/?p=5809
If you want to suggest possible AA filter convolution kernels, I'll try them out in the camera simulator and report the results. If you want to use negative numbers you can get some interesting results, but I'm not sure how realistic that is.
Jim
-
The problem I have with the DxOMark Perceptual MPix concept is that I have not seen a suitable explanation of exactly how it is calculated, and as a result I am unable to appreciate the significance of the measurement. I prefer to talk in terms of MTF vs. lp/mm.
The purpose of the AA filter is to suppress, as much as possible, spatial image energy above the Nyquist frequency of the sensor, to minimize the creation of sampling artefacts. I don't know how camera designers decide on the strength of the filter, but presumably in a camera like the D800 it would be tailored to have the smallest possible impact on sharpness with the sharpest available lenses. (Lower quality lenses provide extra low pass filtering anyway.)
With the D800E, the only filtering by the camera results from the sensor fill factor and the de-mosaicing algorithm used. The issue with the AA removed is how much energy above the Nyquist frequency the lens lets through, and hence the likelihood of artefacts. From what I've read, the D800E is pretty good in this regard.
I guess it is not particularly critical with a lot of landscape scenes, and also with these it is quite likely that a fairly small aperture is used, resulting in a lower MTF for the lens anyway (and hence extra low pass filtering).
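To put a number on the sensor's Nyquist frequency, here is a quick sketch (the ~4.88 µm D800/D800E pixel pitch is my assumption for illustration, not a figure from this thread):

```python
# Sensor Nyquist frequency from pixel pitch (pitch value assumed for illustration).
pitch_um = 4.88                     # approx. D800/D800E pixel pitch in micrometres
pitch_mm = pitch_um / 1000.0
f_nyquist = 1.0 / (2.0 * pitch_mm)  # cycles (line pairs) per mm
print(round(f_nyquist, 1))          # ≈ 102.5 lp/mm
```

Energy the lens delivers above roughly 102 lp/mm is what the AA filter is there to attenuate.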
Dave
-
AFAIK, we don't have anything near a brick wall AA filter. I think the only way to get there is to oversample with no AA filter and res back down, adding the AA in with digital filtering.
However, to say that current AA filters are possibly a joke without talking about aliasing is to focus on the disadvantage (loss of energy at spatial frequencies below the Nyquist frequency) while ignoring the advantage (reduction of energy at spatial frequencies above the Nyquist frequency).
Take a look at this:
http://blog.kasson.com/?p=5809
If you want to suggest possible AA filter convolution kernels, I'll try them out in the camera simulator and report the results. If you want to use negative numbers you can get some interesting results, but I'm not sure how realistic that is.
Jim
Jim,
Where are details on your simulator?
Edmund
-
I have always wondered why digital imaging doesn't use a figure like total harmonic distortion (THD), as audio does. Even with a perfect brick-wall low-pass filter, you will not get a high-quality signal at or very close to the Nyquist frequency.
-
Jim,
Where are details on your simulator?
Edmund
Edmund, I've been describing its capabilities on my blog. It's written in Matlab. I can email you some code if you'd like; it's pretty disorganized now, as I'm working on it every day. I'm using Peter Burns' sfrmat3 to get the MTFs. WRT the AA filter simulation, all I need is a convolution kernel at at least the resolution of the camera's pixel pitch, or preferably some higher resolution. I'm still working on the right way to do AA filter simulation, and having some examples to try would be instructive for me, as well as motivating.
Jim
-
Edmund, I've been describing its capabilities on my blog. It's written in Matlab. I can email you some code if you'd like; it's pretty disorganized now, as I'm working on it every day. I'm using Peter Burns' sfrmat3 to get the MTFs. WRT the AA filter simulation, all I need is a convolution kernel at at least the resolution of the camera's pixel pitch, or preferably some higher resolution. I'm still working on the right way to do AA filter simulation, and having some examples to try would be instructive for me, as well as motivating.
Jim
What would be nice would be to have some small mini-simulator with actual blocks, either Fourier or spatial, not just MTF. I proposed this to Iliah a few years ago; he basically laughed, but we have a great number of raw converters and, AFAIK, zero public-domain full-system simulators.
Edmund
-
What would be nice would be to have some small mini-simulator with actual blocks, either Fourier or spatial, not just MTF. I proposed this to Iliah a few years ago; he basically laughed, but we have a great number of raw converters and, AFAIK, zero public-domain full-system simulators.
Edmund, I'm not sure what you mean by actual blocks; do you mean code modules?
Let me tell you how my simulators -- I've built two of them, one aimed at camera noise studies, and one that started out looking at mosaicing and demosaicing, and grew to handle the Sony lossy raw compression, diffraction, AA, lens defects and other things -- work. Actually, let me just concentrate on the second one. It starts with a target image, which can be anything as long as it's at least a couple of binary orders of magnitude larger in each linear dimension than the simulated sensor. Then I simulate the lens defects through convolution. Then I simulate the AA filter the same way. Then I sample to a (just RGGB, for now) Bayer CFA with just about any binary fill factor, which can exceed 1, adding photon noise if desired. Then I add gain from the ISO knob and digitize with an arbitrary resolution ADC. I demosaic with bilinear interpolation, and can write out the resultant image, or, in the case of a slanted edge target, analyze it with sfrmat3.
There are some significant limitations at this point. The camera's CFA is that which yields the Adobe RGB primaries. Diffraction is computed for any wavelength, but is the same for all color planes. I can't write out raw images in a form that DCRAW can deal with. Only on-axis lens defects are considered, and they are crude.
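The processing chain described above can be outlined in simplified form (a hypothetical Python sketch, not the actual Matlab code; monochrome only, with a Gaussian stand-in for the lens PSF, a 4-spot AA kernel, 100% fill factor, and no CFA/demosaicing):

```python
# Simplified sketch of the simulation stages:
# high-res target -> lens blur -> AA filter -> sensel sampling -> noise -> ADC.
import numpy as np

rng = np.random.default_rng(0)

def conv2(img, k):
    """Tiny 2-D convolution via circular shifts (kernel must be odd-sized)."""
    ky, kx = k.shape
    out = np.zeros_like(img)
    for dy in range(ky):
        for dx in range(kx):
            out += k[dy, dx] * np.roll(np.roll(img, dy - ky // 2, 0), dx - kx // 2, 1)
    return out

def gaussian_kernel(sigma, radius=3):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

oversample = 4                                 # target resolution vs sensel pitch
target = np.zeros((128, 128))
target[:, 64:] = 1.0                           # a simple edge target

blurred = conv2(target, gaussian_kernel(1.5))  # lens PSF (Gaussian stand-in)

# 4-spot beam-splitter AA filter: four displaced copies, one sensel pitch apart
aa = np.zeros((oversample + 1, oversample + 1))
for y, x in [(0, 0), (0, oversample), (oversample, 0), (oversample, oversample)]:
    aa[y, x] = 0.25
filtered = conv2(blurred, aa)

# Sensel sampling: block-average over the (100% fill factor) sensel aperture
h, w = filtered.shape
sampled = filtered.reshape(h // oversample, oversample,
                           w // oversample, oversample).mean(axis=(1, 3))

electrons = rng.poisson(sampled * 2000)        # photon (shot) noise, 2000 e- full scale
dn = np.clip(np.round(electrons / 2000 * 16383), 0, 16383)  # 14-bit ADC

print(dn.shape)  # (32, 32)
```

All constants (pitch ratio, well capacity, bit depth) are placeholders; the point is only the order of the stages.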
The first simulator was object oriented, but the second one, which just grew in unplanned directions, is messily functional.
FWIW...
Jim
-
The point of a low pass filter is to cut off at a certain max frequency. Instead, it seems the D800 filter just chops 25% or so off the measured resolution (whatever that means) of each lens.
http://www.dxomark.com/Reviews/Best-lenses-for-the-Nikon-D800E-The-sharpest-full-frame-camera-ever-measured/Sharpness-analysis-D800E-vs.-D800
Edmund
Most of us use our cameras for recording images that we look at. Measurements are only interesting as long as they are relevant.
Undocumented measurements of "sharpness", while interesting, cannot tell us whether a particular camera feature is "a joke".
(Yes, I feel that both the DxO article and your post are somewhat populist.)
I have implemented (near) brick-wall filters for images, and they look horrible. Even the most popular filter kernels for digital image scaling (where you have more freedom than with physical/optical filtering) do some harm to the pass-band in order to limit ringing etc. Nearest neighbor provides excellent "sharpness" but generally horrible image quality.
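The ringing trade-off can be demonstrated in a few lines (a 1-D sketch of my own, not from the original post; cutoff and roll-off values chosen for illustration):

```python
# An ideal brick-wall low-pass applied to a step edge produces strong
# ringing (Gibbs phenomenon); a gentle Gaussian roll-off does not.
import numpy as np

n = 256
edge = np.zeros(n)
edge[n // 2:] = 1.0

spectrum = np.fft.rfft(edge)
f = np.fft.rfftfreq(n)                      # cycles/sample, 0 .. 0.5

brick = spectrum * (f <= 0.25)              # brick-wall cutoff at half Nyquist
gauss = spectrum * np.exp(-(f / 0.18)**2)   # gentle roll-off for comparison

edge_brick = np.fft.irfft(brick, n)
edge_gauss = np.fft.irfft(gauss, n)

overshoot_brick = edge_brick.max() - 1.0    # ringing overshoot above white level
overshoot_gauss = edge_gauss.max() - 1.0
print(round(overshoot_brick, 3), round(overshoot_gauss, 3))
```

The brick-wall response overshoots the white level by roughly 9% (the classic Gibbs figure), while the Gaussian roll-off stays essentially monotonic.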
-h
-
The point of a low pass filter is to cut off at a certain max frequency. Instead, it seems the D800 filter just chops 25% or so off the measured resolution (whatever that means) of each lens.
http://www.dxomark.com/Reviews/Best-lenses-for-the-Nikon-D800E-The-sharpest-full-frame-camera-ever-measured/Sharpness-analysis-D800E-vs.-D800
Edmund
Wow! I'm sure glad I chose the D800E in preference to the D800, and ignored all those experts whingeing about the greater potential of the D800E to produce aliasing artefacts. ;D
-
Edmund, I've been describing its capabilities on my blog. It's written in Matlab. I can email you some code if you'd like; it's pretty disorganized now, as I'm working on it every day. I'm using Peter Burns' sfrmat3 to get the MTFs. WRT the AA filter simulation, all I need is a convolution kernel at at least the resolution of the camera's pixel pitch, or preferably some higher resolution. I'm still working on the right way to do AA filter simulation, and having some examples to try would be instructive for me, as well as motivating.
Hi Jim,
IIRC, others (e.g. Frans van den Bergh (http://www.dpreview.com/forums/post/51180297) with his MTF Mapper (http://sourceforge.net/projects/mtfmapper/postdownload?source=dlp) application) have tried to model an AA-filter with four (Gaussian or Box?) blur kernels, and a variable (orthogonal? +) diagonal offset between these kernels.
While an interesting exercise, I feel that the interactions between optical components are of such complexity, and so many variables are involved (e.g. irregularly shaped sensel aperture, hence complex-shaped fill factor), that an inaccurate model of one variable may lead to false conclusions about the whole system (just like one cannot simply multiply the MTF of diffraction blur with the MTF of defocus).
In fact, the complexity of compound blur kernels is such that also Frans seems to use Monte-Carlo simulation to get a statistical approximation because it would presumably take too long to accurately calculate the convolution of all components.
In addition to that, I keep finding (empirically) that, combined with Raw conversion algorithms, the cumulative/compound blur of all involved components closely resembles a Gaussian blur kernel (or a combination of a few different Gaussian blur kernels for complex blur shapes). Others use the Moffat function (https://en.wikipedia.org/wiki/Moffat_distribution) (see also here (http://pixinsight.com/doc/tools/DynamicPSF/DynamicPSF.html#description_001)) instead of a Gaussian, to allow a more accurately fitting model of the shape of the compound PSF.
So maybe just adding a simple Gaussian blur will get you close enough to a realistic simulation, without the risk of over-specifying the approximation model.
Cheers,
Bart
-
The point of a low pass filter is to cut off at a certain max frequency. Instead, it seems the D800 filter just chops 25% or so off the measured resolution (whatever that means) of each lens.
Hi Edmund,
The cut-off is very gradual, and affects all spatial frequencies (not a brick wall cutoff, if it were even possible, because that would introduce e.g. ringing artifacts). Also, the DxOmark single figure metric is a combination of perceptual and physical factors thrown into one basket. It also doesn't tell you anything about the potential for deconvolution restoration, for which the MTF response near the limiting resolution is more important than its weight in the Mpix metric suggests.
When the D800E became available, I tested its resolution potential (http://www.luminous-landscape.com/forum/index.php?topic=65927.msg523733#msg523733) compared to that of the D800, based on a few images that were made available to me by Michael Reichmann. The difference in absolute limiting resolution was approx. 1%, although the modulation near that limiting resolution was lower for the D800, making it more of a challenge to deconvolve and restore resolution (especially for low luminance contrast micro-detail), although with fewer aliasing artifacts getting in the way.
So the DxO Mpix metric is heavily influenced by their perceptual component, but without capture sharpening.
I'd not take it as anything more than an indication (as all single figure metrics attempting to describe a complex system).
Cheers,
Bart
-
Bart,
As you know I'm very superficial and tend to go off fast :)
What disturbs me with the DxO results is that rather than an impression of frequency clipping, there is a seeming quasi-proportionality of the non-E/E numbers. I was expecting there would be a cutoff effect, where bad lenses are not much affected by the filter, but their numbers appear to show a loss of resolution that can somehow be imagined as proportional to the lens's max resolution/quality/cost. So a blurry lens gets even blurrier.
A lot of people have blurry (legacy) lenses; they probably think there is no point in getting the "E"; from this study it would appear that to the contrary they *should* get the E.
Maybe somebody can explain to this sleepy idiot what the scales are and what is being described by their numbers. Every time I look at something from DxO I get confused. Such pretty diagrams, such a dumb reader.
As an aside I had a conversation with the Fuji engineers at a press presentation during the last PK; during the presentation their manager said "our sensor, no Moiré! ". So I filled my mouth with random words, walked up and said "Hey, you say no Moiré, but I think you *must* have some form of aliasing, what happens if you have a spatial frequency at twice Nyquist?" and the guy without batting an eyelid told the stupid journalist "such a frequency is blocked by the lens"!
Also, if I were feeling philosophical, I would wonder exactly where the energy that is filtered out goes. Does it heat the filter? Are the photons absorbed by the lattice or re-emitted in some way?
Edmund
Hi Edmund,
The cut-off is very gradual, and affects all spatial frequencies (not a brick wall cutoff, if it were even possible, because that would introduce e.g. ringing artifacts). Also, the DxOmark single figure metric is a combination of perceptual and physical factors thrown into one basket. It also doesn't tell you anything about the potential for deconvolution restoration, for which the MTF response near the limiting resolution is more important than its weight in the Mpix metric suggests.
When the D800E became available, I tested its resolution potential (http://www.luminous-landscape.com/forum/index.php?topic=65927.msg523733#msg523733) compared to that of the D800, based on a few images that were made available to me by Michael Reichmann. The difference in absolute limiting resolution was approx. 1%, although the modulation near that limiting resolution was lower for the D800, making it more of a challenge to deconvolve and restore resolution (especially for low luminance contrast micro-detail), although with fewer aliasing artifacts getting in the way.
So the DxO Mpix metric is heavily influenced by their perceptual component, but without capture sharpening.
I'd not take it as anything more than an indication (as all single figure metrics attempting to describe a complex system).
Cheers,
Bart
-
A lot of people have blurry (legacy) lenses; they probably think there is no point in getting the "E"; from this study it would appear that to the contrary they *should* get the E.
Hi Edmund,
Yes, that is a common mistake.
The combined system MTF response is roughly a combination of the lens and sampling system MTFs. An increase in either will boost total performance. Better/denser sampling (with minimized aliasing artifacts) will also allow better Capture (deconvolution) sharpening. The possibilities of resolution restoration through deconvolution of clean signal data are still underestimated.
Cheers,
Bart
-
Hi Edmund,
Yes, that is a common mistake.
The combined system MTF response is roughly a combination of the lens and sampling system MTFs. An increase in either will boost total performance. Better/denser sampling (with minimized aliasing artifacts) will also allow better Capture (deconvolution) sharpening. The possibilities of resolution restoration through deconvolution of clean signal data are still underestimated.
Cheers,
Bart
Bart, maybe this deserves a hard look / article. There is something counterintuitive here. Of course, moving from the spatial to the frequency domain and back is never obvious, and here we also have the fact that MTF numbers are used, which again brings the shape of the MTF curve into play, etc.
Edmund
-
Bart, maybe this deserves a hard look / article. There is something counterintuitive here. Of course, moving from the spatial to the frequency domain and back is never obvious, and here we also have the fact that MTF numbers are used, which again brings the shape of the MTF curve into play, etc.
This shows what Bart's talking about. It's really crude, and I'll work on getting a better plot with more points, log2 scale on the f-stop axis etc, but I thought I'd post it while the topic is still current:
(http://www.kasson.com/ll/surfmtf.PNG)
What you're looking at is simulated MTF50 in cycles/picture height (for a 24mmx36mm sensor) vs sensel pitch in um coming towards you, and f-stop for a simulated Otus 55mm going from left to right. No AA filter.
Jim
-
Bart, maybe this deserves a hard look / article. There is something counterintuitive here. Of course, moving from the spatial to the frequency domain and back is never obvious, and here we also have the fact that MTF numbers are used, which again brings the shape of the MTF curve into play, etc.
Hi Edmund,
In addition to Jim's example chart for a lens quality (MTF) that varies with aperture on one axis, and sampling density (sensel pitch) on the other, one can see that with better lens quality (here it's due to optimum balance between residual aberrations and diffraction) AND denser sampling, the combined result gets better.
One could replace the aperture differences on one axis by different lenses at their optimum aperture on that same axis. Then the combined response would still increase with denser sampling for all lenses, with a peak at the combination of the best lens and the most dense sampling.
To put it in other words, at a certain level of detail, with an MTF of 50% for a lens and an MTF of 50% for a sensor, the combined system response will be 0.5 x 0.5 = 0.25 MTF response (MTF25). If at the same level of detail the lens could be improved to 100% MTF, and the sensor is still 50%, then the combined MTF is raised to 1.00 x 0.5 = 0.5 MTF (MTF50; it will never get better than the lowest contributor, which therefore sets the maximum achievable limit, unless sharpening is added).
Likewise, if at the same level of detail the lens remains at 50% MTF, and the sensor could be improved to 100%, then the combined MTF is 0.5 x 1.00 = 0.5 MTF (it will also never be better than the lowest contributor). With both components at 100%, their combined MTF would become 1.00 x 1.00 = 1.00 MTF response (MTF100).
Therefore, increasing the MTF response for either component will raise the combined response. Of course, a 100% MTF for either component is virtually impossible (except for the lowest spatial frequency or with sharpening). It may also be easier to improve the MTF of one component for a reasonable price than for the other component. We also face physical limitations that prohibit 100% MTF, like diffraction and available sampling density. So it becomes an optimization problem for both components with bounds, and the lowest contributing component keeps dictating the best achievable combined response.
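The arithmetic above, as a trivial sketch (the function name is mine, and the product rule is the first-order approximation the post itself uses):

```python
# System MTF at a given spatial frequency is (to a first approximation)
# the product of the component MTFs, so the weakest component dominates.
def system_mtf(lens_mtf, sensor_mtf):
    return lens_mtf * sensor_mtf

print(system_mtf(0.5, 0.5))   # 0.25  (MTF25)
print(system_mtf(1.0, 0.5))   # 0.5   (limited by the weaker component)
print(system_mtf(0.5, 1.0))   # 0.5   (likewise)
print(system_mtf(1.0, 1.0))   # 1.0   (MTF100, unreachable in practice)
```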
Cheers,
Bart
P.S. I've added a chart as an attachment to show the trade-offs between lens quality (expressed as least achievable blur), sensel pitch, and resulting resolution in (simulated) cycles/mm at MTF50. Diffraction is not fully modeled in, so that may reduce the amplitude at narrower sensel pitches a bit.
-
Jim, Bart,
This is an interesting topic. I would like to take this further. In particular, I would like to understand what MTF curves for a real-world AA filter looks like.
Unfortunately I am due for a minor medical procedure later today, and will probably not have the necessary clarity of thought for serious discussion for a few days - my apologies and I hope to revisit the topic soon. Please do not take my lack of response in this thread as an indication of disinterest.
Edmund
-
Using the 144-cycle Siemens star chart you may get an idea of what frequencies are attenuated the most. It would be nice to have one made of ground glass with black paint and a backlight. That would give the contrast needed for a better effect. A printed 400:1 contrast does not have the proper MTF attenuation.
-
Jim, Bart,
This is an interesting topic. I would like to take this further. In particular, I would like to understand what MTF curves for a real-world AA filter looks like.
Unfortunately I am due for a minor medical procedure later today, and will probably not have the necessary clarity of thought for serious discussion for a few days - my apologies and I hope to revisit the topic soon. Please do not take my lack of response in this thread as an indication of disinterest.
Hi Edmund,
No worries, getting well has a higher priority.
Cheers,
Bart
-
Using the 144-cycle Siemens star chart you may get an idea of what frequencies are attenuated the most. It would be nice to have one made of ground glass with black paint and a backlight. That would give the contrast needed for a better effect. A printed 400:1 contrast does not have the proper MTF attenuation.
Hi Arthur,
There are commercial Chrome-on-glass versions (http://www.edmundoptics.eu/testing-targets/test-targets/resolution-test-targets/star-target/1946) available for lens testing. However, they are not suitable for sampled / digital camera imaging, because the sharp edges add high spatial frequencies that will lead to aliasing artifacts. Also, the high contrast will not be needed, and can even distort the read-out (due to glare), and cause trouble in subsequent processing (more sensitive to built-in sharpening, causing overshoots and clipping).
These black/white targets are not suited for digital imaging (they are for analog imaging, vibration testing, or inspection of the lens on an optical bench). That's why I invented the sinusoidal version more than a decade ago (it was a 60 cycle version, because it was less demanding to print with the technology of those days).
Cheers,
Bart
-
Now the tests are all shot wide open correct?
I never shoot wide open, if ever above f4 mostly 5.6-11.
Would any diffraction still equalize the bodies at smaller apertures?
Before I bought the d800 everyone said "the difference is so minor" "sharpening will minimize any difference"
These tests seem to indicate a pretty huge gap. What would be a more pragmatic reading?
Would I really be gaining any noticeable difference with the "e"?
Of course I'm happy with my non-e but thinking if I could ramp up sharpness that much....
-
Hi Arthur,
There are commercial Chrome-on-glass versions (http://www.edmundoptics.eu/testing-targets/test-targets/resolution-test-targets/star-target/1946) available for lens testing. However, they are not suitable for sampled / digital camera imaging, because the sharp edges add high spatial frequencies that will lead to aliasing artifacts. Also, the high contrast will not be needed, and can even distort the read-out (due to glare), and cause trouble in subsequent processing (more sensitive to built-in sharpening, causing overshoots and clipping).
These black/white targets are not suited for digital imaging (they are for analog imaging, vibration testing, or inspection of the lens on an optical bench). That's why I invented the sinusoidal version more than a decade ago (it was a 60 cycle version, because it was less demanding to print with the technology of those days).
Cheers,
Bart
I don't really understand that. I do believe you; maybe you can flesh it out a bit more. If we have a camera with a LPF, the aliasing should be minimal or software generated. If we have a camera like the 800E we should get sub-Nyquist radius garbage which should give the user an idea of how the camera is going to work on fine patterns. If we look at a radius greater than the Nyquist circle we can calculate the frequency relative to the pixel frequency. Of course this is basic to you; I am trying to explain what the idea was. So I am thinking that as the frequency scales out, you can also see how it attenuates from pure black/white. Isn't this useful? Even if the lens mangles it with glare, that is the system you have to deal with. Real objects don't have rounded edges, they just finish. I would expect that when you get to the point where the pixels are 1/2 the width of the bars you should be hitting pure black, pure white. At frequencies inside that circle there should be the whole MTF curve. What is wrong? This is not my field of study.
-
Now the tests are all shot wide open correct?
I never shoot wide open, if ever above f4 mostly 5.6-11.
Would any diffraction still equalize the bodies at smaller apertures?
Before I bought the d800 everyone said "the difference is so minor" "sharpening will minimize any difference"
These tests seem to indicate a pretty huge gap. What would be a more pragmatic reading?
Would I really be gaining any noticeable difference with the "e"?
Of course I'm happy with my non-e but thinking if I could ramp up sharpness that much....
I think it's already been mentioned that these significant, perceived differences in resolution and sharpness, between the D800 and D800E, apply to the unsharpened images that have resulted from using a particular lens at its sharpest aperture, whatever aperture that may be.
When one begins to sharpen images for comparison purposes, then one introduces a whole new ball game. There are so many parameters and variables in sharpening routines (just look at the sharpening options in ACR, and in Smart Sharpen in Photoshop), and also different sharpening programs, such as Focus Magic, each with their own strengths and weaknesses, that it becomes very difficult to achieve a truly objective and unbiased result when one introduces sharpening.
In other words, one type of sharpening that produces the best result with a D800 image, might not be the best type of sharpening to get the best result from a D800E image. And even if you've done wide-ranging experimentation to demonstrate that the resolution differences after sharpening are in the order of only 1%, someone else might come up with a different sharpening routine, either now or in the future, which might significantly widen that gap of 1%.
DXO understand this perfectly, which is why they deliberately avoid introducing sharpening techniques into their methodology.
Best wishes for a speedy resolution of Edmund's current medical problems.
-
I don't really understand that. I do believe you, maybe you can flesh it out a bit more. If we have a camera with a LPF, the aliasing should be minimal or software generated. If we have a camera like the 800E we should get sub-nyquist radius garbage which should give the user an idea of how the camera is going to work on fine patterns.
Hi Arthur,
The problem is that an OLPF-equipped sensor will also exhibit aliasing artifacts, only less prominently. The reason is that eliminating ALL aliasing potential would require a much stronger pre-blur, which would result in a very low contrast and low resolution image. That's partly due to the different sampling densities between the Green and the Red/Blue channels of the Bayer CFA.
If we look at a radius greater than the Nyquist circle we can calculate the frequency relative to the pixel frequency. Of course this is basic to you; I am trying to explain what the idea was. So I am thinking that as the frequency scales out, you can also see how it attenuates from pure black/white. Isn't this useful?
The problem is that the images will be hardly useful due to aliasing and related False color Demosaicing artifacts. See the attached simulation of a sinusoidal grating and a bi-tonal grating of the same star, side by side. First the regular images, then the same after overlaying a Bayer CFA filter in Photoshop and demosaicing the result in PixInsight software with a bi-linear demosaicing algorithm, then with a VNG algorithm instead of bi-linear. This would be similar to how a sensor array without OLPF would respond. The fourth attachment was Gaussian Blurred (0.3 radius) before overlaying with a Bayer CFA, and then demosaiced with the VNG algorithm just like the third attachment.
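The experiment can be roughly recreated in a few lines (my own numpy sketch, not the Photoshop/PixInsight workflow described above; the masks and kernel assume an RGGB layout and plain bilinear demosaicing):

```python
# Sample a grayscale grating through an RGGB mosaic, demosaic with bilinear
# interpolation, and compare false color well below vs. near Nyquist.
import numpy as np

def conv3(img, k):
    """3x3 convolution with wrap-around borders, via shifts."""
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += k[dy + 1, dx + 1] * np.roll(np.roll(img, dy, 0), dx, 1)
    return out

def false_color(period):
    """Mean chroma error after demosaicing a gray grating of the given period."""
    n = 64
    x = np.arange(n)
    gray = 0.5 + 0.5 * np.sin(2 * np.pi * x / period)
    img = np.tile(gray, (n, 1))                 # grayscale scene: R = G = B

    yy, xx = np.mgrid[0:n, 0:n]
    masks = {'R': (yy % 2 == 0) & (xx % 2 == 0),    # RGGB channel masks
             'G': (yy % 2) != (xx % 2),
             'B': (yy % 2 == 1) & (xx % 2 == 1)}

    k = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
    planes = {}
    for c, m in masks.items():
        mf = m.astype(float)
        planes[c] = conv3(img * mf, k) / conv3(mf, k)   # bilinear fill-in

    # for a gray scene, any R-B difference is demosaicing false color
    return np.abs(planes['R'] - planes['B']).mean()

low = false_color(period=16)    # well below Nyquist: little false color
high = false_color(period=2.5)  # near Nyquist: strong false color
print(low < high)               # True
```

The chroma error is small for the coarse grating and large for the fine one, which is the false-color effect visible in the attachments.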
Real objects don't have rounded edges, they just finish.
Only in an 'ideal' world. However, with discrete sampling sensors, we are still faced with sensels that use area sampling, which averages the signal over the sensel aperture (IOW the edge will gradually go from covered to not covered, and the average is encoded as a square of uniform density). This in contrast to a point sampling system. Even our eyes, due to lens and inner-eye (vitreous, or gelatinous mass) imperfections blur the image forming rays to something more (Co)sinusoidal. At least our eyes use dithered sampling ...
I would expect that when you get to the point where the pixels are 1/2 the width of the bars you should be hitting pure black, pure white. At frequencies inside that circle there should be the whole MTF curve. What is wrong? This is not my field of study.
Just draw a circle of 92 pixels diameter (=Nyquist), and a circle of 183 pixels diameter (=0.5x Nyquist), on the attached renderings of the star centers for the relevant spatial frequencies.
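The geometry behind those two numbers can be checked directly (my sketch, using only the fact that on a 144-cycle star the frequency at radius r is 144 / (2*pi*r) cycles/pixel):

```python
# Diameters of the Nyquist (0.5 cy/px) and half-Nyquist (0.25 cy/px) circles
# on a 144-cycle Siemens star: f = cycles / (2*pi*r), so d = cycles / (pi*f).
import math

cycles = 144
d_nyquist = cycles / (math.pi * 0.5)    # diameter where f = 0.5 cy/px
d_half = cycles / (math.pi * 0.25)      # diameter where f = 0.25 cy/px
print(round(d_nyquist), round(d_half))  # 92 183
```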
Cheers,
Bart
-
Best wishes for a speedy resolution of Edmund's current medical problems.
Thanks. I'm ok, but for a change feeling about as smart as I really am :)
Going to watch this thread scroll by for another day, before I become my usual bitchy self.
Edmund
-
Bart,
Thanks for the clear explanation, that really fills in some gaps.
-
I'm a layman and don't understand much of what's been said above. I mainly shoot landscapes. If I was to buy a new camera, would I be better off with the D800 or D800E and why?
-
Thanks. I'm ok...
Glad to hear it.
Going to watch this thread scroll by for another day, before I become my usual bitchy self.
Can't wait.
Jim
-
IIRC, others (e.g. Frans van den Bergh (http://www.dpreview.com/forums/post/51180297) with his MTF Mapper (http://sourceforge.net/projects/mtfmapper/postdownload?source=dlp) application) have tried to model an AA-filter with four (Gaussian or Box?) blur kernels, and a variable (orthogonal? +) diagonal offset between these kernels.
Thanks for the pointer, Bart. Jack Hogan also gave me a link to the same site. I threw the 4-way beam splitter into the model, and found that it did much better than a straight box blur (modeled by making the fill factor 400%) in the stop-band, and a little worse in the pass-band.
Details here: http://blog.kasson.com/?p=5832
Jim
-
I'm a layman and don't understand much of what's been said above. I mainly shoot landscapes. If I was to buy a new camera, would I be better off with the D800 or D800E and why?
The first test is whether you're willing to go to all the trouble to get the most out of the E version: sturdy tripod, good lenses, mirror lockup, live-view focusing, keep the focal length of the lenses below, say, 300 mm... You get the idea. If you're OK with that, the next test is whether you want your files to be mostly aliasing-free with default settings in your raw converter of choice, or whether you're willing to do some work on some of them to clean up the false color. If you are willing to do some editing and you said yes to the first set of conditions, the E is a good choice. If you want your files to be mostly free of moire and false color just as they came from Lr, then you probably don't want the E.
However, as Bart pointed out above, there's a condition where the E is a better choice than the regular D800 that's counterintuitive: if you're going to have images where the combination of lens defects, diffraction, focus errors, vibration, etc are enough to keep you from having aliasing, you're better off without the extra blur of the AA filter, so you want the E.
Confusing, huh?
Jim
-
I have always wondered why digital imaging doesn't use a figure like total harmonic distortion (THD) like in audio.
THD is a measure of departure from linearity. So far, in this discussion, we've assumed linearity, so I'm not sure what your point is here. Maybe you could expand on that.
In my experience, if you stay away from electron counts near the full well capacity, where there is often some compression, today's sensors are remarkably linear.
Even if you had a perfect brick-wall low pass filter you will not get a high quality signal at or very close to Nyquist frequency
I'm assuming that your filter comment is unrelated to your THD comment, so I'll assume linearity here. I don't know what you mean by a high-quality signal in this context. Certainly, one of the consequences of steep filter skirts is large phase shifts. That's important in audio. Is that what you mean?
I've wondered about the effect of spatial phase shifts in imaging before (http://blog.kasson.com/?p=5241), but have not reached any conclusions. They are explicitly ignored in MTF analysis. Is that a bad thing?
Jim
-
I've wondered about the effect of spatial phase shifts in imaging before (http://blog.kasson.com/?p=5241), but have not reached any conclusions. They are explicitly ignored in MTF analysis. Is that a bad thing?
Hi Jim,
Good or bad, maybe ... In the ISO SFR analysis based on a slanted edge, the method averages into quarter-sensel bins, and enforces an 'exact' multiple of phase rotations. That tends to produce a stable result under all circumstances, but it hides e.g. a stair-stepping tendency of aliasing.
With the tests I've done on my Slanted edge evaluation method (http://bvdwolf.home.xs4all.nl/main/foto/psf/SlantedEdge.html), I take fewer phase rotations (1/10th up to how many one has the patience for, I recommend 10/10ths), because it reveals such intricacies. As can be seen from the attached example, it takes more work, but it starts to show that for the sharpest aperture there is a slightly higher phase rotation effect than for the more blurry apertures. My results only fluctuate a bit, also because I mostly ignore Gamma, but the differences are usually buried in the fractional digits.
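For readers wanting to experiment with the slanted-edge idea, the core pipeline after the edge projection and binning is just ESF → LSF → |FFT|. A bare-bones sketch (it omits the binning, phase-rotation handling, and gamma correction Bart describes; the Hann window is my choice, and `oversample` assumes the ESF was built in sub-pixel bins):

```python
import numpy as np

def mtf_from_esf(esf, oversample=4):
    """Differentiate an oversampled edge-spread function into a
    line-spread function, window it, and take the normalized |FFT|.
    Returns (frequencies in cycles/pixel, MTF)."""
    lsf = np.diff(np.asarray(esf, dtype=float))  # ESF -> LSF
    lsf *= np.hanning(lsf.size)                  # damp truncation leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)
    return freqs, mtf
```

A perfect step edge (delta LSF) comes out as a flat MTF of 1.0 at all frequencies, which is a handy sanity check for the implementation.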
Cheers,
Bart
-
I've added a chart as attachment to show the trade-offs between lens quality (expressed as least achievable blur), sensel pitch, and resulting resolution in (simulated) Cycles/mm at MTF50. Diffraction is not fully modeled in, so that may reduce the amplitude at narrower sensel pitches a bit.
And here's a surface that looks a lot like Bart's for a diffraction-limited (red lambda 650nm, green lambda 550nm, blue lambda 450nm) lens (well modeled except for the discrete wavelengths: the Airy filter kernel was 3 times the diameter of the first zero times the target to sensor ratio of 32, and the run to generate this took about 4 hours) from f/2.8 to f/17.5, and sensel pitches from 2 to 6 um. An RGGB sensor with Adobe RGB primaries was assumed, and bi-linear interpolation was used for demosaicing.
Z axis is MTF50 cycles per picture height, assuming a 24x36mm sensor, computed using sfrmat3.
(http://www.kasson.com/ll/diffractionltdmtf50.png)
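For reference, the diffraction-limited MTF being simulated here has a well-known closed form for a circular aperture under incoherent light (this is the textbook formula, not Jim's actual simulator code):

```python
import math

def diffraction_mtf(f_cyc_per_mm, wavelength_mm, f_number):
    """Diffraction-limited MTF of a circular aperture under incoherent
    illumination. Cutoff frequency is fc = 1 / (wavelength * N);
    beyond fc the response is exactly zero."""
    fc = 1.0 / (wavelength_mm * f_number)
    s = f_cyc_per_mm / fc  # normalized spatial frequency
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))
```

At f/8 in green light (550 nm = 550e-6 mm) the cutoff works out to roughly 227 cycles/mm, which is why narrow sensel pitches stop paying off at small apertures.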
Jim
-
The first test is whether you're willing to go to all the trouble to get the most out of the E version: sturdy tripod, good lenses, mirror lockup, live-view focusing, keep the focal length of the lenses below, say, 300 mm... You get the idea. If you're OK with that, the next test is whether you want your files to be mostly aliasing-free with default settings in your raw converter of choice, or whether you're willing to do some work on some of them to clean up the false color. If you are willing to do some editing and you said yes to the first set of conditions, the E is a good choice. If you want your files to be mostly free of moire and false color just as they came from Lr, then you probably don't want the E.
However, as Bart pointed out above, there's a condition where the E is a better choice than the regular D800 that's counterintuitive: if you're going to have images where the combination of lens defects, diffraction, focus errors, vibration, etc are enough to keep you from having aliasing, you're better off without the extra blur of the AA filter, so you want the E.
Confusing, huh?
Jim
Why can't they make a screw-on filter for when you need the AA? Just like putting a UV filter on the lens.
-
Why can't they make a screw-on filter for when you need the AA? Just like putting a UV filter on the lens.
You could try Vaseline... ;)
-
You could try Vaseline... ;)
Or panty hose.
-
Why can't they make a screw-on filter for when you need the AA? Just like putting a UV filter on the lens.
Hi Arthur,
Because it becomes very difficult to restrict the blur to a very tiny region. When lenses get involved, the effect will also be quite variable due to variable lens aberrations. Some claim to have a patent-pending product (http://www.mosaicengineering.com/products/nbaa.html) that requires some sort of Raw conversion adjustment. It also seems to be aimed at video resolutions, so there is probably a lot of blurring at the full resolution level. There do seem to be some issues (http://www.cinema5d.com/news/?p=20584) with image quality ...
Cheers,
Bart
-
Maybe someone can test the black panty hose trick. It's been around forever in film; what is the impact on an AA-free digital camera?
-
Why can't they make a screw-on filter for when you need the AA? Just like putting a UV filter on the lens.
Apparently the Pentax K3 uses its sensor image stabilization system to produce an OLPF effect at will. Don't know how well it works compared to a fixed AA but it seems like a pretty smart approach.
-
This is an interesting topic. I would like to take this further. In particular, I would like to understand what MTF curves for a real-world AA filter looks like.
In the frequency domain an ideal 4-dot beam splittin' AA looks like a cosine that hits first zero at 1/(4x) cycles per pixel, +/- x being the shift in pixels (http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html) introduced by it. In an MTF graph it is an absolute value so it bounces back after the zero. x tends to be in the 0.35-0.4 pixel range for most current (Nikon) cameras. Here is a theoretical example on a D610 (dotted blue line, x=0.39), it should look very similar on a D800:
(http://i.imgur.com/GMu1Ujw.png)
It is hard to isolate because one needs to keep everything the same in the AA and AAless version in order to measure the AA effect accurately. I think I was able to do it on the D610 and A7 because they appear to have antialiasing action in one direction only, so I was able to use green channel raw data from a single capture in each case:
(http://i.imgur.com/iDciom7.png)
(http://i.imgur.com/On3sqYe.png)
Things get unreliable after the zero because I am taking the ratio of two small noisy values. You can read more about where these charts came from here (http://www.dpreview.com/forums/thread/3654038).
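Jack's cosine description translates directly into a one-liner. A sketch (per axis; it ignores the pixel aperture, lens, and demosaicing, and the 0.39 px default is the D610 value quoted above):

```python
import math

def aa_mtf(f_cyc_per_px, shift_px=0.39):
    """MTF of an ideal beam-split along one axis with a +/- shift_px
    displacement: |cos(2*pi*shift*f)|. First zero lands at
    1/(4*shift) cycles/pixel, then the curve bounces back up
    because MTF is an absolute value."""
    return abs(math.cos(2.0 * math.pi * shift_px * f_cyc_per_px))
```

With shift = 0.39 the first zero lands near 0.64 cycles/pixel, matching the dotted blue curve in the first chart.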
Jack
-
I saw that, but does it contradict some of the other information around on the resolution differences between the two models ?
Anyway, it's an amazing camera the E.. 8)
A newer model with a bit of an update would be welcome here..
I believe that is because of the lenses. Most lenses will not show the difference. Look at the DxO data and you will see a bunch of lenses (that are good in many ways) rendering 13 or fewer perceptual megapixels. So with that glass the difference between a D600 and a D800E will be less than expected, let alone D800 vs D800E.
Resolution is not all there is to a lens. Sometimes resolution is not even the key parameter. Nevertheless, I can't justify a design like the Nikon 58mm f/1.4 simply because its resolution is too poor, though on a D4 it would be all that's needed.
I want to see if the D800E can differentiate between the average resolutions of the Sigma 50mm Art and the 55mm Otus.
I continue to believe that the 135mm Zeiss is probably the best lens there is (taking price into account), unless you need autofocus.
Zeiss should license Sigma's or Tamron's autofocus know-how, or use the screw mechanism. Waiting for authorization from Nikon or Canon is nonsense.
Best regards,
J. Duncan
-
Now, the tests are all shot wide open, correct?
I never shoot wide open; I'm rarely even wider than f/4, mostly f/5.6-11.
Would any diffraction still equalize the bodies at smaller apertures?
Before I bought the d800 everyone said "the difference is so minor" "sharpening will minimize any difference"
These tests seem to indicate a pretty huge gap. What would be a more pragmatic reading?
Would I really be gaining any noticeable difference with the "e"?
Of course I'm happy with my non-e but thinking if I could ramp up sharpness that much....
The article states that wide open the Otus matches or exceeds the 135mm at f/2.8, but at f/4.0 the 135mm beats everyone else, at any aperture, basically. So the tests are done at multiple apertures.
Also, when you use the tool for comparing lenses you can change the aperture (and the focal length) and see the results with a specific camera.
The best part of DxOMark is having a standard. I hope that alternatives come along so we can use multiple sources for comparing lenses and sensors.
But having a standard is better than no standard and marketing scams about looks.
Best regards,
J. Duncan
-
Resolution is not all there is to a lens. Sometimes resolution is not even the key parameter. Nevertheless, I can't justify a design like the Nikon 58mm f/1.4 simply because its resolution is too poor, though on a D4 it would be all that's needed.
Interestingly, the Nikon 58mm f1.4 has just been awarded lens of the year in Japan. ;)
Cheers,
Bernard
-
Hi Jack
That's very interesting information you've provided. It gives a very good summary of how the different elements of the system contribute to MTF. For me, this is just the sort of information you need to make a proper distinction between lens performance and camera performance.
The AA blur in one direction only is interesting but I'm struggling to understand the rationale behind it !
Dave
In the frequency domain an ideal 4-dot beam splittin' AA looks like a cosine that hits first zero at 1/(4x) cycles per pixel, +/- x being the shift in pixels (http://mtfmapper.blogspot.it/2012/06/nikon-d40-and-d7000-aa-filter-mtf.html) introduced by it. In an MTF graph it is an absolute value so it bounces back after the zero. x tends to be in the 0.35-0.4 pixel range for most current (Nikon) cameras. Here is a theoretical example on a D610 (dotted blue line, x=0.39), it should look very similar on a D800:
(http://i.imgur.com/GMu1Ujw.png)
It is hard to isolate because one needs to keep everything the same in the AA and AAless version in order to measure the AA effect accurately. I think I was able to do it on the D610 and A7 because they appear to have antialiasing action in one direction only, so I was able to use green channel raw data from a single capture in each case:
(http://i.imgur.com/iDciom7.png)
(http://i.imgur.com/On3sqYe.png)
Things get unreliable after the zero because I am taking the ratio of two small noisy values. You can read more about where these charts came from here (http://www.dpreview.com/forums/thread/3654038).
Jack
-
The AA blur in one direction only is interesting but I'm struggling to understand the rationale behind it !
Hi Dave,
Perhaps to reduce Video resolution aliasing, but not lose too much still image resolution?
Cheers,
Bart
-
Hi Bart
Yes video shooters seem to be more concerned about AA than still shooters ! I don't know much about that aspect of video but I think that aliasing is probably accentuated with video due to variations in it from frame to frame, which would give a "motion" effect to the aliasing.
With AA blur in the vertical direction only, you'll get a varying amount of alias reduction depending on the direction of the high frequency detail in the image. For high frequencies purely in the vertical direction there will be no alias reduction whereas in the horizontal direction there will be maximum alias reduction. In general, something in between the two extremes will occur.
Dave
Hi Dave,
Perhaps to reduce Video resolution aliasing, but not lose too much still image resolution?
Cheers,
Bart
-
THD is a measure of departure from linearity. So far, in this discussion, we've assumed linearity, so I'm not sure what your point is here. Maybe you could expand on that.
In my experience, if you stay away from electron counts near the full well capacity, where there is often some compression, today's sensors are remarkably linear.
I'm assuming that your filter comment is unrelated to your THD comment, so I'll assume linearity here. I don't know what you mean by a high-quality signal in this context. Certainly, one of the consequences of steep filter skirts is large phase shifts. That's important in audio. Is that what you mean?
I've wondered about the effect of spatial phase shifts in imaging before (http://blog.kasson.com/?p=5241), but have not reached any conclusions. They are explicitly ignored in MTF analysis. Is that a bad thing?
Jim
Thanks for your comment. I might have been using the THD term in an imprecise way, since it is usually related to the harmonics created as a result of non-linearity in systems.
Similar to THD, I'm referring to the weight or power of artificially created output signal during sampling or reconstruction, which is not part of the original signal (THD considers harmonic frequencies caused by non-linearities).
In digital imaging we can have aliasing and artifacts caused by the interpolation method.
My point, and I may be wrong, is that as you get closer to the Nyquist frequency, the weight of those "artificial components of the signal", or artifacts, starts to grow and degrade the quality of the signal.
Regarding phase shift, I have no idea if it affects the output.
On another note, related to the non-intuitive reasoning about why you don't need the AA filter if your lenses are not that great: you can think of it as needing only one low-pass filter, not two.
Finally, does anybody know if the DxO test comparing the D800 and D800E used unsharpened images?
Regards
-
Finally, does anybody know if the DxO test comparing the D800 and D800E used unsharpened images?
Hi Francisco,
Sharpening affects the MTF (as does demosaicing) and the visibility of aliasing. Since no sharpening was specified, we can only assume that it was omitted. I'm pretty sure that the folks at DxO would have mentioned it if sharpening had been applied, because the way they could do it would reduce the differences between center and corners.
Cheers,
Bart
-
Hi Bart,
Thanks for your comment. It is just that I consider capture sharpening (preferably by deconvolution) as an essential part of the process, especially if there is an AA filter. I understand it is difficult to include it in a comparative test and not cause more controversy.
It would be interesting if DxO applied their own "lens softness" algorithm together with the corresponding camera/lens combinations and posted the results for this comparative test. My guess is that the differences will be less dramatic.
Regards,
-
Hi Bart,
My guess is that the differences will be less dramatic
Regards,
I think this is actually the issue at debate: are the 800 and E similar in real life or not? My reading of the DxO data says "not".
And of course most of us here wonder how much better sharpening/deconvolution/motion cancellation work when you don't throw high frequency information away deliberately.
Edmund
-
And of course most of us here wonder how much better sharpening/deconvolution/motion cancellation work when you don't throw high frequency information away deliberately.
Edmund
I think it is a compromise between high frequency information thrown away vs. information "created" or not present in the original signal (aliasing + artifacts). Now, due to the way we perceive images, some people might actually like the effect of those artifacts, giving the idea of increased sharpness.
-
I think that guess, that the D800 and E are very similar in real life with "normal" lenses and/or mediocre shooting conditions, was the general assumption; whether it is true is the issue at the heart of this thread.
Hi Edmund,
Good to have you back from your mishap.
As we can see in other threads here on LuLa and elsewhere, sample density (which is identical between the D800 and D800E) and lens quality are the two main deciders for image resolution. All the OLPF does is reduce the modulation of the highest spatial frequencies more than that of lower spatial frequencies, but it does not eliminate them altogether. The net loss of limiting resolution due to the unmodified OLPF in the D800 was measured as less than 1% (http://www.luminous-landscape.com/forum/index.php?topic=65927.msg523733#msg523733), but at the same time the amplitude of aliasing was reduced.
So I am also a bit surprised by the significantly different MTF50 + perception based scores of DxOmark. It must be the perception component (Contrast Sensitivity Function) that weighs in that heavily. Of course, deconvolution sharpening would boost the spatial frequency at which the MTF50 is achieved, and it would boost the modulation at the peak of Contrast sensitivity. It's also easier to deconvolve an OLP filtered image without boosting artifacts than from an unfiltered image.
That would suggest that the Mpix score leaves a lot of real life (which always requires Capture sharpening) quality perception out of the metric, thus reducing its predictive importance. It's just a benchmark, and it leaves a lot of questions unanswered (as most single figure metrics do). In this case it even raises questions, because the lenses in the mix are that good (they do not level the remaining differences).
In particular, it may turn out that deconvolution/sharpening/motion cancellation and lens aberration compensation works a great deal better when information has not been deliberately destroyed :)
I fully agree. And as other experiments (http://www.luminous-landscape.com/forum/index.php?topic=45038.msg378541#msg378541) have shown, even a properly behaved destruction (f/32 diffraction with known PSF kernel) can be recovered from, to a large extent. But when e.g. aliasing artifacts get into the mix, then things become very difficult to repair. GIGO still rules.
Of course, aliasing is only an issue in the narrow plane of focus, so not all subject matter is affected in the same way. Stopping down beyond f/5.6 also starts reducing the issues (and differences) on the sensel pitch of the D800/E due to diffraction. Also remember that the DxO mark tests were cherry-picking the best apertures, presumably around f/4, where diffraction does not yet affect the scores as much. Comparing them at e.g. f/8 would have given much closer scores already.
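To illustrate the point about recovering from a known, well-behaved PSF, here is the simplest frequency-domain approach, a toy Wiener deconvolution (my own sketch, not the method used in the linked experiment). Note that it can only rebalance frequencies the filter attenuated; once aliasing has folded energy into the wrong frequencies, no filter of this kind can undo it:

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Wiener deconvolution with a known, centered, odd-sized PSF.
    nsr is the assumed noise-to-signal power ratio: the regularizer
    that keeps near-zero frequencies from blowing up."""
    pad = np.zeros_like(image, dtype=float)
    py, px = psf.shape
    pad[:py, :px] = psf
    # roll the PSF center to index (0, 0), as the FFT convention expects
    pad = np.roll(pad, (-(py // 2), -(px // 2)), axis=(0, 1))
    H = np.fft.fft2(pad)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
```

With a small nsr and a noiseless blur this recovers a point source almost exactly; with real noise, nsr has to be raised, and the recovery degrades gracefully instead of exploding.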
Cheers,
Bart
-
Hi Edmund,
Good to have you back from your mishap.
As we can see in other threads here on LuLa and elsewhere, sample density (which is identical between the D800 and D800E) and lens quality are the two main deciders for image resolution. All the OLPF does is reduce the modulation of the highest spatial frequencies more than that of lower spatial frequencies, but it does not eliminate them altogether. The net loss of limiting resolution due to the unmodified OLPF in the D800 was measured as less than 1% (http://www.luminous-landscape.com/forum/index.php?topic=65927.msg523733#msg523733), but at the same time the amplitude of aliasing was reduced.
So I am also a bit surprised by the significantly different MTF50 + perception based scores of DxOmark. It must be the perception component (Contrast Sensitivity Function) that weighs in that heavily. Of course, deconvolution sharpening would boost the spatial frequency at which the MTF50 is achieved, and it would boost the modulation at the peak of Contrast sensitivity. It's also easier to deconvolve an OLP filtered image without boosting artifacts than from an unfiltered image.
That would suggest that the Mpix score leaves a lot of real life (which always requires Capture sharpening) quality perception out of the metric, thus reducing its predictive importance. It's just a benchmark, and it leaves a lot of questions unanswered (as most single figure metrics do). In this case it even raises questions, because the lenses in the mix are that good (they do not level the remaining differences).
I fully agree. And as other experiments (http://www.luminous-landscape.com/forum/index.php?topic=45038.msg378541#msg378541) have shown, even a properly behaved destruction (f/32 diffraction with known PSF kernel) can be recovered from, to a large extent. But when e.g. aliasing artifacts get into the mix, then things become very difficult to repair. GIGO still rules.
Of course, aliasing is only an issue in the narrow plane of focus, so not all subject matter is affected in the same way. Stopping down beyond f/5.6 also starts reducing the issues (and differences) on the sensel pitch of the D800/E due to diffraction. Also remember that the DxO mark tests were cherry-picking the best apertures, presumably around f/4, where diffraction does not yet affect the scores as much. Comparing them at e.g. f/8 would have given much closer scores already.
Cheers,
Bart
Bart,
I think it might be worthwhile running some real world tests or spatial domain simulations. The remark about narrow plane of focus may or may not apply to landscape and art repro shooters who are some of the untypical clusters found on this forum.
Also, as Francisco alludes to, the whole issue is clouded by the question of what gives "sparkle" to an image, and how texture is perceived. Head hair, beard stubble, skin, eye detail, feathers etc.
In fact Francisco makes an interesting point: Film grain became an integral and expected part of the "dark" photographic image, and aliasing artefacts may now be culturally expected as an indication of sharpness in digital photos.
I don't feel well served by the standard MTF results, or indeed my own MTF lens tests, in the sense of predicting subjective sharpness, while I do find that DxO's DR and noise figures reflect and predict quite well what I see in the field when using a camera.
I guess if I were more experienced I would feel more comfortable with frequency arguments, but at this point I think spatial simulation might be more illuminating. I wonder whether resampling/filtering/re-enhancing some typical dSLR images down to VGA resolution might not supply the desired enlightenment with low experimental overhead.
Edmund
PS. I do wonder is a decent camera as good as a decent hifi or is it as bad as a transistor radio? Are we really seeing the texture or does an image "only" convey the same amount of information about skin and hair as the Venus de Milo? ;)
-
Bart,
I think it might be worthwhile running some real world tests or spatial domain simulations. The remark about narrow plane of focus may or may not apply to landscape and art repro shooters who are some of the untypical clusters found on this forum.
Hi Edmund,
I agree, real images are better than charts or single-number metrics, but the difficulty with visual comparisons is that they are complex. Many different types of subject matter, and many different post-processing paths, lead to complex comparisons.
Also, as Francisco alludes to, the whole issue is clouded by the question of what gives "sparkle" to an image, and how texture is perceived. Head hair, beard stubble, skin, eye detail, feathers etc.
I absolutely agree. As I tried to demonstrate, even the first step in a sharpening workflow, Capture sharpening (http://www.luminous-landscape.com/forum/index.php?topic=68089.msg539206#msg539206), can create a level playing field for subsequent post-processing and comparison, but our tools make it hard to achieve it. That's even before Creative sharpening! And even something like Clarity can be implemented in hugely different ways and again change the look of an image immensely. And then there is 'taste', or the lack of it.
In fact Francisco makes an interesting point: Film grain became an integral and expected part of the "dark" photographic image, and aliasing artefacts may now be culturally expected as an indication of sharpness in digital photos.
I agree that expectations have something to do with it, but I've never seen a noisy sky when I look at the real thing, or a stairstepped straight edge with halos. So when I want to capture reality, I'm not going to create an abstraction. When I do want to create an effect/abstraction, anything goes, even creating an image from a photograph (or some over the top HDR tonemapping). However, when artifacts start to distract from conveying the emotion or message, then something needs to be improved, IMHO.
I don't feel well served by the standard MTF results, or indeed my own MTF lens tests, in the sense of predicting subjective sharpness, while I do find that DxO's DR and noise figures reflect and predict quite well what I see in the field when using a camera.
I guess if I were more experienced I would feel more comfortable with frequency arguments, but at this point I think spatial simulation might be more illuminating. I wonder whether resampling/filtering/re-enhancing some typical dSLR images down to VGA resolution might not supply the desired enlightenment with low experimental overhead.
It's tough to devise an objective comparison. Just to illustrate one of the potential variables, look at the two attached examples (first from the "optimal Capture sharpening" thread, second with added Detail / Creative sharpening) and compare. A viewing distance of some 6 feet or 2 metres, at 100% display zoom might give a better sense of detail with our low resolution displays. Also observe how much apparent resolution is gained at closer viewing distances, because I improved spatial frequencies for closer viewing more than those for more distant viewing.
So without a rigorous regime of shooting, processing, viewing distance, and subject standardization, most images can be made to look like another, or something different, very easily.
PS. I do wonder is a decent camera as good as a decent hifi or is it as bad as a transistor radio? Are we really seeing the texture or does an image "only" convey the same amount of information about skin and hair as the Venus de Milo? ;)
LOL, define decent HiFi (and how large a room and its acoustical properties) ...
Cheers,
Bart
-
We should all know by now that expectations can result in a placebo effect, on all matters, whether it be testing the efficacy of a new drug, or the clarity and 3-dimensionality of a hi-fi or a photographic image experience.
To resolve such matters you need the double blind test. That is, a comparison in which the viewers or listeners have no knowledge of the credentials, brand, or model of the equipment being used to produce the sound or images.
In the world of hi fi, such testing has produced remarkable results. Subtleties of amplifier harmonic distortion often get drowned in the larger deficiencies of loudspeaker performance and room acoustics, often resulting in expensive amplifiers with ultra-low harmonic distortion serving no practical purpose.
Likewise, subtleties of resolution differences in images can get lost depending on print size and viewing distance.
Without a direct comparison, at very large sizes, of D800 and D800E images of the same subject using the same lens and shooting methodology, there can be no meaningful conclusion.
Such comparisons would of course have to include unsharpened images, as well as images sharpened with a variety of different techniques and programs.
-
We should all know by now that expectations can result in a placebo effect, on all matters, whether it be testing the efficacy of a new drug, or the clarity and 3-dimensionality of a hi-fi or a photographic image experience.
Reports of miracle improvements from the recent D800/D800E firmware are a case in point.
On dpreview people were reporting changes in the new firmware including not just the changes that Nikon described but also faster AF, lower noise and I'm sure someone said it now cures warts.
-
Hi,
I would agree on most issues. Placebo effects may play a role. Also, sharpening very clearly plays a very major role.
Something I have noticed is that there can be very little difference in moderately small prints, like A2, between formats and megapixels. Once a format is good enough, we get diminishing returns. For instance, a smaller format may give deeper DoF that may be preferable to the shallower DoF of the larger format.
Having more megapixels doesn't really hurt anything else, so I guess it is always a good thing, at least within reasonable limits.
Best regards
Erik
We should all know by now that expectations can result in a placebo effect, on all matters, whether it be testing the efficacy of a new drug, or the clarity and 3-dimensionality of a hi-fi or a photographic image experience.
To resolve such matters you need the double blind test. That is, a comparison in which the viewers or listeners have no knowledge of the credentials, brand, or model of the equipment being used to produce the sound or images.
In the world of hi fi, such testing has produced remarkable results. Subtleties of amplifier harmonic distortion often get drowned in the larger deficiencies of loudspeaker performance and room acoustics, often resulting in expensive amplifiers with ultra-low harmonic distortion serving no practical purpose.
Likewise, subtleties of resolution differences in images can get lost depending on print size and viewing distance.
Without a direct comparison, at very large sizes, of D800 and D800E images of the same subject using the same lens and shooting methodology, there can be no meaningful conclusion.
Such comparisons would of course have to include unsharpened images, as well as images sharpened with a variety of different techniques and programs.
-
Having more megapixels doesn't really hurt anything else, so I guess it is always a good thing, at least within reasonable limits.
...assuming excellent down-resing algorithms, which is not always the case.
Jim
-
Jim,
Thanks for making that point! Very true, indeed.
On the other hand, I would suppose that resizing an image is always prone to aliasing, except in special cases, so just having more pixels would not exaggerate the problem.
Best regards
Erik
Having more megapixels doesn't really hurt anything else, so I guess it is always a good thing, at least within reasonable limits.
...assuming excellent down-resing algorithms, which is not always the case.
Jim
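Erik's caveat is easy to demonstrate in one dimension: decimating without a pre-filter lets an unresolvable pattern masquerade as a false signal, while even a crude box average cancels it (a toy example; good down-resing algorithms use much better filters than a box):

```python
import math

# a pattern with one full cycle every 3 samples, with a phase offset
src = [math.sin(2 * math.pi * n / 3 + 0.4) for n in range(30)]

# naive 3:1 decimation samples the same phase every time, so the
# pattern aliases to a false constant (DC) signal
naive = [src[n] for n in range(0, 30, 3)]

# averaging each group of 3 before decimating cancels the cycle:
# three equally spaced phases of a sine sum to exactly zero
boxed = [sum(src[n:n + 3]) / 3 for n in range(0, 30, 3)]
```

The naive output is a constant sin(0.4) that was never in the scene, which is exactly the kind of artifact a poor downsizing algorithm bakes into fine texture.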
-
Hi Jack
That's very interesting information you've provided.
Thanks Dave, I owe it mostly to Frans van den Bergh (http://mtfmapper.blogspot.it/2012/05/pixels-aa-filters-box-filters-and-mtf.html), Detail Man (at DPR) and The_Suede.
The AA blur in one direction only is interesting but I'm struggling to understand the rationale behind it !
In addition to Bart's suggestion, here is what The_Suede had to say about it (http://www.dpreview.com/forums/post/53492844).
Jack
-
It is just that I consider capture sharpening (preferably by deconvolution) as an essential part of the process, especially if there is an AA filter. I understand it is difficult to include it in a comparative test and not cause more controversy.
Hi Francisco,
I agree that capture sharpening is an essential part of the image formation process. It's however not necessary in some situations, and it may be detrimental in others. I think it's useful to distinguish between two categories of questions:
1) How will lens A perform compared to lens B on my one camera?; and
2) How will lens A perform on two different cameras with varying AA strengths and most other things equal?
In the first case, a metric meant to help with buying decisions should imo concentrate on the hardware only. As an (amateur landscape) photographer, I consider my job to be capturing the best spatial information possible at the scene, so that this information can later be processed into the most pleasing result on the final display medium. The better the information captured, the better the final result. The best spatial information is captured by using the best hardware available, so when selecting equipment I am interested in the objective performance of the hardware in a camera system - as opposed to subjective post-processing results (including demosaicing and sharpening algorithms with arbitrary parameters), which can always (and mostly will) be applied later to obtain the desired effect. Perhaps this leaves open a question of diminishing returns past a certain point, but that's one I am willing to deal with.
In the second case I can see the rationale for applying capture sharpening before spatial resolution measurements, because it is true that one may be able to 'restore' some of the effects of a stronger AA filter or other subsystem through the judicious application of advanced sharpening algorithms. On the other hand, things get very fuzzy and subjective very quickly, because perception enters the equation prominently. Plus, we all know that typical sharpness metrics are very sensitive to acutance, whether that is due to the real performance of the system or introduced artificially by an overzealous operator. It doesn't take much.
Take, for instance, Photozone.de, which opens images for MTF measurements in ACR/LR with sharpening at default - what most observers would consider mild capture sharpening. The Edge Spread Function profiles so generated show overshoots and undershoots in what, in the physical world, is a monotonically increasing S-shaped curve. Even that is too much. Perhaps a reasonable criterion for how much capture sharpening to apply would be that physical limits must not be exceeded. Easier said than done. Or perhaps the operator should sharpen both images to the best of his subjective evaluation/abilities, following a strict set of criteria. Again... And this says nothing of the amplified noise and introduced artifacts that such an approach would entail.
Too many balls in the air for my taste. So while I see the rationale for the application of capture sharpening before measuring spatial resolution for some uses, I have a really hard time figuring out how such measurements can be practically useful by themselves. That's why I tend to start at Lenstip.com (mostly hardware), move on to Photozone.de (capture sharpened) before reaching DxO (perceptual milk shake) and then drawing my own Jack-Pix purchasing conclusions :-)
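The overshoot/undershoot point above is easy to reproduce numerically. A minimal sketch with a hypothetical 1-D edge (the Gaussian blur radii and the sharpening amount are made-up values, not Photozone's or ACR's actual settings): a blurred step rises monotonically, while even a modest unsharp mask pushes it past white and below black:

```python
import numpy as np

x = np.arange(-50, 50)

def gauss(sig):
    """Normalized 1-D Gaussian kernel over the x grid."""
    k = np.exp(-x**2 / (2 * sig**2))
    return k / k.sum()

# A physically plausible edge: a unit step blurred by a Gaussian PSF.
# The resulting edge spread function (ESF) rises monotonically 0 -> 1.
esf = np.convolve((x >= 0).astype(float), gauss(3.0), mode='same')

# Simple unsharp mask: out = in + amount * (in - blur(in)).
sharpened = esf + 1.5 * (esf - np.convolve(esf, gauss(2.0), mode='same'))

core = slice(30, 70)  # ignore convolution boundary effects at the array ends
print(np.all(np.diff(esf[core]) >= 0))  # True: monotonic S-curve, no ringing
print(sharpened[core].max() > 1.0)      # True: overshoot past 'white'
print(sharpened[core].min() < 0.0)      # True: undershoot below 'black'
```

An MTF computed from the sharpened profile would credit the system with acutance that the hardware never delivered, which is exactly the measurement problem described above.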
-
Thanks for the links Jack. I have come across Frans' work before and have played with MTFMapper. He produces some good stuff.
I had a quick look at The_Suede's comments (where do they get these User Names !!). Interesting point he makes about line skipping when reading out video.
Dave
-
An interesting question is whether real world images correspond to the tests. Does the fact that I can capture-sharpen a 100 ISO perfectly exposed image realistically reflect the noise explosion I will get when I do the same to a 6400 ISO shot?
And by the way, what exactly is acutance?
Edmund
-
And by the way, what exactly is acutance?
Good to see you back, Edmund, on form and clearly fully recovered ...
M
-
+1
Regarding acutance:
http://en.wikipedia.org/wiki/Acutance
Best regards
Erik
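For the curious, one common formulation of acutance (among several; the term is indeed fuzzy) is the mean squared brightness gradient across an edge, normalized by the brightness range. A minimal sketch, where the two logistic edges and their widths are arbitrary illustrations:

```python
import numpy as np

def acutance(edge):
    """Toy acutance metric: mean squared gradient across the edge,
    normalized by the squared brightness range."""
    g = np.gradient(edge)
    return np.mean(g**2) / (edge.max() - edge.min())**2

x = np.arange(-30, 30)
soft  = 1 / (1 + np.exp(-x / 4.0))   # gently rising edge
crisp = 1 / (1 + np.exp(-x / 1.0))   # steep edge, same endpoints

print(acutance(crisp) > acutance(soft))  # True: steeper edge, higher acutance
```

Note that the metric rewards steep local gradients regardless of where they come from, which is why sharpening halos inflate acutance-driven sharpness scores so easily.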
-
Regarding acutance:
http://en.wikipedia.org/wiki/Acutance
Dear Erik,
Knowing Edmund (from his posts), I don't think it was entirely a serious question ...
In fact, I wouldn't be surprised if he had written the Wikipedia entry himself!
Best,
M
-
Hi,
I know Edmund from his postings…
On the other hand, I feel that the definition of acutance is a bit fuzzy, as are many terms used to describe the more or less esoteric qualities of imaging systems. The more esoteric the quality, the fuzzier the definition…
So I feel that Edmund's question may be a rhetorical one, but it is still a good one!
Best regards
Erik
-
IMO noise explosions can be just as creatively valid in photography as in music. ;)
-Dave-
-
Dear Erik,
I wouldn't be surprised if he had written the Wikipedia entry himself !
Best,
M
Damning with faint praise. I just saw that; I think that entry is a nice start, but it could use some more love, even polyamory :)
Edmund
-
The point of a low pass filter is to cut off at a certain max frequency. Instead, it seems the D800 filter just chops 25% or so off the measured resolution (whatever that means) of each lens.
This happens with every DSLR that has an AA/OLPF in front of the sensor and in some cases gets into the 30%+ range.
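A back-of-the-envelope sketch of where numbers in that range come from, assuming a 100% fill-factor square pixel and a single-axis beam-splitter AA filter with a one-pixel spot displacement (a stylized model, not any particular camera's design):

```python
import numpy as np

f = np.linspace(0.01, 1.0, 2000)     # spatial frequency, cycles/pixel

# MTF of a square pixel aperture: |sinc(f)| (np.sinc is sin(pi x)/(pi x)).
mtf_pixel = np.abs(np.sinc(f))

# A one-axis birefringent AA filter splits each ray into two spots d
# pixels apart; convolving with two impulses gives an MTF of |cos(pi f d)|,
# which has its first zero at f = 1/(2d) - the Nyquist frequency for d = 1.
d = 1.0
mtf_aa = np.abs(np.cos(np.pi * f * d))

def mtf50(mtf):
    """Lowest frequency where the MTF first falls below 0.5."""
    return f[np.argmax(mtf < 0.5)]

without_aa = mtf50(mtf_pixel)            # ~0.60 cycles/px, pixel alone
with_aa    = mtf50(mtf_pixel * mtf_aa)   # ~0.30 cycles/px, pixel + AA
print(f"MTF50: {without_aa:.2f} -> {with_aa:.2f} cycles/px "
      f"({100 * (1 - with_aa / without_aa):.0f}% lower)")
```

This pixel-plus-AA-only model overstates the loss; once a declining lens MTF is multiplied in, both curves cross 0.5 at lower frequencies where the cosine factor is closer to 1, so the net drop in system MTF50 should shrink toward the 25-30% figures discussed in this thread.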