
Author Topic: Some good stuff from Jim Kasson  (Read 7843 times)

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Some good stuff from Jim Kasson
« Reply #20 on: April 30, 2017, 12:05:43 pm »

The reason I used lp/mm is that it is the figure used in the data published by Zeiss, and I wanted to see how close I could get. The input options to MTF Mapper in this case are lp/mm.
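For readers who want to relate lp/mm to the cycles/picture height figures discussed later in this thread, the conversion only needs the sensor's picture height. Here is a minimal sketch in Python, assuming the published 32.9 mm short side of the GFX 50S sensor; substitute the height of whatever sensor you are working with.

Code:
# Convert between lp/mm and cycles (line pairs) per picture height.
# Assumption: 32.9 mm is the published short-side dimension of the GFX 50S
# sensor; swap in the picture height of your own camera.

SENSOR_HEIGHT_MM = 32.9

def lpmm_to_cy_per_ph(lp_per_mm, sensor_height_mm=SENSOR_HEIGHT_MM):
    """Spatial frequency in lp/mm -> cycles per picture height."""
    return lp_per_mm * sensor_height_mm

def cy_per_ph_to_lpmm(cy_per_ph, sensor_height_mm=SENSOR_HEIGHT_MM):
    """Cycles per picture height -> lp/mm."""
    return cy_per_ph / sensor_height_mm

print(lpmm_to_cy_per_ph(50))    # 50 lp/mm -> 1645 cy/PH on a 32.9 mm high sensor
print(cy_per_ph_to_lpmm(1600))  # ~48.6 lp/mm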

One thing to remember is that the absolute values of the results depend on the quality of the target and the accuracy of focusing, especially with the very best lenses. When I went to the motorized rail and the razor blade, my numbers went up. It was not because the lenses suddenly got better.

Jim

Stephen Scharf

  • Full Member
  • ***
  • Offline
  • Posts: 168
Re: Some good stuff from Jim Kasson
« Reply #21 on: April 30, 2017, 01:24:53 pm »

Hi Stephen,

I would suggest that you may need to read all 53 or so postings on the Fuji GFX and perhaps a few other articles explaining the measurements.

The term cy/PH is widely used, and it is the same as lp/PH (line pairs per picture height). Three parameters are not clear; the first one is the MTF level that the lp/PH figure relates to. If you read Jim's posting you learn that he measures at 50% MTF. That is also pretty standard in the industry. One issue is that he uses Lightroom for raw conversion with sharpening "disabled", but it still seems that Lightroom applies some sharpening.
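To make "measures at 50% MTF" concrete: MTF50 is simply the spatial frequency at which the measured MTF curve falls to 0.5. A minimal sketch of reading it off a curve by linear interpolation (the frequency grid and MTF values below are invented placeholders, not Jim's data):

Code:
import numpy as np

def mtf50(freq, mtf):
    """Return the lowest frequency at which the MTF curve falls to 0.5,
    using linear interpolation between the two bracketing samples."""
    freq = np.asarray(freq, dtype=float)
    mtf = np.asarray(mtf, dtype=float)
    below = np.where(mtf < 0.5)[0]
    if below.size == 0:
        return None          # curve never drops to 0.5 in the measured range
    i = below[0]
    if i == 0:
        return freq[0]       # already below 0.5 at the first sample
    f0, f1 = freq[i - 1], freq[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)

# Invented placeholder curve, frequency in cycles/pixel:
f = np.linspace(0, 0.5, 11)
m = [1.00, 0.95, 0.88, 0.78, 0.66, 0.54, 0.43, 0.33, 0.25, 0.18, 0.13]
print(mtf50(f, m))           # ~0.27 cycles/pixel for this made-up curve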

I agree that Jim has probably been in enough of a hurry to forget to label some of the horizontal axes. Mostly the scale is the movement of the Stackshot rail used for focus bracketing, in cm.

You are right that there are sample variations. What Jim's data indicates, though, is that there are some systematic problems. It seems that neither the 120 macro nor the 63/2.8 can match manual focus at some apertures.

The way it is, we simply don't have any better data than Jim's, as no one else has reported similar measurements on the Fuji GFX. Finding and disclosing issues is a positive thing. It gives Fuji an incentive to fix those things, and it may help GFX users make the best use of their equipment until Fuji has fixed those issues.

Best regards
Erik

Thanks, Erik. I'm just about to get on a long biz-class flight to China, so that will give me time to go back and read Jim's postings. I'm interested to learn more about the measurement system and the measurements Jim is using.

Yes, I know there are sample variations; no two products are absolutely identical; that is a fact of manufacturing in the real world, and Jim's study is a good beginning here. It's important to mention, though, that the reason we need statistically valid analyses is precisely because there *is* sample variation. The statistical analysis allows us to make practically meaningful inferences from a limited data set because it provides us with power (a.k.a. 1 - beta) to distinguish what is *real* from noise or sample-to-sample variability.
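To put a number on that, here is a back-of-the-envelope Monte Carlo sketch (every figure below is invented for illustration; none of it is Jim's data). It estimates how often a two-sample t-test would detect a given true MTF50 difference between two lenses, given copy-to-copy scatter and n samples per lens.

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_power(true_diff, sigma, n_per_group, alpha=0.05, trials=5000):
    """Monte Carlo estimate of power (1 - beta): the probability that a
    Welch two-sample t-test flags a true mean MTF50 difference of
    `true_diff` as significant, when each copy's measurement scatters
    with standard deviation `sigma` and each lens is sampled
    `n_per_group` times. Illustrative numbers only."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(1500.0, sigma, n_per_group)              # lens A copies
        b = rng.normal(1500.0 + true_diff, sigma, n_per_group)  # lens B copies
        _, p = stats.ttest_ind(a, b, equal_var=False)
        hits += p < alpha
    return hits / trials

# A 75 cy/PH true difference with 50 cy/PH copy-to-copy scatter:
print(detection_power(true_diff=75, sigma=50, n_per_group=3))   # modest power
print(detection_power(true_diff=75, sigma=50, n_per_group=10))  # much higher power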

Stephen Scharf

  • Full Member
  • ***
  • Offline
  • Posts: 168
Re: Some good stuff from Jim Kasson
« Reply #22 on: April 30, 2017, 01:45:34 pm »

Well, this is indeed a refreshing change of pace. My work is usually criticized for being overly quantitative, and for relying on numbers and graphs when a simple picture would make the point.

That is of course a valid point. Except for Roger Cicala’s excellent work, I don’t know where you’re going to go for larger sample sets. Right here on LuLa, the sample size is usually (always?) one; are you chastising the people who charge you to read their tests for that? I do look for unreasonable results, and sometimes obtain another sample if I get them. I also check lenses for decentering and focus plane tilt, two indicators of improper assembly.
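For readers who want to run that kind of sanity check themselves, here is a crude, hypothetical screen (not the exact procedure described above): compare MTF50 readings from the four corners of the frame at the same focus setting; a large left/right or top/bottom imbalance is a hint of decentering or tilt worth re-testing.

Code:
def corner_asymmetry(mtf50_corners):
    """Crude decentering/tilt screen from MTF50 readings (any consistent unit)
    taken in the four corners of the frame at the same focus setting.
    `mtf50_corners` needs keys 'top_left', 'top_right', 'bottom_left',
    'bottom_right'. Returns relative left/right and top/bottom imbalance;
    large values suggest the copy deserves a closer look."""
    tl, tr = mtf50_corners["top_left"], mtf50_corners["top_right"]
    bl, br = mtf50_corners["bottom_left"], mtf50_corners["bottom_right"]
    left, right = (tl + bl) / 2, (tr + br) / 2
    top, bottom = (tl + tr) / 2, (bl + br) / 2
    mean = (tl + tr + bl + br) / 4
    return {
        "left_right_imbalance": abs(left - right) / mean,
        "top_bottom_imbalance": abs(top - bottom) / mean,
    }

# Invented readings in cy/PH, for illustration only:
print(corner_asymmetry({"top_left": 1450, "top_right": 1210,
                        "bottom_left": 1430, "bottom_right": 1190}))
# A ~18% left/right imbalance like this would prompt a re-test or another sample.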

My blog posts are not intended to be peer-reviewed scientific publications. I don’t have the time or inclination to test to those standards, nor would my readers have the patience to deal with writings that met the standards of scientific publications. All I am doing is applying what I call “kitchen optics” – tests that almost any reader could perform for herself, given the time and a modicum of equipment – to cameras and lenses, hoping to get insights that go beyond the usual “here are the pictures I took with the NiCanOrama QRZ – 1066, and here’s what I think of them” that most everybody else is doing.

The reason Roger Cicala uses a larger sample set is so that one can draw practically meaningful inferences from a set of data.

In the case of the graphs that Erik posted, the equipment required is a razor blade, a light source, a focusing rail, MTF Mapper and/or Imatest, and Excel. As in all my reports, I explain exactly how a reader who wishes to reproduce my results can go about it, either in the post itself, or by reference to an earlier post.

Measuring MTF50 in cycles/picture height has a long history in digital photography. Try the Imatest site for some background. If you want the paper that introduced most of us to slanted edge MTF testing, it’s here:

http://imagescienceassociates.com/mm5/pubs/26pics2000burns.pdf

If you want the Matlab demonstration code, it’s here:

http://losburns.com/imaging/software/SFRedge/index.htm
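For readers without Matlab, here is a heavily stripped-down sketch of the core idea behind slanted-edge tools like MTF Mapper and Burns’ code. It assumes a perfectly vertical edge and skips the slant-based super-sampling, noise handling, and other refinements the real tools have, so treat it as illustrative only.

Code:
import numpy as np

def edge_to_mtf(roi):
    """Very simplified edge SFR: average a patch containing a vertical
    dark-to-light edge into an edge spread function (ESF), differentiate
    to get the line spread function (LSF), and take the FFT magnitude to
    get the MTF. Real tools use a slanted edge to super-sample the ESF;
    this sketch skips that, so it is pixel-quantized."""
    esf = roi.mean(axis=0)            # average the rows -> ESF across the edge
    lsf = np.diff(esf)                # derivative of the ESF -> LSF
    lsf = lsf * np.hanning(lsf.size)  # mild window to tame truncation effects
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                     # normalize so MTF(0) = 1
    freq = np.fft.rfftfreq(lsf.size)  # frequency axis in cycles/pixel
    return freq, mtf

# Synthetic 32 x 64 patch: a slightly blurred vertical edge (no real image needed).
x = np.arange(64)
edge = 1.0 / (1.0 + np.exp(-(x - 32) / 2.0))
roi = np.tile(edge, (32, 1))
freq, mtf = edge_to_mtf(roi)
print(freq[:4])
print(mtf[:4])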

MTF50 is a well-known sharpness metric. For a discussion of it and why it’s appropriate, look at Jack Hogan’s explanation:

http://www.strollswithmydog.com/mtf50-perceived-sharpness/

Erik pulled the graphs from some of my blog posts. If you read the posts, the axes are explained. In the MTF50 vs subject distance tests, the units are cm, with 0 arbitrary.

Thanks, I'll check it out.

I don’t see a histogram in anything that Erik posted. Thank you for the statistics lesson, though.

The last two diagrams are histograms, which is why they require error bars to draw any statistically valid inference from them.

Again, my blog posts are not scientific papers. The results are not statistically significant, to be sure. However, I think they form a useful addendum to the pretty pictures that are the alternative. To my knowledge, no one, not even Roger, is testing cameras and lenses and reporting results to the general public in the way you want them tested and reported.

Correct, but, Jim, as you well know, this is why we do science the way we do: so we know what the truth is. The reason scientific papers require results to be statistically significant is so that valid inferences can be made from a (limited) data set about accuracy (i.e., the truth). Without that, there is no way to determine that the results obtained are not due to noise, sampling or random error, or ascertainment bias. This is why, as I stated in my first post, while I find your results to be of interest, I find it difficult to draw practically significant conclusions from them.
« Last Edit: April 30, 2017, 01:52:18 pm by Stephen Scharf »

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: Some good stuff from Jim Kasson
« Reply #23 on: April 30, 2017, 02:50:24 pm »


The last two diagrams are histograms, which is why they require error bars to draw any statistically valid inference from them.


They are not histograms; they are bar charts. In a histogram, the horizontal axis is outcomes, and the vertical axis is the count of each of the outcomes.
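For readers who want the distinction spelled out, a toy illustration in Python (invented numbers, not data from this thread):

Code:
import numpy as np

rng = np.random.default_rng(1)

# Histogram: the horizontal axis is outcomes (here, MTF50 bins) and the
# vertical axis is how many measurements fell into each bin.
measurements = rng.normal(1500, 60, 200)   # invented MTF50 readings, cy/PH
counts, bin_edges = np.histogram(measurements, bins=10)
print("histogram counts per bin:", counts)

# Bar chart: the horizontal axis is categories (here, lenses) and each bar's
# height is a measured value, not a count.
bar_chart = {"63/2.8": 1480, "120/4 macro": 1555, "32-64/4": 1510}  # invented values
print("bar chart values:", bar_chart)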

Jim