I am interested in IMATEST precisely because it provides a quantitative result that could be compared against other like test results for MTF. Granted, the various test methodologies and test equipment (not to mention the variability between lens "copies" (read: serial numbers)) are likely to yield a range of results. But at least there are numbers to compare, to determine whether one's lens is "way out there" as a good or bad anomaly.
I'm afraid I'm going to bring some more bad news to your kitchen table!
Basically there is no even remotely feasible way to do MTF testing yourself in a manner that will let you compare against "other like test results," manufacturers' graphs, or even those on other sites. To do it, you'd need to invest in a test lab and use extremely precise controls and the exact same procedures and methods that the manufacturer/other site does.
What makes this difficult is that MTF testing isn't just a software package you throw up to generate numbers that match what Nikon or Canon or Zeiss say they are - there are many different methods, different wavelengths, different test targets - essentially you really have to become a Popular Photography-grade testing lab.
Then there's a whole laundry list of other issues that come into play, such as variability. For instance, if you buy Imatest, what are you going to do when you run it a couple of times, perhaps on different days after re-setting up your environment, and get different results? I've actually tried Imatest - and got different results even on multiple passes of the same test target - which showed me that a very scientific, careful, statistically controlled approach would be necessary. I came to the conclusion that it wasn't remotely worth the time, and even then the final numbers would only be useful for comparisons within MY environment on THAT day - no way you could reconfigure your environment the next day and compare numbers against that, and no way you could compare numbers against anyone else's tests. Not worth it, and basically almost impossible for most people to do.
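To give a feel for what "statistically controlled" means in practice, here's a little Python sketch of the repeatability check you'd want before trusting any single run. The MTF50 numbers are made up for illustration - in reality you'd paste in whatever your tool reports for the same lens, same target, same setup, re-run several times.

```python
# Rough sketch of a run-to-run repeatability check.
# The MTF50 values below are hypothetical, purely for illustration.
from statistics import mean, stdev

mtf50_runs = [0.312, 0.298, 0.321, 0.305, 0.289]  # hypothetical cycles/pixel, same setup each time

avg = mean(mtf50_runs)
sd = stdev(mtf50_runs)
cv = sd / avg * 100  # coefficient of variation, in percent

print(f"mean MTF50: {avg:.3f} cy/px, std dev: {sd:.3f}, CV: {cv:.1f}%")

# If the spread is already several percent with NOTHING changed between runs,
# a single number compared against a manufacturer's chart (different method,
# target, lighting) tells you very little.
```

If that spread is already a few percent when nothing has changed, you can see why comparing one of your numbers against someone else's chart is meaningless.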
So: let's go back to your premise for WHY you want to do quantitative analysis on the lenses - I'm going to guess that you ultimately don't want to get stuck with one that, well, sucks. That's fair enough. It's the quantitative part that is near impossible. But that doesn't mean we can't do tests that tell us what we basically need to know. I certainly do lens testing - quite thorough tests actually, involving targets that accurately represent the two things I photograph the most: a) studio work with strobe (90%+) and b) landscape at infinity in natural light - and I even run the tests 3 different times over 2 bodies to rule out my own error and any variability I may have introduced.
My theory on the type of tests that WE can do - tests that, while perhaps not as purely scientific as you would like, will certainly let us know if the lens we purchased has problems - is based on this:
Problems with lenses typically manifest themselves in the following areas.
a) lens element decentering - where the end result is a lens that is sharper on one side than the other
b) lens autofocus calibration issues - the back or front focus that we've all heard about
c) excessive dirt or junk in the lens (a tiny bit is normal, but at times you'll read of a lens that is just filthy)
Items 'a' and 'b' are by far the most common - I'd almost argue that item 'b' is the most common. What I've seen very, very, very little of is an actual sharpness deviation in a lens that is otherwise fine on items 'a' and 'b'. I own some 'duplicate' copies of lenses (even though I'm a Nikon shooter), and while there have been calibration differences, and at times slight decentering differences, there has never been a marked sharpness difference between the copies. So thinking this through, I came to the conclusion that I needed to isolate whether the lenses I've purchased exhibit either 'a' or 'b' above - and beyond that, I had to trust that if none of the typical defects were found, I was okay, since there is no way I can match a professional, full-fledged testing lab.
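The decentering check (item 'a') doesn't need lab-grade numbers, just a consistent sharpness reading for mirrored regions of the frame from a square-on shot of a flat target. Here's a rough sketch of the bookkeeping; the region names, values, and the 15% cutoff are all hypothetical - use whatever sharpness metric your tool gives you, as long as it's the same for every region.

```python
# Minimal sketch of a decentering check: compare each region of the frame
# with its mirror image and flag lopsided results. All numbers are hypothetical.

readings = {
    "left_edge": 0.262, "right_edge": 0.218,      # hypothetical cycles/pixel
    "top_left": 0.201, "top_right": 0.195,
    "bottom_left": 0.208, "bottom_right": 0.172,
}

pairs = [("left_edge", "right_edge"),
         ("top_left", "top_right"),
         ("bottom_left", "bottom_right")]

THRESHOLD = 0.15  # arbitrary cutoff: flag if one side is >15% softer than its mirror

for a, b in pairs:
    hi, lo = max(readings[a], readings[b]), min(readings[a], readings[b])
    imbalance = (hi - lo) / hi
    flag = "  <-- possible decentering" if imbalance > THRESHOLD else ""
    print(f"{a} vs {b}: {imbalance:.0%} difference{flag}")
```

You're not chasing an absolute number here - only a consistent, repeatable asymmetry between one side of the frame and the other.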
And in practice, I've found this works well. I run resolution target tests at studio distances with strobe (all tripod mounted, carefully focused in live view using sensor-based contrast-detection autofocus if the body has it, etc.), and then I repeat a series of tests outdoors with a subject that lets me evaluate side-to-side and corner-to-corner sharpness. In the end, I've identified a few cases of 'b' in my own lenses, and for those focus calibration issues I've found it easier to simply send them in for warranty service - they come back perfect.
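The front/back-focus check (item 'b') follows the same idea: treat the carefully focused live-view shots as the best-case reference and compare your ordinary autofocus shots against them. Another small sketch with made-up numbers:

```python
# Sketch of a focus-calibration comparison: shots of the same centre target,
# some focused via ordinary (phase-detect) AF, some focused carefully in live
# view with contrast detection. All sharpness values are hypothetical.
from statistics import mean

liveview = [0.315, 0.309, 0.318]   # hypothetical MTF50, careful live-view focus
phase_af = [0.262, 0.271, 0.255]   # hypothetical MTF50, regular AF

gap = (mean(liveview) - mean(phase_af)) / mean(liveview)
print(f"regular AF is {gap:.0%} softer than careful live-view focus")

# A small gap is normal shot-to-shot variation; a large, consistent gap at the
# same distance and aperture is the classic signature of front or back focus.
```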
Is this perfect? No - I'd love to be able to spend 30 minutes running a lens I buy through tests and see how closely it matches the manufacturer's standard, but most assuredly there is simply no easy, moderate, or even moderately difficult way of doing this. In this fast food/ATM society we all want perfect answers now, instantly, but in this case it's not possible. So my testing approach is what I do; it has weeded out some issues, which were then dealt with. After that, the lenses go into "production," so to speak, where I evaluate them again - I should note that more than once I've discovered lens characteristics in real shooting that never showed up in the lens tests - and then I just go out and take pictures without worrying about pixel peeping any further.
So my wordy thesis here can be distilled down to: your original goal isn't realistically achievable, so review what you're trying to do (discover if a lens you acquired is a stinker) and rework your tests to cover the basics - get 80% of what you need tested and leave the other 20% on the table. Unless you're a millionaire and can afford your own full-blown controlled testing environment for your four lenses, of course.
-m