The lines in the printout converge into ever finer detail; nothing is going to change that. You can see the lines get finer to the point where they blur together. The target is distance-invariant: the pixels are not going to resolve more detail by re-sampling.
While that is correct, upsampling does lose microdetail contrast. Therefore the target contrast may be somewhat lower (I can't tell whether, or by how much) at the finest level of detail, making it easier for the camera to avoid developing aliasing. Because it's not possible to compare with the original, I can't judge how much influence it has. All I know (from other PM exchanges) is that a lower-resolution target can influence the outcome of the slanted-edge score, which is of course much more sensitive than the visual star target. It was even possible to detect a difference in blur sigma between the left and right sides of the target, because it was shot at a 1 degree angle off perpendicular.
If you don't trust the laser printout, look at the fine lines in the cracked paint. I put the target on that building in the park for exactly that reason: the random fine lines of the paint cracks.
Well, that's another issue that's often overlooked: diffraction kills the lowest-contrast microdetail first, before it kills the higher-contrast microdetail. One may be able to restore a certain level of detail, but some is already lost. From the looks of it, your camera + lens + raw converter combination seems to do a very good job and strike a nice balance. Good for you.
You do understand that RT has built-in deconvolution? In the unsharpened images it is not turned on. In the one sharpened image I attached later, it is using R-L deconvolution.
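For anyone unfamiliar with what R-L (Richardson-Lucy) deconvolution actually does: it iteratively re-blurs the current estimate with an assumed PSF and corrects it by the ratio to the observed image. Below is a minimal 1-D sketch of the idea; it is illustrative only, not RawTherapee's actual implementation, and the Gaussian PSF and iteration count are assumptions:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Minimal 1-D Richardson-Lucy deconvolution (illustrative sketch only)."""
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]           # correction step uses the flipped PSF
    eps = 1e-12                      # guard against division by zero
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (reblurred + eps)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Gaussian PSF (an assumption; a real lens/sensor PSF is more complex)
x = np.arange(-5, 6)
sigma = 1.5
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()

edge = np.repeat([0.1, 0.9], 50).astype(float)  # a step edge
blurred = np.convolve(edge, psf, mode="same")   # what the "camera" records
restored = richardson_lucy(blurred, psf)        # contrast near the edge comes back
```

The key point for this discussion: R-L restores contrast that the (assumed) PSF removed, but when the PSF guess is wrong or the iteration count is pushed too far, it starts manufacturing structure, which connects to the "invented detail" point below.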
Not only do I understand it, I pointed it out to a lot of folks who didn't know that.
It is also using "micro-contrast" and "contrast by detail", which I assume are wavelet-based. You can see the difference at the edge of the target where the page is white: in the unsharpened files the image just goes white off the pattern, while in the sharpened file you see the pattern bleed into the white area. That is invented detail.
Yes, the amount of control is super useful, and effective. Not something for those who get intimidated easily by such features though.
Any of these methods that use "variable gradients" are making predictions. Roger Clark talked about invented detail in digital years back, when he compared drum-scanned Velvia against digital. He showed zoomed crops of reeds where some of the apparent digital detail did not exist in the higher-resolution drum scan. I am not talking about artifacts; I am talking about a few ghost stalks of reeds.
They are artifacts though, and Roger didn't say they weren't (it's mentioned at the bottom of this section). The demosaicing algorithms back then were not as advanced as what we have available today.
I believe diffraction is no longer an issue. It obviously has not gone away; it is being predicted out by gradient-type debayering along with all the sharpening-type routines. By definition, debayering has to figure out how to fill holes. The best routines are filling the diffraction blur hole.
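The "fill the hole along the flatter direction" idea behind gradient-type debayering can be shown in a toy form. This is only a sketch of the principle; real algorithms like AMaZE are far more sophisticated (they also use colour-difference channels, multiple directions, and anti-aliasing logic):

```python
import numpy as np

def green_at(cfa, y, x):
    """Gradient-directed green interpolation at a red/blue CFA site.
    Toy sketch: compare horizontal vs vertical green gradients and
    interpolate along the direction where the scene is flatter."""
    dh = abs(cfa[y, x - 1] - cfa[y, x + 1])  # horizontal green gradient
    dv = abs(cfa[y - 1, x] - cfa[y + 1, x])  # vertical green gradient
    if dh < dv:   # flatter horizontally: interpolate along the row
        return (cfa[y, x - 1] + cfa[y, x + 1]) / 2.0
    if dv < dh:   # flatter vertically: interpolate along the column
        return (cfa[y - 1, x] + cfa[y + 1, x]) / 2.0
    return (cfa[y, x - 1] + cfa[y, x + 1] + cfa[y - 1, x] + cfa[y + 1, x]) / 4.0

# A red site sitting on a vertical edge: green differs left/right (0.2 vs 0.8)
# but not up/down (0.5 vs 0.5), so the routine interpolates vertically and
# avoids smearing detail across the edge.
cfa = np.array([[0.0, 0.5, 0.0],
                [0.2, 0.0, 0.8],   # centre (1, 1) is the red site to fill
                [0.0, 0.5, 0.0]])
```

Because the direction is chosen from local gradients, the output really is a prediction, which is exactly why these routines can reconstruct plausible detail through diffraction blur, and also why they can occasionally invent it.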
I wouldn't generalize from one specific situation (a very sharp lens, a camera sensor with a mild AA filter and a not-too-small 5.97 micron sensel pitch, a very effective demosaicing algorithm, a specific level of contrast) as if it were universally applicable.
What the star target teaches us is that for this combination of components, f/16 apparently still produces good visual detail, approaching the Nyquist limit. Deconvolution sharpening with a relatively small radius
can boost the signal-to-noise ratio, leaving even less contrast loss near the limiting resolution, which can help e.g. with producing large output. Even the aliasing seems to behave quite nicely, thanks to the AMaZE algorithm, so some of it may go unnoticed as false detail.
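A quick back-of-envelope check makes the f/16 claim plausible for this pitch. Assuming green light at 550 nm and an ideal diffraction-limited lens (both assumptions; real lenses have aberrations too), the diffraction cutoff at f/16 sits above the 5.97 µm sensor's Nyquist frequency, with roughly 15% MTF remaining at Nyquist, which is low contrast but not zero, and deconvolution can partially restore it:

```python
import math

# Assumed numbers: green light at 550 nm, ideal lens at f/16, 5.97 micron pitch
wavelength_mm = 550e-6   # 550 nm expressed in mm
f_number = 16
pitch_mm = 5.97e-3       # sensel pitch in mm

cutoff = 1.0 / (wavelength_mm * f_number)  # diffraction cutoff, ~114 cycles/mm
nyquist = 1.0 / (2.0 * pitch_mm)           # sensor Nyquist, ~84 cycles/mm

def diffraction_mtf(f, fc):
    """MTF of an ideal (aberration-free) circular aperture at frequency f."""
    s = f / fc
    if s >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

mtf_at_nyquist = diffraction_mtf(nyquist, cutoff)  # roughly 0.15
```

With a smaller pitch (or a longer wavelength, or a weaker lens) the cutoff drops below Nyquist and no amount of sharpening brings that detail back, which is why this result shouldn't be generalized beyond this combination.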
It looks like a very fortunate combination, congratulations. Diffraction is less of a consideration when you use this lens, so you can focus on other elements that make the shot.