[font color='#000000']Unlike film, where grain intrudes during enlargement long before resolution is exhausted, digital allows enlargement until resolution exhaustion itself stops us. Simply speaking, film is always grain limited for resolution purposes, while digital is always resolution limited.
Since film has well-known limits - such as 16 x 24 inches for fine-grain color - we naturally wonder what these limits are for a certain digital "resolution." Obviously, as mentioned here and in countless posts, there is a "fixed" print size possible from a native digital capture. It depends on the actual number of pixels in the horizontal and vertical file matrix (such as 1600 by 1200 for a 2 megapixel capture) and the print density desired (such as 300 dpi). By simply dividing the file matrix by the print density (1600/300 and 1200/300) we see, in this case, 5.33 by 4 inches. Regardless of the actual file size, the thing which becomes immediately apparent is that there is but "one" print size possible with a digital file at a fixed print density unless we either add or subtract pixels.
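The arithmetic above is simple enough to sketch in a few lines of Python (the function name and layout here are my own, just for illustration):

```python
def native_print_size(width_px, height_px, print_density_dpi):
    """Return the single 'native' print size (in inches) for a pixel
    matrix at a fixed print density, with no pixels added or removed."""
    return (width_px / print_density_dpi, height_px / print_density_dpi)

# The 2 megapixel example from the text: 1600 x 1200 printed at 300 dpi.
w_in, h_in = native_print_size(1600, 1200, 300)
print(round(w_in, 2), "x", round(h_in, 2), "inches")  # 5.33 x 4.0 inches
```

Any other target size at that same density forces the question the rest of this post is about: where do the extra (or missing) pixels come from?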
So, how does this relate to maximum print size? Any deviation downward from the native size possible at a given print density means we must remove pixels. Removing pixels, to a point, has little effect on print quality - that is, we are losing "some" detail, but as the print size decreases, our ability to actually see these differences with the naked eye decreases accordingly, so that in general we might say there is no appreciable effect on the print. But what about when we enlarge and add pixels?
Any time pixels are added, we must have some "pattern" upon which to build these new pixels. This is done by various software algorithms known collectively as "interpolation." Interpolation, simply put, is a process whereby available adjacent pixels are examined (remember, at this point pixels are simply numbers representing small areas of color) and, depending on their existing values, new pixels of like or intermediate values are actually "created." How well this works depends on the specifics of the algorithm and the degree to which the available pixels accurately represent "reality" in the capture. This, in turn, depends a great deal upon how many pixels were available to "define" the particular geographic area of the frame.
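To make the "intermediate values" idea concrete, here is a minimal bilinear-interpolation sketch in plain Python (real imaging software uses more sophisticated algorithms, but the principle of blending adjacent pixel values is the same):

```python
def bilinear_upscale(img, factor):
    """Upscale a grayscale image (a list of rows of numbers) by mapping
    each new pixel back into the source grid and blending the values of
    its nearest source neighbors by distance."""
    src_h, src_w = len(img), len(img[0])
    dst_h, dst_w = int(src_h * factor), int(src_w * factor)
    out = []
    for y in range(dst_h):
        sy = min(y / factor, src_h - 1)          # source row position
        y0 = int(sy)
        y1 = min(y0 + 1, src_h - 1)
        fy = sy - y0                              # vertical blend weight
        row = []
        for x in range(dst_w):
            sx = min(x / factor, src_w - 1)       # source column position
            x0 = int(sx)
            x1 = min(x0 + 1, src_w - 1)
            fx = sx - x0                          # horizontal blend weight
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 "capture" doubled to 4x4: the created pixels take on
# intermediate values between their neighbors.
small = [[0, 100],
         [100, 200]]
big = bilinear_upscale(small, 2)
```

Notice that the algorithm can only blend what is already there; it invents nothing. That is exactly why it reproduces "marker pixels" (discussed below) just as faithfully as it reproduces true detail.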
It quickly becomes apparent that the more pixels we have available in a frame-limited geography, the more likely it is that we have an accurate representation of the detail within the frame. So, two factors play into the equation now: first, the number of pixels, and second, the physical, geographical size of the environment contained within the frame.
Let's look at a couple of scenarios and see how this might work. First assume we have a small frame geometry - something like a head and shoulders portrait of a human. Considering the resolving limits of the lens, and even with a reasonably small file matrix, there isn't too much here to "define," so that even a reasonably low resolution capture such as 3 or 4 megapixels may be quite sufficient to give an accurate representation of the available detail. Next assume a large frame geometry, meaning we capture a relatively large portion of the environment with lots of detail - say something like a hyperfocal wide angle scene. This differs greatly because there is much, much more geography and detail to attempt to define within the frame. What happens when there are insufficient pixels to do this?
Briefly, let's digress and examine how our brains work. When an artist paints an oil of a mountain and forest scene, deception is used to "represent" detail. A few brush strokes represent a grassy field, leaves or pine needles, distant rocks, etc., and our brains have no problem at all "seeing" what the artist wishes us to see. Of course, if we are forced to view the painting very closely and under intense magnification, the deception is revealed and we are forced to see simply brush strokes in oil. But step away a bit and the brain again happily interprets these brush strokes as adequate representations of what they "stand for" in the real world.
The digital equivalent of these brush strokes is what I like to call "marker pixels." Marker pixels happen when there are insufficient pixels available to properly define the boundaries of detail in the environment; they denote position, color and rough shape, and when viewed from a distance, or when the physical print size is small enough, they pass for an adequate representation of the detail our brains "expect" to see. On the other hand, because of the very low noise in pro-level digital captures, we can enlarge enough to actually see them, and it's like looking at the oil painting up close under the magnifying glass. The "deception" is revealed, the "magic" is lost, and we see them for what they are.
Back to interpolation now. Interpolation algorithms very accurately reproduce what was actually captured. When the boundaries of true detail are adequately defined, as in the head and shoulders portrait, these algorithms faithfully reproduce that detail at nearly any print size, limited only by the print technology. On the other hand, when the interpolation algorithm encounters marker pixels, it faithfully duplicates them as well, so that eventually, somewhere in the enlargement process as we make larger and larger prints, these marker pixels cross the threshold of visual recognition and we are forced to see them for what they are. At this point we must either back up to a smaller print size, or view the print from a greater distance.
So what this all boils down to is that with digital, the amount of enlargement possible varies a great deal depending on the capture resolution, the actual detail in the subject and the frame geography. There is no "fixed" limit for a particular capture resolution; the practical limit varies widely depending on subject matter and field of view. The better interpolation algorithms quite accurately reproduce what they find at the pixel level. When they find true detail they can hold that pattern very well at huge print sizes. When they encounter marker pixels they very accurately reproduce this pseudo-detail, and when a certain point in the enlargement process is reached, we plainly see these electronic "brush strokes" and it simply doesn't look "right" to us.
It's an issue we really never had to deal with on film, thanks to the intrusion of grain, which limited our degree of enlargement well before these marker pixels became apparent. With digital there is as yet no handy dandy "rule of thumb" which tells us that for a particular electronic "resolution" we can be absolutely assured of a great image beyond the "uninterpolated" file size. Of course, we are rarely happy with the uninterpolated size, and it rarely falls within any "standardized" print size anyway.
The easiest way I've found is to crop an area of fine detail in the capture, then enlarge the crop in incremental percentages. Once a limit is reached where marker pixels are clearly visible, revert to the last percentage at which you couldn't see them; that percentage will produce the optimal print size for that specific capture.
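The visual judgment in that test has to stay with your own eyes, but the final arithmetic can be sketched like so (the function name and the +50% figure below are hypothetical examples of mine, not rules from the text):

```python
def optimal_print_size(width_px, height_px, print_density_dpi, max_clean_pct):
    """Given the largest enlargement percentage at which marker pixels
    were NOT yet visible in the test crop, return the corresponding
    full-frame print size in inches at the chosen print density."""
    scale = 1 + max_clean_pct / 100.0
    return (width_px * scale / print_density_dpi,
            height_px * scale / print_density_dpi)

# Suppose the test crop looked clean up to +50% but fell apart at +60%:
# the 1600 x 1200 capture then prints cleanly to about 8 x 6 inches at 300 dpi.
print(optimal_print_size(1600, 1200, 300, 50))  # (8.0, 6.0)
```

Run the crop test once per capture; different subjects (portrait vs. hyperfocal landscape) will hand you very different percentages, which is exactly the point of the post.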
Best regards,[/font]