Maybe I wasn't entirely clear. When I refer to a printer's "native resolution," that has nothing to do with nozzles or droplets or diffusion algorithms. I'm talking about the PPI that the print driver (i.e., the software) uses.
See, that's where ya got it wrong...the driver TAKES the image data at its actual dimensions and then runs what is, in effect, an error diffusion process to determine where droplets will and won't be. The driver (at least on the Epson side, and this is coming from a guy WAY smarter than me, Parker Plaisted) takes whatever data it's given and runs it through a sieve (the metaphor for the error diffusion); depending on the resolution setting in the driver, the sieve gets larger or smaller openings to create the stochastic halftone, which is then broken down into a droplet map that tells the print head when to squirt some ink and when not to. (There's a bare-bones sketch of the idea just below.)
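To make the sieve metaphor concrete, here's a minimal sketch of classic Floyd-Steinberg error diffusion, the textbook cousin of whatever proprietary halftoning Epson's driver actually runs. Take it as an illustration of the idea, not Epson's algorithm: a grayscale "ink coverage" image goes in, an on/off droplet map comes out.

```python
# Classic Floyd-Steinberg error diffusion -- an illustration of stochastic
# halftoning, NOT Epson's proprietary algorithm. Input: ink coverage per
# cell (0.0 = no ink, 1.0 = full ink). Output: 0/1 map (1 = fire a droplet).
def halftone(ink):
    h, w = len(ink), len(ink[0])
    buf = [row[:] for row in ink]             # working copy; errors accumulate here
    dots = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 1.0 if old >= 0.5 else 0.0  # threshold: droplet or no droplet
            dots[y][x] = int(new)
            err = old - new                   # quantization error to push onward
            # Spread the error to not-yet-visited neighbors (the 7/3/5/1 weights)
            if x + 1 < w:
                buf[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    buf[y + 1][x - 1] += err * 3 / 16
                buf[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    buf[y + 1][x + 1] += err * 1 / 16
    return dots

# A flat 30% gray patch comes out with roughly 30% of the cells firing:
patch = [[0.3] * 10 for _ in range(6)]
for row in halftone(patch):
    print("".join("X" if d else "." for d in row))
```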
In the case of Epson, there are (and I'm remembering this from a while ago, so it may have changed with newer printers) at least 3 distinct droplet sizes that the print head can create, and they are measured in picoliters, which cannot easily be translated into a physical size. A picoliter is a trillionth (one millionth of a millionth, or 10 to the -12th power) of a liter, but due to differing densities and volumes, that simply does not translate to actual dots per inch.
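You can at least get a feel for the scale, though. Treating a droplet as a perfect sphere in flight (an idealization, and the picoliter values below are my hypothetical examples, not Epson's spec sheet), the diameter falls straight out of the volume:

```python
import math

# Back-of-envelope only: an idealized spherical droplet in flight. The dot it
# leaves on paper is bigger and media-dependent, which is exactly why
# picoliters don't map cleanly onto dots per inch.
def droplet_diameter_um(picoliters):
    """Idealized in-flight diameter, in micrometers."""
    volume_um3 = picoliters * 1e3   # 1 pL = 10^-12 L = 1,000 cubic micrometers
    radius = (3 * volume_um3 / (4 * math.pi)) ** (1 / 3)
    return 2 * radius

for pl in (1.5, 3.5, 6.0):          # hypothetical droplet sizes, in picoliters
    print(f"{pl} pL -> ~{droplet_diameter_um(pl):.0f} um across, before it hits paper")
```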
The best you can do is talk about "relative resolutions," not "absolute resolutions," when it comes to error-diffusion halftoning that is then printed as droplets.
Yes, there are settings in Windows (and Mac) print drivers that "announce" their "resolution" to the system...but that's more to classify them as low-resolution vs. high-resolution devices, and it should not be construed as an actual, defined, absolute resolution.
Bruce Fraser, in his Real World Image Sharpening, talked about the kind of resolution you need to NOT see any actual dots in a print...but here again, the problem is that human vision isn't measured in PPI or DPI. Human vision is measured in minutes of arc (at about 1.5 minutes of arc per line pair), and that doesn't translate directly to dots on a page, so what Bruce did was work out what human vision is capable of resolving at various distances (distance matters because of the arc). Bruce figured that a person with 20/20/20 vision in good light could resolve about 355 dots/inch at a distance of 12 inches. Note, the 20/20/20 is a Bruce joke; it equates to a 20 year old with 20/20 vision. Close focus gets poorer the older you get.
So, hold a print about 12 inches away...if the print has ~355 DPI, you won't "SEE THE DOTS"...hold it closer and you will. Which is why it's pretty cool that if you are making small prints (where, due to viewing distances, you NEED more rez) you can resize without resampling and get smaller, higher-resolution prints. On the other hand, if you're making large prints that will be viewed from beyond 12 inches, the required resolution falls off quickly.
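Bruce's arithmetic is easy enough to reproduce. Here's a sketch using the ~1.5 arcminutes-per-line-pair figure from above (two dots per line pair); it lands near, though not exactly on, his 355 dpi at 12 inches, so he evidently used a slightly different acuity constant. The distances are just sample values:

```python
import math

# Rough reconstruction of the visual-acuity math -- not Bruce Fraser's exact
# formula. Assumes ~1.5 arcminutes per line pair and 2 dots per line pair.
ARCMIN_PER_LINE_PAIR = 1.5

def resolvable_dpi(distance_inches):
    """Max dots/inch the eye can resolve at a given viewing distance."""
    theta = math.radians(ARCMIN_PER_LINE_PAIR / 60)  # line-pair angle, radians
    line_pair_width = distance_inches * theta         # small-angle approximation
    return 2 / line_pair_width                        # two dots per line pair

for d in (6, 12, 24, 60, 120):  # inches: nose-length close out to across the room
    print(f"{d:3d} in -> ~{resolvable_dpi(d):.0f} dpi needed to hide the dots")
```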
Of course, Bruce also has been quoted as saying that the intended viewing distance of ANY print made by a photographer is limited only by the length of their nose...(or the quality of their reading glasses).
What does all this mean?
Well, the bottom line is: where possible, always try to maintain the "native resolution" of your file and resize, without resampling, to get the SIZE of the print image you want, letting the PPI resolution fall where it will. Once you get the image SIZE figured out (which is what then, and only then, gives you the final pixels per inch), you sharpen for that pixel density. This works well when the native resolution lands between 180 and 480 PPI.
Need to make a small print? Resize, without resampling, to the smaller size, then sharpen for the output.
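Since resizing without resampling never creates or throws away pixels, the final PPI is just pixels divided by inches. A quick sketch (the 3000-pixel long edge is a made-up example):

```python
# Resize-without-resample is pure relabeling: the pixel count stays fixed and
# the PPI falls out of pixels / inches. The file size here is hypothetical.
def ppi_for_print(pixels_long_edge, print_inches_long_edge):
    return pixels_long_edge / print_inches_long_edge

pixels = 3000                      # long edge of a hypothetical file
for inches in (6, 8, 10, 12, 16):  # candidate print sizes, long edge
    print(f"{inches:2d} in print -> {ppi_for_print(pixels, inches):.0f} PPI")
```

Note how the smaller prints naturally come out at higher PPI, which is exactly the small-print benefit described above, and most of these land inside the 180-480 PPI window.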
In my case, with the typical MP (megapixels, yet ANOTHER unrelated measurement scale) that I shoot and the print sizes I generally want, 360 PPI is just about optimal...which is a long way of explaining why I chose to set the image resolution to 360 PPI (without much concern over the image size).
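For illustration, here's the same arithmetic run the other way, from pixel count to the biggest print that still carries 360 PPI (the camera resolutions are just sample values):

```python
# Given a pixel count, how big can the print be at a fixed 360 PPI? The
# camera resolutions below are hypothetical examples.
def print_size_at_ppi(px_w, px_h, ppi=360):
    return px_w / ppi, px_h / ppi

for px_w, px_h in ((3008, 2000), (4256, 2832), (6000, 4000)):
    w, h = print_size_at_ppi(px_w, px_h)
    print(f"{px_w * px_h / 1e6:4.1f} MP -> {w:4.1f} x {h:4.1f} in at 360 PPI")
```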