Why should the density of the nozzles themselves have anything to do with it? I believe all Epson printers are capable of the full 2880 by 1440 DPI; it just means those with lower-density nozzles have to do more passes. The denser head is one reason the new printers are substantially faster than the older ones ... they can lay down twice as many dots in a single pass.
The relevant "DPI" is how the printer driver handles the data ... which for Epson appears to be 720/360, and for Canon 600/300.
Yup. In a word, yup :-) I am often glad for your voice of reason and straight-talking, Wayne!
The density of the nozzles has very little to do with it. The Epsons, for example, use variable dot sizes and extremely complex LUTs, in addition to various halftone processing, to determine an effective dot pattern to lay down. The pro-level devices can achieve a matrix of 2880x1440 as Wayne says, and the consumer-level devices can do 5760x1440 (though I dare anyone to pick the difference from 2880x1440 on the same device).
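For anyone curious what that halftone processing step amounts to, here's a toy single-bit error-diffusion screen in Python. The real Epson pipeline (variable dot sizes, multi-level LUTs, multiple inks) is vastly more elaborate, but the principle of trading tonal depth for spatial dot patterns is the same:

```python
import numpy as np

def floyd_steinberg(channel):
    """1-bit error-diffusion halftone of a float image in [0, 1]."""
    img = channel.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0  # drop or no drop
            out[y, x] = new
            err = old - new                   # push the quantisation error
            if x + 1 < w:                     # onto unvisited neighbours
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# A smooth grey ramp comes out as a pattern of discrete dots whose
# local density tracks the original tone.
print(floyd_steinberg(np.linspace(0, 1, 64).reshape(8, 8)))
```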
Since any given actual dot can only be one of a few colors, the screening process is far more complicated than just sending your pixels to the surface of the paper. Personally, I find I get terrific results if I send the native resolution to the printer and let the printer driver handle all of the sizing and screening together. It certainly simplifies the workflow. Side-by-side comparisons of prints I have interpolated in Photoshop to the magic "360" number with those I just send at native resolution are virtually identical, as long as I stay above 170-180 ppi.
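For anyone who wants to try that comparison themselves, here are the two workflows sketched with Pillow standing in for Photoshop's resampler (the file name and print size are made up for illustration):

```python
from PIL import Image

img = Image.open("photo.tif")      # hypothetical file
print_w_in, print_h_in = 12, 8     # hypothetical print size

# Workflow A: pre-interpolate to the "magic" 360 ppi in the editor.
resampled = img.resize((print_w_in * 360, print_h_in * 360), Image.LANCZOS)
resampled.save("print_360ppi.tif", dpi=(360, 360))

# Workflow B: tag the native pixels with whatever ppi they work out to
# at this size, and let the driver do sizing and screening together.
native_ppi = img.width / print_w_in
img.save("print_native.tif", dpi=(native_ppi, native_ppi))
```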
Exactly right again. There is no direct correlation between your image pixels and the individual dots laid down by the printer. It is only a combination of various dots, of various sizes, in various positions, viewed relative to, and in combination with, other dots on variable substrates, that provides the illusion of colour (is that a redundancy? ;p ) at a given point.
If you resample (particularly if you upres), you create data that does not exist to fill the gaps. The printer then attempts to render this non-existent data as accurately as it can. Whether that gives a better or worse result than the printer "filling the gap" itself, had you not upressed, will simply "depend". Sometimes it will, sometimes it won't, and most times you wouldn't pick it except in a direct comparison (and often not even then).
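You can see that invented data directly. Upres a two-pixel edge and the interpolator manufactures in-between values that were never captured:

```python
import numpy as np
from PIL import Image

# Two real pixels: one dark, one light.
edge = Image.fromarray(np.array([[30, 220]], dtype=np.uint8))

print(list(edge.resize((8, 1), Image.NEAREST).getdata()))
# [30, 30, 30, 30, 220, 220, 220, 220] -- only the real values, repeated
print(list(edge.resize((8, 1), Image.BICUBIC).getdata()))
# intermediate (and even overshot) values the interpolator invented
```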
It depends on all the factors involved, starting with the original image and running all the way through to the final physical printing: the inks and substrates involved, their method of deployment, and so on.
If you have a scene with very sharp diagonal lines in high contrast to their surrounds, then you're far more likely to see the advantage of higher-resolution images. In fact, in that case, if you had the data I'd turn on Finest Detail in an Epson driver and send it 720 data. But in most other cases you have to ask, "Do I want the printer to do its best rendition of real and fake pixels, or do I just want it to do its best rendition of real pixels, even though there are fewer?" And only doing test prints will really tell you.
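The arithmetic for deciding whether you even have 720's worth of data is simple. A sketch, assuming a 24 MP frame with roughly a 6000-pixel long edge:

```python
def effective_ppi(pixels_long_edge, print_long_edge_in):
    """Pixels per inch you actually have at a given print size."""
    return pixels_long_edge / print_long_edge_in

# A 24 MP frame, ~6000 px on the long edge:
for inches in (8, 16, 24, 40):
    print(f'{inches}" long edge -> {effective_ppi(6000, inches):.0f} ppi')
# 8" -> 750, 16" -> 375, 24" -> 250, 40" -> 150: only the smallest
# print clears 720, and 40" sits below the 170-180 floor mentioned above.
```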
I have one image of the pages of a book, torn and tattered. The crop from the original 6 MP isn't that large, so it's a relatively low-res image. Upressing and printing even at A4 shows pixelation from the upressing, because the text on the pages is a sticking point. Printing natively at the same size renders a far more acceptable and desirable image, even though it's softer. Yes, it's hiding a lack of resolution in the softness, but that's still a better result than a sharp image clearly showing fake pixels.
(Of course this really isn't the OP's question ... I think he was really asking how 300 dpi ended up becoming a standard of some type for minimum resolution, which I have read about at some point in the past, but I cannot pull the answer or the source out of my aging brain cells.)
It relates to LPI numbers from pre-press, the physical capability of earlier printing devices such as laser printers, its being an easy number to use, and the fact that it sits beyond the normal ability of human vision to resolve line pairs (so images looked "solid"). All in all, it remains a good number, but it's not a holy grail, and oftentimes effort, time, money and quality are lost in chasing it.
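The human-vision part is easy to put numbers on. Using the common one-arcminute acuity rule of thumb (a rough figure, not a hard perceptual limit):

```python
import math

def ppi_for_acuity(viewing_distance_in, arcmin=1.0):
    """PPI at which pixel pitch matches ~1 arcminute of visual acuity."""
    pitch_in = viewing_distance_in * math.tan(math.radians(arcmin / 60))
    return 1 / pitch_in

for d in (10, 12, 18, 36):
    print(f'{d}" -> {ppi_for_acuity(d):.0f} ppi')
# 12" (a print held in the hand) lands near 290 ppi, which is roughly
# where the 300 figure comes from; at 36" about a third of that is plenty.
```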