So, I did some comparisons this morning (admittedly not very scientific and certainly not extensive) to see whether I could detect a difference between a couple of images that were up-ressed and sharpened in both 8 bit and 16 bit. The comparison was done on screen only because, as previously noted, my printer is down for the moment. The images were taken with a D3X and, although landscapes, each had some high and low frequency detail.
Each image was copied: one copy was kept at 16 bit while the other was converted to 8 bit. Both copies were then up-ressed to 12096 × 8064 (2× per side, i.e. 400% of the original pixel count) using Bicubic Smoother in CS5, then output sharpened with the PhotoKit plugin, using the same parameters on each image.
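For anyone who wants to replicate the test outside Photoshop, here is a minimal sketch of the procedure in Python. To be clear about the assumptions: OpenCV's plain bicubic stands in for Bicubic Smoother, a basic unsharp mask stands in for the PhotoKit sharpener, and "master.tif" is just a hypothetical file name.

```python
# Rough sketch of the test, not the actual Photoshop/PhotoKit workflow.
# Assumptions: plain bicubic stands in for Bicubic Smoother, a simple
# unsharp mask stands in for PhotoKit, and "master.tif" is hypothetical.
import cv2
import numpy as np

def upres_and_sharpen(img, scale=2.0, amount=0.5, sigma=1.0):
    # Bicubic up-res: 2x per side = 4x the pixel count ("400%").
    up = cv2.resize(img, None, fx=scale, fy=scale,
                    interpolation=cv2.INTER_CUBIC)
    # Unsharp mask: result = up + amount * (up - blurred).
    blurred = cv2.GaussianBlur(up, (0, 0), sigma)
    return cv2.addWeighted(up, 1 + amount, blurred, -amount, 0)

img16 = cv2.imread("master.tif", cv2.IMREAD_UNCHANGED)  # 16-bit TIFF
img8 = (img16 >> 8).astype(np.uint8)                    # 8-bit copy

out16 = upres_and_sharpen(img16)
out8 = upres_and_sharpen(img8)
```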
Through an on-screen visual comparison at up to 400% magnification, I was unable to detect any difference at all. (I used PWP to lay one image over the other and toggled back and forth between the two at exactly the same spots, moving around the image to areas of high and low contrast in both tone and colour.) I then used PWP's Absolute Difference Composite transform, and again no difference was detected; this transform will identify a difference in tone or colour between two images down to a single pixel (I'm not familiar enough with Photoshop to know whether it has a similar ability). I will also add that up-ressing and sharpening the 16-bit image took way, way longer on my machine than the 8-bit one.
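That absolute-difference check amounts to a per-pixel subtraction, and the same idea can be sketched in a few lines of numpy (continuing from the snippet above; this is not PWP's actual transform, and the 16-bit result is scaled down first so the two are compared at matching bit depth):

```python
# Same idea as PWP's Absolute Difference Composite: subtract the two
# renderings pixel by pixel and flag anything nonzero. The 16-bit result
# is scaled to 8 bits first so the comparison is at the same depth.
diff = np.abs(out8.astype(np.int16) - (out16 >> 8).astype(np.int16))
print("max per-channel difference:", diff.max())
print("pixels that differ at all:", np.count_nonzero(diff.max(axis=-1)))
```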
Taking the time factor into account, along with the absence of any difference in the on-screen comparison and Andrew Rodney's assertion that he can't detect a difference at the printed stage, it seems unnecessary to keep a file in 16 bit past the editing stage, at least based on my casual tests and with the state of the software/hardware currently available to me (I have a moderate setup). Of course, I will always retain the RAW file and a 16-bit master TIFF.
Once again, not extensive/scientific and I am open to any other input.