Missing the point.
Andrew is going to *love* hearing from me, but we're missing some logical points here by arguing about quasi-applied post-processing instead of dealing with the acquisition stage. Given a properly profiled scan from a device of known good caliber, only a fool would argue there's no advantage to the better data you get at 16 bits vs. 8. If I'm paying somebody $50 for a drum scan of a 6x7 chrome, he'd *better* deliver something with more than 8 bits per channel.
Color me wrong (sorry for the bad pun), but how many desktop scanners out there scan at *legitimate* 16-bit greyscale (intensity) levels? To many desktop scanners, 'greyscale' really means:
(1) scan in color, sloppily;
(2) convert to monochrome via a process worse than what you could accomplish in Photoshop;
(3) invent the extra bits to fit Photoshop's preferred 16-bit image space.
Essentially it's not much different from using a dSLR with the absurd 'monochrome' feature and then converting to a 16-bit working space when there weren't 16 bits of data per channel to begin with.
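The "invented bits" point is easy to show numerically. Here's a minimal sketch in NumPy (an illustration of the principle, not any scanner driver's actual code): padding 8-bit data into a 16-bit container changes the range of the numbers but creates no new tonal levels.

```python
import numpy as np

# Genuine 8-bit greyscale data: at most 256 distinct levels.
eight_bit = np.arange(256, dtype=np.uint8)

# "Promote" it to 16-bit the way a sloppy driver might
# (scale 0-255 up to 0-65535 by multiplying by 257).
promoted = eight_bit.astype(np.uint16) * 257

# A native 16-bit capture could hold up to 65536 levels;
# the promoted file still holds only 256 of them.
print(len(np.unique(promoted)))   # 256
```

The file now *looks* like 16-bit data to Photoshop, but every extra level between the original 256 is empty.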
So the more important question is the difference between 8-bit and 16-bit *if* you aren't scanning at a native 16 bits per channel. Obviously, if you have a device that can acquire 16 bits of greyscale (or color) natively, then sticking to a 16/48-bit file, at least for initial editing, is the obvious choice, and there is no argument to the contrary. Andrew is dead right on this.
However, 48-bit desktop scanners aren't the norm, and many that claim 48-bit capability essentially hack it from cheaper A/D components. While a 16-bit greyscale file can certainly take more editing than an 8-bit one, isn't that really a question about processing rather than the scanner? Epson flatbed scanners now seem able to handle 48-bit scans; many Nikon scanners, however, scan at less than 48 bits. So please convince me that the Epson makes a better scan simply because of a larger bit depth that is only pseudo-native in the first place.
Also, a film scanner *is* a digital camera, and all the analogies apply. But if my dSLR can't capture 48 bits, and my film scanner can't capture 48 bits, why is it more important for the film scanner to produce a 16/48-bit file even when it can't fill one natively?
Also, I've been scanning junk film on good scanners and 'winging' my own profiles for too many years NOT to emphasize that quality of acquisition during scanning trumps all other considerations. Proper mapping of the film's density range via a good profile will yield a more workable scan at 8 bits per channel than a 16-bit-per-channel scan from a badly nerfed black point. To return to the dSLR analogy: a RAW capture gives you more data to work with, but only a bit more wiggle room at the exposure extremes (over/under). If you can't expose properly in the first place, RAW isn't going to save your skin for long.
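The black-point problem can be sketched the same way (toy numbers, purely illustrative): once shadow detail is clipped at acquisition, no amount of bit depth in the file gets it back.

```python
import numpy as np

# A smooth 16-bit tonal ramp standing in for shadow detail on the film.
shadows = np.linspace(0, 8000, 100).astype(np.uint16)

# A badly set black point crushes everything below a threshold at scan time.
clipped = np.maximum(shadows, 4000)

# The distinct levels below the threshold are gone for good; editing
# the 16-bit file later can't reinvent them.
print(len(np.unique(shadows)), len(np.unique(clipped)))
```

The clipped file is still "16-bit", but roughly half the tonal information it was supposed to carry no longer exists, which is exactly why profiling and black/white-point setting matter more than the container.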
My advice on this hasn't changed in over a decade, and it doesn't change now. Get a good film profile, and/or learn to set white/black points properly. *Then* scan at the maximum data level of the scanner (24, 36, or 48 bits), with a *color* film profile, into a 16-bit space in Photoshop. Convert the resulting file to monochrome in PS and *then* apply your tonal curve. Unless Andrew can convince me otherwise, I don't see the benefit of debating 8- vs. 16-bit greyscale when you should be scanning with *all* the color bits and capturing all the data the scanner is capable of.