From what I've read, the primary reason for 32-bit is to minimize rounding errors in the calculations Photoshop performs to create an HDR image. As an exaggerated example, if you use 3.1 as your value for pi and some calculation requires you to square it, you get 9.61. But if you use the more precise 3.1415926, squaring gives 9.8696040. That 0.26 difference, compounded over repeated calculations, could accumulate into a visible discrepancy. Even for fairly radical transforms, such as an extreme curve, 16-bit precision does not introduce visible problems, but apparently HDR pushes the envelope far enough that even 16-bit precision can be insufficient.
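You can watch this accumulation happen directly. Here's a small Python sketch (not anything Photoshop actually does — just the same idea): apply the same tiny gain a thousand times at half precision (16-bit float) and at double precision, and compare. Each low-precision step rounds a little, and the errors pile up into a measurable drift.

```python
import numpy as np

# Apply a small gain repeatedly at two different precisions.
gain = 1.001
x_lo = np.float16(0.5)   # low-precision accumulator
x_hi = np.float64(0.5)   # high-precision accumulator

for _ in range(1000):
    x_lo = np.float16(x_lo) * np.float16(gain)  # rounds every step
    x_hi = x_hi * gain

print(f"low precision:  {float(x_lo):.6f}")
print(f"high precision: {float(x_hi):.6f}")
print(f"accumulated drift: {abs(float(x_lo) - float(x_hi)):.6f}")
```

The two results disagree well before the sixth decimal place, even though no single multiplication introduced a visible error on its own.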

However, both monitors and printers are essentially 8-bit devices. Having performed the HDR merge at 32 bits, Photoshop now has as nearly distortion-free a result as possible. When you send that to the printer, Photoshop first derives the optimum 8-bit rendition of the colour of each pixel in your image. Remember that when we say 8 bits we actually mean 8 bits per each of the three RGB colour channels, which means 24 bits per pixel, which works out to over 16 million different colours. Apparently, under ideal conditions the human eye can distinguish differences at about 12 bits/channel*, but of course 8 bits/channel gets the job done.
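The bit-depth arithmetic is easy to check for yourself:

```python
# Distinct levels per channel and total colours per RGB pixel
# at a few common bit depths.
for bits in (8, 12, 16):
    levels = 2 ** bits       # values one channel can hold
    colours = levels ** 3    # three channels: R, G, B
    print(f"{bits}-bit/channel: {levels:>6} levels/channel, "
          f"{colours:,} colours/pixel")
```

At 8 bits/channel that's 256 levels per channel and 16,777,216 colours per pixel.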

What is the "bit quality" of paper? Is there a printer that can faithfully reproduce a 32-bit image?

Got me; but the answer has to do with the dot gain (ink "bleed", or spread) of a given paper together with the smallest dot (ink droplet size) a given printer can deposit. I assume 8 bits/channel is still a good match for what current papers and inks can do, given that printer drivers are still 8-bit.
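The 8-bit hand-off itself is simple to picture: each channel value gets scaled and rounded into the 0–255 range the driver expects. A minimal sketch (the pixel value here is made up, and a real driver also applies colour management, which this ignores):

```python
# Hypothetical 32-bit float channel value, linear 0..1 range.
hdr_value = 0.73519874

# Quantise to the 0..255 range an 8-bit driver works with...
eight_bit = round(hdr_value * 255)

# ...and see what value effectively reaches the printer.
recovered = eight_bit / 255
print(eight_bit, f"{recovered:.6f}",
      f"loss: {abs(hdr_value - recovered):.6f}")
```

The round trip loses at most half an 8-bit step per channel, which is exactly the kind of error that's invisible on paper but would compound if you did your editing at 8 bits instead of quantising only at the very end.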