A much fuller treatment of the effects of noise in digital photography can be found in any decent textbook on astro imaging, such as Steve Howell's very good "Handbook of CCD Astronomy" ISBN 0-521-64834-3 http://www.amazon.co.uk/Astronomy-Cambridg...5075&sr=8-1
or the more practical-amateur-photography-orientated "Handbook of Astronomical Image Processing" by Richard Berry and James Burnell, ISBN 0-943396-82-4 http://www.amazon.co.uk/Handbook-Astronomi...5385&sr=1-2
None of this is even faintly new. Astronomers have been dealing with noise issues and digital sensors for decades, and their signal-to-noise ratios are vastly less favourable than ours. That means there are a hell of a lot of tricks in the arsenal for controlling noise that daylight photographers rarely, if ever, use.
For example, anyone intending to make serious astro images would certainly cool their sensor significantly below ambient temperature (which reduces the thermal noise), capture dark frames (which allow an averaged noise subtraction), bias frames (zero-exposure dark frames which let you separate readout noise from thermal noise), and flat fields to correct for uneven illumination by the optics, sensor contamination, and channel-to-channel variations in gain.
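The dark-subtraction and flat-fielding steps above can be sketched in a few lines. This is a toy simulation, not real camera data: every number (bias level, dark current, read noise, flat-field variation) is an assumption chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (16, 16)  # toy sensor; all levels below are illustrative, not real camera values

bias_level, dark_level, sky = 100.0, 10.0, 500.0
flat = 1.0 + 0.05 * rng.standard_normal(shape)       # pixel-to-pixel gain variation

def expose(light):
    """One simulated frame: bias offset + thermal signal + light, plus read noise."""
    signal = bias_level + dark_level + light * flat
    return signal + 2.0 * rng.standard_normal(shape)  # readout noise

# Averaging many dark frames beats their own noise down before subtraction.
master_dark = np.mean([expose(0.0) for _ in range(32)], axis=0)
master_flat = flat / flat.mean()                      # in practice, built from averaged flat frames

calibrated = (expose(sky) - master_dark) / master_flat
print(calibrated.mean())                              # close to the true sky level of 500
```

The point of the averaging is that a single dark frame would add its own noise into the subtraction; a stack of 32 contributes only 1/sqrt(32) of that.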
Sure, there will be a sqrt(N)/N behaviour from photon statistics, but there are an awful lot of other sources of noise which are also important, and they kick in in the shadows, not the bright areas.
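The sqrt(N)/N point is worth making concrete: since photon arrivals are Poisson-distributed, the fluctuation on a mean of N photons is sqrt(N), so the relative noise is 1/sqrt(N) and shrinks as the signal grows.

```python
import numpy as np

# Photon (shot) noise: relative noise = sqrt(N)/N = 1/sqrt(N),
# so SNR = N/sqrt(N) = sqrt(N) improves with more light.
for n in (16, 1600, 160000):
    rel_noise = 1 / np.sqrt(n)
    snr = n / np.sqrt(n)
    print(f"N = {n:>6}: relative noise = {rel_noise:.3%}, SNR = {snr:.0f}")
```

A shadow pixel collecting 16 photons fluctuates by 25%; a highlight pixel collecting 160,000 fluctuates by a quarter of a percent.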
Suppose that you DO get the case where one pixel randomly receives more photons and ends up one bit higher than it should be. So what? The pixel will read out (for 14 bits) as 16380 instead of 16379. No human being is sensitive to such a small change, and unless you are trying to resolve a pattern painted in stripy greys differing by 1 part in 16384 (i.e. identical to the human eye) at a resolution equal to the pixel pitch (i.e. far smaller than the human eye could resolve), the impact on the perceived final image will be zero. Averaged over more than a few pixels, the noise in the bright areas is soon beaten down to utterly imperceptible levels. Add in the human eye's logarithmic response, which makes us even less sensitive to small differences in the bright areas, and this is, frankly, a non-issue.
As lots of others have pointed out, the noise problems are in the shadows, not in the bright areas. N is small there, sqrt(N)/N is therefore large, and the expected deviation of a given pixel from the "true" value is much larger, made even worse by the Poisson tails which are appreciable for small N but utterly, utterly irrelevant for large N where the behaviour is so close to Gaussian that your spreadsheet couldn't even calculate it, as you found out.
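The Poisson-tail point can be checked numerically rather than with factorials (which is where a spreadsheet blows up for large N). The skewness of a Poisson distribution is 1/sqrt(N): appreciable in the shadows, vanishing in the highlights, where it is essentially Gaussian. A quick sampled check (sample sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Compare a dim-shadow photon count with a bright-highlight one.
results = {}
for mean in (4, 40000):
    x = rng.poisson(mean, 200_000).astype(float)
    # Sample skewness; exactly 0 for a true Gaussian, ~1/sqrt(mean) for Poisson.
    results[mean] = np.mean((x - x.mean()) ** 3) / x.std() ** 3
    print(f"mean={mean}: relative noise={x.std()/x.mean():.4f}, skewness={results[mean]:.4f}")
```

At a mean of 4 photons the skewness comes out near 0.5 and the asymmetric tail matters; at 40,000 it is of order 0.005 and the distribution is Gaussian to all practical purposes.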
You've also omitted quantisation noise, the noise introduced by digitising the signal. Controlling it is one reason why cameras have gone from 12-bit internal readout to 14-bit: you don't want to bodge up your nice clean signal by crudely slicing it into too-coarse bins. You always want to digitise with a step size comparable to or finer than the noise, or the quantisation noise would overwhelm the shot noise, which would be really dumb.
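To put rough numbers on that: the RMS quantisation noise for an ADC step of size q is q/sqrt(12) (the RMS of a uniform error over one bin). The full-well figure below is an assumption picked for illustration, not a spec for any particular camera.

```python
import numpy as np

# Hypothetical sensor: ~60,000 electrons full well mapped onto the ADC range.
# Compare the 12-bit and 14-bit quantisation noise with the shot noise of a
# deep-shadow signal of, say, 100 electrons.
full_well = 60000.0
shot = np.sqrt(100)                        # ~10 e- shot noise in the shadows
for bits in (12, 14):
    q = full_well / 2 ** bits              # electrons per ADU
    qnoise = q / np.sqrt(12)               # RMS quantisation noise
    print(f"{bits}-bit: step = {q:.2f} e-, quantisation noise = {qnoise:.2f} e- "
          f"(shot noise here: {shot:.1f} e-)")
```

Under these assumed numbers, the 12-bit step noise is a sizeable fraction of the shadow shot noise, while the 14-bit step noise is safely buried beneath it.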
The other noise sources also become relevant in the shadows, or at low light levels, which is why all astro imaging camera sensors are cooled but most dSLR or MFDB chips are not. (My Hasselblad has a hefty heat sink in it, though, and my Canon D30 did automatic dark frame subtraction for long exposures, but these features are not widespread these days.)
Cheers, Hywel Phillips