area, a sensor of a given size will have the same overall noise performance whether I divide the sensor area into small pixels or large pixels. The smaller pixels will have worse S/N per pixel, but in the final image that will be precisely compensated for by the increased number of pixels.
In practice, this works roughly for digital cameras (as demonstrated in Emil Martinec's paper).
Now, consider the following cameras that happen to have 1 unit of read noise.
Camera 1 has a single sensel that collects 16 units of light. When read, it will report something in the 15 - 16 - 17 range.
Camera 2 has 4 sensels that collect 4 units of light each. When read, you'll have 3-4-5 / 3-4-5 / 3-4-5 / 3-4-5 (worst case sum: 12 - 20).
So you see that this is not "precisely compensated", although the distribution of the summed signal will indeed be centered on the same value of 16.
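To see where the 12 - 20 range comes from, here is a minimal sketch that simulates both cameras, treating "1 unit of read noise" as the standard deviation of a Gaussian added to each sensel at readout (an assumption; the example above only gives ranges):

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000
read_noise = 1.0  # 1 unit of read noise per sensel, as above

# Camera 1: one sensel collecting 16 units of light
cam1 = 16 + rng.normal(0, read_noise, trials)

# Camera 2: four sensels collecting 4 units each, summed after readout
cam2 = (4 + rng.normal(0, read_noise, (trials, 4))).sum(axis=1)

print(f"Camera 1: mean {cam1.mean():.2f}, std {cam1.std():.2f}")  # ~16, ~1
print(f"Camera 2: mean {cam2.mean():.2f}, std {cam2.std():.2f}")  # ~16, ~2
```

Both readouts are centered on 16 units, but the four-sensel sum carries about twice the read-noise spread (sqrt(4) x 1 = 2 units), which is why its worst case widens to 12 - 20.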
It doesn't matter much in photography because you are almost always using a decent exposure and working with significantly larger numbers than the ones above. You could simply say that you'll ignore the issue because it is negligible (do the math with 60000 units versus 4 times 15000 units and a read noise of 10 units, for example, as in the sketch below), but redefining noise or demonstrating an equality that doesn't exist will not satisfy everyone.
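For those larger numbers, here is a rough back-of-the-envelope sketch. It adds Poisson shot noise on top of the read noise, which is an assumption not spelled out in the suggested example:

```python
import math

signal = 60_000        # total units of light collected
read_noise = 10.0      # read noise per sensel

# Camera 1: one big sensel
shot1 = math.sqrt(signal)               # Poisson shot noise, ~245 units
total1 = math.hypot(shot1, read_noise)  # ~245.2 units

# Camera 2: four sensels of 15000 each, summed after readout
shot2 = math.sqrt(signal)               # shot noise sums to the same total
read2 = math.sqrt(4) * read_noise       # read noise adds in quadrature: 20 units
total2 = math.hypot(shot2, read2)       # ~245.8 units

print(f"Camera 1 S/N: {signal / total1:.1f}")  # ~244.7
print(f"Camera 2 S/N: {signal / total2:.1f}")  # ~244.1
```

The combined read noise does double (10 versus 20 units), but next to a shot noise of roughly 245 units the difference in the final signal-to-noise ratio is a fraction of a percent, which is why the issue is negligible at normal exposures.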
The noise (the uncertainty in the captured signal) doesn't change after capture if you can store the data reliably.
One last thing: the "scientific" definition of noise and the "photographic" perception of noise are two different things. Shooting a deep dark sky background should be noisy simply because the sky background is not uniform (there's also the variability in the rate at which light units arrive, i.e. Poisson noise, but let's not get into that), yet a photographer will find the totally uniform black background produced by the Nikon blackbox less noisy.