A pixel of the D800 has a pitch of 4.88 µm.

The diagonal of a 2×2 block of 4 pixels is (2 × 4.88 µm) × √2 ≈ 13.8 µm.

13.8 µm is the first-minimum diameter of the Airy disc at about f/10.3 for green light (d = 2.44 × λ × N, and with λ = 0.55 µm that's about 1.34 µm per f-stop, so N = 13.8/1.34 ≈ 10.3).

So at f/11 the first-minimum diameter of the diffraction disc is already big enough to swallow a 2×2 block of 4 pixels.
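For anyone who wants to check the arithmetic, here it is in Python, using the same assumed numbers (4.88 µm pitch, green light at 550 nm, and the standard Airy first-minimum formula d = 2.44 × λ × N):

```python
import math

# Assumptions from the discussion above:
pitch_um = 4.88                        # D800 pixel pitch in micrometres
wavelength_um = 0.55                   # green light, 550 nm

# Diagonal of a 2x2 block of pixels
diag_um = 2 * pitch_um * math.sqrt(2)

# Airy disc first-minimum diameter: d = 2.44 * wavelength * N.
# Solve for the f-number N at which the disc spans the 2x2 diagonal.
N = diag_um / (2.44 * wavelength_um)

print(f"2x2 diagonal:   {diag_um:.1f} um")   # ~13.8 um
print(f"f-number match: f/{N:.1f}")          # ~f/10.3
```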

This means every single light ray is blurred across 4 pixels!

This alone will drop the effective resolution of a 36 MP D800 image to something below 9 megapixels.

If you consider a light ray as a single bit of information, then a 4-pixel patch on the sensor holds that information bit.

But each pixel sits at the corner of 4 adjacent patches, so the information bits overlap.

And each information bit is the result of a different light 'ray', so the overlapping information bits are slightly different.

I think we can calculate the unique and accurate value of each individual pixel by solving a set of simultaneous equations:

Consider each pixel on the sensor as a unique variable. Each group of 4 pixels (a, b, c, d, for example) becomes one equation: a + b + c + d = measured light value.

There are as many equations as there are pixels.

Therefore, in principle, there is a unique solution via matrix algebra that gives the values of the individual pixels.
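Here's a toy version of that idea in Python/NumPy on a 4×4 "sensor". A sketch only: I've assumed a clamped edge (the last patch in each row and column covers fewer pixels), since some boundary convention is needed to make the system square and uniquely solvable.

```python
import numpy as np

n = 4
# 1-D operator: measurement i = x[i] + x[i+1], clamped at the edge
S = np.eye(n) + np.eye(n, k=1)
# 2-D operator via Kronecker product: each row of M sums one 2x2
# patch, i.e. one equation of the form a + b + c + d = light value
M = np.kron(S, S)

rng = np.random.default_rng(1)
X_true = rng.random((n, n))      # the unknown per-pixel values
b = M @ X_true.ravel()           # the 16 measured light values

# One equation per pixel: solve the 16x16 linear system
X_solved = np.linalg.solve(M, b).reshape(n, n)
print(np.allclose(X_solved, X_true))  # True
```

With this boundary convention the matrix is triangular-like with determinant 1, so the solution really is unique; whether that survives with a realistic Airy-pattern blur and sensor noise is exactly the open question.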

At least that's how I would solve the diffraction problem. Is this solution flawed? Has it been done?

Solving 36 million equations in 36 million variables should not be too hard: each equation involves only 4 of the variables, so the matrix is extremely sparse, and computer graphics cards and CPUs operate at gigahertz frequencies (billions and billions, as Carl Sagan said about stars).
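The structure matters even more than the clock speed: the full matrix for a megapixel image would have a trillion entries, but because the 2×2-patch sum is separable it never has to be built. A sketch (assuming the same clamped-edge boundary; 1000×1000 pixels, i.e. one million unknowns solved via two small 1000×1000 solves):

```python
import numpy as np

n = 1000                          # 1000 x 1000 image: 1,000,000 unknowns
# 1-D operator: b[i] = x[i] + x[i+1], clamped at the edge
S = np.eye(n) + np.eye(n, k=1)

rng = np.random.default_rng(0)
X = rng.random((n, n))            # "true" megapixel image

# The 2-D blur is kron(S, S), so vec(B) = kron(S, S) @ vec(X)
# is equivalent to the small matrix identity B = S @ X @ S.T
B = S @ X @ S.T                   # the 1,000,000 measurements

# Invert using only the 1000 x 1000 factor: X = S^-1 @ B @ S^-T
Y = np.linalg.solve(S, B)             # S^-1 @ B
X_rec = np.linalg.solve(S, Y.T).T     # (S^-1 @ B) @ S^-T

print(np.allclose(X_rec, X))
```

This runs in well under a second on an ordinary CPU, whereas storing the dense million-by-million matrix alone would need terabytes. Real deconvolution software exploits similar structure (usually via FFTs or sparse solvers) rather than brute force.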

PS: sorry for replying to an off-topic fork that started in this thread, but a lot of people seem to be participating in it. If you feel a different thread is needed, let me know and I will start one.