Luminous Landscape Forum

Equipment & Techniques => Cameras, Lenses and Shooting gear => Topic started by: shadowblade on April 05, 2016, 05:40:42 am

Title: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 05:40:42 am
From Sonyalpharumors (http://www.sonyalpharumors.com/sr3-first-a7riii-rumors-7080-megapixel-and-improved-ibis/)

Incredible if it turns out to be true. It's certainly possible, given that Sony's latest lenses are rated to 100MP.

Also, I hope these huge resolution jumps spur on the development of more tilt-shift lenses, better anti-diffraction deconvolution software (including as part of RAW conversion) or the eventual adoption of a Lytro Light Field camera-type design, since depth of field will become a major constraint.
Title: A7rIII - 70-80 megapixels
Post by: Christopher on April 05, 2016, 07:51:06 am
Why is depth of field the major constraint? First of all, nothing changes if the sensor stays the same size. Secondly, do we really need an unrealistic DOF from a few inches to infinity?

As for the sensor: I'm pretty sure we will see it soon in a Sony and a Nikon camera.


Christopher Hauser
ch@chauser.eu
Title: Re: A7rIII - 70-80 megapixels
Post by: Bo Dez on April 05, 2016, 07:52:46 am
These are certainly interesting times. I can't see how medium format can keep up with this sort of pace, if true. But I'm not interested in the current Sony bodies at all. If there is an 80MP Nikon D820 then I would jump in.

As for Lytro, they just announced they are dropping cameras and getting into VR. So, I think that goes to show how much interest there is in that system.
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 05, 2016, 07:54:22 am
Why is depth of field the major constraint? First of all, nothing changes if the sensor stays the same size. Secondly, do we really need an unrealistic DOF from a few inches to infinity?

As for the sensor: I'm pretty sure we will see it soon in a Sony and a Nikon camera.


Christopher Hauser
ch@chauser.eu

While the actual DOF does not change, he may be looking at enlargement sizes.  Since more MPs mean the image can be enlarged more, the apparent DOF decreases with enlargement.  The circle of confusion chosen for DOF calculations is based on the amount of enlargement (sensor size and output size).
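
As a rough numeric sketch of that scaling (not from the thread; it uses the standard hyperfocal DOF formulas, and the focal length, aperture, distance and CoC values are purely illustrative assumptions):

    # Sketch: apparent DOF shrinks as the acceptable circle of confusion (CoC)
    # is tightened for bigger enlargements. All numbers are illustrative.
    def total_dof_mm(focal_mm, f_number, distance_mm, coc_mm):
        # Common hyperfocal-distance approximation of the near/far limits
        h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = distance_mm * (h - focal_mm) / (h + distance_mm - 2 * focal_mm)
        far = distance_mm * (h - focal_mm) / (h - distance_mm)
        return far - near

    for enlargement in (10, 30):          # e.g. a 12MP-sized print vs a 75MP-sized print
        coc = 0.030 * 10 / enlargement    # 0.030 mm is the usual 35mm CoC at ~10x
        print(enlargement, round(total_dof_mm(50, 8, 3000, coc)), "mm of total DOF")

At f/8, 3 m and a 50mm lens this gives roughly 1850 mm of DOF at 10x but only about 570 mm at 30x: same capture, less apparent DOF in the bigger print.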
Title: Re: A7rIII - 70-80 megapixels
Post by: Christopher on April 05, 2016, 07:55:10 am
True, but when printed it will look the same or better. Or at least from my experience.


Christopher Hauser
ch@chauser.eu
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 05, 2016, 07:57:40 am
These are certainly interesting times. I can't see how medium format can keep up with this sort of pace, if true. But I'm not interested in the current Sony bodies at all. If there is an 80MP Nikon D820 then I would jump in.

As for Lytro, they just announced they are dropping cameras and getting into VR. So, I think that goes to show how much interest there is in that system.

There are both benefits and drawbacks to larger and smaller pixel sizes on a sensor. But with ever-shrinking pixel size come increasing demands on lenses and technique.
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 05, 2016, 08:01:59 am
True, but when printed it will look the same or better. Or at least from my experience.


Christopher Hauser
ch@chauser.eu

No it won't.  Every image I take looks wicked sharp on the 3.2" screen until you enlarge it.  Print the same image at 3x (4.5" x 3") and at 20x (30" x 20").  At 3x the whole image might look fairly sharp, but at 20x, not so much.  Of course, we can help it along in post, but it won't make it sharp.
Title: Re: A7rIII - 70-80 megapixels
Post by: Christopher on April 05, 2016, 08:05:30 am
And? The same image from a lower-res camera with the same settings will not look any better, especially if you go to 20x or more.


Christopher Hauser
ch@chauser.eu
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 08:17:31 am
And? The same image from a lower-res camera with the same settings will not look any better, especially if you go to 20x or more.


Christopher Hauser
ch@chauser.eu

No, it won't look better.

But I want to make full use of all the megapixels, especially when shooting landscapes at longer focal lengths. And, if I need to shoot at f/32 to make use of all the available resolution, I'd like to be able to remove diffraction-related loss of resolution using software. Diffraction, after all, follows well-known laws of physics, so it can be done.
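
For illustration, here is a minimal Richardson-Lucy deconvolution sketch in Python/NumPy; the Gaussian stand-in for a true Airy-pattern diffraction PSF, and all parameter values, are assumptions for brevity rather than anyone's actual workflow:

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred, psf, iterations=30):
        # Iteratively re-estimate the scene that, blurred by psf, best explains the capture
        estimate = np.full_like(blurred, 0.5)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / (reblurred + 1e-12)          # avoid division by zero
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate

    # Gaussian stand-in for the diffraction PSF (a real Airy pattern would be used in practice)
    x = np.arange(-7, 8, dtype=float)
    psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 2.0 ** 2))
    psf /= psf.sum()

    # Demo: blur a point source, then restore it
    scene = np.zeros((64, 64)); scene[32, 32] = 1.0
    blurred = np.clip(fftconvolve(scene, psf, mode="same"), 0, None)
    restored = richardson_lucy(blurred, psf)

This only works well when the blur kernel is known, which is exactly why diffraction (predictable from the aperture and wavelength) is a good candidate for it.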
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 05, 2016, 08:19:26 am
And? The same image from a lower-res camera with the same settings will not look any better, especially if you go to 20x or more.

Christopher Hauser
ch@chauser.eu

A 12MP image one might print at 10x while a 75MP one might print at 30x.  That was the point.  With more MP it is possible to print larger, and so DOF becomes a potential constraint depending on the image.
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 08:25:58 am
There are both benefits and drawbacks to larger and smaller pixel sizes on a sensor. But with ever-shrinking pixel size come increasing demands on lenses and technique.

Pixel size has no impact on technique. Total pixel count does.

An 80MP full-frame sensor is no more demanding on vibration reduction, focus accuracy, etc. than an 80MP medium-format back at the same angle of view.
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 05, 2016, 08:35:19 am
Pixel size has no impact on technique. Total pixel count does.

An 80MP full-frame sensor is no more demanding on vibration reduction, focus accuracy, etc. than an 80MP medium-format back at the same angle of view.

You are correct, but what we are talking about is ever-increasing pixel density on 135-size sensors.  Moving from 12MP to 24, to 36, to 50 and now potentially 70-80MP.  The sensor size remains the same, the pixel size decreases and the pixel count increases.
Title: Re: A7rIII - 70-80 megapixels
Post by: Bo Dez on April 05, 2016, 08:35:26 am
It only has an effect on technique when taking into account current technology and designs, which are not suited for high res. Don't expect this to stay the way it is, though.
Title: Re: A7rIII - 70-80 megapixels
Post by: Bart_van_der_Wolf on April 05, 2016, 08:46:16 am
No, it won't look better.

But I want to make full use of all the megapixels, especially when shooting landscapes at longer focal lengths. And, if I need to shoot at f/32 to make use of all the available resolution, I'd like to be able to remove diffraction-related loss of resolution using software. Diffraction, after all, follows well-known laws of physics, so it can be done.

Hi,

I understand what you are saying, but allow me to underline a few issues and opportunities. A sensor with denser sampling, i.e. more photo-sites per unit area, will extract more resolution from a given lens. The diffraction for a given sensor surface/area will remain the same for a given aperture number, but the per pixel resolution will suffer from more diffraction blur (lower contrast and loss of resolution). Also issues like camera shake become more significant.

However, with more samples of the blur, there are better opportunities for deconvolution software to restore the original signal from the scene before the lens/aperture blurred it, and additionally it reduces aliasing artifacts. Nevertheless, there is a limit to how much diffraction can be restored, and that limit is due to physics that cannot be beaten or improved (unlike the blur).

Assuming green wavelengths of, say, 555nm and a circular aperture, that means that for e.g. f/32 the physical resolution limit (where the MTF drops to zero response) is 1 / (0.000555 * 32) = 56.3 cycles/mm. That is equal to what a sensor array with an 8.88 micron sensel pitch achieves, so we might as well use a lower resolution camera, as far as resolution goes.
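
That arithmetic as a quick Python sketch (wavelength and f-number as assumed above):

    wavelength_mm = 555e-6                     # 555 nm green light, in mm
    f_number = 32
    cutoff = 1 / (wavelength_mm * f_number)    # MTF-zero frequency: ~56.3 cycles/mm
    matching_pitch_um = 1000 / (2 * cutoff)    # pitch whose Nyquist equals that cutoff: ~8.88 um
    print(round(cutoff, 1), round(matching_pitch_um, 2))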

Cheers,
Bart
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 08:53:17 am
Hi,

I understand what you are saying, but allow me to underline a few issues and opportunities. A sensor with denser sampling, i.e. more photo-sites per unit area, will extract more resolution from a given lens. The diffraction for a given sensor surface/area will remain the same for a given aperture number, but the per pixel resolution will suffer from more diffraction blur (lower contrast and loss of resolution). Also issues like camera shake become more significant.

However, with more samples of the blur, there are better opportunities for deconvolution software to restore the original signal from the scene before the lens/aperture blurred it, and additionally it reduces aliasing artifacts. Nevertheless, there is a limit to how much diffraction can be restored, and that limit is due to physics that cannot be beaten or improved (unlike the blur).

Assuming green wavelengths of, say, 555nm and a circular aperture, that means that for e.g. f/32 the physical resolution limit (where the MTF drops to zero response) is 1 / (0.000555 * 32) = 56.3 cycles/mm. That is equal to what a sensor array with an 8.88 micron sensel pitch achieves, so we might as well use a lower resolution camera, as far as resolution goes.

Cheers,
Bart

Enter new, diffraction-based 'super-lens' technologies based on nano-scale surfaces that can resolve detail beyond the usual diffraction and wavelength-imposed limits.
Title: Re: A7rIII - 70-80 megapixels
Post by: Bart_van_der_Wolf on April 05, 2016, 09:03:29 am
Just to give an idea, 80 MP on a 36x24mm sensor would translate to 10960 x 7296 pixels, and a photosite pitch of approx. 3.29 micron.

That would produce a maximum resolution of 152 cycles/mm, and an unrecoverable loss of physical resolution at apertures of f/11 or narrower. The first signs of diffraction at the pixel level will become visible at f/4.0 and will gradually increase at narrower apertures.
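
The same numbers as a quick sketch (36x24mm sensor and 555nm light assumed, as before):

    width_mm, height_mm, pixels = 36.0, 24.0, 80e6
    pitch_mm = (width_mm * height_mm / pixels) ** 0.5   # ~0.00329 mm, i.e. 3.29 um
    nyquist = 1 / (2 * pitch_mm)                        # ~152 cycles/mm
    f_limit = 1 / (555e-6 * nyquist)                    # diffraction cutoff meets Nyquist near f/11.8
    print(round(pitch_mm * 1e3, 2), round(nyquist), round(f_limit, 1))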

Cheers,
Bart
Title: Re: A7rIII - 70-80 megapixels
Post by: Bart_van_der_Wolf on April 05, 2016, 09:06:14 am
Enter new, diffraction-based 'super-lens' technologies based on nano-scale surfaces that can resolve detail beyond the usual diffraction and wavelength-imposed limits.

Diffraction is caused by the diameter of the aperture, not by the lens elements.

Cheers,
Bart

P.S. As a means to reduce the amount of light with a wider aperture (to improve resolution), one could employ neutral density filters, but that would not have an effect on DOF. DOF requires either narrow apertures, or a different technology like focus bracketing or incident light-angle sensitive sensors.
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 09:47:54 am
Diffraction is caused by the diameter of the aperture, not by the lens elements.

Cheers,
Bart

In classical optics, yes.

Not so when you bring metamaterials with special optical properties (e.g. negative refractive index) into it. You can bend light around an object, such as the aperture, without diffraction, or correct it optically before it reaches the sensor. Lots of interesting recent (last 5 years) developments in optical materials.
Title: Re: A7rIII - 70-80 megapixels
Post by: NancyP on April 05, 2016, 10:50:07 am
I still can't wrap my mind around "negative refractive index" - I want to see this in action.

I just keep thinking, FIRST upgrade the computer!   ::)  Otherwise processing these big files will feel like using dial-up.
Title: Re: A7rIII - 70-80 megapixels
Post by: Paul Roark on April 05, 2016, 11:36:20 am
For stitching, etc., whenever there is geometric manipulation of the data, we lose information.  The higher MP count would help preserve what our lens captured. 

My limiting factor with the Sony a7r2 is more from the noise, however.  So, this trade-off is what I'll be interested in.

As an example, the image currently on my web page -- http://www.paulroark.com/ -- was hand held (auto bracketed).  Sadly, the hand held shots (at 1000 iso) were not good enough to show all that the lens (Leica apo 135mm) could capture.  So, I took my tripod out and re-did the shot of Jupiter to capture the three of its moons that my spotting scope could (barely) see.  A 100% section of the image is at http://www.paulroark.com/Jupiter-30th-400iso-135mm-Apo-Telyt-at-100pc.jpg .  Yes, three moons of Jupiter show in large prints (20x26 inches minimum).  You just have to love what technology is bringing to us photographers.

Paul
www.PaulRoark.com
Title: Re: A7rIII - 70-80 megapixels
Post by: ErikKaffehr on April 05, 2016, 12:09:26 pm
Hi,

Increasing resolution is always good:

- System MTF will increase (as sensor MTF will be better)
- The system will be in less need of OLP filtering
- Diffraction is not affected
- Higher resolution generally benefits sharpening

What may be negatively affected is DR; larger pixels will have somewhat higher DR.

What happens is that the pictures will look less sharp at actual pixels, because the viewing magnification is greater. But the information will improve in quality.

So a good lens on an 80 MP sensor will probably perform better than an excellent lens on a 40 MP sensor.

But, I think the 36-50 MP sensors are quite adequate for most needs.

Best regards
Erik



From Sonyalpharumors (http://www.sonyalpharumors.com/sr3-first-a7riii-rumors-7080-megapixel-and-improved-ibis/)

Incredible if it turns out to be true. It's certainly possible, given that Sony's latest lenses are rated to 100MP.

Also, I hope these huge resolution jumps spur on the development of more tilt-shift lenses, better anti-diffraction deconvolution software (including as part of RAW conversion) or the eventual adoption of a Lytro Light Field camera-type design, since depth of field will become a major constraint.
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 12:30:22 pm
Hi,

Increasing resolution is always good:

- System MTF will increase (as sensor MTF will be better)
- The system will be in less need of OLP filtering
- Diffraction is not affected
- Higher resolution generally benefits sharpening

What may be negatively affected is DR; larger pixels will have somewhat higher DR.

What happens is that the pictures will look less sharp at actual pixels, because the viewing magnification is greater. But the information will improve in quality.

So a good lens on an 80 MP sensor will probably perform better than an excellent lens on a 40 MP sensor.

But, I think the 36-50 MP sensors are quite adequate for most needs.

Best regards
Erik

Not necessarily true any more with BSI sensors.

Between gapless microlenses and BSI sensors, 100% of the sensor's forward-facing surface area can be made available for light collection. In this case, the number of photons collected no longer changes with the pixel count (since all the electronics are at the back), so the final DR will be the same, when normalised to any given resolution.
Title: Re: A7rIII - 70-80 megapixels
Post by: Peter McLennan on April 05, 2016, 01:22:37 pm
Yes, three moons of Jupiter show in large prints (20x26 inches minimum).  You just have to love what technology is bringing to us photographers.
Paul
www.PaulRoark.com

Just back from your site, Paul.

"Holy crap", as the kids say. 

My initial reaction to the idea of an 80MP DSLR was "we won't need long lenses as much".  Your amazing image demonstrates that.
Title: Re: A7rIII - 70-80 megapixels
Post by: ErikKaffehr on April 05, 2016, 01:40:42 pm
Hi,

The reason DR is lost is statistics.

Say that FWC (Full Well Capacity) is 60000 electrons per pixel on sensor A and read noise is 4 electron charges. That would give an engineering DR of log(60000/4)/log(2) ≈ 13.9 EV. Now make those pixels half that size and downscale to the same size. FWC will simply add, so you still get 60000 electrons per pixel in the downsized image, but noise will add in quadrature, so you will have a readout noise of sqrt(16 + 16) = 5.66, and your DR will be log(60000/5.66)/log(2) ≈ 13.4 EV.
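
Erik's arithmetic, sketched in Python (values from his example):

    from math import log2, sqrt

    fwc, read_noise = 60000, 4.0
    print(round(log2(fwc / read_noise), 1))        # ~13.9 EV per pixel

    # Half-area pixels binned back to the original size:
    # signal adds linearly, read noise adds in quadrature
    binned_signal = 2 * (fwc / 2)                  # still 60000 e-
    binned_noise = sqrt(4.0 ** 2 + 4.0 ** 2)       # = 5.66 e-
    print(round(log2(binned_signal / binned_noise), 1))   # ~13.4 EV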

Best regards
Erik


Not necessarily true any more with BSI sensors.

Between gapless microlenses and BSI sensors, 100% of the sensor's forward-facing surface area can be made available for light collection. In this case, the number of photons collected no longer changes with the pixel count (since all the electronics are at the back), so the final DR will be the same, when normalised to any given resolution.
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 02:05:30 pm
Hi,

The reason DR is lost is statistics.

Say that FWC (Full Well Capacity) is 60000 electrons per pixel on sensor A and read noise is 4 electron charges. That would give an engineering DR of log(60000/4)/log(2) ≈ 13.9 EV. Now make those pixels half that size and downscale to the same size. FWC will simply add, so you still get 60000 electrons per pixel in the downsized image, but noise will add in quadrature, so you will have a readout noise of sqrt(16 + 16) = 5.66, and your DR will be log(60000/5.66)/log(2) ≈ 13.4 EV.

Best regards
Erik

That's assuming that the read noise is still 4 elementary charges per pixel in the denser sensor. From past and present examples, denser sensors tend to have less read noise per pixel. Not sure if this is simply because manufacturers don't increase the pixel density unless they can adequately reduce the read noise, or if smaller photosites just tend to have less read noise.

I'd like to see deeper wells capable of delivering a lower ISO, for even greater DR. ISO 6.25 for 4 extra stops over ISO 100. Shouldn't be a problem with 3D chip-manufacturing techniques capable of putting capacitors with huge surface areas behind each photosite...
Title: Re: A7rIII - 70-80 megapixels
Post by: Bart_van_der_Wolf on April 05, 2016, 03:32:04 pm
From past and present examples, denser sensors tend to have less read noise per pixel. Not sure if this is simply because manufacturers don't increase the pixel density unless they can adequately reduce the read noise, or if smaller photosites just tend to have less read noise.

No, although there seems to be a correlation, there is no causality. It was mainly caused by using on-sensor amplification instead of off-sensor amplification. The issue that remains is that the well depth is rather intimately related to surface area (a very thin layer of capacitance). So while the per-area read noise has been reduced by using another method of amplification and some gains in well capacity, there is still a significant limit to increasing well depth to compensate for reduced per-pixel surface area.

A photosite with a pitch of 4.88 micron has a surface area of (simplified) 4.88^2 = 23.8 square microns. A photosite with a pitch of 3.29 micron has a surface area of 10.8 square microns, which is less than half. Hence it will have roughly half of the Full Well Capacity (FWC), say 30000 instead of 60000. That will halve the DR if the read noise stays the same: a full stop less DR.
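
In code (a sketch assuming FWC scales with area and the read noise stays at 4 e-, as in Erik's earlier example):

    from math import log2

    read_noise = 4.0
    for pitch_um, fwc in ((4.88, 60000), (3.29, 30000)):
        area_um2 = pitch_um ** 2                   # 23.8 vs 10.8 square microns
        print(pitch_um, round(area_um2, 1), round(log2(fwc / read_noise), 2))
        # ~13.87 EV vs ~12.87 EV: one stop less DR per pixel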

Quote
I'd like to see deeper wells capable of delivering a lower ISO, for even greater DR. ISO 6.25 for 4 extra stops over ISO 100. Shouldn't be a problem with 3D chip-manufacturing techniques capable of putting capacitors with huge surface areas behind each photosite...

Exactly. Full Well Capacity will make a difference, but it will be hard to increase it at the same rate as the per-pixel surface area shrinks. I'm told that Back Side Illumination (BSI) does not necessarily improve the FWC, but it does help quantum efficiency. So sensitivity will benefit from BSI, DR maybe less so.

Cheers,
Bart
Title: Re: A7rIII - 70-80 megapixels
Post by: mbaginy on April 05, 2016, 03:39:51 pm
I just keep thinking, FIRST upgrade the computer!   ::)  Otherwise processing these big files will feel like using dial-up.
My line of thought as well, Nancy.  The file sizes of my images keep growing with every new camera / body, but I'm reluctant to upgrade my iMac, and that has slowed things down noticeably.  I'd hate to think about processing even larger files with my old (?) hardware.
Title: Re: A7rIII - 70-80 megapixels
Post by: NancyP on April 05, 2016, 04:33:46 pm
Paul Roark, all I see on the screen is three dust particles around Jupiter.  ;D
Title: Re: A7rIII - 70-80 megapixels
Post by: ErikKaffehr on April 05, 2016, 05:30:21 pm
Hi Bart,

I don't agree fully. To begin with, it is common practice to have an extra capacitor on the pixels. So full well capacity is increased by that capacitor. There is a trick patented by Aptina to isolate that capacitor from the photodiode at higher ISOs, thus increasing voltage. The Sony A7rII sensor uses that trick at ISO 640.

The other issue I have is that you are right, reducing pixel area to half reduces FWC to half, but you now have twice the number of pixels. My understanding is that if you normalise the number of pixels you get half an EV of extra DR over the per-pixel DR. (OK, I hope you can figure out what I mean).

An interesting point is that the Canon 5DsR seems to have much improved DR over the 5DIII although using smaller pixels.

Best regards
Erik


No, although there seems to be a correlation, there is no causality. It was mainly caused by using on-sensor amplification instead of off-sensor amplification. The issue that remains is that the well depth is rather intimately related to surface area (a very thin layer of capacitance). So while the per-area read noise has been reduced by using another method of amplification and some gains in well capacity, there is still a significant limit to increasing well depth to compensate for reduced per-pixel surface area.

A photosite with a pitch of 4.88 micron has a surface area of (simplified) 4.88^2 = 23.8 square microns. A photosite with a pitch of 3.29 micron has a surface area of 10.8 square microns, which is less than half. Hence it will have roughly half of the Full Well Capacity (FWC), say 30000 instead of 60000. That will halve the DR if the read noise stays the same: a full stop less DR.

Exactly. Full Well Capacity will make a difference, but it will be hard to increase it at the same rate as the per-pixel surface area shrinks. I'm told that Back Side Illumination (BSI) does not necessarily improve the FWC, but it does help quantum efficiency. So sensitivity will benefit from BSI, DR maybe less so.

Cheers,
Bart
Title: Re: A7rIII - 70-80 megapixels
Post by: Bart_van_der_Wolf on April 05, 2016, 05:58:35 pm
Hi Bart,

I don't agree fully. To begin with, it is common practice to have an extra capacitor on the pixels. So full well capacity is increased by that capacitor. There is a trick patented by Aptina to isolate that capacitor from the photodiode at higher ISOs, thus increasing voltage. The Sony A7rII sensor uses that trick at ISO 640.


Hi Erik,

But when the photosite's area is reduced, the capacitor will also have a smaller size.

Quote
The other issue I have is that you are right, reducing pixel area to half reduces FWC to half, but you now have twice the number of pixels. My understanding is that if you normalise the number of pixels you get half an EV of extra DR over the per-pixel DR. (OK, I hope you can figure out what I mean).

Have to re-read that at a later moment.

Quote
An interesting point is that the Canon 5DsR seems to have much improved DR over the 5DIII although using smaller pixels.

Yes, the per-pixel DR is somewhat comparable despite the smaller surface area per pixel, and has improved when comparing the normalized (down-sampled) 'screen' values. But that just shows that Canon's technology has also advanced over the course of the 3 years between the introduction of those models. As said, there has also been a gradual improvement of the FWC, but that is harder than reducing the surface area of the photosites.

Cheers,
Bart
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 05, 2016, 09:47:38 pm
No, although there seems to be a correlation, there is no causality. It was mainly caused by using on-sensor amplification instead of off-sensor amplification. The issue that remains is that the well depth is rather intimately related to surface area (a very thin layer of capacitance). So while the per-area read noise has been reduced by using another method of amplification and some gains in well capacity, there is still a significant limit to increasing well depth to compensate for reduced per-pixel surface area.

A photosite with a pitch of 4.88 micron has a surface area of (simplified) 4.88^2 = 23.8 square microns. A photosite with a pitch of 3.29 micron has a surface area of 10.8 square microns, which is less than half. Hence it will have roughly half of the Full Well Capacity (FWC), say 30000 instead of 60000. That will halve the DR if the read noise stays the same: a full stop less DR.

Exactly. Full Well Capacity will make a difference, but it will be hard to increase it at the same rate as the per-pixel surface area shrinks. I'm told that Back Side Illumination (BSI) does not necessarily improve the FWC, but it does help quantum efficiency. So sensitivity will benefit from BSI, DR maybe less so.

Cheers,
Bart

Increasing surface area is now much easier with 3D (as opposed to the previous 2D) etching/printing methods for producing electronics. The surface area of each photosite for purposes of capacitance has no correlation with the surface area of the photosite exposed to light - the area exposed to light merely determines how fast the capacitor can be filled. A capacitor rolled up behind each light-collecting area, or as a spongelike structure behind the photosite, has a huge surface area. Difficult to make just a few years ago, but much easier now.
Title: Re: A7rIII - 70-80 megapixels
Post by: Bart_van_der_Wolf on April 06, 2016, 04:28:14 am
Increasing surface area is now much easier with 3D (as opposed to the previous 2D) etching/printing methods for producing electronics. The surface area of each photosite for purposes of capacitance has no correlation with the surface area of the photosite exposed to light - the area exposed to light merely determines how fast the capacitor can be filled. A capacitor rolled up behind each light-collecting area, or as a spongelike structure behind the photosite, has a huge surface area. Difficult to make just a few years ago, but much easier now.

Yes, as I've said, progress has been made, but I'm not so sure it's much easier now (I have no info on e.g. yield numbers and cost), and I do not know if we can expect the trend to continue at the same pace as the shrinking pitch does.

When I look at the development of a few models I see the following (based on data from http://www.sensorgen.info/ and  http://www.photonstophotos.net/):
Canon:
EOS-5D Mark III: saturation level = 70635 e-, with 6.1 micron pitch = 1898 e- per square micron.
EOS-1DX: saturation level = 90101 e-, with 6.9 micron pitch = 1898 e- per square micron.
EOS-7D-Mark-II:  saturation level = 29544 e-, with 4.1 micron pitch = 1758 e- per square micron.
EOS-5DS R: saturation level = 34470 e-, with 4.1 micron pitch = 2051 e- per square micron.
EOS 80D:  ???
EOS-1DX Mark II: ???

Unfortunately, no info yet on the more recently redesigned sensor models, with more on-sensor amplification.

Nikon:
D3: saturation level = 50626 e-, with 8.4 micron pitch = 717 e- per square micron.
D3s: saturation level = 84203 e-, with 8.4 micron pitch = 1193 e- per square micron.
D3X: saturation level = 47765 e-, with 5.9 micron pitch = 1372 e- per square micron.
D4: saturation level = 118339 e-, with 7.2 micron pitch = 2282 e- per square micron.
D4s: saturation level = 128489 e-, with 7.3 micron pitch = 2411 e- per square micron.
D800: saturation level = 48818 e-, with 4.7 micron pitch = 2210 e- per square micron.
D800E: saturation level = 54924 e-, with 4.7 micron pitch = 2486 e- per square micron.
D810: saturation level = 78083 e-, with 4.9 micron pitch = 3252 e- per square micron.
D5: ???

Sony:
SLT-Alpha-77: saturation level = 25206 e-, with 3.9 micron pitch = 1050 e- per square micron.
SLT-Alpha-99:  saturation level = 64682 e-, with 5.9 micron pitch = 1858 e- per square micron.
SLT-Alpha-77 II: saturation level = 39783 e-, with 3.9 micron pitch = 2616 e- per square micron.
A7: saturation level = 51688 e-, with 5.9 micron pitch = 1485 e- per square micron.
A7R: saturation level = 49714 e-, with 4.9 micron pitch = 2071 e- per square micron.
A7S: saturation level = 153207 e-, with 8.3 micron pitch = 2224 e- per square micron.
A7S II: saturation level = 158671 e-, with 8.4 micron pitch = 2249 e- per square micron.
A7R II: saturation level = 51856 e-, with  4.5 micron pitch = 2561 e- per square micron.
A7R III: ???

The combination of improved well depth and closer integration of amplifier circuits has brought us much improved DR performance, but with a 14-bit ADC environment we are getting close to the achievable limits. Moving to a 16-bit environment, as the latest Phase One IQ3 100MP shows, will raise the ceiling significantly.
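
A sketch of how the per-area figures above are derived, and of the ADC ceiling point (saturation and pitch taken from the D810 entry in the list; the ceiling note is simply log2 of the ADC levels):

    from math import log2

    saturation_e, pitch_um = 78083, 4.9            # D810 values from the list above
    print(round(saturation_e / pitch_um ** 2))     # ~3252 e- per square micron

    # An n-bit ADC can encode at most n stops between 1 LSB and full scale,
    # so a 14-bit pipeline caps engineering DR near 14 EV; 16 bits raises that ceiling.
    print(log2(2 ** 14), log2(2 ** 16))            # 14.0, 16.0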

Cheers,
Bart
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 07, 2016, 03:43:00 am
Yes, as I've said, progress has been made, but I'm not so sure it's much easier now (I have no info on e.g. yield numbers and cost), and I do not know if we can expect the trend to continue at the same pace as the shrinking pitch does.

When I look at the development of a few models I see the following (based on data from http://www.sensorgen.info/ and  http://www.photonstophotos.net/):
Canon:
EOS-5D Mark III: saturation level = 70635 e-, with 6.1 micron pitch = 1898 e- per square micron.
EOS-1DX: saturation level = 90101 e-, with 6.9 micron pitch = 1898 e- per square micron.
EOS-7D-Mark-II:  saturation level = 29544 e-, with 4.1 micron pitch = 1758 e- per square micron.
EOS-5DS R: saturation level = 34470 e-, with 4.1 micron pitch = 2051 e- per square micron.
EOS 80D:  ???
EOS-1DX Mark II: ???

Unfortunately, no info yet on the more recently redesigned sensor models, with more on-sensor amplification.

Nikon:
D3: saturation level = 50626 e-, with 8.4 micron pitch = 717 e- per square micron.
D3s: saturation level = 84203 e-, with 8.4 micron pitch = 1193 e- per square micron.
D3X: saturation level = 47765 e-, with 5.9 micron pitch = 1372 e- per square micron.
D4: saturation level = 118339 e-, with 7.2 micron pitch = 2282 e- per square micron.
D4s: saturation level = 128489 e-, with 7.3 micron pitch = 2411 e- per square micron.
D800: saturation level = 48818 e-, with 4.7 micron pitch = 2210 e- per square micron.
D800E: saturation level = 54924 e-, with 4.7 micron pitch = 2486 e- per square micron.
D810: saturation level = 78083 e-, with 4.9 micron pitch = 3252 e- per square micron.
D5: ???

Sony:
SLT-Alpha-77: saturation level = 25206 e-, with 3.9 micron pitch = 1050 e- per square micron.
SLT-Alpha-99:  saturation level = 64682 e-, with 5.9 micron pitch = 1858 e- per square micron.
SLT-Alpha-77 II: saturation level = 39783 e-, with 3.9 micron pitch = 2616 e- per square micron.
A7: saturation level = 51688 e-, with 5.9 micron pitch = 1485 e- per square micron.
A7R: saturation level = 49714 e-, with 4.9 micron pitch = 2071 e- per square micron.
A7S: saturation level = 153207 e-, with 8.3 micron pitch = 2224 e- per square micron.
A7S II: saturation level = 158671 e-, with 8.4 micron pitch = 2249 e- per square micron.
A7R II: saturation level = 51856 e-, with  4.5 micron pitch = 2561 e- per square micron.
A7R III: ???

The combination of improved well depth and closer integration of amplifier circuits has brought us much improved DR performance, but with a 14-bit ADC environment we are getting close to the achievable limits. Moving to a 16-bit environment, as the latest Phase One IQ3 100MP shows, will raise the ceiling significantly.

Cheers,
Bart

All of these are still based on current, 2-dimensional manufacturing processes, which have difficulty producing anything more than a few layers thick.

3D fabrication techniques have no such limit. You can stack layer upon layer upon layer, increasing the size, thickness and surface area of the capacitors until you run into limits due to heat or physical size.

In effect, the size and efficiency of the light-collecting area determines how fast you can collect photons (i.e. the noise at any given ISO). The volume of capacitors behind it, and the achievable surface area per unit volume, determines the minimum-achievable ISO.
Title: Re: A7rIII - 70-80 megapixels
Post by: Bo Dez on April 07, 2016, 05:56:52 am
All of these are still based on current, 2-dimensional manufacturing processes, which have difficulty producing anything more than a few layers thick.

3D fabrication techniques have no such limit. You can stack layer upon layer upon layer, increasing the size, thickness and surface area of the capacitors until you run into limits due to heat or physical size.

In effect, the size and efficiency of the light-collecting area determines how fast you can collect photons (i.e. the noise at any given ISO). The volume of capacitors behind it, and the achievable surface area per unit volume, determines the minimum-achievable ISO.

Awesome info. It always frustrates me when people get angry (yes, they get angry!) insisting that they don't want more resolution via megapixels, with their understanding based on past technology, theory and designs. The laws of physics may remain, but technology continually finds new ways to move around them.
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 08, 2016, 11:23:13 pm
Awesome info. It always frustrates me when people get angry (yes, they get angry!) insisting that they don't want more resolution via megapixels, with their understanding based on past technology, theory and designs. The laws of physics may remain, but technology continually finds new ways to move around them.

Pretty much.

When you don't change the underlying technology (whether sensor technology, manufacturing technology or the computer technology that drives it) you get incremental improvements. But, when one of the underlying technologies changes, you can have a big leap, although it may take a few generations to get it right. For instance, Sony/Nikon's sudden leapfrog with Exmor, the big leap with the 5D2 (going from 12 to 21 megapixels and adding video/live view), as opposed to Canon's incremental changes with its endless incarnations of the 18MP crop sensor.

With BSI, you suddenly suffer far less from a denser sensor. If they've opened a new production line using finer-scale circuitry, it suddenly becomes much easier to make high-density sensors at full-frame size. If they can introduce a 3D manufacturing technique (unlikely with this generation) then the possibilities explode.

If, on the other hand, you try to build a new chip based on the same technology, using the same manufacturing process, then the best you can hope for is incremental change.
Title: Re: A7rIII - 70-80 megapixels
Post by: hjulenissen on April 12, 2016, 04:24:11 am
If each sensel is small enough to typically only be hit by one (or zero) photons, then one would not need a lot of well capacity? Just being able to (somewhat accurately, limited by physics uncertainty) generate an electron for each photon.

-h
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 12, 2016, 04:52:41 am
If each sensel is small enough to typically only be hit by one (or zero) photons, then one would not need a lot of well capacity? Just being able to (somewhat accurately, limited by physics uncertainty) generate an electron for each photon.

-h

You still need the well capacity if you want enough DR, and to minimise the effect of read noise. This may mean reducing the base ISO (i.e. increasing the exposure time) in order to capture enough photons per pixel. To get 14 stops of DR, you need a minimum well capacity of 16383 e-; since you'll have some read noise, that will increase. And those shooting with a super-high-resolution sensor are likely very interested in detail and DR and are shooting on a tripod, so you'll probably want to aim for even higher well capacity and DR.
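
A quick sketch of that requirement (the read noise figures here are assumptions for illustration):

    from math import log2

    def min_fwc(stops, read_noise_e):
        # Smallest full well giving `stops` of engineering DR above the noise floor
        return read_noise_e * 2 ** stops

    print(min_fwc(14, 1.0))   # 16384 e- with an idealised 1 e- noise floor
    print(min_fwc(14, 2.5))   # 40960 e- with a more realistic read noise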
Title: Re: A7rIII - 70-80 megapixels
Post by: hjulenissen on April 12, 2016, 05:13:29 am
You still need the well capacity if you want enough DR, and to minimise the effect of read noise. This may mean reducing the base ISO (i.e. increasing the exposure time) in order to capture enough photons per pixel. To get 14 stops of DR, you need a minimum well capacity of 16383 e-; since you'll have some read noise, that will increase. And those shooting with a super-high-resolution sensor are likely very interested in detail and DR and are shooting on a tripod, so you'll probably want to aim for even higher well capacity and DR.
What do you need the well capacity for if the system only ever needs to differentiate between "hit by a photon" and "not hit by a photon"?

The per-sensel granularity would be binary. I would dare to claim that if we ever see such a hypothetical camera, the "image DR" would be better than today's cameras.

Just like how inkjet printers offer fine gradations based on binary "ink drop" / "no ink drop" patterns.

Being a little more down to earth:
If we increase the number of sensels, does it not make sense to also decrease the (ambitions for) maximum number of photons per sensel before saturation, as a given scene/exposure will throw fewer photons at each of the (smaller) sensels?

-h
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 12, 2016, 05:42:26 am
What do you need the well capacity for if the system only ever needs to differentiate between "hit by a photon" and "not hit by a photon"?

The per-sensel granularity would be binary. I would dare to claim that if we ever see such a hypothetical camera, the "image DR" would be better than today's cameras.

Just like how inkjet printers offer fine gradations based on binary "ink drop" / "no ink drop" patterns.

Trouble is, when the read noise is also around that level, you lose the ability to distinguish between read noise, photon shot noise and actual detail. Even if there were no such thing as read noise, you'd still end up with little detail. After all, what a sensor essentially measures - and what corresponds to 'bright' areas and 'dark' areas - is the rate of photon hits (i.e. number of hits in a given period of time) rather than whether it was hit or not. A binary sensor gives you a 1-bit image.

Quote
Being a little more down to earth:
If we increase the number of sensels, does it not make sense to also decrease the (ambitions for) maximum number of photons per sensel before saturation, as a given scene/exposure will throw fewer photons at each of the (smaller) sensels?

Down to a certain point, yes. Beyond that, read noise will become significant and reduce the dynamic range.
Title: Re: A7rIII - 70-80 megapixels
Post by: hjulenissen on April 12, 2016, 06:44:26 am
Trouble is, when the read noise is also around that level, you lose the ability to distinguish between read noise, photon shot noise and actual detail.
So read noise is an issue, and it would have to be sorted.
Quote
Even if there were no such thing as read noise, you'd still end up with little detail. After all, what a sensor essentially measures - and what corresponds to 'bright' areas and 'dark' areas - is the rate of photon hits (i.e. number of hits in a given period of time) rather than whether it was hit or not. A binary sensor gives you a 1-bit image.
And a "dithered"/"noisy" 1-bit image is (essentially) what our inkjet does. And how mother nature generates a landscape scene in the first place.

If binary images cannot have smooth gradations, then how can I stand on a hill and see a landscape with smooth gradations?

-h
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 12, 2016, 07:27:34 am
So read noise is an issue, and it would have to be sorted. And a "dithered"/"noisy" 1-bit image is (essentially) what our inkjet does. And how mother nature generates a landscape scene in the first place.

If binary images cannot have smooth gradations, then how can I stand on a hill and see a landscape with smooth gradations?

-h

Is this rhetorical?  Obviously, standing on a hill viewing a landscape is neither binary nor discrete.  That is actually the hurdle for both film in its way and digital sensors in theirs: how to represent something of one type as something of another type.
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 12, 2016, 08:13:47 am
So read noise is an issue, and it would have to be sorted.

Read noise is already pretty close to zero. You'd get a bit more improvement by putting an A/D converter behind each pixel (rather than just at the end of a column), but the technology needed to do that (3d-printed circuitry) also lets you put huge capacitors behind each pixel.

Quote
And a "dithered"/"noisy" 1-bit image is (essentially) what our inkjet does.

But the inkjet has no 'write noise', plus it has light inks, plus it works on a subtractive rather than additive process.

Quote
And how mother nature generates a landscape scene in the first place.

If binary images cannot have smooth gradations, then how can I stand on a hill and see a landscape with smooth gradations?

-h

It's not. Rods and cones in the eye are stimulated by individual photons, yes. But what the brain interprets isn't a simple 'on' or 'off' - it's the rate of stimulation that determines how bright an object looks. What matters is not whether a cell is being stimulated or not, but how quickly it is being stimulated.

Is this rhetorical?  Obviously, standing on a hill viewing a landscape is neither binary nor discrete.  That is actually the hurdle for both film in its way and digital sensors in theirs: how to represent something of one type as something of another type.

Actually, digital sensors record an image in much the same way as the human eye. Like a digital sensor, colour is derived from cone cells with red, green and blue pigment in front of the photosensitive part (L, M and S cones respectively) and interpolated by the brain - it's why dichromats and tetrachromats see different colours to typical trichromats, but are still able to recognise 'blue' as 'blue', even though it looks different to them (the exception being certain colours being indistinguishable for dichromats, and true tetrachromats being able to distinguish certain colour pairs that appear identical to trichromats). Brightness is derived from the rate of stimulation of the cone and rod cells, which is directly dependent on the rate of photons hitting the cell; a digital sensor essentially counts photons, and more photons collected in the same space of time (the shutter speed) equates to a faster photon hit rate on a given photoreceptor.
Title: Re: A7rIII - 70-80 megapixels
Post by: hjulenissen on April 13, 2016, 02:59:45 am
Is this rhetorical?  Obviously, standing on a hill viewing a landscape is neither binary nor discrete.  That is actually the hurdle for both film in its way and digital sensors in theirs: how to represent something of one type as something of another type.
I have to admit that it has been a few years since I had physics, but I do believe that the world of photons is properly described as "binary" in this context.

Human perception is less relevant: if the physical scene is "really" binary in nature, then our perception can work this way or the other but would still be inherently limited by the information present in the scene.

-h
Title: Re: A7rIII - 70-80 megapixels
Post by: hjulenissen on April 13, 2016, 03:06:12 am
But the inkjet has no 'write noise', plus it has light inks, plus it works on a subtractive rather than additive process.
So the presence of light inks (as well as color) is obviously a modification to my simplified description, but this does not change the fact that a pretty good B/W image could be generated by a single (black) ink splatted onto a white paper. I am not sure that subtractive vs additive is all that relevant, as it could (again, in principle) have splatted white ink on black paper instead, or we could have some kind of display tech that offered small bright spots on a white background.

Interestingly, this kind of printing would not work very well without noise (dithering).
Quote
It's not. Rods and cones in the eye are stimulated by individual photons, yes. But what the brain interprets
If we could record and recreate the discrete set of photons from a real scene, we could recreate the scene. Then human perception is irrelevant, as we would offer our senses the same stimuli.
Quote
But what the brain interprets isn't a simple 'on' or 'off' - it's the rate of stimulation that determines how bright an object looks.
And if the scene had 100 photons within a (small) spatio-temporal volume and the corresponding recreation had 100 photons within the corresponding spatio-temporal volume, our brain would (AFAIK) have no way of responding differently. By making this volume smaller and smaller, we would (eventually) reach a state where each volume realistically got either 1 or 0 photons.

Now, that kind of precision might be overkill for photography applications, and there may be subtle Heisenberg issues that I don't really comprehend, but I think that my point stands: such a device (if it is ever possible) could potentially record every bit of information present in a given projection by a lens onto a (sensor) plane, using only (in the case of monochromatic light) a single bit per sensel. This image would have as much DR as the scene allows. Claims that one needs to store lots of charge per sensel to have lots of dynamic range are thus false by my reckoning.

You still need the well capacity if you want enough DR...

Eric Fossum has been working on such sensors, but I do not know how far from practically usable his papers are:
http://ericfossum.com/Publications/Papers/2015%20CMOS%20April%20Saleh%20Binary%20Sensor%20Abstract.pdf

Quote
Quanta Image sensors (QIS) are proposed as a paradigm shift in image capture to take advantage of shrinking pixel sizes [1]. The key aspects of the single-bit QIS involve counting individual photoelectrons using tiny, spatially-oversampled binary photodetectors at high readout rates, representing this binary output as a bit cube (x,y,t) and finally processing the bit cubes to form high dynamic range images. ...
A QIS may contain over a billion specialized photodetectors, called jots, each producing just 1mV of signal, with a field readout rate 10-100 times faster than conventional CMOS image sensors.

So my question remains: given that we (at some point) can have 100 MP or 200 MP sensors in M43, APS-C or FF sizes where read noise is kept sufficiently low so as to offer a "balanced" design, do we really "need" to keep well capacity at current levels, or store ADC readouts at 14 bits, or might it be sensible to compromise on those two if that buys us higher sensel densities?
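
For what it's worth, a toy simulation of the single-bit "jot" idea from the abstract (all numbers are illustrative assumptions; Poisson photon arrivals with a one-photoelectron threshold):

    import numpy as np

    rng = np.random.default_rng(0)
    jots_per_pixel = 4096                     # binary jots pooled per output pixel (space x time)
    for mean_photons in (0.02, 0.2, 1.0):     # true per-jot exposure levels
        hits = rng.poisson(mean_photons, jots_per_pixel) > 0   # each jot: photon seen or not
        hit_rate = hits.mean()
        estimate = -np.log(1 - hit_rate)      # invert the binary response p = 1 - exp(-lambda)
        print(mean_photons, round(float(estimate), 3))

Pooling enough binary jots recovers smooth tonal values from 1-bit samples, which is essentially the inkjet-dithering analogy in sensor form.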

-h
Title: Re: A7rIII - 70-80 megapixels
Post by: Hywel on April 13, 2016, 09:36:02 am
I have to admit that it has been a few years since I had physics, but I do believe that the world of photons is properly described as "binary" in this context.

Human perception is less relevant: if the physical scene is "really" binary in nature, then our perception can work this way or the other but would still be inherently limited by the information present in the scene.

-h

You're technically correct ( https://www.youtube.com/watch?v=hou0lU8WMgo ): photons are absorbed and therefore detected discretely.

However, the flux of photons is huge. Daylight provides something of the order of 10^21 photons per square meter per second.

This can be made as near to continuous as makes no odds just by upping the integration time. It is very, very likely that you'll hit the limits of your sampling device's abilities before you hit the fundamental limitations of the light being composed of discrete quanta. So at least for daylight landscapes, it is pretty much as if you are sampling a continuous signal.

This doesn't apply at night, when the photon counts are much lower, and the discrete nature of the signal becomes much more apparent.

It's in this latter scenario where your "one photon per pixel" camera breaks down - it's likely to be overwhelmed by noise, because each sensel will have separate noise sources which are apt to make it register a photon hit when one has not in fact occurred. Many of these noise sources can be reduced (e.g. by cooling the sensor to reduce thermal noise sources) but can't be eliminated. The usual way of combating this is to up the integration time, allowing more signal to accumulate before reading out. This helps a lot with noise sources which don't scale per unit time (e.g. readout noise), but it also helps with noise sources which do accumulate with time, because you get a higher signal to differentiate from the noise and from the "shot noise" (the inherent variation from sampling small numbers of photons in a Poisson distribution).

In theory you can find the optimum readout time to maximise the signal-to-noise ratio for a given signal. In order to have the flexibility to do that for pixels in the shadows, you'll need to allow pixels in the highlights to accumulate much more signal and not clip. Or you could optimise for the signal-to-noise ratio on the highlight pixels, but then you'll very likely be obliterating any detail in the shadows with read noise and thermal noise when you could have done substantially better by integrating for longer.

The need to allow decent signal-to-noise in the shadows whilst preventing clipping in the highlights is exactly why camera sensors have big wells and low readout noise; the optimisation I referred to above has a well-known procedure for normal shooting conditions- expose to the right! That gathers maximum signal in the hottest pixels without clipping, and allows maximum signal to noise in the shadows with the maximum integration time.
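
A sketch of that optimisation (the photon rate and read noise values are assumptions for illustration; shot noise added in quadrature with a fixed read noise):

    from math import sqrt

    read_noise_e = 3.0                  # e- per readout (assumed)
    shadow_rate_e_per_s = 5.0           # shadow signal rate in e-/s (assumed)
    for t in (0.1, 1.0, 10.0):          # integration time in seconds
        signal = shadow_rate_e_per_s * t
        snr = signal / sqrt(signal + read_noise_e ** 2)
        print(t, round(snr, 2))         # longer integration lifts shadows out of the read noise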

I'm far from convinced that your super-segmented one-photon-per-pixel camera can do better. If it is one photon per pixel in the highlights, in the shadows it becomes one photon per hundred thousand pixels and there is NO WAY to distinguish the one electron which is signal from all the electrons caused by noise spread over all those channels.

If you can do it, you will definitely need to be sampling the sensor quickly, doing your integration offline by exposure stacking, and trying to build up a picture of the noise, including its temporal behaviour - as astrophotographers already do with exposure stacking for faint sources now, and as in the paper you quoted. You'll need to store all the data in time slices - this will make the data rate requirements of 4K video look like a walk in the park if you are aiming at one photon per sensel.

I can see that this could work, but it'll be extremely compute- and storage-intensive offline and very demanding on readout noise, dark current, thermal noise, etc. on the sensor. What I'm not so convinced of is that it will provide decisive advantages for general photographic use compared with just doing the integration physically with the shutter and having deep wells on the chip, as we do now.

It's an interesting idea, but I don't think we've got the computing power in our cameras yet to read out and store the information fast enough, or the offline computing power to do a reconstruction of an HDR image in a sensible time. But maybe it will come :)

Cheers, Hywel
Title: Re: A7rIII - 70-80 megapixels
Post by: dwswager on April 13, 2016, 10:40:06 am
I think we might be neglecting the particle/wave duality principle.  Just because we approximate it as a particle to understand it does not mean it acts just like a particle.  Einstein theorized that light was a particle, but the flow of light was a wave.  At the end of the day, the sensor cannot directly measure the flow as a wave, only discrete levels of particles.  Hence digital sensors approximate what we see with our eyes; they cannot duplicate it.  I'm fairly good with that, though!
Title: Re: A7rIII - 70-80 megapixels
Post by: shadowblade on April 13, 2016, 10:59:05 am
I think we might be neglecting the particle/wave duality principle.  Just because we approximate it as a particle to understand it does not mean it acts just like a particle.  Einstein theorized that light was a particle, but the flow of light was a wave.  At the end of the day, the sensor cannot directly measure the flow as a wave, only discrete levels of particles.  Hence digital sensors approximate what we see with our eyes; they cannot duplicate it.  I'm fairly good with that, though!

Neither can the eye. Like a digital sensor, the eye is composed of cells with red, blue and green pigments filtering light, which just count hits. Nothing to do with wave/particle duality (which applies to all objects, not just photons) - that only comes into effect when you're dealing with how refraction, diffraction, etc. work.

A digital sensor works in pretty much the same way as the eye. That's why it works.

Not sure what you mean about 'measuring the flow'. The amplitude? Photons don't have amplitude. The frequency? That's the inverse of the wavelength, i.e. whether it's red, green, blue or something else. Probably the best way to visualise photons, without delving into statistics and quantum mechanical equations, is as objects whose behaviour in large numbers can be approximated as classical waves, but whose behaviour as individuals is better approximated as individual particles within that wave.
Title: Re: A7rIII - 70-80 megapixels
Post by: hjulenissen on April 14, 2016, 06:14:03 am
You're technically correct ( https://www.youtube.com/watch?v=hou0lU8WMgo ): photons are absorbed and therefore detected discretely.

However, the flux of photons is huge. Daylight provides something of the order of 10^21 photons per square meter per second.
Sure. Practically, the problem must be really, really hard. I was using it more as a "pedagogic vehicle" in order to argue that the well capacity, the number of bits per sensel and the sensel density depend on each other, and have some counter-intuitive consequences for what most of us would describe as "dynamic range".

I am assuming that making binary silicon is (in itself) significantly simpler than making multi-level or "continuous" machines. If not, the photon counter could just as well relax the requirements and target sensels that accurately counted "a few" photons. I guess that Eric Fossum is the right person to consult in this regard.
Quote
This doesn't apply at night, when the photon counts are much lower, and the discrete nature of the signal becomes much more apparent.

It's in this latter scenario where your "one photon per pixel" camera breaks down - it's likely to be overwhelmed by noise, because each sensel will have separate noise sources which are apt to make it register a photon hit when one has not in fact occurred. Many of these noise sources can be reduced (e.g. by cooling the sensor to reduce thermal noise sources) but can't be eliminated.
So my knowledge about silicon ends with vague memories of lectures about P-doping and N-doping and idealized models of transistors. I don't know this stuff, and I have never practiced that part of my education.

My claim is that _if_ someone could make a single-photon counter with negligible self-noise and have this sufficiently spatio-temporally dense so as to make the probability of 2 photons hitting one sensel negligible, then said device would (in some ways) be tapping directly into the information of mother nature.

There is the added complexity that "color" is connected with the energy of each photon, making the recording no longer binary. But a brute-force (less satisfying) approach would be to apply a Bayer filter in front of a photon counter.
Quote
If you can do it, you will definitely need to be sampling the sensor quickly, doing your integration offline by exposure stacking, and trying to build up a picture of the noise, including its temporal behaviour. ...

I can see that this could work, but it'll be extremely compute- and storage-intensive offline and very demanding on readout noise, dark current, thermal noise, etc. on the sensor. What I'm not so convinced of is that it will provide decisive advantages for general photographic use compared with just doing the integration physically with the shutter and having deep wells on the chip, as we do now.

It's an interesting idea, but I don't think we've got the computing power in our cameras yet to read out and store the information fast enough, or the offline computing power to do a reconstruction of an HDR image in a sensible time. But maybe it will come :)

Cheers, Hywel
I am not sure that compute power is the issue. I would think that making the sensor and reading out the binary raw representation is the hard part. Making that into a jpeg is just a matter of how much PC time you pour into the problem.

As regular camera sensels are integrating photons within a (semi-) rectangular time-space volume, I would expect that the easiest way to make the binary photon counter file into a traditional image file would be something similar: 3-d convolution with a (semi-) rectangular kernel. That is not very compute intensive. Benefits would include the possibility of non-physically-realizable filter kernels (e.g. lanczos).

Doing "probabilistic" motion tracking of the (space,time) sampled binary image would be an interesting way to attempt "sharp" still-images.

In some ways, the (countable) photons are all the information there is at the sensor, but the thing that we are really trying to estimate is the reflectance or illuminance of some visual object(s). Thus, the way that the Poisson distribution "dithers" the scene illuminance/reflectance given low light is interesting, but I don't know how well it does this.

So, for any given ("gray") scene brightness, how many zero (no photon) sensels would there be per one (photon) sensel in order to make the probability of having >1 photon per sensel < some p? Could the challenge of adapting to bright vs dark scenes be solved by having a (highly) variable readout rate (instead of physically changing the size of sensels)?

-h