
Author Topic: A7rIII - 70-80 megapixels

dwswager

  • Sr. Member
  • Posts: 1375
Re: A7rIII - 70-80 megapixels
« Reply #40 on: April 12, 2016, 07:27:34 am »

So read noise is an issue, and it would have to be sorted. And a "dithered"/"noisy" 1-bit image is (essentially) what our inkjet does. And how mother nature generates a landscape scene in the first place.

If binary images cannot have smooth gradations, then how can I stand on a hill and see a landscape with smooth gradations?

-h

Is this rhetorical? Obviously, standing on a hill viewing a landscape is neither binary nor discrete. That is actually the hurdle for both film in its way and digital sensors in theirs: how to represent something of one type as something of another type.

shadowblade

  • Sr. Member
  • Posts: 2839
Re: A7rIII - 70-80 megapixels
« Reply #41 on: April 12, 2016, 08:13:47 am »

So read noise is an issue, and it would have to be sorted.

Read noise is already pretty close to zero. You'd get a bit more improvement by putting an A/D converter behind each pixel (rather than just at the end of a column), but the technology needed to do that (3D-printed circuitry) also lets you put huge capacitors behind each pixel.

Quote
And a "dithered"/"noisy" 1-bit image is (essentially) what our inkjet does.

But the inkjet has no 'write noise', plus it has light inks, plus it works on a subtractive rather than additive process.

Quote
And how mother nature generates a landscape scene in the first place.

If binary images cannot have smooth gradations, then how can I stand on a hill and see a landscape with smooth gradations?

-h

It's not. Rods and cones in the eye are stimulated by individual photons, yes. But what the brain interprets isn't a simple 'on' or 'off' - it's the rate of stimulation that determines how bright an object looks: not whether a cell is being stimulated, but how quickly it is being stimulated.

Is this rhetorical? Obviously, standing on a hill viewing a landscape is neither binary nor discrete. That is actually the hurdle for both film in its way and digital sensors in theirs: how to represent something of one type as something of another type.

Actually, digital sensors record an image in much the same way as the human eye. As in a digital sensor, colour is derived from cone cells with red, green and blue pigments in front of the photosensitive part (the L, M and S cones respectively) and interpolated by the brain. That is why dichromats and tetrachromats see colours differently from typical trichromats yet can still recognise 'blue' as 'blue', even though it looks different to them (the exceptions being certain colours that are indistinguishable to dichromats, and certain colour pairs that true tetrachromats can distinguish but that appear identical to trichromats).

Brightness is derived from the rate of stimulation of the cone and rod cells, which is directly dependent on the rate of photons hitting the cell. A digital sensor essentially counts photons, and more photons collected in the same space of time (the shutter speed) equates to a faster photon hit rate on a given photoreceptor.
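If it helps, here is a minimal sketch of that "rate of stimulation" idea (the rates and window below are invented for illustration): each individual hit is all-or-nothing, but Poisson counts integrated over a fixed window encode brightness.

Code:
import numpy as np

rng = np.random.default_rng(0)

rate_dim = 50.0      # mean photon hits/second on a "dim" receptor (assumed)
rate_bright = 500.0  # mean photon hits/second on a "bright" receptor (assumed)
window = 0.1         # integration window in seconds (~shutter speed, assumed)

# Each hit is binary, but the *count* per window encodes brightness.
dim = rng.poisson(rate_dim * window, size=10000)
bright = rng.poisson(rate_bright * window, size=10000)

print(f"dim receptor:    mean {dim.mean():.1f} hits per window")
print(f"bright receptor: mean {bright.mean():.1f} hits per window")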

hjulenissen

  • Sr. Member
  • Posts: 2051
Re: A7rIII - 70-80 megapixels
« Reply #42 on: April 13, 2016, 02:59:45 am »

Is this rhetorical? Obviously, standing on a hill viewing a landscape is neither binary nor discrete. That is actually the hurdle for both film in its way and digital sensors in theirs: how to represent something of one type as something of another type.
I have to admit that it has been a few years since I had physics, but I do believe that the world of photons is properly described as "binary" in this context.

Human perception is less relevant: if the physical scene is "really" binary in nature, then our perception can work this way or the other but would still be inherently limited by the information present in the scene.

-h

hjulenissen

  • Sr. Member
  • ****
  • Offline Offline
  • Posts: 2051
Re: A7rIII - 70-80 megapixels
« Reply #43 on: April 13, 2016, 03:06:12 am »

But the inkjet has no 'write noise', plus it has light inks, plus it works on a subtractive rather than additive process.
So the presence of light inks (as well as color) is obviously a modification to my simplified description, but this does not change the fact that a pretty good B/W image could be generated by a single (black) ink splattered onto white paper. I am not sure that subtractive vs additive is all that relevant, as it could (again, in principle) have splattered white ink onto black paper instead, or we could have some kind of display tech that offered small bright spots on a dark background.

Interestingly, this kind of printing would not work very well without noise (dithering).
Quote
It's not. Rods and cones in the eye are stimulated by individual photons, yes. But what the brain interprets
If we could record and recreate the discrete set of photons from a real scene, we could recreate the scene. Then human perception is irrelevant, as we would offer our senses the same stimuli.
Quote
But what the brain interprets isn't a simple 'on' or 'off' - it's the rate of stimulation that determines how bright an object looks.
And if the scene had 100 photons within a (small) spatio-temporal volume and the corresponding recreation had 100 photons within the corresponding spatio-temporal volume, our brain would (AFAIK) have no way of responding differently. By making this volume smaller and smaller, we would (eventually) reach a state where each volume realistically got either 1 or 0 photons.

Now, that kind of precision might be overkill for photography applications, and there may be subtle Heisenberg issues that I don't really comprehend, but I think my point stands: such a device (if it is ever possible) could potentially record every bit of information present in a given projection by a lens onto a (sensor) plane, using only (in the case of monochromatic light) a single bit per sensel. This image would have as much DR as the scene allows. Claims that one needs to store lots of charge per sensel to have lots of dynamic range are thus false, by my reckoning.
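A minimal sketch of that claim, assuming ideal noiseless binary sensels and monochromatic light: record a smooth brightness ramp with 1-bit sensels, then bin groups of them back into multi-level pixels.

Code:
import numpy as np

rng = np.random.default_rng(1)

oversample = 64                               # binary sensels per output pixel (assumed)
width = 256                                   # output pixels across the ramp
mean_photons = np.linspace(0.01, 0.9, width)  # scene brightness ramp, photons/sensel

# Single-bit readout: a sensel reports 1 if it caught at least one photon.
photons = rng.poisson(np.repeat(mean_photons, oversample))
bits = (photons >= 1).astype(np.uint8)

# Binning 64 binary sensels per pixel recovers a multi-level gradient.
pixels = bits.reshape(width, oversample).sum(axis=1)
print(pixels[:5], "...", pixels[-5:])  # ramps smoothly from ~1 to ~38

Note that the binary response compresses the signal as 1 - e^-λ rather than λ, but that curve is invertible, so the underlying brightness can still be recovered.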

Quote
You still need the well capacity if you want enough DR...

Eric Fossum has been working on such sensors, but I do not know how close his published work is to practical use:
http://ericfossum.com/Publications/Papers/2015%20CMOS%20April%20Saleh%20Binary%20Sensor%20Abstract.pdf

Quote
Quanta Image sensors (QIS) are proposed as a paradigm shift in image capture to take advantage of shrinking pixel sizes [1]. The key aspects of the single-bit QIS involve counting individual photoelectrons using tiny, spatially-oversampled binary photodetectors at high readout rates, representing this binary output as a bit cube (x,y,t) and finally processing the bit cubes to form high dynamic range images. ...
A QIS may contain over a billion specialized photodetectors, called jots, each producing just 1mV of signal, with a field readout rate 10-100 times faster than conventional CMOS image sensors.
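To put rough numbers on the bit-cube idea (the figures below are mine, not the paper's): the maximum count for one reconstructed pixel is the number of jots summed over space and time, which is what caps the single-shot dynamic range.

Code:
import math

jots_per_pixel = 16   # spatial oversampling factor (assumed)
frames = 256          # temporal bit-cube slices summed per image (assumed)

max_count = jots_per_pixel * frames  # largest photon count one pixel can represent
print(f"max count {max_count} -> ~{math.log2(max_count):.0f} stops above a single photon")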

So my question remains: given that we can (at some point) have 100 MP or 200 MP sensors in M43, APS-C or FF sizes where read noise is kept sufficiently low so as to offer a "balanced" design, do we really "need" to keep well capacity at current levels, or store ADC readouts at 14 bits, or might it be sensible to compromise on those two if that buys us higher sensel densities?

-h
« Last Edit: April 13, 2016, 03:26:26 am by hjulenissen »

Hywel

  • Sr. Member
  • Posts: 294
    • http://www.restrainedelegance.com
Re: A7rIII - 70-80 megapixels
« Reply #44 on: April 13, 2016, 09:36:02 am »

I have to admit that it has been a few years since I had physics, but I do believe that the world of photons is properly described as "binary" in this context.

Human perception is less relevant: if the physical scene is "really" binary in nature, then our perception can work this way or the other but would still be inherently limited by the information present in the scene.

-h

You're technically correct ( https://www.youtube.com/watch?v=hou0lU8WMgo ): photons are absorbed and therefore detected discretely.

However, the flux of photons is huge. Daylight provides something on the order of 10^21 photons per square meter per second.

This can be made as near to continuous as makes no odds just by upping the integration time. It is very, very likely that you'll hit the limits of your sampling device's abilities before you hit the fundamental limit imposed by light being composed of discrete quanta. So at least for a daylight landscape, it is pretty much as if you are sampling a continuous signal.
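Back-of-the-envelope, using the flux figure above (the pixel pitch and shutter speed are assumed):

Code:
flux = 1e21            # photons per m^2 per second in daylight (from above)
pixel_pitch = 5e-6     # a 5 micron sensel (assumed)
exposure = 1 / 100     # 1/100 s shutter (assumed)

photons = flux * pixel_pitch**2 * exposure
print(f"~{photons:.1e} photons per sensel per exposure")  # ~2.5e8, effectively continuous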

This doesn't apply at night, when the photon counts are much lower, and the discrete nature of the signal becomes much more apparent.

It's in this latter scenario where your "one photon per pixel" camera breaks down - it's likely to be overwhelmed by noise, because each sensel will have separate noise sources which are apt to make it register a photon hit when one has not in fact occurred. Many of these noise sources can be reduced (e.g. by cooling the sensor to reduce thermal noise) but can't be eliminated. The usual way of combating this is to up the integration time, allowing more signal to accumulate before reading out. This helps a lot with noise sources which don't scale per unit time (e.g. readout noise), but it also helps with noise sources which do accumulate with time, because you get a higher signal to differentiate from both the accumulated noise and the "shot noise" (the inherent variation from sampling small numbers of photons, which follow a Poisson distribution).

In theory you can find the optimum readout time to maximise the signal-to-noise ratio for a given signal. In order to have the flexibility to do that for pixels in the shadows, you need to allow pixels in the highlights to accumulate much more signal without clipping. Or you could optimise for the signal-to-noise ratio of the highlight pixels, but then you would very likely obliterate any detail in the shadows with read noise and thermal noise, when you could have done substantially better by integrating for longer.

The need to allow decent signal-to-noise in the shadows whilst preventing clipping in the highlights is exactly why camera sensors have big wells and low readout noise; the optimisation I referred to above has a well-known procedure for normal shooting conditions: expose to the right! That gathers maximum signal in the hottest pixels without clipping, and allows maximum signal-to-noise in the shadows with the maximum integration time.
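A toy version of that optimisation, with every sensor number invented for illustration: pick the longest integration time that keeps the highlights just below full well, then see what shadow SNR it buys.

Code:
import math

full_well = 60000.0    # electrons (assumed)
read_noise = 3.0       # electrons RMS per readout (assumed)
dark_rate = 1.0        # thermal electrons per second (assumed)
highlight_rate = 5e5   # photoelectrons per second in the highlights (assumed)
shadow_rate = 50.0     # photoelectrons per second in the shadows (assumed)

# Expose to the right: the longest time before the highlights clip.
t = full_well / highlight_rate

signal = shadow_rate * t
noise = math.sqrt(signal + dark_rate * t + read_noise**2)  # shot + dark + read
print(f"t = {t*1000:.0f} ms, shadow SNR = {signal / noise:.2f}")

Halving the well capacity in this toy model halves t and costs the shadows the better part of a stop of SNR, which is exactly the trade-off being debated.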

I'm far from convinced that your super-segmented one-photon-per-pixel camera can do better. If it is one photon per pixel in the highlights, in the shadows it becomes one photon per hundred thousand pixels, and there is NO WAY to distinguish the one electron which is signal from all the electrons caused by noise spread over all those channels.

If you can do it, you will definitely need to sample the sensor quickly and do your integration offline by exposure stacking, trying to build up a picture of the noise, including its temporal behaviour - as astrophotographers already do with exposure stacking for faint sources now, and as in the paper you quoted. You'll need to store all the data in time slices: this will make the data-rate requirements of 4K video look like a walk in the park if you are aiming at one photon per sensel.

I can see that this could work, but it'll be extremely compute- and storage-intensive offline, and very demanding on readout noise, dark current, thermal noise, etc. on the sensor. What I'm not so convinced of is that it will provide decisive advantages for general photographic use, compared with just doing the integration physically with the shutter and having deep wells on the chip, as we do now.

It's an interesting idea, but I don't think we've got the computing power in our cameras yet to read out and store the information fast enough, or the offline computing power to reconstruct an HDR image in sensible time. But maybe it will come :)

Cheers, Hywel
« Last Edit: April 13, 2016, 09:51:06 am by Hywel »

dwswager

  • Sr. Member
  • Posts: 1375
Re: A7rIII - 70-80 megapixels
« Reply #45 on: April 13, 2016, 10:40:06 am »

I think we might be neglecting the particle/wave duality principle. Just because we approximate light as a particle to understand it does not mean it acts just like a particle. Einstein theorized that light was a particle, but that the flow of light was a wave. At the end of the day, the sensor cannot directly measure the flow as a wave, only discrete levels of particles. Hence digital sensors approximate what we see with our eyes; they cannot duplicate it. I'm fairly good with that, though!

shadowblade

  • Sr. Member
  • Posts: 2839
Re: A7rIII - 70-80 megapixels
« Reply #46 on: April 13, 2016, 10:59:05 am »

I think we might be neglecting the particle/wave duality principle. Just because we approximate light as a particle to understand it does not mean it acts just like a particle. Einstein theorized that light was a particle, but that the flow of light was a wave. At the end of the day, the sensor cannot directly measure the flow as a wave, only discrete levels of particles. Hence digital sensors approximate what we see with our eyes; they cannot duplicate it. I'm fairly good with that, though!

Neither can the eye. Like a digital sensor, the eye is composed of cells with red, blue and green pigments filtering light, and those cells just count hits. Nothing to do with wave/particle duality (which applies to all objects, not just photons) - that only comes into play when you're dealing with how refraction, diffraction, etc. work.

A digital sensor works in pretty much the same way as the eye. That's why it works.

Not sure what you mean by 'measuring the flow'. The amplitude? Photons don't have amplitude. The frequency? That's the inverse of the wavelength, i.e. whether it's red, green, blue or something else. Probably the best way to visualise photons, without delving into statistics and quantum-mechanical equations, is as objects whose behaviour in large numbers can be approximated as classical waves, but whose behaviour as individuals is better approximated as individual particles within that wave.

hjulenissen

  • Sr. Member
  • Posts: 2051
Re: A7rIII - 70-80 megapixels
« Reply #47 on: April 14, 2016, 06:14:03 am »

You're technically correct ( https://www.youtube.com/watch?v=hou0lU8WMgo ): photons are absorbed and therefore detected discretely.

However, the flux of photons is huge. Daylight provides something of order of 10^21 photons per square meter per second.
Sure. Practically, the problem must be really, really hard. I was using it more as a "pedagogic vehicle" to argue that well capacity, the number of bits per sensel and the sensel density depend on each other, and have some counter-intuitive consequences for what most of us would describe as "dynamic range".

I am assuming that making binary silicon is (in itself) significantly simpler than making multi-level or "continuous" machines. If not, the photon counter could just as well relax the requirements and target sensels that accurately count "a few" photons. I guess Eric Fossum is the right person to consult in this regard.
Quote
This doesn't apply at night, when the photon counts are much lower, and the discrete nature of the signal becomes much more apparent.

It's in this latter scenario where your "one photon per pixel" camera breaks down- it's likely to be overwhelmed by noise, because each sensel will have separate noise sources which are apt to make it register a photon hit when one has not in fact occurred. Many of these noise sources can be reduced (eg by cooling the sensor to reduce thermal noise sources) but can't be eliminated.
So my knowledge about silicon ends with vague memories of lectures about P-doping and N-doping and idealized models of transistors. I don't know this stuff, and I have never practiced that part of my education.

My claim is that _if_ someone could make a single-photon counter with negligible self-noise, and make it sufficiently dense in space and time that the probability of two photons hitting one sensel is negligible, then said device would (in some ways) be tapping directly into the information of mother nature.

There is the added complexity that "color" is connected with the energy of each photon, making the recording no longer binary. But a brute-force (less satisfying) approach would be to apply a Bayer filter in front of the photon counter.
Quote
If you can do it, you will definitely need to be sampling the sensor quickly and doing your integration offline by exposure stacking and try to build up a picture of the noise including the temporal behaviour. ...

I can see that this could work, but it'll be extremely compute and storage intensive offline and very demanding on sensor readout noise, dark current, thermal noise, etc. on the sensor. What I'm not so convinced is that it will provide decisive advantages for general photographic use compared with just doing the integration physically with the shutter and having deep wells on the chip, as we do now.

It's an interesting ideas, but I don't think we've got the computing power in our cameras yet to read out and store the information fast enough, or the offline computing power to do an offline reconstruction of an HDR image in sensible time. But maybe it will come :)

Cheers, Hywel
I am not sure that compute power is the issue. I would think that making the sensor and reading out the binary raw representation is the hard part. Making that into a JPEG is just a matter of how much PC time you pour into the problem.

As regular camera sensels integrate photons within a (semi-)rectangular time-space volume, I would expect that the easiest way to turn the binary photon-counter file into a traditional image file would be something similar: 3D convolution with a (semi-)rectangular kernel. That is not very compute-intensive. Benefits would include the possibility of non-physically-realizable filter kernels (e.g. Lanczos).
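A minimal sketch of that reconstruction (all sizes assumed; SciPy's uniform_filter is a separable box filter standing in for the rectangular kernel):

Code:
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)

# Fake jot data: a (t, y, x) bit cube with ~5% of sensels registering a photon.
bit_cube = (rng.random((64, 128, 128)) < 0.05).astype(np.float32)

# (Semi-)rectangular kernel: average over 64 time slices and 4x4 sensels.
kernel = (64, 4, 4)
smoothed = uniform_filter(bit_cube, size=kernel, mode="nearest")

# Middle time slice, subsampled to the output pixel grid, scaled to counts.
frame = smoothed[32, ::4, ::4] * np.prod(kernel)
print(frame.shape, frame.mean())  # (32, 32), ~51 photons per output pixel

Swapping the box for a Lanczos or Gaussian kernel is just a different convolution, which is the non-physically-realizable freedom mentioned above.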

Doing "probabilistic" motion tracking of the (space,time) sampled binary image would be an interesting way to attempt "sharp" still-images.

In some ways, the (countable) photons are all the information there is at the sensor, but the thing we are really trying to estimate is the reflectance or illuminance of some visual object(s). Thus, the way the Poisson distribution "dithers" the scene illuminance/reflectance in low light is interesting, but I don't know how well it does this.

So, for any given ("gray") scene brightness, how many zero (no-photon) sensels would there be per one (photon) sensel, in order to make the probability of having >1 photon per sensel less than some p? Could the challenge of adapting to bright vs dark scenes be solved by having a (highly) variable readout rate (instead of physically changing the size of the sensels)?
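The first question has a closed form to lean on: with a mean of λ photons per sensel, P(0) = e^-λ and P(1) = λe^-λ, so P(>1) = 1 - e^-λ(1 + λ), and the zeros-to-ones ratio is simply 1/λ. A small sketch solving for the λ that keeps P(>1) below a few values of p:

Code:
import math
from scipy.optimize import brentq

def p_multi(lam):
    """Probability that a sensel receives two or more photons."""
    return 1.0 - math.exp(-lam) * (1.0 + lam)

for p in (1e-2, 1e-3, 1e-4):
    lam = brentq(lambda x: p_multi(x) - p, 1e-9, 10.0)
    print(f"p = {p:g}: mean = {lam:.4f} photons/sensel, ~{1/lam:.0f} zeros per 'one'")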

-h
« Last Edit: April 14, 2016, 06:34:10 am by hjulenissen »