
Author Topic: larger sensors  (Read 191406 times)

John Sheehy

larger sensors
« Reply #180 on: January 23, 2007, 11:50:17 am »

Quote
That seems to be the case, at least for now. Suggesting perhaps that one rule for choice of exposure index (ISO speed) is rather film-like: use a high enough EI to get the levels of the shadow regions up to where you want them (within the constraints of highlight head-room), to protect the signal from read noise introduced after pre-amplification, or part way through pre-amplification. And if all else fails, bin pixels!

If you can bin as the Dalsa does in theory, with a real reduction in read noise beyond what software binning can do, you gain something, but you lose a lot, too, in a single image, in terms of resolution.  I once believed the mantra that less noise per pixel was good as an end in itself, but every simulation or experiment I try suggests that low resolution is worse than noise, and an exaggerator of existing noise.

The Dalsa approach might work very well if you take two images; one at full resolution, and one at 1/4MP resolution - and use a luminance mask from the low-res image to blend the images together.  Or, it could be used with multiple exposures in low-res mode with slight registration differences between exposures, and stacked with sub-pixel alignment (a good idea for a low-res, aliasing camera like the Sigma SD9, as well).
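For concreteness, here is a minimal numpy sketch of the read-noise difference (the 100-photon mean signal, 5 e- read noise and 2x2 binning are illustrative assumptions, not Dalsa's specs). Software binning reads each pixel before summing; idealized on-chip binning sums charge first and reads once:

import numpy as np

rng = np.random.default_rng(0)
n, signal, read_noise = 100_000, 100.0, 5.0

# Shot noise is common to both paths.
photons = rng.poisson(signal, (n, 4)).astype(float)

# Software binning: four reads, so four read-noise samples per sum.
soft = (photons + rng.normal(0.0, read_noise, (n, 4))).sum(axis=1)

# Idealized on-chip binning: charge summed first, read once.
hard = photons.sum(axis=1) + rng.normal(0.0, read_noise, n)

print("software-binned SNR:", soft.mean() / soft.std())
print("on-chip-binned SNR:", hard.mean() / hard.std())

The on-chip sum comes out ahead simply because it pays the read-noise penalty once rather than four times.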

BJL

larger sensors
« Reply #181 on: January 23, 2007, 03:17:36 pm »

Quote
If you can bin as the Dalsa does in theory ...
... or as Kodak apparently does in practice in the new KAI-10100, with the option of 2:1 binning to non-square pixels for milder loss of resolution. See the PowerPoint on SBIG's forthcoming astronomy cameras: http://www.sbig.com/aic2006/AIC2006.PPT

Quote
I once believed the mantra that less noise per pixel was good as an end in itself, but every simulation or experiment I try suggests that low resolution is worse than noise, and an exaggerator of existing noise.
I am inclined to agree; once you account for the practical effects of getting the same print size at higher PPI from the higher pixel-count image (and/or the latitude for more NR processing), fewer, bigger pixels might lose a lot of their IQ appeal. Maybe all I want is good dynamic range at low ISO, and suitable noise processing at higher ISO. (Maybe binning is more relevant to technical work with extremes of dynamic range, like astronomy.)

Quote
The Dalsa approach might work very well if you take two images; one at full resolution, and one at 1/4MP resolution ...
But maybe if you have the time for two exposures, the second exposure can be long enough to have no need for binning or other resolution reduction.

Ray

larger sensors
« Reply #182 on: January 23, 2007, 11:58:42 pm »

I notice at 'dpreview news' that Sharp have announced a 1/2.5" sensor with a pixel pitch of just 1.75 microns.

Quote
Sharp Japan has announced a new 1/2.5" CCD, now packing a frankly shocking eight million pixels into an area measuring just 5.8 x 4.3 mm. It has a pixel pitch of just 1.75 µm, which Sharp are proud to announce is the smallest in its class. Is this a good thing? I think probably not, as we will see manufacturers cramming this sensor into existing designs with average-quality lenses and then claiming to deliver high sensitivities such as ISO 1600. Progress marches on; at least the marketing department will be happy. (21:15 GMT)

Now, to get back to my concept of the true digital sensor, a 7 micron Foveon type pixel could consist of 16 Foveon sub-pixels, or 48 separate photon detectors over three layers.

The number of possible combinations (or colors) would then be 8 to the power of 16 which, according to my maths, represents a theoretically possible 2.8 thousand trillion colors. (Perhaps I'm out by a factor of 10, but never mind. We have room to move here.)

John Sheehy

larger sensors
« Reply #183 on: January 25, 2007, 08:54:02 pm »

Quote
I notice at 'dpreview news' that Sharp have announced a 1/2.5" sensor with a pixel pitch of just 1.75 microns.
Now, to get back to my concept of the true digital sensor, a 7 micron Foveon type pixel could consist of 16 Foveon sub-pixels, or 48 separate photon detectors over three layers.

That's 48 photons maximum for a 7 micron pixel pitch.  That's way too few photons captured.  That will mean lots of shot noise.  That is true even if you make each photobit trigger only after X photons have struck it.
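The scale of the problem is easy to put numbers on: best-case SNR from shot noise alone goes as the square root of the maximum count (the 40,000 e- well below is an assumed round figure for a conventional large pixel, not any particular camera's spec):

from math import sqrt

for n_max in (48, 40_000):
    print(f"max signal {n_max:>6}: best-case shot-noise SNR ~ {sqrt(n_max):5.1f}")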

Quote
The number of possible combinations (or colors) would then be 8 to the power of 16 which, according to my maths, represents a theoretically possible 2.8 thousand trillion colors. (Perhaps I'm out by a factor of 10, but never mind. We have room to move, here.   ).

The number you have calculated, 281,474,976,710,656, is the number of possible unique combinations within each superpixel, noting the status of each individual photobit.  You will not have any use for this information when you make your superpixel from the 48 bits.  All that matters is how many red, how many green, and how many blue photobits are triggered in each superpixel.  These values are 0 to 15.  There are only 16^3 = 4096 colors possible for each 7u "superpixel".  These superpixels are far too large to have this little bit depth!  Your superpixels (Canon 10D size) are only 4 bits per color channel!
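A quick sampling script shows the collapse (a sketch; it assumes the 48 photobits are laid out as 16 R, G, B triples, and counts hits from 0 to 16, i.e. 17 per-channel levels):

import random

random.seed(1)

def hit_counts(bits48):
    # bits48 laid out as 16 (R, G, B) triples; count hits per channel
    return (sum(bits48[0::3]), sum(bits48[1::3]), sum(bits48[2::3]))

patterns = {tuple(random.getrandbits(1) for _ in range(48)) for _ in range(100_000)}
colors = {hit_counts(p) for p in patterns}

print(len(patterns), "distinct photobit patterns sampled")
print(len(colors), "distinct hit-count colors; the hard ceiling is 17**3 =", 17 ** 3)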

Finally, I understand what you are saying (your previous attempts didn't register with me), and finally, as you had hoped, your idea is shot down!  

Ray

larger sensors
« Reply #184 on: January 25, 2007, 10:40:52 pm »

Quote
That's 48 photons maximum for a 7 micron pixel pitch.  That's way too few photons captured.  That will mean lots of shot noise.

Thanks for exercising your mind on this, John. But I don't see this at all as being 48 photons. It's 48 distinct noise-free values grouped into parcels of 3, each parcel having a possible 8 different combinations of red, blue and green. The fact that a red value in one parcel is the same as a red value in another parcel, or that a green value in one parcel is the same as a green value in all other parcels, should not alter the fact that each parcel can have 8 different values. 16 parcels, each having a theoretical 8 different values, amount to a possible 2.8 thousand trillion colors for the final 7 micron pixel.

In practice, of course, you would not get anywhere near that number, even assuming we had the processing power. When we produce a high resolution image in 8 bit with a theoretical 16.7 million colors, there's almost always nowhere near that number of actual distinct colors in the image. It's probably more like 50,000, perhaps 100,000 at the most.
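For anyone who wants to test that estimate on their own files, counting the distinct colors in an 8-bit image is short work (assumes numpy and Pillow; "photo.jpg" stands in for any image):

import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("RGB"))
print(len(np.unique(img.reshape(-1, 3), axis=0)), "distinct colors out of a possible 16,777,216")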
« Last Edit: January 25, 2007, 10:44:17 pm by Ray »

John Sheehy

larger sensors
« Reply #185 on: January 25, 2007, 11:03:15 pm »

Quote
Thanks for exercising your mind on this, John. But I don't see this at all as being 48 photons.

It really doesn't matter if it's 48 photons, or 48 potential thresholds requiring any number of photons.

Quote
It's 48 distinct noise-free values grouped into parcels of 3, each parcel having a possible 8 different combinations of red, blue and green. The fact that a red value in one parcel is the same as a red value in another parcel, or that a green value in one parcel is the same as a green value in all other parcels, should not alter the fact that each parcel can have 8 different values. 16 parcels, each having a theoretical 8 different values, amount to a possible 2.8 thousand trillion colors for the final 7 micron pixel.

No, they don't.  There are that many possible smaller images within the superpixel.  The number of RGB intensities as a 7u unit is extremely finite; there are only 4096 of them.  You're counting the number of possible images, not the total number of possible colors.

And your "noise-free" thing is nothing more than fantasy.  You sound like Yogi Bear, looking for a free lunch.

You need lots of bit depth and/or lots of pixels to get high color fidelity, not a box drawn around every group of 16 extremely inefficient pixels.

Ray

larger sensors
« Reply #186 on: January 25, 2007, 11:56:45 pm »

Quote
The number of RGB intensities as a 7u unit is extremely finite; there are only 4096 of them.

That's within a 12 bit system, is it? With the future computing power that I'm talking about, we'll be way ahead of a mere 12 bits.

Quote
It really doesn't matter if it's 48 photons, or 48 potential thresholds requiring any number of photons.

It matters greatly. It's more of a threshold that's flexible between the noise floor of the system and a few photons above that noise floor, not any number of photons above that threshold. No 35mm lens can deliver 70% MTF at a 1.75 micron spacing. The difference between a sub-pixel element that's switched on with 500 photons as opposed to another pixel that's switched on with an 800-photon signal represents the 'inaccuracy' of the system. I'm not trying to describe absolute perfection here.

In my system, we're gathering 'grass roots' data from the lens, maximising the potential of the lens with perhaps a bit of help from DXO-type algorithms.

Quote
You need lots of bit depth and/or lots of pixels to get high image color, not drawing a box around every group of 16 extremely inefficient pixels.

First, such pixels are not inefficient. All they require is any signal above the noise floor for 100% efficiency (within the resolution limits of the system).

The number of (say 7 micron) pixels depends on the size of the sensor and the processing power to handle that number. 2.8 thousand trillion colors is probably serious overkill. You can change the pixel sizes and sub-pixel sizes to suit.
« Last Edit: January 26, 2007, 12:11:48 am by Ray »

John Sheehy

larger sensors
« Reply #187 on: January 26, 2007, 06:46:26 pm »

Quote
That's within a 12 bit system, is it? With the future computing power that I'm talking about, we'll be way ahead of a mere 12 bits.

Your system IS a 12-bit system (4 bits per channel) for each 7u super-pixel.  The superpixel can only recognize 16 levels for each color channel.

Quote
It matters greatly. It's more of a threshold that's flexible between the noise floor of the system and a few photons above that noise floor, not any number of photons above that threshold.

Thresholds above the noise floor do not eliminate read noise at all.  Read noise is there in every signal.  If your signal were somehow magically uniform, and high, the pattern of noise in the thresholding would be determined by the blackframe noise upon which the signal is added, if the signal level is at the threshold.  If the signal is a little below the threshold, you will have black for all sub-pixels, and black for the superpixel.  If the signal is above the threshold, then you will have all white.  Not a very useful system.  The only way a single threshold can be of much use is if the pixels are much smaller than your model, and can only detect one photon, and a good, strong exposure would not have any areas where all pixels were triggered by a photon.  All ones for an area, no matter how small, is indistinguishable from clipping.
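A small simulation of that thresholding behaviour (the 500-photon threshold and 15 e- read noise are illustrative assumptions) shows the snap between all-black and all-white, with noise deciding the outcome only in a narrow band around the threshold:

import numpy as np

rng = np.random.default_rng(0)
threshold, read_noise, n = 500.0, 15.0, 10_000

for mean_signal in (400, 480, 500, 520, 600):
    electrons = rng.poisson(mean_signal, n) + rng.normal(0.0, read_noise, n)
    print(f"uniform signal of {mean_signal} photons: {np.mean(electrons > threshold):6.1%} of photobits fire")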

Quote
No 35mm lens can deliver 70% MTF at a 1.75 micron spacing.  The difference between a sub-pixel element that's switched on with 500 photons as opposed to another pixel that's switched on with an 800-photon signal represents the 'inaccuracy' of the system.  I'm not trying to describe absolute perfection here.

I have no idea what you mean by that.  You need to be a little bit more specific in the details.

Quote
In my system, we're gathering 'grass roots' data from the lens. Maximising the potential of the lens with perhaps a bit of help from DXO type algorithms.
First, such pixels are not inefficient. All they require is any signal above the noise floor for 100% efficiency (within the resolution limits of the system).

That sounds like poetry.  It doesn't mean anything real and tangible to me.  Your idea reminds me of people who dream that they are making beautiful original music, and wake up thinking that they might be a latent musician.  When asked to reproduce the music, they can't remember it, just that it was beautiful.  The case may really be that they dreamt of the "feeling" of beautiful original music, but there was actually no real beautiful original music in the dream.  Same with your dreams of avoiding noise, and getting good IQ with your system.  Your system, if realized in the form of 16 foveon-like 1-bit subpixels comprising each 7u superpixel, will result in horrible posterization.  You need a lot more than 16 levels per color channel for pixels that big, unless your sensor is huge and/or there is a lot of noise.

Quote
The number of (say 7 micron) pixels depends on the size of the sensor and the processing power to handle that number. 2.8 thousand trillion colors is probably serious overkill. You can change the pixel sizes and sub-pixel sizes to suit.

There aren't that many colors in your system.  Your arithmetic is correct, but your application is wrong.  That is the number of possible *subimages* within each super-pixel, which you are clearly reducing to the sum of the on-pixels in the super-pixels.

For any given color channel, each of the following subimages results in the same super-pixel value for that color:

0000
0000
0000
0001

0000
0000
0000
0010

...

0100
0000
0000
0000

1000
0000
0000
0000

The only possibilities for each superpixel in a given color channel are 0 ones, 1 one, 2 ones, 3 ones, and so on, up to 15 ones.
« Last Edit: January 26, 2007, 07:00:15 pm by John Sheehy »

Ray

larger sensors
« Reply #188 on: January 26, 2007, 08:54:26 pm »

Quote
Your system IS a 12-bit system (4 bits per channel) for each 7u super-pixel.  The superpixel can only recognize 16 levels for each color channel.

Ah! I see now you have misunderstood my concept. You are treating the superpixel as though it's a separate physical entity receiving signals from the 16 sub-pixels. The 7 micron pixel is in fact just a collection of 16 sub-pixels, each with its own values. These values (all 48 of them, since they are Foveon type sub-pixels) are read by (or passed on to) the in-camera computer and the information is 'summed, averaged, analyzed, etc.' within, say, a 64-bit system in order to assign a value to each 'virtual' superpixel, which is the pixel you will see on your monitor.

In the particular example of 16 sub-pixels of 3 layers, there are a possible 8 to the power of 16 values (at least you agree with the maths). If we imagined the highly implausible situation where there are as many 7 micron 'virtual' pixels as there are combinations of these 48 sub-pixel elements, then each image theoretically could contain 2.8 thousand trillion different colors, if the computing power was there. The numbers are unrealistically large of course, but it's the principle I'm trying to get across.

My first example used a group of nine 2 micron sub-pixels giving a more realistic possible 134 million values which could theoretically be 'assigned' to each of the, say, 134m 6 micron 'virtual' superpixels on a full frame MF sensor.

Quote
If the signal is a little below the threshold, you will have black for all sub-pixels, and black for the superpixel.

You would only have black for all sub-pixels if the entire image were black. But I guess you mean, if a small group of adjacent sub-pixels did not receive a sufficiently strong signal to turn on any of the colors, then the rendering would be black. Yes, of course it would. What's wrong with that? Black is a necessary component of photographs. If there's a black speck on a white background, then it has to be black. What else should it be?

Quote
If the signal is above the threshold, then you will have all white.  Not a very useful system.

Simply not true. If I point my all-digital camera at a red flower petal, then most of the red sub-pixel elements will be switched on but relatively few of the green and blue, because the blue and green signals are relatively weak. Many such blue and green signals will be below the noise threshold.

There's probably not much point in my continuing to address each of your objections stated in your previous post because they are really based on a misunderstanding of the concept. Hope I've cleared this up.

Ray

larger sensors
« Reply #189 on: January 26, 2007, 10:03:10 pm »

Quote
No 35mm lens can deliver 70% MTF at a 1.75 micron spacing. The difference between a sub-pixel element that's switched on with 500 photons as opposed to another pixel that's switched on with an 800-photon signal represents the 'inaccuracy' of the system. I'm not trying to describe absolute perfection here.

Quote
I have no idea what you mean by that. You need to be a little bit more specific in the details.

Okay! I'll be more specific. The fatal flaw in my concept might at first seem to be (and probably is) that there will be insufficient differentiation between a strong signal, a moderately strong signal and a weak signal. My point is, photon detectors only 1.75 microns in diameter would not receive strong signals from a 35mm lens. If one imagines a FF 35mm sensor filled with 1.75 micron photosites (just a single layer), there would be about 280m of them. The resolution required from the lens would be around 285 lp/mm.

Using Rayleigh's derived laws for diffraction limitation in respect of green light, a lens diffraction-limited at f8, for example, will produce a resolution of 200 lp/mm at just 9% MTF.

That's a very weak signal. My concept is that the system would be 'tuned' so that such weak signals would generally be above the threshold for given lighting conditions; an ISO setting, if you like. Those that aren't above the threshold are rendered as black; those that are above the threshold are either red, blue or green, or all three for white. Nothing wrong with white, is there?

At this resolution, the variation of signal strength would probably be less than the + or - 150 photons I've suggested. Of course, if you could have a system that could respond to + or - a single photon, then that would be the ultimate.
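For reference, the diffraction-limited MTF of an ideal lens with a circular aperture can be computed directly; assuming green light of about 0.5 µm, the result at f/8 and 200 lp/mm lands near the 9% figure quoted above:

from math import acos, pi, sqrt

def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=0.0005):
    cutoff = 1.0 / (wavelength_mm * f_number)  # 250 lp/mm at f/8 and 0.5 um
    x = freq_lpmm / cutoff
    if x >= 1.0:
        return 0.0
    return (2.0 / pi) * (acos(x) - x * sqrt(1.0 - x * x))

print(f"MTF at 200 lp/mm, f/8: {diffraction_mtf(200.0, 8.0):.1%}")  # roughly 10%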

John Sheehy

larger sensors
« Reply #190 on: January 26, 2007, 11:35:47 pm »

Quote
Ah! I see now you have misunderstood my concept. You are treating the superpixel as though it's a separate physical entity receiving signals from the 16 sub-pixels.

I see the superpixel as the binning of the subpixels.  Otherwise, it would not make any sense to even refer to the superpixel, if the final data has unique information about each subpixel.

Quote
The 7 micron pixel is in fact just a collection of 16 sub-pixels, each with its own values. These values (all 48 of them, since they are Foveon type sub-pixels) are read by (or passed on to) the in-camera computer and the information is 'summed, averaged, analyzed, etc.' within, say, a 64-bit system in order to assign a value to each 'virtual' superpixel, which is the pixel you will see on your monitor.

That's exactly what I thought you meant.  I really don't see how you think that there is anything worthy of 64-bit data here.  If the superpixel contains a single value for each of red, green, and blue, then it only has 16 possible levels for each.  I don't see where you're getting these delusions of grand detail from.  There are 16 red possibilities times 16 green possibilities times 16 blue possibilities; only 4096 color possibilities for each large 7u superpixel; a recipe for posterization.

Quote
In the particular example of 16 sub-pixels of 3 layers, there are a possible 8 to the power of 16 values (at least you agree with the maths).

I only agree as to what the result of 8^16 is; it applies here only to the number of possible unique states of the 4x4 arrays of subpixels, *BEFORE* they are turned into superpixels.  These unique states are *NOT* unique superpixel states.

Quote
If we imagined the highly implausible situation where there are as many 7 micron 'virtual' pixels as there are combinations of these 48 sub-pixel elements, then each image theoretically could contain 2.8 thousand trillion different colors, if the computing power was there. The numbers are unrealistically large of course, but it's the principle I'm trying to get across.

Any discussion of whether or not you'll ever use all the colors available is ridiculous, IMO, and has absolutely nothing to do with the reasons for using higher bit depths.  Higher bit depths are for increased accuracy, and nothing else.  In any event, your superpixels only come in 4096 varieties; not 2.81x10^14.

Quote
My first example used a group of nine 2 micron sub-pixels giving a more realistic possible 134 million values

Try 729 possible values.

Quote
which could theoretically be 'assigned' to each of the, say 134m 6 micron 'virtual' superpixels on a full frame MF sensor.
You would only have black for all sub-pixels if the entire image were black. But I gues you mean, if a small group of adjacent sub-pixels did not receive a sufficiently strong signal to turn on any of the colors, then the rendering would be black. Yes, of course it would. What's wrong with that? Black is a necessary component of photographs. If there's a black speck on a white background, then it has to be black. What else should it be?

The speck is most likely dark grey, not black.

Quote
Simply not true. If I point my all-digital camera at a red flower petal, then most of the red sub-pixel elements will be switched on

Then there won't be any highlight detail in the red channel.

Quote
but relatively few of the green and blue, because the blue and green signals are relatively weak. Many such blue and green signals will be below the noise threshold.

That does not sound like anything that is going to give accurate color or even luminosity.

Quote
There's probably not much point in my continuing to address each of your objections stated in your previous post because they are really based on a misunderstanding of the concept. Hope I've cleared this up.

No, you haven't changed my perception of your idea at all.  You have repeated the same technological and mathematical fantasies as before, AFAICT.  You are trying to discard every basic principle of maintaining image quality to save your idea.

Ray

larger sensors
« Reply #191 on: January 27, 2007, 06:29:25 am »

Quote
I see the superpixel as the binning of the subpixels.  Otherwise, it would not make any sense to even refer to the superpixel, if the final data has unique information about each subpixel.

John,
This is where you have misunderstood the concept. You are still thinking analog. The discrete values of the analog pixels that are binned are not combined in different variations. They are simply voltages that are added to provide one larger voltage or value.

Quote
If the superpixel contains a single value for each of red, green, and blue, then it only has 16 possible levels for each.

Wrong. The super pixel doesn't contain any values because it's not analog. It's a virtual pixel that is 'assigned' a value from the many combinations of the 48 (on/off) diodes that comprise it.

Your notion that there are only 16 possible values of red because there are only 16 red diodes within the superpixel is again analog think.

It'll probably get a bit tedious, but I'll try to go through some of the possible values for just one red sub-pixel. We'll name the pixels 1 to 16 and call the individual elements (48 of them) diodes.

These are some of the possible different values that a single red pixel, call it R1, can have: a red pixel consists of a red diode turned on and the blue and green diodes (belonging to that pixel) turned off.

(1) R1 + 15 pixels off (black). This would be the darkest red possible. You wouldn't really be able to distinguish it from black. The value is one red diode turned on and all other 47 diodes turned off.

(2) R1 + 14 pixels off (black) + 1 pixel on (white) just a shade lighter.

(3) R1 + 13 pixels off (black) + 2 pixels on (white) another shade lighter.

(16) Skipping a few, R1 + 15 pixels on (white).

There we already have 16 different values for just one red sub-pixel.

We can repeat the same process for two red pixels.

(17) (R1 + R2) + 14 pixels off (black)

(18) (R1 + R2) + 12 pixels off + 2 pixels on

and repeat the process for 3 red pixels, and 4 red pixels and so on.

(19) (R1 + R2 + R3) + 13 pixels off..... (plus 12 pixels off and one on, plus 11 pixels off and 2 on, etc etc etc.

The lightest shade of red possible in such a system would be one red sub-pixel on  (red diode on, blue and green off) plus all other pixels on (white). If we were looking at sub-pixels on the monitor at great magnification, this would look like a tiny speck of red on a larger white dot. The speck could be in the middle of the white dot or at the edge, it doesn't matter because we are actually only looking at a single much larger pixel (7 micron) which has been assigned a value of red which is almost white.

I hope this is now as clear to you as it is to me.

John Sheehy

larger sensors
« Reply #192 on: January 27, 2007, 12:45:04 pm »

Quote
John,
This is where you have misunderstood the concept. You are still thinking analog. The discrete values of the analog pixels that are binned are not combined in different variations. They are simply voltages that are added to provide one larger voltage or value.

I started replying point by point to your post, but I have decided that it would be a total waste of time, a growing snowball of confusion.  You don't seem capable of engaging in a focused argument, always going off on tangents.  Every time you allegedly clarify my alleged misconception, you say exactly what I already thought you were saying.  The word "Value" applies to both analog and digital numbers.

Ray, answer this one simple question ... what is it that you expect to be outputted in the RAW data ... what is the *exact* format of your RAW data; what is it supposed to contain?

I can only see two things that you might be trying to accomplish:

1) Package each 4x4 subimage and call it a "super-pixel" (a totally semantic-oriented approach with no practical IQ value whatsoever), or

2) You are trying to use these subpixels to create a single super-pixel, which will have three DIGITAL VALUES, one each for red, green, and blue luminance.  In this case there are only 17^3 or 4913 possible DIGITAL RGB VALUES for the full super-pixel, as there are only 17 possible states of each color (not 16 as mistakenly implied earlier) within each superpixel.

It seems that it is #2 that you are trying to accomplish, at times, and at other times you seem to be implying #1, especially with your 2.81x10^14 figure, which only has (purely semantic) application in #1.

Furthermore, any single-threshold-based (1-bit) capture is only efficient when there is less than a 50% chance of capture of a *SINGLE* photon in the brightest highlights, which would either mean a much finer pixel pitch, or an extremely low quantum efficiency.  Having some value like 500 or 800 photons will *increase* either noise or posterization, depending on the amount of light.  You can't avoid noise and posterization with thresholds and truncation.  You only make matters worse.
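The Poisson arithmetic behind that claim is easy to tabulate (a sketch; assumes scipy is available). A photobit that trips on a single photon responds usefully across several stops, while one that needs roughly 500 photons flips from never firing to always firing almost at once:

from scipy.stats import poisson

# Exposure stepped in whole stops, centred on each threshold.
for stops in (-2, -1, 0, 1, 2):
    p1 = 1.0 - poisson.cdf(0, 0.5 * 2.0 ** stops)        # fires on a single photon
    p500 = 1.0 - poisson.cdf(499, 500.0 * 2.0 ** stops)  # needs ~500 photons
    print(f"{stops:+d} stops: P(fire, 1-photon threshold) = {p1:.3f}, P(fire, 500-photon threshold) = {p500:.3f}")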

Ray

larger sensors
« Reply #193 on: January 27, 2007, 07:45:58 pm »

Quote
I started replying point by point to your post, but I have decided that it would be a total waste of time, a growing snowball of confusion.  You don't seem capable of engaging in a focused argument, always going off on tangents.

I think you are beginning to bluster, John. I threw in this idea as a possible solution to the current noisy and resolution-limited analog imaging devices. I realise the numbers and computing power required are too great for such a system to be practical at present, and that simply because we can now manufacture pixels with a pitch of 1.75 microns doesn't mean that we can manufacture foveon type pixels of the same pixel pitch, or that we can spread hundreds of millions of them over large sensors, without enormous expense at least.

The practical difficulties of constructing such a system are for the engineer. What I was trying to elicit from you are valid objections to the theory on the grounds that it might not be mathematically sound, for example, or that it might contravene the laws of physics.

Now, some of your objections are valid. I think it really would be impossible to build such a system with a 'one photon' accuracy. You'd probably need a liquid-nitrogen-cooled camera the size of a house to achieve that. So there is obviously some noise in my system as envisaged, if one defines noise as anything less than absolute accuracy, so clearly my claims of 'switch the pixel on for total accuracy' are meant to be taken as rhetoric. (Did you see this sign   after such statements?)

Clearly, there's going to be noise at the threshold. If we take my example of the minimum value of red, one red diode on in a cluster of 48 diodes, 47 of which are turned off, the reason that the one red diode is turned on is likely due to noise of one sort or another. At stronger signal levels, there's a possibility of two adjacent red pixels both being turned on by slightly different signal strengths. R1 is turned on by, say, a 500 photon signal and R2 is turned on by, say, a 550 photon signal. There's no distinction between the two values, and that represents another inaccuracy in the system. However, it's not possible for R1 to R16 to all be switched on by signals varying significantly, from say 500 photons to 5,000 photons, because our 35mm lens cannot transmit such intensities over such a small area of 1.75 microns in diameter. If it were able to, then such a lens would have an MTF response of 90% at 200 lp/mm, which is clearly impossible for a 35mm lens.

Quote
I can only see two things that you might be trying to accomplish:

1) Package each 4x4 subimage and call it a "super-pixel" (a totally semantic-oriented approach with no practical IQ value whatsoever), or

What! You merely object to the name? Then give it another name. I've tried to clarify things by calling it a 'virtual' pixel. The virtual pixel, as seen on the monitor, represents a summation of a complex analysis of the many, many different values one can get from all the possible variations of 16x3 RGB photodiodes. The virtual pixel does not exist on the sensor. All that exists on the sensor are millions of on/off switches that are activated by a certain level of photonic signal, tuned as precisely as possible to the resolution limits of the lens. As I amplify on my theory, I now see that such a system would work best with lens and sensor designed as an integrated system, and there would probably need to be some very sophisticated DXO Optics type of correction built in.

Quote
2) You are trying to use these subpixels to create a single super-pixel, which will have three DIGITAL VALUES, one each for red, green, and blue luminance.  In this case there are only 17^3 or 4913 possible DIGITAL RGB VALUES for the full super-pixel, as there are only 17 possible states of each color (not 16 as mistakenly implied earlier) within each superpixel.

So, when I tried in my previous post to enumerate the possible values of just 1 of the 16 red pixels, indicating clearly, I thought, that there would be far more than 16 or 17 different possible values, I just wasted my time, did I?

Now, I admit that maths is not my strong point. I can't say for sure that there will be 16^3 (4096) possible values of red, because there might be some duplication of values there, and I suspect there is some duplication of values for the total number of colours in such a 16 pixel array (2.8 thousand trillion). Perhaps a mathematician can help out here.

Quote
Ray, answer this one simple question ... what is it that you expect to be outputted in the RAW data ... what is the *exact* format of your RAW data; what is it supposed to contain?

I'll try and make it as graphic as possible how I imagine the values derived from an analysis of the 16 pixel array would be assigned to the virtual pixel. I'll assume that we have 4096 possible values of red, but I'm not certain about this.

(1) The palest shade of red will consist of one (fully saturated, as they all are) red pixel plus 15 white pixels (on the sensor). For the red element of our virtual pixel, we assign a number out of 4096.

(2) Slightly darker than the palest shade of red, we have one red pixel, 14 white pixels and one black pixel. We assign another number to the red element of the virtual pixel. What number should it be? I don't know. I thought perhaps you might?  

I assume that all 16 red pixels turned on (meaning that all green and blue pixels are switched off) results in the most saturated red the system can achieve and that this would be assigned the number 4096.

If you consider this a waste of time, that's fine by me. I can sense you are getting rather irritated.
« Last Edit: January 27, 2007, 07:49:40 pm by Ray »

John Sheehy

larger sensors
« Reply #194 on: January 28, 2007, 11:23:45 am »

Quote
I think you are beginning to bluster, John. I threw in this idea as a possible solution to the current noisy and resolution-limited analog imaging devices.

I still have no clear picture of what your idea is.  You seem to be relying on telepathy to communicate your idea.  All I can surmise is that you're doing something with small, 1-bit-per-color pixels, and combining 16 of them into one channel of a super-pixel.  The only reason I can think of to have a super-pixel is to make an output pixel that represents the sum of all light registered.  In that case, it doesn't matter which 1-bit subpixels registered a hit, the only thing that matters is how many of them registered a hit.  The list of all possible results is

0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

for each color value of each super-pixel.  With 17 possible values for each color of the superpixels, there are 17^3 or 4913 possible RGB values for each superpixel (as opposed to the 2^36 or 68,719,476,736 RGB values possible for a real-world Foveon pixel in a Sigma SD9 or SD10, which has just a slightly larger pixel pitch than your example's superpixel).

Quote
I realise the numbers and computing power required are too great for such a system to be practical at present and that simply because we can now manufacture pixels with a pitch of 1.75 microns doesn't mean that we can manufacture foveon type pixels of the same pixel pitch, or that we can spread hundreds of millions of them over large sensors, without enormous expense at least.

The biggest clear problem with your system is that you are not collecting enough information at each subpixel, or even at the superpixel.  A 1.75u pixel pitch covers far too large an area to measure a single threshold (1 bit hit).  The shot noise is going to be the same as if it only took one photon to register a hit, if the hit rate is about 50%.  It is dependent on *ONE* single photon crossing the threshold, or not crossing it, for each subpixel.  That is worse than if there were no subpixels, and the superpixel (which is a real pixel now) only recorded 17 levels per color channel, because a fairly uniform level of light across all 16 subpixels could fail to register any hits at all, and just 1/4 stop more could have all of them registering; IOW, 0 and 16 could be just 1/4 stop apart.
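Simulating the 16 subpixels directly (assuming a 500-photon threshold and shot noise only) bears this out; the mean hit count runs from 0 to 16 within about half a stop:

import numpy as np

rng = np.random.default_rng(0)
for quarter_stops in range(-2, 3):
    mean = 500.0 * 2.0 ** (quarter_stops / 4.0)
    hits = (rng.poisson(mean, (10_000, 16)) >= 500).sum(axis=1)
    print(f"{quarter_stops / 4:+.2f} stop: mean hits {hits.mean():5.2f} / 16")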

Quote
The practical difficulties of constructing such a system are for the engineer. What I was trying to elicit from you are valid objections to the theory on the grounds it  might not be mathematically sound, for example, or that it might contravene the laws of physics.

Your idea, as I understand it, does not collect useful information.  A 1-bit hit at a high number of photons is only useful for high-contrast copying, like text, and contains no detail anywhere except in the tonal range around the threshold.

Quote
Now, some of your objections are valid. I think it really would be impossible to build such a system with a 'one photon' accuracy. You'd probably need a liquid-nitrogen-cooled camera the size of a house to achieve that. So there is obviously some noise in my system as envisaged, if one defines noise as anything less than absolute accuracy, so clearly my claims of 'switch the pixel on for total accuracy' are meant to be taken as rhetoric. (Did you see this sign   after such statements?)

If you don't think your idea will reduce noise, or improve on current technology in some way, then why are you even mentioning it?  What is the purpose of your idea?  You still haven't made it clear if you are recording the exact 4x4 arrays in your output and just giving them a purely semantic name of "superpixel", or if you are actually counting the number of hits within it.  I asked you to make a choice, made it clear that it was very important that you answer this in order for me to understand what your idea actually is, and you defended both of them!

Quote
Clearly, there's going to be noise at the threshold. If we take my example of the minimum value of red, one red diode on in a cluster of 48 diodes, 47 of which are turned off, the reason that the one red diode is turned on is likely due to noise of one sort or another.  At stronger signal levels, there's a possibility of two adjacent red pixels both being turned on by slightly different signal strengths. R1 is turned on by, say, a 500 photon signal and R2 is turned on by, say, a 550 photon signal. There's no distinction between the two values, and that represents another inaccuracy in the system. However, it's not possible for R1 to R16 to all be switched on by signals varying significantly, from say 500 photons to 5,000 photons,

That's a shame, because, really, that is the only way you're going to get any tonality out of such a system.  Here's what you get with a system like yours (noise in middle half, thresholding in lower half):

[image attachment not preserved in this archive]

Quote
because our 35mm lens cannot transmit such intensities over such a small area of 1.75 microns in diameter.  If it were able to, then such a lens would have an MTF response of 90% at 200 lp/mm, which is clearly impossible for a 35mm lens.

I am not even going to *begin* to figure out what you think that lens MTF has to do with this context (thresholding).

Quote
What! You merely object to the name? Then give it another name.

No.  I object to the fact that all that you'd be doing is giving it a name.  You're not changing anything over having 16x as many pixels, 1/16th the size, if the condition I mentioned were true of your intent (I asked you to choose between #1 and #2, and you defended them both).

Quote
  I've tried to clarify things by calling it a 'virtual' pixel. The virtual pixel, as seen on the monitor, represents a summation of a complex analysis of the many, many different values one can get from all the possible variations of 16x3 RGB photodiodes.

You can't have a meaningful complex analysis of such coarse data.  The data your system collects is garbage.  You're recording a single threshold hit for hundreds or thousands of photons.  That results in garbage collection, and nothing more.

John Sheehy

larger sensors
« Reply #195 on: January 28, 2007, 11:25:39 am »

Quote
The virtual pixel does not exist on the sensor. All that exists on the sensor are millions of on/off switches that are activated by a certain level of photonic signal, tuned as precisely as possible to the resolution limits of the lens.

That doesn't make any sense whatsoever.  The resolution of the lens is only worth considering for pixel pitch, not for thresholds.  And again, all these thresholds can do is posterize.

Quote
As I amplify on my theory, I now see that such a system would work best with lens and sensor designed as an integrated system and there would probably need to be some very sophisticated DXO Optics type of correction built in.
So, when I tried in my previous post to enumerate the possible values of just 1 of the 16 red pixels, indicating clearly I thought, that there would be far more than 16 or 17 different possible values, I just wasted my time did I?

I really can't answer that, Ray, because, even though I've asked you quite clearly, several times now, exactly what you are recording and interested in, you have failed to give a clear response every single time.  I am well aware of how many possible results there are within each superpixel; I didn't need your enumeration.  Simply stating that each superpixel's internal detail is expressed by a 48-bit number tells all that.  If you are interested in *HOW MUCH* light hits the superpixel, in each color, then there are only 17 levels per channel, or 4913 possible RGB values.  If you want to record, in the RAW file, the exact patterns of hits within the superpixels, then there are 2.81x10^14 possible superpixels, but that big number is not really impressive when you think about what it really means: the number of possible 16-pixel, 1-bit-per-channel RGB images.  It's still just a 1-bit-per-channel 4x4 pixel image.  Now, if your idea is to perform some processing on the superpixel-unique data, to write a "better" super-pixel out to RAW than you can get with just counting the number of hits within each superpixel, then you have yet to give even a clue of what you think the system can do; the idea, so far, would be akin to asking a Genie for a wish.  And frankly, with a single threshold at hundreds or thousands of photons, you're going to need a Genie, because the data is not worth analyzing for anything but high-contrast line-copying.

Quote
Now, I admit that maths is not my strong point. I can't say for sure that there will be 16^3 (4096) possible values of red, because there might be some duplication of values there, and I suspect there is some duplication of values for the total number of colours in such a 16 pixel array (2.8 thousand trillion). Perhaps a mathematician can help out here.

I won't enumerate all the possibilities for 16 photobits, so let's just say there are 4, for the sake of argument.  There are, then, 16 possible states, each with 0 to 4 1s or "hits":

0000  0
0001  1
0010  1
0011  2
0100  1
0101  2
0110  2
0111  3
1000  1
1001  2
1010  2
1011  3
1100  2
1101  3
1110  3
1111  4

So, enumerating the number of states that give each possible # of hits, we get:

hits  occurrences
0            1
1            4
2            6
3            4
4            1

As you can see, hit rates in the 50% range account for a far higher percentage of possible states than the ones at or near 0% and 100%; this is even more dramatic with higher numbers of possible states.
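The same tally for the full 16 photobits, using binomial coefficients rather than listing all 65,536 states:

from math import comb

for k in range(17):
    print(f"{k:>2} hits: {comb(16, k):>5} of the 65,536 possible states")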

Quote
I'll try and make it as graphic as possible how I imagine the values derived from an analysis of the 16 pixel array would be assigned to the virtual pixel. I'll assume that we have 4096 possible values of red, but I'm not certain about this.

(1) The palest shade of red will consist of one (fully saturated, as they all are) red pixel plus 15 white pixels (on the sensor). For the red element of our virtual pixel, we assign a number out of 4096.

"We assign"?  What exactly is that supposed to mean?  You're hiding your whole "complex analysys" inside this magic box called "we assign".  Or is there really any "complex analysis" at all before you write the super-pixel to the RAW file?  This is why it is so difficult trying to have this conversation with you.

If you're just counting the hits within the superpixel, which some of your language suggests is the case (but your reference to "complex analysis" seems to contradict), then the state of the red pixels is most efficiently stated with a number between 0 and 16 (17 levels).  As an RGB value, the superpixel you describe would be 16,15,15.  You could scale these values so that 16 was 4095, or 255, but why?  All the RAW converter needs to know is that there are 17 levels.
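Counting the hits, written out as code (a sketch; the 4x4x3 boolean layout is an assumed representation of the 48 photobits, not anything specified above):

import numpy as np

def superpixel(bits):
    # bits: 4x4x3 boolean array; 16 Foveon-style subpixels, R/G/B layers
    r, g, b = bits.sum(axis=(0, 1))
    return int(r), int(g), int(b)

# The "palest shade of red": one pure-red subpixel plus 15 white ones.
bits = np.ones((4, 4, 3), dtype=bool)
bits[0, 0] = (True, False, False)
print(superpixel(bits))  # -> (16, 15, 15)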

Quote
(2) Slightly darker than the palest shade of red, we have one red pixel, 14 white pixels and one black pixel. We assign another number to the red element of the virtual pixel. What number should it be? I don't know. I thought perhaps you might? 

I can't read your mind, Ray, and you have yet to even give a hint of what your "complex analysis" might entail.  To the best of my understanding, you have "assigned" the value 4096 to what should be an RGB value of 16 in #1 above.  4096 is inflated, and even more so when you realize that it takes 1 more bit (13 bits) to express the value 4096 than 4095.

Quote
I assume that all 16 red pixels turned on (meaning that all green and blue pixels are switched off)

That doesn't mean that at all.  The red states do not affect the blue and green states; they are independent.

Quote
results in the most saturated red the system can achieve and that this would be assigned the number 4096.

Are you talking about color saturation or luminance saturation?  Talking about color saturation wouldn't make any sense here, unless your output data is going to be HSV or something similar.

There are only 17 levels to distinguish, so a value greater than 16 is just fluff.
« Last Edit: January 28, 2007, 09:19:40 pm by John Sheehy »

Ray

larger sensors
« Reply #196 on: January 28, 2007, 06:51:55 pm »

Quote
Are you talking about color saturation or luminance saturation?  Talking about color saturation wouldn't make any sense here, unless your output data is going to be HSV or something similar.

There are only 17 levels to distinguish, so a value greater than 16 is just fluff.

John,
That's it. In some cockeyed way I've combined luminance values with saturation values to produce an inflated range of saturation levels. Just fluff, as you say.

This idea goes in the bin. Let it never be said I'm too proud to admit I am wrong.

To get a sufficient number of 'real' levels for each color, the virtual pixel would need to be much bigger than 6 or 7 microns. We'd need to use a huge sensor, and lenses for such a large format would not have sufficient resolution to make such tiny sub-pixels meaningful in any way. The idea is crap.

Thanks for your patience and time in sorting this out.

Buy you a beer if we ever meet.