I started replying point by point to your post, but I have decided that it would be a total waste of time, a growing snowball of confusion. You don't seem capable of engaging in a focused argument, always going off on tangents.
I think you are beginning to bluster, John. I threw in this idea as a possible solution to the current noisy and resolution-limited analog imaging devices. I realise the numbers and computing power required are too great for such a system to be practical at present, and that just because we can now manufacture pixels with a pitch of 1.75 microns doesn't mean we can manufacture Foveon-type pixels at the same pitch, or spread hundreds of millions of them across large sensors, at least not without enormous expense.
The practical difficulties of constructing such a system are for the engineer. What I was trying to elicit from you were valid objections to the theory: that it might not be mathematically sound, for example, or that it might contravene the laws of physics.
Now, some of your objections are valid. I think it really would be impossible to build such a system with 'one photon' accuracy. You'd probably need a liquid-nitrogen-cooled camera the size of a house to achieve that. So there is obviously some noise in my system as envisaged, if one defines noise as anything less than absolute accuracy, and my claims of 'switch the pixel on for total accuracy' were clearly meant as rhetoric. (Did you see this sign after such statements?)
Clearly, there's going to be noise at the threshold. If we take my example of the minimum value of red, one red diode in a cluster of 48 diodes, 47 of which are turned off, the reason that the one red diode is turned on at all is likely due to noise of one sort or another. At stronger signal levels, there's a possibility of two adjacent red pixels being turned on by slightly different signal strengths: R1 is turned on by, say, a 500-photon signal and R2 by, say, a 550-photon signal. There's no distinction between the two values, and that represents another inaccuracy in the system. However, it's not possible for R1 to R16 to all be switched on by signals varying significantly, from say 500 photons to 5,000 photons, because our 35mm lens cannot deliver that much variation in intensity across an area only 1.75 microns in diameter. If it could, such a lens would have an MTF response of 90% at 200 lp/mm, which is clearly impossible for a 35mm lens.
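To show what I mean by noise at the threshold, here is a trivial sketch; the 500-photon trigger level is just my illustrative figure, not a real sensor specification.

    # A binary photodiode modelled as a bare threshold switch.
    # The trigger level of 500 photons is an assumed, illustrative figure.
    TRIGGER = 500

    def diode_fires(photons):
        """Return 1 if the diode switches on, 0 if it stays off."""
        return 1 if photons >= TRIGGER else 0

    print(diode_fires(500))  # R1: 1 (on)
    print(diode_fires(550))  # R2: 1 (on) -- the 50-photon difference is lost
    print(diode_fires(499))  # just under the trigger level: 0 (off)

That lost difference between R1 and R2 is the inaccuracy I'm conceding; my argument is only that the lens prevents it from ever being as large as 500 versus 5,000 photons within the same cluster.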
I can only see two things that you might be trying to accomplish:
1) Package each 4x4 subimage and call it a "super-pixel" (a totally semantic-oriented approach with no practical IQ value whatsoever), or
What! You merely object to the name? Then give it another name. I've tried to clarify things by calling it a 'virtual' pixel. The virtual pixel, as seen on the monitor, represents the result of a complex analysis of the many, many different values one can get from all the possible variations of 16x3 RGB photodiodes. The virtual pixel does not exist on the sensor. All that exists on the sensor are millions of on/off switches that are activated by a certain level of photonic signal, tuned as precisely as possible to the resolution limits of the lens. As I expand on my theory, I now see that such a system would work best with lens and sensor designed as an integrated system, and there would probably need to be some very sophisticated DxO Optics-style correction built in.
2) You are trying to use these subpixels to create a single super-pixel, which will have three DIGITAL VALUES, one each for red, green, and blue luminance. In this case there are only 17^3 or 4913 possible DIGITAL RGB VALUES for the full super-pixel, as there are only 17 possible states of each color (not 16 as mistakenly implied earlier) within each superpixel.
So, when I tried in my previous post to enumerate the possible values of just one of the 16 red pixels, indicating clearly, I thought, that there would be far more than 16 or 17 different possible values, I was just wasting my time, was I?
Now, I admit that maths is not my strong point. I can't say for sure that there will be 16^3 (4096) possible values of red, because there might be some duplication of values there, and I suspect there is some duplication in the total number of colours in such a 16-pixel array (2.8 thousand trillion). Perhaps a mathematician can help out here.
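In case it helps whoever does the sums, here are the two counts I can see, assuming 16 binary diodes per colour as in my example; this is a back-of-the-envelope sketch, not a proof, and it doesn't settle which count is the relevant one.

    # Two ways of counting the possible values from 16 binary red diodes.
    # The 16-diodes-per-colour figure is from my example; the counting itself
    # is basic combinatorics, not a claim about which count is 'right'.
    DIODES = 16

    # (a) If only the NUMBER of fired diodes matters: 0..16, i.e. 17 levels
    #     per colour, and 17**3 RGB combinations -- John's figure.
    levels = DIODES + 1
    print(levels, levels ** 3)        # 17 4913

    # (b) If the POSITION of each fired diode also carries information:
    #     2**16 patterns per colour, 2**48 in total for the 48 diodes.
    patterns = 2 ** DIODES
    print(patterns, patterns ** 3)    # 65536 281474976710656 (~281 trillion)

If the second count is the right one, my 'thousand trillion' figure above was an overestimate; either way the serious question is how many of those patterns map to genuinely distinct colour values once duplicates are removed.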
Ray, answer this one simple question ... what is it that you expect to be outputted in the RAW data ... what is the *exact* format of your RAW data; what is it supposed to contain?
I'll try to make it as graphic as possible how I imagine the values derived from an analysis of the 16-pixel array would be assigned to the virtual pixel (there's a rough sketch in code after these steps). I'll assume that we have 4096 possible values of red, but I'm not certain about this.
(1) The palest shade of red will consist of one (fully saturated, as they all are) red pixel plus 15 white pixels (on the sensor). For the red element of our virtual pixel, we assign a number out of 4096.
(2) Slightly darker than the palest shade of red, we have one red pixel, 14 white pixels and one black pixel. We assign another number to the red element of the virtual pixel. What number should it be? I don't know. I thought perhaps you might?
I assume that all 16 red pixels turned on (meaning that all green and blue pixels are switched off) results in the most saturated red the system can achieve and that this would be assigned the number 4096.
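Purely to make the arithmetic concrete, here is one very crude way the assignment could be coded up. The mapping rule below (just scaling the count of fired diodes per colour to the 0-4096 range) is my own placeholder for illustration, not the complex analysis I have in mind, and the 4096 figure is the assumption stated above.

    # Illustration only. Each of the 16 sites in the cluster has a red, green
    # and blue diode that is either on (1) or off (0). A 'white' site has all
    # three on, a 'black' site has all three off, a 'red' site has only red on.
    # The scaling rule (count of fired diodes / 16 * 4096) is assumed, not derived.

    SCALE = 4096  # assumed number of levels per channel in the virtual pixel

    def virtual_pixel(sites):
        """sites: list of 16 (r, g, b) tuples, each element 0 or 1."""
        assert len(sites) == 16
        r = sum(s[0] for s in sites)
        g = sum(s[1] for s in sites)
        b = sum(s[2] for s in sites)
        return tuple(round(n / 16 * SCALE) for n in (r, g, b))

    RED, WHITE, BLACK = (1, 0, 0), (1, 1, 1), (0, 0, 0)

    # (1) Palest red: one red site plus 15 white sites
    print(virtual_pixel([RED] + [WHITE] * 15))            # (4096, 3840, 3840)
    # (2) One red site, 14 white, one black
    print(virtual_pixel([RED] + [WHITE] * 14 + [BLACK]))  # (3840, 3584, 3584)
    # All 16 sites red only: the most saturated red, red element = 4096
    print(virtual_pixel([RED] * 16))                      # (4096, 0, 0)

With this particular rule there are of course only 17 possible values per channel; the complex analysis I mention above would have to make use of which diodes fire, not just how many, to get anything more out of the array.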
If you consider this a waste of time, that's fine by me. I can sense you are getting rather irritated.