
Author Topic: Foveon vs no on chip filters  (Read 34546 times)

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Foveon vs no on chip filters
« Reply #80 on: January 12, 2013, 04:12:17 pm »

Hi,

Shot noise, the natural variation in the number of photons collected. That is what I think; it may be something else. Increase "Noise Reduction -> Color" if you are using Lightroom. If it is shot noise there is little else to do about it.

At low ISO you sample more photons so the problem goes away.

Best regards
Erik

Here are two 100% crops out of a high-ISO shot. Notice the tendency to make red and green splotches? We have gotten so used to just turning on a bit of NR that wipes it out. Is it really noise, or is it de-bayering? The high ISO was used to crank up the gain and make the splotches visible. A low-ISO shot does not show them in any visible way. Are they still there in a subtle way?


Logged
Erik Kaffehr
 

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Foveon vs no on chip filters
« Reply #81 on: January 12, 2013, 04:31:06 pm »

Hi,

Shot noise, the natural variation in the number of photons collected. That is what I think; it may be something else. Increase "Noise Reduction -> Color" if you are using Lightroom. If it is shot noise there is little else to do about it.

At low ISO you sample more photons so the problem goes away.

Best regards
Erik


Maybe people call it shot noise. My point is that what should be a brown field is turned into red and green, and the same on the darker mountain spots. Why is the color changed? If it is noise it should be fluctuations around brown, not red and green.
Logged

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: Foveon vs no on chip filters
« Reply #82 on: January 12, 2013, 04:47:45 pm »

Hi,

The sensor doesn't see browns but reds or greens. So you have a variation in reds and greens.

The natural variation is essentially the square root of the number of photons collected. Let's assume that you collect 50000 photons at saturation. Now, let us assume a neutral 18% gray, that is about 9000 photons. Let us further assume that you shoot at ISO 800, with the nominal ISO being 100. Then you would collect about 1125 photons per pixel. Let's assume that you are looking at a part that is in the shadow, say two stops under. Now we are at about 280 photons.

So if we have 280 photons, the natural variation would be about +/- 17, so the photon count would vary between 263 and 297. More precisely, about 68% of the pixels would fall between 263 and 297, if I recall my statistics right. So you get a lot of natural variation. Nothing to do about it, except increasing exposure.
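
As a back-of-the-envelope check of those numbers, here is a minimal sketch (using the same assumed full-well, ISO and shadow values as above; pure shot noise, nothing else):

Code:
import math

full_well  = 50000                 # assumed photons at saturation
gray_18    = 0.18 * full_well      # ~9000 photons for 18% gray
iso_factor = 800 / 100             # shooting at ISO 800, nominal ISO 100
midtone    = gray_18 / iso_factor  # ~1125 photons per pixel
shadow     = midtone / 4           # two stops under -> ~281 photons

sigma = math.sqrt(shadow)          # Poisson (shot noise) standard deviation
print(round(shadow), "photons, +/-", round(sigma))  # ~281 photons, +/- ~17
print("SNR:", round(shadow / sigma, 1))             # ~16.8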

If you want to learn about noise, I would recommend this article: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html

Best regards
Erik

Maybe people call it shot noise. My point is that what should be a brown field is turned into red and green, and the same on the darker mountain spots. Why is the color changed? If it is noise it should be fluctuations around brown, not red and green.
Logged
Erik Kaffehr
 

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Foveon vs no on chip filters
« Reply #83 on: January 12, 2013, 06:03:38 pm »

Hi,

The sensor doesn't see browns but reds or greens. So you have a variation in reds and greens.

The natural variation is essentially the square root of the number of photons collected. Let's assume that you collect 50000 photons at saturation. Now, let us assume a neutral 18% gray, that is about 9000 photons. Let us further assume that you shoot at ISO 800, with the nominal ISO being 100. Then you would collect about 1125 photons per pixel. Let's assume that you are looking at a part that is in the shadow, say two stops under. Now we are at about 280 photons.

So if we have 280 photons, the natural variation would be about +/- 17, so the photon count would vary between 263 and 297. More precisely, about 68% of the pixels would fall between 263 and 297, if I recall my statistics right. So you get a lot of natural variation. Nothing to do about it, except increasing exposure.

If you want to learn about noise, I would recommend this article: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html

Best regards
Erik


I don't think so. Shot noise is random at the pixel level, and these are splotches more than 10 pixels in diameter. Calculate the probability that a pattern of spaced red pixels would receive higher photon counts over a 10x10 pixel area, and that interweaving spaced green pixels would do the same in neighbouring 10x10 areas, all over the shot. Maybe an 800% view will make it look less random.

Emil's article is excellent, btw; I don't think this pattern is shot noise.
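
For what it's worth, a minimal simulation (assuming independent Poisson photon counts per sensel, with the mean taken from Erik's shadow example) shows that pure shot noise has essentially no pixel-to-pixel correlation, so splotches 10+ pixels across would need another explanation:

Code:
import numpy as np

rng = np.random.default_rng(1)
mean_photons = 280                              # shadow-level signal from Erik's example
patch = rng.poisson(mean_photons, (256, 256)).astype(float)
noise = patch - patch.mean()

def lag_corr(a, dx):
    # correlation between each pixel and the pixel dx columns away
    x, y = a[:, :-dx].ravel(), a[:, dx:].ravel()
    return np.corrcoef(x, y)[0, 1]

print("lag-1 correlation: ", round(lag_corr(noise, 1), 3))   # ~0.0
print("lag-10 correlation:", round(lag_corr(noise, 10), 3))  # ~0.0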
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Foveon vs no on chip filters
« Reply #84 on: January 12, 2013, 06:26:36 pm »

It's a software issue. I opened the image in Sony's IDC, which showed the noise sprinkled at the pixel level instead of in large splotches. Attached is the IDC version, NR off, sharpening off.

The chroma speckles form a fairly smooth random pattern. You can still see some areas where larger splotches of green show up. De-bayering is a software interpretation routine, and it can make mistakes.

I had replied to another thread today about using Noise Ninja for film grain. They had a new version on their website. After installing the new "Picture Ninja" I saw that, wow, it does raw conversion! To test the NR I opened a high-ISO raw, which led to the god-awful color splotches above. They should stick to NR. Beware of bad de-bayering.

« Last Edit: January 12, 2013, 07:09:08 pm by Fine_Art »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Foveon vs no on chip filters
« Reply #85 on: January 12, 2013, 08:40:27 pm »

Hi Bart, I see we are starting to deviate from the initial 'ideal' thought experiment :-)

Ok, in this case we need to decide whether we are dealing with a uniform patch of tones or the single sensel (star) version.

Hi Jack,

Actually, I'm a bit at a loss as to what it is that you are trying to ask, tell, or suggest.

Are you trying to make some kind of statement about resolution, or about noise? Either way, I'm most willing to explain the situation as I see it, but I do have some difficulty with the various totally unrealistic scenarios that are being proposed. Hence my simplification to a uniform area, in an attempt to focus on a concept that's simple to understand with relatively simple math, although even that is being used to stack the deck against the Bayer CFA with unrealistic scenarios.

Quote
I take it from your example that we are looking at a uniform patch, so the first image works and we are forgetting about the loss of detail.

Correct, for the reason given above.

Quote
For simplicity, let's assume that the standard daylight light source can be filtered perfectly by three equally sized bands (vertically by the Foveon sensor and horizontally by the Bayer) and that the area of one Foveon Pixel is the same as that of a single Bayer sensel (the Green one in your example) so that the sensors would output (128,128,128) and (0,128,0) to their respective R*G*B* raw data.

Okay so far, when seen from the perspective of what will be a single output pixel, although it can require some 49 adjacent Bayer CFA samples to reconstruct a single central RGB output pixel.

Quote
The Bayer sensor would therefore produce a matrix of repeating data such as (128,0,0) (0,128,0) (128,0,0)... in a 'Red' row followed by an offset repeating (0,0,128) (0,128,0) (0,0,128)... in the 'Blue' row and so on.

Yes, although strictly speaking there are no 'Red' and 'Blue' rows; but I understand what you are describing, two rows from a larger Bayer CFA filtered array.

Quote
On the other hand the Foveon would produce an equal number of raw data points of value (128,128,128).   As you say the value 128 above is the mean, while in fact it would vary from raw value to raw value according to Poisson statistics.  These variations would be restricted to the recorded values and not spill over from one sensel to the one next to it because they are inherent in the incoming photons which, if filtered, would simply not be there.

The per sensel (shot noise) photon statistics have a Poisson distribution indeed, and they are independent per sensel (in our simplified model).

Quote
If the above is correct, then the second image appears too sparse and too noisy, but I assume you did not use it for demosaicing since the final one looks correct :-)

I used the first image (idealized RGB Foveon model) to produce a Bayer CFA version (second image), which in turn was used to demosaic (third image).

Quote
In theory any demosaicing noise improvement in the ideal Bayer will only result from a further reduction in the substantially lower detail available (the real differentiator in the 'uniform patch' case).

That's not correct, and I mentioned the reasons. The demosaicing algorithm, when faced with random noise in a uniform area, will reduce the overall noise for two reasons. The first reason is that random noise, when averaged over a region, averages out at a rate equal to the square root of the noise variances (i.e. the standard deviations add in quadrature), and in our theoretical model 2/3rds of the samples have zero noise because their value is zero. The second reason is that the Red and Blue sensels are less densely sampled, and thus have a lower spatial (and thus Luminance, and thus amplitude) contribution in the demosaiced result.
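
To put a number on the first reason, here is a minimal sketch (my own toy model: independent Poisson counts per sensel and a naive 4-neighbour average for the missing green sites, not any particular converter's algorithm):

Code:
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 128                              # idealized uniform patch
field = rng.poisson(mean_photons, (512, 512)).astype(float)

# Bayer-style green checkerboard: keep half the sites, interpolate the rest
sampled = (np.indices(field.shape).sum(axis=0) % 2) == 0
green = np.where(sampled, field, 0.0)

# naive fill: each missing site becomes the mean of its 4 sampled neighbours
interp = green.copy()
missing = ~sampled[1:-1, 1:-1]
interp[1:-1, 1:-1][missing] = 0.25 * (
    green[:-2, 1:-1] + green[2:, 1:-1] + green[1:-1, :-2] + green[1:-1, 2:]
)[missing]

print("std at measured sites:    ", field[sampled].std())               # ~ sqrt(128) ~ 11.3
print("std at interpolated sites:", interp[1:-1, 1:-1][missing].std())  # ~ 11.3 / 2

The measured sites keep their full shot noise, while the interpolated sites end up with roughly half the standard deviation, which is the quadrature averaging at work.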

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Foveon vs no on chip filters
« Reply #86 on: January 12, 2013, 09:57:41 pm »

To clarify, there is no need for binning in this thought experiment, just simple demosaicing.

Okay, let's stick to that for now, for clarity.

Quote
Four ideal Bayer Sensels (one red, one blue and two green) cover exactly the same real estate as one ideal Foveon Pixel in the example above.

When 4 Bayer CFA sensels cover the same real estate as a single Foveon type of sensel, then the Bayer CFA sensels will have a 2x higher sampling density, and thus almost 2x higher luminance resolution, and each will have one quarter of the surface area. What happened to our simple assumption that one R+G+B sample has the same area as one R, G, or B sample?

Quote
If, for a given exposure as before, 384 photons of D50 light arrive on such an area of the Foveon sensor, it will record (128,128,128) for this position in its R*G*B* raw file.  On the other hand the same 384 photons will arrive on the same area of a Bayer sensor - but each of the sensels, being 1/4 the area of the Pixel, will only see 1/4 of them on its turf, that is 96.  And since each sensel is covered by a perfect bandpass filter that only lets through 1/3 of the arriving daylight photons, the sensor will record (32,0,0) (0,32,0) (0,0,32) (0,32,0) in its R*G*B*G* raw file.

That doesn't make sense when you view the Bayer CFA sensels as individual sensels and want to compare them to the 4x larger Foveon type sensor. It would require taking the 4 Bayer CFA sensels together (as a sort of RGGB binned pixel) to allow any type of comparison, however flawed it would still be from a practical (demosaicing) point of view.

Quote
For simplicity let's assume that no demosaicing is needed for the Foveon and a simple algorithm (say -h) is used to demosaic the Bayer data.  The result would be (32,32,32) ...

No, the demosaiced result would be [128,128,128] for the 4 theoretically perfect output pixels (the same as the Foveon type of sensor) added together to the same surface area.

Quote
Contrary to the others we dreamed up,

I dream with my eyes closed ...

Quote
imo this example is better at comparing apples to apples because the resolution from both ideal sensors should be similar,

No it isn't, IMHO. It's not even apples to oranges, but rather kiwis to kangaroos ...

Quote
including the effects of a 4-dot beam splittin' antialiasing filter.

Why obfuscate the analogy by adding an OLPF (let's guess, for one and not the other)? The ideal discrete sampling situation would include an OLPF for both types of sensor, and also incorporate the effect of lens blur (which would make it impossible to on average address only a single sensel with a spike like signal, because the Nyquist/Shannon theorem requires more than 2 sensels for a reliable reconstruction of a signal). It would also have to include the less than perfect color separation as a function of penetration depth in silicon, and the less than 100 percent transmission in the pass band of the CFA filters, even when ignoring various noise sources and optical/MTF effects.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Foveon vs no on chip filters
« Reply #87 on: January 13, 2013, 05:45:10 am »

Hi Jack,  Actually, I'm a bit at a loss as to what it is that you are trying to ask, tell, or suggest.

Hi Bart, that makes two of us :-)  Actually I believe that we are in full agreement, with the possible exception of this statement which goes to the root of (my) confusion:

JH Wrote: 'For simplicity let's assume that ... a simple algorithm (say -h) is used to demosaic the Bayer data.  The result would be (32,32,32) ...'

No, the demosaiced result would be [128,128,128] for the 4 theoretically perfect output pixels (the same as the Foveon type of sensor) added together to the same surface area.

This makes no sense to me, unless we bring into the discussion the difference between brightness and exposure of my first post, with related consequences on SNR and IQ.

Forget about Foveon for a second and think only about the Bayer in my example.  We are looking at a square area A on the ideal Bayer sensor made up of 4 sensels in a 2x2 matrix, each of area A/4: 1 under a red, 1 under a blue and 2 under a green ideal color filter (CFA).  If 384 photons reach area A, each A/4 filter area will only see 1/4 of them, or 96.  And since each filter only lets through 1/3 of them in our idealized example, the sensel underneath each filter will receive 32 photons, and that's the value it will record in the raw data for each of the four sensels in our investigation.  Simple demosaicing of the raw data (for instance with dcraw's -h 'half' switch, which keeps the red and blue values as they are and averages the greens) would produce a single R*G*B* Pixel for the whole of area A with value (32,32,32), with a given SNR, keeping in mind the earlier proviso on the green channel - because demosaicing works off the raw data.  More complicated demosaicing would give the same result, but it would be harder to follow.  Of course we could in fact express this as any value we desired through digital post-processing operations (let's call them brightness/tonal corrections) in-camera or in-computer, but the underlying information and SNR (IQ) would remain unchanged.
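
A toy sketch of that 'half' arithmetic (just the numbers from the example above, not dcraw's actual code):

Code:
# One idealized 2x2 RGGB cell: 384 photons over area A, 96 per sensel,
# and each perfect filter passes 1/3 -> 32 photons recorded per sensel.
raw = {"R": 32, "G1": 32, "G2": 32, "B": 32}

# dcraw -h style 'half' demosaic: keep R and B, average the two greens,
# giving one RGB output pixel for the whole 2x2 cell.
pixel = (raw["R"], (raw["G1"] + raw["G2"]) // 2, raw["B"])
print(pixel)   # (32, 32, 32)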

Now let's shrink the sensels: If area A contained 64 smaller Bayer sensels instead of 4, once downrez'd to a single ideal Pixel for area A, for the given SNR as before such a pixel would have the exact same value (32,32,32).

Is my confusion with your (128,128,128) statement above clearer now?

Jack
« Last Edit: January 13, 2013, 05:57:35 am by Jack Hogan »
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8913
Re: Foveon vs no on chip filters
« Reply #88 on: January 13, 2013, 09:01:19 am »

Actually I believe that we are in full agreement, with the possible exception of this statement which goes to the root of (my) confusion:

Quote from: BartvanderWolf
No, the demosaiced result would be [128,128,128] for the 4 theoretically perfect output pixels (the same as the Foveon type of sensor) added together to the same surface area.

This makes no sense to me, unless we bring into the discussion the difference between brightness and exposure of my first post, with related consequences on SNR and IQ.

Maybe you missed the part of my quote I've marked in italic bold here. You insisted on comparing a 4x larger photon collection area with single Bayer CFA sensels, so in order to make a valid comparison one would have to add the four interpolated [32,32,32] Bayer CFA output pixels together, which gives [128,128,128].

Quote
Forget about Foveon for a second and think only about the Bayer in my example.  We are looking at a square area A on the ideal Bayer sensor made up of 4 sensels in a 2x2 matrix, each of area A/4: 1 under a red , 1 under a blue and 2 under a green ideal color filter (CFA).  If 384 photons reach area A, each A/4 filter area will only see 1/4 of them or 96.  And since each filter only lets through 1/3 of them in our idealized example, the sensel underneath each filter will receive 32 photons and that's the value it would record in the raw data for each of the four sensels of our investigation.

Correct, 384/4 sensels = 96, and with 1/3rd bandpass filtering 96/3 = 32.

Quote
Simple demosaicing of the raw data (for instance with dcraw -h 'half' switch, which keeps the red and blue values as 'they are' and averages the greens) would produce a single R*G*B* Pixel for the whole of area A of value (32,32,32), with a given SNR, keeping in mind the earlier proviso on the green channel - because demosaicing works off the raw data.

Correct, see also the above explanation, [32,32,32] is the result after demosaicing.

Quote
More complicated demosaicing would give the same result, but it would be harder to follow.  Of course we could in fact express this as any value we desired through digital post processing operations (let's call them brightness/tonal corrections) in-camera or in-computer, but the underlying information and SNR (IQ) would remain unchanged.

I'm not sure why you are mentioning the S/N ratio here but, as you can see in my demonstration earlier, the noise amplitude at the pixel level will be reduced and replaced by a lower spatial frequency noise pattern.

Quote
Now let's shrink the sensels: If area A contained 64 smaller Bayer sensels instead of 4, once downrez'd to a single ideal Pixel for area A, for the given SNR as before such a pixel would have the exact same value (32,32,32).

Not really, apart from the practical implications which do not scale down perfectly with geometry. Dividing an area that receives 384 photons into 64 sensels leaves 6 photons each on average, and after a 1/3rd bandpass filter that becomes 2 photons each. But in the theoretical example there are still the same number of photons falling on the same total area, and there is still 2/3rds being filtered out. So the remaining 1/3rd of the original 384 photons for the total area still makes 128 (64 sensels times 2 photons). When you divide the same area up into smaller sample areas, each sample will detect fewer photons, but they still add up to the same number for the area: 1/3rd sampled, 2/3rds interpolated. It will have hardly any effect on the S/N ratio for the total area, none at all in our theoretical example.
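
A minimal simulation of that point (assuming pure shot noise and ideal 1/3rd filters, nothing else):

Code:
import numpy as np

rng = np.random.default_rng(2)
photons_on_area = 384                  # photons arriving on area A per exposure
trials = 200_000

for n_sensels in (4, 64):
    # each sensel sees 1/n of the light, and its filter passes 1/3 of that
    mean_per_sensel = photons_on_area / n_sensels / 3
    counts = rng.poisson(mean_per_sensel, (trials, n_sensels))
    area_total = counts.sum(axis=1)    # recorded photons for the whole of area A
    print(n_sensels, "sensels: mean", round(area_total.mean(), 1),
          " SNR", round(area_total.mean() / area_total.std(), 1))
# both cases: mean ~128 recorded photons, SNR ~ sqrt(128) ~ 11.3

Splitting the area into more, smaller sensels changes the per-sensel numbers but not the photon count, or the SNR, for the area as a whole.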

Quote
Is my confusion with your (128,128,128) statement above clearer now?

I still think it's the 4x larger Foveon type of sensor in your original example that's the basis for any possible confusion. One needs to compare equal areas for a meaningful comparison.

I'm not sure whether there is some subconsciously nagging issue (which is understandable) with the fact that while only '1/3rd' of our photons are actually registered, the other 2/3rds will be supplemented by interpolation to create full RGB output pixels. Those RGB output pixels have (approximately) the same brightness as a full RGB sensor would give, as my demosaicing demonstrations earlier in the thread show. The Bayer CFA converted originals look darker, because 2/3rds of their RGB pixel data is zero, but after supplementing the zeros with interpolated/reconstructed data, the original average brightness is restored.
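
A tiny illustration of that brightness point (a toy sketch for the noise-free uniform case; it fills each channel from the mean of its sampled sites, which stands in for neighbour interpolation here and is not how a real demosaicer works):

Code:
import numpy as np

rgb = np.full((4, 4, 3), 128.0)                 # uniform mid-gray 'scene'

# Bayer mosaic: keep only one channel per site (RGGB pattern), the rest is zero
mosaic = np.zeros_like(rgb)
mosaic[0::2, 0::2, 0] = rgb[0::2, 0::2, 0]      # R
mosaic[0::2, 1::2, 1] = rgb[0::2, 1::2, 1]      # G
mosaic[1::2, 0::2, 1] = rgb[1::2, 0::2, 1]      # G
mosaic[1::2, 1::2, 2] = rgb[1::2, 1::2, 2]      # B

print(mosaic.mean())                            # ~42.7, looks dark: 2/3 of the data is zero

# fill the zeros of each channel from that channel's sampled sites
demosaiced = np.empty_like(rgb)
for c in range(3):
    channel = mosaic[..., c]
    demosaiced[..., c] = channel[channel > 0].mean()

print(demosaiced.mean())                        # 128.0, average brightness restored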

I do not think that another confusion plays a role here, namely the misconception that it takes 4 Bayer CFA sensels to make 1 RGB output pixel. That would be a completely wrong representation of how demosaicing works (it would result in half of the resolution that is actually recorded, which proves that that representation is flawed). But for those who believe that's how demosaicing works: it doesn't.

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

Jack Hogan

  • Sr. Member
  • ****
  • Offline
  • Posts: 798
    • Hikes -more than strolls- with my dog
Re: Foveon vs no on chip filters
« Reply #89 on: January 13, 2013, 12:18:24 pm »

I'm not sure why you are mentioning the S/N ratio here...
Quote
If area A contained 64 smaller Bayer sensels instead of 4, once downrez'd to a single ideal Pixel for area A, for the given SNR as before such a pixel would have the exact same value (32,32,32)

Not really, apart from the practical implications which do not scale down perfectly with geometry.

Here I was referring to the fact that my quote above would result in 64 (2,2,2) demosaiced pixels, each with an SNR of about 1.4.  After downsizing the 64 pixels into one single Pixel for area A, the resulting value for the Pixel would be (2,2,2) with an SNR of about 11.3 - which could be brightness corrected in post to be (32,32,32) [or whatever] at that same SNR.
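
The arithmetic behind those two SNR figures, assuming pure shot noise:

Code:
import math

photons_per_small_sensel = 2                            # 384 / 64 sensels / 3 (filter)
snr_per_pixel = math.sqrt(photons_per_small_sensel)     # ~1.4
snr_area = math.sqrt(64 * photons_per_small_sensel)     # sqrt(128) ~ 11.3
print(round(snr_per_pixel, 1), round(snr_area, 1))      # 1.4 11.3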

I think we are fully aligned :-)
Logged

Fine_Art

  • Sr. Member
  • ****
  • Offline
  • Posts: 1172
Re: Foveon vs no on chip filters
« Reply #90 on: January 13, 2013, 12:25:12 pm »

To be fair to Picture Code, their new Picture Ninja opens the file with a fairly good-looking raw conversion. It was me turning their noise-reduction sliders all the way down that ended up with the splotchy de-bayering posted above. Clearly they did not intend for their system to be used that way. Still, having seen it, inspection of their default raw conversion does show very mild red/green regions.

This would never be an issue with Foveon or another full-chip color-capture system.
Logged