The computer would then receive data from 4x as many pixels, since we have now changed the mapping from 4 photosites -> 1 pixel to 1 photosite -> 1 pixel.
Or, you could just average all the luminance values, ...
…if you knew how much luminance each color of the Bayer filter absorbed, you could compensate for it. For example, assume the green Bayer filter absorbs 10%, the red 20% and the blue 20%. You could then adjust the luminance measured by each photosite according to the Bayer filter in front of it. So this seems straightforward…
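That naive per-filter compensation can be sketched as follows. This is only an illustration of the idea as stated, using the hypothetical absorption numbers from above (10% green, 20% red, 20% blue) and an assumed 2x2 CFA layout; as discussed next, the idea breaks down because real absorption depends on the spectrum of the incident light.

```python
import numpy as np

# Hypothetical absorption fractions from the post above.
ABSORPTION = {"G": 0.10, "R": 0.20, "B": 0.20}

def compensate(raw, cfa_pattern):
    """Divide each photosite's value by the transmission of its filter.

    raw: 2-D array of measured luminance values (the sensor mosaic).
    cfa_pattern: 2x2 list of filter letters, e.g. [["R","G"],["G","B"]].
    """
    out = raw.astype(float).copy()
    for dy in range(2):
        for dx in range(2):
            t = 1.0 - ABSORPTION[cfa_pattern[dy][dx]]
            out[dy::2, dx::2] /= t  # boost by the filter's assumed loss
    return out

# Uniform measured value of 80 at every photosite:
mosaic = np.full((4, 4), 80.0)
corrected = compensate(mosaic, [["R", "G"], ["G", "B"]])
```

Note that even with a uniform input, the "corrected" values differ between green and red/blue sites whenever the assumed absorptions are wrong for the actual light, which is exactly the speckle problem described below.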
Even if I assumed that the light incident on the Bayer filter was 6500 K (when in fact it is not), the calculated luminance would come out as a speckled image.
…I need to know the frequency of the light so I can adjust the luminance of what is coming through each of the individual Bayer filter elements.
Therefore each sample position ends up with a full RGB representation, partly based on measured data and partly on advanced interpolation from many (sometimes more than 8) of the immediately surrounding data samples.
Thank you all for your extensive and complete replies. So, is there a piece of software that will let me see what is on the image sensor, unprocessed? I am just curious. I tried ImageJ (from the NIH), but it doesn't read common raw photo file formats.
http://www.rawdigger.com/
As my coarse understanding of digital photography is still new, please be patient with me. The image sensor of a digicam sits behind a Bayer filter. Each image pixel is derived from four photosites, each photosite behind a colored filter, so that a pixel is constituted by 4 photosites: 2 green, 1 red and 1 blue. This pixel is then the fundamental data unit transmitted to the PC. When displaying to a screen (which consists of red, green and blue phosphors, or another type of color element), the pixel is broken down into color elements by the video driver. These are then sent to the screen for display.
What if the raw image file were used so that the data from each photosite was converted to a pixel? Of course, all color information would be lost, since the only data each photosite can yield is luminance. But for black-and-white photography, only luminance is needed. The computer would then receive data from 4x as many pixels, since we have now changed the mapping from 4 photosites -> 1 pixel to 1 photosite -> 1 pixel.
I tried to read about raw file formats (TIFF and the like), but the literature is rather opaque. Is this concept valid or flawed?
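The proposed 1 photosite -> 1 pixel mapping can be sketched numerically. This is only an illustration of the two interpretations of the same sensor data; the function names and the tiny 4x4 "sensor" are made up for the example.

```python
import numpy as np

def mosaic_to_gray(raw):
    """Treat every photosite as one grayscale pixel (no demosaicing).

    raw: the 2-D sensor mosaic. The result has the same shape, i.e.
    4x the pixel count of the 2x2-binned interpretation below.
    """
    return raw.astype(float)

def binned_gray(raw):
    """The 4-photosites -> 1-pixel view: average each 2x2 CFA cell."""
    return (raw[0::2, 0::2] + raw[0::2, 1::2] +
            raw[1::2, 0::2] + raw[1::2, 1::2]).astype(float) / 4.0

sensor = np.arange(16.0).reshape(4, 4)  # hypothetical 4x4 mosaic
full = mosaic_to_gray(sensor)           # 16 "pixels"
binned = binned_gray(sensor)            # 4 pixels
```

As the thread explains, the catch is that the full-resolution view is not a clean luminance image: each photosite's value is still modulated by the color filter sitting in front of it.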
The thing that you are missing is that the absorption of each of the filters is dependent upon the colour of the light at each of the pixels.
But I imagine this would require an IR illuminator and a Wratten 87a filter. Without the illuminator, the photographs would be rather dark and require long exposures.
Thanks for pointing me to the raw import plugin in your sig, djjoofa.
Interestingly, Photoshop creates 5 different images when I open up one RAF file. Find it here:
http://dl.dropbox.com/u/29887336/fuji%20x10%20test%20image/DSCF1215.RAF
One large image is full frame, 2848x2144 pixels, wherein you can clearly see the Bayer array when viewing the image at 400% magnification. Then there are 4 additional images at 1424x1072, which probably represent the individual GRGB sets of the CFA.
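Splitting a full mosaic into those four half-resolution sub-planes is just a strided slice per 2x2 position. The sketch below assumes a conventional RGGB layout for simplicity (the X10's actual CFA arrangement may differ, so treat the plane labels as illustrative):

```python
import numpy as np

def split_cfa_planes(raw):
    """Split a sensor mosaic into its four CFA sub-planes.

    Assuming an RGGB layout, the planes are R, G1, G2 and B, each at
    half the linear resolution (e.g. 1424x1072 from a 2848x2144 frame).
    """
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

# Hypothetical full frame with the dimensions mentioned above
# (2848 wide x 2144 high, so the array shape is rows x columns):
mosaic = np.arange(2848 * 2144).reshape(2144, 2848)
planes = split_cfa_planes(mosaic)
```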
You won't get a quadrupling of resolution. No one has mentioned the Foveon sensor. If memory serves, it shows about a 1/3 to 1/2 increase in detail resolution over Bayer sensors (e.g., a 12 MP Foveon shows resolution equivalent to that of roughly a 16-18 MP Bayer sensor).
Hi Bob,
If your scene contains lots of uncorrelated information in the color channels, a Foveon-style sensor should be able to resolve more luminance information than a similarly pitched Bayer-style sensor, regardless of AA-filter. If your scene is truly wide-band "colorless", then Bayer should have no additional loss of spatial resolution.
The Sigma cameras with a Foveon sensor don't use an AA-filter. That's what causes the higher signal level in the highest spatial frequencies. Resolution is determined by the sampling density and the System MTF.
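To make "sampling density" concrete: the highest spatial frequency a sensor can represent without aliasing is the Nyquist limit, half the sampling rate. A quick sketch with an assumed, purely illustrative 5 µm pixel pitch:

```python
# Nyquist limit for a hypothetical 5 micrometre pixel pitch.
pitch_mm = 0.005                      # 5 um expressed in mm
nyquist_lp_mm = 1.0 / (2 * pitch_mm)  # line pairs per mm
# Detail beyond this frequency aliases unless the system MTF
# (lens plus any AA filter) has already attenuated it.
```

Without an AA filter, more contrast survives near that limit, which is the higher signal level at high spatial frequencies mentioned above, at the cost of aliasing artifacts.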
Cheers,
Bart
I guess that it is possible to remove the CFA if you contact a company like MaxMax, at least for some cameras? This should give a significant increase in luminance resolution on top of the increase from removing the AA-filter. That sounds similar to the idea the thread starter had in mind?
Hi h,
Am I right that your test is purely a computer simulation with equal modulation in all color channels?
My experiments (http://bvdwolf.home.xs4all.nl/main/foto/bayer/bayer_cfa.htm) point to the conclusion that only a modest (<7%) increase in luminance resolution can be expected.
Cheers,
Bart
If you had repeated the test with highly decorrelated color channels, or an extremely hot/cold spectral balanced scene, I suspect that you would have gotten a larger improvement.
Sure, but how relevant would such an unlikely combination of color and(!) brightness be? And therein lies the success of the Bayer sensor, I guess: despite possible theoretical shortcomings, it tends to work well for the kind of images that we tend to care about.