The actual question is: in theory, does re-sampling downward alleviate the so-called Bayer-itis, i.e. the uncertainty of correct colors, the poor color resolution, the occasional strange coloration near high-contrast edges, and whatever other mud is thrown at the Bayer algorithm by those not in love with it? (Please note that I am not personally decrying Bayer here.)
I believe that the CFA is the problem, not the demosaic (although the latter is needed because of the former).
The CFA leads to aliasing. Aliasing can generally spread to any frequency (including DC). Thus, I would assume that the presence of a CFA leads to the possibility of theoretical errors, no matter how far you downsample, as long as the scene + lens + OLPF provides sufficient high-spatial-frequency color-difference detail.
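To illustrate the "any frequency, including DC" point, here is a small numpy sketch (the rates are made up purely for illustration): a tone just under the sampling rate folds down to a low frequency, and a tone at exactly the sampling rate comes out as a constant, i.e. DC.

```python
import numpy as np

fs = 100.0                       # hypothetical sampling rate
n = np.arange(64)                # sample indices

# A tone at 97 (above fs/2 = 50) folds down to |100 - 97| = 3:
x = np.cos(2 * np.pi * 97.0 * n / fs)
x_alias = np.cos(2 * np.pi * 3.0 * n / fs)
print(np.allclose(x, x_alias))   # True: the two tones are indistinguishable

# A tone at exactly fs aliases all the way to DC (a constant):
x_dc = np.cos(2 * np.pi * fs * n / fs)
print(np.allclose(x_dc, 1.0))    # True: every sample is cos(2*pi*k) = 1
```

Once sampled, nothing downstream can tell the aliased component from a genuine low-frequency (or DC) component.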
I believe that practice shows this to be a minor problem. Interestingly, any camera, including Foveon or achromatic ones (or any sampling device in general), can produce aliasing if there is sufficient high-frequency content >= fs/2 and there is not sufficient pre-filtering to suppress that information. Audio recorders usually have sufficient filtering so as, for all intents and purposes, to avoid such aliasing, while cameras do not. Perhaps we will get there with 50 MP APS-C sensors (if lenses don't make giant leaps at the same time).
The Color Filter Array has nothing to do, directly at least, with aliasing. Aliasing is the result of sampling. The indirect effects of having a CFA present are that the sampling rate is therefore different for different colors, and that helps to make moire patterns more visible and more annoying.

Quote: If you have a perfectly pre-filtered achromatic sensor with zero aliasing and suddenly insert a CFA filter, you may have aliasing. The CFA filter in that case is the direct cause of aliasing.

Quote: Oversampling is relatively easy for most audio applications, plus electronic filters are now very high quality, which makes audio applications relatively higher quality in that respect than are optical applications.

It is true that 50 MP images will provide a greater degree of oversampling than, say, 24 MP sensors, but realistically it won't be until sensors get up to around 150 to 200 MP that anti-aliasing filters will become totally meaningless. There will be a transition phase.
-h
If you have a perfectly pre-filtered achromatic sensor with zero aliasing and suddenly insert a CFA filter, you may have aliasing. The CFA filter in that case is the direct cause of aliasing.
That reverses the cause and effect.

Let us agree to disagree. Life is too short to discuss semantics.
Oversampling is relatively easy for most audio applications, plus electronic filters are now very high quality, which makes audio applications relatively higher quality in that respect than are optical applications.

If cameras were more like audio A/D converters, I like to think that they would be more Nokia-like: sample roughly at crazy high rates (41 MP?). Then downsample using the finest digital filters you can afford. In the process you can have practically any passband response (as long as you have sufficient stop-band attenuation), and the downsampled samples will have better signal-to-noise ratio due to averaging.
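A rough sketch of that averaging gain (toy numbers, not any real camera pipeline): decimating by 16 with a plain boxcar filter reduces the noise standard deviation by about sqrt(16) = 4.

```python
import numpy as np

rng = np.random.default_rng(0)
n_out, factor = 1000, 16           # 16 oversampled values per output sample
noise = rng.normal(0.0, 1.0, size=(n_out, factor))

# Signal is constant here, so any residual variation is pure noise.
signal = 5.0
oversampled = signal + noise

# Simplest decimation filter: average each group of 16 samples (boxcar).
downsampled = oversampled.mean(axis=1)

print(noise.std())                 # ~1.0
print(downsampled.std())           # ~1/sqrt(16) = 0.25
```

A real oversampling converter would use a proper low-pass decimation filter rather than a boxcar, but the noise-averaging effect is the same in kind.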
After some severe pixel-peeping I did find a reduction in noise level and more homogeneous color in the patches after a reduction to 25%.

You might find this thread interesting:
If the filter is correct for any given sampling rate it will not be correct for any other sampling rate. Addition of the CFA changes the sampling rate, and therefore the AA filter is no longer correct. The direct cause is sampling, not filtering.
With or without the CFA, if it is "perfectly pre-filtered" there will be no aliasing.
Floyd is correct. It is only the fact that we now start subsampling the color bands, Red and Blue even more than Green, that will be the cause of reintroducing aliasing. We reduce color resolution by sparser sampling, but we only reduce luminance resolution a little, because all sensels still contribute (weighted) to luminance.

Is it the weapon or the person behind it who causes someone to be injured? We can get into all kinds of philosophical debates over that, but it seems obvious that both are needed. Remove the man or remove the gun, and no-one would get hurt.
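A 1-D sketch of that subsampling argument (the every-4th-pixel red pitch is a simplification of the real 2-D Bayer geometry): a pattern that the denser green sampling could still resolve is, at the red sites, indistinguishable from a much coarser pattern.

```python
import numpy as np

n = np.arange(0, 64, 4)            # "red" sensel positions: every 4th pixel
f = 0.20                           # cycles/sensel: fine enough for green
                                   # (Nyquist 0.25), too fine for red (0.125)
red_samples = np.cos(2 * np.pi * f * n)

# Red cannot tell this pattern from a much coarser 0.05 cycles/sensel one:
alias = np.cos(2 * np.pi * 0.05 * n)
print(np.allclose(red_samples, alias))   # True
```

The demosaicker then has to guess which of the two patterns was actually in front of the camera, which is where false color comes from.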
I believe that Floyd here summarizes the question that the OP seems to be concerned about: does the Bayer arrangement affect lower frequencies (such as would be the expected output of a high-quality downscale)?

Quote from: Floyd Davidson: With or without the CFA, if it is "perfectly pre-filtered" there will be no aliasing.

Indeed, but...
With no CFA, there would be less aliasing (ideally none in the constructed example). With the CFA, there is more aliasing (depending on scene, lens, etc.). The act of introducing a CFA has a causal effect on the degree of aliasing. I don't get why this should cause much debate.

In practice, it isn't true.
The sampling system (sensor) and the pre-filter stay constant in both cases.

The sampling rate necessarily changes when a CFA is used to encode color information. It is not reasonable to use the same AA filter for different sampling rates.
The only change is the introduction of a CFA. This CFA alters the effective sampling characteristics of the system, by making the achromatic sensor partially blind to certain parts of the spectrum at certain spatial locations. If there is significant high-frequency spatial-chromatic detail, you will get aliasing.

But that is also true if the sampling rate is changed for other reasons. Regardless of the reason, the proper AA filter is required, and it is necessarily optimized for the specific sampling rate.
I believe that Floyd here summarizes the question that the OP seems to be concerned about: does the Bayer arrangement affect lower frequencies (such as would be the expected output of a high-quality downscale)?

If the system is poorly engineered it will provide poor data. Mismatched design is not a component problem, it's a design problem.
The answer is yes, it can. If the prefiltering is unaltered when a CFA is introduced, you have the possibility of aliasing.
If you introduce the (theoretically necessary) prefilter cutoff, you will (for practically available filters) affect lower frequencies, and you will render the camera far worse than what is possible for full-resolution output.

You are ignoring the fact that the "far worse" camera produces far more useful information, granted at lower resolution.
The theoretical answer is not complete without some practical consideration. I will hazard a guess that you would have to have some quite unusual scene for color aliasing to be significant/disturbing if you downsample a Bayer CFA image by a factor of 2x2 or more. Exotic bird feathers are my best bet.
Likewise, not all aliasing distortion is high frequency, and therefore it is not all removed by downsampling. For example, if the camera does not have any AA filtering other than the limited resolution of the lens, nothing prevents low-frequency aliasing distortion that will not be dramatically changed by downsampling.
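A sketch of that point, with made-up frequencies: a fine pattern sampled without prefiltering is recorded as a spurious coarse pattern, and a subsequent 4x downsample with averaging, which does suppress genuine high frequencies, leaves the spurious low-frequency component in place.

```python
import numpy as np

fs = 100.0                           # hypothetical sensel sampling rate
n = np.arange(4096)

# A fine 98 cycles/unit pattern, sampled with no prefiltering,
# is recorded as a coarse 2 cycles/unit pattern (98 folds about fs/2):
x = np.cos(2 * np.pi * 98.0 * n / fs)

# Downsample 4x by plain averaging (a low-pass operation):
y = x.reshape(-1, 4).mean(axis=1)

# The spurious low-frequency component survives the low-pass:
spec = np.abs(np.fft.rfft(y))
peak_bin = spec[1:].argmax() + 1
freq = peak_bin * (fs / 4) / len(y)  # dominant frequency after downsampling
print(freq)                          # ~2.0 cycles/unit, nowhere near the true 98
```

The downsampler cannot repair this: by the time it runs, the 2 cycles/unit pattern is indistinguishable from real scene content at that frequency.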
@h, you said "Aliasing can generally spread to any frequency (including DC)" - could you explain how that happens and what does DC aliasing look like?

http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
In practice, it isn't true.

It seems that both you and I know our sampling theory. Let us leave it at that.
Floyd, please help a relative noob. What form does "low frequency aliasing distortion" take?
Thank you, Gentlemen for the further interesting discussion and thanks also @h for the link.
Here's some results I got with the 1951 target:
(http://kronometric.org/phot/iq/compBayerDS/compBayerFoveonPSE.jpg)
Another imaging defect resulting from insufficient sampling of a target with discrete lines thicker than the pixel width is ghosting, as illustrated in your shot and below, where the vertical black line has double grey ghosts due to insufficient sampling.
I believe that the bandwidth of the pulse that you are trying to record in your sketch is too wide? A pulse containing infinitely sharp transitions translates into a spatial-frequency-domain sinc of infinite extent, meaning that it occupies (towards) infinite bandwidth.
A "proper"*) sampling system cannot generally recreate that pulse at that sampling rate. The options are either to:
1. Prefilter before sampling (so as to remove frequencies that cannot be reliably recreated)
2. Increase the sampling rate (for this example: towards infinity)
3. Sample with insufficient prefiltering, violating Nyquist, living with aliasing
Your (computer-generated) example seems to use a simple pre-filter (relying on the area integration of each sensel). Something sinc-like would probably give more accurate results, but is not usually possible in optical systems.
If the pulse had been shifted by 1/2 sensel left or right, the apparent sampling system would seem to work: the (boxcar-filtered) output would be identical to the (boxcar-filtered) input. This might be the case for font designers and graphic arts that have a high degree of control over the sampling grid. It cannot be relied upon for general cameras where scene elements move unpredictably around. I think there is an unnecessarily big divide between how "dsp people" think about this and how "graphics/image people" think about this.
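The half-sensel dependence is easy to reproduce with a 1-D area-integration (boxcar) model of the sensels (a simplified sketch, ignoring lens blur and noise):

```python
import numpy as np

def boxcar_sample(edges, pulse_lo, pulse_hi):
    # Area-integrate a unit-height pulse over each sensel (boxcar prefilter):
    # each output value is the overlap length of the pulse with that sensel.
    lo = np.clip(pulse_lo, edges[:-1], edges[1:])
    hi = np.clip(pulse_hi, edges[:-1], edges[1:])
    return hi - lo

edges = np.arange(0.0, 11.0)             # 10 sensels of width 1

# Pulse aligned with sensel boundaries: only sensel [4,5) responds,
# with the full value 1.0 - an exact reproduction of the line.
print(boxcar_sample(edges, 4.0, 5.0))

# Same pulse shifted by half a sensel: sensels [4,5) and [5,6) each
# record 0.5 - the double grey "ghosts".
print(boxcar_sample(edges, 4.5, 5.5))
```

Where the energy lands thus depends entirely on the sub-sensel phase of the line, which a photographer of moving scenes cannot control.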
-h
*)Taken here to mean a close approximation to a Shannon-Nyquist sampler operating in the baseband
What you say is likely true, but in real world images (as well as your posted images) one does see the ghosting that I described. The effect is shown below at various frequencies with a line pattern. What do you think?
Hi Bill,
That's correct, but it also shows why bi-tonal targets are unsuited for evaluation of discrete sampling sensor systems. In normal scenes such extremely high spatial frequency edge transitions rarely exist, but when they do they will cause issues due to imperfect low-pass filtering of the capture system.
Cameras without an additional OLPF (like the D800E, Foveon-based cameras, MFDBs) only have their optical aberrations, diffraction, and defocus to reduce these unwanted artifacts. All solutions represent imperfect trade-offs, so it's a case of picking one's poison. Part of the solution is to increase the sampling density, because that will reduce the likelihood of such fine edge transitions being present, or in adequate focus.
Cheers,
Bart
What you say is likely true, but in real world images (as well as your posted images) one does see the ghosting that I described. The effect is shown below at various frequencies with a line pattern. This is with the D800e, which lacks a low pass filter. What do you think?

It may be similar to photon shot noise. It is there, no-one likes it, and there is no obvious cure except increasing the amount of photons.