Luminous Landscape Forum

Raw & Post Processing, Printing => Digital Image Processing => Topic started by: crames on January 12, 2010, 09:48:58 pm

Title: Is this Aliasing?
Post by: crames on January 12, 2010, 09:48:58 pm
There are a lot of threads here now in the LL forum about aliasing, anti-alias filters, foveon vs. bayer array, upsizing, downsizing, etc.

What does aliasing look like?

Here is a synthetic image of a circular sine wave that has been sampled close to the Nyquist rate.

(http://sites.google.com/site/cliffpicsmisc/home/sine204_zoom.png)

The original on the left is shown zoomed progressively larger to reveal the details in the pixel structure.

What do you see here? Is this what you would call "aliasing?"

Regards,
Cliff
Title: Is this Aliasing?
Post by: jjlphoto on January 12, 2010, 10:53:24 pm
Not exactly.
Title: Is this Aliasing?
Post by: ErikKaffehr on January 13, 2010, 01:06:06 am
Hi,

Yes, it's what I would call aliasing. I'm not a signal processing scientist, however, so don't take my word for it!

Best regards
Erik


Quote from: crames
There are a lot of threads here now in the LL forum about aliasing, anti-alias filters, foveon vs. bayer array, upsizing, downsizing, etc.

What does aliasing look like?

Here is a synthetic image of a circular sine wave that has been sampled close to the Nyquist rate.

(http://sites.google.com/site/cliffpicsmisc/home/sine204_zoom.png)

The original on the left is shown zoomed progressively larger to reveal the details in the pixel structure.

What do you see here? Is this what you would call "aliasing?"

Regards,
Cliff
Title: Is this Aliasing?
Post by: thierrylegros396 on January 13, 2010, 03:42:20 am
Have a look at A/D converter manufacturers' websites, like Texas Instruments and others.

You will find a lot of interesting explanations about aliasing and the cures used.

Thierry
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 13, 2010, 06:24:18 am
Quote from: crames
What do you see here? Is this what you would call "aliasing?"

Hi Cliff,

Yes, that's aliasing. To put it in non-scientific terms, an alias is created when small details with a regular spacing are sampled with a regular spacing interval that's too large, AKA undersampling. When the regularly spaced detail is undersampled, larger aliases can/will appear. To reliably prevent undersampling, more than 2 samples need to be taken of each single feature.
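A quick numeric way to see this folding (a minimal Octave/Matlab sketch; the frequencies here are made-up illustration values, not taken from the image above):

Code:
fs = 1;                  % 1 sample per pixel
f  = 0.6;                % cycles per pixel, above the 0.5 Nyquist limit
n  = 0:19;               % 20 integer sample positions
x  = cos(2*pi*f*n);      % the "real" detail, undersampled
xa = cos(2*pi*(fs-f)*n); % a 0.4 cycles/pixel sine - the alias
max(abs(x - xa))         % ~1e-15: once sampled, the two are indistinguishable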

When you increase your sampling frequency by a factor of at least Sqrt(2), you'll also satisfy the diagonal Nyquist requirements of >2 pixels per cycle:
[attachment=19431:ConcentricSine.png]

You may also like to look at this summary: http://en.wikipedia.org/wiki/Aliasing (http://en.wikipedia.org/wiki/Aliasing)

Cheers,
Bart
Title: Is this Aliasing?
Post by: EduPerez on January 13, 2010, 07:43:18 am
AFAIK, "aliasing" is the origin of the problem (BartvanderWolf explained it very well);
those artificial patterns that can be seen are called the Moiré effect (http://en.wikipedia.org/wiki/Moire_effect).
Title: Is this Aliasing?
Post by: crames on January 13, 2010, 07:18:11 pm
Ok, I apologize, it was a trick question.

There is no aliasing in the posted image.

It is a sine with a period of 2.04 samples per cycle - a frequency of about 98% of Nyquist, so definitely below it.
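For anyone who wants to reproduce the test pattern, a minimal Octave/Matlab sketch along these lines will do (the size and file name are just placeholders, not necessarily what was used for the posted image):

Code:
N = 256;                              % image size in pixels (assumed)
period = 2.04;                        % samples per cycle of the radial sine
[x, y] = meshgrid(-N/2:N/2-1);        % pixel coordinates about the centre
r = sqrt(x.^2 + y.^2);                % radius from the image centre
img = 0.5 + 0.5*cos(2*pi*r/period);   % circular sine, range 0..1
imwrite(img, 'sine204.png');          % placeholder output name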

Here is a center crop of what you get if you apply a high-quality reconstruction filter to the original image:

(http://sites.google.com/site/cliffpicsmisc/home/sine204_i4yaro.png)

vs. bicubic:

(http://sites.google.com/site/cliffpicsmisc/home/sine204_i4bicubic.png)

The point is that the failure to reconstruct, or reconstruction error, can be mistaken for aliasing, and can even badly affect images that have no aliasing at all.

Just something to keep in mind when comparing cameras, AA filters, and technologies like foveon vs. bayer.

Cliff

Note to Bart: No need to increase the sampling rate for the diagonals, in this case they already have a sqrt(2) advantage over horizontal and vertical.
Title: Is this Aliasing?
Post by: joofa on January 13, 2010, 08:10:49 pm
Quote from: crames
The point is that the failure to reconstruct, or reconstruction error, can be mistaken for aliasing, and can even badly affect images that have no aliasing at all.

Cliff mentions an important and sometimes less-known fact: even if the samples are alias-free, "aliasing" can result in reconstruction if the bandwidth of the reconstruction filter is wider than desired. For an ideal sinc filter that would mean the sinc is no longer 0 at sample locations other than the pivot.
Title: Is this Aliasing?
Post by: ejmartin on January 13, 2010, 08:31:02 pm
What is a reconstruction filter?
Title: Is this Aliasing?
Post by: joofa on January 13, 2010, 08:46:12 pm
Quote from: ejmartin
What is a reconstruction filter?

A filter that converts digital samples into continuous output again. Can be the input for resampling operations.
Title: Is this Aliasing?
Post by: crames on January 13, 2010, 09:16:43 pm
It's the "other end" of a sampling system, to reconstruct the original image from its samples. Ideally, it's a bandlimited sinc interpolator, to remove spectral replicas caused by the sampling process. You can approximate reconstruction to the continuous domain by interpolating to a higher res.

It was difficult to get a good result so close to the Nyquist rate. For the example I used a sinc interpolation method in Matlab by L. Yaroslavsky (http://www.eng.tau.ac.il/%7Eyaro/).
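To make the idea concrete, here is a minimal 1-D Octave/Matlab sketch of textbook sinc (Whittaker-Shannon) reconstruction - not the Yaroslavsky code, just the direct sum evaluated on a 4x finer grid:

Code:
n = 0:255;                              % integer sample positions
x = cos(2*pi*n/2.04);                   % a sine at 2.04 samples/cycle
t = 0:0.25:255;                         % 4x finer reconstruction grid
s = @(u) (u==0) + (u~=0).*sin(pi*u)./(pi*u + (u==0));  % sinc, safe at u = 0
X = zeros(size(t));
for k = 1:numel(n)
    X = X + x(k) * s(t - n(k));         % Whittaker-Shannon sum
end
plot(t, X);                             % the smooth sine reappears (edges excepted)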

Cliff
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 15, 2010, 03:00:21 am
Quote from: crames
Ok, I apologize, it was a trick question.

There is no aliasing in the posted image.

It is a sine with a period of 2.04 samples per cycle - a frequency of about 98% of Nyquist, so definitely below it.

Here is a center crop of what you get if you apply a high-quality reconstruction filter to the original image:

(http://sites.google.com/site/cliffpicsmisc/home/sine204_i4yaro.png)


Hi Cliff,

You are correct that the reconstruction filter used makes a difference.

I assume you downsampled the original image, because this image is about 8 pixels per cycle (over-sampled 4x). If so, then how does it look at 2.04 samples per cycle with the high-quality reconstruction filter?

Cheers,
Bart
Title: Is this Aliasing?
Post by: crames on January 15, 2010, 09:01:31 am
Quote from: BartvanderWolf
I assume you downsampled the original image, because this image is about 8 pixels per cycle (over-sampled 4x). If so, then how does it look at 2.04 samples per cycle with the high-quality reconstruction filter?

The problem is that reconstruction results in a more finely-sampled (bigger) image. The image marked 100% shows the original samples; it has not been downsampled. The reconstruction example is a crop of a 4x upres. If you use nearest neighbor to reduce the size of the reconstruction by 4, you get back the original (possibly with a small phase shift due to how the nearest neighbor is done).

A 4x upres is more than needed. Here is a 2x upres/reconstruction (Yaroslavsky method):
(http://sites.google.com/site/cliffpicsmisc/home/sine204_i2yaro.png)

The 100% original (2.04 pixels/cycle):
(http://sites.google.com/site/cliffpicsmisc/home/sine204.png)

Just to show it's not aliased, the FFT magnitude of the 100% original:
(http://sites.google.com/site/cliffpicsmisc/home/sine204_fftmag.png)

Except for the case of pathological examples like this, reconstruction by a 2x bicubic upres is often enough to get rid of obvious reconstruction errors. But it always involves some kind of upres/interpolation.

So the moral of the story is, it's bad to pixel-peep at 100%. 200% is better!

Rgds,
Cliff
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 15, 2010, 09:20:16 am
Quote from: crames
The problem is that reconstruction results in a more finely-sampled (bigger) image. The image marked 100% shows the original samples; it has not been downsampled. The reconstruction example is a crop of a 4x upres. If you use nearest neighbor to reduce the size of the reconstruction by 4, you get back the original (possibly with a small phase shift due to how the nearest neighbor is done).

A 4x upres is more than needed. Here is a 2x upres/reconstruction (Yaroslavsky method):

Would you mind posting the Matlab code you used on that image? I'd like to try using it for some demosaic experiments.
Title: Is this Aliasing?
Post by: crames on January 15, 2010, 10:27:15 am
Quote from: Jonathan Wienke
Would you mind posting the Matlab code you used on that image? I'd like to try using it for some demosaic experiments.

I used interp2d.m, found in the following set from Yaroslavsky's web site: http://www.eng.tau.ac.il/~yaro/adiplab/m-files.zip (http://www.eng.tau.ac.il/%7Eyaro/adiplab/m-files.zip).

On real images, it can tend to create ripples (due to it being done in the Discrete Cosine Transform domain). There are ways around that - Yaroslavsky has papers on his site that address that problem, but I didn't find any matlab code for it.

Rgds,
Cliff
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 15, 2010, 11:25:14 am
Quote from: crames
The problem is that reconstruction results in a more finely-sampled (bigger) image.

Maybe it's my unfamiliarity with the jargon, but isn't that called interpolation instead of reconstruction?

Quote
The image marked 100% shows the original samples; it has not been downsampled.

Again, excuse me, but I find that hard to believe, unless you mean that it attempts to show the original samples. If those are the original pixels, then it is impossible to arrive at your interpolated/reconstructed version without knowledge of how the pattern was supposed to look.

Quote
Just to show it's not aliased, the FFT magnitude of the 100% original:
(http://sites.google.com/site/cliffpicsmisc/home/sine204_fftmag.png)

But that is the FFT magnitude of the mathematical model, not the aliased image shown at the top of the thread as 100%.

Quote
So the moral of the story is, it's bad to  pixel-peep at 100%. 200% is better!

Indeed, oversampling at the moment of capture/acquisition has its benefits. Sometimes it's a luxury we cannot afford, so I'm afraid we'll be stuck with compromises, with the Bayer CFA compounding them and all.

Cheers,
Bart
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 15, 2010, 11:55:00 am
Quote from: crames
On real images, it can tend to create ripples (due to it being done in the Discrete Cosine Transform domain). There are ways around that - Yaroslavsky has papers on his site that address that problem, but I didn't find any matlab code for it.

I downloaded some of the PDFs, and found them very interesting reading, especially the one about fast discrete sinc interpolation. I thought the experiments he did rotating the images in multiple small increments using various resampling techniques were very interesting. The thought I had was to use some variant of his fast discrete sinc reconstruction filter to do Bayer demosaic interpolation, then apply deconvolution-based lens blur corrections using PSFs derived from images of a special test target shot with the same camera/lens combination and demosaiced with the same algorithm. The main idea behind this is that the PSFs would not only reflect the blur characteristics of the lens, but of the camera's AA filter and the demosaic algorithm as well.
Title: Is this Aliasing?
Post by: joofa on January 15, 2010, 08:25:19 pm
Quote from: BartvanderWolf
Maybe it's my unfamiliarity with the jargon, but isn't that called interpolation instead of reconstruction?

The term reconstruction has a wider usage associated with it; it is not restricted to interpolation. Reconstruction can also apply to a continuous signal obtained after downsampling the samples. Additionally, "interpolation" has a technical meaning associated with it, viz., that the upsampled/interpolated signal has the same values at the original sampling locations. Many approximation-theory-based reconstruction methods, on the other hand, upsample without regard to this property and may achieve better MSE figures. Moreover, reconstruction methods can incorporate regularity conditions to restrict the smoothness of the reconstructed signal, which makes them differ from interpolation on the technical grounds mentioned before.

Quote from: Jonathan Wienke
I downloaded some of the PDFs, and found them very interesting reading, especially the one about fast discrete sinc interpolation.

It is not a new thing; the technique has been known for decades.
Title: Is this Aliasing?
Post by: crames on January 16, 2010, 12:35:10 am
Quote from: BartvanderWolf
Maybe it's my unfamiliarity with the jargon, but isn't that called interpolation instead of reconstruction?

Interpolation is one of the ways to reconstruct a function (in this case the pattern of light on the focal plane) from its samples: the reconstruction is done by interpolating curves between the sample points.

Quote
Again, excuse me, but I find that hard to believe, unless you mean that it attempts to show the original samples. If those are the original pixels, then it is impossible to arrive at your interpolated/reconstructed version without knowledge of how the pattern was supposed to look.

Those are the actual samples. I provided a link to the interpolating code. If you don't have access to Matlab, maybe Jonathan can confirm for you that those are the actual results.

Bart, I know it's hard to believe because it goes against intuition. Sinc interpolation can seem like magic. All it is, is strict application of the sampling theorem - surely you don't think that is invalid!

Quote
But that is the FFT magnitude of the mathematical model, not the aliased image shown at the top of the thread as 100%.

That is the FFT of the 100% image. Again, the original 100% image is not aliased!

Rgds,
Cliff
Title: Is this Aliasing?
Post by: crames on January 16, 2010, 12:47:04 am
Quote from: Jonathan Wienke
I downloaded some of the PDFs, and found them very interesting reading, especially the one about fast discrete sinc interpolation. I thought the experiments he did rotating the images in multiple small increments using various resampling techniques were very interesting.

He does seem to be a very smart guy!

Quote
The thought I had was to use some variant of his fast discrete sinc reconstruction filter to do Bayer demosaic interpolation, then apply deconvolution-based lens blur corrections using PSFs derived from images of a special test target shot with the same camera/lens combination and demosaiced with the same algorithm. The main idea behind this is that the PSFs would not only reflect the blur characteristics of the lens, but of the camera's AA filter and the demosaic algorithm as well.

A piece of cake. No, I'm kidding, that sounds really challenging.

Keep me posted, especially if you figure out how to combine reconstruction with demosaicking.

Rgds,
Cliff
Title: Is this Aliasing?
Post by: ejmartin on January 16, 2010, 09:00:19 am
Quote from: crames
I used interp2d.m, found in the following set from Yaroslavsky's web site: http://www.eng.tau.ac.il/~yaro/adiplab/m-files.zip (http://www.eng.tau.ac.il/%7Eyaro/adiplab/m-files.zip).

On real images, it can tend to create ripples (due to it being done in the Discrete Cosine Transform domain). There are ways around that - Yaroslavsky has papers on his site that address that problem, but I didn't find any matlab code for it.

Rgds,
Cliff


Which paper were you referring to (about suppressing ringing artifacts)?  I'm curious how that is done.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 16, 2010, 09:51:03 am
Quote from: crames
Those are the actual samples. I provided a link to the interpolating code. If you don't have access to Matlab, maybe Jonathan can confirm for you that those are the actual results.

You can download a 15-day free trial of matlab from:
http://www.mathworks.com/products/matlab/tryit.html?ref=ml_b (http://www.mathworks.com/products/matlab/tryit.html?ref=ml_b)
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 16, 2010, 09:54:35 am
Quote from: ejmartin
Which paper were you referring to (about suppressing ringing artifacts)?  I'm curious how that is done.

It's discussed in http://www.eng.tau.ac.il/~yaro/RecentPubli...ion_ASTbook.pdf (http://www.eng.tau.ac.il/~yaro/RecentPublications/ps&pdf/FastSincInterpolation_ASTbook.pdf) starting on page 50.

I find it a bit curious that he's using a switchover to nearest-neighbor interpolation in conditions prone to ringing. The approach I took to prevent ringing in the natural cubic spline interpolation implementation in PixelClarity (http://luminous-landscape.com/forum/index.php?showtopic=39453) is to limit the spline's z parameter values, as defined by:
http://en.wikipedia.org/wiki/Spline_interp...e_interpolation (http://en.wikipedia.org/wiki/Spline_interpolation#Cubic_spline_interpolation)

By limiting z to ± (maximum_value - minimum_value) / n, where n is approximately 4, the tendency of the cubic spline to ring is greatly reduced, to the point where ringing artifacts are nearly impossible to find visually. Limiting the value of z causes the spline interpolation function to behave more like a linear interpolation function in cases where z has been limited; the more closely z is adjusted toward 0, the more linear the interpolation function becomes. If you force z to 0 for all sample values, spline interpolation becomes indistinguishable from linear interpolation.
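Roughly, in Octave/Matlab terms (only a sketch of the idea, not the actual PixelClarity code; z here means the spline's second derivatives at the knots):

Code:
function yi = limited_spline(y, xi, n)
  % y  : samples at integer positions 1..numel(y)
  % xi : query positions inside [1, numel(y)]
  % n  : clamp divisor, ~4 as suggested above
  y = y(:).';  xi = xi(:).';
  N = numel(y);
  % natural cubic spline: tridiagonal system for the second derivatives z
  A = diag(4*ones(N-2,1)) + diag(ones(N-3,1),1) + diag(ones(N-3,1),-1);
  r = 6*(y(1:N-2) - 2*y(2:N-1) + y(3:N)).';
  z = [0, (A\r).', 0];                    % z(1) = z(N) = 0 (natural ends)
  zlim = (max(y) - min(y))/n;             % the anti-ringing limit
  z = max(min(z, zlim), -zlim);           % clamped z -> behaves more linearly
  % evaluate the piecewise cubic (unit knot spacing, t in [0,1))
  k = min(floor(xi), N-1);  t = xi - k;
  yi = z(k+1).*t.^3/6 + z(k).*(1-t).^3/6 ...
     + (y(k+1) - z(k+1)/6).*t + (y(k) - z(k)/6).*(1-t);
end

Making n larger tightens the clamp and pushes the result toward plain linear interpolation; n around 4 matches the description above.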

I would think that a gradual blended shift toward linear interpolation would be more elegant than simply switching to nearest-neighbor in problem areas.
Title: Is this Aliasing?
Post by: crames on January 16, 2010, 10:33:40 am
At the risk of belaboring the point, here's a real-world example of aliasing that's not really aliasing.

I'm sure everyone has looked at the examples at maxmax.com, comparing the stock 5D with the maxmax HotRod modification (AA filter removed).

Crops of the famous air conditioner (only green channels shown) -

Stock:
(http://sites.google.com/site/cliffpicsmisc/home/5dstock_maxmax.png)

HotRod as posted by MaxMax:
(http://sites.google.com/site/cliffpicsmisc/home/5dhrcrop_maxmax.png)

HotRod with better raw conversion combined with sinc upres by 2:
(http://sites.google.com/site/cliffpicsmisc/home/5dhrcropi2.png)

Not quite so bad now, is it? Still a little moiré in the grill (possibly a demosaicking fault), but the jaggies are gone. Jaggies do not necessarily mean aliasing.

Rgds,
Cliff
Title: Is this Aliasing?
Post by: ejmartin on January 16, 2010, 11:25:42 am
Quote from: crames
At the risk of belaboring the point, here's a real-world example of aliasing that's not really aliasing.

I'm sure everyone has looked at the examples at maxmax.com, comparing the stock 5D with the maxmax HotRod modification (AA filter removed).

Crops of the famous air conditioner (only green channels shown) -

Stock:
http://sites.google.com/site/cliffpicsmisc...tock_maxmax.png (http://sites.google.com/site/cliffpicsmisc/home/5dstock_maxmax.png)

HotRod as posted by MaxMax:
http://sites.google.com/site/cliffpicsmisc...crop_maxmax.png (http://sites.google.com/site/cliffpicsmisc/home/5dhrcrop_maxmax.png)

HotRod with better raw conversion combined with sinc upres by 2:
http://sites.google.com/site/cliffpicsmisc.../5dhrcropi2.png (http://sites.google.com/site/cliffpicsmisc/home/5dhrcropi2.png)

Not quite so bad now, is it? Still a little moiré in the grill (possibly a demosaicking fault), but the jaggies are gone. Jaggies do not necessarily mean aliasing.

Rgds,
Cliff

All your examples involve upsampling to smooth the image.  Presumably the image at its native resolution will always exhibit the artifacts?

Also, where does one get the RAW file for this image?
Title: Is this Aliasing?
Post by: joofa on January 16, 2010, 12:21:54 pm
Quote from: crames
If you don't have access to Matlab, ...

For those who don't have access to Matlab, worry not. Please use the freely available Matlab clones such as Octave or SciLab. Just google them for their websites. Octave .m files have the same syntax as Matlab, so most of the time the same code can be used in both Matlab and Octave. However, I have to caution that sometimes the indexing operations are interpreted differently by Matlab and Octave and one gets different results without any warning.
Title: Is this Aliasing?
Post by: crames on January 16, 2010, 12:48:35 pm
Quote from: ejmartin
All your examples involve upsampling to smooth the image.  Presumably the image at its native resolution will always exhibit the artifacts?

Also, where does one get the RAW file for this image?

The files can be downloaded about 1/3 of the way down the page: MaxMax (http://maxmax.com/hot_rod_visible.htm)

When looking at the image in its "native resolution", you are looking at the samples of the image, not the smooth original analog input to the sampling system. The upsampling is not to smooth the image - none of the information below Nyquist is affected (or hardly any). It is smoothed only to eliminate the spectral replicas that are a side-effect of sampling, to restore the original smooth image. These artifacts at native resolution are not part of the image - upsampling/interpolation/reconstruction is a way to separate the underlying image from the spectral artifacts. This is possible because the reconstruction artifacts aren't the result of higher frequencies that have been folded down and mixed with lower frequencies, as happens with "real" aliasing.

The image at its native resolution will not always exhibit the artifacts, it depends on the image content. The artifacts are mostly associated with frequencies in the upper half of the spatial frequency range.

By the way, for the air conditioner example I used the latest AMML demosaicking version of dcraw found at ojodigital.com. Nice work!

Rgds,
Cliff
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 16, 2010, 01:01:20 pm
Quote from: crames
The upsampling is not to smooth the image - none of the information below Nyquist is affected (or hardly any). It is smoothed only to eliminate the spectral replicas that are a side-effect of sampling, to restore the original smooth image. These artifacts at native resolution are not part of the image - upsampling/interpolation/reconstruction is a way to separate the underlying image from the spectral artifacts. This is possible because the reconstruction artifacts aren't the result of higher frequencies that have been folded down and mixed with lower frequencies, as happens with "real" aliasing.

What is the minimum reconstruction filter expansion ratio necessary to remove reconstruction artifacts? You've shown that a ratio of 2:1 works well; would a ratio of 3:2 work equally well? My intent is to incorporate this into my image processing program, and the smaller the expansion ratio, the less RAM and CPU time needed to process the image. What are your thoughts  regarding the smallest effective reconstruction expansion ratio?
Title: Is this Aliasing?
Post by: crames on January 16, 2010, 01:11:06 pm
Quote from: Jonathan Wienke
What is the minimum reconstruction filter expansion ratio necessary to remove reconstruction artifacts? You've shown that a ratio of 2:1 works well; would a ratio of 3:2 work equally well? My intent is to incorporate this into my image processing program, and the smaller the expansion ratio, the less RAM and CPU time needed to process the image. What are your thoughts  regarding the smallest effective reconstruction expansion ratio?

Good question - I was wondering that myself, but haven't looked into it. I think I will try it now.


Things start to smooth out pretty well at about 1.4 with the sine 2.04 image:

(http://sites.google.com/site/cliffpicsmisc/home/Upres_204_Series_sGray.png)
The above is shown with gamma compensation applied. All processing was done in linear, then converted to sGray in PS. Below is how it looks without gamma compensation - the defects are exaggerated:
(http://sites.google.com/site/cliffpicsmisc/_/rsrc/1263675232353/home/linear_series.png)
Should probably check with some other images, too.

Rgds,
Cliff
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 16, 2010, 04:11:13 pm
I got Octave downloaded, installed, and running. Where do I copy the matlab .m files so I can access them from octave's command prompt?
Title: Is this Aliasing?
Post by: joofa on January 16, 2010, 04:23:56 pm
Quote from: Jonathan Wienke
I got Octave downloaded, installed, and running. Where do I copy the matlab .m files so I can access them from octave's command prompt?

Hi, welcome to the world of Octave. If you are on a Mac, please download the optional packages such as image processing, neural networks, optimization, control, etc. For Windows users I think the default package has those.

There is an environment variable you can define for locating the .m files, which I am trying to remember now, but it must be in the help somewhere. However, an easier solution, which I do, is to just "cd DIRECTORY_NAME" to where your .m files are located and just use the function on the prompt.

EDIT: I did a quick web search and it seems like you might try the "addpath" in Octave.
Title: Is this Aliasing?
Post by: nma on January 16, 2010, 07:05:41 pm
Quote from: crames
The files can be downloaded about 1/3 of the way down the page: MaxMax (http://maxmax.com/hot_rod_visible.htm)

When looking at the image in its "native resolution", you are looking at the samples of the image, not the smooth original analog input to the sampling system. The upsampling is not to smooth the image - none of the information below Nyquist is affected (or hardly any). It is smoothed only to eliminate the spectral replicas that are a side-effect of sampling, to restore the original smooth image. These artifacts at native resolution are not part of the image - upsampling/interpolation/reconstruction is a way to separate the underlying image from the spectral artifacts. This is possible because the reconstruction artifacts aren't the result of higher frequencies that have been folded down and mixed with lower frequencies, as happens with "real" aliasing.

The image at its native resolution will not always exhibit the artifacts, it depends on the image content. The artifacts are mostly associated with frequencies in the upper half of the spatial frequency range.

By the way, for the air conditioner example I used the latest AMML demosaicking version of dcraw found at ojodigital.com. Nice work!

Rgds,
Cliff

Cliff,

Thanks, this is the first post that really sheds a clear light on a problem that has been discussed ad nauseam, without agreement.

It would be useful if you or someone else could discuss how raw images are really processed. When I first thought about this stuff, I naively thought that something approximating sinc reconstruction was done separately on the RGB channels, resulting in a red, blue, and green image all resampled to the same grid. Of course each channel will exhibit different resolution, consistent with its sampling. I was surprised to learn (somewhere) that more ad hoc techniques were employed that supposedly yielded higher resolution, while accepting a bit of aliasing. Why isn't the basic sampling theory used in practice?
Title: Is this Aliasing?
Post by: ejmartin on January 17, 2010, 12:32:22 am
Quote from: nma
Cliff,

Thanks, this is the first post that really sheds a clear light on a problem that has been discussed ad nauseam, without agreement.

It would be useful if you or someone else could discuss how raw images are really processed. When I first thought about this stuff, I naively thought that something approximating sinc reconstruction was done separately on the RGB channels, resulting in a red, blue, and green image all resampled to the same grid. Of course each channel will exhibit different resolution, consistent with its sampling. I was surprised to learn (somewhere) that more ad hoc techniques were employed that supposedly yielded higher resolution, while accepting a bit of aliasing. Why isn't the basic sampling theory used in practice?

If one used a simple linear filter to do the interpolation of color channels, resolution would be limited by the sampling frequency, and so would be particularly poor for R and B channels on a Bayer RGGB array.  However, image data is correlated between the color channels, and if those correlations are used one can achieve much higher resolution with much less artifacting.  No good demosaic just does a linear interpolation; and the better ones make some use of the correlations between R,G, and B data to achieve resolution near Nyquist for the full array rather than the individual color subsampled arrays.

Cliff, thanks for the vote of confidence.
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 17, 2010, 05:49:34 am
Quote from: crames
Things start to smooth out pretty well at about 1.4 with the sine 2.04 image:

Hi Cliff,

Thanks for the examples (and gamma reminder).

Yes, 1.4 or 1.5 would start to prevent most visually disturbing artifacts. That's why I mentioned the Sqrt(2) factor, which for a "sine 2.0" image results in a diagonal sampling of 2 pixels/cycle and more oversampling at other angles. If Jonathan wants to use a 3:2 ratio, even better. The only visually disturbing artifact I get at Sqrt(2) are from a very slight variation in my LCD display pixel pitch on one side of the screen.

Quote
Should probably check with some other images, too.

On that note, could you give an estimate about the processing speed of this resampling filter on an RGB image? In the code sample I saw that Fourier transforms are involved, so processing time will increase significantly with size. I understand that an optimized code in e.g. C++ would also give different results than in MatLab, but even the MatLab version would give an indication (e.g. single channel pixels/second) of what to expect for an RGB image such as the MaxMax example.

Cheers,
Bart
Title: Is this Aliasing?
Post by: Dick Roadnight on January 17, 2010, 07:36:12 am
Quote from: crames
There are a lot of threads here now in the LL forum about aliasing, anti-alias filters, foveon vs. bayer array, upsizing, downsizing, etc.

What does aliasing look like?

Regards,
Cliff
I think this is a good example (index.php?act=findpost&pid=324841)
Title: Is this Aliasing?
Post by: crames on January 17, 2010, 11:14:01 am
Quote from: Dick Roadnight
I think this is a good example (index.php?act=findpost&pid=324841)

Your example shows what Emil was just talking about.

There is definitely some color aliasing going on there. It's possible that the luminance is unaliased and could be used to restore the color without aliasing. Is the raw file available?

Cliff
Title: Is this Aliasing?
Post by: crames on January 17, 2010, 12:22:24 pm
Quote from: BartvanderWolf
That's why I mentioned the Sqrt(2) factor, which for a "sine 2.0" image results in a diagonal sampling of 2 pixels/cycle and more oversampling at other angles.

Actually the diagonals are oversampled in comparison to horizontal and vertical. If you look at the FFT, you will see that there is more room in the corners for higher frequencies.

Quote
On that note, could you give an estimate about the processing speed of this re-sampling filter on an RGB image? In the code sample I saw that Fourier transforms are involved, so processing time will increase significantly with size. I understand that an optimized code in e.g. C++ would also give different results than in MatLab, but even the MatLab version would give an indication (e.g. single channel pixels/second) of what to expect for an RGB image such as the MaxMax example.

The largest image I tried was your rings1 image at 1000x1000 pixels, which took about 6.5 seconds to do a 2x interpolation (on my not-state-of-the-art pc). A limitation of the interp2d code is that it only accepts integer ratios. So to do the above series at 0.1 intervals, I had to over-interpolate by a factor of 10 (for example, interpolated by 14x for the 1.4 example), and then at the end decimate by nearest neighbor by a factor of 10. This complicates things and makes the memory requirements impractical for this kind of processing, unless you break it down and process it in blocks.

As I mentioned once before, for practical purposes a good bicubic interpolator might be good enough. In the maxmax example, PS bicubic does nearly as well as the sinc filter. But then again, it depends on what kind of other processing you will be doing later in the work flow. Reconstruction errors look really bad when you sharpen them.

Here is the uninterpolated MaxMax hot rod air conditioner to play with (in sGray):
(http://sites.google.com/site/cliffpicsmisc/home/5DHRcrop_sGray.png)

Here is a "classic" paper by Mitchell on reconstruction: Reconstruction Filters in Computer Graphics (http://www.cs.utexas.edu/%7Efussell/courses/cs384g/lectures/mitchell/Mitchell.pdf)

Cliff
Title: Is this Aliasing?
Post by: Dick Roadnight on January 17, 2010, 05:33:04 pm
Quote from: crames
Your example shows what Emil was just talking about.

There is definitely some color aliasing going on there. It's possible that the luminance is unaliased and could be used to restore the color without aliasing. Is the raw file available?

Cliff
It would be nice if I could upload the raw file to the forum... I will e-mail it to anyone who sends me their e-mail address. The raw file is .3FR, Hasselblad Phocus format.

I suppose I could up load it somewhere and post a link to it.
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 17, 2010, 06:06:48 pm
Quote from: crames
Actually the diagonals are oversampled in comparison to horizontal and vertical. If you look at the FFT, you will see that there is more room in the corners for higher frequencies.

In the sample I provided earlier (post 5), there is no room in the FT beyond 64 x Sqrt(2) distance from DC. Here are the image, its top right corner zoomed in, and the FFT (the yellow rectangle in the next image is 64x64 pixels):
[attachment=19521:SineSqrt.png]
Of course there is more room in the FT, but we also know what happens with the image when we push detail to the extreme (the image turns gray at given frequencies).

Quote
The largest image I tried was your rings1 image at 1000x1000 pixels, which took about 6.5 seconds to do a 2x interpolation (on my not-state-of-the-art pc). A limitation of the interp2d code is that it only accepts integer ratios. So to do the above series at 0.1 intervals, I had to over-interpolate by a factor of 10 (for example, interpolated by 14x for the 1.4 example), and then at the end decimate by nearest neighbor by a factor of 10. This complicates things and makes the memory requirements impractical for this kind of processing, unless you break it down and process it in blocks.

Yes, or just stick to 2x. Hmm, 6.5 seconds for a single 1 Mpixel channel is not too bad, but it will become much slower at larger sizes and with 3 channels.

Quote
As I mentioned once before, for practical purposes a good bicubic interpolator might be good enough. In the maxmax example, PS bicubic does nearly as well as the sinc filter. But then again, it depends on what kind of other processing you will be doing later in the work flow. Reconstruction errors look really bad when you sharpen them.

Yes, but it wouldn't do as well with your original image (as you showed in post no. 7), which kicked off this thread and the interest in good reconstruction filters.

Quote
Here is a "classic" paper by Mitchell on reconstruction: Reconstruction Filters in Computer Graphics (http://www.cs.utexas.edu/%7Efussell/courses/cs384g/lectures/mitchell/Mitchell.pdf)

A classic indeed, and the Mitchell/Netravali filter is the default for upsampling of regular images in ImageMagick.
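For reference, the (B, C) kernel family from that paper is easy to write down; a small Octave/Matlab version (B = C = 1/3 gives the "Mitchell" filter, B = 1, C = 0 the cubic B-spline, B = 0, C = 1/2 Catmull-Rom):

Code:
function k = mitchell_netravali(x, B, C)
  % piecewise-cubic reconstruction kernel of Mitchell & Netravali (1988)
  x = abs(x);
  k = ((12-9*B-6*C)*x.^3 + (-18+12*B+6*C)*x.^2 + (6-2*B)) .* (x < 1) ...
    + ((-B-6*C)*x.^3 + (6*B+30*C)*x.^2 + (-12*B-48*C)*x + (8*B+24*C)) ...
      .* (x >= 1 & x < 2);
  k = k/6;
end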

Cheers,
Bart
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 17, 2010, 07:33:13 pm
Quote from: crames
Here is a "classic" paper by Mitchell on reconstruction: Reconstruction Filters in Computer Graphics (http://www.cs.utexas.edu/%7Efussell/courses/cs384g/lectures/mitchell/Mitchell.pdf)

My favorite quote from that paper:
Quote
In Figure 10, a typical problem is seen where portions of the image near an edge have become negative and have been clamped to zero. This results in pronounced black spots (e.g., at the top of the statue's head). Similar clamping occurs to white, but is less noticeable because of the eye's nonlinear response to contrast. Schreiber and Troxel have suggested that subjectively even sharpening can only be produced by introducing ringing transients in a suitably nonlinear fashion [SCH85]. These conspicuous clamping effects could also be eliminated by reducing the dynamic range of the image or raising the DC level of the image.

This is exactly why interpolation of a linear image (at least with spline-based algorithms) is a bad idea. Interpolating after a perceptually even gamma has been applied (approximately 2) gives much better results.
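In Octave/Matlab terms the workflow is just (a sketch; imresize comes from the image toolbox/package, and the file name and bit depth are only placeholders):

Code:
lin = double(imread('linear_test.tif'))/65535;   % assumed 16-bit linear-light data
g   = lin.^(1/2);                                % encode with gamma ~2
gi  = imresize(g, 2, 'bicubic');                 % interpolate in the gamma space
out = gi.^2;                                     % back to linear light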
Title: Is this Aliasing?
Post by: crames on January 17, 2010, 08:37:32 pm
Quote from: Jonathan Wienke
This is exactly why interpolation of a linear image (at least with spline-based algorithms) is a bad idea. Interpolating after a perceptually even gamma has been applied (approximately 2) gives much better results.

I get your point, sometimes working in the "perceptual space" is better. My concern is that in a gamma space, 2+2 does not equal 4, and depending on the operation that could cause visible distortion (like a harmonic generator).

Some time ago I was inspired by Bart's down-sampling page (http://www.xs4all.nl/%7Ebvdwolf/main/foto/down_sample/down_sample.htm)  to try down-sampling in linear, and I recall seeing better results. Do you have some examples?

Cliff
Title: Is this Aliasing?
Post by: nma on January 17, 2010, 09:38:50 pm
Quote from: ejmartin
If one used a simple linear filter to do the interpolation of color channels, resolution would be limited by the sampling frequency, and so would be particularly poor for R and B channels on a Bayer RGGB array.  However, image data is correlated between the color channels, and if those correlations are used one can achieve much higher resolution with much less artifacting.  No good demosaic just does a linear interpolation; and the better ones make some use of the correlations between R,G, and B data to achieve resolution near Nyquist for the full array rather than the individual color subsampled arrays.

Cliff, thanks for the vote of confidence.
Emil,

Thanks for your thoughtful response about the spatial correlations between channels. While I do not dispute anything you wrote, it does raise more questions in my mind. For example, if the red and blue channels were oversampled, relying on the partial correlations would add distortion.
As the sampling of the red and blue channels improves without limit, the need for relying on the correlation diminishes. Certainly using the correlation was much more important when we had 6 MPix cameras than now at 24 MPix. The degree of correlation must be dependent on the scene. How is that taken into account? What are the criteria for including the correlation?

Thanks for your comments
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 17, 2010, 10:29:17 pm
Quote from: crames
I get your point, sometimes working in the "perceptual space" is better. My concern is that in a gamma space, 2+2 does not equal 4, and depending on the operation that could cause visible distortion (like a harmonic generator).

Some time ago I was inspired by Bart's down-sampling page (http://www.xs4all.nl/%7Ebvdwolf/main/foto/down_sample/down_sample.htm)  to try down-sampling in linear, and I recall seeing better results. Do you have some examples?

Interpolation in gamma 2:
(http://visual-vacations.com/media/int-gamma2.jpg)

Interpolation in linear, with nearest-neighbor for reference:
(http://visual-vacations.com/media/int-linear.jpg)

The ringing effects in the dark area are much more noticeable in the linear image, and most unforgivably, the rendering of the white dot on black background is not the perceptual inverse of the black dot on white. The gamma 2 interpolation is better on both counts.
Title: Is this Aliasing?
Post by: crames on January 17, 2010, 11:09:36 pm
Quote from: Jonathan Wienke
The ringing effects in the dark area are much more noticeable in the linear image, and most unforgivably, the rendering of the white dot on black background is not the perceptual inverse of the black dot on white. The gamma 2 interpolation is better on both counts.

Those are some funky colors! I'm seeing different results than you - for the gamma version there is a dark gray band between the green and magenta patches, and the little black square looks noticeably larger.

It could be a display problem - either yours, mine, or both. Maybe it's because channels are clipped.

I would think that linear would better emulate the additive mixing of light in the eye.

Cliff
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 17, 2010, 11:13:05 pm
Try looking at them in Photoshop side by side with a calibrated/profiled monitor. Comparisons on an uncalibrated monitor aren't going to be very useful. I see a light band on the edge of the green that fades into a narrower dark band on the edge of the magenta in the linear, as opposed to a wider dark band between green and magenta in the gamma-2 image. For real-world images, the gamma-2 is generally better because of the perceptual uniformity of sharpening/ringing artifacts, and black on white more closely matches a negative image of white on black.

Think of it this way: when transitioning from black (0) to white (1) in a series such as 0,0,0,1,1,1, the middle interpolated value is going to be (0.5). (0.5) should correspond perceptually to "halfway between white and black". In a linear image, (0.5) is only 1 stop below clipped white, and thus is on the brighter end of the luminance scale perceptually. As a result, light colors will bleed into dark colors, as shown by the second image I posted, causing bright objects on a dark background to become somewhat larger than a dark object on a light background having the same pixel dimensions.
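The same thing in a few lines of Octave/Matlab:

Code:
x = [0 0 0 1 1 1];
mid = interp1(1:6, x, 3.5)   % = 0.5 (linear interpolation at the transition)
log2(1/mid)                  % = 1: only one stop below clipped white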
Title: Is this Aliasing?
Post by: ejmartin on January 17, 2010, 11:27:51 pm
Quote from: crames
I would think that linear would better emulate the additive mixing of light in the eye.

Cliff

Why not a perceptually uniform space such as Lab?  I thought that would be the space where one would want to interpolate, since color differences of adjacent pixels in the source image are, well, perceptually uniform.
Title: Is this Aliasing?
Post by: ejmartin on January 17, 2010, 11:28:28 pm
duplicate post.
Title: Is this Aliasing?
Post by: crames on January 17, 2010, 11:29:40 pm
Quote from: Jonathan Wienke
Try looking at them in Photoshop side by side with a calibrated/profiled monitor. Comparisons on an uncalibrated monitor aren't going to be very useful.

That's what I did. And I confirmed that L* goes from 88 in the green, dips to 49 in the seam, then goes up to 60 in the magenta.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 17, 2010, 11:47:11 pm
Quote from: crames
That's what I did. And I confirmed that L* goes from 88 in the green, dips to 49 in the seam, then goes up to 60 in the magenta.

I don't dispute the existence of the dark band between the green and magenta. Try counting how many pixels it takes to get to L*50 from the edge of the clipped part of the black square vs the white square in the linear image vs the gamma-2 image. IMO accurately rendering the perceptual transition between white and black is more important than perfectly handling oddball hard-edged color transitions like the green/magenta example that only rarely occur in real-world images.

Converting to LAB before resampling would solve both issues simultaneously, though.
Title: Is this Aliasing?
Post by: crames on January 17, 2010, 11:53:10 pm
Quote from: ejmartin
Why not a perceptually uniform space such as Lab?  I thought that would be the space where one would want to interpolate, since color differences of adjacent pixels in the source image are, well, perceptually uniform.

It depends on what you want to do. If you want to reconstruct the pattern of light that fell on the focal plane, I think you want to stay linear.

If you want to make a perceptual enhancement of some kind, use a perceptual space. I just think that the perceptual tweaks should occur later in the pipeline.

Colors don't mix in Lab like they do in say XYZ tristimulus space.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 18, 2010, 01:21:14 am
Quote from: crames
OK, I think I know what's happening. My guess is that you are viewing your monitor in a dark room. I've been looking at my monitor in a more "average surround" environment, with a CRT with a low black point. That's enough to explain the differences in what we are seeing, since contrast perception is affected by the surround.

When interpolating in a linear space, ringing artifacts are accentuated in dark areas and somewhat de-emphasized in bright areas, perceptually speaking. Interpolating with a non-linear gamma evens out the visibility of the artifacts so that the artifacts are perceptually similar in highlights and shadows. As a result, a lesser degree of mathematical gymnastics is needed to reduce ringing artifacts to the point where they are no longer visible.
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 01:32:36 am
Quote from: Jonathan Wienke
I don't dispute the existence of the dark band between the green and magenta. Try counting how many pixels it takes to get to L*50 from the edge of the clipped part of the black square vs the white square in the linear image vs the gamma-2 image. IMO accurately rendering the perceptual transition between white and black is more important than perfectly handling oddball hard-edged color transitions like the green/magenta example that only rarely occur in real-world images.

OK, I think I know what's happening. My guess is that you are viewing  your monitor in a dark room. I've been looking at my monitor in a more  "average surround" environment, with a CRT with a low black point.  That's enough to explain the differences in what we are seeing, since  contrast perception is affected by the surround.

I would try it with more usual colors like those on a Color Checker.

Quote
Converting to LAB before resampling would solve both issues simultaneously, though.

You should try it, but I have my doubts.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 18, 2010, 01:45:46 am
Quote from: crames
OK, I think I know what's happening. My guess is that you are viewing  your monitor in a dark room. I've been looking at my monitor in a more  "average surround" environment, with a CRT with a low black point.  That's enough to explain the differences in what we are seeing, since  contrast perception is affected by the surround.

The ringing artifacts in the black area of the linear image (between the white dot and magenta square) have peak RGB values between 25 and 35. The same area in the gamma-2 image peaks between 10 and 14. That's a pretty significant difference, IMO, not just monitor viewing conditions.
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 02:01:04 am
Quote from: BartvanderWolf
In the sample I provided earlier (post 5), there is no room in the FT beyond 64 x Sqrt(2) distance from DC. Here are the image, its top right corner zoomed in, and the FFT (the yellow rectangle in the next image is 64x64 pixels):

Of course there is more room in the FT, but we also know what happens with the image when we push detail to the extreme (the image turns gray at given frequencies).

You have room to go right out to the edge of the frequency space. Here's an example of what you get when you approach the diagonal corners. First, what was created in the frequency domain:
(http://sites.google.com/site/cliffpicsmisc/_/rsrc/1263794889216/home/fine_diagonals_FT.png)

Next, what you get after inverse FT to spatial domain:
(http://sites.google.com/site/cliffpicsmisc/home/fine_diagonals_iFFT.png)
If you look closely, it's all checkerboard with just the 3 white lines.
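(For what it's worth, that checkerboard is exactly the corner of the 2-D frequency plane - horizontal and vertical frequency both at Nyquist, i.e. sqrt(2) x Nyquist along the diagonal. A two-line Octave/Matlab sketch, size assumed:)

Code:
[x, y] = meshgrid(0:255);
chk = 0.5 + 0.5*cos(pi*(x + y));   % fx = fy = 0.5 cycles/pixel -> checkerboard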

Amazingly, the above reconstructs to the following (1.5 interpolated, then zoomed 2x, center crop):
(http://sites.google.com/site/cliffpicsmisc/home/fine_diagonals_iFFTi15_NNx2.png)

So you can reconstruct up to sqrt(2) higher resolution, super-fine detail along the diagonals. Wasn't this what Fuji exploited in their Super CCD sensor?
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 02:28:49 am
Quote from: Jonathan Wienke
The ringing artifacts in the black area of the linear image (between the white dot and magenta square) have peak RGB values between 25 and 35. The same area in the gamma-2 image peaks between 10 and 14. That's a pretty significant difference, IMO, not just monitor viewing conditions.

The apparent difference in gamma between average surround viewing and dark surround viewing is about a factor of 1.5. If you're in a dark surround, you have to increase the gamma to 1.5 for the image to appear the same as in an average surround.

Try it in PS Levels or Exposure - set gamma to 1/1.5 = .67 (PS uses reciprocal gamma) to simulate what I see in an average surround.

If I do the opposite and set (reciprocal) gamma to 1.5 in my average surround, I can see exactly what you are describing.

I think your test target is too sensitive to viewing contrast. Plus it responds strangely to changes in Levels, I guess because channels are maxed out. You might end up optimizing your routines so the results look good only under limited conditions.
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 03:04:08 am
Quote from: Dick Roadnight
It would be nice if I could upload the raw file to the forum... I will e-mail it to anyone who sends me their e-mail address. The raw file is .3FR, Hasselblad Phocus format.

I suppose I could up load it somewhere and post a link to it.

Dick,
I sent you my email address in a message through the LL message system. If you want, I can put the raw file where others will be able to download it.

Thanks,
Cliff
Title: Is this Aliasing?
Post by: Dick Roadnight on January 18, 2010, 07:24:31 am
Quote from: crames
Dick,
I sent you my email address in a message through the LL message system. If you want, I can put the raw file where others will be able to download it.

Thanks,
Cliff
Hi...

I have had four requests for the raw file...

The file is 80Mpx, and my e-mail service says that the limit is 25Mpx.

When I crop the file in Phocus, it does not reduce the file size, or hide the identity of the young lady.

Reducing the size by cropping would be the best way if it were possible, as it would save me having to get permission from the subject, and compression might reduce the moiré effect.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 18, 2010, 07:30:51 am
You can't send a crop of the RAW file; all Phocus is doing is setting some metadata tags indicating the preferred cropping parameters when converting. Try converting the file to losslessly compressed DNG; that should at least bring down the file size considerably. If there are permission/usage issues with the subject, you'll simply have to get the appropriate permissions, use a different RAW that shows the same problem, or send a TIFF crop of the problem area instead of the RAW.
Title: Is this Aliasing?
Post by: ejmartin on January 18, 2010, 08:43:38 am
Quote from: Dick Roadnight
Hi...

I have had four requests for the raw file...

The file is 80Mpx, and my e-mail service says that the limit is 25Mpx.

When I crop the file in Phocus, it does not reduce the file size, or hide the identity of the young lady.

Reducing the size by cropping would be the best way if it were possible, as it would save me having to get permission from the subject, and compression might reduce the moiré effect.


Try uploading it to a file hosting service, e.g. yousendit.com
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 08:50:19 am
There is a quick and dirty way to try reconstruction in Photoshop - no Matlab required. You just need a plugin that does forward and inverse FFTs. I have the freeware KamLex FFT Plugin (http://www.kamlex.com/index.php?option=com_content&view=article&id=54&Itemid=55)  (for Windows), that works well.

Here is the procedure:

1. Do the forward FFT.
2. Increase the canvas size on all sides. The ratio of the new size/original size is your interpolation factor.
3. Do the inverse FFT.

That's it! The color of the increased canvas area will affect edge effects and rippling. There are standard methods to minimize edge effects by mirroring, etc.

There are limitations to the plugin, but actually I think this might work as well or better than the Yaroslavsky Matlab code.
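If you'd rather do the same thing in Matlab/Octave than in Photoshop, the equivalent is roughly this (a sketch for a grayscale image with even dimensions; the Nyquist-bin split and edge treatment are ignored, and the file name is a placeholder):

Code:
img = double(imread('sine204.png'))/255;        % grayscale input
[h, w] = size(img);
F  = fftshift(fft2(img));                       % centred spectrum
F2 = zeros(2*h, 2*w);                           % "canvas" twice as large
F2(h/2+1:h/2+h, w/2+1:w/2+w) = F;               % paste the spectrum in the middle
big = real(ifft2(ifftshift(F2)))*4;             % 2x per axis -> scale by 4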

That's it!
Cliff
Title: Is this Aliasing?
Post by: ejmartin on January 18, 2010, 09:10:22 am
Quote from: nma
Emil,

Thanks for your thoughtful response about the spatial correlations between channels. While I do not dispute anything you wrote, it does raise more questions in my mind. For example, if the red and blue channels were over sampled, relying on the partial correlations would add distortion.

Not sure what you have in mind here. What do you mean by distortion?

Quote
As the sampling of the red and blue channels improves without limit, the need for relying on the correlation diminishes. Certainly using the correlation was much more important when we had 6 MPix cameras than now at 24 MPix. The degree of correlation must be dependent on the scene. How is that taken into account? What are the criteria for including the correlation?

If the sensor outresolved any lens you could put in front of it by a sufficient factor, then one could do away with AA filters and use very simple interpolation algorithms.  Apparently 24MP on a FF DSLR, or 50MP on MFDB is not enough, as Dick's example shows.

As for how it is done, one can for example look at the main algorithm used in dcraw, called Adaptive Homogeneity-Directed (AHD) demosaicing.  The idea is to try interpolating the missing information both vertically and horizontally in a local region, then select the interpolation direction which leads to the smoothest result according to Lab color differences among the adjacent pixels.
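A much-simplified Octave/Matlab sketch of just the direction-selection part (this is not the full AHD algorithm - no Lab homogeneity maps, green channel only, and an RGGB layout is assumed):

Code:
function G = green_directed(cfa)
  % cfa: raw mosaic as a double matrix, RGGB layout assumed
  [H, W] = size(cfa);
  G = cfa;                                  % green sites keep their own value
  for i = 3:H-2
    for j = 3:W-2
      if mod(i+j, 2) == 0                   % red or blue site in RGGB
        gh = abs(cfa(i,j-1) - cfa(i,j+1));  % horizontal green gradient
        gv = abs(cfa(i-1,j) - cfa(i+1,j));  % vertical green gradient
        if gh <= gv                         % pick the smoother direction
          G(i,j) = (cfa(i,j-1) + cfa(i,j+1))/2;
        else
          G(i,j) = (cfa(i-1,j) + cfa(i+1,j))/2;
        end
      end
    end
  end
end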
Title: Is this Aliasing?
Post by: ejmartin on January 18, 2010, 09:12:46 am
Wish someone would write a similar FFT PS plugin for Mac.  grrr.
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 18, 2010, 11:41:52 am
Quote from: crames
If you look closely, it's all checkerboard with just the 3 white lines.

Yes, and not very useful at final output resolution to display even a suggestion of a sine wave pattern.

Cliff, with all due respect (and I do mean that), there is a difference between theoretical reconstruction capability, and useful output resolution. What you have shown is that one can interpolate and thus reconstruct a signal when a good filter is used, but you cannot make the signal/image look good (like the real structure, e.g. a sine wave) at its native size. One needs to enlarge/interpolate to avoid visual artifacts, but at the same time one reduces the apparent - angular, from a fixed viewing distance - resolution.

IOW, we are faced with a trade-off, especially for on screen display, not just a single choice. That's why I raised the subject of down sampling on my web page about that subject.

Quote
So you can reconstruct up to sqrt(2) higher resolution, super-fine detail along the diagonals. Wasn't this what Fuji exploited in their Super CCD sensor?

Fuji had no option to record it otherwise without throwing out recorded resolution. They rotated the sensor layout by 45 degrees, and thus were able to exploit the higher diagonal resolution capability of a regular grid, at the expense of diagonal resolution in the rotated sampling. I think it is a good choice from a resolution/capture perspective because most natural (and many man-made) objects have a more dominant horizontal/vertical frequency content, due to (resistance to) gravity. Unfortunately it also requires a 2x larger file size to hold the data without loss.

What Fuji understood is that for a reliable visual representation, one has to sacrifice some (in their case diagonal) resolution at the native image size/orientation. That is not a bad choice for an image that ultimately needs to be printed (one of the Fuji goals) even if it is most likely at reduced size.

For a reliable visual presentation of 'problematic structures' we need to sacrifice some potential resolution when viewing at 100% zoom setting on e.g. an LCD or similar hor/ver grid, as you have implicitly demonstrated. Luckily, many real life structures are chaotic enough to allow for some visual artifacts to go undetected at small reproduction sizes. Enlargements need all the help they can get though.

Cheers,
Bart
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 12:33:09 pm
Quote from: BartvanderWolf
Yes, and not very useful at final output resolution to display even a suggestion of a sine wave pattern.

No question, one is forced to enlarge the image, but when you do, you see the hidden information.

Quote
Cliff, with all due respect (and I do mean that), there is a difference between theoretical reconstruction capability, and useful output resolution. What you have shown is that one can interpolate and thus reconstruct a signal when a good filter is used, but you cannot make the signal/image look good (like the real structure, e.g. a sine wave) at its native size. One needs to enlarge/interpolate to avoid visual artifacts, but at the same time one reduces the apparent - angular, from a fixed viewing distance - resolution.

IOW, we are faced with a trade-off, especially for on screen display, not just a single choice. That's why I raised the subject of down sampling on my web page about that subject.

I'm just exploring possibilities, not suggesting that all images should be subjected to reconstruction.

The original point I intended to make is that it allows you to see whether aliasing is really present in an image or not. So hopefully our eyeball "aliasing detectors" have now been re-calibrated. But I think it's also clear that it shows a potential to recover more detail and quality.

Yes, there is a resolution trade-off. But maybe it's not so much a problem when printing, for example, because it's possible to reconstruct/interpolate by 2, then print at 720 dpi instead of 360 dpi, and thereby maintain angular resolution along with the potentially-reduced artifacts.

Cliff
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 18, 2010, 01:06:02 pm
Quote from: ejmartin
Wish someone would write a similar FFT PS plugin for Mac.  grrr.
Hi Emil,

You could use ImageJ (http://rsb.info.nih.gov/ij/) which is all JAVA, and operates in 32bit FP accuracy, not just 8bit.

Cheers,
Bart
Title: Is this Aliasing?
Post by: ejmartin on January 18, 2010, 01:24:14 pm
Quote from: BartvanderWolf
Hi Emil,

You could use ImageJ (http://rsb.info.nih.gov/ij/) which is all JAVA, and operates in 32bit FP accuracy, not just 8bit.

Cheers,
Bart


I do use ImageJ quite a bit for analysis, along with IRIS and Mathematica (the latter has been very handy for algorithm development).  But I would love to have something that is easily integrated into an image processing workflow in Photoshop.
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 18, 2010, 02:35:50 pm
Quote from: ejmartin
I do use ImageJ quite a bit for analysis, along with IRIS and Mathematica (the latter has been very handy for algorithm development).

I expected you did, but I wasn't sure. I know you also use Mathematica, but that's to be expected in an academic environment.

Quote
But I would love to have something that is easily integrated into an image processing workflow in Photoshop.

I see, but then wouldn't we all ...  

Cheers,
Bart
Title: Is this Aliasing?
Post by: joofa on January 18, 2010, 03:48:21 pm
Quote from: crames
Here is the procedure:

1. Do the forward FFT.
2. Increase the canvas size on all sides. The ratio of the new size/original size is your interpolation factor.
3. Do the inverse FFT.

Zero-padded DFT-based techniques have been used successfully for higher frequency resolution for a long time, for e.g., for the separation of sinusoids that are close in frequency. In this case you are applying them in the reverse direction. Zero-padding in frequency to get higher resolution in the spatial domain. However, insertion of zeros in one domain only lets you have more resolution in the other domain by interpolation with existing samples in that domain. No new detail is created, and the signal in the other domain only gets "stretched", and in-between points are filled by information from the neighboring samples.
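For anyone who wants to try the procedure Cliff quoted, a minimal numpy sketch of it for a single-channel image might look like this (an integer zoom factor is assumed, and the Nyquist-row bookkeeping for even sizes is glossed over):

import numpy as np

def fft_interpolate(img, factor):
    # 1. forward FFT, with DC moved to the center
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    H, W = h * factor, w * factor
    # 2. "increase the canvas size": embed the spectrum in a larger zero array
    P = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    P[top:top + h, left:left + w] = F
    # 3. inverse FFT; rescale so the mean brightness is preserved
    return np.fft.ifft2(np.fft.ifftshift(P)).real * factor * factor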

And this process is not to be confused with the "aliasing" of the data that can be had during reconstruction by using a reconstruction filter width larger than necessary on the otherwise alias-free data obtained during sampling.
Title: Is this Aliasing?
Post by: crames on January 18, 2010, 10:09:43 pm
Quote from: joofa
Zero-padded DFT-based techniques have been used successfully for higher frequency resolution for a long time, for e.g., for the separation of sinusoids that are close in frequency. In this case you are applying them in the reverse direction. Zero-padding in frequency to get higher resolution in the spatial domain. However, insertion of zeros in one domain only lets you have more resolution in the other domain by interpolation with existing samples in that domain. No new detail is created, and the signal in the other domain only gets "stretched", and in-between points are filled by information from the neighboring samples.

Yes, none of this is new. Do you have any suggestions for eliminating the ripples?
Title: Is this Aliasing?
Post by: joofa on January 18, 2010, 11:19:38 pm
Quote from: crames
Do you have any suggestions for eliminating the ripples?

Off the top of my head, the following methods may be used:

(1) In approximation-based reconstruction, as opposed to interpolation-based reconstruction, the coefficients in the linear combination (c_i * phi_i), where the phi_i are basis functions represented by the reconstruction kernel, are typically derived in the l_2 space (Hilbert space) for several reasons. However, in the more general Banach-space setting of the l_p norm (p >= 1), the error between the reconstructed signal and the actual signal is a convex function of the coefficients c_i. Note that there is no reason to restrict p to integers; values such as p = 1.4, etc., are fine. It is observed that l_p with p around 1 gives better ringing suppression. This is a powerful approach; however, in general, computing the coefficients in spaces other than Hilbert space is not computationally easy.

(2) Local pre-smoothing of signal discontinuity (e.g., sharp edge) before interpolating.

(3) A strictly positive reconstruction function, with no negative lobes. If the reconstruction filter is approximating, the error may be larger.

(4) A hybrid approach, similar to that suggested by Yaroslavsky may be used.

Note: Local-windowing-based schemes, which are otherwise good for reducing the error between the reconstructed and original signal, can help smooth the block discontinuity at the end of each segment of data; however, some ringing may remain because of signal discontinuities (e.g., a sharp edge) elsewhere.
Title: Is this Aliasing?
Post by: joofa on January 19, 2010, 12:22:05 am
Duplicate.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 19, 2010, 12:44:16 am
Quote from: crames
Yes, none of this is new. Do you have any suggestions for eliminating the ripples?

The solution I devised for suppressing ringing with cubic splines is simple, but seems to be very effective. I established a limit for the spline's z coefficients (as defined here (http://en.wikipedia.org/wiki/Spline_interpolation#Cubic_spline_interpolation)). By limiting z to ± (maximum - minimum) / N, where N is between 8 and 32, ringing and ripples are greatly reduced without affecting spline values in conditions where ringing or ripples are not an issue. When its z coefficients are clamped to zero, the values returned by a cubic spline are identical to linear interpolation, which of course has no issues with ringing. By intelligently limiting z coefficient values, you can alter the behavior of the spline so that it interpolates quasi-linearly in conditions where ringing is problematic (high-contrast edges), without affecting the spline's behavior in conditions where ringing is not an issue.
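A rough 1-D illustration of that clamping idea (my own sketch in Python, not Jonathan's code): solve the usual natural-spline system for the second-derivative coefficients z, clip them to +/-(max - min)/N, and evaluate with the standard piecewise formula. Uniform unit knot spacing is assumed.

import numpy as np

def clamped_z_spline(y, xq, N=8):
    # Natural cubic spline, with the z (second-derivative) coefficients
    # clipped to +/-(max - min)/N to suppress ringing. Knots at x = 0, 1, 2, ...
    y = np.asarray(y, float)
    xq = np.asarray(xq, float)
    n = len(y)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                    # natural end conditions: z = 0
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = (1.0, 4.0, 1.0)      # z[i-1] + 4 z[i] + z[i+1] = 6 d2y[i]
        b[i] = 6.0 * (y[i - 1] - 2.0 * y[i] + y[i + 1])
    z = np.linalg.solve(A, b)
    zlim = (y.max() - y.min()) / N
    z = np.clip(z, -zlim, zlim)                  # the ringing limiter
    # evaluate the piecewise cubic (h = 1); clamping z never changes the knot values
    i = np.clip(np.floor(xq).astype(int), 0, n - 2)
    t = xq - i
    return (z[i] * (1 - t) ** 3 + z[i + 1] * t ** 3) / 6.0 \
           + (y[i] - z[i] / 6.0) * (1 - t) + (y[i + 1] - z[i + 1] / 6.0) * t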
Title: Is this Aliasing?
Post by: crames on January 19, 2010, 07:07:26 am
Quote from: Jonathan Wienke
The solution I devised for suppressing ringing with cubic splines is simple, but seems to be very effective. I established a limit for the spline's z coefficients (as defined here (http://en.wikipedia.org/wiki/Spline_interpolation#Cubic_spline_interpolation)). By limiting z to ± (maximum - minimum) / N, where N is between 8 and 32, ringing and ripples are greatly reduced without affecting spline values in conditions where ringing or ripples are not an issue. When its z coefficients are clamped to zero, the values returned by a cubic spline are identical to linear interpolation, which of course has no issues with ringing. By intelligently limiting z coefficient values, you can alter the behavior of the spline so that it interpolates quasi-linearly in conditions where ringing is problematic (high-contrast edges), without affecting the spline's behavior in conditions where ringing is not an issue.

How well does your algorithm do at avoiding reconstruction error?
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 19, 2010, 09:13:34 am
Quote from: crames
How well does your algorithm do at avoiding reconstruction error?

Not quite as well as sinc; it has trouble with your synthetic image. But it does a pretty good job handling the air conditioner image:

[attachment=19565:reconstruction.png]

My algorithm is on the left, nearest-neighbor is on the right.


I have a question for you about Yaroslavsky's discrete-sinc interpolation algorithm. In his fast-sinc interpolation paper on page 15, he states:
Quote
Discrete sinc functions sincd and sincd defined by (8.10) and (8.24) are discrete point spread functions of the ideal digital lowpass filter, whose discrete frequency response is a rectangular function. Depending on whether the number of signal samples N is odd or even number, they are periodic or antiperiodic with period N, as it is illustrated in Figures 8.2( a ) and 8.2( b ).

If one were to do the sinc-interpolation twice, once with an odd number of samples and once with an even number of samples (perhaps by deliberately padding the ends of the data with a different number of samples), wouldn't it be possible to use this periodic/antiperiodic property to combine the two interpolations and cancel out the ripples? In Yaroslavsky's diagrams, it looks like the ripple patterns surrounding the signal impulses are approximately mirror images of each other. If so, wouldn't this be a more mathematically elegant approach than Yaroslavsky's adaptive switch-to-nearest-neighbor-in-trouble-spots approach?
Title: Is this Aliasing?
Post by: crames on January 19, 2010, 08:40:12 pm
Quote from: Jonathan Wienke
If one were to do the sinc-interpolation twice, once with an odd number of samples and once with an even number of samples (perhaps by deliberately padding the ends of the data with a different number of samples), wouldn't it be possible to use this periodic/antiperiodic property to combine the two interpolations and cancel out the ripples? In Yaroslavsky's diagrams, it looks like the ripple patterns surrounding the signal impulses are approximately mirror images of each other. If so, wouldn't this be a more mathematically elegant approach than Yaroslavsky's adaptive switch-to-nearest-neighbor-in-trouble-spots approach?

My guess (without spending the huge amount of time it would take a dabbler like me to really understand it) is that the odd number samples have a symmetrical spectrum, while the even number spectrum is asymmetrical. Because of the asymmetrical spectrum, filtering in the even case is a compromise because the effect of the filter won't be symmetrical. There might be a problem getting things to cancel out the way you would want.

This is what I'm getting from looking at pages 3 and 7 of his Lecture 4 Selected Topics (http://www.eng.tau.ac.il/%7Eyaro/lectnotes/pdf/L4_TICSP_SelTop_PefResamImplement&Appl.pdf).

Maybe Joofa can shed some light on it.
Title: Is this Aliasing?
Post by: ejmartin on January 19, 2010, 09:26:26 pm
I don't think changing the number of samples from N to N+1 is going to make much difference as far as ringing/pattern artifacts are concerned.  The sinc interpolation is predicated on the assumption that the signal being reconstructed is band-limited -- that it has no signal power on frequencies beyond Nyquist of the sampling.  This works nicely for oscillating patterns like Bart's rings, but not so well for a step edge which has spectrum at all frequencies; the missing spectrum is what would cancel the ringing, and its absence in the reconstruction leads to the overshoots and undershoots near the edge.  Changing the samples from N to N+1 will have very little effect for large N -- it's just adding an extra row/column of pixels to the image.

So the rings image will work well because its power spectrum matches the one assumed by the sinc filter; but natural images have a quite different power spectrum and so it's not clear that a sinc filter based reconstruction is going to be optimal.  It certainly won't be in images with step edges and similar structures.
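That is easy to see numerically; the following small numpy experiment (mine, not Emil's) sinc-interpolates the samples of a step edge and prints the over/undershoot:

import numpy as np

n = np.arange(-32, 32)                       # sample positions
samples = (n >= 0).astype(float)             # a step edge: not band-limited
x = np.linspace(-8.0, 8.0, 1601)             # fine grid around the edge
# Whittaker-Shannon reconstruction: sum of shifted sincs weighted by the samples
recon = np.sinc(x[:, None] - n[None, :]) @ samples
print("max:", recon.max())                   # noticeably above 1.0: ringing near the edge
print("min:", recon.min())                   # dips below 0.0 on the dark side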
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 20, 2010, 12:40:43 am
Quote from: ejmartin
So the rings image will work well because its power spectrum matches the one assumed by the sinc filter; but natural images have a quite different power spectrum and so it's not clear that a sinc filter based reconstruction is going to be optimal.  It certainly won't be in images with step edges and similar structures.

Yaroslavsky has a discrete-sinc interpolation algorithm that handles multiple rotations of a scanned text image better than several other common interpolation algorithms. Check out pages 17-21 of http://www.eng.tau.ac.il/~yaro/RecentPubli...ion_ASTbook.pdf (http://www.eng.tau.ac.il/~yaro/RecentPublications/ps&pdf/FastSincInterpolation_ASTbook.pdf) for a comparison test.
Title: Is this Aliasing?
Post by: crames on January 20, 2010, 09:26:42 am
I found some more code, by one of Yaroslavsky's co-authors: Antti Happonen (http://www.elisanet.fi/antti.happonen/fastint/). In addition to Matlab code, this includes C source and a compiled Windows DLL. The various refinements to sinc interpolation (http://www.eng.tau.ac.il/%7Eyaro/RecentPublications/ps&pdf/sinc_interp.pdf) appear to be included, such as minimized boundary effects, adaptive sliding windows, etc. Also simultaneous denoise/interpolation, rotating, zooming, etc.

I will run some tests on real images when I get a chance.
Title: Is this Aliasing?
Post by: joofa on January 20, 2010, 05:29:37 pm
Hi Cliff, I haven't verified, but I think you are right regarding the canceling of ringing using even/odd samples in your post above. Regarding Antti Happonen, I'm not sure about his assertion that the best approximation (least squares) of a possibly non-bandlimited continuous function (in L_2 space), in the space of bandlimited functions with shifted sinc functions as the basis, i.e., sum_i (c_i * sinc_i), is obtained by taking the coefficients c_i to be the sampled values of the actual function. It is not difficult to show that for the best approximation the coefficients are actually obtained by filtering the original function with an ideal lowpass filter (sinc) and then sampling. The only way these statements can be reconciled is if the samples of the original function and the samples obtained after lowpass filtering that function with a sinc are the same, which may not be the case for a general L_2 function.
Title: Is this Aliasing?
Post by: crames on January 24, 2010, 11:07:17 am
I tried out the Yaroslavsky/Happonen sinc interpolation routines on a few images. It turns out that his sliding window method performs as advertised and is very effective at avoiding ripple artifacts. Unfortunately there is a trade-off, in that the sliding window method rolls off the higher frequencies enough that, in the end, it was no better in my tests than conventional methods like bicubic. (The sliding window has other attractive features such as optional noise reduction, but I did not test that.)

While looking for images to test I visited an old thread here that was recently revived, http://luminous-landscape.com/forum/index....showtopic=20242 (http://luminous-landscape.com/forum/index.php?showtopic=20242) In that thread there is a comparison between the Sigma SD14 and the Canon 50D where various image defects are blamed on the lack of AA filter in the Sigma. The bridge comparison is a perfect example to illustrate the point of the OP: that you need to apply a reconstruction filter if you are seeing jaggies, "aliasing," blockiness, etc.

Links to the raw files were provided here: http://luminous-landscape.com/forum/index....st&p=338080 (http://luminous-landscape.com/forum/index.php?s=&showtopic=20242&view=findpost&p=338080).

Here is a crop from the Sigma, before and after simple reconstruction by bicubic. (Note that Sinc interpolation was not used for any of the following examples.)

(http://sites.google.com/site/cliffpicsmisc/home/steps-6x-NN-vs-bicubic.jpg)

Clearly, before reconstruction the Sigma is showing all the defects usually attributed to the lack of AA filter: jaggies, "false detail", blockiness like "tetris pieces", "grid effect," etc. After reconstruction those defects are almost completely eliminated, as we are now closer to seeing the analog truth within the image.

Finally, here are matching pairs of crops comparing the 50D to the SD14 after both have been interpolated to remove sampling artifacts. The Canon by 2x bicubic, and the Sigma by 3.33x (to bring it up to the same scale as the Canon). The Sigma does well considering it has no AA filter and less than 1/3 the pixels.

(http://sites.google.com/site/cliffpicsmisc/home/steps_bicubic.jpg)

(http://sites.google.com/site/cliffpicsmisc/home/Top_bicubic.jpg)

(http://sites.google.com/site/cliffpicsmisc/home/Ladder_Bicubic.jpg)

(http://sites.google.com/site/cliffpicsmisc/home/Rivets-bicubic.jpg)
Title: Is this Aliasing?
Post by: joofa on January 25, 2010, 04:48:07 pm
Quote from: crames
I tried out the Yaroslavsky/Happonen sinc interpolation routines on a few images. It turns out that his sliding window method performs as advertised and is very effective at avoiding ripple artifacts. Unfortunately there is a trade-off, in that the sliding window method rolls off the higher frequencies enough that, in the end, it was no better in my tests than conventional methods like bicubic. (The sliding window has other attractive features such as optional noise reduction, but I did not test that.)

Thanks for your experiments. They are really instructive. Do you know if the sliding window method as implemented applies a window (such as Kaiser, etc.) to the blocks/segments of data? It is not surprising that windowing would give a better visual output. Some of the drawbacks of using the full image data for interpolation, etc., are well-known. It is one of the reasons that one sees interpolation schemes limited to, say, cubic (x^3), etc., but you won't normally see x^100, etc., especially if noise is a concern. Polynomial interpolation on short segments is more useful than on the whole data.

DCT can certainly help as compared to DFT since its energy compaction is greater. That is why you see DCT in compression stuff such as MPEG, etc., but not DFT.

As far as ringing is concerned, it is a product of both the reconstruction filter kernel and also the method used to measure it. I mentioned before that l_1 space is better for ringing suppression, but it appears people are more interested in l_2.
Title: Is this Aliasing?
Post by: crames on January 25, 2010, 08:39:23 pm
Quote from: joofa
Thanks for your experiments. They are really instructive. Do you know if the sliding window method as implemented applies a window (such as Kaiser, etc.) to the blocks/segments of data? It is not surprising that windowing would give a better visual output. Some of the drawbacks of using the full image data for interpolation, etc., are well-known. It is one of the reasons that one sees interpolation schemes limited to, say, cubic (x^3), etc., but you won't normally see x^100, etc., especially if noise is a concern. Polynomial interpolation on short segments is more useful than on the whole data.
He shows a windowed sinc but I haven't found any reference to the type of window. See the figure on page 48 here. (http://www.eng.tau.ac.il/%7Eyaro/RecentPublications/ps&pdf/FastSincInterpolation_ASTbook.pdf) The answer might be in Happonen's code.

Quote
As far as ringing is concerned, it is a product of both the reconstruction filter kernel and also the method used to measure it. I mentioned before that l_1 space is better for ringing suppression, but it appears people are more interested in l_2.
I don't follow what you mean here, please explain.
Title: Is this Aliasing?
Post by: joofa on January 25, 2010, 09:06:20 pm
Quote from: crames
I don't follow what you mean here, please explain.

Hi,

In reconstruction by approximation, both the filter kernel shape and how the error between the original and the reconstruction is measured are important. If you keep the method of measuring error fixed, then the reconstruction kernel determines the amount of ringing. Similarly, the method of measuring error may also be varied to reduce ringing. l_2 is least-squares approximation; it is popular because it is straightforward and differentiable. l_1 is absolute error. The reconstruction error is given as ||f - sum_i(c_i*phi_i)||_p, where f is the original function, c_i are the coefficients we need to determine, phi_i are the reconstruction filter kernels, p >= 1 is the l_p space parameter, ||.|| is the norm, and sum_i(c_i*phi_i) is the reconstructed signal. If the c_i are determined by measuring the error in l_1, then it is observed that the ringing at a signal discontinuity is somewhat reduced; in the classical treatment, by 1/3 of the amount in the l_2 case.

There are workarounds in l_2. Stuff such as windowing the sinc function -> lanczos filter, number of negative lobes of reconstruction kernel, etc.
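To make that concrete, here is a small numpy experiment (my own, not from any of the papers discussed): fit coefficients c_i for a shifted-sinc basis to a step, once by least squares (l_2) and once by a crude iteratively-reweighted stand-in for l_1, and compare the overshoot of the two reconstructions; the expectation, per the remark above, is that the l_1 fit rings less.

import numpy as np

x = np.linspace(0.0, 20.0, 801)
centers = np.arange(21)
B = np.sinc(x[:, None] - centers[None, :])       # shifted-sinc basis functions phi_i
f = (x >= 10.5).astype(float)                    # a sharp step to approximate

c2, *_ = np.linalg.lstsq(B, f, rcond=None)       # l_2 (least squares) coefficients

c1 = c2.copy()                                   # l_1 via iteratively reweighted least squares
for _ in range(60):
    w = 1.0 / np.maximum(np.abs(f - B @ c1), 1e-6)
    sw = np.sqrt(w)
    c1, *_ = np.linalg.lstsq(B * sw[:, None], f * sw, rcond=None)

print("l_2 overshoot:", (B @ c2).max() - 1.0)
print("l_1 overshoot:", (B @ c1).max() - 1.0)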
Title: Is this Aliasing?
Post by: EsbenHR on January 26, 2010, 05:44:29 am
Hi guys,

I love this discussion from a theoretical standpoint, but I think it misses the mark a bit from a practical perspective.

Using the sinc() interpolation supports nifty features such as "I can rotate the image umpteen times and get back to the original image"*.
If this is important to you, then yes - modeling the image in terms of set of basis functions that can be fully reconstructed from its samples is a great idea.

In practice, since you also want to be spatially invariant, the simplest model is to use sinusoids** as your basis functions, which after doing a ton of math leads to the sinc() interpolation.
That is, you model the image as having a limited bandwidth.

However... are you sure this is actually what you want?

You give up a lot of things if you restrict your images to have a limited bandwidth:
* Pixel-level sharpness (you can not have a pixel-sharp edge)
* Halos are inherent (you can not avoid an overshoot; this is a fundamental result)
* Images will, in the general case, contain negative values (otherwise you get a bias)

Perhaps we will eventually accept that we want soft images at 100% because it gives us more latitude and we can apply the same sampling theory we use so successfully everywhere else in engineering. While a CD (sampled at 44100 Hz) can theoretically represent signals up to 22050 Hz, that is not what we do. We use a nice cut-off filter so the upper frequencies are effectively zero. That is, we actually do not want steep sample-to-sample variations.

However, for now what I see around me is that people compare 100% views and want to see really crisp images. A lit window on a distant office building at dusk should show up as a nicely rendered bright pixel. If this is what we want we have to work around the limitations of the sinc() interpolation.

Perhaps we simply need to figure out what we really want. Given that everyone and their dog have their own private ways to resize images (in particular uprezzing!) I'm not convinced that we really know what goals we are trying to attain.


Regards,

Esben Høgh-Rasmussen Myosotis


*: Up to rounding errors and after the resolution has been limited to the resolution of the horizontal/vertical direction.
**: This is a theoretical result - the sinusoids are the only real-valued eigenfunctions of a linear, spatially-invariant operation. Those fortunate enough not to know what the heck this geek is talking about can ignore this. However, this is the dirty mathematical nitty-gritty reason that sinusoids are so special that most of our theory is placed on top of them.
Title: Is this Aliasing?
Post by: crames on January 26, 2010, 07:48:08 pm
Quote from: EsbenHR
Perhaps we will eventually accept that we want soft images at 100% because it gives us more latitude and we can apply the same sampling theory we use so successfully everywhere else in engineering. While a CD (sampled at 44100 Hz) can theoretically represent signals up to 22050 Hz, that is not what we do. We use a nice cut-off filter so the upper frequencies are effectively zero. That is, we actually do not want steep sample-to-sample variations.

However, for now what I see around me is that people compare 100% views and want to see really crisp images. A lit window on a distant office building at dusk should show up as a nicely rendered bright pixel. If this is what we want we have to work around the limitations of the sinc() interpolation.
People don't want soft images at 100%, nor do they want images with jagged edges at 100%. How about sharp, jaggie-free images at 50%?

I'm not advocating the sinc, with its drawbacks - there are many other kinds of interpolation that can do the job. The point is that the representation of pixels as jagged little squares is not the only option.
 
Quote
Perhaps we simply need to figure out what we really want. Given that everyone and their dog have their own private ways to resize images (in particular uprezzing!) I'm not convinced that we really know what goals we are trying to attain.
True enough.
Title: Is this Aliasing?
Post by: crames on January 27, 2010, 08:06:58 pm
Forgive me while I continue to beat this horse. Here is something practical to try.

Start with an image that you want to sharpen.

1. Up-res by 2x using bicubic (or bicubic smoother).
2. Sharpen the image with 2x the radius you would normally use.
3. Down-res by 2x using bicubic (or bicubic sharper).

If you are sharpening for print, you can preferably skip step 3 and just print at 2x the pixels/inch (for example, 720ppi instead of 360ppi).

By interpolating ("reconstructing") before sharpening, we can minimize the sharpening of spectral replicas that cause jaggies. The real image data gets sharpened, but not the jaggies. Potentially a higher amount of sharpening can be achieved before the image starts to fall apart.
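Outside Photoshop the same recipe is a few lines with the Pillow library; "input.tif" is just a placeholder filename, and Pillow's UnsharpMask only approximates Photoshop's USM:

from PIL import Image, ImageFilter

img = Image.open("input.tif")                    # placeholder path
w, h = img.size

up = img.resize((2 * w, 2 * h), Image.BICUBIC)                                     # 1. up-res by 2x
sharp = up.filter(ImageFilter.UnsharpMask(radius=1.6, percent=500, threshold=0))   # 2. sharpen at 2x the radius
down = sharp.resize((w, h), Image.BICUBIC)       # 3. down-res by 2x (skip this if printing at 2x the ppi)

down.save("sharpened.tif")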

Here is a 100% crop from the Atkinson "Lab Test Page.tif".

[attachment=19798:upres_sharpen.jpg]

The left side was sharpened with USM amount 500%, radius 0.8, threshold 0. The right side was interpolated 2x, sharpened with USM amount 500%, radius 1.6, threshold 0, then down-sampled by 1/2.

The difference is clearly visible in prints: jaggies are reduced, and there is less accentuation of noise. Smoother and cleaner looking, while just as sharp.

Undoubtedly, there are other image processing operations that will benefit from interpolation in this way.
Title: Is this Aliasing?
Post by: ejmartin on January 27, 2010, 10:21:37 pm
Cliff,

I'm confused about your use of the term "spectral replicas".  The upsampled image is generated (I would think) with the property that the spectral content is band-limited, with no information beyond Nyquist of the original image.  So what is one gaining by upsampling?  And furthermore, to the extent that upsampling is done by a linear filter, USM is a linear filter, downsampling is a linear filter, then why isn't the sharpening equivalent to a linear filter with slightly different kernel than USM?
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 28, 2010, 06:41:07 am
Quote from: crames
Forgive me while I continue to beat this horse. Here is something practical to try.

There's a difference in trying to beat some sense into people, and beating a dead horse. This subject should not be seen as a dead horse. Which category the readers fit in is up to themselves. I welcome a good discussion, and thank you for your contributions.

Quote
By interpolating ("reconstructing") before sharpening, we can minimize the sharpening of spectral replicas that cause jaggies. The real image data gets sharpened, but not the jaggies. Potentially a higher amount of sharpening can be achieved before the image starts to fall apart.

Indeed, a good point to stress, and it is another practical reason why one should interpolate to the native resolution of a printer with a good algorithm, instead of an unknown (most likely sub-optimal bilinear) filter in firmware, and then sharpen. BTW it's one of the reasons why people like the Qimage output better than the same print directly from Photoshop.

I have made very sharp (virtually artifact free) enlarged output by deconvolution sharpening of interpolated image data. The results are much better than sharpening at the native capture size and then letting the printer do the interpolation, as is advocated by some 'authorities'.

Cheers,
Bart
Title: Is this Aliasing?
Post by: crames on January 28, 2010, 07:00:10 am
Quote from: ejmartin
I'm confused about your use of the term "spectral replicas".  The upsampled image is generated (I would think) with the property that the spectral content is band-limited, with no information beyond Nyquist of the original image.  So what is one gaining by upsampling?  And furthermore, to the extent that upsampling is done by a linear filter, USM is a linear filter, downsampling is a linear filter, then why isn't the sharpening equivalent to a linear filter with slightly different kernel than USM?
The following illustration, from Pratt's "Digital Image Processing" (http://www3.interscience.wiley.com/cgi-bin/bookhome/112654057), shows the 1-dimensional case:
[attachment=19803:Pratt_Di...ng_pg118.png]
The original image is ( a ), with the base-band spectrum around ws=0, and the spectral replicas at +/- ws, +/- 2ws, etc.

By interpolating, we're trying to make the spectrum more like ( c ) the sinc filtered case, but with bicubic the spectrum will be somewhere between ( c ) and ( e ). What I'm saying is that when sharpening we should sharpen ( c ) the base-band image, rather than ( a ) which has all the spectral replicas that appear as jaggies.

You're right that equivalent sharpening could be done with a linear filter, but to do it in Photoshop you would have to have some way to design it and have it fit within the limits of the 5x5 Custom Filter.

(edited to fix typos)
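Emil's point about the whole chain collapsing to one linear filter can be checked numerically: push a unit impulse through upsample -> USM -> downsample and what comes out is the equivalent kernel. A rough scipy sketch, with a Gaussian-based USM standing in for Photoshop's and cubic-spline zooming standing in for bicubic:

import numpy as np
from scipy.ndimage import zoom, gaussian_filter

impulse = np.zeros((33, 33))
impulse[16, 16] = 1.0

up = zoom(impulse, 2, order=3)                    # cubic-spline 2x upsample (bicubic-like)
usm = up + 5.0 * (up - gaussian_filter(up, 1.6))  # crude USM: amount 500%, "radius" 1.6
down = zoom(usm, 0.5, order=3)                    # back to the original size

# 'down' is the impulse response of the whole chain, i.e. the single
# equivalent convolution kernel (ignoring edge effects and clipping).
print(np.round(down[13:20, 13:20], 3))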
Title: Is this Aliasing?
Post by: crames on January 28, 2010, 08:01:26 am
Quote from: BartvanderWolf
Indeed, a good point to stress, and it is another practical reason why one should interpolate to the native resolution of a printer with a good algorithm, instead of an unknown (most likely sub-optimal bilinear) filter in firmware, and then sharpen. BTW it's one of the reasons why people like the Qimage output better than the same print directly from Photoshop.

I have made very sharp (virtually artifact free) enlarged output by deconvolution sharpening of interpolated image data. The results are much better than sharpening at the native capture size and then letting the printer do the interpolation, as is advocated by some 'authorities'.
Thank you for your comments, Bart.

Yes, that is the best way to sharpen for enlarged print output. Less obvious I think, is that interpolation can improve the results of sharpening even in the case where the image is not being enlarged.

Rgds,
Cliff
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 28, 2010, 09:53:33 am
Once I get PixelClarity (http://visual-vacations.com/media/PixelClarity.zip)'s deconvolution capability fully functional (I'm currently finishing PSF generation code, the database code needed to store and retrieve the PSF data, and the code needed to merge 5 dimensions' worth of data down to the 3 dimensions needed for a given image), I'm going to experiment with upsizing before deconvolution. I also have a few interesting theoretical thoughts regarding using a point spread function for interpolation vs sinc:

Imagine you have a row of 100 point light sources, a lens, and a sensor with 100 photosites in a row, arranged so that if the lens had no aberrations, the light from each point source would be focused exactly on one and only one photosite on the sensor. In such an ideal arrangement, altering the intensity of any given point source would affect the output of one and only one photosite, completely independent of all the others. With a real-world lens and diffraction, this ideal is not achieved; altering the intensity of one point source will affect the outputs of several photosites.

The issue I see with sinc, bicubic, and other interpolation algorithms is this: increasing a single value in a data series can cause neighboring interpolated values to decrease. But this is contrary to what happens in real life; increasing the intensity of a single point light source will increase the outputs of its associated photosite and some of its neighbors, but can never cause any photosite output value to decrease. Conversely, decreasing the intensity of a single point source will cause the outputs of its associated photosite and some of its neighbors to decrease, but can never cause any photosite output value to increase. Therefore, sinc and other common interpolation algorithms work in a way that at times is opposite of the behavior of the real world (because increasing one output value can cause some interpolated values to decrease, and vice versa), and alternative methods should be investigated.

Given these principles, the thought that occurred to me is this: if one could devise a way to interpolate using a curve derived from the appropriate point spread function instead of the sinc function, one could simultaneously perform reconstruction/upsizing AND correct for lens blur, while reducing or completely eliminating clipping/ringing artifacts. Thoughts?
Title: Is this Aliasing?
Post by: crames on January 28, 2010, 05:15:29 pm
Quote from: Jonathan Wienke
Imagine you have a row of 100 point light sources, a lens, and a sensor with 100 photosites in a row, arranged so that if the lens had no aberrations, the light from each point source would be focused exactly on one and only one photosite on the sensor. In such an ideal arrangement, altering the intensity of any given point source would affect the output of one and only one photosite, completely independent of all the others. With a real-world lens and diffraction, this ideal is not achieved; altering the intensity of one point source will affect the outputs of several photosites.

The issue I see with sinc, bicubic, and other interpolation algorithms is this: increasing a single value in a data series can cause neighboring interpolated values to decrease. But this is contrary to what happens in real life; increasing the intensity of a single point light source will increase the outputs of its associated photosite and some of its neighbors, but can never cause any photosite output value to decrease. Conversely, decreasing the intensity of a single point source will cause the outputs of its associated photosite and some of its neighbors to decrease, but can never cause any photosite output value to increase. Therefore, sinc and other common interpolation algorithms work in a way that at times is opposite of the behavior of the real world (because increasing one output value can cause some interpolated values to decrease, and vice versa), and alternative methods should be investigated.
Jonathan,

I don't follow the part about how increasing a photosite value causes a decrease in neighboring photosites. If the input to your hypothetical system is band-limited (as required for sampling), interpolation should work as expected. A point source won't light up a single pixel in a band-limited system. After passing through an optical system, a focused point source will be an Airy disk (jinc^2), spread around among local pixels.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 28, 2010, 06:11:49 pm
Quote from: crames
Jonathan,

I don't follow the part about how increasing a photosite value causes a decrease in neighboring photosites. If the input to your hypothetical system is band-limited (as required for sampling), interpolation should work as expected. A point source won't light up a single pixel in a band-limited system. After passing through an optical system, a focused point source will be an Airy disk (jinc^2), spread around among local pixels.

Increasing a value doesn't decrease neighboring sampled values, but depending on the interpolation algorithm, it can cause interpolated values to decrease. The general point I'm making is that increasing the intensity of one of the point light sources cannot result in the decrease of any sampled value, for obvious reasons. It shouldn't be allowed to decrease any interpolated values between samples, either.

If you have a natural cubic spline with a row of data points of 0.2, with one data point in the middle of 0.5, the interpolated spline value will drop to about 0.16 ±1.5 samples from the 0.5 data point. If you increase the peak data value to 0.9, the interpolated spline values will drop to about 0.11 ±1.5 samples from the 0.9 data point. If the interpolation algorithm accurately reflected the realities of optics, the interpolated values ±1.5 samples from the peak value should be ≥0.2 in both cases, with the interpolated value near the 0.9 data point being greater than the interpolated value near the 0.5 data point.
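The effect is easy to reproduce with scipy's natural cubic spline, for anyone who wants to see the numbers; the printed minima show the dip below the 0.2 baseline:

import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(11)
xs = np.linspace(0, 10, 1001)
for peak in (0.5, 0.9):
    y = np.full(11, 0.2)
    y[5] = peak                                   # a single bright sample in a flat row
    cs = CubicSpline(x, y, bc_type='natural')
    print(peak, "->", cs(xs).min())               # dips below the 0.2 baseline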
Title: Is this Aliasing?
Post by: crames on January 28, 2010, 11:48:53 pm
Quote from: Jonathan Wienke
Increasing a value doesn't decrease neighboring sampled values, but depending on the interpolation algorithm, it can cause interpolated values to decrease. The general point I'm making is that increasing the intensity of one of the point light sources cannot result in the decrease of any sampled value, for obvious reasons. It shouldn't be allowed to decrease any interpolated values between samples, either.

If you have a natural cubic spline with a row of data points of 0.2, with one data point in the middle of 0.5, the interpolated spline value will drop to about 0.16 ±1.5 samples from the 0.5 data point. If you increase the peak data value to 0.9, the interpolated spline values will drop to about 0.11 ±1.5 samples from the 0.9 data point. If the interpolation algorithm accurately reflected the realities of optics, the interpolated values ±1.5 samples from the peak value should be ≥0.2 in both cases, with the interpolated value near the 0.9 data point being greater than the interpolated value near the 0.5 data point.
Ok, I think what's happening is that changing the middle value while keeping the other samples constant does not model an optical system without aliasing. If you were to convolve your samples with a realistic psf/otf before interpolating, it should work in a more realistic way. The blur of the optical system will cause a point source to affect more than a single pixel, so if the intensity of a point source changes by factor x, all the pixels within the influence of the psf should change by factor x.

The mtf of an Airy disk is cone shaped in 2D. To be un-aliased, the width of the cone should not exceed half the sampling frequency. This corresponds to an Airy disk a few pixels wide.
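As a quick check, one can blur the same kind of spiky data with a small Gaussian (a stand-in for an Airy-disk PSF, not a real camera PSF) before splining; the undershoot then all but disappears, and scaling the point source scales the whole local response:

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.interpolate import CubicSpline

x = np.arange(21)
y = np.zeros(21)
y[10] = 1.0                                       # a point source on a dark field
blurred = gaussian_filter1d(y, sigma=1.0)         # stand-in PSF, a few pixels wide
cs = CubicSpline(x, blurred, bc_type='natural')
xs = np.linspace(0, 20, 2001)
print("min of interpolant:", cs(xs).min())        # barely below zero, if at all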
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 29, 2010, 07:44:15 pm
Quote from: crames
If you were to convolve your samples with a realistic psf/otf before interpolating, it should work in a more realistic way.

That's where I'm going with this line of thought, though I don't have a clear idea of exactly how to do this. I'd like to combine the deconvolution-with-PSF and upsampling if possible, with the goals of preserving the greatest possible amount of true image detail, avoiding jaggies and reconstruction artifacts, and minimizing interpolation errors and distortions like clipping and ringing. Has there been any research published along those lines, or is this a new idea?
Title: Is this Aliasing?
Post by: crames on January 30, 2010, 12:15:28 am
Quote from: Jonathan Wienke
That's where I'm going with this line of thought, though I don't have a clear idea of exactly how to do this. I'd like to combine the deconvolution-with-PSF and upsampling if possible, with the goals of preserving the greatest possible amount of true image detail, avoiding jaggies and reconstruction artifacts, and minimizing interpolation errors and distortions like clipping and ringing. Has there been any research published along those lines, or is this a new idea?
I'd be surprised if it's a new idea. There is a vast amount of scientific literature on image reconstruction. For example a quick Google found this. (http://www.informaworld.com/smpp/jump%7Ejumptype=banner%7Efrompagename=content%7Efrommainurifile=content%7Efromdb=all%7Efromtitle=%7Efromvnxs=%7Econs=?dropin=dxdoiorg_101080_01431169308904292&to_url=http:/%2fdx%2edoi%2eorg%2f10%2e1080%2f01431169308904292) I hope you have a university library nearby so you don't have to pay to download a lot of papers! Or if you're lucky you belong to an alumni association that provides free online access to journals.

You might want to take another look at Yaroslavsky's sliding window which, along with interpolation, also allows modification of the DCT coefficients in each window for filtering and de-noising.
Title: Is this Aliasing?
Post by: EsbenHR on January 30, 2010, 04:37:46 am
Quote from: Jonathan Wienke
That's where I'm going with this line of thought, though I don't have a clear idea of exactly how to do this. I'd like to combine the deconvolution-with-PSF and upsampling if possible, with the goals of preserving the greatest possible amount of true image detail, avoiding jaggies and reconstruction artifacts, and minimizing interpolation errors and distortions like clipping and ringing. Has there been any research published along those lines, or is this a new idea?

Yes, this is possible.

In Matlab you can constrain a spline to be piecewise monotonic between samples:
http://www.mathworks.com/access/helpdesk/h.../ref/pchip.html (http://www.mathworks.com/access/helpdesk/help/techdoc/ref/pchip.html)

[ One of the best features of Matlab is that they often include references in the documentation, so check out the bottom. ]

To do this you give up a smooth 2nd derivative in the cases where you would otherwise have an overshoot. That should be fine, I would think.
You also give up the ability to put a peak between pixels, which is more problematic.
I think it should be possible to modify the method so it only suppress undershoots but leaves the spline behavior at a peak.
This should fit well into your spline-based interpolation as far as I understand your algorithm.
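The scipy analogue of Matlab's pchip is PchipInterpolator; a quick comparison against an ordinary cubic spline on a step shows the difference. This is just a generic sketch, not tied to Jonathan's code:

import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.arange(10)
y = (x >= 5).astype(float)                        # a step edge
xs = np.linspace(0, 9, 901)
print("cubic spline overshoot:", CubicSpline(x, y)(xs).max() - 1.0)        # > 0: ringing
print("pchip overshoot:      ", PchipInterpolator(x, y)(xs).max() - 1.0)   # 0: monotone between samples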

However, over- and under-shooting in the interpolated samples is not a bad thing per se. If you assume, crudely, that the value of a pixel represents the integral over its area (i.e. the number of photons per second over a very long exposure), and that the light is a continuous function, then it follows that you will have values above and below that value (well, it could be constant). Now, if the neighbor has an increased value, then a continuous interpolating function should increase near it also. To keep the integral constant, the undershoot would need to decrease.

The alternative is a discontinuous interpolation, where nearest neighbor is the simplest example. I think this approach has a lot going for it but it is hard - try to Google for "weak membrane". The idea is to have a piecewise smooth interpolation that can "break" at discontinuous edges.
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 30, 2010, 09:52:02 am
Quote from: EsbenHR
The alternative is a discontinuous interpolation, where nearest neighbor is the simplest example. I think this approach has a lot going for it but it is hard - try to Google for "weak membrane". The idea is to have a piecewise smooth interpolation that can "break" at discontinuous edges.

I'm already doing something along those lines with my modified 2-D spline interpolation code. I use the standard method to calculate the z coefficients for each spline knot, where z is the parameter used in this equation:

(http://upload.wikimedia.org/math/4/a/4/4a4d877b56b5612d814fee92797fc059.png)

Once z has been calculated for a given spline knot, I clip it to ±((MaxValue - MinValue) / N), where N is chosen to minimize ringing artifacts without affecting the spline's behavior away from high-contrast edges:

ZLimit = (MaxValue - MinValue) / N

If z < -ZLimit Then
    z = -ZLimit
ElseIf z > ZLimit Then
    z = ZLimit
Else
    'Do nothing, z is within the acceptable range of values unlikely to cause over/undershoot, ringing, clipping, etc.
End If

So far, an N value of 8 seems to be about optimal. Over/undershoot near high-contrast edges is effectively limited without altering the behavior of the spline away from such high-contrast edges. And the effect of clipping this parameter is paradoxically gradual; if the clipped value is only slightly different from the unclipped value, the effect on the spline is slight.
Title: Is this Aliasing?
Post by: bjanes on January 30, 2010, 11:42:44 am
Quote from: BartvanderWolf
Indeed, a good point to stress, and it is another practical reason why one should interpolate to the native resolution of a printer with a good algorithm, instead of an unknown (most likely sub-optimal bilinear) filter in firmware, and then sharpen. BTW it's one of the reasons why people like the Qimage output better than the same print directly from Photoshop.

I have made very sharp (virtually artifact free) enlarged output by deconvolution sharpening of interpolated image data. The results are much better than sharpening at the native capture size and then letting the printer do the interpolation, as is advocated by some 'authorities'.

Bart,

Your posts are always informative and thought provoking, and your statement about sending the file to the printer at the native resolution of the device makes sense, notwithstanding the advice of certain "authorities". The native resolution of the printer can be difficult to determine in the case of an inkjet using diffusion dither. The "native resolution" of Epson printers is often stated to be 360 dpi or multiples thereof; however, Rags Gardner (http://www.rags-int-inc.com/PhotoTechStuff/Epson2200/) has done some experiments using interference patterns with the Epson 2200 and found that the "native resolution" was 288 dpi. He also reported that the hardware resolution may differ from that used in the software for resampling. Have you seen this?

Sometimes, a little blurring of the image is desirable. Some time ago I had some images printed on the Fuji Pictrography device, which was a high resolution contone device. I sized the images to the native resolution of the printer and every little defect in my image stood out in great detail.

Also, what deconvolution algorithm do you use for image restoration, and how do you determine the PSF? Jonathan Wienke, who is also contributing to this thread, has used FocusMagic with good results for capture sharpening. I have read that deconvolution can correct to some extent the softening due to the blur filter.

Regards,

Bill
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 30, 2010, 02:14:29 pm
Quote from: bjanes
Bart,

Your posts are always informative and thought provoking, and your statement about sending the file to the printer at the native resolution of the device makes sense, notwithstanding the advice of certain "authorities". The native resolution of the printer can be difficult to determine in the case of an inkjet using diffusion dither. The "native resolution" of Epson printers is often stated to be 360 dpi or multiples thereof; however, Rags Gardner (http://www.rags-int-inc.com/PhotoTechStuff/Epson2200/) has done some experiments using interference patterns with the Epson 2200 and found that the "native resolution" was 288 dpi. He also reported that the hardware resolution may differ from that used in the software for resampling. Have you seen this?

Hi Bill,

Thanks for the kind words. I hadn't seen Rags' article but I just did read it. What he demonstrated IMHO is more due to (limitations in) the dithering algorithm than to the native resolution of the printer itself. A different test pattern (parallel lines) would probably have revealed something else, as in this test (http://www.ddisoftware.com/qimage/quality/). The native resolution is apparently reported back by the printer(driver) after an interrogation from a function call to the printer driver. The reported number can differ depending on the paper (settings) used in the driver (e.g. matte paper may cause to report a lower resolution than glossy paper). The significance of a lines target is because of the human eye's vernier acuity which is perhaps 4x higher than its angular resolution capability would suggest, and inkjet printers have a better capability to offer such resolutions.

The way I understand it, the dithering algorithm assumes a certain PPI input (e.g. 720 PPI for Epson, and 600 PPI for Canon/HP, if glossy paper is selected). Very Large Format printers may have lower PPIs (lower resolution requirements and increase of print speed). If the image size requested and the number of pixels supplied do not result in that native resolution in PPI, then the data supplied will be (bilinearly) interpolated/decimated, after which the dithering engine can do its stuff to mix colors and droplet sizes and use error diffusion to hide color inaccuracies and abrupt boundaries between head passes.

A program like Qimage does such interrogation of the printer driver, and displays it in the user interface. It also resamples the incoming pixels on the fly to match that native resolution, and sharpens at that resolution (to compensate for printer/paper/ink losses of micro-contrast)!

Quote
Sometimes, a little blurring of the image is desirable. Some time ago I had some images printed on the Fuji Pictrography device, which was a high resolution contone device. I sized the images to the native resolution of the printer and every little defect in my image stood out in great detail.

Sometimes we get more than we bargained for ...

Quote
Also, what deconvolution algorithm do you use for image restoration and how do you determine the PSP? Jonathan Wienke, who is also contributing to this thread, has used FocusMagic with good results for capture sharpening. I have read that deconvolution can correct to some extent the softening due to the blur filter.

Yes, I also use FocusMagic, but it is not ready for 64-bit computer platforms (yet), although a friend of mine has it running on a 64-bit Win7 machine, albeit as a plug-in of a 32-bit version of Photoshop CS4. I occasionally also use an implementation of adaptive Richardson-Lucy deconvolution, which allows one to balance the restoration of detail against the amplification of noise. Both are capable of restoring resolution from blur instead of merely boosting acutance or edge contrast. How successful that restoration is depends on the quality of the image (RL uses a maximum-likelihood approach, so multiple solutions can be chosen from an ill-posed problem, not necessarily the best).
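For reference, the core (plain, non-adaptive) Richardson-Lucy iteration is only a few lines; this textbook sketch assumes a known, normalized 2-D PSF and says nothing about the adaptive or regularized variants:

import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iters=30, eps=1e-12):
    # Plain RL deconvolution: the maximum-likelihood estimate under Poisson noise.
    observed = np.asarray(observed, float)
    estimate = np.full_like(observed, observed.mean())   # flat starting estimate
    psf_mirror = psf[::-1, ::-1]                         # PSF flipped in both axes
    for _ in range(iters):
        blurred = fftconvolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate *= fftconvolve(ratio, psf_mirror, mode='same')
    return estimate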

I approximate the PSFs of my lens+OLPF+sensel pitch/aperture of my camera (at various lens apertures) with the use of Imatest's SFR (=MTF) determination, and a proprietary method I devised, but I must admit that FocusMagic does an amazing job with just a single radius parameter input (+ an amount setting). It deals with defocus blur and also reasonably well with diffraction blur, despite the different shapes of their PSF. The OLPF/AA-filter is just part of the total system MTF. If only FocusMagic would be recoded for 64-bit hardware.

Cheers,
Bart
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 30, 2010, 06:26:05 pm
Quote from: Jonathan Wienke
I'm already doing something along those lines with my modified 2-D spline interpolation code. I use the standard method to calculate the z coefficients for each spline knot, where z is the parameter used in this equation:

...

Over/undershoot near high-contrast edges is effectively limited without altering the behavior of the spline away from such high-contrast edges. And the effect of clipping this parameter is paradoxically gradual; if the clipped value is only slightly different from the unclipped value, the effect on the spline is slight.

Here's a 1-D comparison between a standard natural cubic spline (blue) and my modified cubic spline (red):

[attachment=19872:SplineComparison.gif]

The data points in this curve are random values, highlighted in white. Although this example shows one instance where clipping is not prevented, you can see several areas where the modified spline avoids ringing/overshoot much better than the unmodified spline.
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on January 30, 2010, 07:07:22 pm
Quote from: Jonathan Wienke
Here's a 1-D comparison between a standard natural cubic spline (blue) and my modified cubic spline (red):

[attachment=19872:SplineComparison.gif]

The data points in this curve are random values, highlighted in white. Although this example shows one instance where clipping is not prevented, you can see several areas where the modified spline avoids ringing/overshoot much better than the unmodified spline.

Hi Jonathan,

The issue with splines of course remains to find a balance between unjustified overshoot and justified interpolation (inventing probably useful data). Maybe a more advanced heuristic is needed, applied only when a potential overshoot is detected, based on more local data input (e.g. 2-D, oriented perpendicular to the sampling direction)? Just thinking aloud ...

Cheers,
Bart
Title: Is this Aliasing?
Post by: EsbenHR on January 30, 2010, 07:18:48 pm
Quote from: Jonathan Wienke
Here's a 1-D comparison between a standard natural cubic spline (blue) and my modified cubic spline (red):

[attachment=19872:SplineComparison.gif]

The data points in this curve are random values, highlighted in white. Although this example shows one instance where clipping is not prevented, you can see several areas where the modified spline avoids ringing/overshoot much better than the unmodified spline.

I found it hard to see exactly where the reference points were, but I do get the idea.

It looks a bit too simple for me, but if it looks good it is hard to argue about it :-)

In any case, I would not think the points on one side of a discontinuity should affect the interpolated values on the other. The whole idea about the discontinuity is that there is a separation. If I understand you correctly, you start by solving the system to find the spline and then modify the result locally. I would think a discontinuity would be more properly modeled as two splines. (You could get the same effect by zeroing the coupling elements in the system matrix before solving the spline parameters).
Title: Is this Aliasing?
Post by: Jonathan Wienke on January 30, 2010, 09:32:46 pm
The first example I posted is a bit extreme; pretty much every sample in the series constitutes a "high-contrast edge". Here's a side-by-side comparison with a second series that doesn't have such extreme variation:

[attachment=19875:SplineComparison.gif]

The only reason there's any difference between the splines in the second series is because the modified spline uses a 16-bit integer variable to store the z parameter instead of a 32-bit floating-point variable, to save space.
Title: Is this Aliasing?
Post by: joofa on February 01, 2010, 08:30:40 pm
Quote from: BartvanderWolf
The issue with splines of course remains to find a balance between unjustified overshoot, and justified interpolation (inventing probably useful data).

Technically, the overshoot is not "unjustified", and in fact arises from how we measure the error. Here is the intent of interpolation in the signal-processing philosophy: if we are to reconstruct a continuous signal from its samples, how should the samples be obtained so that the signal reconstructed from them is the "best" representation of the original, possibly non-bandlimited, function? The "best" part is measured by the error between the original, possibly unknown, function and the reconstructed function. For ideal sinc reconstruction it turns out that the best way to obtain the samples is to use the actual samples of the original signal itself. However, if we are going to reconstruct using other methods, is that still the best way to obtain samples? For example, pick linear interpolation: one finds that even linear interpolation can exhibit ringing. Normally we don't see that in linear interpolation, because we use the actual samples. However, if we use the actual samples, then linear interpolation is not doing the best it can do. We need to derive an alternate set of samples and then interpolate between them for minimum error.

Unfortunately, we are typically given the samples and can't change the way we acquired them. However, even in such cases optimizations exist that try to mitigate the effect of information loss by considering the way the samples were acquired and the reconstruction kernel jointly.

Now, as suggested by EsbenHR, one can use a Hermite polynomial instead of a cubic spline to get rid of "ringing-type" stuff. However, that should be done with the understanding that the goal is to generate a visually pleasing image and not necessarily the technically best reconstruction of the original image.

On the other hand, if we keep the reconstruction kernel fixed, then different ways of measuring the error will produce different levels of suppression of the ringing phenomenon. As I mentioned in an earlier post, the L1 norm is less sensitive to the presence of outliers (in our case here, a sharp step causing ringing).

For example, consider the numbers {1, 2, 5, 9, 112=outlier}. If we want to approximate them by a constant, then:

L1 norm: gives the estimator as 5,
L2 norm (least squares): gives the estimator as 25.8,
L_infinity (max) norm: gives the estimator as 56.5.

As seen, L2 and L_infinity are way off, while L1 is justified; it also gives the satisfying result that the estimator, 5, is actually present in the original set of numbers.

However, optimization in norms other than L2 is challenging and is something to be wary of.
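
For anyone who wants to verify those numbers, here is a quick Octave/Matlab check (my variable names, purely for illustration):

x = [1 2 5 9 112];             % the set, with 112 as the outlier
c_L1   = median(x)             % minimizes sum(abs(x - c))   -> 5
c_L2   = mean(x)               % minimizes sum((x - c).^2)   -> 25.8
c_Linf = (min(x) + max(x))/2   % minimizes max(abs(x - c))   -> 56.5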
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 01, 2010, 11:40:25 pm
Quote from: joofa
Technically, the overshoot is not "unjustified"

Technically, it is, for the reasons I described earlier. When working with images, increasing one sampled value in a series should never cause a nearby interpolated value to decrease, nor should decreasing one sampled value in a series cause any nearby interpolated value to increase. This "seesaw" behavior may make sense in the context of digital audio, where positive and negative values are equally valid, but makes no sense whatsoever in the context of images, where one can never have a negative number of photons, or a sampled value less than zero. Interpolation algorithms intended for images should take into account the asymmetric sample structure of images (only values >=0 are valid) vs the symmetric sample structure of digital audio (positive, zero, and negative values are equally valid).
Title: Is this Aliasing?
Post by: joofa on February 01, 2010, 11:57:29 pm
Quote from: Jonathan Wienke
Technically, it is, for the reasons I described earlier. When working with images, increasing one sampled value in a series should never cause a nearby interpolated value to decrease, nor should decreasing one sampled value in a series cause any nearby interpolated value to increase.

You can't take a set of sampled values, interpolate them using a reconstruction kernel that was never designed to guarantee a particular property, and then claim that the reconstruction kernel is at fault. Cubic splines follow a minimization criterion (minimization of the second derivative) and are not well suited to sudden impulses. If you want the interpolant to be monotonically increasing and/or decreasing in such situations then, as suggested by EsbenHR, use cubic Hermite interpolation so that you don't see that (excessive) overshoot or dipping.

However, that may not give you the best reconstruction in the sense of being close to the original function. What I am saying in my earlier post is that in the engineering community words such as "optimal", "best", etc. have a certain implied meaning. In typical cases it is the minimization of a certain norm, and most of the time it is the L2 norm. When you use the L2 norm for approximation then ringing will happen for typical reconstruction kernels, except some, such as nearest neighbor. Interestingly enough, even linear interpolation will cause ringing in this case for the reasons I mentioned in my earlier message.
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 02, 2010, 12:06:18 am
Quote from: joofa
Interestingly enough, even linear interpolation will cause ringing in this case for the reasons I mentioned in my earlier message.

Exactly how can linear interpolation ever cause ringing? You simply draw a straight line from one sample to the next. Each line between samples is completely independent; you can do whatever you like to a value in the series, and the only lines affected are those between the value you just changed and its immediate neighbors. No other lines are affected in any way, shape, or form. Linear interpolation may not always be very accurate, but it's never going to cause overshoot, ringing, or clipping of interpolated values.
Title: Is this Aliasing?
Post by: joofa on February 02, 2010, 12:14:52 am
Quote from: Jonathan Wienke
Exactly how can linear interpolation ever cause ringing? You simply draw a straight line from one sample to the next. Each line between samples is completely independent; you can do whatever you like to a value in the series, and the only lines affected are those between the value you just changed and its immediate neighbors. No other lines are affected in any way, shape, or form. Linear interpolation may not always be very accurate, but it's never going to cause overshoot, ringing, or clipping of interpolated values.

I mentioned in my previous messages that when you throw in linear interpolation with the L2 norm, the samples to be interpolated are not the actual samples but a certain set of samples derived from the original continuous function. I think it is about time you took Cliff's advice more seriously and visited a good academic library.

Title: Is this Aliasing?
Post by: Jonathan Wienke on February 02, 2010, 07:53:28 am
Quote from: joofa
I mentioned in my previous messages that when you throw in linear interpolation with the L2 norm, the samples to be interpolated are not the actual samples but a certain set of samples derived from the original continuous function.

No s*&t, Sherlock. That still doesn't explain how you can get ringing with linear interpolation, regardless of whether you're doing it with the original gamma 1.0 samples or after you've converted them to gamma 2.0. With linear interpolation, the interpolation calculation only considers the two nearest sample values; all other sample values are ignored and irrelevant. Ringing, as I understand the term, is when a large change in a sampled value affects interpolated values up to several samples away, as shown in this image:

[attachment=19927:Ringing.gif]

The red line shows linear interpolation, the blue line shows cubic spline interpolation. The cubic spline exhibits ringing--an oscillation above and below the baseline that diminishes with distance from the spike. The linear interpolation does not. The linear interpolated value will change depending on what gamma you convert the samples to before interpolating, but claiming that is ringing is comparing apples to aliens.
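
A similar toy comparison can be reproduced in a few lines of Octave/Matlab; this is just a sketch of the same idea, not the code used for the plot above:

x  = 0:16;                        % sample positions
y  = zeros(size(x)); y(9) = 1;    % flat series with a single spike
xi = 0:0.05:16;                   % fine grid for the interpolated curves
yl = interp1(x, y, xi, 'linear'); % straight lines between samples: no ringing
ys = spline(x, y, xi);            % cubic spline: oscillates around the baseline near the spike
plot(xi, yl, 'r', xi, ys, 'b', x, y, 'ko');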
Title: Is this Aliasing?
Post by: joofa on February 02, 2010, 09:08:29 am
Quote from: Jonathan Wienke
No s*&t, Sherlock. That still doesn't explain how you can get ringing with linear interpolation, regardless of whether you're doing it with the original gamma 1.0 samples or after you've converted them to gamma 2.0. .... The linear interpolated value will change depending on what gamma you convert the samples to before interpolating, but claiming that is ringing is comparing apples to aliens.

Jonathan, I don't think you are reading my posts carefully. BTW, how did the gamma thingy creep in? And when did I "claim" that gamma causes ringing, as you say above? I never alluded to anything regarding gamma. Additionally, in your example, where did you use the L2 norm, which I have repeatedly stressed causes ringing? Let's take the linear case. I mentioned before that for the best linear least-squares approximation you won't use the samples derived by passing an L2 function (possibly non-bandlimited) through a sinc filter, but rather through a filter of the following form:

Click here for linear sampling filter. (http://www.djjoofa.com/data/images/linear_correction.jpg)

And, I convolved this filter with a step edge and here is what I get:

Click here for linear ringing. (http://www.djjoofa.com/data/images/ringing_linear_interp.jpg)

Do you see the ringing?
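
For anyone who cannot open the linked images, here is a rough Octave/Matlab sketch in the same spirit (a quick illustration only, not the exact filter linked above): build a truncated inverse of the linear B-spline overlap filter [1 4 1]/6, apply it to a step edge, and you get the "corrected" samples that would then be connected with straight lines. The over/undershoot around the step survives the straight-line connection, and that is the ringing.

F = 1024;                               % FFT length used to build the filter
g = zeros(1, F);
g([end 1 2]) = [1 4 1]/6;               % [1 4 1]/6 centred at index 1 (wrapped around)
h = real(ifft(1 ./ fft(g)));            % its inverse: alternating-sign, decaying taps
h = fftshift(h);                        % move the peak to the middle of the vector
h = h(F/2 - 10 : F/2 + 12);             % keep 23 taps around the peak
stp = [zeros(1, 50), ones(1, 50)];      % ideal step edge
y = conv(stp, h, 'same');               % corrected samples: note the over/undershoot
plot(1:100, stp, 'k', 1:100, y, 'r');   % ringing on both sides of the step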
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 02, 2010, 09:32:54 am
Quote from: joofa
Jonathan, I don't think you are reading my posts carefully. BTW, how did the gamma thingy creep in? And when did I "claim" that gamma causes ringing, as you say above? I never alluded to anything regarding gamma. Additionally, in your example, where did you use the L2 norm, which I have repeatedly stressed causes ringing? Let's take the linear case. I mentioned before that for the best linear least-squares approximation you won't use the samples derived by passing an L2 function (possibly non-bandlimited) through a sinc filter, but rather through a filter of the following form:

Click here for linear sampling filter. (http://www.djjoofa.com/data/images/linear_correction.jpg)

And, I convolved this filter with a step edge and here is what I get:

Click here for linear ringing. (http://www.djjoofa.com/data/images/ringing_linear_interp.jpg)

Do you see the ringing?

Yes, but you're NOT using "linear interpolation" in that example; you're convolving with a multi-sample linear approximation of a sinc function to interpolate, which is an apples-to-aliens comparison. As I said before, "linear interpolation" as defined everywhere else in the universe is a simple a(1-d) + bd calculation, where a and b are the two nearest sample values, and d is the proportion of the distance between a and b for which you're calculating the interpolated value. Like nearest-neighbor interpolation, it can never overshoot, it can never ring, and it will never cause clipping. The interpolation function you showed can do all of those things, but it is NOT "linear interpolation".
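
For completeness, the claim is easy to check numerically (arbitrary values, just for illustration):

a = 10; b = 200;                     % two neighbouring sample values
d = 0:0.1:1;                         % fractional positions between them
v = a*(1-d) + b*d;                   % linearly interpolated values
all(v >= min(a,b) & v <= max(a,b))   % always 1: the result never leaves [a, b]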
Title: Is this Aliasing?
Post by: joofa on February 02, 2010, 09:39:35 am
Quote from: Jonathan Wienke
Yes, but you're NOT using "linear interpolation" in that example; you're using a multi-sample linear approximation of a sinc function to interpolate, which is an apples-to-aliens comparison. As I said before, "linear interpolation" as defined everywhere else in the universe is a simple a(1-d) + bd calculation, where a and b are the two nearest sample values, and d is the proportion of the distance between a and b for which you're calculating the interpolated value. Like nearest-neighbor interpolation, it can never overshoot, it can never ring, and it will never cause clipping. The interpolation function you showed can do all of those things, but it is NOT "linear interpolation".

Once you convolve your analog function with the sampling kernel I provided in my earlier message, and sample, then you WILL reconstruct using, to borrow your words, an "a(1-d) + bd calculation". So the linear approximation is there, but I am repeating for the Nth time that the samples fed into it are derived using a different kernel than the sinc, for the best least-squares approximation.

Title: Is this Aliasing?
Post by: Jonathan Wienke on February 02, 2010, 10:49:59 am
Quote from: joofa
Once you convolve your analog function with the sampling kernel I provided in my earlier message, and sample, then you WILL reconstruct using, to borrow your words, an "a(1-d) + bd calculation".

No you will not, and the diagram you posted proves it. Linear interpolation, "a(1-d) + bd" only considers the two nearest sample values when calculating an interpolated value; all other sampled values are ignored. Your "linear" convolution kernel convolves 7 sample values into the interpolated value, not 2. Here's another example of linear interpolation:

(http://upload.wikimedia.org/wikipedia/commons/thumb/6/67/Interpolation_example_linear.svg/300px-Interpolation_example_linear.svg.png)

Note that the interpolated values between samples plot as straight lines. There are no spikes or overshoots or oscillations, just straight lines. Hence the term "linear interpolation".
Title: Is this Aliasing?
Post by: EsbenHR on February 02, 2010, 11:02:26 am
Hey - stop fighting.

@Jonathan: actually, "everywhere else in the literature" is a pretty large place and it is in fact true that Joofa's view is common in e.g. machine learning.

@Joofa: actually, Jonathan is right that connecting the samples with straight lines is how the term is used in image processing.


Come on guys, "bilinear interpolation" is only a separable process in image processing! The explanation on Wikipedia is what everyone else uses (e.g. for topographic maps).

Like it or not, but different fields use the terms differently; and this is an image processing forum.
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 02, 2010, 12:07:14 pm
Quote from: EsbenHR
Hey - stop fighting.

@Jonathan: actually, "everywhere else in the literature" is a pretty large place and it is in fact true that Joofa's view is common in e.g. machine learning.

It's not very common when doing a Google search; every link I looked at in the search results for "linear interpolation" used the term the same way I have. And I was clear from the beginning what I was calling "linear interpolation".
Title: Is this Aliasing?
Post by: joofa on February 02, 2010, 12:17:56 pm
Quote from: EsbenHR
@Joofa: actually, Jonathan is right that connecting the samples with straight lines is how the term is used for in image processing.

EsbenHR, both Jonathan and I are in agreement that linear interpolation is connecting the samples with straight lines. What Jonathan is not realizing is that I have stressed, for L2 approximation, what the best way to get these samples is. It is not the same as in the ideal sinc interpolation case, i.e., taking the analog function, convolving it with a sinc, and then sampling. Rather, one convolves with a different kernel (not to be confused with the reconstruction kernel, which, as we agree, is simply the connect-the-dots linear one). You can of course take the samples of the original function (after appropriately bandlimiting it), just connect them, and get the lines. However, that is not necessarily the best approximation to the original function in L2. If you want better, then we have to change the way we acquired the samples. I had all of this in my first message on this topic and I repeat it here with appropriate highlighting:

Quote from: joofa
For example, take linear interpolation: one finds that even linear interpolation will exhibit ringing. Normally we don't see that with linear interpolation, because we use the actual samples. But if we use the actual samples, then linear interpolation is not doing the best it can do. We need to derive an alternate set of samples and then interpolate between them for minimum error.

However, there is an important practical issue here also. We are normally just given the samples and have no control over how they were acquired. I even covered that in my same message and I am just repeating it here again:

Quote from: joofa
Unfortunately, we are typically given the samples and can't change the way we acquired them. However, even in such cases optimizations exist that try to mitigate the effect of information loss by considering the way the samples were acquired and the reconstruction kernel jointly.
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 02, 2010, 01:19:02 pm
Quote from: joofa
If you want better, then we have to change the way we acquired the samples. I had all of this in my first message on this topic and I repeat it here with appropriate highlighting:



However, there is an important practical issue here also. We are normally just given the samples and have no control over how they were acquired. I even covered that in my same message and I am just repeating it here again:

This sounds just like your claims that you can remove aliasing without distorting the signal; possibly workable in theory if you have access to the signal prior to sampling, but of no relevance whatsoever to the real-world scenario of having only the sampled values from the sensor to work with. Any algorithm that requires one to "change the way we acquired the samples" is useless in the context of processing image data that has already been sampled.
Title: Is this Aliasing?
Post by: joofa on February 02, 2010, 02:05:48 pm
Quote from: Jonathan Wienke
... possibly workable in theory if you have access to the signal prior to sampling, but of no relevance whatsoever to the real-world scenario of having only the sampled values from the sensor to work with. Any algorithm that requires one to "change the way we acquired the samples" is useless in the context of processing image data that has already been sampled.

Firstly, there is the correct interpretation of theory. Secondly, please don't put words in my mouth that I have not said or implied, just like you said above that I claimed something about gamma. I don't know why I have to repeat so much stuff here but I did say the following in my first message on this topic:

Quote from: joofa
Unfortunately, we are typically given the samples and can't change the way we acquired them. However, even in such cases optimizations exist that try to mitigate the effect of information loss by considering the way the samples were acquired and the reconstruction kernel jointly.

There are techniques that let you jointly consider the reconstruction and sampling kernels. Please refer to any advanced book on signal processing.
Title: Is this Aliasing?
Post by: EsbenHR on February 02, 2010, 02:08:00 pm
Quote from: joofa
EsbenHR, both Jonathan and I are in agreement that linear interpolation is connecting the samples with straight lines.
Well, you do not appear to agree that "linear interpolation" means, by definition, connecting the original samples with straight lines.
I would say that the most common use of the term "interpolating" is that the interpolating function, f(x,y), satisfies f(x_i,y_i) = z_i for the sample set (i.e. the original samples).

Ignoring the terminology sideshow, what you want to do is quite different: you want to approximate the original function (from which the samples are obtained) with a function expressed as a sum of basis functions.
When you say "linear interpolation", this means that the basis functions are triangular.

Could we call this "reconstruction" to avoid the word war? Or is that term too overloaded in this context as well?


Quote from: Jonathan Wienke
This sounds just like your claims that you can remove aliasing without distorting signal; possibly workable in theory if you have access to the signal prior to sampling, but of no relevance whatsoever to the real-world scenario of having only the sampled values from the sensor to work with. Any algorithm that requires one to "change the way we acquired the samples" is useless in the context of processing image data that has already been sampled.
I think it is useful to know that if we did know the original function, then the best we could do (using an L2-norm, RMS, energy, least-squares or whatever you want to call it) would exhibit ringing under quite general conditions.
If you want to prevent that, then you need some rather specialized assumptions about your image.

The trivial example is to assume the original function is a sum of the same basis functions you want to use for the approximation.
A better example is to assume the original function is piecewise linear, where the segment lengths (using 1D for simplicity) have a probability distribution with most of its mass above the sample spacing (i.e. 1 px).

Or something along those lines. If you do that, then it is possible to avoid ringing even using an L2-norm and even if you do not know the original function.
You do need to introduce some extra knowledge though.

What these assumptions should be is a lot more fun to discuss than what terminology we use ;-)
Title: Is this Aliasing?
Post by: joofa on February 02, 2010, 02:16:07 pm
Quote from: EsbenHR
Well, you do not appear to agree that "linear interpolation" means, by definition, connecting the original samples with straight lines.
I would say that the most common use of the term "interpolating" is that the interpolating function, f(x,y), satisfies f(x_i,y_i) = z_i for the sample set (i.e. the original samples).

Ignoring the terminology sideshow, what you want to do is quite different: you want to approximate the original function (from which the samples are obtained) with a function expressed as a sum of basis functions.

Hi, I think I tried to make my assumptions and definitions clear in my first message. I'm sorry if they were not fully clear. However, some of them are repeated below:

Quote from: joofa
Here is the intent of interpolation in the signal-processing philosophy: if we are to reconstruct a continuous signal from its samples, how should the samples be obtained so that the signal reconstructed from them is the "best" representation of the original, possibly non-bandlimited, function? The "best" part is measured by the error between the original (possibly unknown) function and the reconstructed function.

Quote from: EsbenHR
When you say "linear interpolation", this means that the basis functions are triangular.

Yes, and that is linear interpolation.

Quote from: EsbenHR
Could we call this "reconstruction" to avoid the word war? Or is that term too overloaded in this context as well?

I have used the word "reconstructed signal" in my quoted text above.
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 03, 2010, 08:26:07 am
Quote from: joofa
Firstly, there is the correct interpretation of theory. Secondly, please don't put words in my mouth that I have not said or implied, just like you said above that I claimed something about gamma.

I'm not putting words in your mouth; "change the way we acquired the samples" is a direct quote from the last line of the first paragraph of YOUR post #117. In my universe, "acquiring the samples" is accomplished via an ADC chip taking some analog input signal and converting it to a series of numeric values. These original numeric values as output by the ADC are "the samples". Values calculated by a reconstruction or interpolation algorithm are not "samples"; they are educated guesses. Increasing the number of samples can increase the maximum amount of fine detail an image can contain; increasing the number of interpolated/reconstructed values cannot. The best they can do is aid us in more accurately interpreting the meaning of the samples we have.

Given that, the only way to "change the way we acquired the samples" in my world is to use a different ADC chip, or alter the mechanism feeding the analog signal to the ADC chip. This is obviously not practical in the context of processing images captured with whatever camera one has; it is unrealistic to expect someone to buy a new camera just so some flavor of "linear interpolation" will work better.

Similarly, when you claim "linear interpolation" can cause ringing, if you fail to define how your definition of "linear interpolation" differs from the most common definition of "linear interpolation", you risk looking like an idiot because the commonly-defined version of "linear interpolation" can never cause ringing, clipping, or overshoot.

I'm not trying to be a dick here; I'm trying to figure out exactly what you are saying. You seem to be a bit like Humpty Dumpty, taking the position that words mean whatever you say they mean, and I'm taking the position that words have specific, and generally well-established meanings, and using words contrary to those meanings causes confusion. What you call "linear interpolation" is very different from the notion of "linear interpolation" as commonly used in image processing, digital signal processing, and many other scientific fields. Your notion of a "sample" seems to similarly deviate from common usage.
Title: Is this Aliasing?
Post by: joofa on February 03, 2010, 09:35:36 am
Quote from: Jonathan Wienke
Given that, the only way to "change the way we acquired the samples" in my world is to use a different ADC chip, or alter the mechanism feeding the analog signal to the ADC chip. This is obviously not practical in the context of processing images captured with whatever camera one has; it is unrealistic to expect someone to buy a new camera just so some flavor of "linear interpolation" will work better.

I don't know why you are constantly ignoring what I have repeatedly said: there exist techniques to account for the fact that the samples were not acquired in the way mandated by the reconstruction kernel. I have said that a number of times but you keep on skipping it. There is NO need to use a different ADC.

I just worked an example for you where I took the samples as acquired by whatever "ADC" and was still able to accomplish the linear approximation in the L2 sense. Please note that I worked out this example very quickly; it is not the best, there are certain hacks in there, and it can be made better, but it is just to illustrate that you don't need to change "ADC's".

Click here to see an actual image of Lena.  (http://www.djjoofa.com/data/images/lena.jpg)

Click here to see image doubled in size with "straight" linear interpolation. (http://www.djjoofa.com/data/images/lena_lin.jpg)

Click here to see image doubled in size with linear approximation in L2 sense. (http://www.djjoofa.com/data/images/lena_lin_ls.jpg)

The L2-approximated one looks sharper than the straight linear interpolation, though with a lot of ringing artifacts. As I said, it was just done in a "shortcut" way and there is hope to make it better. But the point is that I did not change any ADCs.
Title: Is this Aliasing?
Post by: Jonathan Wienke on February 03, 2010, 09:53:24 am
Quote from: joofa
But the point is that I did not change any ADCs.

Nor did you acquire any "new" samples, you simply processed the same set of samples (the original Lena image) in different ways using different algorithms. You didn't change the way you acquired the samples, you changed the way you processed them. There's a big difference, kind of like the difference between lightning and a lightning bug.
Title: Is this Aliasing?
Post by: joofa on February 03, 2010, 11:07:28 am
Quote from: Jonathan Wienke
Nor did you acquire any "new" samples, you simply processed the same set of samples (the original Lena image) in different ways using different algorithms. You didn't change the way you acquired the samples, you changed the way you processed them. There's a big difference, kind of like the difference between lightning and a lightning bug.

Are you nitpicking or really not following all of this stuff? Here is what was done: I have the regular samples and know the reconstruction and sampling kernels, so I derived a fast and somewhat crude approximation to what the samples would have been if they had been acquired with the right sampling kernel applied in the analog domain. With this new set of samples I did the standard linear interpolation.

However, one does not even need to do all of this. There are more sophisticated methods, rooted in theory, for dealing with such situations.
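
In rough Octave/Matlab terms, the two-step idea looks something like the sketch below. To be clear, this is only an illustration using a simple correction filter (a truncated inverse of the linear B-spline overlap filter [1 4 1]/6); it is not the exact filter used for the Lena images above:

img = double(imread('lena512.bmp'));          % Bart's 512x512 greyscale Lena

% Step 1: "corrected" samples via a truncated inverse of [1 4 1]/6.
F = 1024;
t = [zeros(1,100), [1 4 1]/6, zeros(1, F-103)];
Tinv = real(ifft(1 ./ fft(t)));
f = Tinv(F-100-5 : F-100+5);                  % 11 taps centred on the peak
c = conv2(conv2(img, f, 'same'), f', 'same'); % separable prefilter

% Step 2: plain 2x linear interpolation of the corrected samples.
[H, W] = size(c);
up = zeros(2*H, 2*W);
up(1:2:end, 1:2:end) = c;                     % zero-stuff by 2 in each direction
k = [1 2 1] / 2;                              % linear-interpolation kernel for 2x
up = conv2(conv2(up, k, 'same'), k', 'same');

imwrite(uint8(up), 'lena_2x_sketch.png');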

Quote from: Jonathan Wienke
... you risk looking like an idiot ....  You seem to be a bit like Humpty Dumpty, taking the position that words mean whatever you say they mean, ...

I guess this conversation is over. As I mentioned before, it takes a lot of time for me to research and prepare my responses for online forums. I wrote a small program only to illustrate my point to you. And I can't continue putting in this effort if we stray into issues of civility.
Title: Is this Aliasing?
Post by: joofa on February 03, 2010, 11:25:12 am
Deleted.
Title: Is this Aliasing?
Post by: crames on February 03, 2010, 01:29:41 pm
Quote from: joofa
I just worked an example for you where I took the samples as acquired by whatever "ADC" and was still able to accomplish the linear approximation in the L2 sense. Please note that I worked out this example very quickly; it is not the best, there are certain hacks in there, and it can be made better, but it is just to illustrate that you don't need to change "ADC's".

Click here to see an actual image of Lena.  (http://www.djjoofa.com/data/images/lena.jpg)

Click here to see image doubled in size with "straight" linear interpolation. (http://www.djjoofa.com/data/images/lena_lin.jpg)

Click here to see image doubled in size with linear approximation in L2 sense. (http://www.djjoofa.com/data/images/lena_lin_ls.jpg)

The L2-approximated one looks sharper than the straight linear interpolation, though with a lot of ringing artifacts. As I said, it was just done in a "shortcut" way and there is hope to make it better. But the point is that I did not change any ADCs.
Joofa,

The extra sharpness of the L2 version is interesting (although it looks like some jpeg artifacts got in the way).

Do you have a reference that describes that method?

Cliff
Title: Is this Aliasing?
Post by: Bart_van_der_Wolf on February 03, 2010, 02:48:21 pm
Quote from: crames
The extra sharpness of the L2 version is interesting (although it looks like some jpeg artifacts got in the way).

Do you have a reference that describes that method?

I'm also a bit puzzled by the 'actual' Lena image (an image which I know as a larger color image), since it's smaller than any version I've seen used as a standard, and there seem to be different versions around. Here (http://www.ece.rice.edu/~wakin/images/) are a few versions, and we could adopt one as the reference. The lena512.bmp (http://www.ece.rice.edu/~wakin/images/lena512.bmp) version could serve as the one used by everybody who wishes to experiment, so that we don't introduce another variable, a different source image. Of course it becomes a lot bigger when it gets resized 2x linearly, 4x in number of pixels (to explain my use of the term linear  ), so one might prefer to compare crops.

Cheers,
Bart
Title: Is this Aliasing?
Post by: ejmartin on February 03, 2010, 04:40:16 pm
Quote from: BartvanderWolf
I'm also a bit puzzled by the 'actual' Lena image [...] since it's smaller than any version I've seen

Obviously he changed the way the samples were acquired  
Title: Is this Aliasing?
Post by: joofa on February 03, 2010, 05:30:31 pm
Quote from: crames
Joofa,

The extra sharpness of the L2 version is interesting (although it looks like some jpeg artifacts got in the way).

Do you have a reference that describes that method?

Cliff, I did not follow any published procedure. As I said, I did a quick and dirty job of approximating the samples I wanted. Basically, I took a cue from how the convergence coefficients are determined for Gabor expansions, derived a filter biorthogonal to the linear interpolation kernel, and used that on the Lena image.

I also noticed the jpeg compression artifacts. I did not mess with quality parameters etc., as I was not aiming for anything fancy.

Quote from: ejmartin
Obviously he changed the way the samples were acquired  

Emil, that is very funny indeed!  
Title: Is this Aliasing?
Post by: crames on February 03, 2010, 06:36:30 pm
Quote from: joofa
I also noticed the jpeg compression artifacts. I did not mess with quality parameters etc., as I was not aiming for anything fancy.
It seems that your Lena had the jpeg artifacts before going through your routine. I wonder how it would look on (a reduced version of) Bart's "clean" Lena?
Title: Is this Aliasing?
Post by: EsbenHR on February 03, 2010, 07:11:27 pm
Quote from: joofa
I just worked an example for you where I took the samples as acquired by whatever "ADC" and was still able to accomplish the linear approximation in the L2 sense. Please note that I worked out this example very quickly; it is not the best, there are certain hacks in there, and it can be made better, but it is just to illustrate that you don't need to change "ADC's".

The L2-approximated one looks sharper than the straight linear interpolation, though with a lot of ringing artifacts. As I said, it was just done in a "shortcut" way and there is hope to make it better. But the point is that I did not change any ADCs.


OK, I'll bite!

So, let's forget the L2-norm and say, instead, that we assume the (continuous) image on the sensor (yeah, Lenna was scanned, work with me here) is piecewise linear and a remarkable piece of luck would have it that the pieces are all between the nearest pixel centers :-)

So, what would that look like, if we assume that the acquired pixels represent the average of the function? That is, if we assume pixels are noise-free, the fill factor is 100%, etc.?
As attached I would think. I hope you can see which is which ;-)


Here is the code which should work in Matlab or any halfway decent clone (the above was an ancient Octave):
% Load image.
img1 = double(imread('lena512.png'));

% Truncate an inverse (use a window to suck less).
F = 1024;
t = [zeros(1,100), [1, 6, 1]/8, zeros(1,F - 103)];
T = real(ifft(1./fft(t)));
m = F - 100;
f = T(m-5:m+5);

% Apply inverse.
img2 = conv2(img1, f,  'same');
img2 = conv2(img2, f', 'same');

% Upscale N times (original image in the example).
N = 4;
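% Set "up = img2" on the next line to upscale the inverse-filtered image instead.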
up = img1;
[H,W] = size(up);
up = reshape(up, [1, H, 1, W]);
up(N,1,N,1) = 0;
up = reshape(up, [H*N, W*N]);

% Filter using desired kernel (bilinear kernel in example).
k = [1, 2, 3, 4, 3, 2, 1]/4;   % Bilinear
%k = [1, 1, 1, 1];             % Nearest neighbour
up = conv2(up, k'*k, 'same');

% Write result back.
imwrite(uint8(up), 'lena-out1.png');   % image first, then filename (Matlab argument order)
[attachment=19971:lena_test.png]
Title: Is this Aliasing?
Post by: joofa on February 03, 2010, 07:21:50 pm
Quote from: crames
It seems that your Lena had the jpeg artifacts before going through your routine. I wonder how it would look on (a reduced version of) Bart's "clean" Lena?

Hi Cliff,

You are right the Lena I was working with was apparently not very faithful.

I downloaded Bart's "clean" Lena (thanks, Bart). I had to convert the bmp to jpeg before I could do anything, as I was working in jpeg. I upscaled from 512x512 to 1024x1024. Here are two crops out of the 1024x1024 images:

Click here for regular linear interpolation. (http://www.djjoofa.com/data/images/lena_crop_lin.jpg)

Click here for L2 linear approximation. (http://www.djjoofa.com/data/images/lena_crop_lin_ls.jpg)

The L2 one seems sharper, and those bad ringing artifacts are suppressed a lot.

Title: Is this Aliasing?
Post by: Jonathan Wienke on February 03, 2010, 07:40:50 pm
Quote from: joofa
Are you nitpicking or really not following all of this stuff?

I suppose I'm nitpicking, in the interest of coming to a common definition of terms. You can't have a meaningful discussion without a common understanding of the meaning of the terms used in the conversation. As an example, your usage of the terms "linear interpolation" and "sample" is very different from their most common definitions, and as a result it is sometimes difficult to tell whether what you are saying is insightful or gibberish.
Title: Is this Aliasing?
Post by: crames on February 04, 2010, 08:21:19 am
Quote from: EsbenHR
OK, I'll bite!

So, let's forget the L2-norm and say, instead, that we assume the (continuous) image on the sensor (yeah, Lenna was scanned, work with me here) is piecewise linear and a remarkable piece of luck would have it that the pieces are all between the nearest pixel centers :-)

So, what would that look like, if we assume that the acquired pixels represent the average of the function? That is, if we assume pixels are noise-free, the fill factor is 100%, etc.?
As attached I would think. I hope you can see which is which ;-)
If I'm reading your code correctly, you're filtering with the inverse of a triangular psf ([1 6 1]/8), then interpolating?

It might be better to interpolate first, then inverse filter, to reduce the jaggies on the diagonal hat brim (as in my sharpening demo, above). Otherwise, the inverse filtering seems to help a lot.
Title: Is this Aliasing?
Post by: EsbenHR on February 05, 2010, 05:21:02 am
Quote from: crames
If I'm reading your code correctly, you're filtering with the inverse of a triangular psf ([1 6 1]/8), then interpolating?

It might be better to interpolate first, then inverse filter, to reduce the jaggies on the diagonal hat brim (as in my sharpening demo, above). Otherwise, the inverse filtering seems to help a lot.
Yes, you are correct that is exactly what it does.
Certainly your suggestion would result in better images.

However, the job I set out to do was to create a continuous image for which a sensor (ideal, no noise, 100% fill factor, etc.) would return the values we were given in the file.

So it is not an attempt to create a good image; it is an attempt to create an image which, when measured by such a sensor, would theoretically yield the values in the file.
The [1 6 1] filter, which is inverted, expresses the fact that a triangular basis function has 1/8 of its mass on each of the neighbor pixels.
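
A quick numeric check of those weights, using the unit triangle as the basis function:

B = @(x) max(1 - abs(x), 0);        % linear (triangular) basis, support [-1, 1]
w_centre   = quad(B, -0.5, 0.5)     % 0.75  = 6/8 of the mass over its own pixel
w_neighbor = quad(B,  0.5, 1.5)     % 0.125 = 1/8 over each of the two neighbor pixels

The three weights [1 6 1]/8 sum to 1, as they should.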

The point was to show that if we have an image model (in this case a very simple-minded model: "the image is piecewise linear") and a sensor model (here: each pixel measures the light exposed on its area) then we can construct a plausible image. If we accept those terms, then we get ringing.

[Actually, I guess the sensor integrates in gamma 1.0, so we would need to transform the image before and after loading. It would likely be even uglier, but more true to the stated assumptions.]
Title: Is this Aliasing?
Post by: ejmartin on February 05, 2010, 08:56:53 am
It would seem that the image formation model one wants is one in which, in a region of steep gradient, the second derivative of the image does not change sign within a given distance of the steep gradient. This would prevent or at least dampen ringing (which I, for one, find more objectionable than a little softness).
Title: Is this Aliasing?
Post by: EsbenHR on February 05, 2010, 12:03:21 pm
Quote from: ejmartin
It would seem that the image formation model one wants is one in which, in a region of steep gradient, the second derivative of the image does not change sign within a given distance of the steep gradient. This would prevent or at least dampen ringing (which I, for one, find more objectionable than a little softness).

Yes, I believe that is the kind of thinking we need. Oops, we just entered non-linear territory :-)

Anyway, I personally lean towards the idea that we might not have a derivative everywhere.
Title: Is this Aliasing?
Post by: joofa on February 05, 2010, 12:18:37 pm
Quote from: Jonathan Wienke
... your usage of the terms "linear interpolation" and "sample" is very different from their most common definitions, and as a result it is sometimes difficult to tell whether what you are saying is insightful or gibberish.

In the signal processing literature there is a technical difference between interpolation and approximation. Interpolation means that the signal values at the original locations are kept the same while the in-between detail is filled in. Approximation means that the signal values at the original locations are not necessarily kept the same.

I have tried to maintain this difference in my posts. I have talked about two sets of data that we are going to interpolate: one is the actual samples, and the other is a derived set of samples. When filling in values in both of these separate sets I have been using linear interpolation, since at the original locations in each set of samples I am keeping the data values intact.

However, once everything is done and we compare the results of the two sets, the interpolated values at the original locations in the derived set will of course differ from those in the first set, because the samples in the two sets were different to start with, before any interpolation in either of them was done, and in this sense it may now be called approximation.

If you notice, I used the words "linear approximation" for this final comparison phase in my posts #123 (http://luminous-landscape.com/forum/index.php?s=&showtopic=40809&view=findpost&p=344555) and #133 (http://luminous-landscape.com/forum/index.php?s=&showtopic=40809&view=findpost&p=344707), in those "Click here ..." sentences.
Title: Is this Aliasing?
Post by: joofa on February 05, 2010, 03:28:06 pm
Quote from: EsbenHR
The point was to show that if we have an image model (in this case a very simple-minded model: "the image is piecewise linear") and a sensor model (here: each pixel measures the light exposed on its area) then we can construct a plausible image. If we accept those terms, then we get ringing.

It would appear to me that the joint effect of the inverse-triangular-plus-linear model may not be too simple. If I understand it correctly, here is the overall system response in this case:

(http://djjoofa.com/data/images/qlsys.jpg)

It rises a little towards Nyquist, which causes the sharpening, but it is close to the ideal flat response. Below is the response of the inverse-triangular-plus-nearest-neighbor combination:

(http://djjoofa.com/data/images/qnsys.jpg)

As expected, it has higher overshoot and would cause more of that nasty jaggies effect.