Jonathan Wienke


« Reply #100 on: January 30, 2010, 06:26:05 PM » 

I'm already doing something along those lines with my modified 2D spline interpolation code. I use the standard method to calculate the z coefficients for each spline knot, where z is the parameter used in this equation:
...
Over/undershoot near high-contrast edges is effectively limited without altering the behavior of the spline away from such edges. And the effect of clipping this parameter is paradoxically gradual; if the clipped value is only slightly different from the unclipped value, the effect on the spline is slight. Here's a 1D comparison between a standard natural cubic spline (blue) and my modified cubic spline (red):

[attachment=19872:SplineComparison.gif]

The data points in this curve are random values, highlighted in white. Although this example shows one instance where clipping is not prevented, you can see several areas where the modified spline avoids ringing/overshoot much better than the unmodified spline.
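To make the idea concrete, here is a minimal sketch in Python (unit knot spacing assumed; the ±3.0 clipping threshold is purely illustrative, not my actual rule):

```python
import numpy as np

def natural_spline_z(y):
    """Second-derivative parameters z_i of a natural cubic spline
    through the points (i, y_i), unit knot spacing."""
    n = len(y)
    z = np.zeros(n)
    if n >= 3:
        # Tridiagonal system: z[i-1] + 4*z[i] + z[i+1] = 6*(y[i-1] - 2*y[i] + y[i+1])
        A = (np.diag(np.full(n - 2, 4.0))
             + np.diag(np.ones(n - 3), 1) + np.diag(np.ones(n - 3), -1))
        b = 6.0 * (y[:-2] - 2.0 * y[1:-1] + y[2:])
        z[1:-1] = np.linalg.solve(A, b)
    return z

def eval_spline(y, z, x):
    """Evaluate the piecewise cubic at x, written in terms of the z parameters."""
    i = min(int(np.floor(x)), len(y) - 2)
    t = x - i
    return (z[i] * (1 - t) ** 3 / 6.0 + z[i + 1] * t ** 3 / 6.0
            + (y[i] - z[i] / 6.0) * (1 - t) + (y[i + 1] - z[i + 1] / 6.0) * t)

y = np.array([0.0, 0.0, 1.0, 0.0, 0.0])   # an isolated spike
z = natural_spline_z(y)
z_clipped = np.clip(z, -3.0, 3.0)         # hypothetical clipping threshold
```

Note that clipping z leaves the values at the knots untouched (at t = 0 and t = 1 the expression reduces to y_i and y_{i+1} exactly), which is why the effect is gradual: segments whose z values survive the clip are bit-identical to the standard spline.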







BartvanderWolf


« Reply #101 on: January 30, 2010, 07:07:22 PM » 

Quote from: Jonathan Wienke
Here's a 1D comparison between a standard natural cubic spline (blue) and my modified cubic spline (red) ... you can see several areas where the modified spline avoids ringing/overshoot much better than the unmodified spline.

Hi Jonathan,

The issue with splines, of course, remains finding a balance between unjustified overshoot and justified interpolation (inventing probably-useful data). Maybe a more advanced heuristic is needed, applied only when a potential overshoot is detected, based on more local data input (e.g. 2D data oriented perpendicular to the sampling direction)? Just thinking aloud ...

Cheers,
Bart




== If you do what you did, you'll get what you got. ==



EsbenHR


« Reply #102 on: January 30, 2010, 07:18:48 PM » 

Quote from: Jonathan Wienke
Here's a 1D comparison between a standard natural cubic spline (blue) and my modified cubic spline (red) ...

I found it hard to see exactly where the reference points were, but I do get the idea. It looks a bit too simple to me, but if it looks good it is hard to argue with it :)

In any case, I would not think the points on one side of a discontinuity should affect the interpolated values on the other. The whole idea of a discontinuity is that there is a separation. If I understand you correctly, you start by solving the system to find the spline and then modify the result locally. I would think a discontinuity would be more properly modeled as two splines. (You could get the same effect by zeroing the coupling elements in the system matrix before solving for the spline parameters.)







Jonathan Wienke


« Reply #103 on: January 30, 2010, 09:32:46 PM » 

The first example I posted is a bit extreme; pretty much every sample in the series constitutes a "high-contrast edge". Here's a side-by-side comparison with a second series that doesn't have such extreme variation:
[attachment=19875:SplineComparison.gif]
The only reason there's any difference between the splines in the second series is that the modified spline uses a 16-bit integer variable to store the z parameter instead of a 32-bit floating-point variable, to save space.







joofa


« Reply #104 on: February 01, 2010, 08:30:40 PM » 

Quote from: BartvanderWolf
The issue with splines of course remains to find a balance between unjustified overshoot, and justified interpolation (inventing probably useful data).

Technically, the overshoot is not "unjustified"; in fact it arises because of how we measure the error. Here is the intent of interpolation in the signal-processing philosophy: if we are to reconstruct a continuous signal from its samples, then how should the samples be obtained so that the signal reconstructed from them is the "best" representation of the original, possibly non-bandlimited, function? The "best" part is measured by the error between the original (possibly unknown) function and the reconstructed function.

For ideal sinc reconstruction it turns out that the best way to obtain the samples is to use the actual samples of the original signal itself. However, if we are going to reconstruct using other methods, is that still the best way to obtain samples? E.g., let's pick linear interpolation: one finds that even linear interpolation will exhibit ringing. Normally we don't see that in linear interpolation, because we use the actual samples. But if we use the actual samples, then linear interpolation is not doing the best it can do; we need to derive an alternate set of samples and then interpolate between them for minimum error. Unfortunately, we are typically given samples and can't change the way they were acquired. However, even in such cases optimizations exist that try to mitigate the effect of information loss by considering the way the samples were acquired and the reconstruction kernel jointly.

Now, as suggested by EsbenHR, one can use a Hermite polynomial instead of a cubic spline to get rid of ringing-type artifacts. However, that should be done with the understanding that the goal is to generate a visually pleasing image, not necessarily the technically best reconstruction of the original image.
On the other hand, if we keep the reconstruction kernel fixed, then different ways of measuring error will produce different levels of suppression of the ringing phenomenon. As I mentioned in an earlier post, the L1 norm is less sensitive to the presence of outliers (a sharp step causing ringing, in our case here). For example, consider the numbers {1, 2, 5, 9, 112=outlier}. If we want to approximate them by a constant, then:

L1 norm: gives the estimator as 5 (the median).
L2 norm (least squares): gives the estimator as 25.8 (the mean).
L_infinity (max) norm: gives the estimator as 56.5 (the midrange).

As seen, L2 and L_infinity are way off, while L1 is justified; it also gives the satisfying result that the estimator, 5, is actually present in the original set of numbers. However, optimization in spaces other than L2 is challenging, and is something to be wary of.
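Those three estimators can be checked in a couple of lines (the median minimizes the L1 error, the mean minimizes L2, and the midrange minimizes L_infinity):

```python
import numpy as np

data = np.array([1.0, 2.0, 5.0, 9.0, 112.0])   # 112 is the outlier

l1 = np.median(data)                   # minimizes sum |x - c|
l2 = data.mean()                       # minimizes sum (x - c)^2
linf = (data.min() + data.max()) / 2   # minimizes max |x - c|
```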


« Last Edit: February 01, 2010, 11:28:50 PM by joofa »





Jonathan Wienke


« Reply #105 on: February 01, 2010, 11:40:25 PM » 

Quote from: joofa
Technically, the overshoot is not "unjustified"

Technically, it is, for the reasons I described earlier. When working with images, increasing one sampled value in a series should never cause a nearby interpolated value to decrease, nor should decreasing one sampled value cause any nearby interpolated value to increase. This "seesaw" behavior may make sense in the context of digital audio, where positive and negative values are equally valid, but it makes no sense whatsoever in the context of images, where one can never have a negative number of photons, or a sampled value less than zero. Interpolation algorithms intended for images should take into account the asymmetric sample structure of images (only values >= 0 are valid) vs. the symmetric sample structure of digital audio (positive, zero, and negative values are equally valid).







joofa


« Reply #106 on: February 01, 2010, 11:57:29 PM » 

Quote from: Jonathan Wienke
Technically, it is, for the reasons I described earlier. When working with images, increasing one sampled value in a series should never cause a nearby interpolated value to decrease, nor should decreasing one sampled value in a series cause any nearby interpolated value to increase.

You can't take a set of sampled values, interpolate them using a reconstruction kernel that was not designed to do something, and then claim that it was the fault of the reconstruction kernel. Cubic splines follow a certain minimization criterion (minimizing the integrated squared second derivative) and are not well suited to sudden impulses. If you want a monotonically increasing and/or decreasing interpolant in such situations then, as suggested by EsbenHR, use cubic Hermite interpolation, so that you don't see that (excessive) overshoot or dipping. However, that may not give you the best reconstruction in the sense of being close to the original function.

What I am saying in my earlier post is that in the engineering community words such as "optimal", "best", etc. have a certain implied meaning. In typical cases it is the minimization of a certain norm, most of the time the L2 norm. When you use the L2 norm for approximation, ringing will happen for typical reconstruction kernels, except for a few such as nearest neighbor. Interestingly enough, even linear interpolation will cause ringing in this case, for the reasons I mentioned in my earlier message.
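A sketch of the cubic Hermite route, using Fritsch-Carlson-style monotone slopes for the tangents (endpoint handling simplified here; production PCHIP implementations use a three-point end formula):

```python
import numpy as np

def monotone_slopes(y):
    """Fritsch-Carlson style slope limiting for unit-spaced data:
    harmonic mean of the two secant slopes, zero at local extrema."""
    d = np.diff(y)                       # secant slopes
    m = np.zeros(len(y))
    for i in range(1, len(y) - 1):
        if d[i - 1] * d[i] > 0:          # same sign: harmonic mean
            m[i] = 2.0 / (1.0 / d[i - 1] + 1.0 / d[i])
        # opposite signs or a flat side: slope stays 0 (local extremum)
    m[0], m[-1] = d[0], d[-1]            # simplified endpoint slopes
    return m

def hermite_eval(y, m, x):
    """Cubic Hermite evaluation with the standard basis polynomials."""
    i = min(int(x), len(y) - 2)
    t = x - i
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y[i] + h10 * m[i] + h01 * y[i + 1] + h11 * m[i + 1]

y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # step edge
m = monotone_slopes(y)
vals = [hermite_eval(y, m, x) for x in np.linspace(0, 5, 101)]
```

On this step the limited slopes all collapse to zero, so the interpolant is a smooth monotone ramp across the edge: no overshoot and no dipping, at the price of not being the closest L2 fit to the underlying function.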


« Last Edit: February 02, 2010, 09:09:51 AM by joofa »





Jonathan Wienke


« Reply #107 on: February 02, 2010, 12:06:18 AM » 

Quote from: joofa
Interestingly enough, even linear interpolation will cause ringing in this case for the reasons I mentioned in my earlier message.

Exactly how can linear interpolation ever cause ringing? You simply draw a straight line from one sample to the next. Each line between samples is completely independent; you can do whatever you like to a value in the series, and the only lines affected are those between the value you just changed and its immediate neighbors. No other lines are affected in any way, shape, or form. Linear interpolation may not always be very accurate, but it's never going to cause overshoot, ringing, or clipping of interpolated values.
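To make that concrete: a two-point lerp is a convex combination of its endpoints for d in [0, 1], so it can never leave their range:

```python
def lerp(a, b, d):
    """Linear interpolation: a*(1-d) + b*d, with d in [0, 1]."""
    return a * (1.0 - d) + b * d

samples = [0.0, 0.0, 1.0, 0.0, 0.0]  # isolated spike
interp = [lerp(samples[i], samples[i + 1], d)
          for i in range(len(samples) - 1)
          for d in (0.0, 0.25, 0.5, 0.75)]
```

Every interpolated value lies between the minimum and maximum of the samples; changing one sample can only affect the two segments touching it.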


« Last Edit: February 02, 2010, 12:07:09 AM by Jonathan Wienke »





joofa


« Reply #108 on: February 02, 2010, 12:14:52 AM » 

Quote from: Jonathan Wienke
Exactly how can linear interpolation ever cause ringing? You simply draw a straight line from one sample to the next. ...

I mentioned in my previous messages that when you throw in linear interpolation with the L2 norm, the samples to be interpolated are not the actual samples but a certain derived set of samples from the original continuous function. I think it is about time you took Cliff's advice more seriously and visited a good academic library.







Jonathan Wienke


« Reply #109 on: February 02, 2010, 07:53:28 AM » 

Quote from: joofa
I mentioned in my previous messages that when you throw in linear interpolation with the L2 norm, the samples to be interpolated are not the actual samples but a certain derived set of samples from the original continuous function.

No s*&t, Sherlock. That still doesn't explain how you can get ringing with linear interpolation, regardless of whether you're doing it with the original gamma 1.0 samples or after you've converted them to gamma 2.0. With linear interpolation, the interpolation calculation only considers the two nearest sample values; all other sample values are ignored and irrelevant. Ringing, as I understand the term, is when a large change in a sampled value affects interpolated values up to several samples away, as shown in this image:

[attachment=19927:Ringing.gif]

The red line shows linear interpolation, the blue line shows cubic spline interpolation. The cubic spline exhibits ringing: an oscillation above and below the baseline that diminishes with distance from the spike. The linear interpolation does not. The linearly interpolated value will change depending on what gamma you convert the samples to before interpolating, but claiming that that is ringing is comparing apples to aliens.







joofa


« Reply #110 on: February 02, 2010, 09:08:29 AM » 

Quote from: Jonathan Wienke
No s*&t, Sherlock. That still doesn't explain how you can get ringing with linear interpolation ... claiming that is ringing is comparing apples to aliens.

Jonathan, I don't think you are reading my posts carefully. BTW, how did the gamma thingy creep in? When did I "claim" that gamma causes ringing, as you say above? I never alluded to anything regarding gamma. Additionally, in your example, where did you use the L2 norm, which I have repeatedly stressed causes ringing?

Let's take the linear case. I mentioned before that for the best linear least-squares approximation you won't use the samples derived by passing an L2 function (possibly non-bandlimited) through a sinc filter, but rather a filter of the following form: Click here for linear sampling filter. And I convolved this filter with a step edge, and here is what I get: Click here for linear ringing. Do you see the ringing?
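For anyone who wants to reproduce the effect: any sampling kernel with negative lobes will ring on a step edge. The tap values below are an illustrative stand-in, not my exact filter:

```python
import numpy as np

# A short symmetric kernel with negative side lobes (illustrative only).
kernel = np.array([-0.05, 0.1, -0.2, 1.3, -0.2, 0.1, -0.05])
kernel = kernel / kernel.sum()   # normalize to unit DC gain

step = np.array([0.0] * 10 + [1.0] * 10)
out = np.convolve(step, kernel, mode='same')
```

The output dips below 0 just before the edge and overshoots past 1 just after it, even though far from the edge it settles exactly on 0 and 1; these prefiltered values are the "derived samples" that would then be connected by straight lines.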


« Last Edit: February 02, 2010, 09:24:55 AM by joofa »





Jonathan Wienke


« Reply #111 on: February 02, 2010, 09:32:54 AM » 

Quote from: joofa
Jonathan, I don't think you are reading my posts carefully. ... Do you see the ringing?

Yes, but you're NOT using "linear interpolation" in that example; you're convolving with a multi-sample linear approximation of a sinc function to interpolate, which is an apples-to-aliens comparison. As I said before, "linear interpolation" as defined everywhere else in the universe is a simple a(1-d) + bd calculation, where a and b are the two nearest sample values and d is the proportion of the distance between a and b at which you're calculating the interpolated value. Like nearest-neighbor interpolation, it can never overshoot, it can never ring, and it will never cause clipping. The interpolation function you showed can do all of those things, but it is NOT "linear interpolation".


« Last Edit: February 02, 2010, 09:35:46 AM by Jonathan Wienke »





joofa


« Reply #112 on: February 02, 2010, 09:39:35 AM » 

Quote from: Jonathan Wienke
Yes, but you're NOT using "linear interpolation" in that example ... The interpolation function you showed can do all of those things, but it is NOT "linear interpolation".

Once you convolve your analog function with the sampling kernel I provided in my earlier message, and sample, then you WILL reconstruct using, borrowing your words, an "a(1-d) + bd calculation". So the linear approximation is there, but I am repeating for the Nth time that the samples fed into it are derived using a different kernel than sinc, for the best least-squares approximation.


« Last Edit: February 02, 2010, 09:39:45 AM by joofa »





Jonathan Wienke


« Reply #113 on: February 02, 2010, 10:49:59 AM » 

Quote from: joofa
Once you convolve your analog function with the sampling kernel I provided in my earlier message, and sample, then you WILL reconstruct using, borrowing your words, an "a(1-d) + bd calculation".

No you will not, and the diagram you posted proves it. Linear interpolation, "a(1-d) + bd", only considers the two nearest sample values when calculating an interpolated value; all other sampled values are ignored. Your "linear" convolution kernel convolves 7 sample values into the interpolated value, not 2. Here's another example of linear interpolation: note that the interpolated values between samples plot as straight lines. There are no spikes or overshoots or oscillations, just straight lines. Hence the term "linear interpolation".







EsbenHR


« Reply #114 on: February 02, 2010, 11:02:26 AM » 

Hey, stop fighting.

@Jonathan: actually, "everywhere else in the literature" is a pretty large place, and it is in fact true that Joofa's view is common in e.g. machine learning.

@Joofa: actually, Jonathan is right that connecting the samples with straight lines is how the term is used in image processing.

Come on guys, "bilinear interpolation" is only a separable process in image processing! The explanation on Wikipedia is what everyone else uses (e.g. for topological maps).

Like it or not, different fields use the terms differently; and this is an image processing forum.







Jonathan Wienke


« Reply #115 on: February 02, 2010, 12:07:14 PM » 

Quote from: EsbenHR
@Jonathan: actually, "everywhere else in the literature" is a pretty large place, and it is in fact true that Joofa's view is common in e.g. machine learning.

It's not very common when doing a Google search; every link I looked at in the search results for "linear interpolation" used the term the same way I have. And I was clear from the beginning about what I was calling "linear interpolation".







joofa


« Reply #116 on: February 02, 2010, 12:17:56 PM » 

Quote from: EsbenHR
@Joofa: actually, Jonathan is right that connecting the samples with straight lines is how the term is used in image processing.

EsbenHR, both Jonathan and I are in agreement that linear interpolation is connecting the samples with straight lines. What Jonathan is not realizing is that I have stressed what the best way to get those samples is for L2 approximation. It is not the same as in the ideal sinc interpolation case, i.e., taking the analog function, convolving with a sinc, and then sampling; rather, it means convolving with a different kernel (not to be confused with the reconstruction kernel, which, as we agree, is simply the connect-the-dots linear one). You can of course take the samples of the original function (after appropriately bandlimiting it) and just connect the samples to get the lines. However, that is not necessarily the best approximation to the original function in L2. If you want better, then we have to change the way we acquire the samples. I had all of this in my first message on this topic, and I repeat it here with appropriate highlighting:

Quote from: joofa
E.g., let's pick linear interpolation: one finds that even linear interpolation will exhibit ringing. Normally we don't see that in linear interpolation, because we use the actual samples. However, if we use the actual samples, then linear interpolation is not doing the best it can do. We need to derive an alternate set of samples and then interpolate between them for minimum error.

However, there is an important practical issue here also: we are normally just given the samples and have no control over how they were acquired. I covered that in the same message, and I repeat it here again:

Quote from: joofa
Unfortunately, we are typically given samples and can't change the way we acquired them. However, even in such cases optimizations exist that try to mitigate the effect of information loss by considering the way the samples were acquired and the reconstruction kernel jointly.


« Last Edit: February 02, 2010, 12:20:00 PM by joofa »





Jonathan Wienke


« Reply #117 on: February 02, 2010, 01:19:02 PM » 

Quote from: joofa
If you want better, then we have to change the way we acquired the samples. ... We are normally just given the samples and have no control over how they were acquired.

This sounds just like your claims that you can remove aliasing without distorting the signal: possibly workable in theory if you have access to the signal prior to sampling, but of no relevance whatsoever to the real-world scenario of having only the sampled values from the sensor to work with. Any algorithm that requires one to "change the way we acquired the samples" is useless in the context of processing image data that has already been sampled.







joofa


« Reply #118 on: February 02, 2010, 02:05:48 PM » 

Quote from: Jonathan Wienke
... possibly workable in theory if you have access to the signal prior to sampling, but of no relevance whatsoever to the real-world scenario of having only the sampled values from the sensor to work with. Any algorithm that requires one to "change the way we acquired the samples" is useless in the context of processing image data that has already been sampled.

Firstly, there is the correct interpretation of the theory. Secondly, please don't put words in my mouth that I have not said or implied, just as you did above when you said I claimed something about gamma. I don't know why I have to repeat so much stuff here, but I did say the following in my first message on this topic:

Quote from: joofa
Unfortunately, we are typically given samples and can't change the way we acquired them. However, even in such cases optimizations exist that try to mitigate the effect of information loss by considering the way the samples were acquired and the reconstruction kernel jointly.

There are techniques that let you jointly consider the reconstruction and sampling kernels. Please refer to any advanced book on signal processing.


« Last Edit: February 02, 2010, 02:07:27 PM by joofa »





EsbenHR


« Reply #119 on: February 02, 2010, 02:08:00 PM » 

Quote from: joofa
EsbenHR, both Jonathan and I are in agreement that linear interpolation is connecting the samples with straight lines.

Well, you do not appear to agree that "linear interpolation" means, by definition, connecting the original samples with straight lines. I would say that the most common use of the term "interpolating" is that the interpolating function f(x,y) satisfies f(x_i, y_i) = z_i for the sample set (i.e. the original samples). Ignoring the terminology sideshow, what you want to do is quite different: you want to approximate the original function (from which the samples were obtained) by expressing the approximation as a sum of basis functions. When you say "linear interpolation", this means the basis functions are triangular. Could we call this "reconstruction" to avoid the word war? Or is that too overloaded in this context?

Quote from: Jonathan Wienke
This sounds just like your claims that you can remove aliasing without distorting the signal; possibly workable in theory if you have access to the signal prior to sampling, but of no relevance whatsoever to the real-world scenario of having only the sampled values from the sensor to work with.

I think it is useful to know that if we did know the original function, then the best we could do (using an L2 norm, RMS, energy, least squares, or whatever you want to call it) would be ringing, under quite general conditions. If you want to prevent that, then you need some rather specialized assumptions about your image. The trivial example is to assume the original function is a sum of the same basis functions you want to use for the approximation. A better example is to assume the original function is piecewise linear, where the segment lengths (using 1D for simplicity) have a probability distribution with most of its mass above the sampling interval (i.e. 1 px). Or something along those lines. If you do that, then it is possible to avoid ringing even using an L2 norm, and even if you do not know the original function. You do need to introduce some extra knowledge, though. What those assumptions should be is a lot more fun to discuss than what terminology we use ;)
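To make this concrete: the best-L2 fit of a step edge in a triangular (hat) basis already rings, even though the reconstruction between knots is the plain connect-the-dots lerp. A sketch in Python (unit-spaced hats; the boundary rows of the Gram matrix are approximated, which is harmless this far from the edge of the grid):

```python
import numpy as np

n, m = 21, 10       # 21 hat functions on a unit grid, step edge at knot 10
# Gram matrix of interior unit hat functions: <h_i, h_i> = 2/3, <h_i, h_i+1> = 1/6
G = (np.diag(np.full(n, 2.0 / 3.0))
     + np.diag(np.full(n - 1, 1.0 / 6.0), 1)
     + np.diag(np.full(n - 1, 1.0 / 6.0), -1))
# Right-hand side b_i = integral of step(x) * hat_i(x):
# 0 left of the jump, 1/2 for the hat straddling it, 1 to the right
b = np.zeros(n)
b[m] = 0.5
b[m + 1:] = 1.0
c = np.linalg.solve(G, b)   # best-L2 coefficients: the "derived samples"
```

The coefficients c are exactly the derived samples joofa is talking about: c[m+1] ≈ 1.13 and c[m-1] ≈ -0.13, a decaying oscillation around the step, even though the final reconstruction does nothing more than draw straight lines between the c values.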







