
Author Topic: CS6 Bicubic Automatic: Dissatisfied User  (Read 18771 times)

fike

  • Sr. Member
  • ****
  • Offline
  • Posts: 1413
  • Hiker Photographer
    • trailpixie.net
CS6 Bicubic Automatic: Dissatisfied User
« on: March 04, 2013, 03:08:08 pm »

Am I the only one who doesn't like CS6's new resizing algorithm, Bicubic Automatic? I am always downsampling and find that it oversharpens the results. I have noticed the problem is more pronounced with my Olympus OM-D than with my Canon 7D. This could be because the Olympus has a weaker anti-aliasing filter and the downsampling algorithm assumes a less-sharp image to begin with, but I am almost always switching back to plain Bicubic because it is softer.

(Yes I do know I can change the default.)
Logged
Fike, Trailpixie, or Marc Shaffer

Schewe

  • Sr. Member
  • ****
  • Offline
  • Posts: 6229
    • http://www.schewephoto.com
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #1 on: March 04, 2013, 03:19:05 pm »

You mean you don't like Bicubic Sharper, right? The Auto setting does nothing other than auto-select Sharper for downsampling and Smoother for upsampling. And yes, if you downsample from a really large image to a really small image, you can tend to see artifacting with diagonal lines and circles...nature of the beast. You might try a couple of rounds of no more than 50% down using regular Bicubic and finish with a final downsample using Sharper. You can create an action based on percentages.
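
For anyone who would rather script that stepped reduction than record an action, here is a minimal sketch in Python with Pillow. It is a rough translation of the idea, not Photoshop's actual resampling: Pillow's BICUBIC only approximates Photoshop's Bicubic, there is no direct Bicubic Sharper equivalent (a mild unsharp mask stands in for it), and the file names, 50% step size and sharpening settings are illustrative assumptions.

# Stepped downsample: repeated passes of no more than 50%, then a final
# resize to the exact target size and a light sharpening pass.
from PIL import Image, ImageFilter

def stepped_downsample(img, target_w, target_h, step=0.5):
    w, h = img.size
    while w * step > target_w and h * step > target_h:
        w, h = int(w * step), int(h * step)
        img = img.resize((w, h), Image.BICUBIC)
    img = img.resize((target_w, target_h), Image.BICUBIC)
    # Stand-in for the "Sharper" finishing step.
    return img.filter(ImageFilter.UnsharpMask(radius=0.6, percent=80, threshold=0))

small = stepped_downsample(Image.open("big.tif"), 900, 600)
small.save("small.tif")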
Logged

fike

  • Sr. Member
  • ****
  • Offline
  • Posts: 1413
  • Hiker Photographer
    • trailpixie.net
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #2 on: March 04, 2013, 03:30:08 pm »

It's useful to know that Photoshop is making that decision. Yes, before CS6 I rarely used Bicubic Sharper for downsampling. As you describe, sometimes I would reduce partway using plain Bicubic (the smoother option) and then make the last step using Bicubic Sharper instead of final output sharpening.

I did notice that Bicubic Automatic (Sharper) is very good with scenes of artifice like homes, buildings, and so on. It seems to be the fine detail of nature that vexes it, while the straight lines of civilization are more to its liking.
Logged
Fike, Trailpixie, or Marc Shaffer

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #3 on: March 04, 2013, 03:49:37 pm »

Aside from the sharper/smoother question, there's an issue with bicubic (and bilinear) for large amounts of downsizing; they don't consider as many pixels in the original image as they ideally should. You can fix this by applying some Gaussian blur to the image before downsizing. The radius depends on the downsizing ratio -- the greater the ratio, the larger the radius. Now each pixel in the image has contributions from its neighbors. Then downsize. If the ratio is high, and thus the radius, you'll find that bicubic, bilinear, and nearest neighbor all produce similar results. As has been pointed out, repeated lower-ratio downsizings can also let pixels relatively far away in the original image affect contiguous pixels in the downsized result.

Jim

fike

  • Sr. Member
  • ****
  • Offline
  • Posts: 1413
  • Hiker Photographer
    • trailpixie.net
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #4 on: March 04, 2013, 04:00:40 pm »

Aside from the sharper/smoother question, there's an issue with bicubic (and bilinear) for large amounts of downsizing; they don't consider as many pixels in the original image as they ideally should. You can fix this by applying some Gaussian blur to the image before downsizing. The radius depends on the downsizing ratio -- the greater the ratio, the larger the radius. Now each pixel in the image has contributions from its neighbors. Then downsize. If the ratio is high, and thus the radius, you'll find that bicubic, bilinear, and nearest neighbor all produce similar results. As has been pointed out, repeated lower-ratio downsizings can also let pixels relatively far away in the original image affect contiguous pixels in the downsized result.

Jim

I comprehend what you are saying about the blur, but WOW! That is counter intuitive. I'll need to experiment with that. The problem is that with stuff like this, the experimentation can overtake the enjoyment of making a great photo.
Logged
Fike, Trailpixie, or Marc Shaffer

Vladimirovich

  • Sr. Member
  • ****
  • Offline
  • Posts: 1311
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #5 on: March 04, 2013, 04:26:13 pm »

try ImageJ + resize plugin

http://rsbweb.nih.gov/ij/download.html + http://bigwww.epfl.ch/algorithms/ijplugins/resize/

It is painful to use, but the resizing is very good; it leaves Photoshop behind.

Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #6 on: March 04, 2013, 07:01:18 pm »

I comprehend what you are saying about the blur, but WOW! That is counter intuitive. I'll need to experiment with that. The problem is that with stuff like this, the experimentation can overtake the enjoyment of making a great photo.

Hi Marc,

I've been warning about the use of BiCubic sharper for a long time (almost 9 years). Downsampling requires a controlled amount of blur in order to reduce the chance of aliasing. The better algorithms are Lanczos and Mitchell-Netravali types of filters, preferably with an intermediate gamma linearization.

A quick and dirty method for when you want to stay within a Photoshop workflow, is to apply a Gaussian blur in proportion to the amount of downsampling. A starting point could be a blur radius of 0.25 / downsampling factor. If you e.g. downsample to 1/7th of the original size, you first use a 0.25 * 7 = 1.75 radius Gaussian blur. Then you downsample with the regular (or rather Photoshop's version) Bicubic filter, and Smart sharpen with a very small radius afterwards. This is not an optimal recipe, but it's better than straight downsampling.
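
Written out as a script, the recipe looks roughly like this. It is a sketch in Python with Pillow, assuming Pillow's Gaussian blur and bicubic resize are acceptable stand-ins for Photoshop's (their radii are not calibrated identically), with an unsharp mask substituting for Smart Sharpen; the file names and sharpening settings are illustrative.

from PIL import Image, ImageFilter

def preblur_downsample(img, factor):
    # factor is the output/input size ratio, e.g. 1/7 for 1/7th of the original.
    radius = 0.25 / factor          # the suggested starting point: 0.25 / (1/7) = 1.75
    blurred = img.filter(ImageFilter.GaussianBlur(radius))
    new_size = (round(img.width * factor), round(img.height * factor))
    small = blurred.resize(new_size, Image.BICUBIC)
    # Finish with a very small-radius sharpen.
    return small.filter(ImageFilter.UnsharpMask(radius=0.5, percent=100, threshold=0))

preblur_downsample(Image.open("original.tif"), 1/7).save("downsampled.tif")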

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==

fike

  • Sr. Member
  • ****
  • Offline
  • Posts: 1413
  • Hiker Photographer
    • trailpixie.net
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #7 on: March 04, 2013, 07:40:58 pm »

Hi Marc,
...
A quick and dirty method for when you want to stay within a Photoshop workflow, is to apply a Gaussian blur in proportion to the amount of downsampling. A starting point could be a blur radius of 0.25 / downsampling factor. If you e.g. downsample to 1/7th of the original size, you first use a 0.25 * 7 = 1.75 radius Gaussian blur. Then you downsample with the regular (or rather Photoshop's version) Bicubic filter, and Smart sharpen with a very small radius afterwards. This is not an optimal recipe, but it's better than straight downsampling.

I haven't used Bicubic Sharper much for years. I think what tripped me up in this case was that I didn't know what Automatic meant. I need to try your formula; that gives me a good start to get in the ballpark. Thanks.
Logged
Fike, Trailpixie, or Marc Shaffer

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #8 on: March 04, 2013, 07:54:29 pm »

...I've been warning about the use of BiCubic sharper for a long time (almost 9 years).

Bart, thanks for the link. I'd spent half an hour setting up some examples to display the downsizing artifacts in an instructive way when I took another look at the thread and saw your post. You did a much better job than I was going to do.

Jim

digitaldog

  • Sr. Member
  • ****
  • Offline
  • Posts: 20651
  • Andrew Rodney
    • http://www.digitaldog.net/
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #9 on: March 05, 2013, 10:23:45 am »

Automatic is long overdue! Keep in mind that you could have had your old preference set to, say, Bicubic Sharper, which would apply everywhere, even if you used Free Transform and were moving the bounding boxes up or down (sizing up or down). You were stuck with the one resampling algorithm set in the preference, which was silly. Now Photoshop knows whether you are sizing up or down and applies the appropriate algorithm without you having to remember what you set, every place such a sampling occurs.
Logged
http://www.digitaldog.net/
Author "Color Management for Photographers".

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #10 on: March 05, 2013, 08:01:28 pm »

I comprehend what you are saying about the blur, but WOW! That is counter intuitive.

Marc, Bart did a great job with the explanation and the examples, so maybe you don't need any help with the intuitive part any more. If you do, consider this quite (maybe over) simplified line of reasoning. There's usually a filter over your camera sensor that blurs the image a bit so that details that the sensor can't resolve don't get to it. If you had a sensor with great big pixels, it would take more blurring to keep those unresolvable details away from the sensor. (Bigger pixels = less resolution). When you do a high-ratio down-res, you want the equivalent of that filter that you would have had on the camera with the big pixels. Bicubic and bilinear don't do enough filtering at high ratios, and therefore, you may need to do it yourself.

If you don't have a lot of detail in the original image, you may not get the artifacts even without the filter. Some people may prefer to down-res without the filter and deal with the artifacts later. That's why some cameras are made without the blurring filter.

Does that help? Is it too simplified (If that's the case, I can do it again with more rigor, but I'll probably have to mention Dr. Nyquist)?

Jim

fike

  • Sr. Member
  • ****
  • Offline
  • Posts: 1413
  • Hiker Photographer
    • trailpixie.net
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #11 on: March 05, 2013, 08:15:56 pm »

Marc, Bart did a great job with the explanation and the examples, so maybe you don't need any help with the intuitive part any more. If you do, consider this quite (maybe over) simplified line of reasoning. There's usually a filter over your camera sensor that blurs the image a bit so that details that the sensor can't resolve don't get to it. If you had a sensor with great big pixels, it would take more blurring to keep those unresolvable details away from the sensor. (Bigger pixels = less resolution). When you do a high-ratio down-res, you want the equivalent of that filter that you would have had on the camera with the big pixels. Bicubic and bilinear don't do enough filtering at high ratios, and therefore, you may need to do it yourself.

If you don't have a lot of detail in the original image, you may not get the artifacts even without the filter. Some people may prefer to down-res without the filter and deal with the artifacts later. That's why some cameras are made without the blurring filter.

Does that help? Is it too simplified (If that's the case, I can do it again with more rigor, but I'll probably have to mention Dr. Nyquist)?

Jim
That is an interesting comparison, and it does help me understand the situation. With that said, I wouldn't mind hearing the grownup version with Mr. Nyquist included.
Logged
Fike, Trailpixie, or Marc Shaffer

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #12 on: March 05, 2013, 11:23:43 pm »

...I wouldn't mind hearing the grownup version with Mr. Nyquist included.

Marc, I will get to this, but it may take me a few days. I want to do it justice, and I have an exhibition opening on Saturday, and I'm pretty busy right now.

Please be patient.

Thanks,

Jim

ErikKaffehr

  • Sr. Member
  • ****
  • Offline
  • Posts: 11311
    • Echophoto
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #13 on: March 06, 2013, 01:23:24 am »

Hi,

While waiting for Mr. Kasson's write-up, you can read these:

http://www.normankoren.com/Tutorials/MTF2.html#Lovisolo

http://en.wikipedia.org/wiki/Spatial_anti-aliasing

And this one: http://bvdwolf.home.xs4all.nl/main/foto/down_sample/down_sample.htm

Also this one: http://bvdwolf.home.xs4all.nl/main/foto/down_sample/example1.htm

Best regards
Erik


Marc, I will get to this, but it may take me a few days. I want to do it justice, and I have an exhibition opening on Saturday, and I'm pretty busy right now.

Please be patient.

Thanks,

Jim
« Last Edit: March 06, 2013, 01:25:28 am by ErikKaffehr »
Logged
Erik Kaffehr
 

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #14 on: March 06, 2013, 11:56:17 am »

... I wouldn't mind hearing the grownup version with Mr. Nyquist included.

Marc,

I found an old blog post I wrote on a similar subject some time ago, so, with generous self-plagiarism, here's my shot at your answer. Warning: it's long. Mark Twain is supposed to have said, "I didn't have time to write a short letter, so I wrote a long one instead." This post is kinda like that.

Before we talk about antialiasing, we need to understand what aliasing is. For that, we go to AT&T in 1924, where a guy named Harry Nyquist came up with a surprising idea, later generalized by the father of information theory, Claude Shannon.

Even though his ideas had broader applicability, Nyquist worked only with signals that varied with time, and I’m going to consider only those kinds of signals at first. Nyquist was interested in the voice signals that AT&T got paid to transmit from place to place, and surprisingly to me, telegraph signals. These days, almost everything you hear has been sampled: CDs, wireline and cellphone telephone calls, satellite and HD radio, Internet audio, and iPod music.

What came out of Nyquist’s insight (which, in 1924, looked at the problem backwards from the way I'm stating it here) was that, given an idealized sampling system (perfect amplitude precision, no sampling jitter, infinitely small sampling window, etc.), if you took regularly spaced samples of the signal at a rate faster than twice the highest frequency in the signal, you had enough information to perfectly reconstruct the original signal. This came to be known as the Nyquist Criterion, and although I’ve worked with it for 45 years (first in data acquisition and process control, then in telephone switching systems, and finally in image processing), I still find it pretty amazing. It turns out that the Nyquist Criterion’s path from theory to practice has been pretty smooth; real systems, which don’t obey all the idealizing assumptions (some of which are pretty severe: pick any frequency you like, and any signal of finite duration has some frequency content above it) come very close to acting like the ideal case.

What if there is significant signal content at frequencies above half the sampling frequency? Let’s imagine a system that samples the input signal 20,000 times a second. We say the sampling frequency is 20 kilohertz, or 20 kHz. Let’s further say that this system is connected to a system that reconstructs the signal from the samples. Now let’s put a single frequency in the input and see what we get at the output. If we put in 5 kHz, we get the same thing out. We turn the dial up towards 10 kHz, and we still get the same signal out as we put in. As we go above 10 kHz, a strange thing happens: the output frequency begins to drop. When we put 11 kHz in, we get a 9 kHz output. 12 kHz gives us an output of 8 kHz. This continues to happen all the way to 20 kHz, where we get a 0 Hz (DC) signal. 21 kHz in gives us 1 kHz out, 22 kHz in gives us 2 kHz out, etc.
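
The folding rule is easy to state in code. A small illustrative Python sketch that reproduces the 20 kHz example:

def aliased_frequency(f_in, f_sample):
    # Fold the input frequency back into the 0..f_sample/2 band, which is
    # where an ideal reconstructor will report it.
    f = f_in % f_sample
    return f if f <= f_sample / 2 else f_sample - f

for f_in in (5_000, 9_000, 11_000, 12_000, 20_000, 21_000, 22_000):
    print(f_in, "Hz in ->", aliased_frequency(f_in, 20_000), "Hz out")
# 11000 -> 9000, 12000 -> 8000, 20000 -> 0, 21000 -> 1000, 22000 -> 2000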

Engineers, trying to relate this situation to everyday life, said that, at over half the sampling frequency, the input signal appears at the output, but “under an alias”. Thus, since in English there is no noun that cannot be verbed, we get aliased signals and aliasing.

Aliasing is almost always a bad thing. If aliasing is present, it’s impossible to tell whether the signals at the output of the reconstructive device were part of the original signal or were aliased down in frequency from somewhere else in the spectrum. Therefore, systems that sample, transmit or store, and reproduce time-varying signals in the real world, such as CDs, the audio part of DVDs, or telephone systems, place a filter in front of the sampler to diminish (attenuate is the engineering word) signal content at frequencies above half the sampling frequency. This filter is called an antialiasing (or AA, or, if you’re an engineer, “A-squared”) filter.

Now let’s generalize the one-dimensional sampling I talked about above to instantaneous two-dimensional continuous spatial signals, such as images produced by a lens, sampled by idealized and actual image capture chips.

In order to do this, I’m going to have to get into spatial frequency. I’m going to punt on two-dimensional spectra, because I don’t know of any way to deal with them at all rigorously without a lot of math, but fortunately, for the purposes of understanding how to use digital cameras, rather than how to design them, we can think in terms of one-dimensional frequency at various places and at various angles.

If you’re a serious photographer and have a technical bent, you’ve probably been looking at modulation transfer function (MTF) charts as part of your evaluation of lenses. If you haven’t seen one in a while, go to http://us.leica-camera.com/photography/m_system/lenses/6291.html and click on the link on the right hand side labeled “Technical Data”. Open the Acrobat-encoded data sheet for the (spectacular, as far as I’m concerned) 18mm f/3.8 Leica Super Elmar-M ASPH lens. Page down to the resolution charts. You’ll notice that they give contrast data for 5, 10, 20, and 40 line-pair per millimeter test targets at various places across the frame, at two orientations of the target. Those four targets, while nearly square waves rather than sinusoids, amount to inputs of four different sets of spatial frequencies to the imaging system. The units will seem more analogous to the units of temporal frequency, which were cycles per second before Dr. Hertz was honored, if we replace “line pair” (one black and one white line) with “cycle”, and talk about cycles per mm.

Another place you may have encountered spatial frequency is in doing your own lens testing using a test target. These targets usually have groups of lines at various pitches, angles, and positions, and you look at the image of the target and figure out what places you can’t make out the lines any more. Then you divide the spatial frequencies (measured in line pairs per mm) by the reduction ratio from the chart to the film plane, and that’s your ultimate (contrast equals zero) resolution. This kind of testing doesn’t give the richness of information obtainable from MTF curves, but you can do it at home with no special computer program. It is also more relevant to aliasing, since the frequency at which the lens just starts to deliver zero contrast is the frequency above which there will be no aliasing, no matter what the resolution of the image sensor or the presence or absence of an antialiasing filter.

In order to discuss spatial sampling frequency, we have to turn most of what you probably know about digital sensors on its head. Most manufacturers and photographers talk about sensor resolution in terms of pixel pitch: the distance between the centers of adjacent pixels. The Nikon D3x has a pixel pitch of 5.94 micrometers; the Pentax 645D, virtually the same; the Hasselblad H3D, 6.8; and the Nikon D3s, 8.45. We can turn the pixel pitch into the sampling frequency by inverting it, so the D3x has a sampling frequency of 169.2 K samples/meter, or 169.2 samples/mm. A crude analysis says we won’t have any aliasing if the D3x optical system (lens plus antialiasing filter) reaches zero contrast at or below half the sensor sampling frequency, or about 85 cycles/mm.

Alas, things aren’t that simple. I have bad news, good news and very bad news.

The bad news is that the kind of sensor upon which all the above sampling discussion is based is completely impractical. Nyquist’s original time-based work assumed sampling the signal at an instant in time. The extensions discussed in this post have so far assumed sampling the image at infinitesimally small points on the sensor. In the real world, as we make the light-sensitive area in a pixel smaller and smaller, it gets slower. Turning up the amplifier gain to compensate for this reduction in sensitivity increases noise. We’ve all seen the results of tiny sensors in inexpensive high-pixel-count point-and-shoot cameras, and we surely don’t want that in our big cameras, which is what we’d get if we spread tiny little light receptors thinly across a big image sensor. The result of making the photosites larger than optimal is that we don’t get images that are as sharp as they should be; the larger receptor area causes image spatial frequencies near the Nyquist limit of half the sampling frequency to be attenuated.

The good news is that increasing the area of the sensor receptors reduces aliasing, and does it fairly efficiently. William Pratt, in his book Digital Image Processing, 2nd Edition, on pages 110 and 111, compares a square receptor with a diffraction-limited ideal lens and finds that, for the same amount of aliasing error, the lens causes the greater resolution loss. He asserts, but does not provide data, that a defocused ideal lens would perform even more poorly than the diffraction-limited lens. In digital cameras, this kind of antialiasing filtering, which comes for free, is called fill-factor filtering, since it is related to how much of the grid cell allocated to each photosite is sensitive to light.

In the transitional period of my digital photographic career, when I was using film capture and digital output, I used an Optronics Colorgetter drum scanner. The scanner let you control the scanning aperture independent of the pixel resolution. I started making the aperture smaller and smaller, figuring that I’d get better detail. Instead of getting better, things went rapidly downhill. It took me a while to realize that the film grain provided an immense amount of high frequency detail, and that, by making the scanning aperture smaller, I was aliasing more grain noise into the scan.

The really bad news is that, with the exception of those using monochrome sensors (think 3-chip TV cameras) and the handful employing Carver Mead’s Foveon sensors, digital cameras don’t detect color information for each pixel at the same place on the chip.

The most common way of getting color information is to put various color filters over adjacent pixels. The Bayer, or GRGB, pattern, invented by Bryce Bayer at Eastman Kodak, uses twice as many green filters as red or blue ones, on the quite reasonable theory that a broad green filter response is not too far from the way the human eye responds to luminance, and luminance resolution in the eye is greater than chroma resolution. The use of this pattern of filters means that calculations involving neighboring pixels must be performed to convert each monochromatic pixel on the sensor to a color pixel in the resultant file. This mathematical operation, called demosaicing, assumes that there is no image detail finer than the group of cells involved in the calculation. If there is that kind of image detail, any aliasing will cause not only luminance errors but color shifts.

The first 21st-century digital camera without an antialiasing filter that I used was the Kodak DCS-14N. One of the first pictures I made included in the foreground a wet asphalt road. The tiny pebbles in the asphalt combined with the way the sun was reflecting off them created a lot of high-frequency detail. The demosaiced image was a riot of highly saturated noise in all the colors of the rainbow.

It’s not easy to put a number on the amount of filtering necessary to keep aliasing from happening in a Bayer or similar array, but I’m going to give it a try. I’ve noticed that, if you turn the sensor at a forty-five degree angle, the green dots form an array whose centers in turn form a grid of squares. The edge of each of those squares is the square root of two times the pitch of the Bayer array. So, at the very minimum, the sampling frequency for the antialiasing calculation should be based on a pixel pitch of 1.4 times the actual pitch. To get an upper bound, note that the red-filtered photosensors (and likewise the blue-filtered ones) form an array whose centers form squares with edges of twice the pixel pitch.

So let’s now go back to our Nikon D3x, and note that, if it didn’t have an antialiasing filter, we’d see aliasing if there were any image content above some number between 60 and 42 cycles per millimeter. These numbers are within the zero-contrast resolving power of almost any decent lens. Note that the Leica 18mm lens referenced at the beginning of this post has about 80% contrast in the center of the field at 40 lp/mm at f/5.6 and wider.

Things are a little better than this because of the fill-factor filtering mentioned above, but note that using a Bayer array means that the fill factor for green light can never go over 50%, and for red and blue light the maximum is 25%.

A more relevant sensor is the one in the Leica M9, which has a pixel pitch of 6.8 micrometers and no antialiasing filter. We’ll see aliasing if there’s any image content above some number between 52 and 37 cycles per millimeter. If we put the 18mm Leica lens on the M9, set it to f/5.6, hold it steady and aim it at a still subject, the only thing that’s going to keep us from seeing aliasing is lack of sharp focus or lack of detail in the subject.
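
As a quick sanity check, the bounds quoted above for the D3x and the M9 follow directly from the pixel pitch and the square-root-of-two and two-times-pitch arguments; a few lines of illustrative Python:

import math

def bayer_alias_bounds(pitch_um):
    # Frequencies (cycles/mm) above which aliasing can appear, for a plain
    # monochrome grid, the diagonal green grid (1.4x pitch) and the
    # red/blue grid (2x pitch).
    pitch_mm = pitch_um / 1000.0
    return (1 / (2 * pitch_mm),
            1 / (2 * math.sqrt(2) * pitch_mm),
            1 / (2 * 2 * pitch_mm))

for name, pitch in (("Nikon D3x", 5.94), ("Leica M9", 6.8)):
    mono, green, redblue = bayer_alias_bounds(pitch)
    print(f"{name}: {mono:.0f} / {green:.0f} / {redblue:.0f} cycles/mm")
# Nikon D3x: 84 / 60 / 42 cycles/mm
# Leica M9: 74 / 52 / 37 cycles/mm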

Let’s now move to resampling in an image-editing program. We don’t have to worry about the Bayer pattern any more, which is good. If we take our M9 image and resample it to 10% in both the horizontal and vertical directions, in the absence of any smoothing, we’ll see aliasing if there’s any information in the image above around 4 cycles per millimeter, which any lens is capable of delivering at just about any aperture. If we blew up the M9 sensor so that the pixel pitch were 68 micrometers, we’d get some filtering through the fact that each pixel would be taking in light across a 68x68 micrometer area (fill factor assumed 100%; because of the Bayer array it’s at best 50%, but the demosaicing software interpolates across neighboring pixels. Yeah, I know it’s a crude approximation, but it’s better than nothing). To simulate that effect in our downsampled image, we’d have to average a 10x10 pixel area, or 100 pixels total, for each pixel in the output image.

Bilinear interpolation considers at most the four pixels in the input image that are the closest to where the output pixel will be. Bicubic goes farther, incorporating information from up to 16 pixels into the result, a sixth of the number of pixels we’d need to simulate our big-photosite 180 Kpixel (18 Mpixel/100) M9.
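
To make the pixel counting concrete, here is an illustrative NumPy sketch of that big-photosite simulation: every output pixel is the average of a 10x10 block of input pixels, with 100% fill factor assumed; the array size is just a stand-in.

import numpy as np

def block_average_downsample(img, n=10):
    # Each output pixel is the mean of an n x n block of input pixels,
    # i.e. a simulated photosite n times larger than the original.
    h, w = img.shape[:2]
    img = img[:h - h % n, :w - w % n]      # crop so the blocks tile exactly
    return img.reshape(h // n, n, w // n, n, -1).mean(axis=(1, 3))

frame = np.random.rand(600, 900, 3)        # stand-in for a real image
print(block_average_downsample(frame).shape)   # (60, 90, 3)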

Jim
« Last Edit: March 06, 2013, 06:12:17 pm by Jim Kasson »
Logged

fike

  • Sr. Member
  • ****
  • Offline
  • Posts: 1413
  • Hiker Photographer
    • trailpixie.net
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #15 on: March 07, 2013, 02:59:53 pm »

Marc,

I found an old blog post I wrote on a similar subject some time ago, so, with generous self-plagiarism, here's my shot at your answer. Warning: it's long. Mark Twain is supposed to have said, "I didn't have time to write a short letter, so I wrote a long one instead." This post is kinda like that.

...
Bilinear interpolation considers at most the four pixels in the input image that are the closest to where the output pixel will be. Bicubic goes farther, incorporating information from up to 16 pixels into the result, a sixth of the number of pixels we’d need to simulate our big-photosite 180 Kpixel (18 Mpixel/100) M9.

Jim


WOW! Thanks!  Let me digest that for a while.  I have some broad understanding of time-based signal processing, but I have always had trouble understanding how that transferred to the two dimensions of imaging.  Your treatise has gotten me much closer to understanding how the relationships remain the same.

How is an anti-alias filter different than a band-pass filter?
Logged
Fike, Trailpixie, or Marc Shaffer

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #16 on: March 07, 2013, 03:34:17 pm »

How is an anti-alias filter different than a band-pass filter?

An anti-aliasing filter is a low-pass filter. We want to filter out those parts of the image that are so high-frequency that they'd otherwise be aliased, and keep everything else.

Jim

Jim Kasson

  • Sr. Member
  • ****
  • Offline
  • Posts: 2370
    • The Last Word
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #17 on: March 07, 2013, 04:08:03 pm »

I have some broad understanding of time-based signal processing, but I have always had trouble understanding how that transferred to the two dimensions of imaging.

I've taken some liberties with the 2D case in the interest of simplification. If I've gone too far, I'm sure someone will let me know.

Jim

Vladimirovich

  • Sr. Member
  • ****
  • Offline
  • Posts: 1311
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #18 on: March 08, 2013, 08:14:51 pm »

enjoy = http://www.uni-vologda.ac.ru/~c3c/plug-ins/c3cimagesize.htm

Use translate.google.com; it is freeware.
Logged

Bart_van_der_Wolf

  • Sr. Member
  • ****
  • Offline
  • Posts: 8914
Re: CS6 Bicubic Automatic: Dissatisfied User
« Reply #19 on: March 09, 2013, 05:33:30 am »

enjoy = http://www.uni-vologda.ac.ru/~c3c/plug-ins/c3cimagesize.htm

Use translate.google.com; it is freeware.

Hi,

I've tried it, but it still creates more aliasing artifacts than a simple Gaussian pre-blur plus Photoshop Bicubic resize does. It seems the plugin's emphasis is on correct color/brightness/contrast handling (where it does better, because Photoshop doesn't offer a simple way to resample with linear gamma).
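
For what it's worth, the linear-gamma step is straightforward to approximate outside Photoshop. An illustrative Python sketch with Pillow and NumPy, assuming an sRGB-encoded source and the standard sRGB transfer curves (file names and sizes are placeholders):

import numpy as np
from PIL import Image

def srgb_to_linear(c):
    # Standard sRGB decoding.
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

def downsample_linear_light(path, new_size):
    srgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    lin = srgb_to_linear(srgb).astype(np.float32)
    # Resize each channel in linear light; Pillow's "F" mode holds 32-bit floats.
    channels = [np.asarray(Image.fromarray(np.ascontiguousarray(lin[..., i]), mode="F")
                           .resize(new_size, Image.BICUBIC)) for i in range(3)]
    out = linear_to_srgb(np.clip(np.dstack(channels), 0.0, 1.0))
    return Image.fromarray((out * 255 + 0.5).astype(np.uint8))

downsample_linear_light("original.jpg", (900, 600)).save("small.jpg")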

Cheers,
Bart
Logged
== If you do what you did, you'll get what you got. ==