Hi,

Do you think you figured out the relation between the user input (Radius) and the kernel used for Gaussian Blur?

I mean, the Gaussian kernel is defined by two parameters: the radius of the filter and the standard deviation (sigma) of the Gaussian function.

Does anyone know how those are calculated from the user's Radius input?

Hi,

I don't think it is formally published information, so one would have to reverse-engineer the behavior to get an idea. Maybe someone has; I don't know if that is the case, but it doesn't look like a straightforward Gaussian implementation, just as the PS bi-cubic interpolation is not straightforward bi-cubic, and its 16-bit is not straightforward 16-bit.

However, maybe they are using a somewhat similar approach to what ImageMagick does (although Photoshop usually cuts a few corners for speed): specifying only a 'radius' actually sets the Gaussian sigma (standard deviation), and the '0x' kernel's radius is then restricted to 3x the sigma for 16-bit data, less for 8-bit data. Thus, the kernel size is implicitly specified by the range of significant contributions.
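To make that concrete, here is a minimal sketch of the ImageMagick-style convention described above: the user supplies only a sigma, and the kernel radius is derived from it by truncating at 3 sigmas (the truncation factor and function name are my own, for illustration; this is not Photoshop's actual code).

```python
import math

def gaussian_kernel_1d(sigma, truncate=3.0):
    """Build a normalized 1-D Gaussian kernel, truncated at `truncate` sigmas.

    The kernel radius is derived from sigma rather than specified directly,
    mirroring the '0x<sigma>' style of specification discussed above.
    """
    radius = max(1, int(math.ceil(truncate * sigma)))
    # Sample the Gaussian at integer offsets from the center.
    kernel = [math.exp(-(x * x) / (2.0 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    # Normalize so the weights sum to 1 (the blur preserves brightness).
    total = sum(kernel)
    return [k / total for k in kernel]

k = gaussian_kernel_1d(2.0)
print(len(k))  # kernel width = 2 * ceil(3 * sigma) + 1 = 13
```

With a smaller truncation factor (as the text suggests for 8-bit data), the kernel gets shorter and the tails below the quantization threshold are simply dropped.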

Or maybe the Radius is the FWHM (Full Width at Half Maximum) of the Gaussian rather than the sigma itself, who knows ... Adobe moves in strange ways.
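If that guess were right, the conversion would be straightforward, since for a Gaussian the FWHM and sigma are related by a fixed factor, FWHM = 2 * sqrt(2 * ln 2) * sigma (about 2.355 * sigma). A quick sketch of that conversion, purely to illustrate the hypothesis:

```python
import math

# For a Gaussian: FWHM = 2 * sqrt(2 * ln 2) * sigma ~= 2.3548 * sigma
FWHM_FACTOR = 2.0 * math.sqrt(2.0 * math.log(2.0))

def sigma_from_fwhm(fwhm):
    """Sigma a blur would use if the user's Radius were actually the FWHM."""
    return fwhm / FWHM_FACTOR

def fwhm_from_sigma(sigma):
    return sigma * FWHM_FACTOR

# A "Radius" of 5 would then imply a sigma of roughly 2.12.
print(round(sigma_from_fwhm(5.0), 2))
```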

Cheers,

Bart