
Blur filter


blur filter M.C. Joel E. Rodriguez 19 Jun 08:47
  blur filter Sven Neumann 19 Jun 11:23
Blur filter Joel Eduardo Rodriguez Ramirez 20 Jun 01:46
  Blur filter Ernst Lippe 20 Jun 15:47
Blur filter Joel Rodriguez 21 Jun 08:58
  Blur filter Ernst Lippe 21 Jun 14:29
Blur filter Joel Rodriguez 22 Jun 22:17
  Blur filter Ernst Lippe 25 Jun 09:10
M.C. Joel E. Rodriguez
2003-06-19 08:47:25 UTC (almost 21 years ago)

blur filter

Hello Developers,

I have an image with blurred additive noise (actually, I don't know the noise distribution) and would like to "clean" it.

Does such a blur filter exist? Can somebody give me some directions?

Thanks in advance,
Joel :)

Sven Neumann
2003-06-19 11:23:07 UTC (almost 21 years ago)

blur filter

Hi,

"M.C. Joel E. Rodriguez" writes:

I have an image with blurred additive noise (actually, I don't know the noise distribution) and would like to "clean" it.

Does such a blur filter exist? Can somebody give me some directions?

You might have a better chance of getting an answer if you asked on the gimp-user mailing list.

Sven

Joel Eduardo Rodriguez Ramirez
2003-06-20 01:46:13 UTC (almost 21 years ago)

Blur filter

Actually, I would like to contribute a new filter as well (as Bowie did), but my idea is in the direction of "Inverse Image Filtering with Conjugate Gradient":

http://people.cornell.edu/pages/zz25/imgcg/

Is this a new idea for GIMP? I am downloading the latest 1.x version and will look into the source, but any comments are welcome and appreciated.

regards
Joel

Ernst Lippe
2003-06-20 15:47:07 UTC (almost 21 years ago)

Blur filter

On Thu, 19 Jun 2003 16:46:13 -0700 (PDT) Joel Eduardo Rodriguez Ramirez wrote:

Actually, I would like to contribute a new filter as well (as Bowie did), but my idea is in the direction of "Inverse Image Filtering with Conjugate Gradient":

http://people.cornell.edu/pages/zz25/imgcg/

As far as I know, nobody has tried to implement conjugate gradient filtering as a GIMP plug-in.

One of the main reasons, I think, is that it is not easy to give an efficient implementation. The running time is quadratic in the number of pixels in the image, which means that it is too slow to use on normal-sized images. I would expect that any realistic implementation would have to run the algorithm on small parts of the image and then somehow combine the results. So this is not a trivial plug-in to write.
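
To give an idea of what the core of such a plug-in would involve, here is a rough sketch in Python/NumPy (not plug-in C code) of least squares deconvolution by conjugate gradients (the CGLS variant). It assumes a circular blur whose point spread function has already been padded to the image size and centered; it illustrates the general technique, not the specific algorithm from the page above.

import numpy as np

def cgls_deconvolve(blurred, psf, n_iter=25):
    # Solve min ||C(x) - b||^2 with conjugate gradients (CGLS).
    # The circular convolutions are done with FFTs here; 'psf' is
    # assumed to be padded to the image size and centered.
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * otf))           # C
    conv_t = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * otf.conj()))  # C^T

    x = np.zeros_like(blurred)
    r = blurred - conv(x)          # residual b - C(x)
    s = conv_t(r)                  # gradient of the least squares criterion
    p = s.copy()
    gamma = np.sum(s * s)
    for _ in range(n_iter):
        q = conv(p)
        alpha = gamma / np.sum(q * q)
        x += alpha * p
        r -= alpha * q
        s = conv_t(r)
        gamma_new = np.sum(s * s)
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x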

Also, you should not overestimate what techniques like conjugate gradient filtering can do. The examples that most authors give are highly artificial. In the page that you referred to, the convolution is known exactly and there is no noise in the blurred image. In real life this never happens.

First of all, you hardly ever know the exact details of the blurring convolution. Most deconvolution algorithms give very disappointing results unless you have a very good approximation of the blurring convolution.

Second, almost all images contain noise, and this has a disastrous effect on the deconvolution. Most deconvolution algorithms tend to amplify the noise to an extreme degree. For example, when you use the unmodified inverse of the convolution, the end result is normally completely dominated by noise. Although conjugate gradient filtering is not as extremely sensitive to noise, like all deconvolution techniques its results deteriorate rapidly even when there are only very small errors in the input.
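
A small synthetic experiment (Python/NumPy, a periodic Gaussian blur with made-up parameters) shows the effect:

import numpy as np

rng = np.random.default_rng(0)
n = 256
y, x = np.mgrid[:n, :n]
image = ((x // 32 + y // 32) % 2).astype(float)          # checkerboard test image

# Gaussian blur written directly as a transfer function: no exact
# zeros, but many extremely small values at high frequencies.
f = np.fft.fftfreq(n)
otf = np.exp(-2 * np.pi**2 * 4.0**2 * (f[None, :]**2 + f[:, None]**2))

blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * otf))
noisy = blurred + 0.001 * rng.standard_normal((n, n))    # 0.1% noise

naive = np.real(np.fft.ifft2(np.fft.fft2(noisy) / otf))  # unmodified inverse
print(np.abs(naive - image).max())                       # astronomically large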

A few years ago, when I started on my own deconvolution plug-in, I examined several existing algorithms. My own conclusion was that this is a difficult subject. Most of the best algorithms are simply too slow for practical applications. I finally selected FIR Wiener filtering as a practical compromise. The running time is linear in the number of pixels in the image, and in virtually all cases its results are much better than those of similar plug-ins like unsharp mask or sharpen. If you are interested, you can find it at http://refocus.sourceforge.net.
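
refocus implements this as a spatial-domain FIR filter; the idea is easiest to see in the frequency-domain form of the Wiener filter, sketched here in Python/NumPy with an assumed constant noise-to-signal ratio nsr:

import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    # Instead of dividing by the transfer function H (which blows up
    # where H is small), multiply by conj(H) / (|H|^2 + NSR), which
    # stays bounded everywhere. 'psf' is assumed padded and centered.
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    filt = otf.conj() / (np.abs(otf)**2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * filt))

The larger nsr is, the more the filter suppresses the frequencies that the blur has almost destroyed, which is exactly what tames the noise amplification described above.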

greetings,

Ernst Lippe

Joel Rodriguez
2003-06-21 08:58:03 UTC (almost 21 years ago)

Blur filter

Thanks for your attention to the matter, Ernst:

It is enough information to get me going :)

I will take a close look at the link, which by the way looks very impressive. The approach I was thinking of was not conjugate gradient itself, but a variant of constrained optimization:

http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/Categories/constropt.html

particularly the least squares solution, which I am familiar with:

http://www.sbsi-sol-optimize.com/products_lssol.htm

I really appreciate your help, and I hope to stick around.

regards

Joel Rodríguez

P.S. I won't bother you for a while, thanks for your attention. Yesterday's tequila sometimes makes me feel that P=NP... heh... :)


Ernst Lippe
2003-06-21 14:29:55 UTC (almost 21 years ago)

Blur filter

On Fri, 20 Jun 2003 23:58:03 -0700 Joel Rodriguez wrote:

Thanks for your attention to the matter, Ernst:

It is enough information to get me going :)

I will take a close look at the link, which by the way looks very impressive. The approach I was thinking of was not conjugate gradient itself, but a variant of constrained optimization:

http://www-fp.mcs.anl.gov/otc/Guide/SoftwareGuide/Categories/constropt.html

particularly the least squares solution, which I am familiar with:

http://www.sbsi-sol-optimize.com/products_lssol.htm

P.S. I won't bother you for a while, thanks for your attention. Yesterday's tequila sometimes makes me feel that P=NP... heh... :)

Oh, but that is a valid feeling even when you're sober; nobody has proved that they are different.

It might be wise to put off reading the rest of this post until you have fully recovered :)

But when you have recovered, I would advise you to study Fourier analysis. I found it very helpful in explaining why deconvolution is so difficult. One important fact is that a convolution can be described as a multiplication in the Fourier domain, i.e. the Fourier transform of the result is equal to the Fourier transform of the input multiplied by the Fourier transform of the convolution. This implies that the inverse operation (the deconvolution) can be described in the Fourier domain as a division by the Fourier transform of the convolution.

But in virtually all cases the Fourier transform of the convolution contains some values that are very small. In the case that your convolution is circularly symmetric, it is possible to prove that its Fourier transform must contain at least one value that is equal to zero. Obviously, when the Fourier transform of the convolution contains any zeroes, there cannot be an exact inverse, because division by zero is undefined. The small values in the Fourier transform cause problems as well: dividing by a small number is of course equivalent to multiplying by a big number, so the inverse of the convolution will greatly magnify all errors in the image components that correspond with the small-valued coefficients.
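
Both facts are easy to check numerically. A 1-D sketch in Python/NumPy, using a uniform "motion blur" kernel, whose discrete Fourier transform has exact zero crossings:

import numpy as np

rng = np.random.default_rng(1)
n, L = 256, 8
x = rng.standard_normal(n)
h = np.zeros(n)
h[:L] = 1.0 / L                      # uniform "motion blur" kernel

# Convolution theorem: circular convolution is a pointwise product of DFTs.
y_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
y_direct = np.array([np.dot(h, x[(i - np.arange(n)) % n]) for i in range(n)])
print(np.max(np.abs(y_fft - y_direct)))   # ~1e-15: the two agree

# And this kernel's transform really does contain zeros (k = 32, 64, ...),
# so no exact inverse exists.
print(np.min(np.abs(np.fft.fft(h))))      # ~1e-17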

This also helps to explain why least squares minimizations frequently give horrible results. Take an image whose Fourier transform contains values significantly different from zero only for the components where the Fourier transform of the convolution is close to zero. It should be clear that convolving this image gives a result where every pixel is almost equal to zero. Because convolution is a linear operation, when you add this image A to another image B and then apply the convolution, the end result must be equal to the convolution of A plus the convolution of B. But because the convolution of A is almost zero, the convolution of A + B is almost equal to the convolution of B. This means that the least squares criterion is not very well defined: when some multiple of A is added to an image, the least squares distance changes only by a very small amount.
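
Such an image A can be constructed directly from the small-valued part of the kernel's spectrum. A synthetic Python/NumPy sketch, reusing a Gaussian transfer function with made-up parameters:

import numpy as np

rng = np.random.default_rng(2)
n = 256
f = np.fft.fftfreq(n)
otf = np.exp(-2 * np.pi**2 * 4.0**2 * (f[None, :]**2 + f[:, None]**2))
conv = lambda img: np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

B = rng.random((n, n))                       # an arbitrary image
mask = np.abs(otf) < 1e-6                    # components the blur almost kills
spectrum = mask * np.fft.fft2(rng.standard_normal((n, n)))
A = np.real(np.fft.ifft2(spectrum))
A /= np.abs(A).max()                         # scale A up to full contrast

print(np.abs(A).max())                       # 1.0: A is far from zero
print(np.abs(conv(A + B) - conv(B)).max())   # tiny: the blur cannot see A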

In practice this means that I can have two completely different images, one that looks like a "normal" image and another that looks completely like random noise, yet when I apply the convolution to both images the results can be almost identical. The problem with least squares optimization is that this procedure cannot distinguish between these two images. This is the reason that least squares optimization procedures do not perform well: often the optimal solution visually looks just like random noise. Most techniques that are based on least squares minimization are therefore iterative and do not attempt to find the true minimum. Normally they require user intervention to determine the number of iteration steps.
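
One classic scheme of this kind (shown only as an illustration, not what any particular plug-in uses) is Landweber iteration, where the number of steps is effectively the regularization parameter the user has to choose:

import numpy as np

def landweber_deconvolve(blurred, psf, n_iter=50):
    # Iterative least squares: x <- x + tau * C^T(b - C(x)).
    # Stopping early is what keeps the noise from taking over.
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    conv = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * otf))
    conv_t = lambda x: np.real(np.fft.ifft2(np.fft.fft2(x) * otf.conj()))
    tau = 1.0 / np.max(np.abs(otf))**2       # step size small enough to converge
    x = np.zeros_like(blurred)
    for _ in range(n_iter):
        x += tau * conv_t(blurred - conv(x))
    return x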

Perhaps this explanation is a bit too convoluted, but I think that it contains some important points. Feel free to ask if you have any problems.

greetings,

Ernst Lippe

Joel Rodriguez
2003-06-22 22:17:49 UTC (almost 21 years ago)

Blur filter

Hi Ernst

Yes, I did study Fourier analysis, and I agree with almost everything you wrote. There are mainly four concepts involved in this discussion: Fourier analysis, signal, noise, and some sort of inversion technique. From this point of view everything sounds just right. But the question then is: does Fourier analysis account for the "behavior" of the mapping between both domains?

Fourier analysis cannot tell the difference between random noise and certain kinds of deterministic behavior, namely signals that can be characterized by a fractal number. Not being able to tell the difference means that under particular circumstances Fourier analysis overlooks information.

In practice this means that I can have two completely different images, one that looks like a "normal" image and another that looks completely like random noise, yet when I apply the convolution to both images the results can be almost identical. The problem with least squares optimization is that this procedure cannot distinguish between these two images.

I almost agree with you, in the sense that a forward-modeling technique based on Fourier analysis might not be sufficient, although the least squares method (as a whole) might be.

Maybe, if there were a way to characterize the image in some other way, apart from Fourier analysis, such information (invariant measures) could be incorporated into a least squares reconstruction task as a regularization scheme.

Although the above is not a fact, to my knowledge.

Most techniques that are based on least squares minimization are therefore iterative and do not attempt to find the true minimum. Normally they require user intervention to determine the number of iteration steps.

Although the obtained picture might not represent the original (the true minimum), I cannot stop thinking about processing pictures at a 30 FPS rate. I am confident that as technology advances and computational power improves, a real-time GPL "refocus" is on the horizon; before long the technique could be applied to viewing Saturn through a telescope under turbulent seeing.

http://www.djcash.demon.co.uk/astro/webcam/saturn.htm

This might sound like it is getting a little off topic for the list's interests, but I would like to thank you for your insightful comments on the subject, which will surely inspire me to continue with my research work and its implementation.

regards

Joel Rodríguez :)


Ernst Lippe
2003-06-25 09:10:27 UTC (almost 21 years ago)

Blur filter

On Sun, 22 Jun 2003 13:17:49 -0700 Joel Rodriguez wrote:

In practice this means that I can have two completely different images, one that looks like a "normal" image and another that looks completely like random noise, yet when I apply the convolution to both images the results can be almost identical. The problem with least squares optimization is that this procedure cannot distinguish between these two images.

I almost agree with you, in the sense that a forward-modeling technique based on Fourier analysis might not be sufficient, although the least squares method (as a whole) might be.

Maybe, if there were a way to characterize the image in some other way, apart from Fourier analysis, such information (invariant measures) could be incorporated into a least squares reconstruction task as a regularization scheme.

Although the above is not a fact, to my knowledge.

To my knowledge it is a fact. Perhaps the following explanation helps. Given a convolution C and a blurred image B, the least squares technique tries to find an image I such that the least squares distance between C(I) and B is minimal. In my previous post I described how you could construct an inverse of C; let's call it D. Now for all images X it is true that C(D(X)) = X, and therefore I = D(B) is a least squares solution, because C(I) = C(D(B)) = B.
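
In Fourier notation this is just a restatement of the above (hats denote transforms, all products are componentwise, and for the moment the transform of C is assumed to have no zeros):

\[
\widehat{C(I)} = \hat{C}\,\hat{I},
\qquad
\hat{D} = \frac{1}{\hat{C}},
\]
\[
\widehat{C(D(B))} = \hat{C}\cdot\frac{1}{\hat{C}}\cdot\hat{B} = \hat{B}
\quad\Longrightarrow\quad
\|C(D(B)) - B\|^2 = 0 .
\]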

When the Fourier transform of C contains any zero coefficients, the inverse D is not uniquely determined. In fact, I can choose arbitrary values for the coefficients in the Fourier transform of D that correspond with the zero coefficients in C. This is true because convolution can be described in the Fourier domain as a multiplication of the coefficients, and when I multiply a value by zero the end result is always zero.

So when the Fourier transform of C contains zero coefficients, there are an infinite number of least squares solutions, and the least squares criterion alone cannot select one of them.
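
A Python/NumPy illustration: with the uniform 1-D blur kernel from before (whose transform has exact zeros), two visibly different signals can both be exact least squares solutions:

import numpy as np

rng = np.random.default_rng(4)
n, L = 256, 8
h = np.zeros(n)
h[:L] = 1.0 / L
H = np.fft.fft(h)                            # contains exact zeros
conv = lambda x: np.real(np.fft.ifft(np.fft.fft(x) * H))

b = conv(rng.standard_normal(n))             # a blurred signal

# One least squares solution: invert only the nonzero coefficients.
nz = np.abs(H) > 1e-12
H_inv = np.zeros_like(H)
H_inv[nz] = 1.0 / H[nz]
x1 = np.real(np.fft.ifft(np.fft.fft(b) * H_inv))

# A second solution: add anything supported on the zero coefficients.
null_spec = np.zeros_like(H)
null_spec[~nz] = n                           # arbitrary values on the zero set
x2 = x1 + np.real(np.fft.ifft(null_spec))

print(np.abs(x2 - x1).max())                 # clearly nonzero
print(np.abs(conv(x1) - b).max())            # ~0: an exact solution
print(np.abs(conv(x2) - b).max())            # ~0: so is this one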

Even when the Fourier transform does not contain zero coefficients, the least squares solution is generally not a visually good solution, because it is extremely sensitive to errors. The problem is that C frequently contains some small coefficients, and the corresponding coefficients in D are therefore very large. This means that D will greatly magnify any errors in the corresponding Fourier coefficients of the blurred image B. So even very small changes in B will have a great impact on the least squares solution I = D(B). In practice this means that in general you do not want to use the true least squares solution, because it is highly unstable.
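
Numerically (Python/NumPy, a periodic Gaussian blur with small but nonzero coefficients), an invisible perturbation of B changes D(B) enormously:

import numpy as np

rng = np.random.default_rng(3)
n = 256
f = np.fft.fftfreq(n)
otf = np.exp(-2 * np.pi**2 * 2.0**2 * (f[None, :]**2 + f[:, None]**2))

D = lambda img: np.real(np.fft.ifft2(np.fft.fft2(img) / otf))   # exact inverse

B = np.real(np.fft.ifft2(np.fft.fft2(rng.random((n, n))) * otf))
dB = 1e-6 * rng.standard_normal((n, n))      # a change nobody could see

print(np.abs(D(B + dB) - D(B)).max())        # enormous compared to 1e-6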

greetings,

Ernst Lippe