My image processing class was assigned an image restoration project. I am currently working on an inverse filter: image -> degrade -> inverse filter -> restored image. I use a simple 5x5 box filter for the degradation.
If I degrade the image in the spatial domain, move to the frequency domain, and then inverse-filter with the FFT of the kernel, I get a mess. If I degrade the image in the frequency domain and then inverse-filter that image, I get a good image.
Convolution in the spatial domain and multiplication in the frequency domain should give the same result. My only thought is that I am doing something wrong with the kernel. I am using a 5x5 box filter. The spatial convolution divides the final result by np.sum(box). I also tried normalizing the kernel up front with:
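For what it's worth, the convolution theorem does hold numerically once the kernel is zero-padded to the image size and recentred, and the spatial convolution uses circular (wrap) boundaries. A minimal check with a random image (my own variable names, written for a current NumPy/Python 3, not the question's environment):

```python
import numpy as np
from scipy import ndimage

# Random test image and a normalized 5x5 box kernel.
rng = np.random.default_rng(0)
f = rng.random((64, 64))
box = np.ones((5, 5)) / 25.0

# Spatial convolution with circular boundaries, to match what the FFT does.
g_spatial = ndimage.convolve(f, box, mode='wrap')

# Frequency domain: zero-pad the kernel to the image size, then roll it
# so its centre sits at index (0, 0) before taking the FFT.
h = np.zeros_like(f)
h[:5, :5] = box
h = np.roll(h, (-2, -2), axis=(0, 1))
g_freq = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

print(np.allclose(g_spatial, g_freq))  # True
```

The two boundary conditions matter: the FFT always implements *circular* convolution, so a spatial convolution with any other boundary mode will disagree near the edges.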
box = np.ones( 25 ).reshape( 5,5 ) / 25.0
but got the same inverse-filter result.
I also noticed that the frequency-domain filtered image ("g_freq.png" from the code below) is shifted, presumably because the FFT wraps the kernel at the image borders, pushing content toward the top/left. Could this be the problem?
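That shift is exactly the circular-offset effect: if the padded kernel is placed in the top-left corner of the array instead of being recentred at (0, 0), the frequency-domain output comes out circularly shifted by the kernel-centre offset, i.e. (2, 2) for a 5x5 kernel. A sketch with a random image (my own names, current NumPy):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
f = rng.random((64, 64))
box = np.ones((5, 5)) / 25.0

# Kernel zero-padded into the top-left corner, *without* recentring.
h = np.zeros_like(f)
h[:5, :5] = box
g_freq = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

# Centred spatial convolution with circular boundaries.
g_spatial = ndimage.convolve(f, box, mode='wrap')

# The frequency result equals the spatial result rolled by (2, 2) --
# the offset of the kernel centre from the array corner.
print(np.allclose(g_freq, np.roll(g_spatial, (2, 2), axis=(0, 1))))  # True
```

Rolling the padded kernel by (-2, -2) before the FFT (or rolling the output back) removes the shift.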
Spatial convolution result: (image)
Frequency-domain convolution result: note the wrap-around artefact along the top/left edges. (image)
The simplest code that reproduces the problem is below. Pure numpy/scipy/matplotlib.
import sys
import matplotlib
matplotlib.use( 'Agg' )
import matplotlib.pyplot as plt
import numpy as np
import scipy
import scipy.misc
from scipy import ndimage

def save_image( data, filename ) :
    print "saving",filename
    plt.cla()
    fig = plt.figure()
    ax = fig.add_subplot( 111 )
    ax.imshow( data, interpolation="nearest", cmap=matplotlib.cm.gray )
    fig.savefig( filename )

f = scipy.misc.lena()
save_image( f, "scipylena.png" )
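As a reference point, here is a minimal sketch (random image, my own names, current NumPy — not the assignment code) of why the all-frequency-domain path restores cleanly: degrading and inverse-filtering with the same transfer function H cancel almost exactly, provided H has no zeros.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((64, 64))

# Transfer function of the zero-padded 5x5 box kernel.
box = np.ones((5, 5)) / 25.0
h = np.zeros_like(f)
h[:5, :5] = box
H = np.fft.fft2(h)

G = np.fft.fft2(f) * H   # degrade in the frequency domain
F_hat = G / H            # inverse filter with the very same H
f_hat = np.real(np.fft.ifft2(F_hat))

print(np.allclose(f, f_hat))  # True
```

A 5x5 box on a 64x64 grid happens to have no exact zeros in H, so the division is well defined here; in general a box filter's transfer function can have zeros or near-zeros, which is what makes the plain inverse filter fragile.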
My "f_hat_frequency" 
My "f_hat_spatial" :-( 
Thanks so much for any help.
[EDIT] I am working on Mac OS X 10.6.8 with NumPy 1.6.0 via the free 32-bit version of Enthought ( http://www.enthought.com/products/epd_free.php ), Python 2.7.2 | EPD_free 7.1-1 (32-bit).
EDIT 31-Oct-2011. I think what I am trying to do has deeper mathematical roots than I understood. http://www.owlnet.rice.edu/~elec539/Projects99/BACH/proj2/inverse.html helped a bit. Adding the following before the inverse filter:
H_HAT = np.copy( K )
np.putmask( H_HAT, H_HAT > 0.0001, 0.0001 )
gives me an image, but with a lot of ringing (perhaps because of my box filter; I may need to switch to a Gaussian). Also, the shift in the frequency-filtered image is likely to cause problems. My professor has looked at my code and cannot find the problem. Her suggestion is to keep using the frequency-filtered image rather than the spatially filtered one.
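One common stabilisation along the lines of that Rice page is a pseudo-inverse: invert only the frequencies where |H| is above a threshold and zero out the rest. A hedged sketch (random image, my own names, and the 1e-3 threshold is my own choice, not from the question):

```python
import numpy as np

rng = np.random.default_rng(2)
f = rng.random((64, 64))

box = np.ones((5, 5)) / 25.0
h = np.zeros_like(f)
h[:5, :5] = box
H = np.fft.fft2(h)

# Degrade, then add a little noise so the plain inverse would amplify it.
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))
g += 1e-5 * rng.standard_normal(g.shape)

# Pseudo-inverse: 1/H where |H| is large enough, 0 elsewhere.
K = np.zeros_like(H)
mask = np.abs(H) > 1e-3
K[mask] = 1.0 / H[mask]
f_hat = np.real(np.fft.ifft2(np.fft.fft2(g) * K))

print(np.abs(f_hat - f).mean())  # small restoration error
```

The threshold trades noise amplification (too low) against ringing from the zeroed frequencies (too high); the zeroed bins are also where a box filter's ringing comes from, which is one reason a Gaussian behaves better.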
I have asked a similar question on dsp.stackexchange.com: https://dsp.stackexchange.com/questions/538/using-the-inverse-filter-to-correct-a-spatially-convolved-image