1 of 67

Image Restoration

CS 663, Ajit Rajwade


2 of 67

Contents

  • Introduction to image restoration
  • Inverse filter
  • Spread spectrum filters – coded aperture camera and flutter-shutter camera
  • Wiener filter – aim, assumptions, formula and derivation
  • Regularized least squares deblurring
  • PCA for denoising


3 of 67

What is image restoration?

  • Image restoration = task of recovering an image from its degraded version assuming some knowledge of the degradation phenomenon.

  • Models the degradation process and inverts it to obtain the original from the degraded (observed) image.

  • Differs from image enhancement – which does not fully account for the nature of the degradation.


4 of 67

Degradation Model

Forward model of the degradation process (note that H is an operator):

g = H(f) + η

where g is the observed image, f is the underlying image, and η is the noise.

Common Assumptions on H:

(1) Linearity,

(2) Space Invariance

5 of 67

Degradation Model

  • For a linear, space-invariant model, the degradation process can be modeled using a convolution:

  • The problem of estimating f from g and h is called deconvolution.
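Written out in a standard form (with g the observed image, f the underlying image, h the kernel, and η the noise), the convolution model is:

```latex
g(x,y) = (h * f)(x,y) + \eta(x,y)
       = \int\!\!\int h(x-a,\, y-b)\, f(a,b)\, da\, db \;+\; \eta(x,y)
```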


h is the impulse response of the system, i.e. the degraded image that would be observed if f(x,y) were a unit impulse image; h is also called the convolution kernel.

6 of 67

Degradation Model

  • Many real-world phenomena can be approximated as linear and space-invariant.

  • Non-linear and space-variant models are more accurate, more general but more complex.

  • Even with the simplifying assumption of linearity and space-invariance, we will see that inverting the degradation model has many challenges.


7 of 67

Models of Blur

  • In image restoration, the most commonly encountered problem is that of blur removal given a known blur model.

  • An image is said to be blurred when it is convolved with a low-pass filter of a certain kind.


8 of 67

Models of Blur

  • Defocus blur

  • Motion Blur


9 of 67

Defocus Blur

  • Occurs when the scene being observed is not in focus. It is actually spatially variant, since it depends on the depth of each point (i.e. its distance from the camera), but we will model it here as spatially uniform for simplicity.
  • Its frequency response is:
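If, as in the example on the next slide, the defocus blur is modeled as a Gaussian with blur level σ (a modeling assumption; a pillbox/disc kernel is another common choice), then the frequency response is also a Gaussian:

```latex
h(x,y) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}}
\quad\Longrightarrow\quad
H(u,v) = e^{-2\pi^2\sigma^2\,(u^2+v^2)}
```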


10 of 67

Here we are showing plots of log(1 + magnitude of Fourier transform) for easy visualization. We will call them log-Fourier plots.

Shown: the log-Fourier plot of the original image, the log-Fourier plot of the Gaussian-blurred image, and the Gaussian-blurred image itself (kernel size 15 x 15, blur level 5).

11 of 67

Motion blur

  • A commonly occurring form of blur – when there is relative motion between the camera and the object/scene being imaged – during the process of image acquisition.


12 of 67


13 of 67

Motion Blur

  • A camera gathers the image of a scene as follows:
  • Light from the scene enters the camera during the exposure time, i.e. when the shutter is open.

  • The light passes through the lens and hits a sensor array (CCD array)

  • The CCD array performs an integration operation during the entire exposure time.

  • The image is formed on the CCD array after the shutter closes.


14 of 67

Motion Blur

  • Imagine an object undergoing motion parallel to the plane of the camera sensor array.

  • Let the motion be translation (for simplicity) given by x0(t) and y0(t), i.e. the motion is a function of time.

  • Let f(x,y) be the intensity at point (x,y) of the true (underlying) image.


15 of 67

Motion Blur

  • Then the observed image is given by:
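With exposure time T, the standard form of this equation (consistent with the camera-integration description on the previous slides) is:

```latex
g(x,y) = \int_0^T f\big(x - x_0(t),\; y - y_0(t)\big)\, dt
```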


16 of 67

Motion Blur

  • Let us assume that:

  • Then the blur frequency response is:
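Assuming uniform linear motion in the x direction, i.e. x₀(t) = at/T and y₀(t) = 0 (the usual textbook special case), taking the Fourier transform of the integral above gives:

```latex
H(u,v) = \int_0^T e^{-j 2\pi\,\left(u\,x_0(t) + v\,y_0(t)\right)}\, dt
       = \int_0^T e^{-j 2\pi u a t / T}\, dt
       = \frac{T}{\pi u a}\,\sin(\pi u a)\, e^{-j \pi u a}
```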


17 of 67

Shown: the motion-blurred image (motion in the X direction), its log-Fourier plot (notice the strong response in the vertical direction and the sinc-like pattern parallel to the X axis!), and the log-Fourier plot of the original image. As before, these are log-Fourier plots, i.e. log(1 + magnitude of Fourier transform).

18 of 67

Example: Restoration by Inverse Filtering

Assume no noise (for now). By the convolution theorem,

G(u,v) = H(u,v) F(u,v)

where G(u,v) is known (observed), H(u,v) is known, and F(u,v) is unknown (to be estimated). The inverse filter estimates

F̂(u,v) = G(u,v) / H(u,v)

If H(u,v) is zero, there is a problem in estimating F(u,v). Otherwise this task is completely well-defined.

19 of 67

Example: Restoration by Inverse Filtering


  • If H(u,v) has small values (this will happen for higher frequencies, i.e. higher values of u and v, if h(x,y) is a blur kernel, i.e. a low-pass filter), the corresponding estimates of F(u,v) will be hugely erroneous if there is even a tiny amount of noise.

  • This is especially because the Fourier transform of the noise, i.e. N(u,v) (an unknown quantity), may be greater than F(u,v) for high u and v (why?)

In the presence of noise, the inverse filter yields

F̂(u,v) = G(u,v) / H(u,v) = F(u,v) + N(u,v) / H(u,v)

where G(u,v) is known (observed), F(u,v) is unknown (to be estimated), and N(u,v) is the unknown noise.
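A minimal 1-D NumPy sketch (synthetic box signal, Gaussian blur; all parameter values are hypothetical) illustrating how tiny noise blows up under the inverse filter:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.linspace(-1, 1, n)
f = (np.abs(x) < 0.3).astype(float)          # a simple 1-D "image": a box signal

# Gaussian blur kernel (a low-pass filter), applied via the convolution theorem
h = np.exp(-x**2 / (2 * 0.05**2))
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))          # frequency response; tiny at high frequencies
g = np.real(np.fft.ifft(np.fft.fft(f) * H))  # blurred, noise-free observation

# Inverse filter with no noise: near-perfect recovery
f_rec = np.real(np.fft.ifft(np.fft.fft(g) / H))
err_clean = np.max(np.abs(f_rec - f))

# Add 0.1% noise: division by the tiny H(u) amplifies N(u) enormously
g_noisy = g + 0.001 * rng.standard_normal(n)
f_bad = np.real(np.fft.ifft(np.fft.fft(g_noisy) / H))
err_noisy = np.max(np.abs(f_bad - f))
```

The clean reconstruction is exact up to floating point, while the noisy one is dominated by amplified high-frequency noise; this is why a low-pass filter is applied after inverse filtering in the noisy example on the next slide.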

20 of 67


Image restored using Inverse filter with no noise (ideal, non-realistic scenario)

Image restored using Inverse filter with 0.1% noise

Image restored using Inverse filter with 0.1% noise followed by a low-pass filter

21 of 67

Blurring with Holes

  • Let us say we put a cardboard piece with holes inside the camera aperture.
  • The defocus blur can no longer be approximated as a Gaussian function.
  • Rather, the blur kernel is now represented as a Gaussian dot-multiplied with a binary pattern (value 1 wherever there was a hole, 0 wherever there was not).


22 of 67


23 of 67


Using a coded mask, the restored image is of high quality even under noise (same 0.1% as before). Why does this happen?

24 of 67


The coded mask preserves higher frequencies. It is also a spread-spectrum kernel, i.e. it is not a pure low-pass filter; its higher-frequency components have larger amplitudes, so the division in the inverse filter does not blow up the noise.

25 of 67

Flutter Shutter Camera

  • The same principle is used in the flutter shutter camera to deal with motion blur.
  • The shutter of a normal camera is usually open throughout the exposure duration (denoted by T in the derivation for motion blur).
  • This is equivalent to convolution with a temporal box filter (a low-pass filter).
  • In a flutter-shutter camera, the shutter is made to flutter (open and close) during the exposure time - as per a randomly generated binary sequence.


26 of 67


Frequency response of motion blur kernel in a flutter-shutter camera (blue curve) versus traditional camera (red curve). The green curve corresponds to a different camera (we are not studying it here).

27 of 67


28 of 67


29 of 67

A word of caution

  • Spread spectrum filters and cameras using them do not “solve” the problem of the inverse filter.
  • They merely make use of a spread spectrum filter to work around the issues with the inverse filter by stabilizing the deblurring process.
  • These cameras by design allow less light to enter the camera during image acquisition and are not superior to normal cameras for all applications. The images acquired by these cameras may appear grainier due to lower signal to noise ratio.
  • But they address one particular application, i.e. deblurring, in a very principled way.
  • Spread spectrum filters may not be available in many different applications – such as motion blur in an image acquired by a typical camera.


30 of 67

Wiener Filter

  • Spread spectrum filters cannot be used in many applications.
  • The inverse filter approach on previous slides made no explicit use of the knowledge of the noise model.
  • The Wiener filter is one approach which makes use of knowledge of the statistical properties of the noise besides the degradation function.
  • It attempts to remove both noise as well as the blur.


31 of 67

Wiener filter

  • Its aim is to produce an estimate f̂ of the underlying image f such that the expected mean square error between the true and estimated images, E(‖f − f̂‖²), is minimized.


32 of 67

Wiener filter

  • It asks the questions: which linear space-invariant filter shall I apply to the degraded image (i.e. with which filter shall I convolve the degraded image) to produce an estimate that is as close to the original as possible, in a least squares sense, on an average?
  • It minimizes the blow-up of noise during deconvolution, especially at frequencies where the signal to noise ratio is very poor (i.e. low).


33 of 67

Wiener filter

  • Assumptions made by the Wiener filter:
  • Noise is independent of the image,
  • Noise statistics do not change spatially
  • Either the image or the noise (or both) has (have) zero mean
  • Second order statistics (i.e. power spectrum) of the image and the noise are known
  • The Wiener filter is also called a minimum mean square error filter due to the criterion it optimizes.


34 of 67

Wiener filter

F̂(u,v) = [ H*(u,v) Sf(u,v) / ( |H(u,v)|² Sf(u,v) + Sη(u,v) ) ] G(u,v)
        = [ 1/H(u,v) · |H(u,v)|² / ( |H(u,v)|² + Sη(u,v)/Sf(u,v) ) ] G(u,v)

where F̂(u,v) is the estimate of the Fourier transform of f, Sf(u,v) = E(|F(u,v)|²) is the power spectrum of the original signal, and Sη(u,v) = E(|N(u,v)|²) is the power spectrum of the noise.

Power spectrum means magnitude-squared of the Fourier transform. Frequency spectrum means magnitude of the Fourier transform.

35 of 67


Noise to signal ratio (or inverse signal to noise ratio – ISNR)

  • At frequencies (u,v) where the signal is much stronger than the noise, the ISNR is close to 0, and the Wiener filter reduces to the inverse filter.

  • At frequencies (u,v) where the signal is much weaker, the ISNR will be large and the corresponding component G(u,v) will be attenuated (note that the Wiener filter cannot reconstruct such components well!)

  • When there is no blurring (H(u,v) = 1) but only noise, the Wiener filter reduces to a pure denoising filter:

W(u,v) = Sf(u,v) / ( Sf(u,v) + Sη(u,v) ) = 1 / ( 1 + ISNR(u,v) )

In general, the Wiener filter factors into a deblurring term 1/H(u,v) and a denoising term |H(u,v)|² / ( |H(u,v)|² + ISNR(u,v) ).

36 of 67

Wiener filter

  • The Wiener filter requires us to know the ISNR at different frequencies.
  • We usually do not have knowledge of this quantity, but we can provide some estimate (or guesstimate).
  • For example, in most (photographic) images, the ISNR is low at lower frequencies and high at higher frequencies.
  • Why? Images typically have high energy at lower frequencies, and low energy at higher frequencies. But noise is typically spread-spectrum.


37 of 67

Derivation of Wiener filter in deconvolution

We want to find a linear filter, with Fourier transform L(u,v), that minimizes the expected error

E( |F(u,v) − L(u,v) G(u,v)|² ), where G(u,v) = H(u,v) F(u,v) + N(u,v).

On expanding, the cross terms vanish: E(F(u,v)N*(u,v)) = E(N(u,v)F*(u,v)) = 0, because the image and the noise are uncorrelated and one or both have zero mean. Also,

E(|F(u,v)|²) = Sf(u,v), E(|N(u,v)|²) = Sη(u,v), and E(|L(u,v)|²) = |L(u,v)|², as L is not a random variable.

38 of 67


Derivation of Wiener filter in deconvolution

39 of 67

Derivation of Wiener filter in deconvolution

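The algebra on these two slides can be reconstructed as follows: substituting the expectations into the objective and treating L(u,v) as a complex variable,

```latex
E(u,v) = |1 - L(u,v)H(u,v)|^2\, S_f(u,v) + |L(u,v)|^2\, S_\eta(u,v)

\frac{\partial E}{\partial L^*} = -H^*(u,v)\big(1 - L(u,v)H(u,v)\big)\, S_f(u,v) + L(u,v)\, S_\eta(u,v) = 0

\Rightarrow\quad L(u,v) = \frac{H^*(u,v)\, S_f(u,v)}{|H(u,v)|^2\, S_f(u,v) + S_\eta(u,v)}
```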

40 of 67

Wiener filter derivation: a few remarks

  • The filter L(u,v) is not a random variable – it is a filter which will yield the least MSE over all noise instances with the assumed statistics and over all images belonging to the assumed class.
  • For images, a power law (energy decaying polynomially with spatial frequency) is usually assumed as the estimate of E(|F(u,v)|²).


41 of 67

Interactive Wiener filter

  • As the ISNR is unknown, we can substitute it by a constant K, which we can choose interactively based on visual appeal.

  • Look at figures 5.28 and 5.29 of the book (pages 355, 356) for examples of results using this method.
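With the constant K in place of the ISNR, the interactively tuned filter is:

```latex
\hat{F}(u,v) = \left[\frac{1}{H(u,v)} \cdot \frac{|H(u,v)|^2}{|H(u,v)|^2 + K}\right] G(u,v)
```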


42 of 67


Image taken from the Book by Gonzalez and Woods

43 of 67


Image taken from the Book by Gonzalez and Woods

44 of 67


Image taken from the Book by Gonzalez and Woods

45 of 67


46 of 67


47 of 67

Regularized Restoration

  • We know that the inverse filter is unstable if there is noise in the measured image.
  • The inverse filter (given a fixed blur kernel) comes from the least squares criterion.
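Concretely (a standard identity via Parseval's theorem), the least squares criterion and its Fourier-domain minimizer are:

```latex
\min_f \; \|g - h*f\|^2 \;=\; \min_F \sum_{u,v} |G(u,v) - H(u,v)\,F(u,v)|^2
\;\;\Rightarrow\;\;
\hat{F}(u,v) = \frac{G(u,v)}{H(u,v)} \quad (H(u,v) \neq 0)
```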


48 of 67

Regularized Restoration

  • Now we modify our criterion.

  • We say that in addition to a least squares solution, we want a solution (i.e. an image) whose derivatives are not allowed to be arbitrarily large.


49 of 67

Regularized Restoration

  • Let’s pick the Laplacian as our derivative operator:

  • You know that the Laplacian can be represented by a convolution with the mask:
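The Laplacian and one common 3 x 3 mask for it (the 4-neighbor version; an 8-neighbor variant also exists):

```latex
\nabla^2 f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2},
\qquad
\begin{pmatrix} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0 \end{pmatrix}
```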


50 of 67

Regularized Restoration

  • The Fourier transform of the Laplacian of the image is given by P(u,v)F(u,v).

  • Here P(u,v) is the Fourier transform of an appropriately zero-padded version of the convolution kernel.


51 of 67

Regularized Restoration

  • Hence the frequency response (i.e. Fourier transform) of the desired filter is obtained as follows:


The second step follows from Parseval’s theorem
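Reconstructed in full, the regularized criterion (with weight γ), its Fourier-domain form via Parseval's theorem, and the resulting filter are:

```latex
\min_f \; \|g - h*f\|^2 + \gamma\, \|\nabla^2 f\|^2
\;=\; \min_F \sum_{u,v} |G(u,v) - H(u,v)F(u,v)|^2 + \gamma\, |P(u,v)F(u,v)|^2

\Rightarrow\quad
\hat{F}(u,v) = \frac{H^*(u,v)}{|H(u,v)|^2 + \gamma\, |P(u,v)|^2}\; G(u,v)
```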

52 of 67

Regularized Restoration

  • How do we pick the parameter γ?
  • Method 1: Pick it interactively till you get a visually pleasing result.
  • Method 2: Start with some initial value of γ. Compute the residual r = g − h * f̂ (where f̂ is the restored image for the current γ). Compute its squared norm ‖r‖². If it is too far from the expected noise energy, readjust γ; otherwise stop. Note: in many applications, the noise variance can be estimated.


53 of 67

Blur is not always bad


Motion blur conveys a sense of speed:

can be used to estimate direction of motion of a moving object (or direction of camera motion) from a still image

Aesthetic effects!

54 of 67

Blur is not always bad


Variation in blur conveys depth information

55 of 67

PCA-based denoising


56 of 67

Using PCA for image denoising

  • You have seen the non-local means (NLMeans) algorithm (patch-based filtering).
  • It uses the fact that natural images have a great deal of redundancy – i.e. several patches in distant regions of the image can be very similar.


Difference between patches

57 of 67


58 of 67

Use of PCA for denoising

  • This “non-local” principle can be combined with PCA for denoising.
  • Consider a clean image I corrupted by additive Gaussian noise of mean zero and standard deviation σ, to give noisy image J as follows:

J = I + N, N ~ Gaussian distribution of mean 0 and standard deviation σ.

  • Given J, we want to estimate I, i.e. we want to denoise J.


59 of 67

Use of PCA for denoising

  • Consider a small p x p patch – denoted qref – in J.
  • Step 1: We will collect together some L patches {q1, q2, …, qL} from J that are structurally similar to qref – the L nearest neighbors of qref.
  • Note: even if J is noisy, there is enough information in it to judge similarity if we assume σ << average intensity of the true image I.
  • Step 2: Assemble these L patches into a matrix of size p² x L. Let us denote this matrix as Xref.


60 of 67

Use of PCA for denoising

  • Step 3: Find the eigenvectors of Xref Xrefᵀ to produce an eigenvector matrix V.
  • Step 4: Project each of the (noisy) patches {q1, q2, …, qL} onto V and compute their eigen-coefficient vectors, denoted {α1, α2, …, αL}, where αi = Vᵀqi.
  • Step 5: Now, we need to manipulate the eigen-coefficients of qref in order to denoise it.


61 of 67

Use of PCA for denoising

  • Step 5 (continued): We will follow a Wiener filter type of update:

  • Step 6: Reconstruct the reference patch as follows:


Noise variance (assumed known)

Estimate of coefficient squared of true signal
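Consistent with the annotations above (σ² is the known noise variance, and averaging over the L similar patches estimates the squared coefficient of the true signal), one standard form of the Step 5-6 update is:

```latex
\bar{\alpha}^2(j) = \max\!\left( 0,\; \frac{1}{L}\sum_{i=1}^{L} \alpha_i^2(j) - \sigma^2 \right)
\qquad
\hat{\alpha}_{\mathrm{ref}}(j) = \alpha_{\mathrm{ref}}(j)\, \frac{\bar{\alpha}^2(j)}{\bar{\alpha}^2(j) + \sigma^2}
\qquad
\hat{q}_{\mathrm{ref}} = V\, \hat{\alpha}_{\mathrm{ref}}
```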

62 of 67

Use of PCA for denoising

  • Repeat steps 1 to 6 for all p x p patches from image J (in a sliding window fashion).
  • Since we take overlapping patches, any given pixel will be covered by multiple patches (as many as p² different patches).
  • Reconstruct the final image by averaging the output values that appear at any pixel.
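A compact NumPy sketch of Steps 1-6 plus the sliding-window averaging described above; the patch size p, neighbor count L, and search window size are hypothetical parameter choices, not values from the slides:

```python
import numpy as np

def pca_denoise(J, sigma, p=4, L=30, window=8):
    """Non-local PCA denoising (Steps 1-6), applied in a sliding-window fashion."""
    H, W = J.shape
    out = np.zeros_like(J, dtype=float)
    weight = np.zeros_like(J, dtype=float)
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            qref = J[i:i+p, j:j+p].ravel()
            # Step 1: gather the L most similar patches from a search window
            cands = []
            for a in range(max(0, i - window), min(H - p, i + window) + 1):
                for b in range(max(0, j - window), min(W - p, j + window) + 1):
                    q = J[a:a+p, b:b+p].ravel()
                    cands.append((np.sum((q - qref) ** 2), q))
            cands.sort(key=lambda t: t[0])
            # Step 2: assemble a p^2 x L matrix of similar patches
            X = np.stack([q for _, q in cands[:L]], axis=1)
            # Step 3: eigenvectors of X X^T
            _, V = np.linalg.eigh(X @ X.T)
            # Step 4: eigen-coefficients of the similar patches
            alpha = V.T @ X
            # Step 5: Wiener-like shrinkage of qref's coefficients
            aref = V.T @ qref
            s2 = np.maximum(np.mean(alpha ** 2, axis=1) - sigma ** 2, 0.0)
            aref_hat = aref * s2 / (s2 + sigma ** 2)
            # Step 6: reconstruct the patch and accumulate for averaging
            out[i:i+p, j:j+p] += (V @ aref_hat).reshape(p, p)
            weight[i:i+p, j:j+p] += 1.0
    return out / weight

# Tiny demo on a synthetic step-edge image
rng = np.random.default_rng(1)
I = np.zeros((24, 24)); I[:, 12:] = 1.0
J = I + 0.1 * rng.standard_normal(I.shape)
D = pca_denoise(J, sigma=0.1)
```

Because the search window around a flat region contains many near-identical patches, the shrinkage suppresses the noise-only eigen-directions while keeping the common structure.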


63 of 67

Comments: Use of PCA for denoising

  • Note that a separate eigenspace is created for each reference patch. The eigenspace is always created from patches that are similar to the reference patch.
  • Such a technique is often called spatially varying PCA or non-local PCA.


64 of 67

Patch similarity: Use of PCA for denoising

  • To compute the L nearest neighbors of qref, restrict your search to a window around qref.
  • For every patch q within the window, compute the sum of squared differences with qref, i.e. compute ‖q − qref‖² = Σi (q(i) − qref(i))².
  • Pick the L patches with the least distance.


qref

Search window for similar patches

65 of 67


Sample result

A global eigenspace (consisting of all patches from the image) yields poorer results – see top row, rightmost image.

The results improve with spatially varying PCA, provided the number of similar patches is large enough. The best results with this method generally outperform those of the bilateral filter, which is a purely local technique.


66 of 67

Use of PCA in denoising: why Wiener-like update?

Write each noisy patch as qi = q̃i + ni, where q̃i is the true patch and ni represents a vector of pure noise values which degrades the true patch to give the noisy patch. Projecting onto the eigenspace gives αi = α̃i + γi, where α̃i = Vᵀq̃i contains the eigen-coefficients of the "true patch" (we are looking for a linear update, which motivates the Step 5 equation) and γi = Vᵀni. Then

E(αi(j)²) = E(α̃i(j)²) + E(γi(j)²) = E(α̃i(j)²) + σ²

because the image and the noise are independent and the noise is zero mean, so the cross term vanishes.

67 of 67

Use of PCA in denoising: why Wiener-like update?

Since we are dealing with L similar patches, we can assume (approximately) that the l-th eigen-coefficients of those L patches are very similar. Averaging αi(l)² over the L patches and subtracting σ² therefore yields the estimate of the squared eigen-coefficient of the true signal used in the Step 5 update.