1 of 67

Nima Kalantari

CSCE 448/748 - Computational Photography

Automatic Image Alignment and RANSAC

Slides from Alexei A. Efros, James Hays, Richard Szeliski, and Steve Seitz

2 of 67

Image Alignment

How do we align two images automatically?

Two broad approaches:

    • Feature-based alignment
      • Find a few matching features in both images
      • compute alignment
    • Direct (pixel-based) alignment
      • Search for alignment where most pixels agree

3 of 67

Feature-based alignment

  1. Feature Detection: find a few important features (aka Interest Points) in each image separately
  2. Feature Matching: match them across two images
  3. Compute image transformation: RANSAC

4 of 67

Feature-based alignment

  1. Feature Detection: find a few important features (aka Interest Points) in each image separately
  2. Feature Matching: match them across two images
  3. Compute image transformation: RANSAC

5 of 67

Feature detection

How do we choose good features automatically?

    • They must be prominent in both images
    • Easy to localize
    • Think about how you would do this by hand

6 of 67

Example

7 of 67

Feature detection

How do we choose good features automatically?

    • They must be prominent in both images
    • Easy to localize
    • Think about how you would do this by hand
    • Corners!

8 of 67

Feature-based alignment

  1. Feature Detection: find a few important features (aka Interest Points) in each image separately
  2. Feature Matching: match them across two images
  3. Compute image transformation: RANSAC

9 of 67

Feature-based alignment

  1. Feature Detection: find a few important features (aka Interest Points) in each image separately
  2. Feature Matching: match them across two images
  3. Compute image transformation: RANSAC

10 of 67

Feature Matching

How do we match the features between the images?

    • Need a way to describe a region around each feature
      • e.g. image patch around each feature

Issues:

    • What if the image patches for several interest points look similar?
      • Make patch size bigger
    • What if the image patches for the same feature look different due to scale, rotation, exposure, etc.?
      • Need an invariant descriptor

11 of 67

Invariant Feature Descriptors

Schmid & Mohr 1997, Lowe 1999, Baumberg 2000, Tuytelaars & Van Gool 2000, Mikolajczyk & Schmid 2001, Brown & Lowe 2002, Matas et al. 2002, Schaffalitzky & Zisserman 2002

12 of 67

Invariant Local Features

Image content is transformed into local feature coordinates that are invariant to translation, rotation, scale, and other imaging parameters


13 of 67

Applications

Feature points are used for:

    • Image alignment (homography, fundamental matrix)
    • 3D reconstruction
    • Motion tracking
    • Object recognition
    • Scene categorization
    • Indexing and database retrieval
    • Robot navigation
    • … other

14 of 67

Feature-based alignment

  1. Feature Detection: find a few important features (aka Interest Points) in each image separately
  2. Feature Matching: match them across two images
  3. Compute image transformation: RANSAC

15 of 67

Feature-based alignment

  1. Feature Detection: find a few important features (aka Interest Points) in each image separately
  2. Feature Matching: match them across two images
  3. Compute image transformation: RANSAC

16 of 67

Computing transformation

    • Use successful matches to estimate homography
      • Need to do something to get rid of outliers

17 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

Reading:

Multi-Image Matching using Multi-Scale Oriented Patches, Brown, Szeliski & Winder, CVPR 2005

18 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

19 of 67

Harris corner detector

C. Harris and M. Stephens, “A Combined Corner and Edge Detector”, 1988

20 of 67

The Basic Idea

We should easily recognize the point by looking through a small window

Shifting a window in any direction should give a large change in intensity

21 of 67

Harris Detector: Basic Idea

“flat” region: no change in all directions

“edge”: no change along the edge direction

“corner”: significant change in all directions

22 of 67

Harris Detector: Mathematics

Change of intensity for the shift [u,v]:

E(u,v) = Σ_{x,y} w(x,y) [ I(x+u, y+v) − I(x,y) ]²

where I(x,y) is the intensity, I(x+u, y+v) is the shifted intensity, and w(x,y) is the window function: either a Gaussian, or 1 inside the window and 0 outside.

23 of 67

Error surface

24 of 67

Harris Detector: Mathematics

For small shifts [u,v] we can use a Taylor series expansion to get:

E(u,v) ≈ [u v] M [u v]ᵀ

where M is a 2×2 matrix computed from image derivatives:

M = Σ_{x,y} w(x,y) [ Ix²   IxIy ]
                   [ IxIy  Iy²  ]

25 of 67

Slide from Kavita Bala

26 of 67

Harris Detector

The Algorithm:

    • Construct matrix M
    • Compute the response function R
    • Find points with large corner response function R (R > threshold)
    • Take the points of local maxima of R
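The algorithm above can be sketched in NumPy/SciPy. This is a minimal illustration for a grayscale float image; the function name and parameter defaults are our own, and the score is the standard Harris response R = det M − k (trace M)²:

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.05):
    """Harris corner response R for a grayscale float image (illustrative sketch)."""
    img = img.astype(float)
    # Image derivatives Ix, Iy
    Ix = ndimage.sobel(img, axis=1)
    Iy = ndimage.sobel(img, axis=0)
    # Entries of M, accumulated under a Gaussian window w(x, y)
    Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    # R = det M - k (trace M)^2
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2
```

Thresholding R and keeping only local maxima (e.g. with `ndimage.maximum_filter`) then yields the interest points.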

27 of 67

Harris Detector: Mathematics

Measure of corner response:

R = det M − k (trace M)²

where det M = λ₁ λ₂ and trace M = λ₁ + λ₂ for the eigenvalues λ₁, λ₂ of M

(k is an empirical constant, k = 0.04–0.06)

28 of 67

Harris Detector: Workflow

29 of 67

Harris Detector: Workflow

Compute corner response R

30 of 67

Harris Detector

The Algorithm:

    • Construct matrix M
    • Compute the response function R
    • Find points with large corner response function R (R > threshold)
    • Take the points of local maxima of R

31 of 67

Harris Detector: Workflow

Find points with large corner response: R>threshold

32 of 67

Harris Detector

The Algorithm:

    • Construct matrix M
    • Compute the response function R
    • Find points with large corner response function R (R > threshold)
    • Take the points of local maxima of R

33 of 67

Harris Detector: Workflow

Take only the points of local maxima of R

34 of 67

Harris Detector: Workflow

35 of 67

Harris Detector: Some Properties

Rotation invariance

Corner response R is invariant to image rotation

36 of 67

Harris Detector: Some Properties

  • Only derivatives are used => invariance to intensity shift I → I + b

Intensity scale: I → a I

(plots of R vs. x with a fixed threshold, before and after scaling)

Corner response R is partially invariant to intensity scale

37 of 67

Harris Detector: Some Properties

But: not invariant to image scale!

At a fine scale, every point along the curve is classified as an edge; only at a coarser scale is the corner detected.

38 of 67

Scale Invariant Detection

Consider regions (e.g. circles) of different sizes around a point

Regions of corresponding sizes will look the same in both images

39 of 67

Scale Invariant Detection

The problem: how do we choose corresponding circles independently in each image?

Choose the scale of the “best” corner

40 of 67

Feature selection

41 of 67

Feature selection

42 of 67

Adaptive Non-maximal Suppression

Desired: Fixed # of features per image

    • Want evenly distributed spatially…
    • Sort points by non-maximal suppression radius [Brown, Szeliski, Winder, CVPR’05]
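A minimal sketch of this idea, assuming corner locations and response scores have already been computed. The function name is illustrative, and the original paper additionally robustifies the "stronger" comparison with a factor c ≈ 0.9, omitted here:

```python
import numpy as np

def anms(points, scores, n_keep):
    """Adaptive non-maximal suppression: keep the n_keep points with the
    largest suppression radius (distance to the nearest stronger point)."""
    pts = np.asarray(points, dtype=float)
    scores = np.asarray(scores, dtype=float)
    radii = np.full(len(pts), np.inf)      # the global maximum gets radius = inf
    for i in range(len(pts)):
        stronger = scores > scores[i]      # strictly stronger responses
        if stronger.any():
            d = np.linalg.norm(pts[stronger] - pts[i], axis=1)
            radii[i] = d.min()
    return np.argsort(-radii)[:n_keep]     # largest radii first => spatially spread out
```

Sorting by radius rather than raw score is what spreads the kept features evenly over the image.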

43 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

44 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

45 of 67

Feature descriptors

We know how to detect points

Next question: How to match them?

?

Point descriptor should be:

    1. Invariant
    2. Distinctive

46 of 67

Multi-Scale Oriented Patches (MOPS)

Find the local orientation: the dominant direction of the gradient

  • Extract image patches relative to this orientation

47 of 67

Detect Features, setup Frame

Orientation = blurred gradient

Rotation Invariant Frame

    • Scale-space position (x, y, s) + orientation (θ)

48 of 67

MOPS descriptor vector

8×8 oriented patch

    • Sampled with a spacing of 5 × scale, so the 8 samples span a 40-pixel footprint

Bias/gain normalisation: I’ = (I – μ)/σ
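A simplified sketch of such a descriptor, ignoring the orientation and scale-space machinery (axis-aligned patch at base scale, nearest-neighbour sampling; the function name and `spacing` parameter are illustrative):

```python
import numpy as np

def mops_descriptor(img, x, y, spacing=5):
    """8x8 patch sampled every `spacing` pixels around (x, y), then
    bias/gain normalised: I' = (I - mu) / sigma."""
    offs = (np.arange(8) - 3.5) * spacing                       # 40-pixel footprint
    ys = np.clip(np.round(y + offs).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(x + offs).astype(int), 0, img.shape[1] - 1)
    patch = img[np.ix_(ys, xs)].astype(float)
    return (patch - patch.mean()) / (patch.std() + 1e-8)        # zero mean, unit variance
```

The bias/gain normalisation is what buys invariance to affine intensity changes a·I + b between the two images.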

49 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

50 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

51 of 67

Feature matching

?

52 of 67

Feature matching

Given a feature in I1, how to find the best match in I2?

    • Define distance function that compares two descriptors
    • Test all the features in I2, find the one with min distance

53 of 67

Feature distance

How to define the difference between two features f1, f2?

    • Simple approach is SSD(f1, f2)
      • sum of square differences between entries of the two descriptors
      • can give good scores to very ambiguous (bad) matches


54 of 67

Feature distance

How to define the difference between two features f1, f2?

    • Better approach: ratio distance = SSD(f1, f2) / SSD(f1, f2’)
      • f2 is best SSD match to f1 in I2
      • f2’ is 2nd best SSD match to f1 in I2
      • gives large values for ambiguous matches
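Both distances can be sketched together: for each feature in I1, compute the SSD to every descriptor in I2 and accept the best match only when it clearly beats the second best (names and the 0.8 ratio threshold are illustrative):

```python
import numpy as np

def match_ratio(desc1, desc2, ratio=0.8):
    """Match descriptors from image 1 to image 2 using Lowe-style ratio test."""
    matches = []
    for i, f1 in enumerate(desc1):
        ssd = np.sum((desc2 - f1) ** 2, axis=1)   # SSD(f1, f2) for every f2
        order = np.argsort(ssd)
        best, second = order[0], order[1]
        if ssd[best] < ratio * ssd[second]:       # reject ambiguous matches
            matches.append((i, int(best)))
    return matches
```

An ambiguous feature has a second-best SSD nearly as small as the best, so its ratio is close to 1 and it is discarded.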


55 of 67

Feature-space outlier rejection

Can we now compute H from the blue points?

    • No! Still too many outliers…
    • What can we do?

56 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

57 of 67

Outline

  • How to automatically align two images
    • Feature detection
    • Feature description
    • Feature matching
    • RANSAC

58 of 67

Matching features

What do we do about the “bad” matches?

59 of 67

RANSAC

Algorithm:

  1. Sample (randomly) the number of points required to fit the model
  2. Solve for model parameters using samples
  3. Score by the fraction of inliers within a preset threshold of the model

Repeat 1-3 until the best model is found with high confidence

RANSAC (RANdom SAmple Consensus): Fischler & Bolles, 1981

60 of 67

RANSAC

Algorithm:

  1. Sample (randomly) the number of points required to fit the model (#=2)
  2. Solve for model parameters using samples
  3. Score by the fraction of inliers within a preset threshold of the model

Repeat 1-3 until the best model is found with high confidence

Illustration by Savarese

Line fitting example
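The loop above, for the line-fitting example (2 samples per model), might look like this minimal sketch (function name and defaults are our own):

```python
import numpy as np

def ransac_line(pts, n_iters=200, thresh=0.5, rng=None):
    """Fit y = a*x + b to an (N, 2) point array with RANSAC."""
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        # 1. Sample the 2 points needed to define a line
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                       # vertical: skip degenerate sample
        # 2. Solve for the model parameters
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 3. Score by the number of inliers within the threshold
        residuals = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

Because each model is fit from only 2 points, a single gross outlier in the sample ruins that hypothesis but not the overall estimate: some iteration will draw an all-inlier pair.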

61 of 67

RANSAC

Algorithm:

  1. Sample (randomly) the number of points required to fit the model (#=2)
  2. Solve for model parameters using samples
  3. Score by the fraction of inliers within a preset threshold of the model

Repeat 1-3 until the best model is found with high confidence

Line fitting example

62 of 67

RANSAC

Algorithm:

  1. Sample (randomly) the number of points required to fit the model (#=2)
  2. Solve for model parameters using samples
  3. Score by the fraction of inliers within a preset threshold of the model

Repeat 1-3 until the best model is found with high confidence

Line fitting example

63 of 67

RANSAC

Algorithm:

  1. Sample (randomly) the number of points required to fit the model (#=2)
  2. Solve for model parameters using samples
  3. Score by the fraction of inliers within a preset threshold of the model

Repeat 1-3 until the best model is found with high confidence

64 of 67

RANSAC for estimating homography

RANSAC loop:

  1. Select four feature pairs (at random)
  2. Compute homography H (exact)
  3. Compute inliers where SSD(pi’, H pi) < ε
  4. Keep largest set of inliers
  5. Re-compute least-squares H estimate on all of the inliers
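A sketch of this loop, using a direct linear transform (DLT) for step 2 and squared reprojection error SSD(pᵢ′, H pᵢ) for step 3 (function names are our own; no degeneracy checks on the sampled quadruple):

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst via the DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)           # null vector of A = least-squares solution
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply H to (N, 2) points in homogeneous coordinates."""
    p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=3.0, rng=None):
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=4, replace=False)     # 1. four pairs
        H = fit_homography(src[idx], dst[idx])                # 2. exact fit
        err = np.sum((apply_h(H, src) - dst) ** 2, axis=1)    # 3. SSD(p_i', H p_i)
        inliers = err < thresh ** 2
        if inliers.sum() > best_inliers.sum():                # 4. keep largest set
            best_inliers = inliers
    # 5. least-squares re-estimate on all of the inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Four pairs are the minimum because a homography has 8 degrees of freedom and each point correspondence contributes two equations.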

65 of 67

RANSAC

66 of 67

Alignment

67 of 67

Blending