1 of 89

Color Image Processing

CS 663, Ajit Rajwade

2 of 89

Pouring in color

  • Grayscale image: 2D array of size M x N containing scalar intensity values (graylevels).

  • Color image: typically represented as a 3D array of size M x N x 3, again containing scalar values. Each pixel location now has three values – called the R (red), G (green) and B (blue) intensity values.

  • Most file formats store color images based on this representation. It is also the default representation for display of color images.
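In NumPy terms, a minimal illustrative sketch (array sizes are arbitrary):

```python
import numpy as np

gray = np.zeros((480, 640), dtype=np.uint8)      # grayscale: M x N graylevels
rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # color: M x N x 3
R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]  # the three channel images
```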


3 of 89

Questions

  • What are RGB? How are red, green, blue determined?
  • Are there other ways of representing color?
  • How do you distinguish between different intensities of the same color (shades)? Between varying levels of whiteness in a color (tints)?


4 of 89


http://en.wikipedia.org/wiki/Tints_and_shades

5 of 89

More questions

  • How would you define an edge in a color image?
  • How do you smooth color images?
  • What are the advantages of color images over grayscale images?
  • What do we know about human color perception?
  • What do color-based optical illusions teach us?
  • Why do we consider only 3 channels (i.e. RGB)? Are there images with more channels? Where are they used?


6 of 89

Color perception and physics

  • Human perception of color is not fully understood, but there are some well understood physical principles.
  • The color of light is determined by its constituent wavelengths (wavelength is inversely proportional to frequency).
  • The visible part of the electromagnetic spectrum lies roughly between 380 nm (violet) and 750 nm (red).


7 of 89

Visible spectrum by wavelength band: Violet 380–450 nm, Blue 450–495 nm, Green 495–570 nm, Yellow 570–590 nm, Orange 590–620 nm, Red 620–750 nm. Ultraviolet lies below 380 nm, infrared above 750 nm.

8 of 89

Color physics

  • White light is a blend of several wavelengths of light, which get separated by “dispersive elements” such as prisms.
  • Objects which reflect light that is relatively balanced across the visible wavelengths appear “white”.
  • Objects which reflect light in a narrow range of wavelengths appear “colored” (example: green objects reflect light from 500 to 560 nm).
  • No color starts or ends abruptly at a particular wavelength – the transitions between colors are smooth.


9 of 89

Human color perception

  • The human retina has two types of receptor cells that respond to light – the rods and the cones.
  • The rods work in the low-light regime and are responsible for monochromatic vision.
  • The cones respond to brighter light, and are responsible for color perception.
  • There are around 5-7 million cones in a single retina.


10 of 89

Human color perception

  • There are 3 types of cones. Each type responds differently to light of different wavelengths: L (responsive to long wavelengths, i.e. red), M (medium wavelengths, i.e. green) and S (short wavelengths, i.e. blue).


Yellow color: L is stimulated a bit more than M and S is not stimulated

Red: L is stimulated much more than M and S is not stimulated

Violet: S is stimulated, M and L are not

Color-blindness: absence of one or more of the three types of cones

Response sensitivity functions for LMS cells

11 of 89

Human color perception

  • Consider a beam of light striking the retina. Let its spectral intensity as a function of wavelength λ be given as I(λ).
  • The three types of cone cells reweight the spectral intensity and produce the following response (with the spectrum discretized into Nλ wavelengths):

\[ \mathbf{r} = \begin{pmatrix} L \\ M \\ S \end{pmatrix} = C\,\mathbf{I}, \qquad r_k = \sum_{j=1}^{N_\lambda} C_{kj}\, I(\lambda_j) \]

Here C is a matrix of size 3 x Nλ whose rows are the LMS sensitivity functions, and I is a vector of Nλ elements (the sampled spectrum).
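A minimal NumPy sketch of this discretized response (the sensitivity matrix C below is random, purely for illustration):

```python
import numpy as np

n_lambda = 31                    # e.g. 400..700 nm sampled every 10 nm
C = np.random.rand(3, n_lambda)  # rows: L, M, S sensitivity functions (illustrative)
I = np.random.rand(n_lambda)     # sampled spectral intensity of the incoming light

r = C @ I                        # r = (L, M, S) cone responses
```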

12 of 89

Human color perception

  • The colors R, G, B are called primary colors – their corresponding wavelengths are 700 nm (R), 546.1 nm (G) and 435.8 nm (B).
  • These values were standardized by the CIE (International Commission on Illumination – Commission Internationale de l’Eclairage).


13 of 89

Display systems (CRT/LCD)

  • The interior of a cathode ray tube (CRT) contains an array of triangular dot patterns (triads) of electron-sensitive phosphor. Each dot in a triad emits light in one of the three primary colors, with strength governed by the intensity signal for that primary color.
  • The light from the three dots is mixed in different proportions by the color-sensitive cones of the human eye, producing the perception of different colors.
  • Though the electronics of an LCD system are different from those of a CRT, the color display follows similar principles.


14 of 89

Color Models (Color Spaces)

  • The purpose of a color model is to provide a standard way of representing and specifying color.
  • Some color models are oriented towards hardware (e.g. monitors, printers), others towards applications involving color manipulation.
  • Monitors: RGB; printers: CMY; human perception: HSI; efficient compression and transmission: YCbCr.


15 of 89

RGB color model


  • Defines a Cartesian coordinate system for colors – in terms of R, G, B axes.

  • Images in the RGB color model consist of three component images, one for each primary color.

  • When an RGB image is given as input to a display system, the three images combine to produce the composite image on screen.

  • Typically, an 8-bit integer is used to represent the intensity value in each channel, giving rise to (2^8)^3 = 2^24 ≈ 1.677 x 10^7 colors.

16 of 89


17 of 89

CMY(K) color space


  • The colors cyan, magenta and yellow are “complements” of red, green and blue respectively, i.e. cyan and red lie on diagonally opposite corners of the RGB cube, so that (for 8-bit channels; see the sketch below)

C = 255 - R, M = 255 - G, Y = 255 - B

  • Cyan, magenta and yellow are called secondary colors of light, or primary colors of pigments. A cyan colored surface illuminated with white light will not reflect the red component. Likewise for magenta and green, and for yellow and blue.
  • CMY are the colors of the ink pigments used in the printing industry. Color printing is a subtractive process – the ink subtracts certain color components from white light.
  • For purposes of display, white is the full combination of RGB and black is the absence of light. For purposes of printing, white is the absence of any ink, and black is the full combination of CMY.
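A minimal NumPy sketch of the RGB-to-CMY conversion (and back) for 8-bit images:

```python
import numpy as np

def rgb_to_cmy(rgb):
    """C = 255 - R, M = 255 - G, Y = 255 - B (8-bit channels)."""
    return 255 - np.asarray(rgb, dtype=np.uint8)

def cmy_to_rgb(cmy):
    return 255 - np.asarray(cmy, dtype=np.uint8)  # the transform is its own inverse
```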

18 of 89

CMY(K) color space

  • The printer puts down dots (of different sizes, shapes) of CMY colors with tiny spacing (of different widths) in between. The spacing is so tiny that our eye perceives them as a single solid color (optical illusion!). This process is called color half-toning.
  • While black is a full combination of CMY, it is printed on paper using a separate black ink (to save costs). This is the ‘K’ of the CMYK model.


19 of 89


Three examples of color half-toning with CMYK separations. From left to right: The cyan separation, the magenta separation, the yellow separation, the black separation, the combined halftone pattern and finally how the human eye would observe the combined halftone pattern from a sufficient distance.

Color half-toning

20 of 89


Digression: Gray-scale half-toning

Left: Halftone dots. Right: How the human eye would see this sort of arrangement from a sufficient distance.

21 of 89


Digression: negative after-images!

http://thebrain.mcgill.ca/flash/a/a_02/a_02_p/a_02_p_vis/a_02_p_vis.html#

22 of 89

HSI color space

  • RGB and CMY are not intuitive from the point of view of human perception/description.
  • We don’t naturally think of colors as combinations of R, G and B.
  • We tend to think of color in terms of the following components: hue (the “inherent/pure” color – red, orange, purple, etc.), saturation (how undiluted the color is by white, e.g. pink versus magenta), and intensity (how undiluted the color is by black, e.g. dark red versus bright red).


23 of 89


  • Intensity increases as we move from black to white on the intensity line.

  • Consider a plane perpendicular to the intensity line (in 3D). Saturation of a color increases as we move on that plane away from the point where the plane and the intensity line intersect.

  • How to determine hue? Pick any point (e.g. yellow) in the RGB cube, and draw a triangle connecting that point with the white point and the black point. All points inside or on this triangle have the same hue. Any such point is a color corresponding to a convex combination of yellow, black and white, i.e. of the form

a · yellow + b · black + c · white, where a, b, c are non-negative and sum to 1.

By rotating this triangle about the intensity axis, you get different hues.

24 of 89

HSI space


By rotating the triangle about the intensity axis, you will get different hues. In fact hue is an ANGULAR quantity ranging from 0 to 360 degrees. By convention, red is considered 0 degrees.

Primary colors are separated by 120 degrees. The secondary colors (of light) are 60 degrees away from the primary colors.

25 of 89


To be very accurate, this HSI spindle is actually hexagonal. But it is approximated as a circular spindle for convenience. This approximation does not alter the notion of hue or intensity and has an insignificant effect on the saturation.

26 of 89

RGB to HSI conversion

  • Conversion formulae are obtained by making the preceding geometric intuition more precise:
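For reference, the standard textbook formulas (with R, G, B normalized to [0, 1]) are:

\[ \theta = \cos^{-1}\!\left( \frac{\tfrac{1}{2}\left[(R-G)+(R-B)\right]}{\sqrt{(R-G)^2 + (R-B)(G-B)}} \right), \qquad H = \begin{cases} \theta & \text{if } B \le G \\ 360^\circ - \theta & \text{if } B > G \end{cases} \]

\[ S = 1 - \frac{3\,\min(R,G,B)}{R+G+B}, \qquad I = \frac{R+G+B}{3} \]

(H is undefined when R = G = B, i.e. when S = 0.)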


Refer to textbook for formulae to convert back from HSI to RGB
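A direct NumPy transcription of these formulas (a sketch; the small eps guards against division by zero):

```python
import numpy as np

def rgb_to_hsi(rgb):
    """rgb: float array in [0, 1]. Returns H in degrees, and S, I in [0, 1]."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-8
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B)) + eps
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    H = np.where(B <= G, theta, 360.0 - theta)
    S = 1.0 - 3.0 * np.minimum(np.minimum(R, G), B) / (R + G + B + eps)
    I = (R + G + B) / 3.0
    return H, S, I
```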

27 of 89

HSI and RGB


28 of 89

Practical use of hue


Hue is invariant to:

  • Scaling of R,G,B
  • Constant offsets added to R,G,B

What does this mean physically?
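One way to see the invariance: hue can be written purely in terms of channel differences, e.g. in the equivalent form

\[ H = \tan^{-1}\!\left( \frac{\sqrt{3}\,(G-B)}{(R-G) + (R-B)} \right) \]

Adding a constant offset to R, G and B leaves every difference unchanged, and scaling all three channels by s > 0 scales the numerator and denominator equally; in both cases the angle H is unchanged.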

29 of 89

Practical use of hue

  • To understand this, we need a model that tells us what color is observed at a particular point on the surface of an object illuminated by one or more light sources.
  • This color is given by the sum of three terms (the standard ambient + diffuse + specular model; the symbols are defined two slides ahead):

\[ I = \underbrace{k_a L}_{\text{ambient}} + \underbrace{k_d L\,(\hat{n}\cdot\hat{l})}_{\text{diffuse}} + \underbrace{k_s L\,(\hat{r}\cdot\hat{v})^{\alpha}}_{\text{specular}} \]

Ambient light (say due to sunlight): a constant effect on all points of the object’s surface.

Diffuse reflection of light from a directed source off a rough surface: varies from point to point on the surface.

Specular reflection from a shiny surface: varies from point to point on the surface.

30 of 89


Diffuse reflection from an irregular surface

Specular reflection

31 of 89


Diffuse reflection

Diffuse + specular reflection

  • Diffuse reflection from a rough surface: “diffuse” means that incident light is reflected in all directions.
  • Specular reflection: part of the surface acts like a mirror, the incident light is reflected only in particular directions

In the model \( I = k_a L + k_d L\,(\hat{n}\cdot\hat{l}) + k_s L\,(\hat{r}\cdot\hat{v})^{\alpha} \):

\( \hat{n} \): unit vector normal to the surface at a point
\( \hat{l} \): lighting direction
\( \hat{v} \): viewing direction
\( \hat{r} \): direction of the reflected light
L: strength of the white light source
\( k_a, k_d, k_s \): surface reflectivities (fraction of incident light that is reflected off the surface)

For shiny surfaces, α is large.

32 of 89

Practical use of hue

  • The ambient and specular components are assumed to be the same across RGB (neutral reflection model). So they get subtracted out when computing R-G,G-B,B-R. Hence hue is invariant to specular reflection!
  • Notice: hue is independent of strength of lighting (why?), lighting direction (why?) and viewing direction (why?).
  • This makes hue useful in object detection and recognition, e.g. in detecting faces or foliage in color images.
  • Hue is thus said to be an “illumination invariant” feature.


33 of 89

Food for thought

  • We’ve heaped praise on hue all along. Any ideas about its demerits?
  • Suppose we define the following quantities (r, g, b) [the chromaticity vector] derived from RGB:

\[ r = \frac{R}{R+G+B}, \qquad g = \frac{G}{R+G+B}, \qquad b = \frac{B}{R+G+B} \]

Is the chromaticity vector also an illumination invariant feature? How does it compare to hue?

34 of 89

Digression: Playing with color: seeing is not (!) believing


http://thebrain.mcgill.ca/flash/a/a_02/a_02_p/a_02_p_vis/a_02_p_vis.html#

35 of 89

Operations on color images

  • Color image histogram equalization
  • Color image filtering
  • Color edge detection


36 of 89

Histogram equalization

  • Method 1: perform histogram equalization on RGB channels separately.
  • Method 2: Convert RGB to HSI, histogram equalize the intensity, convert back to RGB.
  • Method 1 may cause alterations in the hue – which is undesirable.
  • Method 2 will change only the intensity, leaving hue and saturation unaltered. It is the preferred method.
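A sketch of Method 2 in Python, using HSV as a readily available stand-in for HSI (scikit-image’s rgb2hsv/hsv2rgb and equalize_hist):

```python
from skimage import color, exposure

def equalize_color(rgb):
    """Histogram-equalize only the value/intensity channel (Method 2).
    rgb: float image with values in [0, 1]."""
    hsv = color.rgb2hsv(rgb)
    hsv[..., 2] = exposure.equalize_hist(hsv[..., 2])  # equalize V only
    return color.hsv2rgb(hsv)                          # hue, saturation untouched
```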


37 of 89


Top row: original images

Middle row: histogram equalization channel by channel

Bottom row: histogram equalization on intensity (of HSI) and conversion back to RGB

38 of 89

Color image smoothing: bilateral filtering

  • Remember the bilateral filter (HW2): an edge-preserving filter for grayscale images.
  • It smooths the image using local weighted averages, with weights driven by both the distance between spatial coordinates and the difference between intensity values.


39 of 89

Bilateral filtering for color images

  • You can filter each channel separately, i.e.
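In symbols, one standard form of the bilateral filter, applied to each channel c ∈ {R, G, B} independently:

\[ \hat{I}_c(\mathbf{p}) = \frac{\sum_{\mathbf{q}\in N(\mathbf{p})} w_c(\mathbf{p},\mathbf{q})\, I_c(\mathbf{q})}{\sum_{\mathbf{q}\in N(\mathbf{p})} w_c(\mathbf{p},\mathbf{q})}, \qquad w_c(\mathbf{p},\mathbf{q}) = \exp\!\left(-\frac{\|\mathbf{p}-\mathbf{q}\|^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\left(I_c(\mathbf{p})-I_c(\mathbf{q})\right)^2}{2\sigma_r^2}\right) \]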


40 of 89

Bilateral filtering for color images

  • Or you can filter the three channels in a coupled fashion, i.e. the smoothing weights are the same for all three channels and are derived using information from all three channels.
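In symbols, the only change from the per-channel version is that a single range weight is computed from the full color vector c(p) = (R(p), G(p), B(p)) and shared by all three channels:

\[ w(\mathbf{p},\mathbf{q}) = \exp\!\left(-\frac{\|\mathbf{p}-\mathbf{q}\|^2}{2\sigma_s^2}\right)\exp\!\left(-\frac{\|\mathbf{c}(\mathbf{p})-\mathbf{c}(\mathbf{q})\|^2}{2\sigma_r^2}\right) \]

A plain, unoptimized NumPy sketch of this coupled filter (np.roll wraps around at the image borders, a simplification):

```python
import numpy as np

def coupled_bilateral(img, sigma_s=3.0, sigma_r=0.1, radius=5):
    """img: H x W x 3 float array in [0, 1]. One weight per pixel pair,
    computed from all three channels and shared by all three channels."""
    out = np.zeros_like(img)
    acc = np.zeros(img.shape[:2] + (1,))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = np.roll(img, (dy, dx), axis=(0, 1))
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            diff2 = np.sum((img - shifted) ** 2, axis=2, keepdims=True)
            w = w_spatial * np.exp(-diff2 / (2 * sigma_r ** 2))
            out += w * shifted
            acc += w
    return out / acc
```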


41 of 89

What’s wrong with separate channel bilateral filtering?


Channel by channel: Color artifacts around edges. RGB channels are highly inter-dependent – you shouldn’t treat them as independent.

Separate channel

Coupled

42 of 89

43 of 89

44 of 89

45 of 89

Color Edges

  • A color (RGB) image has three gradient vectors – one for each channel.
  • We could compute edges separately for each channel.
  • One option: combine (add) the per-channel edges to get a composite edge image. This is not a good idea – why? (see next slide)


46 of 89

Color Edges

  • Problem: the two circled points have the same edge strength (mathematically), though one appears to be a stronger edge.


At the first circled point: (Rx, Gx, Bx) = (255, 255, 255) and (Ry, Gy, By) = (0, 0, 0), so

Rx^2 + Gx^2 + Bx^2 + Ry^2 + Gy^2 + By^2 = 3 x 255^2

At the second circled point: (Rx, Gx, Bx) = (255, 255, 0) and (Ry, Gy, By) = (0, 0, 255), so

Rx^2 + Gx^2 + Bx^2 + Ry^2 + Gy^2 + By^2 = 3 x 255^2

47 of 89

Color Gradient/Edge

  • To find the color gradient, we want to ask: along which direction in XY space is the total magnitude of change in intensity maximum?
  • The squared change in intensity along a direction (cos ϴ, sin ϴ) is given by the square of the directional derivative of the intensity:

\[ F(\theta) = g_{xx}\cos^2\theta + 2\,g_{xy}\sin\theta\cos\theta + g_{yy}\sin^2\theta \]

where \( g_{xx} = R_x^2 + G_x^2 + B_x^2 \), \( g_{yy} = R_y^2 + G_y^2 + B_y^2 \), and \( g_{xy} = R_x R_y + G_x G_y + B_x B_y \).

  • We want to maximize this w.r.t. ϴ: take the derivative with respect to ϴ and set it to zero.

48 of 89

Color Gradient/Edge

  • This gives the color gradient direction, which makes an angle ϴ w.r.t. the X axis, given by:

\[ \theta = \frac{1}{2}\tan^{-1}\!\left(\frac{2\,g_{xy}}{g_{xx}-g_{yy}}\right) \]

  • For a grayscale image, this reduces to the familiar gradient direction, \( \tan\theta = I_y / I_x \).

49 of 89

Color Gradient/Edge

  • Consider the 2 x 2 matrix (the local color gradient matrix)

\[ \mathbf{G} = \begin{pmatrix} g_{xx} & g_{xy} \\ g_{xy} & g_{yy} \end{pmatrix} \]

  • It turns out that the direction ϴ we derived (i.e. the color gradient) is given by the eigenvector of this matrix corresponding to the larger eigenvalue. The direction perpendicular to it (i.e. the eigenvector corresponding to the smaller eigenvalue) is the color edge direction.
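A NumPy sketch of this computation (a Di Zenzo-style color gradient; per-channel derivatives via np.gradient):

```python
import numpy as np

def color_gradient(img):
    """img: H x W x 3 float array. Returns the color gradient strength
    (sqrt of the larger eigenvalue of G) and its direction theta."""
    gxx = gyy = gxy = 0.0
    for c in range(3):
        Iy, Ix = np.gradient(img[..., c])          # derivatives along y, x
        gxx, gyy, gxy = gxx + Ix * Ix, gyy + Iy * Iy, gxy + Ix * Iy
    theta = 0.5 * np.arctan2(2 * gxy, gxx - gyy)   # maximizing direction
    F = 0.5 * ((gxx + gyy) + (gxx - gyy) * np.cos(2 * theta)
               + 2 * gxy * np.sin(2 * theta))      # F(theta) at the maximum
    return np.sqrt(np.maximum(F, 0)), theta
```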

50 of 89


51 of 89

PCA on RGB values

  • Suppose you take N color images and extract the RGB values of each pixel (a 3 x 1 vector at each location).
  • Now suppose you build an eigenspace out of this – you get 3 eigenvectors, each corresponding to a different eigenvalue.


52 of 89

PCA on RGB values

  • The eigenvectors (as the columns of the matrix below) will typically look as follows:

0.5952   0.6619   0.4556
0.6037   0.0059  -0.7972
0.5303  -0.7496   0.3961

  • The exact numbers are not important, but the first eigenvector is like an average of R, G and B. It is called the Luminance channel (Y). It is similar to the intensity in the HSI space.


53 of 89

PCA on RGB values

  • The second eigenvector is like Y-B, and the third is like Y-G. These are called the Chrominance channels.
  • The Y-Cb-Cr color space is related to this PCA-based space (though it uses specific, standardized weightings of R, G, B to form the luminance and the chrominance channels, denoted Cb and Cr).
  • The values in the three channels Y, Cb and Cr are decorrelated, similar to the values projected onto the PCA-based channels.
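A NumPy sketch of this PCA on pixel RGB triples:

```python
import numpy as np

def rgb_pca(images):
    """images: list of H x W x 3 arrays. Returns the eigenvalues (descending)
    and eigenvectors (as columns) of the 3 x 3 covariance of the pixel RGBs."""
    X = np.concatenate([im.reshape(-1, 3).astype(float) for im in images])
    X -= X.mean(axis=0)                    # center the RGB triples
    cov = X.T @ X / (len(X) - 1)           # 3 x 3 covariance matrix
    evals, evecs = np.linalg.eigh(cov)     # eigh returns ascending order
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]   # first column ~ luminance direction
```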


54 of 89

PCA on RGB values

  • The luminance channel (Y) carries most information from the point of view of human perception, and the human eye is less sensitive to changes in chrominance.
  • This fact can be used to assign coarser quantization levels (i.e. fewer bits) for storing or transmitting Cb and Cr values as compared to the Y channel. This improves the compression rate.
  • The JPEG standard for color image compression uses the YCbCr format. For an image of size M x N x 3, it stores Y with full resolution (i.e. as an M x N image), and Cb and Cr with 25% resolution, i.e. as M/2 x N/2 images.
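For concreteness, one common RGB to YCbCr mapping (the BT.601 convention typically used with JPEG; the offsets of 128 assume 8-bit channels) is:

\[ Y = 0.299R + 0.587G + 0.114B, \qquad C_b = 0.564\,(B - Y) + 128, \qquad C_r = 0.713\,(R - Y) + 128 \]

Note how Y is a weighted average of R, G and B, while Cb and Cr are scaled differences with respect to Y, mirroring the PCA-based channels above.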


55 of 89

56 of 89

57 of 89

58 of 89

The variances of the three eigen-coefficient values:

8411, 159.1, 71.7

59 of 89

60 of 89

61 of 89

62 of 89

RGB and its corresponding Y, Cb, Cr channels

63 of 89

Beyond color: Hyperspectral images

  • Images of the form M x N x L, where L is the number of channels. L can range from 30 to 30,000 or more.
  • Finer division of wavelengths than possible in RGB!
  • Can contain wavelengths in the infrared or ultraviolet regime.


64 of 89

Sources of confusion ☺

  • Hyperspectral images are abbreviated as HSI – the same acronym as the hue-saturation-intensity color space!
  • Hyperspectral images are different from multispectral images. The latter contain a few discrete, disconnected wavelength bands; the former contain many more bands, sampled contiguously.


65 of 89

Beyond color: Hyperspectral images

  • Widely used in remote sensing (satellite images) – different materials/geographical entities (soil, water, vegetation, concrete, landmines, mountains, etc.) can often be detected/classified by their spectral properties.
  • Also used in chemistry, pharmaceutical industry and pathology for classification of materials/tissues.


66 of 89

Example multispectral image with 6 bands

67 of 89

68 of 89

69 of 89

70 of 89

71 of 89

72 of 89

Reference color image

73 of 89

Color image Demosaicing

CS 663, Ajit Rajwade

74 of 89

Color Filter Arrays

  • A color filter array (CFA) is an array of tiny color filters placed in front of the image sensor array of a camera.

  • The resolution of this array is the same as that of the image sensor array.

  • Each color filter may allow a different wavelength of light to pass – this is pre-determined during the camera design.


75 of 89

Color Filter Arrays

  • The most common type of CFA is the Bayer pattern, which is shown below:

  • The Bayer pattern collects information at red, green and blue wavelengths only, as shown above.


76 of 89

Color Filter Arrays

  • The Bayer pattern uses twice the number of green elements as compared to red or blue elements.

  • This is because both the M and L cone cells of the retina are sensitive to green light.

  • The raw (uncompressed) output of the Bayer pattern is called the Bayer pattern image or the mosaiced (*) image.

  • The mosaiced image needs to be converted to a normal RGB image by a process called color image demosaicing.


*The word “mosaic” or “mosaiced” is not to be confused with image panorama generation which is also called image mosaicing.

77 of 89


“original scene”

Mosaiced image

Mosaiced image – just coded with the Bayer filter colors

“Demosaiced” image – obtained by interpolating the missing color values at all the pixels

78 of 89

A Demosaicing Algorithm

  • There exists a plethora of demosaicing algorithms.

  • We will study one that is implemented in the “demosaic” function of MATLAB.

  • The algorithm implemented by this function was published in 2004.

Malvar, H. S., L.-W. He, and R. Cutler, “High-quality linear interpolation for demosaicing of Bayer-patterned color images,” Proc. IEEE ICASSP, vol. 3, pp. III-485–III-488, May 2004.


79 of 89

Demosaicing Algorithm

  • Demosaicing involves interpolation of missing color values from nearby pixels.
  • The easiest way is to perform linear interpolation – given the structure of the Bayer pattern.
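A sketch of such bilinear demosaicing via 2D convolution, assuming an RGGB Bayer layout (the kernel weights are the standard bilinear ones; borders are zero-padded, a simplification):

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """raw: H x W mosaiced image with RGGB layout (H, W even). Returns H x W x 3."""
    H, W = raw.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1   # R at even rows/cols
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1   # B at odd rows/cols
    g_mask = 1 - r_mask - b_mask                        # G at the rest
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # interpolates R or B
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0  # interpolates G
    R = convolve2d(raw * r_mask, k_rb, mode='same')
    G = convolve2d(raw * g_mask, k_g,  mode='same')
    B = convolve2d(raw * b_mask, k_rb, mode='same')
    return np.dstack([R, G, B])
```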


80 of 89

Demosaicing Algorithm

  • But such an algorithm gives highly sub-optimal results at edges – as seen in the simulation below.


Original image (top left), output of bilinear interpolation for demosaicing (top right), output of MATLAB’s demosaic algorithm (bottom left)

81 of 89

Demosaicing algorithm

  • Make use of the correlation between R,G,B color values for a more edge-aware interpolation!

  • Edges in natural images have stronger luminance changes than chrominance changes.

  • Consider the case of finding G at an R or a B pixel.

  • The R-gradient can be useful information for determining the G value.


82 of 89

Demosaicing algorithm

  • Consider the case of finding G at an R or a B pixel (x,y).

  • Obtain an estimate of the R value at pixel (x,y) by bilinear interpolation.

  • If the actual R value at (x,y) differs considerably from the bilinearly interpolated R value at (x,y), it means that there is a sharp luminance change at that pixel.

  • The corrected value of G is given as follows (α is a gain factor, and the hatted quantities with subscript B are bilinearly interpolated values):

\[ \hat{G}(x,y) = \hat{G}_B(x,y) + \alpha\left( R(x,y) - \hat{R}_B(x,y) \right) \]

(At a B pixel, R is replaced by B throughout.)

83 of 89


84 of 89

Demosaicing algorithm

  • We have seen how to obtain G at an R or a B pixel.

  • To obtain the R value at a G pixel, the corresponding formula is

\[ \hat{R}(x,y) = \hat{R}_B(x,y) + \beta\left( G(x,y) - \hat{G}_B(x,y) \right) \]

where the subscript-B quantities are bilinearly interpolated values.

85 of 89

Demosaicing algorithm

  • To obtain an R value at a B pixel, the corresponding formula is

\[ \hat{R}(x,y) = \hat{R}_B(x,y) + \gamma\left( B(x,y) - \hat{B}_B(x,y) \right) \]

where the subscript-B quantities are bilinearly interpolated values.

86 of 89

Demosaicing algorithm

  • To obtain a B value at a G pixel, the corresponding formula is

\[ \hat{B}(x,y) = \hat{B}_B(x,y) + \beta\left( G(x,y) - \hat{G}_B(x,y) \right) \]

where the subscript-B quantities are bilinearly interpolated values.

87 of 89

Demosaicing algorithm

  • To obtain a B value at an R pixel, the corresponding formula is

\[ \hat{B}(x,y) = \hat{B}_B(x,y) + \gamma\left( R(x,y) - \hat{R}_B(x,y) \right) \]

where the subscript-B quantities are bilinearly interpolated values.

88 of 89

Gain factors

  • The values α, β, γ are gain factors for the correction due to gradients in the R,G,B channels respectively.

  • How are they estimated? In a training phase of the algorithm – performed offline.

  • The gain factors were designed to optimize a mean squared error criterion (the paper arrives at approximately α = 1/2, β = 5/8, γ = 3/4).


89 of 89

Demosaicing: when does it happen?

  • Your camera acquires images in a raw format, with 12 bits per pixel.

  • Remember: at each pixel, only one of the R,G,B values is measured.

  • That is, the camera measures just the CFA image.

  • The camera then runs a demosaicing algorithm internally to generate the full RGB image.

  • This image then goes through various intensity transformations, after which it is JPEG-compressed and stored on the camera’s memory card.

  • The demosaicing algorithm described earlier does not perform any noise removal – which can lead to noisy artifacts in the final image!
