Cross-Camera Convolutional Color Constancy

Mahmoud Afifi1,2*   Jonathan T. Barron2   Chloe LeGendre2   Yun-Ta Tsai2   Francois Bleibel2

1York University   2Google Research

*This work was done while Mahmoud was an intern at Google.

Source code

Intro

The color observed by the eye depends on:
  • the sensitivity of the long, medium, and short (LMS) cone cells,
  • the illuminant's spectral power distribution, and
  • the object's spectral reflectance properties.

Analogously, the camera's raw image depends on the camera spectral sensitivity, the illuminant's spectral power distribution, and the object's spectral reflectance properties; the raw image is then rendered into the display image.

The camera ISP renders the sensor raw image into the final display image.

White balance is the stage of the camera ISP that maps the sensor raw-RGB image to a white-balanced image.

The sensor raw-RGB image can be modeled as the "true" scene RGB colors scaled by the scene illuminant color.

An illuminant estimation algorithm takes the sensor raw-RGB image and outputs an estimate of the scene illuminant color, which is then used to recover the "true" scene RGB colors.
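Applying the estimated illuminant is a simple diagonal (von Kries-style) correction: each channel of the raw image is divided by the corresponding component of the illuminant color. A minimal numpy sketch (the image and illuminant values are illustrative, not from the talk):

```python
import numpy as np

def white_balance(raw, illuminant):
    """Diagonal (von Kries-style) correction: divide each channel by the
    estimated illuminant color, normalized so the green gain is 1."""
    illuminant = np.asarray(illuminant, dtype=np.float64)
    gains = illuminant[1] / illuminant          # [g/r, 1, g/b]
    return raw * gains[np.newaxis, np.newaxis, :]

# Toy 1x1 "image": a surface whose raw color equals the illuminant color
# becomes neutral gray after correction.
raw = np.array([[[0.6, 0.5, 0.25]]])
wb = white_balance(raw, illuminant=[0.6, 0.5, 0.25])
```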

Prior work

Illuminant estimation
  • Statistical methods: simple and easy to implement, but less accurate.
  • Learning methods: more accurate, but generalize poorly to new camera models.
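A classic method in the statistical category is gray-world: assume the average scene reflectance is achromatic, so the mean image color itself serves as the illuminant estimate. A sketch with synthetic data (not from the talk):

```python
import numpy as np

def gray_world(raw):
    """Gray-world estimate: the mean RGB of the image, unit-normalized."""
    mean_rgb = raw.reshape(-1, 3).mean(axis=0)
    return mean_rgb / np.linalg.norm(mean_rgb)

# Synthetic scene: achromatic (gray) surfaces under a reddish illuminant.
rng = np.random.default_rng(0)
gray = rng.uniform(0.2, 0.8, size=(64, 64, 1))
illum = np.array([0.8, 0.5, 0.3])
raw = gray * illum                      # observed raw image
est = gray_world(raw)                   # proportional to `illum`
```

Because every surface here really is gray, the estimate recovers the illuminant direction exactly; real scenes violate the assumption, which is why such methods are less accurate.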

Learning-based methods (e.g., a DNN) are trained on labeled raw-RGB images taken by the same camera model. Figure: illuminant colors plotted in the rg chromaticity space.

Figure: the Planckian locus in the rg chromaticity space of the training camera sensor vs. the Planckian loci of different camera sensors; the loci differ from sensor to sensor.

Prior work on learning sensor-independent illuminant estimation:
  • SIIE [Afifi BMVC'19]
  • Quasi-Unsupervised CC [Bianco CVPR'19]


Method

  • A self-calibration method for cross-camera color constancy.

  • Additional (unlabeled) images are provided as input to the model at test time.

  • These additional images allow the model to calibrate itself to the spectral properties of the test-set camera during inference.

Figure: an input query image and additional images captured by different cameras (Canon EOS 5DSR, Nikon D810, and a mobile Sony IMX135 sensor), alongside our result.

Background: Convolutional Color Constancy [Barron ICCV'15, Barron and Tsai CVPR'17]
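CCC's core observation: in log-chroma coordinates u = log(g/r), v = log(g/b), re-illuminating a scene only translates the image's chroma histogram, so illuminant estimation reduces to 2D localization with a learned convolutional filter. A numpy sketch of the histogram and its translation property (the bin layout and data are illustrative, not the paper's settings):

```python
import numpy as np

def log_chroma_hist(raw, n_bins=64, lo=-2.0, hi=2.0):
    """Normalized 2D histogram of u = log(g/r), v = log(g/b)."""
    r, g, b = raw.reshape(-1, 3).T
    u, v = np.log(g / r), np.log(g / b)
    hist, _, _ = np.histogram2d(u, v, bins=n_bins,
                                range=[[lo, hi], [lo, hi]])
    return hist / hist.sum()

rng = np.random.default_rng(1)
r = rng.uniform(0.5, 2.0, 4096)
b = rng.uniform(0.5, 2.0, 4096)
img = np.stack([r, np.ones(4096), b], axis=1).reshape(-1, 1, 3)

# Re-illuminate: scale r up and b down by exp(0.5) = 8 bins of width 1/16.
tint = np.array([np.exp(0.5), 1.0, np.exp(-0.5)])
h0 = log_chroma_hist(img)
h1 = log_chroma_hist(img * tint)
# h1 is h0 translated by (-8, +8) bins: a pure shift, not a distortion.
```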

C5: Cross-Camera Convolutional Color Constancy (ours)


Method: CCC Model Generator
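Structurally, the CCC model generator is a hypernetwork: it consumes the histograms of the additional (unlabeled) images and emits the weights of a per-camera CCC-like model, which is then applied to the query image's histogram. The sketch below mimics only that control flow; the tiny linear "generator", FFT filtering, and argmax readout are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

N = 32  # histogram bins per axis (illustrative)

def generate_ccc_filter(extra_hists, W, b):
    """Hypernetwork step: pool the additional images' histograms and map
    them (here, with a single linear layer) to a 2D CCC-style filter."""
    feat = np.stack(extra_hists).mean(axis=0).ravel()
    return (W @ feat + b).reshape(N, N)

def estimate_illuminant_bin(query_hist, filt):
    """CCC step: filter the query histogram (circular convolution via
    FFT); the argmax bin is the estimated illuminant's log-chroma."""
    response = np.real(np.fft.ifft2(np.fft.fft2(query_hist)
                                    * np.fft.fft2(filt)))
    return np.unravel_index(response.argmax(), response.shape)

rng = np.random.default_rng(0)
W = rng.normal(scale=1e-2, size=(N * N, N * N))   # untrained weights
b = rng.normal(scale=1e-2, size=N * N)
extra = [rng.random((N, N)) for _ in range(7)]    # additional-image hists
query = rng.random((N, N))                        # query image histogram
filt = generate_ccc_filter(extra, W, b)
u_bin, v_bin = estimate_illuminant_bin(query, filt)
```

The point of this structure is that the filter is not fixed at training time: it is regenerated at inference from whatever camera's images are supplied.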

Method: Training/Testing

  • C5 is trained by minimizing the angular error between the estimated illuminant color and the corresponding ground-truth illuminant color.

Figure: the angular error between the estimated illuminant and the ground-truth illuminant.
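The angular error is the angle between the estimated and ground-truth illuminant vectors, which makes it invariant to overall brightness. A direct implementation (the example vectors are made up):

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angle in degrees between two illuminant RGB vectors."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    cos = est @ gt / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

angular_error_deg([1.0, 1.0, 1.0], [2.0, 2.0, 2.0])  # ≈ 0: scale-invariant
angular_error_deg([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # ≈ 90
```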


  • C5 is evaluated using a leave-one-out cross-validation approach across datasets:

    • NUS [Cheng et al., JOSA'14], Gehler-Shi [Gehler et al., CVPR'08], Cube+ [Banić et al., arXiv'17], and INTEL-TAU [Laakom et al., IEEE Access'21].

      • Testing: Cube+ and INTEL-TAU
      • Training: NUS and Gehler-Shi

    • Data augmentation: a real Fujifilm X-M1 raw image is mapped to the CIE XYZ space and then into the Nikon D40's sensor space, yielding an image that approximates a real Nikon D40 raw image.
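This augmentation chains two linear color transforms: the source camera's raw-to-XYZ matrix, then the inverse of the target camera's raw-to-XYZ matrix. A sketch with made-up placeholder matrices (real ones come from per-camera calibration, e.g. the color matrices stored in DNG metadata):

```python
import numpy as np

# Placeholder raw->XYZ calibration matrices (illustrative values only).
SRC_TO_XYZ = np.array([[0.7, 0.2, 0.1],
                       [0.3, 0.6, 0.1],
                       [0.0, 0.1, 0.9]])
DST_TO_XYZ = np.array([[0.6, 0.3, 0.1],
                       [0.2, 0.7, 0.1],
                       [0.1, 0.1, 0.8]])

def map_raw_between_sensors(raw, src_to_xyz, dst_to_xyz):
    """Map raw pixels: source sensor space -> CIE XYZ -> target space."""
    m = np.linalg.inv(dst_to_xyz) @ src_to_xyz
    return raw @ m.T

rng = np.random.default_rng(2)
raw_src = rng.random((8, 8, 3))                 # stand-in source raw image
raw_dst = map_raw_between_sensors(raw_src, SRC_TO_XYZ, DST_TO_XYZ)
# Mapping back recovers the original (the transforms are invertible).
raw_back = map_raw_between_sensors(raw_dst, DST_TO_XYZ, SRC_TO_XYZ)
```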

Results

Figure: input raw image, FFCC result, C5 (ours), and ground truth.

Figure: input raw image; results of Quasi-Unsupervised CC, SIIE, and C5 (ours), with the histogram & generated CCC model; and the ground truth.

Further qualitative examples: input raw image; Quasi-Unsupervised CC, SIIE, and C5 (ours); and the ground truth.

Comparisons with Quasi-Unsupervised CC, SIIE, and FFCC (angular error in degrees; lower is better):

INTEL-TAU Dataset
          Quasi-U CC   SIIE   FFCC    C5
Mean         3.71      3.42   3.42   2.52
Med.         2.67      2.42   2.38   1.70
B. 25%       0.66      0.73   0.70   0.52
W. 25%       8.55      7.80   7.96   5.96
Tri.         2.90      2.64   2.61   1.86

Cube+ Dataset
          Quasi-U CC   SIIE   FFCC    C5
Mean         2.69      2.14   2.69   1.92
Med.         1.76      1.44   1.89   1.32
B. 25%       0.49      0.44   0.46   0.44
W. 25%       6.45      5.06   6.31   4.44
Tri.         2.00       -     2.08   1.46

Cube+ Challenge
          Quasi-U CC   SIIE   FFCC    C5
Mean         3.12      2.89   3.25   2.24
Med.         2.19      1.72   2.04   1.48
B. 25%       0.60      0.71   0.64   0.47
W. 25%       7.28      7.06   8.22   5.39
Tri.         2.40       -     2.09   1.62

Gehler-Shi Dataset
          Quasi-U CC   SIIE   FFCC    C5
Mean         3.46      2.77   2.95   2.50
Med.         2.23      1.93   2.19   1.99
B. 25%        -        0.55   0.57   0.53
W. 25%        -        6.53   6.75   5.46
Tri.          -         -     2.35   2.03

NUS Dataset
          Quasi-U CC   SIIE (CS)   FFCC    C5    C5 (CS)
Mean         3.00        2.05      2.87   2.54    1.77
Med.         2.25        1.50      2.14   1.90    1.37
B. 25%        -          0.52      0.71   0.61    0.48
W. 25%        -          4.48      6.23   5.61    3.75
Tri.          -           -        2.30   2.02    1.46

Results

Additional quantitative results (angular error in degrees; the per-column method labels are not given):

Cube+ Dataset
Mean     2.60   2.28   2.23   1.87   1.92   1.93   1.95
Med.     1.86   1.50   1.52   1.27   1.32   1.41   1.35
B. 25%   0.55   0.59   0.56   0.41   0.44   0.42   0.40
W. 25%   5.89   5.19   5.11   4.36   4.44   4.35   4.52

Cube+ Challenge
Mean     2.70   2.55   2.24   2.41   2.39
Med.     2.00   1.63   1.48   1.72   1.61
B. 25%   0.61   0.54   0.47   0.54   0.53
W. 25%   6.15   6.21   5.39   5.58   5.64

INTEL-TAU Dataset
Mean     2.99   2.49   2.52   2.60   2.57
Med.     2.18   1.66   1.70   1.79   1.74
B. 25%   0.66   0.51   0.52   0.54   0.52
W. 25%   6.71   5.93   5.96   6.07   6.08

Gehler-Shi Dataset
Mean     2.98   2.36   2.50   2.55   2.46
Med.     2.05   1.61   1.99   1.88   1.74
B. 25%   0.54   0.44   0.53   0.50   0.50
W. 25%   7.13   5.60   5.46   5.77   5.73

NUS Dataset
Mean     2.84   2.68   2.54   2.64   2.49
Med.     2.20   2.00   1.90   1.99   1.88
B. 25%   0.69   0.66   0.61   0.65   0.61
W. 25%   6.14   5.90   5.61   5.75   5.43

Effect of the choice of additional input images, with examples of vivid and dull images.

Cube+ Challenge (mean angular error in degrees):
         C5    C5 (another camera model)   C5 (dull images)   C5 (vivid images)
Mean    2.70            2.55                    2.24                2.19

Summary

  • We have presented C5, a cross-camera convolutional color constancy method.

  • C5 is a multi-input hypernetwork approach that is trained on images from multiple cameras.

  • At test time, C5 synthesizes the weights of a CCC-like model that is dynamically calibrated to the spectral properties of the previously unseen camera that captured the test-set image.

  • C5 achieves state-of-the-art performance on cross-camera color constancy for several datasets.

  • C5 is fast to evaluate (∼7 ms per image on a GPU, ∼90 ms on a CPU) and requires little memory (∼2 MB).

Thank you!