1 of 17

Machine Media

Week 7

2 of 17

Week 7 - Class Overview

Themes & Timeline:

Week 1: Introductions, Chance & Protocol
Week 2: Chatbots and Generative Text
Week 3: Data Labor
Week 4: Classification, Taxonomies, Computer Vision
Week 5: Generative Adversarial Networks, Handmade Datasets
Week 6: GAN Review, Photo Tutorial
Week 7: Facial Recognition, Identity, Surveillance
Week 8: Deepfakes
Week 9: The Digital is Physical: Environmental Impact
Week 10: Handmade Dataset Mid-way Presentations
Week 11: Data Augmentation Workshop (Python)
Week 12: Writing Images: Text-to-Image Models
Week 13: Data Augmentation Workshop Part 2
Week 14: Training Demo, In-class Work Day
Week 15: Final Presentations

Thanksgiving Break

Handmade Dataset Project

3 of 17

Week 7 - Agenda

  • 8:45 - 9:45 Welcome Aki!
  • 9:45 - 9:55 Break
  • 9:55 - 10:00 Review Homework and any questions about final
  • 10:00 - 10:20 Facial Recognition, Identity, Surveillance
  • 10:20 - 10:25 Break
  • 10:25 - 10:55 Code tutorial
  • 10:55 - 11:00 Homework

4 of 17

Week 7 - Guest Speaker

Guest speaker:

Akina Younge - Movement director at the Center on Race and Digital Justice at UCLA

She works towards racial justice and design justice as a coalition builder, public policy shaper, and popular education designer.

5 of 17

Quick Homework Review, check-in about Final Project.

6 of 17

Facial Recognition, Identity, Surveillance

Intaglio over inkjet print: IBM’s DiF measurements over Bertillon's face

7 of 17

“There are no rules when it comes to what images police can submit to face recognition algorithms to generate investigative leads. As a consequence, agencies across the country can—and do—submit all manner of "probe photos," photos of unknown individuals submitted for search against a police or driver license database. These images may be low-quality surveillance camera stills, social media photos with filters, and scanned photo album pictures. Records from police departments show they may also include computer-generated facial features, or composite or artist sketches.”

Clare Garvie - Garbage In, Garbage Out

8 of 17

“Sketches typically rely on:

  1. An eyewitness's memory of what the subject looked like;
  2. The eyewitness's ability to communicate the memory of the subject to a sketch artist;
  3. The artist's ability to translate that description into an accurate drawing of the subject’s face, someone whom the artist has never seen in person.”

9 of 17

  • Stop using celebrity look-alike probe images.
  • Stop submitting artist or composite sketches to face recognition systems not expressly designed for this purpose.
  • Establish and follow minimum photo quality standards, such as pixel density and the percent of the face that must be visible in the original photo, and prohibit the practice of pasting other people’s facial features into a probe.
  • If edits to probe images are made, carefully document these edits and their results.
  • Require that any subsequent human review of the face recognition possible match be conducted against the original photo, not a photo that has undergone any enhancements, including color and pose correction.
  • As is the practice in some police departments, require double-blind confirmation. The face recognition system should produce an investigative lead only if two analysts independently conclude that the same photo is a possible match.
  • Provide concrete guidance to investigating officers about what constitutes sufficient corroboration of a possible match generated by a face recognition system before law enforcement action is taken against a suspect. This should include: mandatory photo arrays; a prohibition on informing witnesses that face recognition was used; and a concrete nexus between the suspect and the crime in addition to the identification, such as a shared address.
  • Make available to the defense any information about the use of face recognition.
  • Prohibit the use of face recognition as a positive identification under any circumstance.

10 of 17

“Neophrenology”

  • Junk Science, phrenology, race science, etc.

Galton’s composite photos of criminals

Xiaolin Wu & Xi Zhang's composite photos
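
Mechanically, a Galton-style composite photograph is just many roughly aligned face portraits averaged together, pixel by pixel. As a minimal sketch of that averaging step in p5.js (the filenames are placeholders, and real composites also depend on careful alignment of the source photos):

    // Naive composite: average several already-aligned, same-size face images pixel by pixel.
    // The image filenames are hypothetical placeholders for illustration.
    let imgs = [];

    function preload() {
      for (let i = 1; i <= 4; i++) {
        imgs.push(loadImage('face' + i + '.png'));
      }
    }

    function setup() {
      createCanvas(256, 256);
      const composite = createImage(width, height);
      composite.loadPixels();
      imgs.forEach((img) => { img.resize(width, height); img.loadPixels(); });
      // pixels is a flat RGBA array; average each channel across all source images
      for (let i = 0; i < composite.pixels.length; i++) {
        let sum = 0;
        for (const img of imgs) sum += img.pixels[i];
        composite.pixels[i] = sum / imgs.length;
      }
      composite.updatePixels();
      image(composite, 0, 0);
    }

The averaging itself is trivial; the "junk science" lies in claiming that the resulting averaged face reveals anything about criminality or character.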

11 of 17

https://theintercept.com/2016/11/18/troubling-study-says-artificial-intelligence-can-predict-who-will-be-criminals-based-on-facial-features/

12 of 17

13 of 17

The IBM Diversity in Faces (DiF) dataset consists of "annotations of one million publicly available face images." The dataset was created in 2019 to address existing biases in overwhelmingly light-skinned and male-dominated facial datasets. IBM believed that the dataset "will encourage deeper research on this important topic and accelerate efforts towards creating more fair and accurate face recognition systems."

However, the dataset caused a fierce backlash after it became widely known through an article published on NBC News. IBM is now being sued in a class action lawsuit led by a photographer whose photos and biometrics were used without consent. He is seeking damages of $5,000 for each intentional violation of the Illinois Biometric Information Privacy Act, or $1,000 for each negligent violation, for everyone affected. The lawsuit aims to represent all Illinois citizens whose biometric data was used in the dataset.

14 of 17

The People of India (1868)

15 of 17

Discussion Questions

How might one go about collecting face data to train a facial recognition model in a more ethical and consentful way?

How might you use a facial recognition model?

16 of 17

Let’s try playing around with a facial recognition model. Note that this model detects faces rather than recognizing specific people: it locates specific points (landmarks) on a face, not the identity of the person.

In-class sketch: https://editor.p5js.org/AaratiAkkapeddi/sketches/iDfOD0hnq9
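
For reference, a minimal p5.js sketch along these lines, assuming the ml5.js (v0.x) facemesh model rather than the exact code in the linked sketch, could look like this:

    // Minimal p5.js + ml5.js (v0.x) sketch: draw facemesh keypoints over webcam video.
    // Assumes ml5.js is loaded in index.html; illustrative only, not the exact in-class code.
    let video;
    let facemesh;
    let predictions = [];

    function setup() {
      createCanvas(640, 480);
      video = createCapture(VIDEO);
      video.size(width, height);
      video.hide();
      // Load the facemesh model and listen for keypoint predictions
      facemesh = ml5.facemesh(video, () => console.log('model ready'));
      facemesh.on('predict', (results) => { predictions = results; });
    }

    function draw() {
      image(video, 0, 0, width, height);
      fill(0, 255, 0);
      noStroke();
      // Each prediction is one detected face; scaledMesh is an array of [x, y, z] keypoints
      for (const face of predictions) {
        for (const [x, y] of face.scaledMesh) {
          circle(x, y, 3);
        }
      }
    }

Notice that the output is only keypoint coordinates for each detected face; nothing in it says who the person is.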

17 of 17

Homework

https://machine-media.net/mini-project/disinformation-campaign.html

  1. Take a deepfake detection quiz
  2. Create a “Disinformation Campaign”