Machine Media
Week 7
Week 7 - Class Overview
Themes & Timeline:
Week 1: Introductions, Chance & Protocol
Week 2: Chatbots and Generative Text
Week 3: Data Labor
Week 4: Classification, Taxonomies, Computer Vision
Week 5: Generative Adversarial Networks, Handmade Datasets
Week 6: GAN Review, Photo Tutorial
Week 7: Facial Recognition, Identity, Surveillance
Week 8: Deepfakes
Week 9: The Digital is Physical: Environmental Impact
Week 10: Handmade Dataset Mid-way Presentations
Week 11: Data Augmentation Workshop (Python)
Week 12: Writing Images: Text-to-Image Models
Week 13: Data Augmentation Workshop, Part 2
Week 14: Training Demo, In-class Work Day
Week 15: Final Presentations
Thanksgiving Break
Handmade Dataset Project
Week 7 - Agenda
Week 6 - Homework
Guest speaker:
Akina Younge - Movement director at the Center on Race and Digital Justice at UCLA
She works towards racial justice and design justice as a coalition builder, public policy shaper, and popular education designer.
Quick homework review and a check-in about the Final Project.
Facial Recognition, Identity, Surveillance
Intaglio over inkjet print: IBM’s DiF measurements over Bertillon's face
“There are no rules when it comes to what images police can submit to face recognition algorithms to generate investigative leads. As a consequence, agencies across the country can—and do—submit all manner of "probe photos," photos of unknown individuals submitted for search against a police or driver license database. These images may be low-quality surveillance camera stills, social media photos with filters, and scanned photo album pictures. Records from police departments show they may also include computer-generated facial features, or composite or artist sketches.”
Clare Garvie - Garbage In, Garbage Out
Sketches typically rely on:
“Neophrenology”
Galton’s composite photos of criminals
Xiaolin Wu & Xi Zhang’s Composite photos
https://theintercept.com/2016/11/18/troubling-study-says-artificial-intelligence-can-predict-who-will-be-criminals-based-on-facial-features/
IBM's Diversity in Faces (DiF) dataset consists of "annotations of one million publicly available face images." The dataset was created in 2019 to address existing biases in overwhelmingly light-skinned and male-dominated facial datasets. IBM believed that the dataset "will encourage deeper research on this important topic and accelerate efforts towards creating more fair and accurate face recognition systems."
However, the dataset caused a fierce backlash after it became widely known through an article published on NBC News. IBM is now being sued in a class action lawsuit led by a photographer whose photos and biometrics were used without consent. He is seeking damages of $5,000 for each intentional violation of the Illinois Biometric Information Privacy Act, or $1,000 for each negligent violation, for everyone affected. The lawsuit aims to represent all Illinois citizens whose biometric data was used in the dataset.
The People of India (1868)
Discussion Questions
How might one go about collecting face data to train a facial recognition model in a more ethical and consentful way?
How might you use a facial recognition model?
Let’s try playing around with a face detection model. Note that this model detects faces, not specific faces: it locates key points (landmarks) of a face rather than identifying individual people.
In-class sketch: https://editor.p5js.org/AaratiAkkapeddi/sketches/iDfOD0hnq9
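A minimal p5.js sketch in the spirit of the one linked above, assuming ml5.js (v1) is loaded alongside p5.js in the page. It uses ml5's faceMesh model, which returns a list of keypoints per detected face; note that the keypoints describe where facial features are, not whose face it is:

```javascript
let faceMesh;
let video;
let faces = []; // latest detection results

function preload() {
  // Load the face-landmark model (detects faces, not identities)
  faceMesh = ml5.faceMesh({ maxFaces: 1 });
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Continuously detect faces in the webcam feed
  faceMesh.detectStart(video, (results) => {
    faces = results;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  // Draw every keypoint of every detected face
  for (const face of faces) {
    for (const kp of face.keypoints) {
      fill(0, 255, 0);
      noStroke();
      circle(kp.x, kp.y, 3);
    }
  }
}
```

Because this runs in the browser with webcam access, the easiest way to experiment is to paste it into the p5.js web editor with the ml5.js script tag added to index.html.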
Homework
https://machine-media.net/mini-project/disinformation-campaign.html