We Should Have Seen That Coming.

Threat Modeling for Bad Actors and Misuse

in Machine Learning & Computer Vision


According to Microsoft Cybersecurity Field CTO Diana Kelley, this was an important lesson for everyone involved: as companies build out AI and machine learning, they need to make those systems more resilient to such abuse.

"Looking at AI and how we research and create it in a way that's going to be useful for the world, and implemented properly, it's important to understand the ethical capacity of the components of AI.

"This is not just a technology problem; this is actually a bigger problem. And it's going to need to have a diverse group of people working on the creation of the systems to make sure that they are going to be ethical," she said.


Some people may wonder if such findings should be made public lest they inspire the very application that we are warning against. We share this concern. However, as governments and companies seem to be already deploying face-based classifiers aimed at detecting intimate traits (Chin & Lin, 2017; Lubin, 2016), there is an urgent need for making policymakers, the general public, and gay communities aware of the risks that they might be facing already. Delaying or abandoning the publication of these findings could deprive individuals of the chance to take preventive measures and policymakers of the ability to introduce legislation to protect people. Moreover, this work does not offer any advantage to those who may be developing or deploying classification algorithms, apart from emphasizing the ethical implications of their work. We used widely available off-the-shelf tools, publicly available data, and methods well known to computer vision practitioners. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats. We hope that our findings will inform the public and policymakers, and inspire them to design technologies and write policies that reduce the risks faced by homosexual communities across the world.
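
To make the last point concrete, here is a minimal, hypothetical sketch of the kind of off-the-shelf pipeline described above: a plain linear classifier fit on embeddings from a pretrained face model. Everything here is an assumption for illustration (synthetic random vectors stand in for real embeddings; the 512-dimensional size and the generic binary label are invented), and nothing about it reproduces the study. The point is only that the method amounts to a few lines of standard, widely available tooling.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Stand-ins for embeddings an off-the-shelf pretrained face model
    # would produce; random synthetic data, purely illustrative.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 512))   # 1,000 "faces", 512-d embeddings (assumed size)
    y = rng.integers(0, 2, size=1000)  # a generic, made-up binary attribute

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0
    )

    # The entire "classifier" is one widely used linear model.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")  # ~0.5 on random data

On real embeddings, these same few lines are the whole method, which is precisely the concern: the threat comes from ubiquity, not sophistication.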


“The risks, the misuse, we never thought about that.”

– Eric Yuan, Zoom CEO


“Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” – Isaac Asimov


Tech News · First Edition · 12-01-2021

Boom in “face prep” college counseling

Computer vision rules college admissions.

If your face won’t get you into Harvard, you can always change it. CollegeVue, a spin-off of the HireVue AI hiring platform, has become so ubiquitous that college admissions counselors focused on “training your face” for video interviews are charging more than most SAT prep services. Some of the wealthiest applicants are even going so far as to get plastic surgery, while applicants with fewer resources are at a clear disadvantage.


LGBTQ Public School Teachers Targeted by Extremist Group with the Help of Facial Recognition Technology

HireVue Accused of Secretly Including Discrimination-Friendly Options for Employers

Who’s Knocking? Ring Doorbell’s Private Inferences Pit Neighbor Against Neighbor

Criminalization of Same-Sex Relationships Secretly Policed by ClearView AI


Can you see it coming?

  1. Choose a computer vision application or use case.
    • This can be a future technology that doesn’t exist yet!
  2. Ask yourselves: What might go wrong, based on a bad actor or other misuse?
    • This could be something that happens tomorrow… or in 50 years.
    • The negative consequences might be intentional or unintentional.
  3. Consider what the ripples of this problem might be. How will it affect individuals, communities, society?
  4. Write a news headline based on what you’ve come up with.


Can you get ahead of it?

  • Discuss the example of something that can go wrong from the group you’ve been paired with.
  • Ask yourselves: What could have prevented this? (Or if that isn’t feasible, what could be done now?)
    • You might consider technical methods and/or design thinking, or even bigger solutions (e.g., regulation or social structures).
    • If it feels like a solution isn’t possible, why? What bigger problems does this point to?
    • Feel free to suggest multiple possible strategies or solutions!
  • Prepare to share!