We Should Have Seen That Coming.
Threat Modeling for Bad Actors and Misuse
in Machine Learning & Computer Vision
According to Microsoft Cybersecurity Field CTO Diana Kelley, this was an important lesson for all parties involved, as companies building out AI and machine learning need to make it more resilient to such abuse.
"Looking at AI and how we research and create it in a way that's going to be useful for the world, and implemented properly, it's important to understand the ethical capacity of the components of AI.
"This is not just a technology problem; this is actually a bigger problem. And it's going to need to have a diverse group of people working on the creation of the systems to make sure that they are going to be ethical," she said.
Some people may wonder if such findings should be made public lest they inspire the very application that we are warning against. We share this concern. However, as governments and companies seem to be already deploying face-based classifiers aimed at detecting intimate traits (Chin & Lin, 2017; Lubin, 2016), there is an urgent need for making policymakers, the general public, and gay communities aware of the risks that they might be facing already. Delaying or abandoning the publication of these findings could deprive individuals of the chance to take preventive measures and policymakers of the ability to introduce legislation to protect people. Moreover, this work does not offer any advantage to those who may be developing or deploying classification algorithms, apart from emphasizing the ethical implications of their work. We used widely available off-the-shelf tools, publicly available data, and methods well known to computer vision practitioners. We did not create a privacy-invading tool, but rather showed that basic and widely used methods pose serious privacy threats. We hope that our findings will inform the public and policymakers, and inspire them to design technologies and write policies that reduce the risks faced by homosexual communities across the world.
“The risks, the misuse, we never thought about that.”
- Zoom CEO
“Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.” – Isaac Asimov
Tech News
12-01-2021
FIRST EDITION

Boom in “face prep” college counseling
Computer vision rules college admissions.

If your face won’t get you into Harvard, you can always change it. CollegeVue, a spin-off of the HireVue AI hiring platform, has become so ubiquitous that college admissions counselors focused on “training your face” for video interviews are charging more than most SAT prep services. Some of the wealthiest applicants are even going so far as to get plastic surgery, while applicants with fewer resources are at a clear disadvantage.
LGBTQ Public School Teachers Targeted by Extremist Group with the Help of Facial Recognition Technology
HireVue Accused of Secretly Including Discrimination-Friendly Options for Employers
Who’s Knocking? Ring Doorbell’s Private Inferences Pit Neighbor Against Neighbor
Criminalization of Same-Sex Relationships Secretly Policed by Clearview AI
Can you see it coming?
Can you get ahead of it?