Ethics and Technology Reading Guide

Part of the Mijente #NoTechforICE International Student Day of Action 2019

notechforice.com/dayofaction

Developed by #NoTechforTyrants at the University of St Andrews in collaboration with #NoTechforTyrants at the University of Edinburgh

Introduction 

As students entering a diversifying workforce, we face questions about ethics and technology on a day-to-day basis. In developing this reading guide, we hope to give you the tools to start formulating your own answers to these questions. Our ability to create a better future together depends on our ability to navigate conversations about what matters most to us.

In what follows, you will find links to pieces that piqued our interest, along with some of our thoughts on their relevance. These links are broadly categorized into three sections. In Section I, you will find links to articles detailing recent cases where ethics and technology intersect. We hope you will use these cases as conversation starters, to determine what questions may be relevant in your own discussions. Section I also includes resources about the scope of applied ethics, from the vantage point of academic philosophy. Section II focuses on the role of philosophy in the technology industry. It touches on the phenomenon of ethics washing, wherein large corporations engaged in unethical practices co-opt the language of academic philosophy in order to justify their work. This leads us to Section III, where we introduce questions about labor and technology. Section III cites examples of workers and citizens expressing their concerns about unethical contracts.

We are releasing this reading guide on November 19, 2019, as part of the International Day of Student Action targeting Palantir’s work with United States Immigration and Customs Enforcement (ICE). Students on 16 campuses across the United States and the United Kingdom are working together with Mijente to raise awareness of the complicity of technology companies like Palantir in the inhumane detention and deportation work of ICE, and to pressure Palantir to drop its contract with ICE, which is up for renewal on November 27. We hope that you, like us, find some of these articles and resources useful in sparking further conversations.  

Sincerely,

#NoTechforTyrants at the University of St Andrews

Table of Contents

Section I- Ethics, AI, Technology, and Philosophy

Section II- Diary of An Ethicist in Silicon Valley

Section III- Labor, Complicity, and Action

Section I- Ethics, AI, Technology, and Philosophy

AI is everywhere, and people are increasingly aware of the potential risks involved. Large technology companies have also begun adopting ethical guidelines intended to regulate their own work. Here are some of the ways this is relevant in discussions of technology and ethics.

Why do these questions matter for us? AI is already being used to…

  1. Set the prices you pay
  2. Monitor your bank account
  3. Determine your credit score
  4. Set your insurance premiums
  5. Determine where police patrol
  6. Assist judges in making bail and sentencing decisions

If you’re interested in academic resources, check these out:

  1. Comprehensive review of the major ethics and policy issues facing AI:
    1. Artificial Intelligence Policy: A Primer and Roadmap
    2. Related: A useful view from the past (i.e., what were computer scientists and philosophers saying about these issues twenty years ago?): A Very Short History of Computer Ethics
  2. Everything Nick Bostrom has ever written:
    1. Nick’s website
    2. Policy Desiderata for Superintelligent AI: A Vector Field Approach
    3. Technological Revolutions: Ethics and Policy in the Dark
  3. Joy Buolamwini does critical work on algorithmic bias in AI, reminding us of emerging technology’s non-objectivity and its capacity to reinforce and reproduce existing injustice.
    1. Algorithmic Justice League
    2. Check out her research on racial and gender bias in the AI services of companies like IBM, Microsoft, and Amazon.
  4. The Future of Humanity Institute has some incredible academic resources on the intersection of ethics and technology, particularly AI.
    1. FHI’s website
    2. Reframing Superintelligence: Comprehensive AI Services as General Intelligence (see the section on risk)
  5. Brent Hecht does some wonderful writing on technology, dissent, protest, and political change.
    1. Brent’s website
    2. How Do People Change Their Technology Use in Protest?: Understanding “Protest Users”
    3. “Data Strikes”: Evaluating the Effectiveness of a New Form of Collective Action Against Technology Companies
    4. Out of Site: Empowering a New Approach to Online Boycotts
  6. Deeply important research from Ruha Benjamin on how emerging technologies can reinforce and deepen existing social injustice, particularly white supremacy:
    1. Race After Technology: Abolitionist Tools for the New Jim Code
  7. A killer piece from Kate Klonick on the state of free speech and expression on the web:
    1. The New Governors: The People, Rules, and Processes Governing Online Speech
  8. On the need to change the computer science peer review process:
    1. The ACM Code of Ethics (which, when this critique was first raised, had last been updated in 1992) merely asks that computer scientists *consider* the negative societal consequences of their work. Many computer scientists and technology-focused philosophers think that’s insufficient.
    2. Brent Hecht of Northwestern University, reflecting on the insufficiency of the ACM’s dictum, proposes in Nature that “the computer-science community should change its peer-review process to ensure that researchers disclose any possible negative societal consequences of their work in papers, or risk rejection.” For a more fleshed-out proposal, see the FCA’s post.
    3. (And here’s why simply telling computer scientists to “consider ethics” isn’t enough: Does ACM’s Code of Ethics Change Ethical Decision Making in Software Development?)

Section II- Diary of An Ethicist in Silicon Valley

Most technology companies are purportedly committed to doing good and being ethical. But we know that technology companies still do bad things. One way we can explain the disconnect between the creation of ethics policies and boards and the continued production of harm is by exploring the problems of “ethics theatre” and “ethics washing.” What is it like to be an AI Ethicist?

  1. “Ethics theatre”
    1. “Ethics theatre” is a term coined by Meredith Whittaker, the co-founder and co-director of the AI Now Institute, that describes the phenomenon whereby ethics is watered down into a public performance. Highly publicized but internally driven ethics education initiatives, for example, might take the place of meaningful oversight and hard, enforceable rules.
    2. How do we know when a new initiative is merely ethics theatre? Whittaker encourages us to ask questions like, “What do these boards actually do?”, “Are product decisions run by them?”, “Can they cancel a product decision?”, “Do they have veto power otherwise?”, “Is there any documentation of whether their advice was taken?”, “Who chooses who’s on the board?”
    3. You can explore this problem in more detail by reading through the AI Now 2018 Report, which notes that the “rush to adopt” ethical codes has not been met with the corresponding introduction of mechanisms that can “backstop these ... commitments.”
  2. “Ethics washing”
    1. “Ethics washing” refers to a similar phenomenon. The idea was developed by Benjamin Wagner, Assistant Professor and Director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business. Wagner explains that the technology community often views the adoption of “ethics” as a tool to avoid regulatory solutions; it’s viewed as the “easy” or “soft” option. That being said, Wagner believes there is a minimum set of criteria that technology companies can adopt to improve the chances that their ethics initiatives are more than just ethics washing (see: page 5).
    2. Quote from an interview with The Verge: “Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just ‘ethics washing,’ a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, ‘Look, we’re doing something.’ It deflects criticism, and because the boards lack any power, it means the companies don’t change. ‘Most of the ethics principles developed now lack any institutional framework,’ Wagner tells The Verge. ‘They’re non-binding. This makes it very easy for companies to look [at ethical issues] and go, “That’s important,” but continue with whatever it is they were doing beforehand.’”
  3. Examples of “ethics washing” and “ethics theatre”:
    1. The failure of Microsoft’s AI Principles and its commitment to prevent its facial recognition software from being used to do harm.
      1. Microsoft, in producing AI-driven facial recognition software, stated that it will “advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.” But…
      2. Microsoft has come under fire recently after reporting by Haaretz and NBC revealed that Microsoft was working with AnyVision, an Israel-based technology company. AnyVision’s facial recognition software is being used to illegally monitor Palestinians living in the occupied West Bank. After the reporting and significant public pressure, Microsoft hired former U.S. Attorney General Eric Holder to audit whether AnyVision’s practices are in line with Microsoft’s facial recognition principles.
      3. The question we should be asking is: how is it possible that Microsoft’s AI Principles didn’t prevent the partnership in the first place? If we ask Whittaker’s questions and apply Wagner’s framework, we might learn that Microsoft’s policies were structured in such a way that they never really had a chance of preventing harmful policies and partnerships.
    2. Google’s failed effort at forming an AI ethics board, which was designed to guide the “responsible development of AI” at the company. The initiative was scrapped less than two weeks after it was announced. Here’s why the initiative failed so quickly:
      1. The board was structured in such a way that it couldn’t possibly have acted as a meaningful check on the potential harm of Google’s work. The board planned on meeting only four times over the course of a year, wouldn’t have made its recommendations publicly or transparently, and wouldn’t have had any actual power to veto or change projects or partnerships.
      2. Google also included individuals on the board whose personal interests and beliefs seemed unaligned with the initiative’s purpose. The board included, for example, the CEO of a drone company, despite the fact that the board would need to deliberate on the ethics of producing military applications. The board also included the president of the Heritage Foundation, who has made transphobic and xenophobic comments. Given that she is unwilling, for example, to support efforts to extend civil rights protections to the trans community, it seemed unlikely that she would be willing to seriously evaluate how certain AI technologies could hurt trans individuals.
  4. If you’re looking for more encouraging reads about basic ethics for computer scientists, check out this handbook developed by the Beneficial AI Society at the University of Edinburgh.

Section III- Labor, Complicity, and Action

In this section, we’ll introduce some recent examples of tech workers whose questioning of the relationship between ethics and technology in their workplaces has led them to demand and create changes in their working environments. Through solidarity with these workers, we can help create a technology industry that reflects the values that are important to the people who fuel the work of these corporations. This section will introduce you to examples of recent movements to prevent companies from working with ICE, but we hope the relevance of these actions of dissent is clear beyond just the specific companies covered in the articles below.  Finally, if you’re in the process of looking for jobs in the tech industry, we recommend checking out AI Now’s How to Interview a Tech Company Guide.

Recent Dissent

Members of the technology industry have the power to refuse to build systems of oppression and destruction. Tech workers at many companies have taken a stand to demand that their employers stop being in the business of human rights violations. Members of the technology community also have the power to put pressure on companies and demand that they stop profiting from abuse. We’ll cover some high-profile cases, like the Grace Hopper Celebration dropping Palantir, as well as employee dissent movements at companies like Google and GitHub. We also recommend taking a look at this AI in 2019 Review from AI Now for a powerful visualization of some of the most important moments in ethics and technology this year.

Many organizations and coalitions in the technology community have dropped companies that choose to continue working with ICE. The Grace Hopper Celebration, the world’s largest conference for women in technology, dropped Palantir for these reasons. You can read about that decision on Business Insider and Vox. UC Berkeley’s Privacy Law Scholars Conference also dropped Palantir (read about it here on Bloomberg), as did Lesbians Who Tech (read here via The Verge). In the case of the Grace Hopper Celebration and Lesbians Who Tech specifically, we hope you take the opportunity to question the role of power and privilege in companies’ choices to keep working with ICE and ICE profiteers. Further, if you are a member of an organization focusing on minorities in the technology industry, we encourage you to question how your organization’s mission intersects with these issues.

Current employees of technology companies have the power to demand change, too. Ethics In Tech has covered some of the work of coalitions like Googlers for Human Rights (whose petition you can read here). Amazon workers have circulated an internal letter asking Amazon to drop Palantir, and there have been worker protests at Google, Microsoft, and GitHub. Many workers have gone on strike, or followed their conscience and resigned.

Success Stories

Pressure from internal and external campaigns has already motivated prominent companies to drop partnerships with immoral parties. Two prominent examples are CloudFlare and McKinsey.

In 2017, CloudFlare, a network provider, dropped the white nationalist website The Daily Stormer.

In this article from The Verge, CloudFlare CEO Matthew Prince noted that part of his reasoning behind dropping the Daily Stormer was the realization that the website’s operation depended on CloudFlare’s services. This is just one example of the impact that technology providers can have when they consider the ethical impact of their partnerships. Not only is this relevant to the business model of technology companies, but it is also crucial to understanding the role that technology can play in creating or limiting the power of morally reprehensible actors. Read more on Wired or on the CloudFlare blog.

McKinsey & Company, a global consulting firm, dropped ICE as a partner in 2018. This article from Fortune details the decision-making process. The article also speculates about McKinsey’s reasoning for dropping ICE as a partner: was the decision part of a strategy to divert negative feedback about other controversial business practices?

UPDATE: Since the publication of this reading guide, more detail has become available about McKinsey’s work with ICE and CBP. We encourage you to read more here.

When we think about the intersection of ethics and technology, it’s important to consider the relative limits of our complicity and divestment, and we hope this example helps spark such considerations.