Ethics and Technology Reading Guide
Part of the Mijente #NoTechforICE International Student Day of Action 2019
Developed by #NoTechforTyrants at the University of St Andrews in collaboration with #NoTechforTyrants at the University of Edinburgh
As students entering a diversifying workforce, we face questions about ethics and technology on a day-to-day basis. In developing this reading guide, we hope to give you the tools to start formulating your own answers to these questions. Our ability to create a better future together depends on our ability to navigate conversations about what matters most to us.
In what follows, you will find links to pieces that piqued our interest, along with some of our thoughts on their relevance. These links are broadly organized into three sections. In Section I, you will find links to articles detailing recent cases at the intersection of ethics and technology. We hope you will use these cases as conversation starters, to determine what questions may be relevant in your own discussions. Section I also includes resources on the scope of applied ethics, from the vantage point of academic philosophy. Section II focuses on the role of philosophy in the technology industry. It touches on the phenomenon of ethics washing, wherein large corporations engaged in unethical practices co-opt the language of academic philosophy in order to justify their work. This leads us to Section III, where we introduce questions about labor and technology. Section III cites examples of workers and citizens expressing their concern about unethical contracts.
We are releasing this reading guide on November 19, 2019, as part of the International Day of Student Action targeting Palantir’s work with United States Immigration and Customs Enforcement (ICE). Students on 16 campuses across the United States and the United Kingdom are working together with Mijente to raise awareness of the complicity of technology companies like Palantir in the inhumane detention and deportation work of ICE, and to pressure Palantir to drop its contract with ICE, which is up for renewal on November 27. We hope that you, like us, find some of these articles and resources useful in sparking further conversations.
#NoTechforTyrants at the University of St Andrews
Table of Contents
Section I- Ethics, AI, Technology, and Philosophy
Section II- Diary of An Ethicist in Silicon Valley
Section III- Labor, Complicity, and Action
Section I- Ethics, AI, Technology, and Philosophy
AI is everywhere, and people seem increasingly aware of the potential risks involved. Large technology companies have also begun taking steps to regulate their work according to ethical guidelines. Here are some of the ways this is relevant to discussions of technology and ethics.
Why do these questions matter for us? AI is already being used to…
- Set the prices you pay
- Monitor your bank account
- Determine your credit score
- Set your insurance premiums
- Determine where police patrol
- Assist judges in making bail and sentencing decisions
If you’re interested in academic resources, check these out:
- Comprehensive review of the major ethics and policy issues facing AI:
- Artificial Intelligence Policy: A Primer and Roadmap
- Related: A useful view from the past (i.e., what were computer scientists and philosophers saying about these issues twenty years ago?): A Very Short History of Computer Ethics
- Everything Nick Bostrom has ever written:
- Nick’s website
- Policy Desiderata for Superintelligent AI: A Vector Field Approach
- Technological Revolutions: Ethics and Policy in the Dark
- Joy Buolamwini does critical work on algorithmic bias in AI, reminding us of emerging technology’s non-objectivity and capacity to reinforce and reproduce existing injustice.
- Algorithmic Justice League
- Check out her research on racial and gender bias in the AI services of companies like IBM, Microsoft, and Amazon.
- The Future of Humanity Institute has some incredible academic resources on the intersection of ethics and technology, particularly AI.
- FHI’s website
- Reframing Superintelligence: Comprehensive AI Services as General Intelligence: see section on risk.
- Brent Hecht does some wonderful writing on technology, dissent, protest, and political change.
- Brent’s website
- How Do People Change Their Technology Use in Protest?: Understanding “Protest Users”
- “Data Strikes”: Evaluating the Effectiveness of a New Form of Collective Action Against Technology Companies
- Out of Site: Empowering a New Approach to Online Boycotts
- Deeply important research from Ruha Benjamin on how emerging technologies can reinforce and deepen existing social injustice, particularly white supremacy
- Race After Technology: Abolitionist Tools for the New Jim Code
- A killer piece from Kate Klonick on the state of free speech and expression on the web:
- The New Governors: The People, Rules, and Processes Governing Online Speech
- On the need to change the computer science peer review process:
- The ACM code of ethics, which was last updated in 1992, merely asks that computer scientists *consider* the negative societal consequences of their work. Many computer scientists and technology-focused philosophers think that’s insufficient.
- Brent Hecht of Northwestern University, reflecting on the insufficiency of the ACM’s dictum, proposes in Nature that “the computer-science community should change its peer-review process to ensure that researchers disclose any possible negative societal consequences of their work in papers, or risk rejection.” For a more fleshed-out proposal, see the FCA’s post.
- (And here’s why generally telling computer scientists to “consider ethics” isn’t enough: Does ACM’s Code of Ethics Change Ethical Decision Making in Software Development?)
Section II- Diary of An Ethicist in Silicon Valley
Most technology companies are purportedly committed to doing good and being ethical. But we know that technology companies still do bad things. One way we can explain the disconnect between the creation of ethics policies and boards and the continued production of harm is by exploring the problems of “ethics theatre” and “ethics washing.” What is it like to be an AI Ethicist?
- “Ethics theatre”
- “Ethics theatre” is a term coined by Meredith Whittaker, the co-founder and co-director of the AI Now Institute, that describes the phenomenon whereby ethics is watered down into a public performance. Highly publicized but internally driven ethics education initiatives, for example, might take the place of meaningful oversight and hard, enforceable rules.
- How do we know when a new initiative is merely ethics theatre? Whittaker encourages us to ask questions like, “What do these boards actually do?”, “Are product decisions run by them?”, “Can they cancel a product decision?”, “Do they have veto power otherwise?”, “Is there any documentation of whether their advice was taken?”, “Who chooses who’s on the board?”
- You can explore this problem in more detail by reading through the AI Now 2018 Report, which notes that the “rush to adopt” ethical codes has not been met with the corresponding introduction of mechanisms that can “backstop these ... commitments.”
- “Ethics washing”
- “Ethics washing” refers to a similar phenomenon. The idea was developed by Benjamin Wagner, Assistant Professor and Director of the Privacy & Sustainable Computing Lab at Vienna University of Economics and Business. Wagner explains that the technology community often views the adoption of “ethics” as a tool to avoid regulatory solutions; it’s viewed as the “easy” or “soft” option. That being said, Wagner believes that there are minimum criteria that technology companies can adopt to improve the chances that their ethics initiatives are more than just ethics washing (see: page 5).
- Quote from an interview with The Verge: “Academic Ben Wagner says tech’s enthusiasm for ethics paraphernalia is just ‘ethics washing,’ a strategy to avoid government regulation. When researchers uncover new ways for technology to harm marginalized groups or infringe on civil liberties, tech companies can point to their boards and charters and say, ‘Look, we’re doing something.’ It deflects criticism, and because the boards lack any power, it means the companies don’t change. ‘Most of the ethics principles developed now lack any institutional framework,’ Wagner tells The Verge. ‘They’re non-binding. This makes it very easy for companies to look [at ethical issues] and go, “That’s important,” but continue with whatever it is they were doing beforehand.’”
- Examples of “ethics washing” and “ethics theatre”:
- The failure of Microsoft’s AI Principles and its commitment to prevent its facial recognition software from being used to do harm.
- Microsoft, in producing AI-driven facial recognition software, stated that they will “advocate for safeguards for people’s democratic freedoms in law enforcement surveillance scenarios and will not deploy facial recognition technology in scenarios that we believe will put these freedoms at risk.” But…
- Microsoft has come under fire recently after reporting by Haaretz and NBC revealed that Microsoft was working with AnyVision, an Israel-based technology company. AnyVision’s facial recognition software is being used to illegally monitor Palestinians living in the occupied West Bank. After the reporting and significant public pressure, Microsoft hired former U.S. Attorney General Eric Holder to audit whether AnyVision’s practices are in line with Microsoft’s ethical principles.
- The question we should be asking is: how is it possible that Microsoft’s AI Principles didn’t prevent the partnership in the first place? If we ask Whittaker’s questions and look at Wagner’s framework, we might learn that Microsoft’s policies were structured in such a way that they never really had a chance of preventing harmful policies and partnerships.
- Google’s failed effort at forming an AI ethics board, which was designed to guide the “responsible development of AI” at the company. The initiative was scrapped less than two weeks after it was announced. Here’s why the initiative failed so quickly:
- The board was structured in such a way that it couldn’t possibly have acted as a meaningful check on the potential harm of Google’s work. The board planned on meeting only four times over the course of a year, wouldn’t have made its recommendations publicly or transparently, and wouldn’t have had any actual power to veto or change projects or partnerships.
- Google also included individuals on the board whose personal interests and beliefs seemed unaligned with the initiative’s purpose. The board included, for example, the CEO of a drone company, despite the fact that the board would need to deliberate on the ethics of producing military applications. The board also included the president of the Heritage Foundation, who has made transphobic and xenophobic comments. Given that she is unwilling, for example, to support efforts to extend civil rights protections to the trans community, it seemed unlikely that she would be willing to seriously evaluate how certain AI technologies could hurt trans individuals.
- If you’re looking for more encouraging reads about basic ethics for computer scientists, check out this handbook developed by the Beneficial AI Society at the University of Edinburgh.
Section III- Labor, Complicity, and Action
In this section, we’ll introduce some recent examples of tech workers whose questioning of the relationship between ethics and technology in their workplaces has led them to demand and create changes in their working environments. Through solidarity with these workers, we can help create a technology industry that reflects the values that are important to the people who fuel the work of these corporations. This section will introduce you to examples of recent movements to prevent companies from working with ICE, but we hope the relevance of these actions of dissent is clear beyond just the specific companies covered in the articles below. Finally, if you’re in the process of looking for jobs in the tech industry, we recommend checking out AI Now’s How to Interview a Tech Company Guide.
Members of the technology industry have the power to refuse to build systems of oppression and destruction. Tech workers at many companies have taken a stand to demand that their employers stop being in the business of human rights violations. Members of the technology community also have the power to put pressure on companies and demand that they stop profiting from abuse. We’ll cover some high-profile cases like the Grace Hopper Celebration’s dropping Palantir, as well as employee dissent movements at companies like Google and Github. We also recommend taking a look at this AI In 2019 Review from AI Now for a powerful visualization of some of the most important moments in ethics and technology this year.
Many organizations and coalitions in the technology community have dropped companies that choose to continue working with ICE. The Grace Hopper Celebration, the world’s largest conference for women and technology, dropped Palantir for these reasons. You can read about that decision on Business Insider and Vox. UC Berkeley’s Privacy Law Scholars Conference also dropped Palantir (read about it here on Bloomberg), as did Lesbians Who Tech (read here via The Verge). In the case of the Grace Hopper Celebration and Lesbians Who Tech specifically, we hope you take the opportunity to question the role of power and privilege in technology companies’ choices to keep working with ICE and ICE profiteers. Further, if you are a member of an organization focusing on minorities in the technology industry, we encourage you to question how your organization’s mission intersects with these issues.
Current employees of technology companies have power to demand change too. Ethics In Tech has covered some of the work of coalitions like Googlers for Human Rights (whose petition you can read here). Amazon workers have circulated an internal letter asking Amazon to drop Palantir, and there have been worker protests at Google, Microsoft, and GitHub. Many workers have gone on strike, or followed their conscience and resigned.
Pressure from internal and external campaigns has already motivated prominent companies to drop partnerships with immoral parties. Two prominent examples are CloudFlare and McKinsey.
In 2017, CloudFlare, a network provider, dropped white nationalist website The Daily Stormer.
In this article from The Verge, CloudFlare CEO Matthew Prince noted that part of his reasoning behind dropping the Daily Stormer was realizing that the website’s operation depended on the services of CloudFlare. This is just one example of the impact that technology providers can have when they consider the ethical impact of their partnerships. Not only is this relevant for the business model of technology companies, but it is also crucial in understanding the role that technology can play in creating or limiting the power of morally reprehensible actors. Read more on Wired or on the CloudFlare blog.
McKinsey & Company, a global consulting firm, dropped ICE as a client in 2018. This article from Fortune details the decision-making process. The article also speculates about McKinsey’s reasoning for dropping ICE: was the decision part of a strategy to deflect negative attention from other controversial business practices?
UPDATE: Since the publication of this reading guide, more detail has become available about McKinsey’s work with ICE and CBP. We encourage you to read more here.
When we think about the intersection of ethics and technology, it’s important to consider the relative limits of our complicity and divestment, and we hope this example helps spark such considerations.