Some AI Governance Research Ideas

June 3 2021

Compiled by Markus Anderljung & Alexis Carlier

Below are some research ideas from folks at the Centre for the Governance of AI (GovAI), collated by Markus Anderljung and Alexis Carlier. If you are interested in pursuing any of the ideas, feel free to reach out to contact@governance.ai. We may be able to help you find mentorship, advice, or collaborators. You can also reach out if you’re intending to work on the project independently, so that we can help avoid duplication of effort.

See this EA Forum post for additional context.

The ideas:

The Impact of US Nuclear Strategists in the early Cold War

Transformative AI and the Challenge of Inequality

Human-Machine Failing

Will there be a California Effect for AI?

Nuclear Safety in China

History of existential risk concerns around nanotechnology

Broader impact statements: Learning lessons from their introduction and evolution

Structuring access to AI capabilities: lessons from synthetic biology

Bubbles, Winters, and AI

Lessons from Self-Governance Mechanisms in AI

How do government intervention and corporate self-governance relate?

Summary and analysis of “common memes” about AI, in different communities

A Review of Strategic-Trade Theory

Mind reading technology

Compute Governance ideas

Compute Funds

Compute Providers as a Node of AI Governance

China’s access to cutting edge chips

Compute Provider Actor Analysis


The Impact of US Nuclear Strategists in the early Cold War

Written by Waqar Zaidi

This project explores the impact of US nuclear strategists on nuclear strategy in the early Cold War. What types of experts provided advice on US nuclear strategy? How and in what ways did they affect state policymaking on nuclear weapons from 1945 through to the end of the 1950s (and possibly beyond)? How could they have had a larger impact? This project provides a detailed case study to help us understand how and through what pathways technical experts have been able to shape state policymaking in relation to this critical technology. Knowing whether nuclear strategists had any impact should update us on the extent to which we might be able to have an impact on AI governance (or on the development of other crucial technologies, for that matter).

Sources

This project could be based almost exclusively on published sources (though archival research visits to the US could be helpful). There is a large historical and biographical literature that can be drawn upon. It is somewhat disparate and scattered, and would require significant work to connect together. This process of synthesis is likely to be fruitful, providing new insights and perspectives.

The secondary literature may be thought of as consisting of three parts. First there is a longstanding literature on the historical development of nuclear strategy itself. Historically this body of work has not explored the impact of strategists in depth, though that may now be changing.[1] Second, there is a growing literature which is now exploring the impact and work of nuclear strategists, either as individuals (for example Thomas Schelling) or through organizations such as RAND.[2] Third, there are now a number of personal accounts of nuclear strategic development that provide insight into how nuclear strategy was made.[3] 

Individuals of interest could include: Bernard Brodie, Herbert Goldhammer, Herman Kahn, William Kaufmann, Nathan Leites, Andrew Marshall, Henry S. Rowen, Thomas C. Schelling, Donald Brennan, Walter Millis, and Albert Wohlstetter. Thinking on nuclear strategy was carried out in a number of government and non-government organizations, including independent think tanks and university research centers. The key organization is RAND, though the project could also look into the Hudson Institute and the Foreign Policy Research Institute, amongst many others.

Transformative AI and the Challenge of Inequality

Written by Anton Korinek

From an economic perspective, among the greatest challenges that transformative AI may pose are increases in inequality: if advanced AI systems can perform work far more efficiently than humans, workers may no longer be able to earn a living wage, whereas entrepreneurs and the owners of corporations may see their incomes rise significantly.

Inequality has been rising and has been a significant challenge for policymakers for decades. But increases in inequality are not an unavoidable by-product of technological progress. Instead, as long as humans are in control, whether progress leads to greater inequality or greater shared prosperity is our collective choice. (To provide a useful analogy, we have seen in recent years that a concerted effort can successfully re-orient our economy in a "green" direction. It is similarly possible to reorient our economy in a direction that leads to shared prosperity.) Ensuring that transformative AI leads to broadly shared increases in living standards is the most important economic dimension of the AI alignment problem.

This line of reasoning gives rise to three sets of research questions:

1) How do different types of advances in AI affect inequality? The first step for ethical AI developers who want to internalize their contributions to greater income inequality (or equality) is to measure the effects of what they are doing.

Research Question: Focusing on a specific AI application (e.g. AVs, a medical AI system, warehouse robots, customer service chatbots, etc.), what are the equilibrium effects on workers? Does a given AI application increase or reduce inequality? Does it improve or worsen worker conditions?

A model for determining these effects is outlined in Klinova and Korinek (2021), "AI and Shared Prosperity," Proceedings of the AIES '21. Useful resources are also provided by the Partnership on AI's Shared Prosperity Initiative.

2) Building on this analysis, how can ethically responsible AI developers ensure that their inventions contribute to shared prosperity if their inventions reduce demand for workers?

Research Question: Focusing on a specific AI application that threatens to significantly increase inequality, how could this application be reoriented? Are there ways of actively involving humans in some of the processes? Or ways of ensuring that the gains are distributed more equally (e.g. in the spirit of the windfall clause)? If not, should it be abandoned?

3) If AI developers ignore the discussed effects on inequality, what can policymakers do to address concerns about inequality and the challenges brought about by widespread displacement of labor in the future?

Research Question: Focusing on a specific country, what are the existing safety nets for workers? How much would workers lose if there is widespread job displacement and technological unemployment? How can safety nets be reformed so that labor displacement in the future does not automatically lead to economic misery? Also, can we put these reforms on auto-pilot, such that benefits automatically increase when, for example, the economy grows or the labor share declines?
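To make the "auto-pilot" idea concrete, here is a minimal illustrative sketch in Python (my own construction, not drawn from the literature above; the indexation rule, parameter names, and numbers are all hypothetical):

```python
def indexed_benefit(base_benefit: float,
                    gdp_per_capita: float,
                    gdp_per_capita_base: float,
                    labor_share: float,
                    labor_share_base: float = 0.60) -> float:
    """Toy indexation rule: benefits scale with GDP per capita and rise
    further when the labor share falls below its baseline level.
    All parameters and functional forms are hypothetical illustrations."""
    growth_factor = gdp_per_capita / gdp_per_capita_base
    # If labor's share of income falls, top up benefits proportionally.
    labor_share_topup = max(0.0, labor_share_base - labor_share) / labor_share_base
    return base_benefit * growth_factor * (1.0 + labor_share_topup)

# Example: economy 30% richer per capita, labor share down from 60% to 50%.
print(indexed_benefit(base_benefit=10_000, gdp_per_capita=65_000,
                      gdp_per_capita_base=50_000, labor_share=0.50))
```

The point is only that benefit levels could respond automatically to macroeconomic indicators, rather than requiring fresh legislation each time labor displacement accelerates.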

Human-Machine Failing

Written by Jeffrey Ding

How will AI affect the risk of military accidents? Can past cases of software failures in military systems shed light on this issue? This question's stakes are high. In the past, accidents in technological systems generated many "near-miss" nuclear crises during the Cold War (Sagan 1993). In the present, a naval accident is one of the most likely triggers for U.S.-China conflagration. In the future, analysis of AI-linked accidents could provide another lens into thinking about the risks associated with artificial general intelligence (AGI).

When we think of AI-related accidents, we often gravitate toward depictions of technical malfunctions (e.g. an autonomous vehicle crashes because it encounters an edge scenario that it hasn’t been trained on). This is akin to a software failure, typically defined as the inability of code “to perform its required function within specified performance requirements” (Foreman et al. 2015, 102). You could even argue that “reward hacking” is a sophisticated version of a software failure. The agent is not performing the intended function. However, this narrow definition of software failure overlooks many cases in which “software complied with its requirements yet directly contributed to or led to an accident” (Foreman et al. 2015, 102). Nothing was hacked. The code worked as intended, but an accident still occurred.

Take, as an example, the 1988 Vincennes incident, in which a U.S. naval ship accidentally shot down an Iran Air civilian airliner, killing all 290 on board, including 66 children. The official investigation revealed that the ship's Aegis system – a highly sophisticated command, control, communications, and intelligence center – performed flawlessly. Rather, the design of the user interface played a critical role: key information and indicators were displayed on smaller screens that had to be called up, and this weakness in the machine-to-commander link proved critical during a high-stress crisis. In fact, there is substantial evidence that human-machine interaction effects are the primary vehicle by which automation software increases the risk of accidents (Mackenzie 1994).

Possible approaches to studying this question: Review how human-machine interactions affected past military accidents like the Vincennes incident, the Patriot fratricides, etc. One possible data source is the FORUM on Risks to the Public in Computers and Related Systems, a community of computer safety researchers. See, for instance, robust discussion on the Vincennes here.

Researchers could also explore how organizations learn from past incidents to reduce the risks of faulty human-machine interactions. For instance, some have pointed out that there has not been another similar incident with Aegis systems in the thirty years since Vincennes (Scharre 2018). How have risky human-machine interactions been dealt with in that time?

Will there be a California Effect for AI?

Written by Markus Anderljung (Jessica Cussins and Jared Brown likely have much more useful information than I do)

Some have argued (e.g. here and here) that there may be a California Effect in AI; that is, that more stringent Californian AI policies are likely to proliferate to other jurisdictions.

Two points provide evidence in favour of the view:

First, some regulatory domains, such as environmental standards for cars, have seen a California Effect (see e.g. Ch 8 in Vogel’s 1995 Trading Up), where California’s more stringent regulation has proliferated across the US. Often, this happens where companies find it cheaper to simply produce one California-compliant product and sell that outside California as well (e.g. due to the cost of maintaining two separate production lines). There is a case to be made that similar dynamics will be at play in AI. For an excellent description of the dynamics of the California Effect, I’d recommend reading The Brussels Effect by Anu Bradford, which explores the same phenomenon with regards to EU regulation.

Second, California has been the first major jurisdiction to put in place a number of AI-relevant policies in the US, such as the 2018 California Consumer Privacy Act (CCPA), which brings a lot of GDPR-inspired rights to consumers, the 2018 Bot Disclosure Act, and prohibitions on the use of facial recognition via e.g. the Body Camera Accountability Act. California also endorsed the Asilomar AI Principles.

Question to explore:

  • To what extent do we already see the inklings of a California Effect of AI? Has Californian legislation had an effect on the US federal legislative process? On other states? On other countries?
  • What AI policy issues could see a California Effect? This will depend on, among other things:
  • What AI policy is California able and likely to put in place? Will the regulation be among the first put in place? Will it be more stringent than that of other jurisdictions?
  • Will Californian AI policy have regulatory targets that will not move out of the state? E.g. if California puts in place policies targeting cloud computing facilities within its borders, that may incentivise them to move elsewhere.
  • Does California have the regulatory capacity to ensure compliance with its policies?
  • Will the AI policies put in place by California be such that companies develop and sell two different sets of products or services, one compliant with California law and one not, undermining a California Effect?
  • Does this suggest that longtermists should engage more with California’s regulation?
  • Crucially, is there potential for a California Effect in the kinds of regulation where there is a strong case for shaping high-stakes AI impacts?
  • How tractable is it to affect the legislation in positive directions? For example, to what extent will the regulation get things right without significant intervention?
  • Could other jurisdictions play a similar role? Charlotte Siegman, myself and others are currently exploring the strength of a potential Brussels Effect in the AI policy domain.

Nuclear Safety in China

Written by Jeffrey Ding

China is investing in AI-enabled decision support systems for detecting nuclear attacks. How do Chinese policymakers and analysts view the stability of their nuclear deterrent? How do these views feed into their decisions around investments in new nuclear capabilities?

According to one scholar, there is a big difference over which risks Chinese and American analysts focus on: "In the United States, military analysts are often preoccupied with the concern that alarms or early warning systems, accidentally or even intentionally triggered, could produce false positives. Chinese analysts, in contrast, are much more concerned with false negatives" (Saalman 2018). In contrast, others argue that Chinese forces "prioritize negative control over positive control of nuclear weapons to implement the strict control of the CMC and Politburo over the alerting and use of nuclear weapons" (Cunningham 2019). Here, negative control refers to control against accidental or illegitimate use of nuclear weapons; positive control means control over always being able to execute a legitimate nuclear response.

Around the world, little is known about the stability of China’s nuclear deterrent. Consider this passage from Schlosser's excellent book Command and Control (p. 475):

In January 2013, a report by the Defense Science Board warned that the (nuclear command and control) system's vulnerability to a large-scale cyber attack had never been fully assessed. Testifying before Congress, the head of the U.S. Strategic Command, General C. Robert Kehler, expressed confidence that no “significant vulnerability” existed. Nevertheless, he said that an “end-to-end comprehensive review” still needed to be done, that “we don’t know what we don’t know,” and that the age of the command-and-control system might inadvertently offer some protection against the latest hacking techniques. Asked whether Russia and China had the ability to prevent a cyber attack from launching one of their nuclear missiles, Kehler replied, "Senator, I don’t know.”

Possible approaches to studying this question: Studying this would require finding and reading Chinese-language sources on this topic. It would also involve comparative analysis of the safety cultures of the U.S. and Chinese nuclear communities. This is a tough but hugely important research area. As a program officer for Stanley Center for Peace and Security told me, this "is a really tough research area. There aren’t many NC3 [Nuclear Command and Control and Communications] experts anymore, China is a hard research topic for NC3, and cross domain issues makes it even more difficult."

History of existential risk concerns around nanotechnology

Written by Ben Garfinkel

I would be interested in an investigation into the history of existential risk concerns around nanotechnology and the lessons it might hold for the modern AI risk community.

Background: My impression is that it was not uncommon for futurists in the 1980s and 1990s to believe that transformative nanotech might be imminent and might lead to the extinction of humanity if managed poorly. These concerns also seem to have spread into popular culture, to some extent, and to have been at least a peripheral presence in policy discussions (if only as something that many scientists felt the need to actively distance themselves from). My impression is that there is also significant continuity between the present-day AI-focused long-termist community and the futurist community that was previously highly concerned about nanotechnology. For example, my understanding is that some early work on aligned superintelligence (e.g. by the Singularity Institute) was partly motivated by concern about nanotech risk: some feared that transformative nanotech might arrive soon and largely without warning, might result in extinction by default, and might only be safely manageable if aligned superintelligence is developed first.

Questions I’m interested in:

  • How did the community of people worried about nanotech go about communicating this risk, trying to address it, and so on? Are there any obvious mistakes that the AI risk community ought to learn from?
  • How common was it for people in the futurist community to believe extinction from nanotech was a major near-term risk? If it was common, what led them to believe this? Was the belief reasonable given the available evidence? If not, is it possible that the modern futurist community has made some similar mistakes when thinking about AI?
  • Are there any strategic insights that were formed about nanotech risk (e.g. by the Foresight Institute) that are applicable to AI, but mostly forgotten or ignored today?

I think one could make significant progress on these questions just by talking to people who were engaged in (or at least aware of) debates around transformative nanotech in the 1980s or 1990s, including Eric Drexler, Christine Peterson, Eliezer Yudkowsky, and Robin Hanson. It would also be useful to read available histories of nanotechnology and to read essays, news coverage, popular fiction, and mailing list discussions from this period.

Broader impact statements: Learning lessons from their introduction and evolution

Written by Toby Shevlane

For the 2020 conference, the NeurIPS committee introduced a requirement that authors include in their papers a section reflecting upon the broader impact of their work. The idea was to push researchers to consider potential negative societal impacts of AI research (see e.g. Prunkl et al 2021, Ashurst et al 2020, Hecht 2020, Abuhamad & Rheault 2020). For 2021, this requirement is being changed, such that authors instead need to answer a checklist when submitting a paper, with the checklist asking whether the paper discusses potential negative impacts (and the authors are free to say no).

These developments could be used as a case study to learn about the pressures that shape institutional change within the AI research community. The project would seek to answer:

  • What events and decisions led to the creation of the broader impacts initiative? What was the motivation for designing the requirement as it was?
  • Has the initiative been watered down for the 2021 conference, and if so why?
  • Did criticism of the initiative by AI researchers threaten its continued existence?
  • Was the initiative successful in pushing AI researchers to consider the possible negative impacts of their work?
  • Are conference organisers well positioned to bring about institutional changes within the AI research community?

The project would involve interviewing NeurIPS organisers, both from the 2020 and 2021 committees.

Structuring access to AI capabilities: lessons from synthetic biology

Written by Toby Shevlane

I am currently writing a book chapter on what I’m provisionally referring to as “structured capability access” (SCA) within AI research. In contrast to open source software, SCA refers to AI developers setting up controlled interactions between the user and the underlying software, with the most obvious example being the way that OpenAI hosts GPT-3 on its API service. SCA must address both safety and security: users must use the system in a safe way, and they must be prevented from modifying or reverse engineering the system without authorisation. The book chapter focuses on SCA for AI models, but the lens of SCA also applies to access to the cloud computing used to train models.
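To illustrate the basic shape of SCA, here is a minimal sketch (my own, under stated assumptions; it is not a description of OpenAI's actual API, and the policy categories, function names, and `model.generate` interface are hypothetical):

```python
import logging

BLOCKED_USES = {"disinformation", "harassment", "malware"}  # hypothetical policy categories

def classify_request(prompt: str) -> str:
    """Placeholder for a policy classifier; a real deployment would use a
    trained moderation model or human review rather than keyword matching."""
    for category in BLOCKED_USES:
        if category in prompt.lower():
            return category
    return "allowed"

def structured_access(prompt: str, user_id: str, model) -> str:
    """Serve model outputs without ever exposing weights or training code.
    Safety: screen the request against a usage policy before running it.
    Security: the caller only receives text, so the underlying system cannot
    be directly modified or trivially reverse engineered.
    `model` stands in for any object exposing a generate(prompt) method."""
    verdict = classify_request(prompt)
    logging.info("user=%s verdict=%s", user_id, verdict)  # retain an audit trail
    if verdict != "allowed":
        return f"Request refused under usage policy ({verdict})."
    return model.generate(prompt)
```

The contrast with open-sourcing is that users interact only through this narrow interface: they receive outputs but never the weights, which is what makes monitoring and use restrictions possible in the first place.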

Methods for structuring access to certain capabilities are not unique to AI. One interesting example is the printing (synthesis) of DNA sequences, carried out by certain biology labs. There are procedures by which these labs screen requests, and there are also proposals for the printing hardware to be sold with certain “locks” on what can be printed (Esvelt, 2018).

This research project would explore in detail the systems that exist within synthetic biology, in order to learn lessons for how SCA could be further developed within AI. Important sub-questions would be:

  • If the DNA screening tools aren’t widely implemented, why? What have been the hurdles?
  • Similarly, if the DNA screening tools aren’t very effective, why? How does the difficulty of screening DNA synthesis orders compare to the difficulty of screening queries of an AI model, or code for training a model sent to a cloud compute provider?
  • What solutions have been proposed to improve the safety and security of DNA synthesis? Are these applicable to AI?

It would be beneficial if the researcher had some existing familiarity with biology, but this might not be necessary.

Bubbles, Winters, and AI

Written by Markus Anderljung

To what extent does high tech development, including AI, have similar dynamics to financial bubbles? Shiller (in e.g. Irrational Exuberance) says that bubbles in financial markets are driven by investors' beliefs about other investors’ beliefs and poor feedback loops with reality. Investor behaviour is thus often driven by narratives, which can be undermined suddenly if/when a strong counter-narrative takes hold. It seems plausible that the same dynamics exist in the high tech space. They might even be stronger. Factors in favour include high information asymmetries, a lot of actors with incentives to contribute to hype-narratives, and poor feedback loops with reality (research needs a lot of time to turn into profits). On the other hand, moving capital, including human capital, in the high tech space is much harder than in finance (changing career track takes several years).

Specific questions I’m interested in:

  • Are there bubbles in high tech development? Do they share dynamics with bubbles in financial markets?
  • If yes, how do boom and bust cycles affect the rate of progress of a field? You could imagine that they don’t have a large effect if the actors most susceptible to bubbles are less important to the rate of development of the field (talented researchers going into the area is more important than e.g. the amount of VC capital in the area). Are the causes of bubbles and their popping different in high tech development vs. in financial markets?
  • If yes, would it be net beneficial for bubbles to grow larger and pop? To fizzle out? To never appear in the first place?
  • The questions could be explored by looking at:
  • Relevant case studies, e.g. the Dot-com Bubble (in the economic literature sometimes referred to as the “Technology Bubble”), the Railway Frenzy, the Bike Boom, the Uranium Bubble. You could also look at cases with long feedback loops, such as perhaps the Great Alpaca Bubble and Tulip Mania.
  • Quantitatively estimating whether there are boom and bust cycles in technology investment by companies, foundations, and governments (a toy sketch of this kind of analysis appears after this list). It should be possible to get some handle on the question by looking at financial statements and industry reports. A major difficulty will be establishing whether investments were caused by bubble dynamics, and distinguishing the actual research being done from the label applied to it.
  • Investigating the factors that might lead to bubble-like behaviour. For example, to what extent are grants or investment decisions based on believing that others believe the technology to be promising? Do we have other ways of measuring how much “dumb money” there is in AI R&D?
  • Studying the history of AI, in particular attempting to see how much and whether progress was slowed down by AI winters. You could make progress on this by looking at progress measures (e.g. measured by date of seminal papers, progress on benchmarks) and input measures (e.g. amount of funding, number of PhDs). For what input metrics is there a boom bust dynamic? Are the booms and busts visible in the progress measures?
  • Another approach might be to directly ask questions about how we’d know whether AI development exhibits bubble dynamics and if so, how such dynamics affect progress and how we could know if we’re in a bubble.
  • What relevant work is out there on these questions? The Hype Cycle framework developed by the tech advisory company Gartner seems relevant, though not very rigorous. Surely, there’s more relevant work out there. Could the literature on Industrial Life Cycles be helpful? Carlota Perez’ work (e.g. her book Technological Revolutions and Financial Capital) about the relationship between financial bubbles and technological innovation seems relevant.
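As flagged in the bullet on quantitative estimation above, here is a toy sketch of the kind of boom-and-bust flagging one could run on an annual investment series (the thresholds, figures, and rule are all hypothetical illustrations rather than an established methodology):

```python
def flag_boom_bust(investment_by_year: dict[int, float],
                   boom_growth: float = 0.30,
                   bust_drop: float = -0.20) -> dict[int, str]:
    """Label each year a 'boom' if year-over-year growth exceeds boom_growth,
    a 'bust' if it falls below bust_drop, and 'normal' otherwise.
    Thresholds are arbitrary placeholders for illustration."""
    years = sorted(investment_by_year)
    labels = {}
    for prev, curr in zip(years, years[1:]):
        growth = investment_by_year[curr] / investment_by_year[prev] - 1.0
        if growth >= boom_growth:
            labels[curr] = "boom"
        elif growth <= bust_drop:
            labels[curr] = "bust"
        else:
            labels[curr] = "normal"
    return labels

# Hypothetical AI R&D investment figures (in $bn), purely for illustration.
print(flag_boom_bust({2015: 10, 2016: 15, 2017: 22, 2018: 21, 2019: 14}))
```

A real analysis would, of course, still face the harder problem noted above: separating genuine changes in research activity from relabelling and hype-driven reporting.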

Lessons from Self-Governance Mechanisms in AI

Written by Markus Anderljung & Alex Lintz

What can we learn from the self-governance mechanisms put in place by the AI community and AI companies in the last decade? Notable examples to study include: the Asilomar Conference on Beneficial AI; the Facebook Oversight Board (see Klonick 2020); AI ethics boards (interesting examples include DeepMind's ethics board, Google’s defunct AI advisory board, and Microsoft’s AETHER committee); AI ethics principles put out by a huge number of entities; the Partnership on AI; shifts in publication norms (see e.g. Prunkl et al 2021, Partnership on AI 2021, and Shevlane & Dafoe 2020); companies supporting the AI ethics field (e.g. by sponsoring conferences and research, and setting up internal teams); and OpenAI’s Charter and move to a capped-profit model. Some relevant work on this topic from a broadly longtermist perspective includes forthcoming work by Cihon, Schuett, and Baum, Peter Cihon’s work on AI standards, Caroline Meinhardt’s forthcoming work on corporate AI ethics in China, Jia Yuan Loke’s EA Forum post, Will Hunt’s work on safety-critical AI in aviation, and Jessica Newman’s work evaluating some existing attempts at AI governance.

For each attempt, you might ask questions like the following:

  • Conception
  • What was the origin of the idea? How did it come to the attention of the originator? Where was it discussed prior to implementation?
  • What incentivised the actor to put the mechanism in place? Internal or external pressure? Was its implementation proactive or reactive?
  • Implementation
  • How were these attempts perceived? Was there pushback? Was the pushback warranted? Consider the responses both internal and external to the relevant organisation or group.
  • Outcomes
  • Did it succeed at its goals, stated or implicit?
  • What were the effects of the mechanism, both positive and negative? For example, did companies which adopted the mechanism become less competitive or was it rational even from a profit-seeking perspective?
  • Lessons
  • What could have made the mechanism or attempt more beneficial?
  • Does this update us in favour or against other actors taking similar actions?
  • What lessons can we draw about the strategic landscape of AI governance more broadly?

Ultimately, you should aim for this research to be able to inform questions like: If you were in control of e.g. Google, what corporate self-governance mechanisms would you put in place in order to ensure the company behaves in a socially responsible way in the face of radical technological change? What, if any, mechanisms should advocates outside companies push them to adopt?

How do government intervention and corporate self-governance relate?

Written by Alex Lintz

While self-governance is important, its secondary effects could be even more so. In particular, improved self-governance might influence the quality or quantity of regulation. For example, it is not yet clear what the most important impact of something like the Facebook Oversight Board (see e.g. Klonick 2020 for details) will be. Will it hold regulators at bay by satisficing governance needs? Might it increase Facebook’s desire for regulation which forces competitors to act within the constraints Facebook has already subjected itself to? Will it provide an example for regulators to learn from, thus improving future regulation? Understanding the relationship between self-governance and regulation may help us to understand where to target our efforts. For example, should we push hard for responsible corporate self-governance first or would better regulation (or the threat of it) improve self-governance anyway?

One approach which might shed light on these questions is to evaluate past cases of emerging industries and trace the path of their evolution to more responsible governance (or else their failure to become responsible). In those cases, did better governance start among firms and then lead to regulation, involve little in the way of self-governance, involve deceptive self-governance (e.g. the tobacco industry), or otherwise fail to achieve responsible governance? Cases should ideally be selected in part by their similarity to current attempts at regulating AI (for example, those listed in other parts of this document). Jia Yuan Loke provides one potentially useful framework for selecting relevant industries. Another might be to look through a national security lens at what Jeff Ding identifies as strategic technologies, focusing on how companies engage with e.g. threats of export controls being put in place.

For each case, you might ask questions like the following:

  • How did regulation and self-governance play off one another to contribute to responsible governance (or the failure to achieve it)? Were there feedback loops involved between the two?
  • Did the adoption of self-governance and oversight mechanisms make regulation more or less likely? What about the impact on regulatory effectiveness? For example, were companies with self-governance mechanisms more likely to push for regulation?
  • Did the threat of regulation play a role in incentivizing corporations to develop self-governance mechanisms?
  • If deceptive self-governance practices occurred which failed to stop negative externalities but which decreased the government’s desire for regulation (e.g. the Tobacco Industry’s obfuscation of the link between cigarettes and lung cancer), why did they happen? What were the characteristics of the companies or industries which allowed this to happen and how do those compare to the characteristics of AI firms?

Summary and analysis of “common memes” about AI, in different communities

Written by Ben Garfinkel

  • I think that simple memes (pithy arguments, simple analogies, commonly cited facts, distinctive concepts, etc.) often play a big role in shaping how different communities think about a given subject. At least, these memes tend to give a window into how a community thinks about the subject.
  • I think it would be interesting to try to develop a list of influential/common memes about AI, which are prevalent in different communities. (Examples: “Data is the new oil,” in certain policy communities, and “paperclippers,” in the EA x-risk community.) Then I think it’d also be interesting to ask whether any of these memes might be especially misleading or detrimental. This project could help people better understand the worldviews of different communities and, I think more importantly, help people understand what kinds of communication/meme-pushing around AI governance might be most useful.

A Review of Strategic-Trade Theory

Written by Markus Anderljung

A review paper summarizing the state of the literature on strategic-trade theory and related ideas (e.g. “industrial policy”, “high-development theory”). For example, I would like to have a better sense of the magnitude of the rents that a country can get from having an industrial champion. How much value does the existence of Airbus provide to Europe? How much of modern wealth comes from strategic industries? One estimate puts the commercial aircraft market at roughly $100bn per year (which is ~0.5% of US or EU GDP). Ultimately, I’m interested in this research informing questions like: Do states promote their national interest by attempting to create national AI champions? How does one country having a national AI champion affect the national interests of other states? These questions can help inform us on how strongly incentivised states will be to pursue ambitious AI industrial policy approaches and whether there is a strong national-interest case against doing so.

Potential sources

Mind reading technology

Written by Markus Anderljung

Human society is built around the fact that human minds are opaque. What would happen if improvements in machine learning and various sensors such as brain scanning technologies and wearable biometric readers could make people’s thoughts and emotions more transparent? Such transparency could have significant impacts on the world, reshaping diplomacy, surveillance, criminal justice, the economy, politics, and interpersonal relationships.

What capabilities are we likely to see?

To my limited knowledge, there are currently no particularly impressive mind reading technologies. Polygraph tests are notoriously inaccurate, several companies (e.g. Affectiva) are building visual recognition systems for emotion (though they’ve faced skepticism and don’t appear particularly impressive to me), and new products with limited accuracy are starting to be rolled out in workplaces and schools (see e.g. Giattino et al 2019 for a summary). However, progress in high-bandwidth human-computer interfaces by companies like Neuralink and advances in reconstructing mental content from e.g. fMRI data (e.g. Shen et al 2019, Hassabis et al 2014), suggest that more impressive technologies are on the horizon.

To make progress on the question, you can ask:

Forecasting mind reading technologies

  • What, if any, mind reading technologies may be available when?
  • You probably want to think about what the technology can detect, the modality it uses, its accuracy, and how invasive the technology is likely to be (e.g. whether it requires implants or the subject to get into an fMRI machine).
  • How easy is it to fool the technology? For example, it’s possible to fool lie detectors; if you know you’re being watched by an emotion recognition system, you can mask your facial expressions.
  • There is a lot of room for snake oil in this domain, and so you should get properly acquainted with the relevant scientific literature: psychology, neuroscience, brain-machine interfaces, and the like. It would likely be helpful to have a PhD in a relevant area.
  • To what extent will there be incentives to develop this technology? Who will develop it?
  • What actors are currently spending resources on developing this kind of technology?
  • My current guess is that the actors most incentivised to develop this kind of technology are: intelligence agencies and governments interested in improved surveillance capabilities, the justice system, product developers and marketers interested in increasing user engagement, and insurance companies interested in detecting fraudulent insurance claims.

The potential impact of mind reading technologies

  • Domains that may be affected include:
  • Strategic dynamics and negotiations: Actors becoming more transparent to each other could make it much easier to make credible commitments, causing large changes in the character of negotiations e.g. between firms and between states.  
  • Domestic politics: Politicians with incentives or beliefs not approved of by the populace could find it much more difficult to succeed. You could also imagine that these technologies could be used to make deliberative processes much more efficient.
  • Sociology: It could make groups much more homogenous as beliefs and commitment to a cause could be tested much more reliably.
  • Surveillance and law enforcement: Governments and police forces would likely be very interested in this technology. This could on the one hand help significantly reduce crime and on the other be an effective tool for totalitarian lock-in (Rafferty forthcoming makes this claim regarding brain-computer interfaces).
  • Epistemics: It seems plausible that deception and lying hamper society’s ability to form and disseminate true beliefs, so reliable mind reading could improve collective epistemics. On the other hand, epistemics might be undermined if it becomes advantageous to game the technology, e.g. via self-deception or forming intentionally muddled beliefs.
  • What kind of adaptations or second order effects are we likely to see these technologies cause, if deployed?
  • It might bring down the facade of claiming to be pro-social, creating common knowledge of actors’ cynical beliefs and incentives, leading to an overall decrease in pro-sociality.
  • If the technology works like a lie detector (it detects whether you say something you do not believe), it might push in the direction of avoiding lying rather than expressing the truth. People and actors may avoid opportunities to be deceptive. They could say less, be vague, avoid forming beliefs on topics they wouldn’t want to speak truthfully about, or more-or-less intentionally form false beliefs.
  • To study the above, my guess is that you want to build on a good footing in game theory (in particular regarding complete information games, where the actors have common knowledge of each others’ payoffs, utility functions, beliefs etc.), international relations, political science, psychology, or sociology.
  • Ultimately, your research should aim to inform the question: all things considered, should we expect development of these technologies to be on net good or bad? Should longtermists seek to bolster or hold back the development of certain kinds of mind reading technologies?

A note on terminology: Folks in the longtermist / EA community often refer to this as “lie detection technology”. I have a preference for “mind reading technology” as the transformative impacts may also come from being able to detect emotions or thoughts not expressed. It does come with the problem of sounding perhaps a bit too futuristic.

Keep in mind that this space could be rife with information hazards, in particular if it turns out that mind reading technologies are on net a bad development and they are feasible.

Compute Governance ideas

Written by Markus Anderljung (you might also want to reach out to Miles Brundage, Shahar Avin, or Saif Khan if you’re interested in these topics)

Compute is a very promising node for AI governance. Why? Powerful AI systems in the near term are likely to need massive amounts of compute, especially if the scaling hypothesis proves correct. Furthermore, compute seems more easily governable than other inputs to AI systems (talent, ideas, data), because it is more easily detectable (it requires energy, takes up physical space, etc.) and because its supply chain is very concentrated, which enables monitoring and governance (see Khan, Mann, Peterson 2021, Avin unpublished, and Brundage forthcoming).

Compute Funds

Some have called for governments to create compute funds, where some (e.g. academics, those working on AI for Good-applications, or AI safety researchers) are given preferential or exclusive access. This might be implemented via credits researchers can spend with compute providers or via the government setting up its own domestic cloud computing infrastructure. In the US, the National Defense Authorization Act 2021 included a provision that a National AI Research Cloud task force should explore whether to set up an AI research cloud, providing both compute and datasets (summary here) and the National Security Commission on AI recommended the creation of a similar entity (p.191 here). The EU and the UK are also in various stages of considering similar initiatives.

Should governments set up such funds? Seeing as they are likely to be set up, how should they be designed? Concrete questions to explore:

  • Arguments in favour of compute funds (see e.g. Brundage, Avin, Wang, Belfield, Krueger et al 2020, Etchemendy & Li 2020, and this class at Stanford) often go (i) cutting-edge research will often require vast amounts of compute, (ii) academic researchers (or whatever group the proposal focuses on) do not have access to enough compute, (iii) it would be good if such researchers were able to conduct cutting-edge research (in the case of academic researchers, the claim is often that it will provide important scrutiny of research coming out of corporate AI labs). But why is providing compute directly preferable to giving researchers money that they can spend as they see fit, be it on talent, compute, datasets? Arguments to explore include:
  • Firstly, there may be significant economies of scale. It’s cheaper to buy compute in bulk. Secondly, academic researchers may be misallocating their resources, because they underestimate the importance of compute or, more plausibly, because they face hurdles to procuring it (e.g. administrative blocks from their home institutions or funding bodies being more interested in funding grad students than millions of dollars of compute) (h/t Miles Brundage).
  • On the other hand, academics and corporate AI labs face very different incentives. Academics are incentivised to maximise citations and it’s not clear that pursuing high compute research is more cost effective based on expected citations than e.g. hiring postdocs (h/t Carolyn Ashurst & Ben Garfinkel). Tech companies largely pursue high compute research for other reasons e.g. wanting to integrate AI research into their products and improving their brand to attract talent. As such, academics likely need more than easy access to compute to become competitive with corporate AI labs in compute intensive research.
  • Could compute funds come with other important externalities? For example, some argue that compute funds might help build a domestic computing infrastructure not reliant on imports from other countries. You might also argue that these funds could reduce the power of corporate compute providers, especially if governments set up their own compute clusters, with potential positive effects. Others argue it is an important tool to build domestic talent. It could also move us closer to a world where a prosocial actor (e.g. the UN) has control over a large amount of compute which can be used to good ends.
  • Taking into account the above, how, in practice, ought such funds be designed? Should they require the compute clusters to be located domestically? Should the government aim to set up its own compute clusters? How should access to compute be distributed?
  • To what extent is all of this reasoning applicable to the creation and distribution of large datasets? This is a recommendation often made in conjunction with the call for a compute fund (see e.g. Etchemendy & Li 2020).

Compute Providers as a Node of AI Governance

If compute is a particularly promising node of AI governance, we might expect compute providers to be particularly important. What kinds of governance activities (e.g. related to monitoring and use restrictions) would we like compute providers (e.g. semiconductor companies and cloud compute providers) to engage in? How can we move towards a world where they take these actions?

Concrete questions:

  • What kinds of monitoring and use-restrictions are technically feasible and cost-effective? The usefulness of compute as a governance node depends crucially on how easy it is to: monitor how much compute different actors have access to and what it is used for, and restrict use of compute for certain purposes or the amount of compute actors have access to.
  • These considerations may also apply to what you might call “API governance” (exemplified in OpenAI’s roll-out of GPT-3), where an actor provides access to a system via an API, giving them more opportunities to monitor and put restrictions on how the system is used.
  • Should we try to increase the extent to which compute providers take responsibility for computation done on their systems?[4] This could be done by having these actors do so independently via self-governance efforts or by the introduction of regulation or policy.
  • On the positive side:
  • It may force compute providers to invest in the systems, processes, infrastructure, and research needed to use compute as a node of governance. For example, it may cause providers to establish Know Your Customer processes and ways to tell how much compute individual actors have access to, e.g. potentially putting a cap on how much compute an individual actor can access. It might help develop methods to monitor whether clients are using compute for illicit purposes.
  • Small increases in responsibility taken by compute providers may shift norms, making it easier to implement future measures.
  • On the negative side:
  • It might cause backlash from compute providers, making them resistant to future governance efforts.  
  • Overregulation could slow down AI development in the relevant jurisdiction.
  • Increased monitoring of compute usage may present privacy concerns.
  • What are routes to compute providers taking more responsibility for how their systems are used? For all of these, one should likely consider the present state, political and technical feasibility, as well as negative / positive uses averted, and whether it helps build a compute governance system able to deal with a future world of transformative AI systems requiring vast amounts of compute.
  • My guess is we want to start with small clear cases to start setting the right precedent and norms, gradually scaling up to more comprehensive governance as we learn more and the justification for compute governance becomes more widely appreciated.
  • Shahar Avin (unpublished) has suggested some concrete requirements regulators could put in place to move towards a world of sliding scale regulation, where the more compute an actor has access to, the more regulations, controls, and checks of compute usage are in place (a toy illustration of such tiering appears after this list). For example, regulators could require actors with access to large amounts of compute to report what they are using it for. It could be made illegal to report falsely, and a regulator could conduct spot checks.
  • Compute providers could monitor and enforce their terms of use more aggressively. Early 2021 saw a high profile case of this, with AWS cancelling its services to Parler, citing failures to follow AWS’ terms of service, after Parler allowed people to use its platform to organise the storming of the US Capitol; other companies followed suit. However, such steps come with the risk of creating a niche for companies with less stringent terms of service, which may be less governable.
  • Regulators could insist that compute providers play a bigger role in enforcing laws banning, for example, deepfake pornography[5] (e.g. in Virginia), deepfake content of politicians (e.g. in Texas and California), phishing attacks, or misinformation. I don’t know the extent to which this is done today.
  • Another route, mentioned in Khan 2021, p. 29, is to go via export controls (which may involve export restrictions or simply monitoring). Wassenaar Arrangement export controls, as implemented by the US, currently include certain cutting edge chips. However, they do not include cloud computing using said cutting edge chips. Countries could adopt and argue for an interpretation where such uses would be covered. Any potential benefits in e.g. increased monitoring or controls on compute via this route should be weighed against the potential harms of export controls steeping compute governance too deeply in national security terms, e.g. undermining opportunities for future global governance efforts. In addition, it may not be possible to have export controls on certain uses of compute.
  • Other potential routes include: compute providers forming a consortium to disclose information (e.g. about the amount of compute or hardware supplied to which countries); intellectual property law; supply chain and environmental regulation; anti-money laundering and investment law; and international treaties.
  • The preceding routes differed in how much they rely on legislation vs. self-regulation. How should we think about which of these routes is more desirable?
  • What should we expect the effects of regulatory flight to be? My hunch is that they will be large, pushing in favour of global coordination in compute governance. For example, AWS’ dropping Parler from their services only led to Parler being offline for a month, when they moved to SkySilk, a cloud compute provider based in Russia. With more funding and technical expertise, it seems likely Parler could have made the move much sooner, e.g. by setting up their own servers.
  • Is it better to first target cloud compute providers or hardware providers? Cloud compute providers have more control and insight into how their systems are used. On the other hand, hardware providers may be more likely to supply compute to the most risky actors (since those using vast amounts of compute often own the hardware) and constitute a more concentrated market.
  • How can we tell whether compute providers are taking appropriate actions? E.g. if export controls are introduced on cloud computing, how can governments ensure compliance?
  • What can we learn from analogous industries and technologies such as DNA synthesis companies, social media platforms, controls of nuclear materials, and international finance (e.g. the proliferation of Know Your Customer guidelines)?
  • In what worlds are compute providers a particularly promising governance node? Good compute governance would likely not be sufficient for good outcomes in worlds where, for example, small amounts of compute can accomplish a lot of harm, where it is possible to have very distributed, untraceable compute, or where most harm comes from use by governments with compute self-sufficiency.
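To make the sliding scale idea mentioned in the Avin bullet above slightly more concrete, here is a toy tiering sketch (the unit, thresholds, and obligations are entirely invented for illustration and do not reflect any actual or proposed regime):

```python
# Hypothetical obligations keyed by annual compute usage, measured here in
# petaflop/s-days; both the unit choice and the cutoffs are invented.
COMPUTE_TIERS = [
    (1e2, ["none"]),
    (1e4, ["self-certified usage report"]),
    (1e6, ["usage report", "know-your-customer checks"]),
    (float("inf"), ["usage report", "know-your-customer checks",
                    "independent audit", "regulator spot checks"]),
]

def obligations_for(compute_pflops_days: float) -> list[str]:
    """Return the (hypothetical) obligations for a given level of compute use:
    the more compute an actor uses, the heavier the requirements."""
    for threshold, duties in COMPUTE_TIERS:
        if compute_pflops_days < threshold:
            return duties
    return COMPUTE_TIERS[-1][1]

print(obligations_for(5e4))  # -> ['usage report', 'know-your-customer checks']
```

The substantive design questions are then where to put the thresholds and which obligations attach at each tier, which is exactly what the questions above are meant to inform.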

China’s access to cutting edge chips

The aspect of compute governance that has been explored in by far the greatest detail (notably by folks at the Center for Security and Emerging Technology, CSET) concerns whether the US and allies should attempt to (i) reduce the proliferation of the ability to produce cutting edge chips, with a focus on its spread to China, and (ii) put in place export controls for cutting edge chips for certain uses.

My summary of the core claims coming out of CSET’s research (more in Khan, Mann, & Peterson 2021 and Khan 2021):

  • There is a bottleneck. China (along with the vast majority of other countries) is currently unable to produce cutting edge chips domestically, relying instead on imports.
  • The US should preserve the bottleneck. The US and its allies can and should take various actions to ensure that this remains the case. This can be done by e.g. putting export controls on semiconductor manufacturing equipment, materials, certain software and IP. All of this is enabled by the fact that the semiconductor industry is incredibly concentrated and complex.
  • The US should use the bottleneck. It could be used to improve monitoring of the distribution or to restrict certain uses of cutting edge chips. They argue that the US should consider attempting to put export controls on cutting edge chips going to “the Chinese state, supercomputing entities, human-rights violators, and those collaborating with the Chinese military.”

While these actions come with various risks, they may also provide a number of benefits:

  • A more concentrated supply chain leads to more interdependence between countries. Such interdependence can have a stabilising effect, as conflict risks cutting actors off from a valuable resource.
  • A more concentrated supply chain, with fewer powerful actors, might be easier to govern. A smaller number of companies and countries, in particular allies, are more likely to be able to coordinate to put in place various compute governance measures such as improved monitoring.
  • More controversially, preserving the bottleneck may benefit US AI capabilities compared to China. This may be valuable if you believe that a race is safer as the distance between the laggard and leader increases or if US gov’t actions are likely to be more beneficial to the long term future than China’s.

I’m interested in questions like:

  • How likely is China to be able to create a domestic cutting edge semiconductor industry?
  • China has previously invested heavily in achieving this goal and failed. However, the current situation provides much stronger incentives for China, likely leading to greater investments and higher chances of success. US officials seem to think it will take a long time, at least a decade. Monitoring these developments seems nonetheless high priority. CSET has started conducting research on this question here and the US government is also likely to invest resources into doing so.
  • What are the chances China attempts and succeeds in assuming control of Taiwan, thereby seizing the facilities of TSMC, the world's most valuable semiconductor company? Some reasons to think this wouldn’t succeed include the chance that facilities are destroyed during an invasion (either by military action or intentional sabotage by Taiwanese actors) and the potential negative effects of such aggressive actions on China’s international standing and therefore its access to e.g. the raw materials needed to continue production.
  • How are China’s chances of having domestic production of cutting edge chips affected by potential new designs of semiconductors (e.g. optical chips, 3D architectures)? There is work ongoing on this question at CSET.
  • What are the negative effects of these actions?
  • What are the risks if China develops a domestic industry? This may lead to a cleaving of the semiconductor industry. What would be the effects of that happening? It could be significantly destabilizing: mutual interdependence between states is stabilising, and a split would reduce insight into China’s AI activities.
  • To what extent are these export controls increasing US-China tensions?
  • Some have argued that the US will put in place export controls on China regardless, and so leveraging them for semiconductors will have small counterfactual impacts on US-China tensions. However, export controls on different products plausibly elicit different responses: the more important to national security the target perceives the product to be, I would wager, the more escalation you’ll see. As such, a key question is: does China underestimate the importance of cutting edge chips?
  • How should the bottleneck be used?
  • Will the US government restrict access to cutting edge chips excessively? My intuition is that the chip bottleneck should be used sparingly, at least initially, because I expect the majority of the value of having the bottleneck in place comes from creating mutual interdependence and from the ability of the US and its allies to monitor and track Chinese use of cutting edge semiconductors.
  • My guess is that US policymakers will find it very difficult not to over utilize the tool – given the current political climate in the US and a widespread view of high Chinese civil-military fusion – forcing China to invest heavily in a domestic industry. Can this failure mode be avoided? For example, can the US provide credible assurances that Chinese companies working on civilian applications will not face export controls? If the US does implement stricter export controls on cutting edge chips to China, can these be made more targeted? Perhaps my intuition here is too strong, as the US has used this bottleneck sparingly in the past, e.g. allowing the export of some supercomputers to the USSR during the Cold War.
  • As mentioned above, a more concentrated supply chain, with fewer powerful actors, might be easier to govern. What kinds of governance mechanisms should the US and its allies try to implement? For example, could they put in place a regime to monitor the worldwide distribution of cutting edge chips? For additional ideas see above.

Compute Provider Actor Analysis

(h/t Jade Leung)

Compute providers (cloud compute companies and hardware companies) are likely to be influential actors in the compute governance space. It is important to understand them better and to understand what beneficial actions they could take.

  • How influential will compute providers be?
  • What parts of the supply chain confer the most power on an actor? (see e.g. Khan, Mann, & Peterson 2021)
  • Have they influenced policy to date?
  • How competitive is the industry? The less competitive the industry, the more we should expect individual companies to be able to shape the industry. (see e.g. Khan, Mann, & Peterson 2021 on the semiconductor manufacturing industry)
  • What compute providers will be most influential?
  • Present-state of compute providers, policy and governance
  • How do these companies conceptualise their responsibility?
  • How are compute providers engaging with policy to date? Are they resistant to or welcoming of regulation?
  • What is in their terms of service agreements and how are these being implemented today? Relatedly, what caused AWS to drop Parler? Did they believe that they would face public backlash for hosting the service?
  • What would we like compute providers to do?
  • Should they sign windfall clause-esque agreements?
  • How should they engage with policymakers?
  • In what geographies and jurisdictions should they set up their headquarters and production facilities?

[1] On nuclear strategy the classic text is: Lawrence Freedman, The Evolution of Nuclear Strategy (Palgrave Macmillan, 1981) (the fourth edition was issued in 2019). There are many more studies, for example: Michio Kaku and Daniel Axelrod, To Win a Nuclear War (Montreal: Black Rose Books, 1987); Francis J. Gavin, Nuclear Statecraft: History and Strategy in America's Atomic Age (Ithaca: Cornell University Press, 2012); Edward N. Luttwak, Strategy and History: Collected Essays volume two (New Brunswick: Transaction Books, 1985); Richard K. Betts, Nuclear Blackmail and Nuclear Balance (Washington, DC: The Brookings Institution, 1987); Edward Kaplan, To Kill Nations: American Strategy in the Air-Atomic Age and the Rise of Mutually Assured Destruction (Ithaca: Cornell University Press, 2015).

[2] There are a few classic studies: Peter Paret, Gordon A. Craig, Felix Gilbert (eds.), Makers of Modern Strategy from Machiavelli to the Nuclear Age (Oxford: OUP, 1986); Fred Kaplan, The Wizards of Armageddon (Stanford: Stanford University Press, 1983); Gregg Herken, Counsels of War (New York: Knopf, 1985). Other early studies include: Roy E. Licklider, The Private Nuclear Strategists (Ohio State University Press, 1971). Recent studies include: Ron Robin, The Cold World They Made: The Strategic Legacy of Roberta and Albert Wohlstetter (Cambridge: Harvard University Press, 2016); Alex Abella, Soldiers of Reason: The RAND Corporation and the Rise of the American Empire (Orlando: Harcourt, 2008). Individual biographies include: Robert Ayson, Thomas Schelling and the Nuclear Age: Strategy as Social Science (London: Frank Cass, 2004); Robert Dodge, The Strategist: The Life and Times of Thomas Schelling (Hollis Publishing, 2006); Barry H. Steiner, Bernard Brodie and the Foundations of American Nuclear Strategy (Lawrence: University Press of Kansas, 1991); Barry Scott Zellen, Bernard Brodie, The Bomb, and the Birth of the Bipolar World (New York: Continuum, 2012); Sharon Ghamari-Tabrizi, The Worlds of Herman Kahn: The Intuitive Science of Thermonuclear War (Cambridge: Harvard University Press, 2005). One innovative study which looks at “amateur strategists” is: James DeNardo, The Amateur Strategist: Intuitive Deterrence Theories and the Politics of the Nuclear Arms Race (Cambridge: Cambridge University Press, 1995).

[3] Most prominently: Daniel Ellsberg, The Doomsday Machine: Confessions of a Nuclear War Planner (New York: Bloomsbury, 2017).

[4] By “take responsibility for computation done on their systems”, I am mainly referring to their taking actions to avoid certain computations being done on their systems. Whether they are legally or morally responsible for such computations is only relevant, in my mind, insofar as it changes the extent to which such actions are taken.

[5] This route has been explored by Denise Melchin and Shahar Avin in unpublished work.