AI Landscape

This is an evolving collection of information and links about who is doing what in the realm of AI policy, laws and ethics. This list is perpetually under construction. I am furiously going through my research notes to extract topics, people, organizations and resources, but this is an ongoing and time-consuming effort. Rather than let the perfect be the enemy of the good, I wanted to post this unfinished resource now and update it continuously.

If you add suggestions in the comments on Medium I will integrate them.

For basic definitions of AI for policy-makers see this article.

The table of contents below links to the sub-sections in the document. You may also find a browser search (command-f) useful as the document is very long.

Latest changes from May 17: reorganized the topics, added table of contents linking, and updated a number of topics.

Also, Professor Brent M. Eastwood and Tyler Prochazka joined as collaborators. Thanks Brent and Tyler!

If you’re looking for a more general list of AI resources, check out this article by Robbie Allen.

Ethics, Values, Rights, Transparency, Bias, Norms and Trust

AI ethics and bias

AI ethics and autonomous vehicles

Human experimentation and manipulation

Encoding fairness, policy, laws and values in AI

Gender bias

Racial Bias

Income bias

Human Centric Methodologies to Guide Ethical Research and Design

Broad Safety and Security Issues with AI, AGI, ASI and Malicious Use

AI Doomsday Planning

Malicious Use of AI

Personal Privacy, Information Security, Individual Access Control and the Future of Trust

Data collection and use

AI and its integrity, availability and reliability:

AI, propaganda and disinformation

AI and psychometrics

Computational Propaganda

AI-enabled, machine-driven communication

AI and countering trolling and fake news

Law Enforcement, Security and Autonomous Weapons Systems

AI and autonomous weapons systems

International Affairs

AI and Intelligence Gathering

AI and policing

Pre-crime

Economic and Humanitarian Issues

AI, automation and jobs

AI and economic inequality

AI and disincentives to innovation

AI for Development

AI and the Law

Due process

Legal liability

AI, Government and Regulation

AI, Human Interactions, Society and Humanity

AI and affective computing

AI and love

AIs and education

Human-Computer Interactions:

AIs and human dignity

Rights for AI systems

AIs transforming what it means to be human

AI and addiction

AI Policy events

AI Policy Organizations

Industry organizations

Major Corporate Researchers

AI News Sites

Ethics, Values, Rights, Transparency, Bias, Norms and Trust

In his book Machines of Loving Grace, John Markoff writes, ‘The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.’

AI ethics and bias

Ethics and Governance of Artificial Intelligence Fund: The goal of the Ethics and Governance of Artificial Intelligence project is to support work around the world that advances the development of ethical AI in the public interest, with an emphasis on applied research and education. It works with the Berkman Klein Center at Harvard and the MIT Media Lab.

Moral Machine Platform: crowdsourced expectations of how an autonomous vehicle should make moral decisions.

OpenEth is founded on the principle that ethics and morality are measurable, definable and computable, across cultures, time, and geography.

Council for Big Data, Ethics, and Society brings together researchers from diverse disciplines — from anthropology and philosophy to economics and law — to address issues such as security, privacy, equality, and access in order to help guard against the repetition of known mistakes and inadequate preparation.

Joanna Bryson, associate professor, Department of Computer Science, University of Bath. Bryson focuses on biases within AI, which she says come from biased human coders.

Stuart Russell, professor of Computer Science, Berkeley. Experience in several posts as advisor to US defense and intelligence agencies. He has written about whether AI will make humans “better”. He is also concerned about the ethical systems of AI. He wants to align the value systems of AI with humans.

AI ethics and autonomous vehicles

The modern version of the Trolley Problem. Should an autonomous vehicle protect its passenger at all costs, even if it means swerving into a crowd of pedestrians? Or should the vehicle perform utilitarian calculations to cause minimal loss of life even if that means killing its passenger(s)? Surveys indicate humans want the latter in the abstract, but they want their personal vehicle to protect them and not apply a utilitarian approach. Mercedes has decided its duty is to protect the passenger. Regulation could help, but may be detrimental in the long run if it delays adoption of autonomous vehicles, which are highly likely to save lives overall. This may not be a frequent problem if autonomous cars are prevalent and radically reduce the number of accidents.

These issues will likely be resolved through traditional negligence law, although this is complicated by issues of agency where AIs are not explicitly programmed to take particular actions. If a vehicle is fully automated, with a human driver no longer actively steering, the question arises as to whether damage can still be attributed to the driver or the owner of the car, or whether only the manufacturer of the system can be held liable. Policy-makers need to determine tradeoffs between cost, convenience, and safety.

Human experimentation and manipulation

Human experimentation has been closely scrutinized in the psychology field for generations. What is the responsibility organizations owe to people with regard to subtly manipulative tools like nudging (software that prompts you with reminders)? Do we need codes of conduct around these types of technological experiments that may promote technology addiction?

Researchers and Facebook received criticism for a human experiment to alter users’ moods without their consent.

Encoding fairness, policy, laws and values in AI

Michael Kearns, Aaron Roth, Shahin Jabbari, Matthew Joseph and Jamie Morgenstern (UPenn) conduct research on how to encode different concepts of fairness from law and philosophy into machine learning.

Fairness, Accountability and Transparency in Machine Learning Workshop: Bringing together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning. They have an extensive bibliography on AI ethics.

Gender bias

Microsoft Cortana is built to push back against abuse and harassment.

Heather Roff: Moral AI discusses how representations of gender are becoming embedded in technology and expressed through it.

The NY Times Explores AI’s White Guy Problem: most AI researchers are white men, job descriptions tend to favor male applicants, and some AI systems don’t work well for minority groups.

Venkatesh Saligrama, Professor, Boston University. Produced influential research demonstrating gender bias in machine learning; the word embeddings studied associated words such as “receptionist” with female terms.
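
To make this kind of finding concrete, here is a minimal, self-contained sketch of how a gendered association can be measured in word embeddings. The tiny three-dimensional vectors and word list below are invented purely for illustration; published studies use real embeddings (hundreds of dimensions, trained on large news corpora) and many more occupation words.

    import numpy as np

    # Toy 3-dimensional "embeddings" invented for illustration only; real studies
    # use high-dimensional vectors trained on large text corpora.
    vectors = {
        "he":           np.array([ 1.0, 0.1, 0.2]),
        "she":          np.array([-1.0, 0.1, 0.2]),
        "engineer":     np.array([ 0.7, 0.9, 0.1]),
        "receptionist": np.array([-0.6, 0.8, 0.2]),
    }

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    gender_axis = vectors["he"] - vectors["she"]   # a crude "he minus she" direction

    for word in ("engineer", "receptionist"):
        # Positive values lean toward "he", negative toward "she".
        print(word, round(cosine(vectors[word], gender_axis), 2))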

Racial Bias

Machine learning systems trained with biased data will produce biased results. Image recognition software has categorized black people as gorillas, misread images of Asians as people blinking, and had difficulty recognizing people with dark skin. More seriously, an AI tool used to assess the risk of recidivism was found to be biased against black defendants and in favor of white defendants. Biased predictive policing tools could also perpetuate stereotypes.  
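
One way such bias is surfaced is by comparing error rates across groups. The sketch below, which uses made-up predictions and outcomes, computes the false positive rate per group, the kind of disparity reported in audits of recidivism tools. The data, predictions and group names are purely illustrative.

    import numpy as np

    # Made-up audit data: 1 = flagged / re-offended, 0 = not. A real audit, such as
    # ProPublica's COMPAS analysis, uses thousands of actual court records.
    group     = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
    predicted = np.array([  1,   0,   1,   1,   1,   0,   0,   0])
    actual    = np.array([  0,   0,   1,   1,   0,   0,   0,   0])

    def false_positive_rate(pred, truth):
        negatives = truth == 0                       # people who did not re-offend
        return float((pred[negatives] == 1).mean())

    for g in ("A", "B"):
        mask = group == g
        fpr = false_positive_rate(predicted[mask], actual[mask])
        print("group", g, "false positive rate:", round(fpr, 2))
    # A large gap between the two rates is the kind of disparity audits look for.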

Income bias

AIs may disproportionately benefit high-income communities. Also, the negative effects of AI - like social media or gaming addictions - may disproportionately impact lower-income communities. There are also concerns that many new technologies that are too dangerous or unproven for advanced economies are tested on developing economies where liability concerns are lower. AI and automation are predicted to disproportionately impact low-skill jobs first.

Human Centric Methodologies to Guide Ethical Research and Design

Matt Chessen wrote an article advocating that public-policy professionals should collaborate with AI technologists in training machine learning systems to encode values and eliminate bias. Additionally, the public policy profession needs a new specialty in big data and AI ethics.

Broad Safety and Security Issues with AI, AGI, ASI and Malicious Use

Artificial General Intelligence and Artificial Super-Intelligence (sentient AIs and the Singularity) are worth considering, but are still science-fiction. The bigger immediate concern is the misuse of AI either through negligence or malice.

AI Doomsday Planning

CSER and ASU held an event “Envisioning and Addressing Adverse AI Outcomes” that looked at multiple future scenarios where AIs could spawn social, economic and political disasters.

Matt Chessen foresees AI-enabled virtual reality as being so addictive that much of humanity will give up on reality and stop breeding.

If you don’t want a Terminator scenario, don’t build Skynet.

Nick Bostrom, Professor, University of Oxford and Director of the Strategic Artificial Intelligence Research Centre. He focuses primarily on mitigating the potential risks of developing AI technologies and shaping laws to govern and regulate the use of emerging technologies like artificial intelligence. He is the author of “Superintelligence: Paths, Dangers, Strategies”, which warns of the potential negative ramifications if artificial intelligence surpasses human intelligence. Such a scenario could pose an existential risk to humanity, he says. Bostrom discusses strategies to avoid the risks of AI, such as integrating the goal of human survival into the programming of AI systems. Additionally, he discusses the possibility of avoiding doomsday scenarios by using an “Oracle AI”, which answers questions posed by the user rather than acting autonomously on its own recommendations.

Ray Kurzweil, Singularity Think Tank Founder. Kurzweil is credited with popularizing the “singularity” concept. He is generally optimistic about the future of AI technologies.

Eliezer Yudkowsky, co-founder Machine Intelligence Research Institute. Prominent thinker on existential dilemmas of super intelligent AI.

James Barrat, author, “Our Final Invention: Artificial Intelligence and the End of the Human Era”. In the book he argues humans will inevitably establish super intelligent AI that is in opposition to humans.

Malicious Use of AI

Matt Chessen wrote about how an authoritarian regime might use an optimization algorithm and social scoring to control a population, and how it could get out of hand.

Personal Privacy, Information Security, Individual Access Control and the Future of Trust

Machine and Deep Learning systems require large amounts of data. Some of that data may be collected in private spaces like our homes, and we may reveal very intimate information to these systems. Emerging information fiduciary concepts - similar to restrictions on doctors and lawyers using client information for their own benefit - could be applied to AI and tech generally.

Data collection and use

Chatbots like Xiaoice are considered by many users to be a real-time, always available friend. Users tell these bots their intimate secrets and even proclaim love for them. Humans also tend to let their guard down when talking to AI personal assistants. Veterans are more likely to reveal sensitive information to a virtual therapist. Chatbots are being used in counseling. AI interfaces will likely become popular in medicine and education, where sensitive information may be revealed and collected. This raises questions about how this very intimate data might be used by corporations or governments.

Companies may benefit from maintaining private data-sets, but citizens may benefit from public data-sets.

Excellent summary of threats and positive uses of big data tech.

Hossein Rahnama, visiting scholar, MIT Media Lab. Rahnama is concerned about the ownership of increasing amount of personal data in the pursuit of “digital immortality.”

Jennifer Neville, professor, Purdue University. Neville’s research looks at the effectiveness of gathering data from chatbots in closed or open systems.

Michael I. Jordan, professor, UC Berkeley. Jordan is an expert on the policies used by firms that make up “big data”, analyzing the link between AI, deep learning, and statistics.

Soumith Chintala, Facebook AI researcher. Researches how AI can shift away from static datasets to dynamic systems. He has built deep learning machines for large businesses.

Ian Goodfellow, Scientist, Google Brain. He has done extensive research on keeping individuals’ data private. For example, sensitive training data can be protected by allowing the AI to learn without having direct access to the data.
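
A common ingredient in this line of work is releasing only noisy aggregates of sensitive data, as in differential privacy. The sketch below adds Laplace noise to label votes from “teacher” models, loosely in the spirit of approaches such as PATE; the vote counts, the privacy budget and the overall setup are invented for illustration and are not a faithful re-implementation of any particular paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical label votes from 100 "teacher" models, each trained on a
    # disjoint, private slice of sensitive data. Only a noisy aggregate is released.
    teacher_votes = rng.integers(0, 2, size=100)             # 0/1 votes
    counts = np.bincount(teacher_votes, minlength=2).astype(float)

    epsilon = 1.0                                             # illustrative privacy budget
    noisy_counts = counts + rng.laplace(scale=1.0 / epsilon, size=2)

    # A "student" model would be trained only on labels released this way,
    # so it never sees the underlying private records directly.
    print("released label:", int(np.argmax(noisy_counts)))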

Vicenç Torra, Professor, University of Skövde in Sweden. Main research interest is data privacy, focusing on statistical disclosure control and privacy preserving data mining.

Andreas Krause, Associate Professor of Computer Science, ETH Zurich. Key research area is privacy on online AI services, including the utility-privacy tradeoff, which shows high levels of personalization can be achieved with a small amount of user data.

Children’s privacy

Toys are increasingly integrating artificial intelligence systems. Parents may not understand that these systems are collecting data. Children likely do not have the sophistication to understand what they should and should not say about these systems and may disclose PII or very private information.

Trusting AIs’ decision-making

How do we create trust in AI systems as we increasingly automate every aspect of our lives, including very personal communications like email? And what are the norms and liability when AI systems violate that trust?

Machine learning systems are more probabilistic than algorithmic and may not have auditable decision-trees. How can we trust the AI systems we use? What happens when systems - perhaps those that filter fake news - in fact are filtering out news with a certain point of view, enclosing us in an ideological bubble?

There are concerns that AI decision-making is a black box where we can’t understand the reasoning why AIs make decisions. There are also concerns that AIs are like alien knowledge that may be inscrutable to human minds.
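
One simple, model-agnostic way to probe a black box is permutation importance: scramble one input feature at a time and measure how much the model’s accuracy drops. The toy “model” and data below are stand-ins invented for illustration, not any production system.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stand-in data: feature 0 determines the label, feature 1 is pure noise.
    X = rng.normal(size=(500, 2))
    y = (X[:, 0] > 0).astype(int)

    def black_box(data):
        # Pretend this is an opaque model whose internals we cannot inspect.
        return (data[:, 0] > 0).astype(int)

    baseline = (black_box(X) == y).mean()
    for feature in range(X.shape[1]):
        X_shuffled = X.copy()
        perm = rng.permutation(X.shape[0])
        X_shuffled[:, feature] = X[perm, feature]   # break this feature's link to y
        drop = baseline - (black_box(X_shuffled) == y).mean()
        print("feature", feature, "accuracy drop:", round(float(drop), 2))
    # A large drop means the model leans heavily on that feature; a near-zero
    # drop means the feature barely matters to its decisions.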

David Gunning, program manager, DARPA’s Explainable AI. Seeks to engineer features to allow humans to understand why AI makes its decisions.

Samuel Arbesman, scientist, Lux Capital. Argues we must build methods to understand AI’s reasoning.

IBM: learning to trust AI and robotics

AI and its integrity, availability and reliability:

How do we prevent AI from being hacked, spoofed or fooled?

Evolving AI Lab research indicates deep learning image recognition tools are easily fooled.
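
The underlying idea behind these “adversarial examples” can be shown with a toy linear classifier: nudge every input value by a tiny, equal amount in the direction that most changes the score, and the prediction flips even though the input barely changes. The weights, the “image” and the step size below are all invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)

    # A toy linear "classifier" over a flattened 8x8 image: score > 0 means "class A".
    w = rng.normal(size=64)        # model weights (stand-in for a trained network)
    x = rng.normal(size=64)        # an input the model currently classifies one way
    score = float(w @ x)

    # Fast-gradient-sign-style step: move every pixel a small, equal amount in the
    # direction that pushes the score across the decision boundary.
    epsilon = abs(score) / np.abs(w).sum() + 0.01
    x_adv = x - np.sign(score) * epsilon * np.sign(w)

    print("score before:", round(score, 2), "after:", round(float(w @ x_adv), 2))
    print("largest per-pixel change:", round(float(epsilon), 3))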

AI, propaganda and disinformation

Much like high-frequency trading has transformed stock markets, high-frequency messaging may dramatically alter public opinion. Over the long term, stock prices are still considered measures of value, but over the short-to-medium term they are heavily influenced by algorithms that seek only to extract value through manipulative trading. Similarly, AI-driven bot networks may heavily manipulate public opinion on issues, muddling the truth, undermining democratic speech and drowning out civil discussions online. These tools will be precisely targeted to individuals based on their specific personality profiles.

AI and psychometrics  

There are concerns that AI psychometric systems could be weaponized for political purposes.

Computational Propaganda

Politicalbots.org is a team of researchers investigating the impact of automated computer scripts (computational propaganda) on public life. This work includes analysis of how tools like social media bots are used to manipulate public opinion by megaphoning or repressing political content in various forms: disinformation, hate speech, fake news, political harassment, etc.

The Observatory on Social Media, OSoMe (pronounced “awesome”), highlights the results of a broad research project aimed at studying information diffusion in social media. Its tools let you explore how people spread ideas through online social networks.

Detecting Early Signature of Persuasion in Information Cascades (DESPIC): The DESPIC project aims to design a system to detect persuasion campaigns at their early stage of inception, in the context of online social media.

BotWatch is an online publication built to generate multidisciplinary discussion around bots. As bots become commonplace in our lives, BotWatch raises very basic human questions about meaning, creativity, language, and expression.

The Weaponized Narrative Initiative is a project of the Center for the Future of War at ASU. It seeks to examine how adversaries use information to attack democracy and undermine America.

The German Marshall Fund’s Alliance for Securing Democracy, a bipartisan, transatlantic initiative housed at The German Marshall Fund of the United States (GMF), will develop comprehensive strategies to defend against, deter, and raise the costs on Russian and other state actors’ efforts to undermine democracy and democratic institutions. They run Hamilton 68, a dashboard for tracking Russian information operations on Twitter.

Samuel Woolley, Director of Research, Computational Propaganda project. Woolley researches methods to counter computational propaganda.

Martin Moore, Director, Center for the Study of Media, King’s College. Researches how bots and fake news affected US 2016 presidential election.

Michael Kosinski created a model that could assess a person’s character with high accuracy. The model was used to influence US voting behavior.

Sam Wineburg, professor, Stanford. Researches ability of young people to discern fake news, with concerning results.

Chengkai Li, professor of Computer Science and Engineering, University of Texas at Arlington. Researches fake news bots and works to combat their spread.

Philip N. Howard, Professor, University of Oxford. Studies the dissemination of fake news following Brexit and the US election. Both presidential candidates had a substantial number of bots tweeting for them.

Jonathan Albright, a professor at Elon University, conducts social media analytical research focused on fake news ecosystems, online disinformation, US election interference and the Alt-right.

AI-enabled, machine-driven communication

Matt Chessen published an article describing how AI-enabled machine-driven communications tools (MADCOMs) will radically enhance computational propaganda. Machine-driven speech may drown out human speech online.

AI and countering trolling and fake news

Conversation AI: A project by the NY Times and Jigsaw designed to identify online harassment in comment sections. Conversation AI on GitHub.

Fake News Challenge is a grassroots effort of over 100 volunteers and 71 teams from academia and industry around the world. Its goal is to address the problem of fake news by organizing a competition to foster development of tools that help human fact checkers identify hoaxes and deliberate misinformation in news stories using machine learning, natural language processing and artificial intelligence.
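
As a toy illustration of the headline-versus-body stance task the challenge poses, the sketch below labels a headline/article pair as related or unrelated using simple word overlap. Real entries used far richer NLP models; the tokenizer, threshold and labels here are invented.

    # Toy stance check inspired by the challenge's headline-versus-body task.
    # The tokenizer, threshold, and labels are all invented for illustration.
    def tokenize(text):
        return {word.strip(".,!?").lower() for word in text.split()}

    def stance(headline, body, threshold=0.3):
        h, b = tokenize(headline), tokenize(body)
        overlap = len(h & b) / max(len(h), 1)
        return "related" if overlap >= threshold else "unrelated"

    headline = "City council approves new flood defenses"
    body = ("The council voted on Tuesday to approve funding for flood defenses "
            "along the river, officials said.")
    print(stance(headline, body))    # -> related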

Students at West Virginia University are working on AI tools to detect and combat fake news.

Law Enforcement, Security and Autonomous Weapons Systems

AI and autonomous weapons systems

AIs have been used in weapons like the Tomahawk missile for decades, but these systems are improving dramatically. Lethal autonomous weapons systems (LAWS) have the potential to operate - and choose to kill - fully autonomously once deployed. The power of these weapons raises the possibility of a new arms race and their autonomy raises international human rights and humanitarian law concerns. Opponents argue that LAWS will lack human judgement and context, and will be unable to judge proportionality - traits necessary to satisfy the law of war. Since these weapons can wait passively to strike, they also raise issues - similar to landmines - about inadvertently targeting civilians. Some opponents argue that LAWS armies will make it easier for advanced countries to fight wars since LAWS can reduce the risk of death to their own forces. Some proponents argue LAWS will enable highly precise targeting, reducing both the lethal force needed and civilian collateral damage. They also argue that human soldiers frequently fire on friendly forces, inadvertently target civilians, and use disproportionate force, and effective LAWS systems may act with more precision and discretion.

The UN Convention on Certain Conventional Weapons (CCW) in Geneva discusses LAWS issues.

The International Committee for Robot Arms Control – or ICRAC (pronounced “aikræk”) for short – is an international not-for-profit association committed to the peaceful use of robotics in the service of humanity and the regulation of robot weapons.

The Campaign to Stop Killer Robots: NGO collective working to preemptively ban fully autonomous weapons.

UN Office for Disarmament Affairs has background on LAWS issues.

Dr. Heather Roff, researcher at ASU's Global Security Initiative and New America writes about LAWS issues.

Defense Science Board Summer Study of Autonomy

Peter Asaro, co-founder, International Committee for Robot Arms Control.

Wendell Wallach, fellow, Yale’s Center for Bioethics. Wallach proposes a ban on future LAWS because of the potential for technical failures and because it may violate international humanitarian law.

Lt. Col. Alan Schuller, Associate Director at the Stockton Center for the Study of International Law and Fellow at the Georgetown University Center on National Security and the Law. Lt. Col. Schuller produced an in-depth review of the application of AI in the DoD, and discusses the importance of programming LAWS in adherence with international humanitarian law.

Peter Warren Singer: A leading expert and acclaimed author on lethal autonomous weapons systems (LAWS), robotics, and other emerging fields in 21st-century conflict. A strategist and senior fellow at New America, a think tank in Washington, he has also been a senior fellow at the Brookings Institution and a consultant for the US State and Defense Departments. He is one of the preeminent experts on the future of war and the use of technology in advancing national security. Singer has warned of AI proliferation in China, is concerned about the advancement of autonomous combat technology and its spread to potential adversaries around the world, and advocates that the US push for global rules for the militarization of AI.

Jeffrey L. Caton, former associate professor of Cyberspace Operations, US Army War College. Researches the necessity for automated responses to cyber intrusions. He created a framework for the development of LAWS under a legal and humanitarian regime.

Michael C. Horowitz, professor, University of Pennsylvania. Focuses on AI/LAWS in relation to international relations theory. He analyzes whether LAWS operate under Just War Theory.

International Affairs

AI has the potential to dramatically alter international relations and the balance of power. There is an emerging body of study on how AI could affect the composition and effectiveness of international law and the stability of international affairs.

Margaret Levi, professor, political science, Stanford University. Levi researches how AI may affect the development of international affairs, and is concerned that it may be misused by nefarious actors.

Greg Allen, adjunct fellow, Center for a New American Security. Published analysis on national security implications of AI. He proposes developing AI war games and strategic assessments by the DoD, focusing on counter-AI capabilities. Allen also looks at what applications of AI should be restricted by treaties, and how to establish AI-oriented bureaucracies.

Adam Segal, Chair in Emerging Technologies and National Security, Council on Foreign Relations. Segal researches AI, cyber warfare and foreign policy, focusing on hacking related to US-Asia relationships.

AI and Intelligence Gathering

AI has the potential to transform intelligence gathering for governments around the world. It lowers the barrier to analyze large swaths of data and produce coherent intelligence.

Robert Cardillo, National Geospatial-Intelligence Agency (NGA). NGA plans to automate much of the analysis done by humans to collect and analyze images from drones and satellites.

AI and policing

AI enables police surveillance at scale: “matching thousands of photos from social media against photos from drivers’ license databases, passport databases, and other sources, then taking the results and crossing them with other kinds of records.”

The ACLU revealed and The Verge reported that police in Baltimore used a social media monitoring tool called Geofeedia, together with facial recognition and photographs shared on Instagram, Facebook, and Twitter, to identify and arrest protesters. The ACLU believes this tool is marketed to target people of color. There are concerns that law enforcement image databases may be biased and contain images from non-criminal citizens.

The ACLU suggests social media companies adopt “clear, public, and transparent policies to prohibit developers from exploiting user data for surveillance.”

We need to develop standards for what is acceptable for law enforcement use of big data and AI, and how they will be held accountable for abuse.

IBM built a big data AI tool for determining the probability of refugee terrorist activity based on unstructured data. There are concerns about how individuals would know they are being evaluated by such systems, whether there are due process protections, and whether they have been validated as accurate.

Sandra Wachter, postdoctoral researcher, Oxford Internet Institute. Focuses on government surveillance, predictive policing, and human rights. She wants to design algorithms to ensure fairness and transparency, as well as accountability of automated decision making systems.

Pre-crime

Companies like Hitachi are launching crime-prediction software, and there are concerns this could be used to arrest people before they have acted. Colorado-based Intrado sells police a service that instantly scans legal, business and social-media records for information about persons and circumstances that officers may encounter when responding to a 911 call at a specific address. AI can also predict suicide attempts and perhaps may be used to intervene.

Economic and Humanitarian Issues

AI, automation and jobs

Some argue that AI-enabled automation is different than past industrial revolutions and will result in mass blue and white collar unemployment. Others argue that the nature of work will change but the numbers of jobs will not. Some argue AI will help mid-skill workers succeed in now-unfilled high-skill jobs. Most argue that economies need improved education and skill-building programs and enhanced job transition programs for people displaced by new technologies. Governments, industry and society will need to create new programs, regulations and standards to adjust for disruptions.

We may have assumptions on AI and jobs backwards. AIs actually have a very difficult time learning how to do basic activities like walk. This is because nature spent billions of years evolving this into our DNA. Children can walk well by age two. But humans need 20 years of education to learn professions like law or medicine. Is this because professions are more complex than physical movement? Or is it because intelligence is a relatively recent development evolutionarily and it is hard for humans? Perhaps AI will continue to have difficulty mastering physical activities but will do well at knowledge work.

Irving Wladawsky-Berger, digital innovation advisor for Citigroup. Irving researches the impact of automation on the job market, believing AI will have a positive long-term effect.

Peter Stone, professor, UT Austin’s AI Lab. Stone believes AI will threaten existing jobs, but also provide new opportunities over time.

Daniel Araya, tech innovation public policy expert. Believes the government should take a large role in mitigating the effects of technological disruption.

Katja Grace, leading research at the Machine Intelligence Research Institute. She focuses on the time frame for human-level machine intelligence to be reached.

Angelica Lim, SoftBank Robotics researcher. Lim focuses on making robots which can imitate human emotion to make them more accessible to actual humans.

David Kenny, general manager, IBM’s Watson. Focuses on AI services for the general population. He has developed AI for medical applications and is pursuing general intelligence.

AI and economic inequality

AIs will likely create outsized economic gains for their creators. This could push additional income toward capital rather than labor and result in increasing economic inequality. Rising incomes and the emergence of the global middle class over the last thirty years have been correlated with increasing economic and political liberalization. Increased inequality threatens these trends and could promote populist backlashes.

Cathy O’Neil, former director of Lede Program in Data Practices, Columbia University. Focuses on how big data and AI perpetuate inequality due to biased inputs and decision making processes which do not take into account existing inequalities. She is the author of “Weapons of Math Destruction.”

Daron Acemoglu, economics professor, MIT. His paper “Robots and Jobs: Evidence from US Labor Markets” provides one of the most compelling cases that automation has put downward pressure on wages and employment, and that this displacement effect has been occurring since the 1990s. He argues that policymakers should respond to the effects of robot-induced job displacement, since it is unlikely the market will self-correct to create equitable economic opportunities.

AI and disincentives to innovation

Standards could benefit large, first movers and stifle innovation. Standards could also promote interoperability and ‘safe’ AI systems.

Open-source AI systems could stifle innovation. For example, TensorFlow is very useful but could create a homogenized group of practitioners. Or the availability of these systems could promote innovation. The balance is unknown.

In the United States, AI technologists are getting huge private sector offers out of college. This disincentivizes them from entering academia, where they may struggle to repay student loans, or startups, where the risk and uncertainty are much higher. Populist immigration restrictions may also inhibit the availability of AI talent, which is in short supply.

AI for Development

Ai-d.org is a non-profit organization established to support research on AI for Development (AI-D). A focal point of current AI-D efforts is the coalescence and distribution of data sets in support of research.

Patrick Meier regularly blogs about drones for humanitarian activities.

AI and the Law

The International Association for AI and the Law is a nonprofit association devoted to promoting research and development in the field of AI and Law, with members throughout the world. IAAIL organizes a biennial conference (ICAIL), which provides a forum for the presentation and discussion of the latest research results and practical applications and stimulates interdisciplinary and international collaboration.

The International Bar Association issued a report detailing the gap between current legislation and new laws necessary for an emerging workplace reality. The IBA Global Employment Institute report assesses the law at different points in the automation cycle – from the developmental stage, when computerisation of an industry begins, to what workers may experience as AI becomes more prevalent, through to issues of responsibility when things go wrong.

Due process

Machine Bias in criminal sentencing: COMPAS, an AI risk-assessment tool used in sentencing, consistently scores black defendants as greater risks for re-offending than white defendants who committed similar or more serious crimes.

Legal liability

Reasonable foreseeability is a key factor for negligence. How do you determine whether an AI’s actions were reasonably foreseeable when machine-learning systems learn and adapt, and could produce results the developer didn’t anticipate? There may also be multiple AIs interacting in unexpected ways.

The EU Parliament asked the EC to propose liability rules on AI and robotics, and recommended a code of ethical conduct.

AI, Government and Regulation

There is a growing sense that some governments are not able to cope with today’s challenges. AI could help governments manage the growing difficulty of analysis and decision-making in an increasingly complex world. Or AI advances in the private sector could expose government’s shortcomings if it isn’t able to adapt to the future.

Government Personalization vs Equality and Privacy

How do we do personalization for public service - especially when the public may expect it - when the very premise of democracy is that you treat everyone equally? We may need mechanisms to ‘do unto others as they would have you do unto them’ but this requires much more user control over their data and the level of personalization desired. This is a broad social conversation that needs to occur.

The White House released the report “Preparing for the Future of Artificial Intelligence” and a companion “National Artificial Intelligence Research and Development Strategic Plan” in 2016. The White House also co-hosted public workshops on AI policy areas and requested information from the public on AI issues.

Japan has pushed for basic rules on AI at the G7 meetings in 2016.

South Korea is developing a robot ethics charter.

U.S. Congress

The U.S. Senate Committee on Commerce, Science, and Transportation oversees AI through its Subcommittee on Space, Science and Competitiveness. The subcommittee held a hearing in November 2016 on AI called the “Dawn of Artificial Intelligence.” The hearing explored how AI affects public policy and the economy.

Department of Defense

DoD’s Defense Science Board conducted an “Autonomy” study on military implications of AI.

AI and wildlife/environmental management

Bradley Cantrell, Laura J. Martin and Erle C. Ellis explore the use of AI to manage wildlife. Maintaining wild places increasingly involves intensive human interventions. Several recent projects use semi-automated mediating technologies to enact conservation and restoration actions, including re-seeding and invasive species eradication. Could a deep-learning system sustain the autonomy of nonhuman ecological processes at designated sites without direct human interventions?

A good overview in The Atlantic. AI could also be used to manage river systems, and groups are developing drones that can plant trees, artificial pollinators, swarms of oceanic vehicles for cleaning up oil spills, and an autonomous, weed-punching farm-bot.

AI, Human Interactions, Society and Humanity

Instead of humans programming software, AI bots may shape culture and thereby program human beings through the manipulation of our information space.

AI and affective computing

Machines are becoming effective at both portraying realistic human emotions and detecting human emotions in video, text and speech. This could enable better human-computer interactions, but could also be used to manipulate people. Also, some people may not like emotional machines. Building morality and values into AI systems will be critical if we want their decisions to reflect our laws, policies, and virtues.

Soul Machines works on humanizing the interface between man and machines.

Affectiva is leading the effort to emotion-enable technology.

MIT has a number of projects looking at areas of affective computing.

AI and love

Dr. Julia Mossbridge, IONS Innovations Lab, leads work on developing AIs that have a loving, caring outlook towards human beings.

Matt McMullen’s quest for AI-enabled sex robots.

Matt Chessen explores whether AI partners will be preferable to humans and will contribute to human extinction.

AIs and education

Computing: The Human Experience, a project by Grady Booch and others on how computing has changed humanity.

Human-Computer Interactions:

ArticuLab, Human-Computer Interaction Institute at Carnegie Mellon University: focuses on human-computer interactions including AI

AIs and human dignity

The concern is that people may simply take orders from their AI system that is directing an enterprise. How do we preserve human dignity so humans and AIs work together, and workers are not simply minions for AIs that make extremely complex business decisions?

What are the ethics of hiring human beings to work jobs specifically so AIs can learn how to do the job and replace them? Are these silent workers protected? Are US companies utilizing fair labor practices when outsourcing these services from abroad?

Humans may not treat their human-like AIs well. How does this negative behavior translate over into interactions with human beings? And what are the implications when a human is in the loop with the AI system and must face hidden abuse? Is abuse to AI systems a marker for mental health issues or potential abuse elsewhere?

Some principles: Treat bots like you would treat a human being. There may be a human being on the other end of the bot curating its behavior. Your speech may also be training the bot how to interact with other people. (See Tay for how this can go wrong).

What are the implications when AI virtual agents further shield us from interactions with other human beings?

AI and Human Judgement

If machines are in the loop for decision-making, do we undermine human beings’ ability to think critically about the facts?

Rights for AI systems

The issue is whether sentient or human-like AI systems deserve any rights. This is rather speculative, since AIs are nowhere near this level of capability.

AIs transforming what it means to be human

Elon Musk says humans need to become cyborgs to stay relevant.

Anupam Rastogi argues that what we call AI is actually ‘Intelligence Augmentation’ for humans and true AI is still in the future.

Richard Granger, Director, Brain Engineering Laboratory, Dartmouth. Granger has created algorithms that simulate circuits in the human brain. This may allow scientists to more accurately understand the brain’s functions.

AI and addiction

AIs could make things like social media and video games more addictive due to psychometric personalization and machine learning.

AI Policy events

2016

Overview of the US White House AI workshops:

February 22, 2016, San Francisco, California, Workshop on the Ethics of Online Experimentation: This workshop aims to draw together researchers from inside and outside of the computer science community to jointly identify and discuss the ethical issues raised by the specific kinds of experiments that are a routine part of running a production online service.

May 24, 2016: Legal and Governance Implications of Artificial Intelligence in Seattle, WA

June 7, 2016: Artificial Intelligence for Social Good in Washington, DC

June 28, 2016: Safety and Control for Artificial Intelligence in Pittsburgh, PA

July 7: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term in New York City

“CCITT/ITU-T 60th Anniversary Talks on Artificial Intelligence (AI)”, ITU WTSA-16 (26 October 2016 - Hammamet, Tunisia)

“AI: is the future finally here?”, ITU Telecom World 2016 (16 September 2016 - Bangkok, Thailand)

“Artificial Intelligence for a sustainable future: friendly companion or threatening conqueror?”, ITU Kaleidoscope 2016 Jules Verne’s corner (14-16 September 2016 - Bangkok, Thailand)

November 16-19 Data Transparency Lab 2016: The DTL is an inter-institutional collaboration, seeking to create a global community of technologists, researchers, policymakers and industry representatives working to advance online personal data transparency through scientific research and design.

November 18, New York, NY 3rd Workshop on Fairness, Accountability, and Transparency in Machine Learning: FAT/ML Co-located with Data Transparency Lab 2016. This workshop aims to bring together a growing community of researchers and practitioners concerned with fairness, accountability, and transparency in machine learning.

November 19, 2016, New York University Law School, Workshop on Data and Algorithmic Transparency (DAT'16). The Workshop on Data and Algorithmic Transparency (DAT'16) is being organized as a forum for academics, industry practitioners, regulators, and policy makers to come together and discuss issues related to the increasing role that "big data" algorithms play in our society.

Princeton Envision Conference: A student-run conference to bring together current and future leaders in harnessing technology for a brighter future; sub-section on AI. Dec 2–4, 2016 Princeton University

8 December, 2016 Machine Learning and the Law NIPS Symposium Barcelona, Spain:  This symposium will explore the key themes of privacy, transparency, accountability and fairness specifically as they relate to the legal treatment and regulation of algorithms and data. Our primary goals are (i) to inform our community about important current and ongoing legislation (e.g. the EU’s General Data Protection Regulation); and (ii) to bring together the legal and technical communities to help form better policy in the future.

December 12, 2016 - Barcelona The 1st IEEE ICDM International Workshop on Privacy and Discrimination in Data Mining

2017

January 5-8, Asilomar, CA, Beneficial AI: conference hosted by the Future of Life Institute. The 2017 conference produced the Asilomar AI Principles, ranging from research strategies to data rights to future issues including potential super-intelligence. (Summary version)

January 19-20, 2017, Philadelphia Fairness for Digital Infrastructure at UPenn.

4th February 2017, 3rd International Workshop on AI, Ethics and Society, San Francisco, USA: The focus of this workshop is on the ethical and societal implications of building AI systems.

Feb 19-20, Oxford. Bad Actors and AI Workshop: FHI hosted a workshop on the potential risks posed by the malicious misuse of emerging technologies in machine learning and artificial intelligence.

April 4 Valencia, Spain Ethics in NLP workshop at EACL 2017 focuses on ethics issues surrounding natural language processing

April 4, Perth Australia, FAT/WEB: Workshop on Fairness, Accountability, and Transparency on the Web The objective of this full day workshop is to study and discuss the problems and solutions with algorithmic fairness, accountability, and transparency of models in the context of web-based services.

May 17-19, 2017,  Phoenix, AZ The Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics. The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics.

June 7-9, 2017 in Geneva AI for Good Global Summit: ITU and XPRIZE organized event for industry, academia and civil society work together to evaluate the opportunities presented by AI, ensuring that AI benefits all of humanity. The Summit aims to accelerate and advance the development and democratization of AI solutions that can address specific global challenges related to poverty, hunger, health, education, the environment, and others.

July 15-19 2017, Workshop on Fairness, Accountability and Transparency in AI and Big Data – Singapore (FAT-SG), School of Computing, National University of Singapore. URL: http://www.fat-sg.org

August 14 2017, Halifax, Nova Scotia, Canada 4th Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017) Co-located with 23rd SIGKDD conference on Knowledge Discovery and Data Mining (KDD 2017)

September 6-7, 2017: Data for Policy 2017 is a conference held in London hosted by the UK Government Data Science Partnership, the third in a series of annual conferences on this topic. This year’s theme highlights ‘Government by Algorithm?’.

September 17–20, O'Reilly Artificial Intelligence Conference in San Francisco.

November 1, CNAS Artificial Intelligence and Global Security Summit. See video here.

November 13 - 17, UN Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS) in Geneva. Paul Scharre, Senior Fellow and Director of the Technology and National Security Program will discuss how to address autonomous weapons.

December 6-7, 2017 – New York, NY – The AI Summit. An AI event focused on business to determine what AI means for enterprises. It is partnered with IBM, Microsoft, Amazon, Google, Facebook and others.

 

December 11-13, 2017 – Boston, MA – AI World Conference and Expo Focused on the business and technology of AI in enterprise. Designed for business executives to understand AI in business.

 

December TBD – California TBD - Global AI Dialogue Series (in collaboration with the World Economic Forum (WEF))

 

2018

January 2018 – Davos – World Economic Forum (not yet clear whether AI will be a focus)

February 2-3, 2018, AAAI/ACM Conference on AI, Ethics and Society in New Orleans. A multi-disciplinary event combining perspectives from politics, philosophy, economics, law, and other fields to address AI in a scientific context.

February 2-7, 2018 – New Orleans – AAAI Conference 2018. To promote research in AI and scientific exchange.

February 15-19, 2018 – Austin, Texas – AAAS Annual Meeting. Includes Seminar on the “Future of Artificial Intelligence” on February 17.

 

March 26-28, 2018 - Palo Alto, California - AAAI Spring Symposium Series. In cooperation with Stanford’s Computer Science Dept, including eight symposia on social and scientific aspects of AI.

 

May 14-18, 2018 - WTISD on Enabling the Positive use of AI for All, Geneva

 

May 15-17, 2018 – 2nd AI for Global Good Summit (at ITU headquarters), Geneva

Summer - ICWSM 18, TBD

 

Fall TBD – First meeting of the partners of the Partnership on AI to Benefit People and Society

AI Policy Organizations

The World Economic Forum’s Council on the Future of AI and Robotics will explore how developments in Artificial Intelligence and Robotics could impact industry, governments and society in the future, and design innovative governance models that ensure that their benefits are maximized and the associated risks kept under control.

Data & Society's Intelligence and Autonomy Initiative develops policy research connecting the dots between robots, algorithms and automation. Our goal is to reframe debates around the rise of machine intelligence.

AI Now Initiative: Led by Kate Crawford and Meredith Whittaker, AI Now is a New York-based research initiative working across disciplines to understand AI's social impacts. The AI Now Report provides recommendations that can help ensure AI is more fair and equitable.

The USC Center for Artificial Intelligence in Society’s mission is to conduct research in Artificial Intelligence to help solve the most difficult social problems facing our world.

Berkman Klein Center for Internet and Society at Harvard University: The Berkman Klein Center and the MIT Media Lab will act as anchor academic institutions for the Ethics and Governance of Artificial Intelligence Fund and develop a range of activities, research, tools, and prototypes aimed at bridging the gap between disciplines and connecting human values with technical capabilities. We will work together to strengthen existing and form new interdisciplinary human networks and institutional collaborations, and serve as a collaborative platform where stakeholders working across disciplines, sectors, and geographies can meet, engage, learn, and share.

The Stanford One Hundred Year Study on Artificial Intelligence, or AI100, is a 100-year effort to study and anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live and play.

MIT Media Lab, AI, Ethics and Governance Project: The project will support social scientists, philosophers, and policy and legal scholars who undertake research that aims to impact how artificial intelligence technologies are designed, implemented, understood, and held accountable.

The MIT Laboratory for Social Machines develops data science methods — primarily based on natural language processing, network science, and machine learning — to map and analyze social systems, and designs tools that enable new forms of human networks for positive change.

MIT Solid (derived from "social linked data") is a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles. Solid is modular and extensible and it relies as much as possible on existing W3C standards and protocols.

The Partnership on AI: Established to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society. Partners include Apple, Amazon, Facebook, Google, Microsoft, IBM, ACLU and OpenAI.

OpenAI is a non-profit artificial intelligence research company. Their mission is to build safe AI and ensure AI's benefits are as widely and evenly distributed as possible, advancing digital intelligence in the way that is most likely to benefit humanity as a whole.

University of Wyoming Evolving AI Lab: focuses on evolution in AI and other bio-inspired techniques

The Future of Life Institute’s mission is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges. Hosts the Beneficial AI Conference.

The Machine Intelligence Research Institute is a research nonprofit studying the mathematical underpinnings of intelligent behavior. Our mission is to develop formal tools for the clean design and analysis of general-purpose AI systems, with the intent of making such systems safer and more reliable when they are developed.

The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems: The purpose of this Initiative is to ensure every technologist is educated, trained, and empowered to prioritize ethical considerations in the design and development of autonomous and intelligent systems. It serves as an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for the ethical implementation of intelligent technologies.

AI Now researches the social impacts of artificial intelligence to ensure a more equitable future. Its initial workshop produced a report that summarizes many of the key social and economic issues with AI.

Fairness, Accountability and Transparency in Machine Learning community.

UNICRI is creating a Centre for AI and Robotics in The Hague, headed by Irakli Beridze.

The Allen Institute for Artificial Intelligence: AI2, founded by Paul Allen and led by Dr. Oren Etzioni, conducts high-impact research and engineering to tackle key problems in artificial intelligence.

The Future of Humanity Institute (FHI) houses the Strategic AI Research Centre, a joint Oxford-Cambridge initiative developing strategies and tools to ensure artificial intelligence (AI) remains safe and beneficial.

The Cambridge Centre for the Study of Existential Risk: Its goals are to significantly advance the state of research on AI safety protocol and risk, and to inform industry leaders and policy makers on appropriate strategies and regulations to allow the benefits of AI advances to be safely realised.

The Alan Turing Institute Data Ethics Group: The group will work in collaboration with the broader data science community and will support public dialogue on relevant topics; there will be open calls for participation in workshops later this year, as well as public events.

Leverhulme Centre for the Future of Intelligence: Our mission at the Leverhulme Centre for the Future of Intelligence (CFI) is to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal: to work together to ensure that we humans make the best of the opportunities of artificial intelligence as it develops over coming decades.

AI Austin: Encouraging practical and responsible design, development and use of artificial intelligence to expand opportunities and minimize harm in both local and global communities.

UC Berkeley Center for Human-Compatible AI: The goal of CHAI is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.

Effective Altruism Foundation: Focuses on AI and human interactions. Their research suggests AI developers must take the lead to ensure that AI is humane.

Industry organizations

The Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence.

The Computing Research Association is a computer science trade group that has a strong interest in AI policy. CRA pursues AI initiatives through its Computing Community Consortium.

Major Corporate Researchers

Facebook AI Research (FAIR) seeks to understand and develop systems with human-level intelligence by advancing the longer-term academic problems surrounding AI.

Google: TensorFlow: An open-source software library for machine intelligence. DeepMind: We’re on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be taught how.

IBM: Cognitive Computing and IBM Watson technologies: Watson products and APIs can understand all forms of data to reveal business-critical insights, and bring the power of cognitive computing to your organization.

Microsoft: AI research group and a major focus on chatbot technology and frameworks. CNTK is an open source deep learning framework.

Amazon: Amazon AI services bring natural language understanding (NLU), automatic speech recognition (ASR), visual search and image recognition, text-to-speech (TTS), and machine learning (ML) technologies within the reach of every developer.

AI News Sites

Scout.ai combines science fiction and journalism to bring you frequent online dispatches on the future of technology.

Kurzweilai.net covers AI and emerging technologies.

Import AI is curated by Jack Clark (now with OpenAI) and is a weekly newsletter of AI tech and policy developments.

Law and AI: A Blog Devoted to Examining the Law of Artificial Intelligence, AI in Law, and AI Policy

Singularity Hub AI Archives: news about technology and policy

TopBots is an AI and Bot focused newsletter that covers products and industry development.

Venturebeat has a bot section that also covers artificial intelligence topics.  

Wired’s AI tag covers a wide variety of AI topics.

----

About the authors:

Matt Chessen is a State Department Foreign Service Officer on a fellowship at the George Washington University, where he is studying artificial intelligence. Any opinions in this document have been collated from other sources or are personal views and do not represent the opinions of the U.S. Government, Department of State or any other organization.

Brent Eastwood is an entrepreneur and professor at George Washington University. He is the founding principal of GovBrain, an AI prediction company that links government information and political events from around the world to individual stocks, bonds, commodities and currencies.