IEEE Position Statement
Artificial Intelligence
Approved by the IEEE Board of Directors (24 June 2019)
Artificial Intelligence (AI) has been defined in many ways. One definition is “Artificial Intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” [1] Regardless of the exact definition, artificial intelligence involves computational technologies that are inspired by – but typically operate differently from – the way people and other biological organisms sense, learn, reason, and take action.
Applications of artificial intelligence increasingly affect every aspect of society, including defense and national security, civil and criminal justice systems, commerce, finance, manufacturing, health care, transportation, education, entertainment, and social interactions. Applications such as these are expanding through the combination of advanced processors, large datasets, and new algorithms. By one estimate, AI will contribute about $13 trillion to global GDP by 2030. [2]
[1] “Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence,” 2016, http://ai100.stanford.edu/2016-report. Other examples include the following: IEEE-USA defines AI as “the theory and development of computer systems that are able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, learning, decision-making, and natural language processing.” IEEE-USA Position Statement, “Artificial Intelligence Research, Development and Regulation,” February 10, 2017, https://ieeeusa.org/wp-content/uploads/2017/10/AI0217.pdf; India’s National AI Strategy Discussion Paper defines AI as “a constellation of technologies that enable machines to act with higher level of intelligence and emulate human capabilities of sense, comprehend (sic), and act (sic).” NITI Aayog, National Strategy for Artificial Intelligence, Discussion Paper, June 2018, http://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf. Google’s chief executive officer defined AI as “computer programming that learns and adapts.” Sundar Pichai, “AI at Google: Our Principles,” June 7, 2018, https://blog.google/topics/ai/ai-principles/.
[2] Jacques Bughin et al., “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy,” McKinsey Global Institute Discussion Paper, September 2018, https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Notes%20from%20the%20frontier%20Modeling%20the%20impact%20of%20AI%20on%20the%20world%20economy/MGI-Notes-from-the-AI-frontier-Modeling-the-impact-of-AI-on-the-world-economy-September-2018.ashx.
As artificial intelligence becomes a greater part of our everyday lives, it becomes increasingly important to manage its rewards and risks, build trust in AI-enabled systems, and integrate ethical considerations into designs. [3] This can best be done through ongoing engagements between policy makers and technologists, aimed at encouraging and stimulating the development of artificial intelligence while protecting the interests of the public. [4][5][6]
To ensure that artificial intelligence serves the interests of society, IEEE urges governments to adopt policies that:
1. Increase AI technical expertise within governments and foster greater government access to academic and private-sector technical expertise.
a. Governments can take various approaches to increasing internal technical expertise. They can train current employees in AI or recruit people with AI expertise into existing positions; establish new permanent offices and positions with a specific focus on AI technical expertise; and provide support for programs that temporarily place academic or private-sector technical experts in government positions. [7]
[3] See, as examples: “Ethically Aligned Design” at https://ethicsinaction.ieee.org/; Future of Life Institute’s “Asilomar AI Principles” at https://futureoflife.org/ai-principles/; “Montreal Declaration for a Responsible Development of Artificial Intelligence” at http://nouvelles.umontreal.ca/en/article/2017/11/03/montreal-declaration-for-a-responsible-development-of-artificial-intelligence/; “Toronto Declaration: Protecting the Rights to Equality and Non-Discrimination in Machine Learning Systems” at https://www.accessnow.org/the-toronto-declaration-protecting-the-rights-to-equality-and-non-discrimination-in-machine-learning-systems/; “Partnership on AI” at https://www.partnershiponai.org/; “OpenAI” at https://openai.com/about/.
[4] Public policies include laws, government regulations, and non-regulatory mechanisms such as subsidies and government purchases.
[5] See, for example, “Annex B: G7 Innovation Ministers’ Statement on Artificial Intelligence,” Montreal, Canada, March 2018, http://www.g8.utoronto.ca/employment/2018-labour-annex-b-en.html.
[6] See J. Holdren and M. Smith, “Preparing for the Future of Artificial Intelligence,” Executive Office of the President, National Science and Technology Council, 2016, https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf; and “Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence,” 2016, http://ai100.stanford.edu/2016-report.
[7] US examples include the temporary placement of academic personnel in government positions under the Intergovernmental Personnel Act, Office of Personnel Management, https://www.opm.gov/policy-data-oversight/hiring-information/intergovernment-personnel-act/#url=Provisions; and the Science and Technology Policy Fellowships program, American Association for the Advancement of Science, http://www.aaas.org/program/science-technology-policy-fellowships.
b. Government agencies and offices should establish relationships with technical experts outside government to complement increased technical expertise within government. Non-government experts can serve on advisory committees [8], be engaged in public hearings, participate jointly with government experts in workshops on issues relevant to AI, and respond to requests for technical analyses of potential public policy interventions.
2. Support the R&D needed to advance innovation and development in artificial intelligence and its application to benefit humanity.
a. Provide and stimulate R&D investment in artificial intelligence. Current progress in AI makes it timely to focus public and private R&D investment on more capable AI systems in all areas of application, and on maximizing societal benefits while mitigating associated risks.
b. Promote and fund interdisciplinary research on societal implications of artificial intelligence. Research topics include ethics, safety, privacy, fairness and algorithmic bias, liability, explainability, and trustworthiness of AI technology. Societal aspects should be addressed not only at the academic level but also by involving business, policymakers, civil society, and other stakeholders in trials and demonstration projects.
c. Remove impediments to third-party research on fairness and algorithmic bias, security, privacy, and social impacts of Artificial Intelligence systems. Some interpretations of existing laws are ambiguous regarding whether and how proprietary AI systems may be “reverse-engineered” and evaluated by third parties, such as academics, journalists, and other researchers. Enabling such research requires careful consideration of trade-offs between potentially competing values, such as transparency and the protection of trade secrets, or access to data and individual privacy. Nevertheless, third-party research is needed if AI systems are to be properly vetted and held accountable.
[8] Australian government ministers can bring in outside experts through boards and advisory committees, https://www.directory.gov.au/boards-and-other-entities. Canada has a Chief Science Advisor to the Prime Minister, who can form committees of experts on specific topics, https://pm.gc.ca/eng/news/2017/09/26/chief-science-advisor. The Singapore government consults with members of the community through various advisory committees; see, e.g., https://www.imda.gov.sg/regulations-licensing-and-consultations/content-standards-and-classification/consultation-with-committees.
d. Encourage and fund test and evaluation laboratories. Evaluation laboratories can provide scientifically sound testing environments for AI-enabled systems and processes. Such environments can, in turn, be used to develop scientifically sound protocols for evaluating AI-enabled systems and processes and for collecting the data necessary for evidence-based decision-making.
3. To ensure public welfare, provide an effective legal and regulatory framework for AI development, application, use, and monitoring.
a. Create an appropriate mechanism to determine how AI technology should be coordinated and regulated. This mechanism can take different organizational forms, such as an intergovernmental taskforce or a special commission. However constituted, the body should seek input from a range of expert stakeholders, including academia, industry, civil society, and government, as it considers questions related to the governance and safe deployment of AI. It should consider societal implications; public engagement; appropriate levels of public investment; economic and national security impacts; transparency, accountability, and explainability; trust and safety assurance; ethical principles; and legal and regulatory compliance.
b. Develop protocols for field testing systems employing artificial intelligence. Engineers need field trials to test AI systems in a public setting to determine their safety and effectiveness, to gather data, and to let the systems learn to operate in public. But field testing of AI systems can pose a risk to the public, one that the public may not recognize it is accepting. The necessary protocols would be similar to clinical trial protocols and would serve a similar purpose.
c. Ensure that AI regulations always comply with human rights laws and prioritize the protection of personal data relating to the individuals coming into contact with AI systems or algorithms.
d. Ensure that intellectual property rights laws account for the unique characteristics of AI. AI has the potential both to infringe on intellectual property (IP) and to generate outputs that may themselves warrant IP protection.
e. Ensure that liability laws account for the inclusion of AI in systems and products.
f. Recommend the application of international system and software engineering standards in relevant regulatory frameworks, at least to assure fail-safe operations where health and safety are concerned.
4. Support and fund AI education and training to meet future workforce needs.
a. Support and fund education for AI technical expertise. The extraordinary growth in AI has created public- and private-sector demand for knowledgeable personnel who have both technical expertise and ethical and cultural awareness. Addressing workforce needs will help maintain technological competitiveness and ensure that the skills acquired by the workforce remain relevant in the future.
b. Encourage the development of credentials for creators and operators of AI-enabled systems and processes that affect individual life, rights, liberty, privacy, or right to opportunity. Both creators and operators of AI-enabled systems need to be able to demonstrate that they understand the operating parameters, appropriate uses, and limitations of such systems. Credentials would build trust that AI-enabled processes can reliably, repeatably, and predictably produce the desired outcomes.
c. Support and fund retraining opportunities for people whose jobs are affected by AI. AI is disrupting existing industries, often reducing jobs or economic strength in those industries. Educational, training, and development strategies need to be designed for jobs that are changed as a result of AI, including jobs that can take advantage of the division of labor between humans and machines.
5. Facilitate public understanding and discourse about AI.
a. Develop strategies for informing and engaging the public on AI policies, as well as the benefits, risks, and challenges of AI applications. This will be critical to creating an environment conducive to effective decision making, particularly as more government services come to rely on AI. Public opinion related to trust, safety, privacy, employment, society, and the economy will drive public policy.
b. Promote artificial intelligence literacy among the general population. Include education about AI in curricula at all levels.
IEEE believes that AI systems hold great promise to benefit society, but they also present serious social, legal, and ethical challenges, along with new requirements to address systemic risk, diminishing trust, privacy challenges, and issues of data transparency, ownership, and agency. Our recommendations and commitments related to the ethical aspects of AI systems are addressed in a separate IEEE Position Statement entitled “Ethical Aspects of Autonomous and Intelligent Systems.”
About IEEE
IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice in a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics.
Recognizing that artificial intelligence can profoundly transform industries, economies, and societies, IEEE will continue to encourage, facilitate, and support meaningful discussions of its development and societal implications.
Related Resources from IEEE
IEEE-USA Position Statement entitled “Artificial Intelligence Research, Development and Regulation,” issued 10 February 2017 and available at https://ieeeusa.org/wp-content/uploads/2017/10/AI0217.pdf. This statement addresses issues from a U.S. perspective and is intended as input to U.S. policymakers.
IEEE European Public Policy Committee Position Statement entitled “Artificial Intelligence: Calling on Policy Makers to Take a Leading Role in Setting a Long-Term AI Strategy,” issued 15 October 2017 and available at http://globalpolicy.ieee.org/wp-content/uploads/2017/10/IEEE17021.pdf. This statement addresses issues from the European perspective and is intended as input to European Union policymakers.
“Ethically Aligned Design, First Edition,” published 25 March 2019 and available at https://ethicsinaction.ieee.org/. This document is the result of a study conducted by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The Initiative brings together several hundred participants from six continents, representing academia, industry, civil society, and government “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”
The IEEE Standards Association is developing the P7000 series of standards focusing on ethical considerations in Autonomous/Intelligent Systems (A/IS), including:
• IEEE P7000 (Model Process for Addressing Ethical Concerns During System Design) [9]
• IEEE P7001 (Transparency of Autonomous Systems) [10]
• IEEE P7002 (Data Privacy Process) [11]
• IEEE P7003 (Algorithmic Bias Considerations) [12]
• IEEE P7004 (Standard for Child and Student Data Governance) [13]
• IEEE P7005 (Standard for Transparent Employer Data Governance) [14]
• IEEE P7006 (Standard for Personal Data Artificial Intelligence (AI) Agent) [15]
[9] https://standards.ieee.org/develop/project/7000.html
[10] https://standards.ieee.org/develop/project/7001.html
[11] https://standards.ieee.org/develop/project/7002.html
[12] https://standards.ieee.org/develop/project/7003.html
[13] https://standards.ieee.org/develop/project/7004.html
[14] https://standards.ieee.org/develop/project/7005.html
[15] https://standards.ieee.org/develop/project/7006.html