1 of 21

AI Governance

2024

Chon, Kilnam

KAIST

2023.12.01, revised 2024.2.1

2 of 21

Contents

0. Three Laws of Robotics by Isaac Asimov, 1942

1. Introduction - Digital Space

2. History

3. AI Governance

4. Major AI Governance Topics

5. Future - AGI, and AGI Governance

6. Issues

7. Remarks

References

Appendix AI Governance Organizations

Appendix AI Governance Conferences

Appendix The Asilomar AI Principles


3 of 21

0. Three Laws of Robotics by Isaac Asimov, 1942

The First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

The Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

The Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
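Read as a program, the three laws form a strict priority ordering: each law binds only where no higher-ranked law is violated. Below is a minimal sketch of that precedence logic; the boolean action model and all names are illustrative assumptions, not anything from Asimov.

```python
# A toy encoding (hypothetical) of the Three Laws as a strictly ordered
# rule list: a proposed action is permitted only if no higher-priority
# law vetoes it.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False        # would the action injure a human?
    allows_harm: bool = False        # would inaction let a human come to harm?
    ordered_by_human: bool = False   # was the action ordered by a human?
    self_destructive: bool = False   # would the action destroy the robot?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (already known not to violate Law 1).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to Laws 1 and 2.
    return not action.self_destructive

print(permitted(Action(ordered_by_human=True)))                    # True
print(permitted(Action(ordered_by_human=True, harms_human=True)))  # False
```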


4 of 21

1. Introduction - Digital Space


5 of 21

2. History

a. History of AI (detailed diagram) by Danielle Williams.

b. The Future of Life Institute (FLI), through its Beneficial AI Conferences of 2015, 2017, and 2019, delivered the Asilomar AI Principles covering Research Issues, Ethics & Values, and Long-Term Issues.

c. More than 10 sets of AI principles have been developed around the world.

d. Governments started to get involved in AI governance in the 21st century, especially on safety issues including autonomous driving and AI-based weapons.


6 of 21

3. AI Governance

a. The Internet governance principles from the Geneva World Summit on the Information Society were reaffirmed in the Tunis Agenda in 2005:

the Internet has evolved into a global facility available to the public and its governance should constitute a core issue of the Information Society agenda. The international management of the Internet should be multilateral, transparent and democratic, with the full involvement of governments, the private sector, civil society and international organizations. It should ensure an equitable distribution of resources, facilitate access for all and ensure a stable and secure functioning of the Internet, taking into account multilingualism.

The definition was revised in 2014 in the NETmundial Multistakeholder Statement.

b. Many national governments became involved in AI governance, partly due to safety issues such as autonomous driving and AI-based weapons.

c. Ethics and safety have lately emerged as the major AI governance issues.


7 of 21

4. Major AI Governance Topics

4.1 Social Area

Ethics
AI Policy
AI Principles
Social and Economic Impact
Institutions

4.2 Technical Area

Safety
Security
Algorithms

4.3 Long-Term Area

Artificial General Intelligence (AGI)
Existential Risk


8 of 21

4.1 Social Area - Ethics

The Ethics Guidelines for Trustworthy AI, published by the EU High-Level Expert Group on AI in 2019, set out:

Components of Trustworthy AI

Lawful
Ethical
Robust

Requirements

Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination and fairness
Societal and environmental wellbeing
Accountability


9 of 21

4.2 Technical Area - Safety

The Asilomar AI Principles state that “AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible” [Asilomar 2017]. The Future of Life Institute (FLI) organized its AI Safety Program with over 30 research institutions in the 2010s.
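One common way the “verifiably safe throughout the operational lifetime” idea is operationalized in practice is a runtime monitor that checks every model output against an explicitly checkable safety envelope. A minimal sketch, assuming a scalar action and a hypothetical fallback; nothing here comes from the Asilomar text itself.

```python
# A minimal sketch (hypothetical) of a runtime safety monitor: the model's
# action is used only if it stays inside an explicitly checkable safety
# envelope; otherwise a known-safe default applies.
def safe_act(model, observation, lower=-1.0, upper=1.0):
    """Return the model's scalar action if it lies inside [lower, upper];
    otherwise fall back to a conservative default (here: do nothing)."""
    action = model(observation)
    if lower <= action <= upper:  # the check itself is simple and verifiable
        return action
    return 0.0                    # conservative fallback

# Usage with stand-in "models":
print(safe_act(lambda obs: 0.5, None))  # 0.5 (inside the envelope)
print(safe_act(lambda obs: 7.3, None))  # 0.0 (vetoed by the monitor)
```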

The UK Government organized the AI Safety Summit with around 20 national governments and many more participants from industry in 2023, which delivered the Bletchley Declaration. Follow-on AI safety summits will be held in 2024.


10 of 21

4.2 Technical Area - Security

The security of AI systems themselves is very important, and AI technology is increasingly used to enhance security.

Cybersecurity with AI is very important but raises very difficult issues. Cybersecurity could be enhanced substantially with the proper application of AI technology, and some of the pending cybersecurity issues could be solved this way. On the other hand, the same or similar AI technologies could be abused.
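As an illustration of the defensive side, the sketch below flags anomalous login activity with a simple statistical detector. Real AI-for-security systems use far richer features and models; the data and threshold here are made up.

```python
# A toy anomaly detector (hypothetical data): flag hours whose login count
# deviates more than `threshold` standard deviations from the mean.
from statistics import mean, stdev

def flag_anomalies(logins_per_hour, threshold=3.0):
    """Return indices of hours with an anomalous login count (z-score test)."""
    mu, sigma = mean(logins_per_hour), stdev(logins_per_hour)
    return [i for i, x in enumerate(logins_per_hour)
            if sigma > 0 and abs(x - mu) / sigma > threshold]

history = [12, 9, 11, 10, 13, 8, 11, 10, 12, 9, 11, 240]  # last hour: spike
print(flag_anomalies(history))  # [11] -> the suspicious spike
```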

The Internet Governance Forum (IGF) organized session #33, “AI for Security,” in 2023.


11 of 21

4.2 Technical Area - Algorithm

AI algorithms can bias AI systems and cause harm when such systems are deployed in human society [AINOW 2018].

AI algorithms need to be as transparent as possible.

AI algorithms are closely related to the accountability and explainability of AI systems [Felton 2018; DARPA 2018; Kroll 2016].

If the data is biased, then the AI algorithm can be biased, as we have seen with face recognition algorithms.
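The face recognition case can be made concrete by auditing a classifier's error rates per demographic group. A minimal sketch with hypothetical predictions and labels; the numbers are invented for illustration only.

```python
# A toy per-group audit (hypothetical data): the same classifier can show
# very different false-positive rates across demographic groups when its
# training data under-represents one of them.
def false_positive_rate(predictions, labels):
    """FPR = wrongly flagged negatives / all true negatives."""
    fp = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 0)
    tn = sum(1 for p, y in zip(predictions, labels) if p == 0 and y == 0)
    return fp / (fp + tn) if fp + tn else 0.0

# Invented per-group predictions/labels from a face-recognition-style model:
group_a = ([0, 0, 1, 0, 0, 0, 0, 0, 0, 0], [0] * 10)  # FPR = 0.1
group_b = ([1, 1, 0, 1, 0, 1, 0, 0, 1, 0], [0] * 10)  # FPR = 0.5

for name, (preds, labels) in {"A": group_a, "B": group_b}.items():
    print(name, false_positive_rate(preds, labels))
# A 0.1 vs 0.5 disparity signals a biased system even if overall accuracy looks fine.
```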


12 of 21

4.3 Long Term Issues – Artificial General Intelligence

Artificial General Intelligence (AGI) needs special attention, as many scholars have raised issues regarding the proper handling of AGI technologies [Legg 2023].

Max Tegmark and Nick Bostrom, among others, wrote books that focus on AGI [Tegmark 2017; Bostrom 2014].

Stuart Russell drew an analogy between AGI and nuclear technology in the mid-20th century, and recommends preparing now even though we lack a good consensus on when AGI will be realized [Russell 2018].

One of the key issues is “What to do when AGI exceeds human-level intelligence?”

The Global AI Policy website of the Future of Life Institute (FLI) states: “AGI would encounter all the challenges of narrow AI, but would additionally pose its own risks such as containment.”


13 of 21

4.3 Long Term Issues – Existential Risks

The Centre for the Study of Existential Risk (CSER) at Cambridge University has cautioned about the existential risks of robots and AI, among other 21st-century risks [CSER 2019].

According to the website of the Berkeley Existential Risk Initiative (BERI), “The main strategy is to take on ethical and legal responsibility, as a grant-maker and collaborator, for projects deemed to be important for reducing existential risk” [BERI 2019].

The Future of Life Institute (FLI) covers the benefits and risks of AI on its Existential Risk webpage, and states in the AI Safety section of its AI Policy Challenges and Recommendations: “Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact” [FLI 2019].


14 of 21

5. Future – AGI, and AGI Governance

  • Artificial General Intelligence (AGI) – human-level intelligence and higher

  • “AGI Governance” – Can we govern AGI, can we co-exist with AGI, or will AGI govern us?

Remark: “Natural selection favors AIs over humans” by Dan Hendrycks (text and video).


15 of 21

6. Issues

a. Who receives the gains of AI (~20% of GDP in 20xx): the world, companies, or countries?

b. When will AGI reach human-level intelligence?

c. AI-based weapons


16 of 21

7. Remarks

The “AI Safety, Ethics and Society” virtual course is offered online at AIsafetyBook.com, with course material including the full textbook.


17 of 21

References

AAAI/ACM, AI, Ethics, and Society Summit.
AI Safety, Ethics & Society, CAIS, 2024. (with course; aisafetybook.com)
David Bray, “Governance of AGI when AI governance has not been figured out,” 2023.12.7.
Center for AI and Digital Policy (CAIDP).
Center for AI Safety (CAIS), Statement on extinction risk, 2023.
Paul Christiano, Alignment Research Center, YouTube, 2023.
EU, Ethics Guidelines for Trustworthy AI, 2019.
EU, AI Act, 2023.
Future of Life Institute, Asilomar AI Principles, 2017.
M. Harrison, “Huge proportion of internet is AI-generated slime,” 2024.1.19.
Dan Hendrycks, “Natural selection favors AIs over humans,” 2023. (text and video)
Dan Hendrycks, et al., “An overview of catastrophic AI risks,” 2023.
Henry Kissinger, et al., The Age of AI: And Our Human Future, 2021.
IGF, “Ethical principles for the use of AI in security,” session #33, 2023.10.11.


18 of 21

References(continued)

Shane Legg, Path to AGI, 2023.11. (and AGI by 2028)
Darren McKee, Uncontrollable (Superintelligence), YouTube, 2023. (book, 2024)
Melanie Mitchell, Future of AI (and Past), Santa Fe Institute, 2023.
NETmundial Multistakeholder Statement, 2014.
Singapore Government, “Singapore proposes framework to foster trusted generative AI development,” 2024.1.16. (& Model AI Governance Framework)
Stuart Russell, Human Compatible, 2019.
UK, AI Safety Summit, 2023.
WEF, AI Governance Alliance.
White House, Executive Order on Safe, Secure and Trustworthy AI, 2023.
Danielle Williams, History of AI (diagram), daniellewilliams.com, 2023.12.
WSIS, Tunis Agenda for the Information Society, 2005.
XiaoIce, ChiAI Newsletter #247, 2023.12.11.


19 of 21

Appendix AI Governance Organizations

CAICT, Blue paper report on large model governance, 2023
Center for AI Safety (CAIS)
Centre for the Governance of AI, Future of Humanity Institute, Oxford
Centre for the Study of Existential Risk (CSER), Cambridge
Future of Life Institute (FLI), USA
Institute for AI International Governance, Tsinghua University
Leverhulme Centre for the Future of Intelligence, Cambridge
Machine Intelligence Research Institute (MIRI), Berkeley
AI Governance Alliance, WEF


20 of 21

Appendix AI Governance Conferences

  • AI, Ethics and Society Summit (AIES)
  • Internet Governance Forum (IGF), 2023
  • AI Safety Summit, 2023


21 of 21

Appendix The Asilomar AI Principles

Research Issues: Research Goal, Research Funding, Science-Policy Link, Research Culture, Race Avoidance

Ethics and Values: Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion, AI Arms Race

Long-Term Issues: Capability Caution, Importance, Risks, Recursive Self-Improvement, Common Good
