Initiative(s) | Forum/Process | Acronym | Institutional Format | Description | Approach to AI | Anticipated outcomes

Overview of initiative, mandate and activities | Noting most relevant outcome (in case multiple)
1 | CAI Treaty on AI | Council of Europe | CoE | Multilateral | Initiative Overview: The Committee on Artificial Intelligence (CAI) is a committee established by the Council of Europe (CoE), an international organisation that aims to uphold human rights, democracy and the rule of law in Europe. The draft convention is scheduled for completion in March 2024.

Mandate: “To establish an international negotiation process and conduct work to elaborate an appropriate legal framework on the development, design and application of artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law, and conducive to innovation, which can be composed of a binding legal instrument of a transversal character, including notably general common principles, as well as additional binding or non-binding instruments to address challenges relating to the application of artificial intelligence in specific sectors, in accordance with the relevant decisions of the Committee of Ministers.”

Activities: Plenary meetings will run until March 2024, when the instrument is expected to be finalised and sent to the Committee of Ministers. There are also informal meetings attended by governments between plenary sessions.

Human Rights | Regulation/Policy Guidance
2 | VARIOUS (EU AI Act / EU Strategy on the rights of the child & Strategy for Better Internet for Children (BIK+)) | European Union | EU | Multilateral | Initiative Overview: The European Union is currently in the final stages of negotiations on the European Union AI Act. If passed, this would be the world's first piece of legislation to regulate the use of artificial intelligence. The act originated from a European Commission regulation proposed in April 2021. The proposal responded to explicit requests from the European Parliament and the European Council for legislative action to ensure a well-functioning internal market for artificial intelligence systems where both benefits and risks of AI are adequately addressed at Union level. It supports the objective of the Union being a global leader in the development of secure, trustworthy and ethical artificial intelligence as stated by the European Council and ensures the protection of ethical principles as specifically requested by the European Parliament. The aim is for EU institutions to come to an agreement ahead of the European Parliament elections (scheduled for 6-9 June 2024). Together with the Coordinated Plan on AI, the EU AI Act outlines a European approach to AI that "focuses on putting people first" and whose rules aim to ensure that AI develops in a way that guarantees trust, safety and fundamental rights, while also promoting excellence in innovation. The European Union has identified AI as a priority area for its work in the coming year, with the European Commission President calling for an international panel of experts similar to the IPCC on climate change to steer its development in her annual State of the European Union speech.

Mandate: As part of its digital strategy, the EU wants to regulate AI to ensure better conditions for the development and use of AI.

Activities: At this time, the European Parliament, the Council of the European Union and the European Commission are engaged in inter-institutional negotiations (‘trilogues’) to reach a provisional agreement on the legislative proposal that is acceptable to both the Parliament and the Council.

Findings of the AI and the Rights of the Child report will be used to implement the EU strategy on the rights of the child and the EU Strategy for Better Internet for Children (BIK+), as well as the proposed EU AI Act.
Cross-cutting, Human Rights | Regulation/Policy Guidance
3 | G7 Hiroshima Process | Group of Seven | G7 | Multilateral | Initiative Overview: The annual Group of Seven (G7) Summit, hosted by Japan in Hiroshima on May 19-21, 2023, decided to initiate the Hiroshima AI Process (HAP) to regulate AI. The ministerial declaration of the G7 Digital and Tech Ministers’ Meeting, on April 30, 2023, discussed responsible AI and global AI governance based on the OECD AI Principles and committed to fostering collaboration to maximise the benefits that AI technologies bring for all.

Mandate: Advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic values.

Activities: In April 2023, a working group was established to produce Guiding Principles and a Code of Conduct for Organizations Developing Advanced AI Systems. The G7 Digital & Tech Ministers endorsed a report by the OECD summarising priority risks, challenges and opportunities of generative AI, and explicitly reaffirmed their commitment to promote human-centric and trustworthy AI based on the OECD’s Recommendation on AI. On October 30, the Guiding Principles and the Code of Conduct for Organizations Developing Advanced AI Systems were released.
Cross-cutting, Multidisciplinary | Principles/Voluntary Guidance
4 | High-level Advisory Body on AI | High-Level Multistakeholder Advisory Body on AI | HLAB-AI | IO | Initiative Overview: The High-level multistakeholder Advisory Body on Artificial Intelligence (the Body) is being convened by the United Nations Secretary-General to undertake analysis and advance recommendations for the international governance of AI. The Body will consist of up to 32 members from governments, private sector, civil society, and academia, as well as a member Secretary. Its composition will be balanced by gender, age, geographic representation, and area of expertise related to the risks and applications of AI. The members of the Body will serve in their personal capacity. The members of the Body will be selected by the Secretary-General based on nominations from Member States and a public call for candidates. Evaluation of nominees will be conducted on the basis of the nominee's suitability, capacity and willingness to contribute to the Advisory Body's overall objectives. The Body will have two Co-Chairs and an Executive Committee. All stakeholder groups will be represented in the Executive Committee. The Body is convened for an initial period of one year, with the possibility of extension by the Secretary-General. The deliberations of the Body will be supported by a small secretariat based in the Office of the Secretary-General’s Envoy on Technology and be funded by extrabudgetary donor resources.

Mandate: Providing a recommendation on the UN governance of AI. The Body’s initial reports will provide high-level expert and independent contributions to ongoing national, regional, and multilateral debates.

Activities: On October 26th, the UN Tech Envoy announced the 39 members of the High-level Advisory Body on AI (HLAB). According to the roadmap published on the HLAB's website, the timeline for completion of work is as follows:
- November 2023: Analysis & Engagement (initial consultations)
- End-2023: Interim report released on the AI governance landscape mapping and presenting a high-level analysis of options for the international governance of AI for the consideration of the Secretary-General and the Member States of the United Nations
- Q1 2024: Further Consultations (across stakeholder groups and ongoing initiatives)
- Mid-2024: final report (incorporating results from consultations) by 31 August 2024 which may provide detailed recommendations on the functions, form, and timelines for a new international agency for the governance of AI
- Sep 2024: Summit of the Future (member states consider the Global Digital Compact)
Cross-cutting, Multidisciplinary | Body/International Agency
5 | Recommendation on the Ethics of AI - Implementation | UN Educational, Scientific and Cultural Organization | UNESCO | IO | Initiative Overview: UNESCO produced the first global standard on AI ethics through its ‘Recommendation on the Ethics of Artificial Intelligence’, adopted by all 193 of its Member States in November 2021. The Recommendation identifies four values: human rights and human dignity; living in peaceful, just and interconnected societies; ensuring diversity and inclusiveness; and environment and ecosystem flourishing. Its ten core principles lay out a human-rights-centred approach to the ethics of AI. Finally, the Recommendation sets eleven key areas for policy action where Member States can advance responsible developments in AI by moving toward practical strategies.

As part of the implementation phase, UNESCO has developed a number of tools to support countries' uptake of the Recommendation, including the Readiness Assessment Methodology (RAM) and the Ethical Impact Assessment (EIA). The tools have been developed with the support of the Alan Turing Institute, but with a lack of meaningful opportunities for extended feedback from civil society. As reported by UNESCO, 50 countries are currently engaged in the implementation of the RAM this year, including Antigua & Barbuda, Barbados, Brazil, Botswana, Chad, Chile, Costa Rica, Cuba, the Democratic Republic of Congo, the Dominican Republic, Gabon, India, Kenya, Malawi, the Maldives, Mauritius, Mexico, Morocco, Mozambique, Namibia, Rwanda, São Tomé and Príncipe, Senegal, South Africa, Timor-Leste, Uruguay, and Zimbabwe. UNESCO’s RAM is implemented with the support of the European Commission, the Government of Japan, the Patrick McGovern Foundation, and the Development Bank of Latin America (Corporación Andina de Fomento, CAF).

Mandate: This Recommendation addresses ethical issues related to the domain of AI to the extent that they are within UNESCO’s mandate. It approaches AI ethics as a systematic normative reflection, based on a holistic, comprehensive, multicultural and evolving framework of interdependent values, principles and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies and the environment and ecosystems, and offers them a basis to accept or reject AI technologies.

Activities:
- Readiness Assessment Methodology: a tool that enables the evaluation of how prepared a country is for the ethical implementation of AI. It evaluates five dimensions (Legal/Regulatory, Social/Cultural, Economic, Scientific/Educational, and Technological/Infrastructure), highlighting the institutional and regulatory changes that will be necessary to effectively implement the Recommendation
- Country reports, based on the RAM diagnostic assessment, will be published on UNESCO’s AI Ethical Observatory, to be launched with the Alan Turing Institute. It will be an online transparency portal for the latest data and analysis on the ethical development and use of AI around the world, and a platform for best practice sharing.
- Ethical Impact Assessment: a structured process to help AI project teams, in collaboration with affected communities, to identify and assess the impact an AI system may have; allows for reflection on AI's potential impact and to identify needed harm prevention actions
- A report synthesizing the lessons learnt in the preparation of the RAM will be published. Its results will deliver insights that will then inform the Global Forum on the Ethics of Artificial Intelligence, to take place in Slovenia in early 2024.
- UNESCO, CAF and the Ministry of Science, Technology, Knowledge and Innovation of Chile are organising the first Forum on the Ethics of Artificial Intelligence in Latin America and the Caribbean, which seeks to establish a regional council to implement the Ethics Recommendation
- Establishing regional roundtables for peer learning
- Developing networks of partners around the world, such as the AI Ethics Experts Without Borders (AIEB network) - a flexible facility of experts for deployment in Member States on a needs basis to assist in the implementation of the Recommendation and the application of the capacity-building tools - and the Women 4 Ethical AI network (a platform for influential women leaders in industry, government, and civil society, driving transformations towards gender equality in and through AI)
- Global Forum on Ethics of AI, a high-level annual event to advance the state-of-the-art knowledge of the challenges raised by AI technologies and to facilitate peer-to-peer learning among governments and other stakeholders
- Business Council for Ethics of AI: a collaborative initiative between UNESCO & companies involved in the development or use of AI in various sectors, serving as a platform for companies to come together, exchange experiences, and promote ethical practices within the AI industry; co-chaired by Microsoft and Telefonica (to be launched in an event on AI and Ethics, back to back with the High-level and Ministerial meeting on AI and Ethics in Santiago, Chile, in October 2023)
Cross-cutting, Human Rights | Report/Resource
6 | VARIOUS (NET Resolution / Technical Standards & HR Report / Right to Privacy in the Digital Age Resolution) | UN Human Rights Council | HRC | IO | Initiative Overview: The Human Rights Council is the principal intergovernmental forum within the United Nations for questions relating to human rights. The HRC is a separate entity from the Office of the High Commissioner for Human Rights (OHCHR); they have different mandates as given by the UNGA, but work closely together. HRC resolutions can include recommendations for governance of new and emerging technologies (such as AI), as well as requests for guidance from the OHCHR (such as the development of a report mapping the work and recommendations of UN bodies on new and emerging technologies, and the provision of advice and technical assistance to states). This expanded OHCHR mandate could provide more opportunities for civil society engagement given the forum’s comparative openness and accessibility. However, without efforts to align the OHCHR's work with that of other UN initiatives in the area of AI, there is a risk of fragmented efforts within the UN human rights system - and within the UN system more broadly.

The OHCHR report on technical standards and AI also included recommendations for AI. Additionally, the Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism on the "Human rights implications of the development, use and transfer of new technologies in the context of counter-terrorism and countering and preventing violent extremism" included AI in its scope and recommendations.

Mandate: The HRC is an intergovernmental body within the United Nations system responsible for strengthening the promotion and protection of human rights around the globe and for addressing situations of human rights violations and making recommendations on them. It has the ability to discuss all thematic human rights issues and situations that require its attention throughout the year. It meets at the United Nations Office at Geneva.

Activities:
- Issuance of resolutions: (1) the New and Emerging Digital Technologies resolution (53/29) was adopted by consensus by the Human Rights Council (HRC) on July 14, 2023; (2) the Right to Privacy in the Digital Age resolution (54/21) was reviewed during the 54th HRC session and adopted by consensus on October 12, 2023.
- Development of guidance for member states
- Drafting reports
- Organising multistakeholder consultations
Human Rights | Principles/Voluntary Guidance
7 | Joint Roadmap for Trustworthy AI and Risk Management | EU-U.S. Trade & Technology Council | TTC | Gov't-led (Bilateral) | Initiative Overview: The European Union (EU) and the United States (US) set up the EU-US Trade and Technology Council (TTC), a partnership for driving digital transformation and cooperating on new technologies based on their shared democratic values, including respect for human rights. The EU-US Trade and Technology Council serves as a forum to coordinate approaches to key global trade, economic, and technology issues and to deepen transatlantic trade and economic relations. It was established during the EU-US Summit on 15 June 2021 in Brussels. During the fourth ministerial meeting of the TTC in 2023, the EU and the US agreed on the implementation of the TTC Joint Roadmap for Trustworthy AI and Risk Management.

Mandate: Through the TTC, the EU and the US are working together to: ensure that trade and technology serve their societies and economies, while upholding their common values; strengthen their technological and industrial leadership; and expand bilateral trade and investment.

Activities: Launching three expert groups working on terminologies, taxonomies, standards and emerging risks. There is an agreement to include a focus on generative AI systems in the ongoing activities. Climate change, natural disasters, healthcare, energy and agriculture are identified as priority areas for addressing global challenges under the Administrative Arrangement on Artificial Intelligence for the Public Good.
Cross-cutting, Multidisciplinary | Principles/Voluntary Guidance
8 | AI Contact Group | U.S. White House / Leading AI Companies | N/A | Multistakeholder | Initiative Overview: Seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) made a set of voluntary commitments to help move toward safe, secure, and transparent development of AI technology. These commitments underscore three principles that must be fundamental to the future of AI (safety, security, and trust) and mark a critical step toward developing responsible AI. In August 2023, the White House announced a second round of voluntary commitments from eight further companies (Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability) to help drive safe, secure, and trustworthy development of AI technology.

Commitments to:
- Ensuring Products are Safe Before Introducing Them to the Public
- Building Systems that Put Security First
- Earning the Public’s Trust

On 30 October, the Biden administration released its Executive Order on AI, which directly builds on these principles in various ways.

Mandate: to uphold the highest standards to ensure that innovation doesn't come at the expense of Americans’ rights and safety.

Activities: Follow up TBD; no official data available
Cross-cutting, Multidisciplinary, Safety | Principles/Voluntary Guidance
9 | Expert Group on Ethics and Governance of AI for Health | World Health Organisation | WHO | IO | Initiative Overview: In June 2021, WHO released its Guidance on Ethics & Governance of Artificial Intelligence for Health, developed by an Expert Group on Ethics and Governance of Artificial Intelligence for Health. The Expert Group has continued working to deliver guidance on the implementation of the principles presented in 2021, including the upcoming release of guidance on the opportunities and challenges of generative AI systems in health.

Mandate: WHO, as the specialized UN agency for health, leads the work on developing an ethics and governance framework that will allow AI to promote fair and equitable global health. Such leadership will also support global efforts to identify shared interests and values and streamline initiatives to use AI in support of Universal Health Coverage.

Activities: Upcoming release of guidance on the opportunities and challenges of generative AI systems in health; WHO Expert Group on Ethics and Governance of AI for Health; implementation of WHO Guidance on the ethics and governance of AI for health
Sectoral | Principles/Voluntary Guidance
10 | Global Digital Compact | UN Roadmap for Digital Cooperation / Global Digital Compact | GDC | IO | Initiative Overview: The Global Digital Compact is a United Nations initiative which aims to bring together governments, private sector entities, civil society organisations and other stakeholders to work collaboratively on a set of shared principles and commitments, which are intended to ensure that digital technologies are used responsibly and for the benefit of all, while addressing the digital divide and fostering a safe and inclusive digital environment.

Mandate: The UN Secretary-General report "Our Common Agenda" was released in September 2021, and it proposed a Global Digital Compact to be agreed at the Summit of the Future. This emerged out of the road map for digital cooperation. In October 2022, the President of the UN General Assembly appointed Rwanda and Sweden as co-facilitators to lead the intergovernmental process on the GDC.

Activities: AI is one of the key elements of the Compact, which includes a consultative process organised by the co-facilitators. This has included the development of a platform by the Office of the Secretary-General's Envoy on Technology, enabling all stakeholders to submit responses to a consultation that ended in 2023, as well as thematic deep-dives on each of the key elements, including one on AI in June 2023. Key dates include the Summit of the Future (September 2024), the Ministerial (September 2023), and the call for papers on Global AI Governance (due 30 September).
Cross-cutting, Multidisciplinary | Principles/Voluntary Guidance
11 | VARIOUS (Global AI Observatory / AI & Equality Initiative) | Carnegie Council for Ethics in International Affairs | GAIO | Multistakeholder | Initiative Overview: The Artificial Intelligence & Equality Initiative (AIEI) is an impact-oriented community of practice seeking to understand the innumerable ways in which AI impacts equality, for better or worse, and working to empower ethics in AI so that it is deployed in a just, responsible, and inclusive manner. The Global AI Observatory (GAIO) is a proposal to create a repository of information to provide the necessary facts and analysis to support decision-making. The proposal is part of Carnegie's work in identifying current and future critical ethical issues; convening leading experts and thinkers to catalyze the creation of ethical solutions to global problems; creating communities and activating constituencies by embracing multilateralism and exploring shared values; and framing ethical perspectives.

Mandate: the AIEI initiative aims to:
- Build the foundation for an inclusive dialogue (an Agora) to probe issues related to the benefits, risks, trade-offs, and tensions that AI fosters
- Nurture an interdisciplinary, intergenerational community of practice to rapidly address urgent challenges in the uses of AI and other novel technologies
- Establish a forum for those in positions where they must make considered choices and decisions about the development and deployment of AI applications
- Forge transparent, cross-disciplinary, and inclusive conversations and guided inquiries
- Empower ethics as a tool for making thoughtful decisions about embedding AI systems and applications in the fabric of daily life

Activities:
- Creation of the observatory
- Development of resources, such as proposed modalities for AI governance and a proposed framework for the international governance of AI
- Hosting convenings
- Podcast episodes
Human Rights, Multidisciplinary | Observatory
12 | VARIOUS (Task Force on AI and Human Rights; 2023 Program of Action) | Freedom Online Coalition | FOC | Multilateral | Initiative Overview: The Freedom Online Coalition (FOC) is an intergovernmental coalition of 38 Member States committed to ensuring that the use of the Internet and digital technologies reinforces human rights, democracy, and the rule of law. FOC Members coordinate diplomatic efforts, share information about current developments, and collectively voice concerns over measures that threaten human rights in the digital age. Its activities and priority areas are outlined annually in the Program of Action (PoA), drafted by the rotating Chair of the Coalition in consultation with the wider FOC Membership and the FOC Advisory Network (FOC-AN).

The FOC has two main initiatives on AI: the Task Force on AI and Human Rights (TFAIR) and the 2023 Programme of Action (PoA).

Mandate: Each year, in consultation with other FOC Members and the FOC Advisory Network, the incoming FOC Chair develops a Program of Action, which outlines the FOC’s vision, priorities, and activities for an annual term. Over the past years, the Coalition has made meaningful contributions to shape regional and international discussions on a number of areas that have been the focus of previous Programs of Action, including disinformation, Internet censorship, network disruptions, cross-border attacks on freedom of expression online, human rights impacts of cybersecurity policies, civic space, digital inclusion, human rights implications of artificial intelligence and emerging technologies, and others.

The 2023 PoA listed "advancing norms, principles, and safeguards for artificial intelligence (AI) based on human rights" as a policy priority under the United States' Chairship of the FOC. TFAIR works to advance the application of the international human rights framework to the global governance of AI.

Activities:
- In September 2023, the FOC Advisory Network published its Proactive Advice on the Global Digital Compact (GDC), which provided input into the AI section of the GDC and expressed procedural concerns with the GDC's development
- Task Force on Artificial Intelligence and Human Rights (TFAIR), co-led by Germany and the International Centre for Not-for-Profit Law (ICNL): hosts regular meetings and provides a space for FOC Members and the Advisory Network to promote human-rights-respecting AI technologies through sharing and disseminating information and collaborating on joint initiatives
- In 2021, TFAIR (then led by Canada) facilitated a number of multistakeholder coordination meetings (between FOC governments, the FOC Advisory Network, and external civil society and academic organisations) during the drafting and negotiation period of the UNESCO Recommendation on the Ethics of AI; this resulted in a number of Member States utilising language on AI that highlights the importance of rights-respecting approaches and of grounding the text in internationally binding human rights standards and related obligations
- FOC Joint Statement on Artificial Intelligence and Human Rights (2020)
- Ottawa Agenda
- Helsinki Declaration
Human Rights, Cross-cutting, Multidisciplinary | Principles/Voluntary Guidance
13 | Frontier Model Forum | Frontier Model Forum | N/A | Multistakeholder | Initiative Overview: A new industry body launched by Anthropic, Google, Microsoft, and OpenAI to promote the safe and responsible development of frontier AI systems by advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry. The Frontier Model Forum is interested in influencing existing government and multilateral initiatives such as the G7 Hiroshima Process, the OECD’s work on AI risks, standards, and social impact, and the US-EU Trade and Technology Council.

Mandate: (i) advance AI safety research to promote responsible development of frontier models and minimize potential risks, (ii) identify safety best practices for frontier models, (iii) share knowledge with policymakers, academics, civil society and others to advance responsible AI development; and (iv) support efforts to leverage AI to address society’s biggest challenges.

Activities: The Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities.
Cross-cutting, Multidisciplinary, Safety | Principles/Voluntary Guidance
14 | GGE on Lethal Autonomous Weapons Systems | High Contracting Parties to the Convention on Certain Conventional Weapons | CCW | IO | Initiative Overview: In 2016, at the Fifth CCW Review Conference, High Contracting Parties decided to establish an open-ended Group of Governmental Experts on emerging technologies in the area of LAWS (GGE on LAWS), to build on the work of the previous meetings of experts. Since then, the group has been re-convened on a yearly basis. It is open to all High Contracting Parties and non-State Parties to the CCW, international organisations, and non-governmental organisations.

Mandate: The GGE was mandated to examine issues related to emerging technologies in the area of LAWS in the context of the objectives and purposes of the Convention on Certain Conventional Weapons.

Activities: The GGE usually meets several times a year to discuss a variety of issues, including characterisation of LAWS, potential challenges posed to international humanitarian law and other bodies of international law, the human element in the use of force and options for addressing challenges. There will be a meeting in November 2023 and the 2024 GGE will meet in August in Geneva.
Sectoral | Report/Resource
15 | AI Standards Hub | Alan Turing Institute | N/A | Multistakeholder | Initiative Overview: Overseen by the public policy department, the AI Governance & Regulatory Innovation team at the Alan Turing Institute works to advance effective governance approaches and inform the development of regulatory practices for the age of AI. The team's main output is the AI Standards Hub. The team works with policymakers to develop innovative, data-driven solutions to policy problems and to develop ethical frameworks for the use of AI in the public sphere.

Mandate: the AI Standards Hub is a UK initiative dedicated to the evolving international field of standardisation for AI technologies, which aims to facilitate knowledge sharing and world-leading research

Activities:
- Conducts research
- Provides advice
- Works with stakeholders to address the challenges and opportunities that AI presents for technology governance and regulation
Sectoral | Report/Resource
16 | Policy Network on AI | Internet Governance Forum | IGF | Multistakeholder | Initiative Overview: The Policy Network on Artificial Intelligence (PNAI) is a multistakeholder initiative that works on policy matters related to AI. It is hosted by the UN Internet Governance Forum, providing a platform for stakeholders and changemakers in the field to contribute their expertise, insights and recommendations.

Mandate: PNAI emerged at the request of the community following the discussion held at the 17th annual IGF meeting in Addis Ababa and its main session on Addressing Advanced Technologies, including AI. The Addis Ababa IGF Messages conclude that the "IGF could be used as a platform for developing cooperation mechanisms on artificial intelligence. A policy network on artificial intelligence could be considered for the upcoming work streams in order to review the implementation of different principles with appropriate tools and metrics".

Activities: PNAI and its Multistakeholder Working Group is open to anyone. The group has developed recommendations and a report that was presented and discussed at the 18th annual IGF meeting in Kyoto. The outcomes of this work will contribute to the UN's Global Digital Compact.
Cross-cutting, Multidisciplinary | Report/Resource
17 | President's Council of Advisors on Science & Technology | US White House | PCAST | Multistakeholder | Initiative Overview: The PCAST Working Group on Generative AI issued a call for input in May 2023.

Mandate: The President’s Council of Advisors on Science and Technology (PCAST) has launched a working group on generative artificial intelligence (AI) to help assess key opportunities and risks and provide input on how best to ensure that these technologies are developed and deployed as equitably, responsibly, and safely as possible.

The PCAST Working Group on Generative AI aims to build upon these existing efforts by identifying additional needs and opportunities and making recommendations to the President for how best to address them. Over the course of the year, PCAST will be consulting with experts from all sectors, beginning with panel discussions at its most recent public meeting on May 19, 2023. PCAST also welcomes input from the public on the challenges and opportunities that should be considered, along with potential solutions, for the benefit of the Nation.

Activities: PCAST Public Session on Generative AI; input from the public
Crosscutting, MultidisciplinaryPrinciples/Voluntary Guidance
18Global AI Safety SummitUK Global AI Safety SummitUKGov't-led (Multilateral)Initiative Overview: The Global AI Safety Summit will be hosted by the UK to bring together key countries, leading tech companies and researchers to agree safety measures for evaluating and monitoring the most significant risks from AI. The talks will explore and build consensus on rapid, international action and serve as a platform for finding a coordinated approach.

Mandate: The Prime Minister of the United Kingdom announced the country's intention to host the first global summit on AI safety. The Prime Minister has spoken about the global duty to ensure that AI is developed and adopted safely and responsibly. The summit has been noted as a means of building on ongoing work at international forums, including the OECD, the Global Partnership on AI, the Council of Europe, the UN and standards-development organisations, as well as the recently agreed G7 Hiroshima AI Process. In its "introduction" to the AI Safety Summit, the UK government outlined its vision for the event to focus on "frontier AI," defined as "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models." It outlines two main risks of focus: misuse (ex: where a bad actor is aided by new AI capabilities in cyber-attacks) and loss of control (emerging from "advanced systems that we would seek to be aligned with our values and intentions").

Activities: The summit will take place on 1-2 November 2023 at Bletchley Park.
Cross-cutting, Multidisciplinary, SafetyEvent(s)
19Framework for Responsible Human-Centric AI GovernanceThe Group of TwentyG20MultilateralInitiative Overview: The G20 is an international forum, made up of 19 countries and the European Union, representing the world’s major developed and emerging economies. The G20 Ministerial Statement on Trade and Digital Economy issued in 2019 contains the G20 AI Principles, which draw from the OECD principles on AI. Discussions on responsible global AI governance have become a regular part of the G20 agenda since then. While the G20 AI Principles provide a framework for countries and organisations to develop and deploy AI in a way that is beneficial and addresses concerns related to ethics, privacy, and security, there is limited consideration of the distributional aspects and existing multidimensional power dynamics that shape global AI governance.

In 2023, India holds the G20 presidency, with the 18th G20 Heads of State and Government Summit taking place on 9-10 September 2023 in New Delhi, where the 2023 Leaders' Declaration was agreed upon and released. During the meeting, PM Modi issued a statement calling for the establishment of a framework for Responsible Human-Centric AI governance. While this framework was not explicitly mentioned in the declaration, this is an area to watch for potential future developments.

Mandate: The G20 endeavours to leverage AI for the public good by solving challenges in a responsible, inclusive and human-centric manner, while protecting people’s rights and safety. To ensure responsible AI development, deployment and use, the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, biases, privacy, and data protection must be addressed. To unlock the full potential of AI, equitably share its benefits and mitigate risks, the G20 will work together to promote international cooperation and further discussions on international governance for AI.

Activities: G20 Leaders stressed promoting international cooperation and further discussions on international governance for AI [exact activities TBC; these are the ambitions laid out in the 2023 New Delhi Leaders' Declaration]:
- Reaffirm its commitment to G20 AI Principles (2019) and endeavour to share information on approaches to using AI to support solutions in the digital economy
- Pursue a pro-innovation regulatory/governance approach that maximizes the benefits and takes into account the risks associated with the use of AI
- Promote responsible AI for achieving SDGs
Cross-cutting, MultidisciplinaryPrinciples/Voluntary Guidance
20Advancing Technology for DemocracyU.S. Summit for DemocracyS4DGov't-led (Multilateral)Initiative Overview: Prior work includes the release of the Blueprint for an AI Bill of Rights in 2022 and its implementation. Most recently, the summit's Advancing Technology for Democracy program has a number of AI-relevant elements, including publishing guiding principles and codes of conduct.

Mandate: Countering the misuse of technology and the rise of digital authoritarianism

Activities: Publication of the Guiding Principles on Government Use of Surveillance Technologies; the multilateral Code of Conduct, developed through the Export Controls and Human Rights Initiative announced at the first Summit for Democracy, which commits subscribing states to better integrate human rights criteria in their export control regimes
Cross-cutting, MultidisciplinaryPrinciples/Voluntary Guidance
21Digital Technologies & TradeWorld Trade OrganisationWTOIOInitiative Overview: The World Trade Organization (WTO) deals with the global rules of trade between nations and is working on developing governance for digital trade (which encompasses AI) with the aim of bolstering "greater competitiveness and inclusiveness." At the WTO more than 80 countries are engaged in digital economy negotiations, although major policy differences among governments have impeded progress. For this reason, the most extensive updating has occurred in bilateral and regional trade agreements with many recent agreements including dedicated ‘digital trade’ chapters. In 2020, Singapore, Chile, and New Zealand pioneered a Digital Economy Partnership Agreement (DEPA), a digital-only trade agreement which seeks to elaborate rules for a global digital economy, a move since emulated by several other countries.

Mandate: The WTO's function is to ensure that trade flows as smoothly, predictably and freely as possible.

Activities: The Secretariats of the World Trade Organization (WTO) and the Organisation Internationale de la Francophonie (OIF) have launched a joint call for contributions for research papers on regulatory aspects of digital trade, and inclusive and sustainable development dynamics, particularly with regard to French-speaking developing and least-developed countries (LDCs). The deadline for submissions is 30 October 2023. The research call covers the following topics: (i) Digital trade governance for greater competitiveness and inclusiveness; (ii) Digital trade governance and regional integration; and (iii) The role of digital trade in ensuring sustainability.
Cross-cutting, MultidisciplinaryReport/Resource
22Generative AI: Risks and Opportunities for Children UN Children's Fund UNICEFIOInitiative Overview: UNICEF seeks to work together with a diverse set of partners to outline the opportunities and challenges, as well as to engage stakeholders to build AI-powered solutions that help realize and uphold child rights. This is a multi-year initiative.

Mandate: The program will engage stakeholders to build AI powered solutions that help realize and uphold child rights by exploring current opportunities, risks and questions, and proposing next steps for UNICEF and others working to empower and protect children.

Activities: The team will consult experts across relevant fields (ranging from psychology, to industrial design, to AI science, to technology law, etc.) through formal desk research (including a dedicated Masters course at UC Berkeley and an evidence review by Baker McKenzie), phone interviews, workshops, etc. to fill gaps in the evidence where they are most needed to further child rights in the context of the extremely far-reaching, fast-paced and in some cases unpredictable development of AI technologies. This work will inform sets of actionable, specific recommendations for governments, companies, and caregivers that will be stress-tested before implementation.

Sectoral, SafetyPrinciples/Voluntary Guidance
23VARIOUS (Global Challenge to Build Trust in the Age of Generative AI / GPAI Summit / Working Groups)Global Partnership on AIGPAIMultistakeholderInitiative Overview: GPAI aims to provide a mechanism for sharing multidisciplinary research and identifying key issues among AI practitioners, with the objective of facilitating international collaboration, reducing duplication, acting as a global reference point for specific AI issues, and ultimately promoting trust in and the adoption of trustworthy AI. GPAI assesses the scientific, technical, and socio-economic information relevant to understanding AI impacts, encouraging its responsible development and options for adaptation and mitigation of potential challenges. The Global Challenge to Build Trust in the Age of Generative AI is a flagship joint initiative of GPAI, OECD and UNESCO. The initiative is a global challenge to promote trust by equipping governments, organisations, and individuals to be resilient in the era of scalable synthetic content. This challenge will bring together technologists, policymakers, researchers, experts, and practitioners to put forth and test innovative ideas to promote trust and counter the spread of disinformation exacerbated by generative AI.

Additionally, GPAI experts collaborate across four Working Groups and themes, each with its own set of projects: Responsible AI; Data Governance; Future of Work; Innovation and Commercialisation.

Mandate: Putting forth and testing innovative ideas to promote trust and counter the spread of disinformation exacerbated by generative AI

Activities: The global challenge initiative takes place in three phases: identifying promising ideas, then prototyping, piloting and scaling them. Other activities include meetings and convenings, GPAI working groups, and publication of reports (WG/project outputs). GPAI also hosts an annual summit, with the next one taking place in New Delhi, India in December 2023, and will be a collaborator on the G7 Hiroshima Process.
Crosscutting, Multidisciplinary, SafetyReport/Resource
24VARIOUS (Hash-Sharing Database / Incident Response Framework / Content Incident Protocol)Global Internet Forum to Counter TerrorismGIFCTMultistakeholderInitiative Overview: The GIFCT hash-sharing database is the safe and secure industry database of “perceptual hashes” of known terrorist content as defined by GIFCT’s hash-sharing database taxonomy. GIFCT’s Incident Response Framework guides how GIFCT and its members respond to a mass violent incident, streamlining how members can communicate and share situational awareness as an incident unfolds in order to identify any online dimension to the offline attack. The Content Incident Protocol (CIP) is the highest level of the Incident Response Framework and is activated when the perpetrators or accomplices of a terrorist or violent extremist attack record video or livestream the attack and the content is shared on a GIFCT member platform.
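The perceptual-hashing idea behind the database can be sketched in miniature. The snippet below is an illustrative sketch only, not GIFCT's actual algorithm or schema (which are not described here): it computes a simple average-hash over a small grayscale thumbnail and matches near-duplicates by Hamming distance, which is the general mechanism that lets platforms flag re-uploads of known content by sharing hashes rather than the content itself.

```python
# Illustrative sketch of perceptual-hash matching (assumed simplification,
# not GIFCT's implementation): near-identical media map to nearby hashes.

def average_hash(pixels):
    """1 bit per pixel: set if the pixel is brighter than the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

def hamming(h1, h2):
    """Number of differing bits; small distance = perceptually similar."""
    return sum(a != b for a, b in zip(h1, h2))

# Two near-identical 4x4 grayscale thumbnails (last pixel differs slightly)
original = [10, 200, 30, 220, 15, 210, 25, 215, 12, 205, 28, 218, 14, 208, 26, 216]
reupload = [10, 200, 30, 220, 15, 210, 25, 215, 12, 205, 28, 218, 14, 208, 26, 217]

db = {average_hash(original)}   # the shared database holds hashes, not content
candidate = average_hash(reupload)

# Real systems tolerate small Hamming distances to catch slight edits
match = any(hamming(candidate, known) <= 2 for known in db)
print(match)
```

The design point is that only the compact, non-reversible hash is shared between platforms, which is why such a database can operate across members without redistributing the underlying material.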

Mandate: prevent, respond and learn

Activities: global summit (annual), events, working groups

The counter-terrorism report recommended the following actions:
(a) All United Nations entities, the Office of Counter-Terrorism and the Counter-Terrorism Committee Executive Directorate in particular, fully and practically address the human rights implications of providing capacity-building and technical assistance capacity in the new technology realm, including AI, biometrics and border management tools, to States with demonstrated records of human rights violations in the security and counter-terrorism arena. Both moratorium and suspension protocols on capacity-building and technical assistance in the use of high-risk technologies should be established in line with the principles and supporting frameworks of the human rights due diligence policy;
(b) All United Nations entities engaged in programming in these areas establish dedicated risk matrices, due diligence protocols and evaluation capacities that are timely, responsive and committed to ensuring the principle of doing no harm. The high-risk nature of these technologies and the high levels of engagement of actors outside of the definition of State security forces in the human rights due diligence policy make doing so critical;
(c) The Secretary-General initiate the process of internal inspection and evaluation or an external independent audit of the Countering Terrorist Travel Programme to ensure the integrity of its practices and technology transfers in respect of human rights, data protection and the rule of law.
Sectoral, SafetyReport/Resource
25AI Study Group of BRICS Institute of Future NetworksBRICS AllianceN/AGov't-led (Multilateral)Initiative Overview: BRICS countries have agreed to launch the AI Study Group of the BRICS Institute of Future Networks at an early date.

Mandate: To further expand cooperation on AI, and step up information exchange and technological cooperation. The aim is to jointly fend off risks, and develop AI governance frameworks and standards with broad-based consensus, so as to make AI technologies more secure, reliable, controllable and equitable.

Activities: develop AI governance frameworks and standards; relevant events (ex: BRICS Forum on Future Networks Innovation held in September 2023)
TBDPrinciples/Voluntary Guidance
26Accountability Principles for AI (AP4AI) ProjectEuropolN/AMultilateralInitiative Overview: The Accountability Principles for Artificial Intelligence (AP4AI) project aims to address the challenge of how to capitalise on the opportunities offered by Artificial Intelligence (AI) and Machine Learning to improve the way investigators, prosecutors, judges or border guards carry out their mission of keeping citizens safe and rendering justice while, at the same time, safeguarding and demonstrating true accountability of AI use towards society. AP4AI is an ongoing research project that aims to create a practical toolkit that directly and meaningfully supports AI Accountability when implemented in the internal security domain. Started in 2021, the AP4AI Project is jointly conducted by CENTRIC and the Europol Innovation Lab and supported by Eurojust, the EU Agency for Asylum (EUAA), the EU Agency for Law Enforcement Training (CEPOL) and the EU Agency for Fundamental Rights (FRA), in the framework of the EU Innovation Hub for Internal Security.

The project has 3 main objectives:
- 1) Operational objective: Improve the knowledge and capabilities of practitioners in the internal security domain to integrate AI Accountability into their decision making about AI capabilities throughout the full AI lifecycle (i.e., design, procurement, deployment, migration); provide practical capabilities to assess and demonstrate that specific AI capabilities and uses adhere to AI Accountability principles.
- 2) Policy-related objective: Support policy-making and governance bodies with a mature, tested and expert- and citizen-validated definition of AI Accountability
- 3) Societal objective: Improve societal awareness of AI Accountability and participation in AI Accountability procedures; improve informed public trust in AI deployments in the internal security domain

AP4AI has put forward 12 Accountability Principles to define the requirements that need to be fulfilled to assure AI Accountability in the internal security domain. The 12 principles are the foundation on which all other AP4AI activities and solutions are built. The project has also published a framework blueprint for Accountability Principles for Artificial Intelligence (AP4AI) in the Internal Security Domain.

Mandate: The project develops solutions to assess, review and safeguard the accountability of AI usage by internal security practitioners in line with EU values and fundamental rights. AP4AI means a step-change in the application of AI by the internal security community by offering a robust and application-focused Framework that integrates security, legal, ethical as well as citizens’ positions on AI. Its goal is to create a global Framework for AI Accountability for Policing, Security and Justice.

Activities: Building on the foundations of AP4AI, the project has published its CC4AI tool. This is a web-based tool to support internal security practitioners in assessing the compliance of their AI systems with the requirements of the AI Act. It will allow users to evaluate whether existing or future applications meet the criteria set by the new regulatory framework. Since the regulation is still under negotiation, the tool will be expanded and updated once the AI Act has been adopted.
Security / Defense; SectoralPrinciples/Voluntary Guidance
27AI Pilot ProgramsfAIrLACfAIr LACMultistakeholderInitiative Overview: Established by the Inter-American Development Bank (IDB) in 2019, fAIr LAC is a partnership between the public and private sectors, civil society and academic institutions. It is a diverse network of professionals and experts from academia, government, civil society, industry and the entrepreneurial sector who want to promote the ethical application of AI in Latin America and the Caribbean, and it is part of global policy.ai. fAIr LAC conducts pilots, which are projects with AI components that have been developed by the IDB and its partners and implemented with the help of hubs.

Mandate: fAIr LAC is designed to influence public policy and the entrepreneurial ecosystem in the promotion of the responsible and ethical use of AI.

Activities:
- Running pilot programs: the pilots have two purposes: (1) to systematize the lessons learned from applications where AI helps create greater social impact; and (2) to create a cooperative environment so that projects may be scaled and emulated in the region.
- Creating guidance for AI implementation and deployment (ex: the Algorithmic Audit Guide, which uses structured questions to establish whether an AI system needs to be audited and indicates the elements to consider when performing the audit) and formulation (ex: the Project Formulation Handbook, a practical guide for AI project directors on what to consider during model design, development, validation and subsequent deployment and monitoring; and the Data Science Handbook, which provides recommendations and best practices for technical teams developing AI models to avoid results that are unexpected or contrary to the objectives of the solution)
- Development of tools/resources (ex: Ethical self-assessment for the public sector; Ethical self-assessment for the entrepreneurial ecosystem)
- Production of technical guidelines aimed at fostering algorithmic justice and equity, addressing challenges such as class imbalance, external validity, discriminatory biases and explicability
- Development of a regional observatory to map and track AI projects and AI use cases
Crosscutting, MultidisciplinaryReport/Resource
28Public Policy ProgramPartnership on AIPAIMultistakeholderInitiative Overview: The Partnership on AI (PAI) engages with public and policy officials, international organizations, industry practitioners, and representatives from civil society and academia to share learnings, build consensus, advance equitable AI outcomes, and increase the interoperability of AI tools and policy frameworks around the world. PAI approaches this work with the goal of ensuring that the design, development, and deployment of AI fundamentally protects people and society, particularly the most marginalized. PAI's work is organised across programs. The Inclusive Research and Design Program, for example, is currently creating resources to help AI practitioners and impacted communities more effectively engage one another to develop AI responsibly; ultimately, this work seeks a more holistic reimagining of how AI is developed and deployed around the world, leading to an AI industry that recognizes end users and impacted communities as essential expert groups.

Mandate: Partnership on AI's policy work seeks to facilitate coordination by convening stakeholders to develop evidence-based frameworks, promoting a shared understanding of how policy can foster responsible AI practices, building connections across borders to support global equity and interoperability, and working with Partners to ensure policy implementation is impactful.

Activities:
- Inclusive research & design: creating resources to help AI practitioners and impacted communities more effectively engage one another to develop AI responsibly
- Develop evidence-based tools and policy
- Convene experts across sectors
- Build consensus-based frameworks
- Facilitate global policy dialogue
- Support impactful implementation
Crosscutting, Multidisciplinary, SafetyReport/Resource
29AI StandardsISO/IECISOTechnical SpaceInitiative Overview: development of standards for AI (ex: ISO/IEC DTR 5469, Artificial intelligence — Functional safety and AI systems; ISO/IEC 23894, a standard for AI risk management)

Mandate: explore the big questions related to ethics and governance of AI

Activities: sessions; annual ISO meeting
TechnicalPrinciples/Voluntary Guidance
30VARIOUS (Autonomous and Intelligent Systems (AIS) Standards / IEEE CertifAIEd)Institute of Electrical and Electronics Engineers Standards Association IEEETechnical SpaceInitiative Overview: development of globally recognized resources and standards in the area of applied ethics and systems engineering, and continued development of accessible and sustainable approaches and solutions for the pragmatic application of AIS principles and frameworks

Examples include: IEEE Global Initiative on Ethics of A/IS; Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS); IEEE CertifAIEd™ (a certification program for assessing the ethics of Autonomous Intelligent Systems, with the aim of helping organisations protect, differentiate, and grow product adoption by delivering solutions with a more trustworthy AIS experience to their users; IEEE CertifAIEd currently includes four ethical criteria applied against an organization’s AIS: transparency, accountability, algorithmic bias, privacy)

Mandate: assessing ethics of Autonomous Intelligent Systems (AIS) to help protect, differentiate, and grow product adoption

Activities:
- development of standards (ex: Designing Trustworthy Digital Experiences for Children, which aims to develop standards and establish frameworks and processes for age-appropriate digital services and the protection of children's data; or the IEEE GET Program for AI Ethics & Governance Standards, which supports efforts around AI ethics and governance literacy, informs understanding around human-centric design, age-appropriate design and AI systems governance and standardization, and allows for an increased understanding around certification considerations)
- Training and education, certification programs, and more, to empower stakeholders designing, developing, and using Autonomous Intelligent Systems (AIS)
- Certifications: the IEEE CertifAIEd Mark helps organizations demonstrate that they are addressing transparency, accountability, algorithmic bias, and privacy aspects, needed to build trust in their AIS. The mark: affirms an organization’s commitment to uphold human values, dignity, and well-being, and to respect, protect and preserve fundamental human rights; conveys an AIS’s capability to fulfill applicable requirements stipulated in the appropriate suite of criteria, fosters trust, and facilitates the adoption of AI; enhances confidence in public and private enterprises in the absence of or as a complement to broadly accepted and enforced regulations for AI
- Global Initiative on Ethics of Autonomous and Intelligent Systems: building on the landmark “Ethically Aligned Design” (EAD) document published in 2019, the initiative has created, via the IEEE P7000 standards series, over twelve standards working groups that address key socio-technical issues identified by EAD in pragmatic and actionable ways to put principles into practice for Artificial Intelligence Systems (AIS)
- Establishment of the Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS), a global forum that brings together organizations interested in the development and use of standards as a means to address ethical matters in autonomous and intelligent systems
Technical / SafetyPrinciples/Voluntary Guidance
31Conversation on IP and AIWorld Intellectual Property OrganisationWIPOIOInitiative Overview: WIPO is a specialized agency of the United Nations and represents the global forum for intellectual property services, policy, information and cooperation for its member states. WIPO is leading a conversation on Intellectual Property (IP) and AI, bringing together member states and other stakeholders to discuss the impact of AI on IP. The WIPO Conversation is an open, inclusive, multi-stakeholder forum intended to provide stakeholders with a leading, global setting to discuss the impact of frontier technologies on all IP rights and to bridge the existing information gap in this fast-moving and complex field. WIPO generally holds two sessions of the Conversation each year in a format that allows the widest possible global audience to participate. The Eighth Session of the WIPO Conversation, to be held on September 20 and 21, 2023, will be devoted to Generative AI and IP.

Mandate: WIPO's mission is to lead the development of a balanced and effective international IP system that enables innovation and creativity for the benefit of all. WIPO works to ensure that the current IP system continues to promote innovation in the age of frontier technologies.

Activities: Through annual events, WIPO provides a multi-stakeholder forum to advance the understanding of the IP issues involved in the development of AI applications throughout the economy and society and their significant impact on the creation, production and distribution of economic and cultural goods and services. WIPO also produces factsheets on frontier technologies such as AI. WIPO has implemented and continues to develop its own IP management services and tools using AI technologies, creating best-in-class applications of AI for the international IP system.
SectoralEvent(s)
32VARIOUS OECD (AI Policy Observatory, OECD Network of Experts on AI / OECD AI Principles, Framework for the Classification of AI Systems)OECDOECD.AIMultilateralInitiative Overview: The OECD AI Policy Observatory (OECD.AI) is an initiative and online platform that was launched in 2020 to build on the momentum of the OECD's Recommendation on Artificial Intelligence (OECD AI Principles) - the first intergovernmental standard on AI - adopted in 2019. It combines resources from across the OECD and its partners from all stakeholder groups. It facilitates dialogue and provides multidisciplinary, evidence-based policy analysis and data on AI’s areas of impact. It is a unique source of real-time information, analysis and dialogue designed to shape and share AI policies across the globe. Its country dashboards allow users to browse and compare hundreds of AI policy initiatives in over 60 countries and territories. The Observatory also hosts the AI Wonk blog, a space where the OECD Network of Experts on AI and guest contributors share their experiences and research.

The OECD is also in partnership with the G7 in the Hiroshima AI process.

The OECD framework for classifying AI systems aims to help policy makers, regulators, legislators and others assess the opportunities and risks that different types of AI systems present, to inform their AI strategies and ensure policy consistency across borders. The Framework is a user-friendly tool that links the technical characteristics of AI with policy implications, based on the OECD AI Principles, which promote values such as fairness, transparency, safety and accountability and policies such as building human capacity and fostering international cooperation.

Mandate: The OECD is working to facilitate the implementation of the Principles through the establishment and ongoing work of the OECD AI Policy Observatory, which aims to provide evidence and guidance on AI metrics and to constitute a hub for dialogue and sharing best practices on AI policies.

Activities: The Observatory works to facilitate the implementation of the Principles through a myriad of activities, including:
- Establishing the OECD.AI Network of Experts and the OECD Working Party on Artificial Intelligence Governance (part of the OECD's Committee on Digital Economy Policy), which both play a key role in the implementation of the Principles by driving activities
- Developing a framework for AI classification systems
- Publishing reports on tools for trustworthy AI, and analysis on national AI policies
- Hosting events on putting the OECD AI Principles into practice and expert forums on issues such as generative AI
- OECD.AI also acts as the Secretariat for the GPAI
- Publication of a paper that aims to inform policy considerations and support decision makers in addressing the challenges posed by generative AI

The working party oversees and gives direction to the CDEP work programme on AI policy and governance. This includes:
- Analysis of the design, implementation, monitoring and evaluation of national AI policies and action plans;
- AI impact assessment; approaches for trustworthy and accountable AI;
- Supervising measurement and data efforts as part of the OECD.AI Observatory’s pillar on trends & data;
- Conducting foresight work on AI and on related emerging technologies;
- Supporting the implementation of OECD standards relating to AI;
- Serving as a forum for exchanging experience and documenting approaches for advancing trustworthy AI that benefits people and planet;
- Developing tools, methods and guidance to advance the responsible stewardship of trustworthy AI, including the OECD.AI Policy Observatory and Globalpolicy.AI platforms;
- Supporting the collaboration between governments and other stakeholders on assessing and managing AI risks;
- Conducting outreach to non-OECD Member countries to support the implementation of OECD standards relating to AI.

The OECD.AI Network of Experts works with the working party as an informal group of AI experts from government, business, academia and civil society. The network:
- Provides AI-specific policy advice for the OECD’s work on AI policy
- Contributes to the OECD Policy Observatory on AI, OECD.AI
- Provides a space for the international community to have in-depth discussions about shared AI policy opportunities and challenges
Cross-cutting, MultidisciplinaryReport/Resource
33REAIM 2023 Call to ActionGovernment of NetherlandsN/AGov't-led (Multilateral)Initiative Overview: At the first REAIM Summit in 2023, co-hosted by the Netherlands and the Republic of Korea (ROK), government participants agreed a joint call to action on the responsible development, deployment and use of AI in the military domain. The call emphasizes the importance of the responsible use of AI in the military domain, employed in full accordance with international legal obligations and in a way that does not undermine international security, stability and accountability.

Mandate: call to action; follow-up is likely ahead of the next REAIM summit, to be hosted by the ROK.

Activities: implementation of call to action until next Summit
Sectoral, SafetyPrinciples/Voluntary Guidance
36
34VARIOUS (Principles on AI for Climate Action / AI for SDGs Observatory etc.)AI for Sustainable Development Goals Think TankAI4SDGsMultistakeholderInitiative Overview: The AI for Sustainable Development Goals (AI4SDGs) Think Tank is an online open service, global repository and analytic engine of AI projects and proposals that impact the UN Sustainable Development Goals. It is associated with the AI4SDGs Research Program and led by the Center for Long-term Artificial Intelligence (CLAI), with support from academic institutions and industry partners around the world.

It runs multiple initiatives related to AI, including the Principles on AI for Climate Action and the AI for SDGs Observatory.

Mandate: n/a

Activities: implementation of AI principles for climate action; AI for SDGs observatory, cooperation network; AI Carbon Efficiency Observatory; cultural interactions engine, cross-cultural cooperation on AI; principles on AI for biodiversity conservation, working group on AI for biodiversity; symbiosis panorama; principles on symbiosis for natural life and living AI.
SectoralPrinciples/Voluntary Guidance
37
35VARIOUS (f<A+I>r Network / Feminist AI Research Network Global Webinars / Global Directory)A+ Alliance for Inclusive AlgorithmsA+ AllianceMultistakeholderInitiative Overview: The <A+> Alliance for inclusive algorithms is organized by Women@theTable and Instituto Tecnológico de Costa Rica (TEC). It is a global, multidisciplinary, feminist coalition of academics, activists and technologists prototyping the future of artificial intelligence and automated decision-making to accelerate gender equality through technology and innovation. The <A+> Alliance was created for and by women and girls to leave no one behind.

The <A+> Alliance's goal is to springboard from well-established descriptions of ‘what’ potential harms exist in algorithms and machine learning to an urgent focus on ‘how’ new data, algorithms, models, policies and systems can be researched and piloted to course-correct future harms and achieve gender-transformative change. The Alliance does this via the Global f<A+I>r Network, whose aim is to support the skill and imagination of Global South/Majority World feminists in producing effective, innovative, interdisciplinary models that harness emerging technologies to correct for real-life bias and barriers to women's rights, representation and equality.

Mandate:
As outlined in their declaration, the A+ Alliance aims to advocate for and adopt guidelines that establish accountability and transparency for algorithmic decision making (ADM) in both the public and private sectors; to take clear proactive steps to include an intersectional variety and equal numbers of women and girls in the creation, design and coding of ADM; and to ensure international cooperation and an approach to ADM and machine learning grounded in human rights.

Activities:
Incubating, normalizing, building and networking Feminist AI:
- Building New Models: prototyping the future of AI and algorithmic decision-making; providing research funding and mentorship
- f<A+I>r Network: f<A+I>r uses a combination of public and private bi-monthly regional Hub meetings and bi-monthly Global meetings (in collaboration with sister network AI4D Gender & Inclusion Network Africa, funded by IDRC/SIDA) to foster network building and South-South knowledge exchange, and to extend and deepen boundary partnerships
- Hosting a series of Feminist AI Research Network Global Webinars
- Promoting Feminist AI: with f<A+I>r, the Alliance has created the first online Global Directory to highlight and promote feminist doers and thinkers from the Majority World f<A+I>r network and their innovations.
Crosscutting, MultidisciplinaryReport/Resource
38
36AI for Good Global SummitInternational Telecommunication UnionITUIOInitiative Overview: The AI for Good Global Summit is the leading action-oriented United Nations platform promoting AI to advance health, climate, gender, inclusive prosperity, sustainable infrastructure, and other global development priorities. AI for Good is organized by the International Telecommunication Union (ITU) – the UN specialized agency for information and communication technology – in partnership with 40 UN sister agencies and co-convened with the government of Switzerland.

Mandate: The goal of AI for Good is to identify practical applications of AI to advance the UN Sustainable Development Goals (SDGs) and scale those solutions for global impact.

Activities: The last AI for Good Global Summit took place in July 2023.
Technical, SectoralEvent(s)
39
37VARIOUS (Toolkit for Responsible AI Innovation in Law Enforcement / AI for Safer Children Hub / UNICRI Centre for AI and Robotics)UN Interregional Crime and Justice Research InstituteUNICRIIOInitiative Overview: Publication of the Toolkit for Responsible AI Innovation in Law Enforcement, principles & initiative; launch of the AI for Safer Children Hub – a unique centralized platform containing information on currently available AI tools to prevent and combat online child sexual exploitation and abuse; and the UNICRI Centre for AI and Robotics.

Established in 2019, the UNICRI Centre for Artificial Intelligence and Robotics aims to ensure that all stakeholders, including policy makers and government officials, possess improved knowledge and understanding of both the risks and benefits of such technologies, and that they commence discussion of these risks and potential solutions in an appropriate and balanced manner.

Mandate:
The toolkit is to serve as a practical guide for law enforcement agencies on developing and deploying AI responsibly, while respecting human rights and ethics principles.

The AI for Safer Children initiative seeks to build the capacity of law enforcement and related authorities worldwide in exploring the positive potential of AI to combat online child sexual exploitation and abuse. Moreover, the initiative aims to enhance cooperation, awareness-raising and advocacy on the issue of online child sexual exploitation and abuse. The hub includes:
- AI Tools Catalogue to provide law enforcement users with information on the range of AI tools that currently exist and how they can identify potential tools that meet their specific needs
- A Learning Centre to enable law enforcement agents to learn more about leveraging AI to rescue children faster, investigative techniques to improve their workflow, and how to safeguard their mental wellbeing.
- Networking to strengthen communication and networking on using AI for combating online child sexual exploitation and abuse throughout the law enforcement community

The aim is for the UNICRI Centre for Artificial Intelligence and Robotics to serve as an international resource on matters related to AI and robotics.

Activities:
evolving instrument to be updated and revised regularly; capacity-building initiatives; support for countries looking to draft national guidelines or strategies; a strategy ensuring the ethical and legal soundness of the initiative. The platform will also offer guidance on how to navigate the challenges faced throughout the development, procurement and deployment of such AI technologies to ensure that they produce the expected impact.

UNICRI Centre for Artificial Intelligence and Robotics activities include:
- Performance of a risk assessment and stakeholder mapping and analysis
- Implementation of training and mentoring programmes
- Contributing to the UN Sustainable Development Goals through facilitation of technology exchange and by orienting policies to promote security and development
- Convening of expert meetings
- Organization of policy makers’ awareness-raising workshops
- Organization of international conferences
Sectoral, SafetyPrinciples/Voluntary Guidance
40
38AI Governance Alliance World Economic ForumWEFMultistakeholderInitiative Overview: In June 2023, the World Economic Forum launched the AI Governance Alliance, a dedicated initiative focused on responsible generative artificial intelligence (AI). This initiative is intended to build on the recommendations from the Responsible AI Leadership: A Global Summit on Generative AI held in April 2023. The initiative will prioritize three main areas: ensuring safe systems and technologies, promoting sustainable applications and transformation, and contributing to resilient governance and regulation. The initiative is open to stakeholders from various sectors, including businesses, academia and regulatory bodies, to contribute their expertise and insights.

Publication of AI for Children toolkit in March 2022.

Mandate: Providing guidance on the responsible design, development and deployment of AI systems.

Activities: TBD
Crosscutting, MultidisciplinaryPrinciples/Voluntary Guidance
41
39Political Declaration on Responsible Military Use of AI and AutonomyUS GovernmentN/AGov't-led (Multilateral)Initiative Overview: A principled approach to the military use of AI

Mandate:
to ensure that military use of AI is ethical and responsible, and that it enhances international security

Activities:
endorsement of statement
Sectoral, SafetyPrinciples/Voluntary Guidance
42
40Harnessing AI for better public policyParis Peace ForumPPFMultistakeholderInitiative Overview: The Paris Peace Forum (PPF) is a French government initiative launched in 2018 to create a multi-actor platform in Paris to address global governance issues. Throughout the year, the Forum works with actors from across the world to strengthen the governance of global commons, including on climate, public health, outer space and digital issues. Its annual event gathers heads of state and government and leaders of international organizations, together with civil society and private sector leaders, around concrete solutions for better global governance. The PPF fosters hybrid coalitions for global governance: states and international organizations, NGOs, companies, foundations, philanthropic organizations, development agencies, religious groups, trade unions, think tanks, universities, and civil society at large. Of most relevance to this project is the "Digital Rights in Society" initiative.

Mandate: to define common standards for the use of automated technologies in all jurisdictions, to pave the way for a Digital Bill of Rights.

Activities: The annual Forum will take place 10-11 November, 2023. The Forum provides three different spaces: the Space for Solutions, where any organization can present and advance a new governance project; the Agenda Debate, where stakeholders in global governance can discuss projects, initiatives and ideas to address contemporary challenges; and the Space for Innovation, where specialists present technological solutions. Each year, PPF supports ten governance projects selected from those presented at the annual Forum. Throughout the year, PPF provides these projects with tailored support for their advocacy, communication and organizational development activities.

In 2022, the PPF released a report on AI governance titled "Beyond the North-South Fork on the Road to AI Governance: An Action Plan." The report is an output of the Digital Rights in Society initiative. This is a North-South multi-stakeholder initiative incubated by the Paris Peace Forum that aims to define common standards for the use of automated technologies in all jurisdictions, to pave the way for a Digital Bill of Rights.
Crosscutting, MultidisciplinaryEvent(s)
43
41Working Group on Artificial IntelligenceForum on Information and DemocracyFIDMultistakeholderInitiative Overview: The Forum on Information and Democracy has launched a working group that has commenced its work by gathering broad input from experts around the world. The new Working Group on Artificial Intelligence has 14 members representing a wide range of stakeholders, including civil society, academia, the private sector, and representatives of international organisations (such as UNESCO).

The Forum is the civil society-led implementation body of the Partnership on Information and Democracy, endorsed by 51 democratic states worldwide. Its objective is to implement democratic safeguards in the digital space and address new threats to democracy emerging from the globalization and digitization of our information and communication ecosystem by providing concrete regulatory and policy recommendations. Since its creation, the Forum has already published the reports of 4 working groups: How to End Infodemics (2020), A New Deal for Journalism (2021), Accountability Regimes for Social Networks and their Users (2022) and Pluralism of information in Curation and Indexation of Algorithms (2023).

Mandate: the working group is prioritising research into the following three critical areas:
- 1) The development and deployment of AI systems: Provide recommendations for putting in place guardrails in the design, development and deployment of AI systems to reduce their risks to the information space, respect data privacy of AI subjects and intellectual property, and promote transparency, explainability and contestability of AI systems by AI subjects.
- 2) Accountability regimes: Provide recommendations for putting in place accountability regimes for the developers, deployers, users, and subjects of AI systems with regard to the outputs generated and decisions taken by AI.
- 3) Governance of AI: Provide recommendations on governance options for the deployment, monitoring and use of AI systems.

Activities: Key milestones include:
- 28 September 2023: Launch of the Working Group
- October 2023: Call for public contributions
- Beginning of 2024: Publication of the recommendations
Human Rights (Democracy), Safety, CrosscuttingReport/Resource
44
42European AI Alliance
European CommissionMultilateralInitiative Overview: The European AI Alliance is an initiative of the European Commission that aims to establish an open policy dialogue on Artificial Intelligence within the framework of the EU’s AI Strategy. Since its launch in 2018, the AI Alliance has engaged around 6000 stakeholders through regular events, public consultations and online forum exchanges. The AI Alliance was initially created to steer the work of the High-Level Expert Group on Artificial Intelligence (AI HLEG). After the AI HLEG’s mandate closed, the AI Alliance community has continued to promote Trustworthy AI by sharing best practices among its members and by helping developers of AI and other stakeholders to apply key requirements through the ALTAI tool - a practical Assessment List for Trustworthy AI.

Mandate: to establish an open policy dialogue on Artificial Intelligence.

Activities: The members of the European AI Alliance meet with experts, stakeholders and international actors in the field of AI at regular events. Since the launch of the forum, such events have brought together on average 500 (in-person) to 1000 (virtual) participants each year. The 4th AI Alliance Assembly will take place on 16 and 17 November 2023 in Madrid and will focus on policy aspects under the theme "Leading Trustworthy AI Globally". It will be co-organised by the Commission together with the Spanish Ministry of Economic Affairs and Digital Transformation, in the frame of the Spanish Presidency of the Council of the EU, and will be open to the public for in-person and online participation.
Crosscutting, MultidisciplinaryEvent(s)
45
43Association for Data Robotics and AIadra-eMultistakeholderInitiative Overview: Adra-e supports the AI, Data and Robotics Association (ADR) and Partnership to create the conditions for a sustainable European ecosystem. It is a project funded by the European Commission under Horizon Europe, the EU’s key funding programme for research and innovation, which has a budget of €95.5 billion. The Strategic Research, Innovation and Deployment Agenda (SRIDA) aims to build on the fundamentals of Europe to be world-leading in ADR, both enhancing the revenue-generating potential of companies’ business models and enriching our society as a whole.

Mandate: To support the development of standards and regulations maintaining European technological sovereignty; to map the AI, Data and Robotics landscape and infrastructures to deliver services and build connections between structured initiatives; and to support the update and implementation of the AI, Data and Robotics Strategic Research, Innovation and Deployment Agenda. (among other objectives).

Activities:
- Working groups: the AI Act Dissemination group (aim: to disseminate the AI Act and to deliver a document that makes the AI Act more accessible to people outside the standards community) and the AI Trustworthiness Characterisation group (aim: to fill the gaps in SC 42 standards and bring actionable requirements to the harmonised standards, while serving as an overarching layer connecting and adapting SC 42 standards or others, where they exist, to EU specificities and supporting EU values and principles)
- Development of resources such as the ADR observatory of standards (an online observatory of standards and standardisation activities, developed with a group of experts in the field), ADR Cartography (an open repository of major European and national initiatives in the field of AI, Data and Robotics), and the AI Trust Label (tools to support the understanding of the potentialities, quality, performance and trustworthiness of AI technologies and applications)
- Events including ADR Forums (to identify strategic challenges in the AI, Data and Robotics fields and share best practices to enhance the trustworthiness of ADR in the economy, society and environment), and ADR Convergence Summits (to facilitate high-level multi-stakeholder dialogues and disseminate findings to decision-makers and influencers)
- Roadmap of Actionable Recommendations for the update and implementation of the Strategic Research, Innovation and Deployment Agenda (SRIDA) in the area of AI, Data and Robotics
Observatory
46
44VARIOUS (NATO Data & AI Review Board / NATO Advisory Group on EDT’s / AI Strategy / Autonomy Implementation Plan)
North Atlantic Treaty OrganizationNATOMultilateralInitiative Overview: NATO is working with public and private sector partners, academia and civil society to develop and adopt new technologies, establish international principles of responsible use and maintain NATO’s technological edge through innovation. NATO is also engaging with other international organisations, including the European Union (EU) and the United Nations (UN), to address emerging and disruptive technologies (EDTs). EDTs are a key facet of the NATO 2030 agenda, an initiative to strengthen NATO both militarily and politically and to adopt a more global approach for the Alliance. AI is one of the key priority technology areas laid out in NATO's overarching strategy to guide its relationship with EDTs. In October 2021, NATO Defence Ministers endorsed NATO’s Artificial Intelligence (AI) Strategy, setting out how the Alliance aims to adapt AI to meet operational requirements, and to accelerate and mainstream the secure and trustworthy integration of AI across a range of Alliance capabilities.

In October 2022, NATO Defence Ministers established NATO’s Data and Artificial Intelligence Review Board and NATO’s Autonomy Implementation Plan.

The NATO Advisory Group on Emerging and Disruptive Technologies is an independent group that consists of 12 experts from the private sector and academia. The group, whose membership is renewed every two years, continues to provide concrete short- and long-term recommendations on NATO’s approach to emerging and disruptive technologies. In 2023, its deliverables include inputs to NATO’s Quantum Strategy and NATO’s Biotechnology and Human Enhancement Strategy.

Mandate: The Advisory Group on EDTs provides external advice to NATO and has issued two annual reports as well as inputs to key NATO EDT strategies and efforts. The Data and Artificial Intelligence Review Board serves to operationalise NATO’s Principles of Responsible Use of AI, as set out in NATO’s AI Strategy. The Autonomy Implementation Plan drives a coherent approach to NATO’s autonomy protection and development efforts in line with the Alliance’s norms, values and commitment to international law. The AI Strategy is centred on principles of responsible use for AI in defence, with twin pillars of activities that will help the Alliance foster development and adoption of AI as well as protect against threats arising from this technology.

Activities: The Advisory Group on EDTs releases annual reports with recommendations to NATO. The Data & AI Review Board creates practical Responsible AI toolkits, guides Responsible AI implementation in NATO and supports Allies in their Responsible AI effort. The 2023 Report adds next-generation communications networks as a new EDT area and determines that data has been sufficiently mainstreamed into NATO lines of effort, including data exploitation and digital transformation, to no longer be considered a standalone EDT area.

In February 2023, NATO’s Data and Artificial Intelligence Review Board (DARB) met for the first time to start the development of a user-friendly Responsible AI certification standard.
Security / DefenseReport/Resource
47
45World Summit on AIWorld Summit on AIPrivate Sector-ledInitiative Overview: Since launching in 2017, the World Summit on AI has been involved in the development of strategies on AI and in spotlighting AI's worldwide applications, risks, benefits and opportunities. The event convenes primarily industry representatives in addition to academia (e.g. it is sponsored by Accenture and Microsoft, among others).

Mandate: To convene those from across the global AI ecosystem including "Enterprise to BigTech, Startups, Investors and Science." The summit focuses on AI4Good operating in healthcare, education, STEM and government / strategic level Artificial Intelligence.

Activities: Annual event
Crosscutting / SectoralEvent(s)
48
46VARIOUS (The Forum for Cooperation on Artificial Intelligence / AI & Emerging Technology Initiative / Global Forum on Democracy and Technology)The Brookings InstituteMultistakeholderInitiative Overview: The Forum for Cooperation on Artificial Intelligence (FCAI), a collaboration between the Brookings Institution and the Centre for European Policy Studies, hosts regular AI dialogues among high-level officials from Australia, Canada, the EU, Japan, Singapore, the UK and the US, as well as experts from industry, civil society and academia. Many of the ideas and policy recommendations from the dialogues are reflected in FCAI reports and blogs.

The AI & Emerging Technology Initiative (AIET) strives to promote effective solutions to the most pressing challenges posed by AI and emerging technology.

The Global Forum on Democracy and Technology (GFCT) is an initiative within the Brookings Institution that draws on scholars from across the Institution to develop shared practices and applications for technology that can strengthen democratic societies around the world.

Mandate: The forum aims to identify opportunities for international cooperation on AI regulation, standards, and research and development.

The AIET initiative aims to advance good governance of transformative new technologies.

The GFCT aims to propose solutions to the challenge of how to govern advanced technologies in a way that reinforces liberal norms and values while outcompeting authoritarian models.

Activities:
Forum:
- host a series of roundtables
- issue annual progress reports

AIET:
- Research
- Convenings

GFCT
- Research across three areas: Governance (Multilateral Coalitions and Technology Governance, Platform Governance, and Trustworthy AI); Inequality (Digital Development, Disruptive Innovation, and the Fourth Industrial Revolution); and Security (Autonomous Weapons and Advanced Military Technology, Cybersecurity, Information Manipulation, Technology and Malicious Actors, Technology and Surveillance, Technology and Inequality)
- Convenings
Cross-cutting, MultidisciplinaryEvent(s)
49
47EU-China High-level Digital DialogueEU-China High-level Digital DialogueGov't-led (Bilateral)Initiative Overview: The EU-China High-level Digital Dialogue covers issues such as platform and data regulation, Artificial Intelligence, research and innovation, cross-border flows of industrial data, and the safety of products sold online. The first EU-China High-level Digital Dialogue took place in September 2020, and it has not met since. The resumption of the Dialogue was announced by President Ursula von der Leyen during her visit to Beijing on 6 April 2023. This initiative is part of the EU's broader engagement with China in science, technology and innovation, taking place within the framework of the Joint Roadmap for the future of EU-China cooperation in science, technology, and innovation (currently under discussion).

Mandate: To discuss crucial areas of digital policy and technologies

Activities: convenings
Cross-cutting, MultidisciplinaryPrinciples/Voluntary Guidance
50
48Open Community for Ethics in Autonomous and Intelligent SystemsOCEANISOCEANISMultistakeholderInitiative Overview: OCEANIS is a high-level global forum for discussion, debate and collaboration among organizations interested in the development and use of standards to further the development of autonomous and intelligent systems. Members work together to enhance understanding of the role of standards in facilitating innovation while addressing problems that extend beyond technical solutions to ethics and values. The founding members include a number of technical standard-setting organisations. The OCEANIS community is open to interested organizations from around the world.

Mandate: to address the need for coordination and collaboration related to the unprecedented challenges faced by those working in ICT standards and related spaces, challenges fueled by the rapid rate of technology development and convergence through digitization; however, the open community itself will not act as a standards development body.

Activities:
Members will:
- Share information and coordinate on respective initiatives and programs, starting with the areas of autonomous and intelligent systems;
- Enhance understanding of the role of standards in facilitating innovation, whilst addressing problems that extend beyond technical solutions to ethics and values;
- Jointly organize events at local/regional/global level;
- Identify opportunities for collaborative activities that bolster the development and use of standards in supporting technical, business and policy communities in addressing the technical, societal and ethical implications of technology expansion.

Outputs can take a variety of forms, including but not limited to articles, white papers and workshops.
Crosscutting; SafetyEvent(s)
51
49CET DialogueU.S.-Singapore Critical and Emerging Technology DialogueCETGov't-led (Bilateral)Initiative Overview: The US White House and Singapore issued a joint vision statement on establishing the “U.S.-Singapore CET Dialogue”. Of its six lines of effort, two concern AI and the digital economy/data governance.

Mandate: to advance shared principles and deepen information exchanges for safe, trustworthy, and responsible AI innovation.

Activities:
The vision statement announces the launch of “a bilateral AI Governance Group, a potential multilateral AI Code of Conduct”, as well as “a bilateral Roadmap for Digital Economic Cooperation.”

The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) and the Singapore Infocomm Media Development Authority will also complete a mapping exercise between the NIST AI Risk Management Framework and AI Verify that builds on shared principles. The launch of an AI Governance Group to complement the United States’ Voluntary AI Commitments and a potential multilateral AI Code of Conduct will also contribute towards achieving these aims.

The next CET Dialogue in Singapore will take place in 2024, co-chaired by the National Security Council, Department of State, and Singapore’s Ministry of Foreign Affairs and Ministry of Communications and Information.
Security / Defense; Innovation; Cross-sectoralPrinciples/Voluntary Guidance
52
50Continental Strategy for AfricaAfrican UnionAUGov't-led (Multilateral)Initiative Overview: The African Union High-Level Panel on Emerging Technologies (APET) and the African Union Development Agency (AUDA-NEPAD) convened African Artificial Intelligence experts at a Writing Workshop in Kigali, Rwanda, from February 27 to March 3, 2023, to finalise the drafting of the African Union Artificial Intelligence (AU-AI) Continental Strategy for Africa. The two groups met again in August 2023 to further develop the draft and ensure it encompasses legislative, regulatory, ethical, policy and infrastructural frameworks. The strategy also seeks to address concerns regarding job losses and to enhance job creation opportunities through the integration of AI in various industries.

Mandate: to develop a comprehensive strategy that will guide African countries on how to support inclusive and sustainable AI-enabled socio-economic transformation

Activities:
A series of meetings that laid the foundation for the strategy; next steps are unclear
Development / Ethics / Cross-sectoralRegulation/Policy Guidance