THE SAFE, SECURE, AND TRUSTWORTHY AI EO TRACKER
Created and managed by the Stanford Institute for Human-Centered AI (HAI), Stanford RegLab, and the Stanford Center for Research on Foundation Models (CRFM)
Last updated: June 18, 2024
To provide comments or contribute to the tracker, please e-mail: HAI-Policy@stanford.edu
Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023)
Full text:
Federal Register
White House fact sheets:
01/29/2024; 03/28/2024; 04/29/2024
Our EO tracker analyses: At EO release; At 90 Days
For a definition of the requirement types, see:
https://dl.acm.org/doi/10.1145/3600211.3604701
Below, we provide information on the implementation status of requirements based on 1) the White House’s Fact Sheet; 2) announcements or public statements made by the responsible federal entities or officials; and 3) official documents, media reports, and other conclusive, publicly available evidence regarding specific line items. We mark requirements as “implemented” if there is sufficient public evidence of full implementation separate from the Fact Sheet; “not verifiably implemented” if we could not find conclusive evidence of full implementation; and “in progress” if they have been completed only partially, the extent of progress is ambiguous, or full completion of the requirement necessitates ongoing action.
For a full explanation of our methodology, see: https://hai.stanford.edu/news/transparency-ai-eo-implementation-assessment-90-days
Section | HAI/RegLab Section Shorthand | HAI/RegLab Policy Issue Area(s) | Responsible Stakeholder(s) | Responsible Agency / Entity | Supporting Stakeholder(s) | Requirements (full text) | Related Regulation(s) / Law(s) / Initiative(s) | Public Consultation / Reporting Requirements (if any) | Type of Requirement | Deadline (or frequency) | Date of Deadline (if applicable) | Implementation Status According to White House | Implementation Status According to Responsible Stakeholder | Externally Verifiable Implementation Status
Sec. 2. Policy and Principles
Sec. 2 | Principles | General Principles | Executive departments and agencies | All federal agencies | "When undertaking the actions set forth in this order, executive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations:
(a) Artificial Intelligence must be safe and secure [...]
(b) Promoting responsible innovation, competition, and collaboration will allow the United States to lead in AI and unlock the technology’s potential to solve some of society’s most difficult challenges [...]
(c) The responsible development and use of AI require a commitment to supporting American workers [...]
(d) Artificial Intelligence policies must be consistent with my Administration’s dedication to advancing equity and civil rights [...]
(e) The interests of Americans who increasingly use, interact with, or purchase AI and AI-enabled products in their daily lives must be protected [...]
(f) Americans’ privacy and civil liberties must be protected as AI continues advancing. Artificial Intelligence is making it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, habits, and desires [...]
(g) It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans [...]
(h) The Federal Government should lead the way to global societal, economic, and technological progress, as the United States has in previous eras of disruptive innovation and change [...]"
Ongoing requirement | Unspecified | N/A
Sec. 4. Ensuring the Safety and Security of AI Technology
Sec. 4.1(a)(i) | Safety | Safety Standards, Evaluations, & Benchmarking | Secretary of Commerce through the Director of the National Institute of Standards and Technology (NIST) | Department of Commerce | In coordination with: Secretaries of Energy and Homeland Security, and the heads of other agencies deemed appropriate by the Secretary of Commerce | To "help ensure the development of safe, secure, and trustworthy AI systems [...] shall"

(i) Establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems, including:
(A) developing a companion resource to the AI Risk Management Framework, NIST AI 100-1, for generative AI;
(B) developing a companion resource to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and
(C) launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities through which AI could cause harm, such as in the areas of cybersecurity and biosecurity."
NIST AI Risk Management Framework (NIST AI 100-1) | Time-boxed requirement | Within 270 days | July 26, 2024 | In progress

04/29/2024: White House Fact Sheet states: "Released for public comment draft documents on managing generative AI risks, securely developing generative AI systems and dual-use foundation models, expanding international standards development in AI, and reducing the risks posed by AI-generated content."
In progress

11/01/2023: Secretary of Commerce Gina Raimondo announces the Department of Commerce will establish, under NIST, a U.S. AI Safety Institute to support implementation of the EO, including by facilitating "the development of standards for safety, security, and testing of AI models" and providing "testing environments for researchers to evaluate emerging AI risks and address known impacts."

12/21/2023: NIST issues a request for information related to its assignments under Sections 4.1, 4.5, and 11 of the EO.

01/17/2024: NIST holds a workshop to discuss NIST's creation of a Secure Software Development Framework for Generative AI and for Dual Use Foundation Models.

02/07/2024: Secretary Raimondo announces the leadership of the U.S. AI Safety Institute.

02/08/2024: Secretary Raimondo and NIST Director Laurie Locascio announce the U.S. AI Safety Institute Consortium (AISIC).

04/29/2024: NIST issues an initial public draft of the "Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile" (NIST AI 600-1), a companion to the AI Risk Management Framework (NIST AI 100-1) for generative AI. NIST also issues an initial public draft of the "Secure Software Development Practices for Generative AI and Dual-Use Foundation Models: An SSDF Community Profile" (NIST SP 800-218A), a companion to the prior Secure Software Development Framework (NIST SP 800-218). NIST also announces the NIST GenAI Challenge, a new program to evaluate and measure generative AI technologies that will inform the work of the U.S. AI Safety Institute at NIST.
In progress

12/21/2023: The NIST RFI signals that implementation efforts related to this requirement have already started, well before the deadline. This is only a first step: once NIST has received public input, it will still need to develop and publish the required guidance documents and companion resources.

01/17/2024: NIST workshop indicates that subsection (B) of this requirement is in progress.

02/08/2024: NIST, as well as companies and organizations, confirm launch of the U.S. AI Safety Institute and its goals to support the safe and secure development and testing of AI.

04/29/2024: The draft Generative AI Profile of the AI Risk Management Framework (NIST AI 600-1) and the draft Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST SP 800-218A) indicate that subsections (A) and (B) of this requirement are in progress. The launch of the NIST GenAI Challenge is another indicator that subsection (C) of this requirement is in progress.
Sec. 4.1(a)(ii) | Safety | Safety Standards, Evaluations, & Benchmarking | Secretary of Commerce through the Director of the National Institute of Standards and Technology (NIST) | Department of Commerce | In coordination with: Secretaries of Energy and Homeland Security, Director of the National Science Foundation (NSF), and the heads of other agencies deemed relevant by the Secretary of Commerce | "(ii) [Shall] establish appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. These efforts shall include:
(A) coordinating or developing guidelines related to assessing and managing the safety, security, and trustworthiness of dual-use foundation models; and
(B) in coordination with the Secretary of Energy and the Director of the National Science Foundation (NSF), developing and helping to ensure the availability of testing environments, such as testbeds, to support the development of safe, secure, and trustworthy AI technologies, as well as to support the design, development, and deployment of associated PETs, consistent with section 9(b) of this order."
Time-boxed requirement | Within 270 days | July 26, 2024 | N/A

No statement on implementation as of 05/31/2024.
In progress

12/21/2023: NIST issues a request for information related to its assignments under Sections 4.1, 4.5, and 11 of Executive Order 14110.

02/08/2024: Announcements by Secretary of Commerce Gina Raimondo and NIST Director Laurie Locascio note that the U.S. AI Safety Institute's work will include the development of red-teaming guidelines.
In progress

12/21/2023: The NIST RFI signals that implementation efforts related to this requirement have already started, well before the deadline. This is only a first step: once NIST has received public input, it will still need to develop and publish the required guidance documents and companion resources.

02/08/2024: Reporting by some U.S. AI Safety Institute Consortium members confirms the Consortium's efforts will include developing guidelines, such as on red-teaming.
Sec. 4.1(b) | Safety | Safety Standards, Evaluations, & Benchmarking | Secretary of Energy | Department of Energy | In coordination with: the heads of other Sector Risk Management Agencies (SRMAs) deemed appropriate by the Secretary of Energy

Note: SRMAs are currently the Departments of Agriculture, Defense, Health and Human Services, Homeland Security, Transportation, and the Treasury; the General Services Administration; and the Environmental Protection Agency.
"[S]hall develop and, to the extent permitted by law and available appropriations, implement a plan for developing the Department of Energy’s AI model evaluation tools and AI testbeds.

The Secretary shall undertake this work using existing solutions where possible, and shall develop these tools and AI testbeds to be capable of assessing near-term extrapolations of AI systems’ capabilities. At a minimum, the Secretary shall develop tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards.

The Secretary shall do this work solely for the purposes of guarding against these threats, and shall also develop model guardrails that reduce such risks. The Secretary shall, as appropriate, consult with private AI laboratories, academia, civil society, and third-party evaluators, and shall use existing solutions."
6 U.S.C. 650(23) (defines Sector Risk Management Agency)

Note: Presidential Policy Directive-21 designates the SRMAs, but the White House has stated it is rewriting PPD-21.
Time-boxed requirement | Within 270 days | July 26, 2024
Sec. 4.2(a)(i) | Safety | Safety Standards, Evaluations, & Benchmarking | Secretary of Commerce | Department of Commerce | To "ensure and verify the continuous availability of safe, reliable, and effective AI in accordance with the Defense Production Act, as amended, 50 U.S.C. 4501 et seq., including for the national defense and the protection of critical infrastructure, the Secretary of Commerce shall require:

(i) Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records regarding the following:
(A) any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;
(B) the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights; and
(C) the results of any developed dual-use foundation model’s performance in relevant AI red-team testing based on guidance developed by NIST pursuant to subsection 4.1(a)(ii) of this section, and a description of any associated measures the company has taken to meet safety objectives, such as mitigations to improve performance on these red-team tests and strengthen overall model security. Prior to the development of guidance on red-team testing standards by NIST pursuant to subsection 4.1(a)(ii) of this section, this description shall include the results of any red-team testing that the company has conducted relating to lowering the barrier to entry for the development, acquisition, and use of biological weapons by non-state actors; the discovery of software vulnerabilities and development of associated exploits; the use of software or tools to influence real or virtual events; the possibility for self-replication or propagation; and associated measures to meet safety objectives"
Defense Production Act, 50 U.S.C. 4501 et seq. | Time-boxed requirement | Within 90 days | January 28, 2024 | Implemented

01/29/2024: White House Fact Sheet states: Department of Commerce "Used Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce. These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems."
Implemented / In progress

01/26/2024: Secretary of Commerce Gina Raimondo announces during a public event that the Defense Production Act is being used to "do a survey requiring companies to share with us every time they train a new large language model, and share with us the results—the safety data—so we can review it."

01/29/2024: Official social media statements from the Department of Commerce claim implementation using the same language as the White House Fact Sheet.
In progress

01/29/2024: Secretary of Commerce's statement and public reporting signal that the agency has started using the Defense Production Act to implement this requirement, though more details are not available. For example, the agency has not shared which companies are required to comply with these new rules. Full implementation will also require ongoing collection and evaluation of developers' information, particularly given pushback by companies, lobbyists, and members of Congress. No further conclusive evidence found as of 05/31/2024.

02/07/2024: NIST announces key executive leadership at the U.S. AI Safety Institute (housed at NIST), which will support the implementation of this and other EO requirements through work on evaluation guidances and benchmarks, testing environments, and industry standards. The launch of the Institute indicates that the Commerce Department is actively working toward the ongoing implementation of this requirement.
Sec. 4.2(a)(ii) | Safety | Safety Standards, Evaluations, & Benchmarking | Secretary of Commerce | Department of Commerce | "(ii) [Shall require] companies, individuals, or other organizations or entities that acquire, develop, or possess a potential large-scale computing cluster to report any such acquisition, development, or possession, including the existence and location of these clusters and the amount of total computing power available in each cluster." | Defense Production Act, 50 U.S.C. 4501 et seq. | Time-boxed requirement | Within 90 days | January 28, 2024 | Implemented

01/29/2024: White House Fact Sheet states: Department of Commerce "Used Defense Production Act authorities to compel developers of the most powerful AI systems to report vital information, especially AI safety test results, to the Department of Commerce. These companies now must share this information on the most powerful AI systems, and they must likewise report large computing clusters able to train these systems."
Implemented / In progress

01/26/2024: Secretary of Commerce Gina Raimondo announces during a public event that the Defense Production Act is being used to require the submission of information from developers, though no explicit mention is made of large-scale computing clusters.

01/29/2024: Official social media statements from the Department of Commerce claim implementation using the same language as the White House Fact Sheet.
In progress

01/26/2024: Secretary of Commerce's statement and public reporting signal that the agency has started using the Defense Production Act to implement this requirement, though more details are not available. For example, the agency has not shared which companies are required to comply with these new rules. Full implementation will also require ongoing collection and evaluation of developers' information, particularly given pushback by companies, lobbyists, and members of Congress. No further conclusive evidence found as of 05/31/2024.

02/07/2024: NIST announces key executive leadership at the U.S. AI Safety Institute (housed at NIST), which will support the implementation of this and other EO requirements through work on evaluation guidances and benchmarks, testing environments, and industry standards. The launch of the Institute indicates that the Commerce Department is actively working toward the ongoing implementation of this requirement.
Sec. 4.2(b) | Safety | Safety Standards, Evaluations, & Benchmarking | Secretary of Commerce | Department of Commerce | In consultation with: Secretaries of State, Defense, Energy, and the Director of National Intelligence | "[S]hall define, and thereafter update as needed on a regular basis, the set of technical conditions for models and computing clusters that would be subject to the reporting requirements of subsection 4.2(a) of this section. Until such technical conditions are defined, the Secretary shall require compliance with these reporting requirements for:

(i) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations; and

(ii) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI."
Open-ended requirement | Unspecified

(Update as needed on a "regular basis")
N/A
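The interim thresholds in Sec. 4.2(b) can be expressed as a short decision check. The sketch below is illustrative only, not an official compliance tool: the function and constant names are our own, and we read the cluster-capacity clause ("a theoretical maximum computing capacity of 10^20 ... operations per second") as "at least 10^20 operations per second."

```python
# Illustrative sketch of the interim reporting thresholds in EO 14110
# Sec. 4.2(b), which apply until the Secretary of Commerce defines
# updated technical conditions. Names and structure are our own.

MODEL_OPS_THRESHOLD = 1e26      # models trained with > 10^26 total operations
BIO_MODEL_OPS_THRESHOLD = 1e23  # models trained primarily on biological sequence data
CLUSTER_OPS_PER_SEC = 1e20      # theoretical max operations/second for training AI
CLUSTER_NETWORK_GBITS = 100     # data-center networking of over 100 Gbit/s


def model_must_report(training_ops: float, primarily_bio_data: bool) -> bool:
    """Sec. 4.2(b)(i): does a trained model trigger the reporting requirement?"""
    if primarily_bio_data:
        return training_ops > BIO_MODEL_OPS_THRESHOLD
    return training_ops > MODEL_OPS_THRESHOLD


def cluster_must_report(co_located: bool, network_gbit_s: float,
                        peak_ops_per_sec: float) -> bool:
    """Sec. 4.2(b)(ii): does a computing cluster trigger the reporting requirement?

    All three conditions must hold: physical co-location in a single
    datacenter, networking over 100 Gbit/s, and peak capacity at the
    10^20 ops/sec level (our reading of the EO's "of 10^20" phrasing).
    """
    return (co_located
            and network_gbit_s > CLUSTER_NETWORK_GBITS
            and peak_ops_per_sec >= CLUSTER_OPS_PER_SEC)


# A general-purpose model trained with 3e26 operations must be reported:
print(model_must_report(3e26, primarily_bio_data=False))  # True
# A 5e25-operation run falls below the general threshold:
print(model_must_report(5e25, primarily_bio_data=False))  # False
```

Note that the model thresholds use strict "greater than" comparisons, mirroring the EO's "greater than" language, while the biological-data carve-out lowers the bar by three orders of magnitude.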
Sec. 4.2(c) | Safety | Critical Infrastructure and Cybersecurity | Secretary of Commerce | Department of Commerce | "Because I find that additional steps must be taken to deal with the national emergency related to significant malicious cyber-enabled activities declared in Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), as amended by Executive Order 13757 of December 28, 2016 (Taking Additional Steps to Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities), and further amended by Executive Order 13984, to address the use of United States Infrastructure as a Service (IaaS) Products by foreign malicious cyber actors, including to impose additional record-keeping obligations with respect to foreign transactions and to assist in the investigation of transactions involving foreign malicious cyber actors, I hereby direct the Secretary of Commerce, within 90 days of the date of this order, to:

(i) Propose regulations that require United States IaaS Providers to submit a report to the Secretary of Commerce when a foreign person transacts with that United States IaaS Provider to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity (a “training run”). Such reports shall include, at a minimum, the identity of the foreign person and the existence of any training run of an AI model meeting the criteria set forth in this section, or other criteria defined by the Secretary in regulations, as well as any additional information identified by the Secretary.

(ii) Include a requirement in the regulations proposed pursuant to subsection 4.2(c)(i) of this section that United States IaaS Providers prohibit any foreign reseller of their United States IaaS Product from providing those products unless such foreign reseller submits to the United States IaaS Provider a report, which the United States IaaS Provider must provide to the Secretary of Commerce, detailing each instance in which a foreign person transacts with the foreign reseller to use the United States IaaS Product to conduct a training run described in subsection 4.2(c)(i) of this section. Such reports shall include, at a minimum, the information specified in subsection 4.2(c)(i) of this section as well as any additional information identified by the Secretary.

(iii) Determine the set of technical conditions for a large AI model to have potential capabilities that could be used in malicious cyber-enabled activity, and revise that determination as necessary and appropriate. Until the Secretary makes such a determination, a model shall be considered to have potential capabilities that could be used in malicious cyber-enabled activity if it requires a quantity of computing power greater than 10^26 integer or floating-point operations and is trained on a computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum compute capacity of 10^20 integer or floating-point operations per second for training AI."
Executive Order 13694, Executive Order 13757, Executive Order 13984

International Emergency Economic Powers Act, 50 U.S.C. 1701 et seq. (authorized to employ, as necessary, per Sec. 4.2 (e))
Time-boxed requirement | Within 90 days | January 28, 2024 | Implemented

01/29/2024: White House Fact Sheet states: "Proposed a draft rule that proposes to compel U.S. cloud companies that provide computing power for foreign AI training to report that they are doing so. The Department of Commerce’s proposal would, if finalized as proposed, require cloud providers to alert the government when foreign clients train the most powerful models, which could be used for malign activity."
Implemented

01/29/2024: The Commerce Department's Bureau of Industry and Security issues a notice of proposed rulemaking and request for comment on a draft rule that proposes "a process for U.S. IaaS providers to report to the Department when they have knowledge they will engage or have engaged in a transaction with a foreign person that could allow that foreign person to train a large AI model with potential capabilities that could be used in malicious cyber-enabled activity." The draft rule states that the Secretary will determine "the set of technical conditions that a large AI model must possess in order to have the potential capabilities that could be used in malicious cyber-enabled activity" and would give the Secretary the authority to enact prohibitions or conditions on customers, potential customers, or accounts within certain foreign jurisdictions.
Implemented

01/29/2024: Bureau of Industry and Security's NPRM/RFC satisfies this requirement.
Sec. 4.2(d) | Safety | Critical Infrastructure and Cybersecurity | Secretary of Commerce | Department of Commerce | "[S]hall propose regulations that require United States IaaS Providers to ensure that foreign resellers of United States IaaS Products verify the identity of any foreign person that obtains an IaaS account (account) from the foreign reseller. These regulations shall, at a minimum:

(i) Set forth the minimum standards that a United States IaaS Provider must require of foreign resellers of its United States IaaS Products to verify the identity of a foreign person who opens an account or maintains an existing account with a foreign reseller, including:
(A) the types of documentation and procedures that foreign resellers of United States IaaS Products must require to verify the identity of any foreign person acting as a lessee or sub-lessee of these products or services;
(B) records that foreign resellers of United States IaaS Products must securely maintain regarding a foreign person that obtains an account, including information establishing:
(1) the identity of such foreign person, including name and address;
(2) the means and source of payment (including any associated financial institution and other identifiers such as credit card number, account number, customer identifier, transaction identifiers, or virtual currency wallet or wallet address identifier);
(3) the electronic mail address and telephonic contact information used to verify a foreign person’s identity; and
(4) the Internet Protocol addresses used for access or administration and the date and time of each such access or administrative action related to ongoing verification of such foreign person’s ownership of such an account; and
(C) methods that foreign resellers of United States IaaS Products must implement to limit all third-party access to the information described in this subsection, except insofar as such access is otherwise consistent with this order and allowed under applicable law;

(ii) Take into consideration the types of accounts maintained by foreign resellers of United States IaaS Products, methods of opening an account, and types of identifying information available to accomplish the objectives of identifying foreign malicious cyber actors using any such products and avoiding the imposition of an undue burden on such resellers; and

(iii) Provide that the Secretary of Commerce, in accordance with such standards and procedures as the Secretary may delineate and in consultation with the Secretary of Defense, the Attorney General, the Secretary of Homeland Security, and the Director of National Intelligence, may exempt a United States IaaS Provider with respect to any specific foreign reseller of their United States IaaS Products, or with respect to any specific type of account or lessee, from the requirements of any regulation issued pursuant to this subsection. Such standards and procedures may include a finding by the Secretary that such foreign reseller, account, or lessee complies with security best practices to otherwise deter abuse of United States IaaS Products."
International Emergency Economic Powers Act, 50 U.S.C. 1701 et seq. (authorized to employ, as necessary, per Sec. 4.2(e)) | Time-boxed requirement | Within 180 days | April 27, 2024 | N/A

No statement on implementation as of 05/31/2024.
Implemented

01/29/2024: The Commerce Department's Bureau of Industry and Security issues a notice of proposed rulemaking and request for comment on a draft rule that would "require U.S. IaaS providers to require foreign resellers of their U.S. IaaS products to verify the identity of foreign persons who open or maintain an account with a foreign reseller." It proposes minimum standards for identity verification, risk factors to be considered in abuse deterrence programs, and standards and procedures for exemptions.
Implemented

01/29/2024: Bureau of Industry and Security's NPRM/RFC satisfies this requirement, about three months ahead of the deadline.
Sec. 4.3(a)(i) | Safety | Critical Infrastructure and Cybersecurity | The head of each agency with relevant regulatory authority over critical infrastructure, the heads of relevant SRMAs | Other federal entities | In coordination with: Director of the Cybersecurity and Infrastructure Security Agency (CISA) | "To ensure the protection of critical infrastructure, the following actions shall be taken:"

(i) "[S]hall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities. Independent regulatory agencies are encouraged, as they deem appropriate, to contribute to sector-specific risk assessments."
Time-boxed / Ongoing requirement | Within 90 days

(And "at least annually thereafter")
January 28, 2024 | Implemented

01/29/2024: White House Fact Sheet states: Sector Risk Management Agencies "Completed risk assessments covering AI’s use in every critical infrastructure sector. Nine agencies—including the Department of Defense, the Department of Transportation, the Department of Treasury, and Department of Health and Human Services—submitted their risk assessments to the Department of Homeland Security. These assessments, which will be the basis for continued federal action, ensure that the United States is ahead of the curve in integrating AI safely into vital aspects of society, such as the electric grid."
Implemented

04/29/2024: DOE issues a summary report on the "Potential Benefits and Risks of AI for Critical Energy Infrastructure" in collaboration with the Lawrence Livermore National Laboratory (LLNL) and energy sector partners.
In progress

01/12/2024: Cybersecurity Insider reporting claims that the requirement has been mostly fulfilled, but does not confirm submission of information to DHS. We could not find conclusive details on implementation independent of the White House statement as of 02/08/2024, though the sensitivity and national security implications of the topic likely preclude further public reporting.

04/29/2024: DOE summary report satisfies DOE's responsibilities under this requirement, though no conclusive evidence found regarding other agencies' fulfillment of this requirement as of 05/31/2024.
Sec. 4.3(a)(ii) | Safety | Critical Infrastructure and Cybersecurity | Secretary of the Treasury | Department of Treasury | (ii) "[S]hall issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks." | Public report on AI cybersecurity risks for financial institutions | Time-boxed requirement | Within 150 days | March 28, 2024 | Implemented

03/28/2024: White House Fact Sheet states: The Department of Treasury "published a report examining AI-related cybersecurity and fraud risks and best practices for financial institutions."
Implemented

03/27/2024: Treasury announces the release of a report on "Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector" that outlines best practices for managing AI-related cybersecurity risks.
Implemented

03/27/2024: The Treasury's report on managing AI-specific cybersecurity risks in the financial services sector satisfies this requirement.
Sec. 4.3(a)(iii) | Safety | Critical Infrastructure and Cybersecurity | Secretary of Homeland Security | Department of Homeland Security | In coordination with: Secretary of Commerce, SRMAs, and other regulators | (iii) "[S]hall incorporate as appropriate the AI Risk Management Framework, NIST AI 100-1, as well as other appropriate security guidance, into relevant safety and security guidelines for use by critical infrastructure owners and operators." | NIST AI Risk Management Framework, NIST AI 100-1 | Time-boxed requirement | Within 180 days | April 27, 2024 | Implemented

04/29/2024: White House
Fact Sheet states: The Department of Homeland Security "Incorporated NIST's AI Risk Management Framework, and other AI-related guidance, into security guidelines covering critical infrastructure." The guidelines are "the first AI safety and security guidelines for critical infrastructure owners and operators" and were informed by "the completed work of nine agencies to assess AI risks across all sixteen critical infrastructure sectors."
Implemented

04/26/2024: DHS announces the release of Safety and Security Guidelines for Critical Infrastructure Owners and Operators that aim to mitigate the AI risks to critical infrastructure identified by the Cybersecurity and Infrastructure Security Agency’s (CISA) cross-sector analysis of sector-specific AI risk assessments completed by SRMAs and relevant independent regulatory agencies in January 2024.
Implemented

04/26/2024: The DHS release of safety and security guidelines satisfies this requirement.
19
Sec. 4.3(a)(iv)SafetyCritical Infrastructure and CybersecurityAssistant to the President for National Security Affairs and the Director of OMBExecutive Office of the PresidentIn consultation with: Secretary of Homeland Security(iv) "[S]hall coordinate work by the heads of agencies with authority over critical infrastructure to develop and take steps for the Federal Government to mandate such guidelines, or appropriate portions thereof, through regulatory or other appropriate action. Independent regulatory agencies are encouraged, as they deem appropriate, to consider whether to mandate guidance through regulatory action in their areas of authority and responsibility."Time-boxed requirementWithin 240 days of completion of the guidelines described in 4.3(a)(iii)December 22, 2024
20
Sec. 4.3(a)(v)SafetyCritical Infrastructure and CybersecuritySecretary of Homeland SecurityDepartment of Homeland Security(v) "[S]hall establish an Artificial Intelligence Safety and Security Board as an advisory committee pursuant to section 871 of the Homeland Security Act of 2002 (Public Law 107-296). The Advisory Committee shall include AI experts from the private sector, academia, and government, as appropriate, and provide to the Secretary of Homeland Security and the Federal Government’s critical infrastructure community advice, information, or recommendations for improving security, resilience, and incident response related to AI usage in critical infrastructure."Homeland Security Act of 2002Open-ended requirementUnspecifiedN/AImplemented

04/29/2024: White House Fact Sheet states: "Launched the AI Safety and Security Board to advise the Secretary of Homeland Security, the critical infrastructure community, other private sector stakeholders, and the broader public on the safe and secure development and deployment of AI technology in our nation’s critical infrastructure."
Implemented

04/26/2024: DHS announces the establishment of the AI Safety and Security Board, which will "help critical infrastructure stakeholders, such as transportation service providers, pipeline and power grid operators, and internet service providers, more responsibly leverage AI technologies. It will also develop recommendations to prevent and prepare for AI-related disruptions to critical services that impact national or economic security, public health, or safety." The Board's 22 inaugural members represent a range of sectors, including software and hardware company executives, critical infrastructure operators, public officials, the civil rights community, and academia, among them Stanford HAI Co-Director Dr. Fei-Fei Li.
Implemented

04/26/2024: DHS announcement of the establishment of its AI Safety and Security Board satisfies this requirement.
21
Sec. 4.3(b)(ii)SafetyCritical Infrastructure and CybersecuritySecretary of Defense (for national security systems), Secretary of Homeland Security (for non-national security systems)Department of Defense, Department of Homeland SecurityIn consultation with: the heads of other relevant agencies as deemed appropriate (see Sec. 4.3 (b)(i))(ii) "[S]hall, consistent with applicable law, each develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities, such as large-language models, to aid in the discovery and remediation of vulnerabilities in critical United States Government software, systems, and networks."Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House
Fact Sheet states: "Piloted new AI tools for identifying vulnerabilities in vital government software systems. The Department of Defense (DoD) made progress on a pilot for AI that can find and address vulnerabilities in software used for national security and military purposes." DHS also "piloted different tools to identify and close vulnerabilities in other critical government software systems that Americans rely on every hour of every day."
In progress

04/29/2024: DHS announces it has implemented innovative pilot projects to deploy and test AI technology in specific mission areas. CISA completed an operational pilot using AI cybersecurity systems to aid in the detection and remediation of vulnerabilities in critical U.S. Government software, systems, and networks. Other uses include: the Homeland Security Investigations' use of AI to enhance investigative processes focused on detecting fentanyl and increasing the efficiency of investigations related to combating child sexual exploitation; the Federal Emergency Management Agency's use of AI to help communities plan for and develop hazard mitigation plans to build resilience and minimize risks; and the United States Citizenship and Immigration Services' use of AI to improve immigration officer training.

05/24/2024: DefenseScoop reports that an Army Cyber Command spokesperson said the DoD has designated its Panoptic Junction tool to pilot AI capabilities that can enhance the detection of anomalous and malicious cyber activity.
In progress

04/29/2024: DHS pilot projects satisfy DHS' responsibilities as part of this requirement.

05/24/2024: DefenseScoop report on DoD piloting its Panoptic Junction tool to identify and close vulnerabilities indicates that this requirement is being implemented. However, we could not find official DoD statements on implementation independent of the White House and media reporting as of 05/31/2024, though the sensitivity and national security implications of the topic likely preclude further public reporting.
22
Sec. 4.3(b)(iii)SafetyCritical Infrastructure and CybersecuritySecretary of Defense (for national security systems), Secretary of Homeland Security (for non-national security systems)Department of Defense, Department of Homeland SecurityIn consultation with: the heads of other relevant agencies as deemed appropriate (see Sec. 4.3 (b)(i))(iii) "[S]hall each provide a report to the Assistant to the President for National Security Affairs on the results of actions taken pursuant to the plans and operational pilot projects required by subsection 4.3(b)(ii) of this section, including a description of any vulnerabilities found and fixed through the development and deployment of AI capabilities and any lessons learned on how to identify, develop, test, evaluate, and deploy AI capabilities effectively for cyber defense."Time-boxed requirementWithin 270 daysJuly 26, 2024
23
Sec. 4.4(a)(i)SafetyCBRN ThreatsSecretary of Homeland SecurityDepartment of Homeland SecurityIn consultation with: Secretary of Energy, Director of the Office of Science and Technology Policy (OSTP)"To better understand and mitigate the risk of AI being misused to assist in the development or use of CBRN threats — with a particular focus on biological weapons — the following actions shall be taken:"

(i) [S]hall evaluate the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats, including, as appropriate, the results of work conducted under section 8(b) of this order. The Secretary of Homeland Security shall:
(A) consult with experts in AI and CBRN issues from the Department of Energy, private AI laboratories, academia, and third-party model evaluators, as appropriate, to evaluate AI model capabilities to present CBRN threats — for the sole purpose of guarding against those threats — as well as options for minimizing the risks of AI model misuse to generate or exacerbate those threats; and
(B) submit a report to the President that describes the progress of these efforts, including an assessment of the types of AI models that may present CBRN risks to the United States, and that makes recommendations for regulating or overseeing the training, deployment, publication, or use of these models, including requirements for safety evaluations and guardrails for mitigating potential threats to national security."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

4/29/2024: White House Fact Sheet states: "Evaluated -- and submitted a report to the President that discusses -- AI's potential to cause or exacerbate chemical, biological, radiological, and nuclear threats, as well as its ability to help counter such threats."
Implemented

04/29/2024: DHS announces the completion of an AI CBRN Report, led by the Countering Weapons of Mass Destruction Office, that evaluates “the potential for AI to be misused to enable the development or production of CBRN threats, while also considering the benefits and application of AI to counter these threats.” In a Fact Sheet, DHS releases selected findings from the report to the President.
Implemented

04/29/2024: DHS release of selected findings from its AI CBRN report satisfies this requirement.
24
Sec. 4.4(a)(ii)SafetyCBRN ThreatsSecretary of DefenseDepartment of DefenseIn consultation with: Assistant to the President for National Security Affairs, Director of OSTP

In contract with: National Academies of Sciences, Engineering, and Medicine
"(ii) [S]hall . . . enter into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct — and submit to the Secretary of Defense, the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, the Director of OSTP, and the Chair of the Chief Data Officer Council — a study that:
(A) assesses the ways in which AI can increase biosecurity risks, including risks from generative AI models trained on biological data, and makes recommendations on how to mitigate these risks;
(B) considers the national security implications of the use of data and datasets, especially those associated with pathogens and omics studies, that the United States Government hosts, generates, funds the creation of, or otherwise owns, for the training of generative AI models, and makes recommendations on how to mitigate the risks related to the use of these data and datasets;
(C) assesses the ways in which AI applied to biology can be used to reduce biosecurity risks, including recommendations on opportunities to coordinate data and high-performance computing resources; and
(D) considers additional concerns and opportunities at the intersection of AI and synthetic biology that the Secretary of Defense deems appropriate."
Time-boxed requirementWithin 120 daysFebruary 27, 2024Implemented

03/28/2024: White House
Fact Sheet states: The Department of Defense "entered into a contract with the National Academies of Sciences, Engineering, and Medicine to conduct a study regarding AI, biological data, and biosecurity risks."
In progress

[Undated, first accessed 04/30/2024]: The National Academies of Sciences, Engineering, and Medicine
announce they are in the committee formation phase of a project on "Assessing and Navigating Biosecurity Concerns and Benefits of Artificial Intelligence Use in the Life Sciences." The study will focus specifically on transmissible biological threats that could have significant epidemic- and pandemic-scale consequences. The committee will hold its first closed-door meeting on 06/14/2024.
In progress

[Undated, first accessed 04/30/2024]: The National Academies of Sciences, Engineering, and Medicine
website indicates that this requirement is in progress.
25
Sec. 4.4(b)(i)SafetyCBRN ThreatsDirector of OSTPExecutive Office of the PresidentIn consultation with: Secretaries of State, Defense, the Attorney General, Secretaries of Commerce, Health and Human Services (HHS), Energy, Homeland Security, the Director of National Intelligence, and the heads of other relevant agencies deemed appropriate by Director of OSTP"To reduce the risk of misuse of synthetic nucleic acids, which could be substantially increased by AI’s capabilities in this area, and improve biosecurity measures for the nucleic acid synthesis industry, the following actions shall be taken:"

"(i) [S]hall establish a framework, incorporating, as appropriate, existing United States Government guidance, to encourage providers of synthetic nucleic acid sequences to implement comprehensive, scalable, and verifiable synthetic nucleic acid procurement screening mechanisms, including standards and recommended incentives. As part of this framework, the Director of OSTP shall:
(A) establish criteria and mechanisms for ongoing identification of biological sequences that could be used in a manner that would pose a risk to the national security of the United States; and
(B) determine standardized methodologies and tools for conducting and verifying the performance of sequence synthesis procurement screening, including customer screening approaches to support due diligence with respect to managing security risks posed by purchasers of biological sequences identified in subsection 4.4(b)(i)(A) of this section, and processes for the reporting of concerning activity to enforcement entities."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House
Fact Sheet states: "Established a framework for nucleic acid synthesis screening to help prevent the misuse of AI for engineering dangerous biological materials."
Implemented

04/29/2024: OSTP issues a Framework on Nucleic Acid Synthesis Screening to encourage providers of synthetic nucleic acids to implement comprehensive, scalable, and verifiable screening mechanisms. The framework establishes six criteria/mechanisms for screening.
Implemented

04/29/2024: OSTP
release of Framework on Nucleic Acid Synthesis Screening satisfies this requirement.
26
Sec. 4.4(b)(ii)SafetyCBRN ThreatsSecretary of Commerce through the Director of NISTDepartment of CommerceIn coordination with: Director of OSTP

In consultation with: Secretaries of State, HHS, and the heads of other relevant agencies deemed appropriate by Secretary of Commerce
"(ii) [S]hall initiate an effort to engage with industry and relevant stakeholders, informed by the framework developed under subsection 4.4(b)(i) of this section, to develop and refine for possible use by synthetic nucleic acid sequence providers:
(A) specifications for effective nucleic acid synthesis procurement screening;
(B) best practices, including security and access controls, for managing sequence-of-concern databases to support such screening;
(C) technical implementation guides for effective screening; and
(D) conformity-assessment best practices and mechanisms."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House
Fact Sheet states: The Department of Commerce "launched an effort to engage the nucleic acid synthesis industry on necessary technical implementation details to facilitate adoption of the screening framework" established under 4.4(b)(i). It "has worked to engage the private sector to develop technical guidance to facilitate implementation. Starting 180 days after the framework is announced, agencies will require that grantees obtain synthetic nucleic acids from vendors that screen."
Implemented

02/16/2024: NIST
announces a two-year cooperative research agreement with the nonprofit Engineering Biology Research Consortium (EBRC) to develop screening and safety tools to defend against the potential misuse of AI related to nucleic acid synthesis. As part of the cooperative agreement, the organizations will solicit input from industry, universities, government agencies and other relevant stakeholders.
Implemented

02/16/2024: NIST
announcement of its research cooperation with EBRC satisfies this requirement to initiate a stakeholder engagement effort.
27
Sec. 4.4(b)(iii)SafetyCBRN ThreatsAll agencies that fund life-sciences researchOther federal entities"(iii) [S]hall, as appropriate and consistent with applicable law, establish that, as a requirement of funding, synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework, such as through an attestation from the provider or manufacturer. The Assistant to the President for National Security Affairs and the Director of OSTP shall coordinate the process of reviewing such funding requirements to facilitate consistency in implementation of the framework across funding agencies."Time-boxed requirementWithin 180 days of the establishment of the framework pursuant to subsection 4.4(b)(i)October 26, 2024
28
Sec. 4.4(b)(iv)SafetyCBRN ThreatsSecretary of Homeland SecurityDepartment of Homeland SecurityIn consultation with: the heads of other relevant agencies deemed appropriate by Secretary of Homeland Security"(iv) In order to facilitate effective implementation of the measures described in subsections 4.4(b)(i)-(iii) of this section, the Secretary of Homeland Security, in consultation with the heads of other relevant agencies as the Secretary of Homeland Security may deem appropriate, shall:"

(A) "develop a framework to conduct structured evaluation and stress testing of nucleic acid synthesis procurement screening, including the systems developed in accordance with subsections 4.4(b)(i)-(ii) of this section and implemented by providers of synthetic nucleic acid sequences; and
(B) following development of the framework pursuant to subsection 4.4(b)(iv)(A) of this section, submit an annual report to the Assistant to the President for National Security Affairs, the Director of the Office of Pandemic Preparedness and Response Policy, and the Director of OSTP on any results of the activities conducted pursuant to subsection 4.4(b)(iv)(A) of this section, including recommendations, if any, on how to strengthen nucleic acid synthesis procurement screening, including customer screening systems."
Time-boxed requirementWithin 180 days of the establishment of the framework pursuant to subsection 4.4(b)(i)October 26, 2024
29
Sec. 4.5(a)SafetySynthetic ContentSecretary of CommerceDepartment of CommerceIn consultation with: the heads of other relevant agencies deemed appropriate by Secretary of Commerce"To foster capabilities for identifying and labeling synthetic content produced by AI systems, and to establish the authenticity and provenance of digital content, both synthetic and not synthetic, produced by the Federal Government or on its behalf:"

"[S]hall submit a report to the Director of OMB and the Assistant to the President for National Security Affairs identifying the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques, for:
(i) authenticating content and tracking its provenance;
(ii) labeling synthetic content, such as using watermarking;
(iii) detecting synthetic content;
(iv) preventing generative AI from producing child sexual abuse material or producing non-consensual intimate imagery of real individuals (to include intimate digital depictions of the body or body parts of an identifiable individual);
(v) testing software used for the above purposes; and
(vi) auditing and maintaining synthetic content."
Time-boxed requirementWithin 240 daysJune 26, 2024In progress

04/29/2024: White House Fact Sheet states: NIST "Released for public comment draft documents on [...] reducing the risks posed by AI-generated content."
In progress

12/21/2023: NIST issues a request for information related to its assignments under Sections 4.1, 4.5, and 11 of EO 14110.

02/08/2024: NIST announcement regarding the U.S. AI Safety Institute signals it will support research on watermarking synthetic content.

04/29/2024: NIST issues a draft for public comment of "Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency" (NIST AI 100-4), which "informs, and is complementary to, a separate report on understanding the provenance and detection of synthetic content" this requirement tasks NIST with providing to the White House.
In progress

12/21/2023: NIST RFI signals that implementation efforts related to this requirement started well before the deadline, but there is no indication that the report has been finalized.

04/29/2024: NIST draft document indicates that this requirement is in progress, though the separate report on understanding the provenance and detection of synthetic content is yet to be completed.
30
Sec. 4.5(b)SafetySynthetic ContentSecretary of CommerceDepartment of CommerceIn coordination with: Director of OMB"[S]hall develop guidance regarding the existing tools and practices for digital content authentication and synthetic content detection measures. The guidance shall include measures for the purposes listed in subsection 4.5(a) of this section."Time-boxed / Ongoing requirementWithin 180 days of submitting the report required under subsection 4.5(a) of this section

(Updated "periodically thereafter")
Tbc

[Latest possible date: December 23, 2024]
31
Sec. 4.5(c)SafetySynthetic ContentDirector of OMBExecutive Office of the PresidentIn consultation with: Secretaries of State, Defense, the Attorney General, Secretary of Commerce, acting through the Director of NIST; Secretary of Homeland Security, Director of National Intelligence, and the heads of other agencies deemed appropriate by Director of OMB"[S]hall — for the purpose of strengthening public confidence in the integrity of official United States Government digital content — issue guidance to agencies for labeling and authenticating such content that they produce or publish."Time-boxed / Ongoing requirementWithin 180 days of the development of the guidance required under subsection 4.5(b)

(Updated "periodically thereafter")
Tbc

[Latest possible date: June 21, 2025]
32
Sec. 4.5(d)SafetySynthetic ContentFederal Acquisition Regulatory CouncilExecutive Office of the President"[S]hall, as appropriate and consistent with applicable law, consider amending the Federal Acquisition Regulation to take into account the guidance established under subsection 4.5 of this section."Federal Acquisition RegulationOpen-ended requirementUnspecifiedN/A
33
Sec. 4.6(a)SafetySafety Standards, Evaluations, & BenchmarkingSecretary of Commerce through the Assistant Secretary of Commerce for Communications and InformationDepartment of CommerceIn consultation with: Secretary of State"When the weights for a dual-use foundation model are widely available — such as when they are publicly posted on the Internet — there can be substantial benefits to innovation, but also substantial security risks, such as the removal of safeguards within the model. To address the risks and potential benefits of dual-use foundation models with widely available weights [...]"

"(a) [Shall] solicit input from the private sector, academia, civil society, and other stakeholders through a public consultation process on potential risks, benefits, other implications, and appropriate policy and regulatory approaches related to dual-use foundation models for which the model weights are widely available, including:

(i) risks associated with actors fine-tuning dual-use foundation models for which the model weights are widely available or removing those models’ safeguards;

(ii) benefits to AI innovation and research, including research into AI safety and risk management, of dual-use foundation models for which the model weights are widely available; and

(iii) potential voluntary, regulatory, and international mechanisms to manage the risks and maximize the benefits of dual-use foundation models for which the model weights are widely available"
Public consultation on potential risks and benefits of dual-use foundation modelsTime-boxed requirementWithin 270 daysJuly 26, 2024
34
Sec. 4.6(b)SafetySafety Standards, Evaluations, & BenchmarkingSecretary of Commerce through the Assistant Secretary of Commerce for Communications and InformationDepartment of CommerceIn consultation with: Secretary of State and the heads of other relevant agencies deemed appropriate by Secretary of Commerce"(b) [Shall] based on input from the process described in subsection 4.6(a) of this section, and in consultation with the heads of other relevant agencies as the Secretary of Commerce deems appropriate, submit a report to the President on the potential benefits, risks, and implications of dual-use foundation models for which the model weights are widely available, as well as policy and regulatory recommendations pertaining to those models."Time-boxed requirementWithin 270 daysJuly 26, 2024
35
Sec. 4.7(a)SafetySafeguarding Federal DataChief Data Officer CouncilExecutive Office of the PresidentIn consultation with: Secretaries of Defense, Commerce, Energy, Homeland Security, and the Director of National Intelligence"To improve public data access and manage security risks, and consistent with the objectives of the Open, Public, Electronic, and Necessary Government Data Act (title II of Public Law 115-435) to expand public access to Federal data assets in a machine-readable format while also taking into account security considerations, including the risk that information in an individual data asset in isolation does not pose a security risk but, when combined with other available information, may pose such a risk:"

"(a) [S]hall develop initial guidelines for performing security reviews, including reviews to identify and manage the potential security risks of releasing Federal data that could aid in the development of CBRN weapons as well as the development of autonomous offensive cyber capabilities, while also providing public access to Federal Government data in line with the goals stated in the Open, Public, Electronic, and Necessary Government Data Act (title II of Public Law 115-435)"
Open, Public, Electronic, and Necessary Government Data ActTime-boxed requirementWithin 270 days July 26, 2024
36
Sec. 4.7(b)SafetySafeguarding Federal DataFederal agenciesAll federal agencies"(b) [S]hall conduct a security review of all data assets in the comprehensive data inventory required under 44 U.S.C. 3511(a)(1) and (2)(B) and shall take steps, as appropriate and consistent with applicable law, to address the highest-priority potential security risks that releasing that data could raise with respect to CBRN weapons, such as the ways in which that data could be used to train AI systems."44 U.S.C. 3511 - Data inventory and Federal data catalogueTime-boxed requirementWithin 180 days of the development of the initial guidelines required by subsection 4.7(a) Tbc

[Latest possible date: January 22, 2025]
37
Sec. 4.8(a)-(b)SafetyMilitary Use of AIAssistant to the President for National Security Affairs, Assistant to the President and Deputy Chief of Staff for PolicyExecutive Office of the President"To develop a coordinated executive branch approach to managing AI’s security risks"

"[S]hall oversee an interagency process with the purpose of [...] developing and submitting a proposed National Security Memorandum on AI to the President. The memorandum shall address the governance of AI used as a component of a national security system or for military and intelligence purposes. The memorandum shall take into account current efforts to govern the development and use of AI for national security systems. The memorandum shall outline actions for the Department of Defense, the Department of State, other relevant agencies, and the Intelligence Community to address the national security risks and potential benefits posed by AI. In particular, the memorandum shall:

(a) provide guidance to the Department of Defense, other relevant agencies, and the Intelligence Community on the continued adoption of AI capabilities to advance the United States national security mission, including through directing specific AI assurance and risk-management practices for national security uses of AI that may affect the rights or safety of United States persons and, in appropriate contexts, non-United States persons; and

(b) direct continued actions, as appropriate and consistent with applicable law, to address the potential use of AI systems by adversaries and other foreign actors in ways that threaten the capabilities or objectives of the Department of Defense or the Intelligence Community, or that otherwise pose risks to the security of the United States or its allies and partners."
Time-boxed requirementWithin 270 days July 26, 2024
38
Sec. 5. Promoting Innovation and Competition
39
Sec. 5.1(a)InnovationImmigration PolicySecretary of State, Secretary of Homeland SecurityDepartment of State, Department of Homeland SecurityTo "attract and retain talent in AI and other critical and emerging technologies in the United States economy"
"[S]hall take appropriate steps to:

(i) streamline processing times of visa petitions and applications, including by ensuring timely availability of visa appointments, for noncitizens who seek to travel to the United States to work on, study, or conduct research in AI or other critical and emerging technologies;
(ii) facilitate continued availability of visa appointments in sufficient volume for applicants with expertise in AI or other critical and emerging technologies."
Time-boxed requirementWithin 90 daysJanuary 28, 2024Implemented

01/29/2024: White House
Fact Sheet states: Department of State "Streamlined visa processing, including by renewing and expanding interview-waiver authorities."
Implemented

12/21/2023: Update on Bureau of Consular Affairs website lists categories of interview waivers that have been determined to be in the national interest.

12/21/2023: State Department media note confirms renewed and expanded interview-waiver authorities in consultation with DHS.
In progress

12/21/2023: State Department
confirms interview-waiver authorities have been renewed and expanded. However, no further evidence could be found that other efforts to streamline the visa process and ensure visa appointment availability had been made as of 05/31/2024.
40
Sec. 5.1(b)(i)InnovationImmigration PolicySecretary of StateDepartment of State"(i) [Shall] consider initiating a rulemaking to establish new criteria to designate countries and skills on the Department of State’s Exchange Visitor Skills List as it relates to the 2-year foreign residence requirement for certain J-1 nonimmigrants, including those skills that are critical to the United States"Department of State’s Exchange Visitor Skills ListTime-boxed requirementWithin 120 daysFebruary 27, 2024Implemented

03/28/2024: White House
Fact Sheet states: The Department of State "evaluated steps for updating - and establishing new criteria for - the countries and skills on the Exchange Visitor Skills List, including those skills critical to the United States."
N/A

No statement on implementation as of 05/31/2024.
Not Verifiably Implemented

No verifiable evidence of implementation as of 05/31/2024.
41
Sec. 5.1(b)(ii)InnovationImmigration PolicySecretary of StateDepartment of State"(ii) [Shall] consider publishing updates to the 2009 Revised Exchange Visitor Skills List (74 FR 20108)"2009 Revised Exchange Visitor Skills List (74 FR 20108)Time-boxed requirementWithin 120 daysFebruary 27, 2024Implemented

03/28/2024: White House
Fact Sheet states: The Department of State "evaluated steps for updating - and establishing new criteria for - the countries and skills on the Exchange Visitor Skills List, including those skills critical to the United States."
N/A

No statement on implementation as of 05/31/2024.
Not Verifiably Implemented

No verifiable evidence of implementation as of 05/31/2024.
42
Sec. 5.1(b)(iii)InnovationImmigration PolicySecretary of StateDepartment of State"(iii) [Shall] consider implementing a domestic visa renewal program under 22 C.F.R. 41.111(b) to facilitate the ability of qualified applicants, including highly skilled talent in AI and critical and emerging technologies, to continue their work in the United States without unnecessary interruption."22 C.F.R. 41.111(b)Time-boxed requirementWithin 120 daysFebruary 27, 2024Implemented

03/28/2024: White House
Fact Sheet states: The Department of State "launched a pilot program for a [sic] domestic visa renewals."
Implemented

12/21/2023: State Department provides
notice of its "Pilot Program To Resume Renewal of H-1B Nonimmigrant Visas in the United States for Certain Qualified Noncitizens."
Implemented

12/21/2023: State Department
notice of its pilot program satisfies this requirement.
43
Sec. 5.1(c)(i)InnovationImmigration PolicySecretary of StateDepartment of State"(i) [Shall] consider initiating a rulemaking to expand the categories of nonimmigrants who qualify for the domestic visa renewal program covered under 22 C.F.R. 41.111(b) to include academic J-1 research scholars and F-1 students in science, technology, engineering, and mathematics (STEM)"22 C.F.R. 41.111 - Authority to issue visa Time-boxed requirementWithin 180 daysApril 27, 2024N/A

No statement on implementation as of 05/31/2024.
N/A

No statement on implementation as of 05/31/2024.
Not Verifiably Implemented

No verifiable evidence of implementation as of 05/31/2024.
44
Sec. 5.1(c)(ii)InnovationImmigration PolicySecretary of StateDepartment of State"(ii) [Shall] establish, to the extent permitted by law and available appropriations, a program to identify and attract top talent in AI and other critical and emerging technologies at universities, research institutions, and the private sector overseas, and to establish and increase connections with that talent to educate them on opportunities and resources for research and employment in the United States, including overseas educational components to inform top STEM talent of nonimmigrant and immigrant visa options and potential expedited adjudication of their visa petitions and applications."Time-boxed requirementWithin 180 daysApril 27, 2024N/A

No statement on implementation as of 05/31/2024.
N/A

No statement on implementation as of 05/31/2024.
Not Verifiably Implemented

No verifiable evidence of implementation as of 05/31/2024.
45
Sec. 5.1(d)(i)InnovationImmigration PolicySecretary of Homeland SecurityDepartment of Homeland Security"(i) [Shall] review and initiate any policy changes the Secretary determines necessary and appropriate to clarify and modernize immigration pathways for experts in AI and other critical and emerging technologies, including O-1A and EB-1 noncitizens of extraordinary ability; EB-2 advanced-degree holders and noncitizens of exceptional ability; and startup founders in AI and other critical and emerging technologies using the International Entrepreneur Rule"Time-boxed requirementWithin 180 daysApril 27, 2024N/A

No statement on implementation as of 05/31/2024.
N/A

No statement on implementation as of 05/31/2024.
Not Verifiably Implemented

No verifiable evidence of implementation as of 05/31/2024.
46
Sec. 5.1(d)(ii)InnovationImmigration PolicySecretary of Homeland SecurityDepartment of Homeland Security"(ii) [Shall] continue its rulemaking process to modernize the H-1B program and enhance its integrity and usage, including by experts in AI and other critical and emerging technologies, and consider initiating a rulemaking to enhance the process for noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident."Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

03/28/2024: White House
Fact Sheet states: The Department of Homeland Security "published a final rule to strengthen the integrity of the H-1B program and enhance its use, including by experts in AI and related fields" and "published new policy guidance for international students, clarifying and modernizing this pathway for experts in AI and other critical and emerging technologies."
Implemented

11/16/2023: USCIS
announces increased utilization of the immigration system in this context.

01/30/2024: DHS
announces a final rule on "Improving the H-1B Registration Selection Process and Program Integrity."

04/29/2024: DHS
publishes a fact sheet claiming it has focused on strengthening the integrity of the H-1B program.
Implemented

01/30/2024: DHS final rule satisfies this requirement.
47
Sec. 5.1(e)InnovationImmigration PolicySecretary of LaborDepartment of LaborFor "purposes of considering updates to the “Schedule A” list of occupations, 20 C.F.R. 656.5"

"[S]hall publish a request for information (RFI) to solicit public input, including from industry and worker-advocate communities, identifying AI and other STEM-related occupations, as well as additional occupations across the economy, for which there is an insufficient number of ready, willing, able, and qualified United States workers."
“Schedule A” list of occupations, 20 C.F.R. 656.5Public input on potential updates to the "Schedule A" list of occupationsTime-boxed requirementWithin 45 daysDecember 14, 2023Implemented

01/29/2024: White House Fact Sheet states: Department of Labor "Published a Request for Information (RFI) on whether to revise the list of Schedule A job classifications that do not require permanent labor certifications."
Implemented

12/15/2023: The Employment and Training Administration published a request for information, "Labor Certification for Permanent Employment of Foreign Workers in the United States; Modernizing Schedule A To Include Consideration of Additional Occupations in Science, Technology, Engineering, and Mathematics (STEM) and Non-STEM Occupations," on the Labor Department's website, along with a press release announcing the RFI.

12/21/2023: Department of Labor published the Request for Information in the Federal Register.
Implemented

12/21/2023: Department of Labor issuance of the RFI satisfies this requirement.
48
Sec. 5.1(f)InnovationImmigration PolicySecretary of State, Secretary of Homeland SecurityDepartment of State, Department of Homeland Security"[S]hall, consistent with applicable law and implementing regulations, use their discretionary authorities to support and attract foreign nationals with special skills in AI and other critical and emerging technologies seeking to work, study, or conduct research in the United States."Ongoing requirementUnspecifiedN/AImplemented

03/28/2024: White House Fact Sheet states: The Department of Homeland Security "published updated policy guidance regarding international student visas, applicable to students in AI-related fields."
Implemented

04/29/2024: The DHS website claims that USCIS "streamlined processing times of petitions and applications for those seeking to work, study, or conduct research in the United States" and "clarified and modernized policies for: O-1A noncitizens of extraordinary ability, EB-1 noncitizens of extraordinary ability and outstanding professors and researchers, EB-2 advanced-degree holders and noncitizens of exceptional ability, Startup founders using the International Entrepreneur Rule, and International students."
Implemented

05/09/2024: DHS website announcement satisfies this requirement.
49
Sec. 5.1(g)InnovationImmigration PolicySecretary of Homeland SecurityDepartment of Homeland SecurityIn consultation with: Secretary of State, Secretary of Commerce, Director of OSTP"[S]hall develop and publish informational resources to better attract and retain experts in AI and other critical and emerging technologies, including:
(i) a clear and comprehensive guide for experts in AI and other critical and emerging technologies to understand their options for working in the United States, to be published in multiple relevant languages on AI.gov; and
(ii) a public report with relevant data on applications, petitions, approvals, and other key indicators of how experts in AI and other critical and emerging technologies have utilized the immigration system through the end of Fiscal Year 2023."
Public report on immigration data related to experts in AI and other critical and emerging technologiesTime-boxed requirementWithin 120 daysFebruary 27, 2024Implemented

03/28/2024: White House
Fact Sheet states: Department of Homeland Security "published information accessible on AI.gov to help experts in AI understand options for working in the United States" and "published a report with data on how experts in AI and other critical and emerging technologies have utilized the immigration system."
Implemented

03/17/2024: DHS publishes its 2024 AI roadmap.

05/09/2024: DHS website
https://ai.gov/immigrate/ details how immigrants can come work in the United States.

05/09/2024: DHS published a
report on how STEM professionals have utilized the immigration system.
In progress

05/09/2024: While "clear and comprehensive," the ai.gov/immigrate guide does not have an apparent option to switch languages. The website links to various visa options on the USCIS website, which allows easy translation to Spanish, but no other relevant languages. The "Multilingual Resources" link redirects away from the original visa page to a new search page. The DHS report satisfies the second half of this requirement.
50
Sec. 5.2(a)(i) InnovationResource InvestmentDirector of NSFNational Science FoundationIn coordination with: the heads of agencies deemed appropriate by Director of NSF"To develop and strengthen public-private partnerships for advancing innovation, commercialization, and risk-mitigation methods for AI, and to help promote safe, responsible, fair, privacy-protecting, and trustworthy AI systems"

"(i) [Shall] launch a pilot program implementing the National AI Research Resource (NAIRR), consistent with past recommendations of the NAIRR Task Force. The program shall pursue the infrastructure, governance mechanisms, and user interfaces to pilot an initial integration of distributed computational, data, model, and training resources to be made available to the research community in support of AI-related research and development. The Director of NSF shall identify Federal and private sector computational, data, software, and training resources appropriate for inclusion in the NAIRR pilot program."
NAIRR Task ForceTime-boxed requirementWithin 90 daysJanuary 28, 2024Implemented

01/29/2024: White House Fact Sheet states: "Launched a pilot of the National AI Research Resource—catalyzing broad-based innovation, competition, and more equitable access to AI research. The pilot, managed by the U.S. National Science Foundation (NSF), is the first step toward a national infrastructure for delivering computing power, data, software, access to open and proprietary AI models, and other AI training resources to researchers and students. These resources come from 11 federal-agency partners and more than 25 private sector, nonprofit, and philanthropic partners."
Implemented

01/24/2024: NSF announces pilot launch: "Today, the U.S. National Science Foundation and collaborating agencies launched the National Artificial Intelligence Research Resource (NAIRR) pilot, a first step towards realizing the vision for a shared research infrastructure that will strengthen and democratize access to critical resources necessary to power responsible AI discovery and innovation." NSF website indicates this is in fulfillment of the EO. NSF officials comment on the pilot and the goals of NAIRR.

05/06/2024: NSF and the Department of Energy (DOE)
announce the first 35 projects that will be supported with computational time through the NAIRR Pilot, with 27 of the projects receiving support through resources on NSF-funded advanced computing systems, and 8 projects receiving access to DOE-supported systems.
Implemented

01/24/2024: NSF
announcement is coupled with launch of NAIRR pilot website.

01/26/2024: Reporting by various outlets like
Time and FedScoop note the launch and support from partners, including from industry.
51
Sec. 5.2(a)(i) InnovationResource InvestmentThe heads of appropriate agenciesOther federal entities(i) (cont'd)

"[S]hall each submit to the Director of NSF a report identifying the agency resources that could be developed and integrated into such a pilot program. These reports shall include a description of such resources, including their current status and availability; their format, structure, or technical specifications; associated agency expertise that will be provided; and the benefits and risks associated with their inclusion in the NAIRR pilot program. The heads of independent regulatory agencies are encouraged to take similar steps, as they deem appropriate."
Time-boxed requirementWithin 45 daysDecember 14, 2023Implemented

01/29/2024: White House Fact Sheet claims all NSF and NAIRR-related deadlines were met, though it does not specifically reference this requirement.
Implemented

01/08/2024: Per FedScoop reporting, an NSF spokesperson said that they "are working closely with a wide range of federal partners who submitted proposals for how their agencies can contribute to the pilot per the direction in the executive order" and that they "expect to make the full breadth of those contributions public upon the launch of the pilot in January." NSF website further lists the contributions of each NAIRR pilot partner.
Implemented

01/08/2024: NSF website details on NAIRR pilot partners and contributors, alongside announcements from individual agency partners (e.g., the Department of Energy) satisfy this requirement.
52
Sec. 5.2(a)(ii)InnovationResource InvestmentDirector of NSFNational Science Foundation"(ii) [Shall] fund and launch at least one NSF Regional Innovation Engine that prioritizes AI-related work, such as AI-related research, societal, or workforce needs."Time-boxed requirementWithin 150 daysMarch 28, 2024Implemented

01/29/2024: White House Fact Sheet states: National Science Foundation "Announced the funding of new Regional Innovation Engines (NSF Engines), including with a focus on advancing AI. For example, with an initial investment of $15 million over two years and up to $160 million over the next decade, the Piedmont Triad Regenerative Medicine Engine will tap the world’s largest regenerative medicine cluster to create and scale breakthrough clinical therapies, including by leveraging AI."
Implemented

01/29/2024: NSF announces the establishment of the first-ever NSF Regional Innovation Engines (NSF Engines), awarding 10 teams spanning 18 states.
Implemented

01/29/2024: NSF announcement on the establishment of the first-ever NSF Regional Innovation Engines (NSF Engines) satisfies this requirement, around two months ahead of the deadline. At least one of the NSF engines can be considered related to AI: Central Florida Semiconductor Innovation Engine.
53
Sec. 5.2(a)(iii)InnovationResource InvestmentDirector of NSFNational Science Foundation"(iii) [Shall] establish at least four new National AI Research Institutes, in addition to the 25 currently funded as of the date of this order."National AI Research InstitutesTime-boxed requirementWithin 540 daysApril 22, 2025
54
Sec. 5.2(b)InnovationResource InvestmentSecretary of EnergyDepartment of EnergyIn coordination with: Director of NSFTo "support activities involving high-performance and data-intensive computing"

"[S]hall, in a manner consistent with applicable law and available appropriations, establish a pilot program to enhance existing successful training programs for scientists, with the goal of training 500 new researchers by 2025 capable of meeting the rising demand for AI talent."
Time-boxed requirementWithin 150 daysMarch 28, 2024Implemented

03/28/2024: White House Fact Sheet states: The National Science Foundation and Department of Energy "established a pilot program to enhance existing successful training initiatives for training additional scientists in AI."
Implemented

02/27/2024: DOE launches new web portal that highlights a non-exhaustive list of NSF and DOE's AI education, training, and workforce opportunities.
Implemented

02/27/2024: NSF and DOE's education and training initiatives highlighted in the new web portal satisfy this requirement.
55
Sec. 5.2(c)(i)InnovationIntellectual PropertyUnder Secretary of Commerce for Intellectual Property, Director of the United States Patent and Trademark Office (USPTO Director)Department of Commerce"To promote innovation and clarify issues related to AI and inventorship of patentable subject matter"

"(i) [Shall] publish guidance to USPTO patent examiners and applicants addressing inventorship and the use of AI, including generative AI, in the inventive process, including illustrative examples in which AI systems play different roles in inventive processes and how, in each example, inventorship issues ought to be analyzed"
Time-boxed requirementWithin 120 daysFebruary 27, 2024Implemented

03/28/2024: White House Fact Sheet states: The U.S. Patent and Trademark Office "published guidance on patentability of AI-assisted inventions."
Implemented

02/12/2024: USPTO announces release of new Inventorship Guidance for AI-Assisted Inventions, published in the Federal Register and effective on 02/13/2024, that provides instructions to examiners and stakeholders on how to determine whether the human contribution to an innovation is significant enough to qualify for a patent when AI also contributed. In conjunction with the guidance, USPTO also published examples to assist with the application of the guidance in specific situations.
Implemented

02/12/2024: USPTO's new inventorship guidance satisfies this requirement two weeks ahead of the deadline.
56
Sec. 5.2(c)(ii)InnovationIntellectual PropertyUnder Secretary of Commerce for Intellectual Property, USPTO DirectorDepartment of Commerce"(ii) [Shall] subsequently [...] issue additional guidance to USPTO patent examiners and applicants to address other considerations at the intersection of AI and IP, which could include, as the USPTO Director deems necessary, updated guidance on patent eligibility to address innovation in AI and critical and emerging technologies"Time-boxed requirementWithin 270 daysJuly 26, 2024N/A

No statement on implementation as of 05/31/2024.
In progress

04/30/2024: USPTO issues a Request for Comments "Regarding the Impact of the Proliferation of Artificial Intelligence on Prior Art, the Knowledge of a Person Having Ordinary Skill in the Art, and Determinations of Patentability Made in View of the Foregoing." The RFC builds on the recent "Inventorship Guidance for AI-Assisted Inventions."
In progress

04/30/2024: USPTO request for comments indicates that the implementation of this requirement is in progress.
57
Sec. 5.2(c)(iii)InnovationIntellectual PropertyUnder Secretary of Commerce for Intellectual Property, USPTO DirectorDepartment of CommerceIn consultation with: Director of United States Copyright Office"(iii) [Shall] consult with the Director of the United States Copyright Office and issue recommendations to the President on potential executive actions relating to copyright and AI. The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training."Time-boxed requirementWithin 270 days or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study that will address copyright issues raised by AI, whichever comes laterTbc

[July 26, 2024 or later date]
58
Sec. 5.2(d)InnovationIntellectual PropertySecretary of Homeland Security through the Director of the National Intellectual Property Rights Coordination CenterDepartment of Homeland SecurityIn consultation with: the Attorney GeneralTo "assist developers of AI in combatting AI-related IP risks"

"[S]hall develop a training, analysis, and evaluation program to mitigate AI-related IP risks. Such a program shall:

(i) include appropriate personnel dedicated to collecting and analyzing reports of AI-related IP theft, investigating such incidents with implications for national security, and, where appropriate and consistent with applicable law, pursuing related enforcement actions;
(ii) implement a policy of sharing information and coordinating on such work, as appropriate and consistent with applicable law, with the Federal Bureau of Investigation; United States Customs and Border Protection; other agencies; State and local agencies; and appropriate international organizations, including through work-sharing agreements;
(iii) develop guidance and other appropriate resources to assist private sector actors with mitigating the risks of AI-related IP theft;
(iv) share information and best practices with AI developers and law enforcement personnel to identify incidents, inform stakeholders of current legal requirements, and evaluate AI systems for IP law violations, as well as develop mitigation strategies and resources; and
(v) assist the Intellectual Property Enforcement Coordinator in updating the Intellectual Property Enforcement Coordinator Joint Strategic Plan on Intellectual Property Enforcement to address AI-related issues."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House
Fact Sheet divides this requirement into three separate tasks, stating: The Department of Homeland Security "Established a training program to help industry and domestic law enforcement better understand and respond to AI-related IP theft," "Increased information sharing related to AI technology theft, AI-enabled IP theft, and AI-enabled digital piracy with state, local, and international law enforcement," and "Dedicated personnel to collect, analyze, and investigate AI-enabled digital piracy and continued to investigate theft of AI IP and trade secrets, as well as insider threats."
Implemented

04/29/2024: DHS launches a training program to combat intellectual property theft through AI-generated material. According to the DHS website, Homeland Security Investigations "has partnered with Michigan State University’s Center for Anti-Counterfeiting and Product Protection (A-CAPP) to create a training program to help industry and domestic law enforcement better understand and respond to AI-related IP theft." DHS also has an Intellectual Property Rights Center (IPR Center) that "encourages members of the public, industry, trade associations, law enforcement, and government agencies to report potential violations of intellectual property rights involving AI through their website. The IPR center serves as whole of government center for the criminal enforcement of IP theft, to include AI-enabled digital piracy, product counterfeiting, trade fraud, and the theft of trade secrets."
Implemented

04/29/2024: DHS website details training program and IPR Center's work, which satisfies this requirement.
59
Sec. 5.2(e)InnovationResearch & DevelopmentSecretary of HHSDepartment of Health and Human ServicesTo "advance responsible AI innovation by a wide range of healthcare technology developers that promotes the welfare of patients and workers in the healthcare sector"

"[S]hall identify and, as appropriate and consistent with applicable law and the activities directed in section 8 of this order, prioritize grantmaking and other awards, as well as undertake related efforts, to support responsible AI development and use, including:
(i) collaborating with appropriate private sector actors through HHS programs that may support the advancement of AI-enabled tools that develop personalized immune-response profiles for patients, consistent with section 4 of this order;
(ii) prioritizing the allocation of 2024 Leading Edge Acceleration Project cooperative agreement awards to initiatives that explore ways to improve healthcare-data quality to support the responsible development of AI tools for clinical care, real-world-evidence programs, population health, public health, and related research; and
(iii) accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program and showcasing current AIM-AHEAD activities in underserved communities."
2024 Leading Edge Acceleration Project cooperative agreement, Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD)Ongoing requirementUnspecifiedN/A
60
Sec. 5.2(f)InnovationResearch & DevelopmentSecretary of Veterans AffairsDepartment of Veterans AffairsTo "advance the development of AI systems that improve the quality of veterans’ healthcare, and in order to support small businesses’ innovative capacity"

"(i) host two 3-month nationwide AI Tech Sprint competitions; and
(ii) as part of the AI Tech Sprint competitions and in collaboration with appropriate partners, provide participants access to technical assistance, mentorship opportunities, individualized expert feedback on products under development, potential contract opportunities, and other programming and resources."
Time-boxed requirementWithin 365 daysOctober 29, 2024
61
Sec. 5.2(g)(i)InnovationResearch & DevelopmentSecretary of EnergyDepartment of EnergyIn consultation with: Chair of the Federal Energy Regulatory Commission, Director of OSTP, Chair of the Council on Environmental Quality, Assistant to the President and National Climate Advisor, and the heads of other relevant agencies deemed appropriate by Secretary of EnergyTo "support the goal of strengthening our Nation’s resilience against climate change impacts and building an equitable clean energy economy for the future"

"(i) [Shall] issue a public report describing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans"
Public report on the potential for AI applications in electric grid infrastructureTime-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Energy "Published a report on AI's potential to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable clean power provision." DOE "Prepared convenings for the next several months with utilities, clean energy developers, data center owners and operators, and regulators in localities experiencing large load growth" and "announced new actions to assess the potential energy opportunities and challenges of AI, accelerate deployment of clean energy, and advance AI innovation to manage the growing energy demand of AI."
Implemented

03/01/2024: The Department of Energy issues a request for information related to its responsibilities outlined in the EO.

04/29/2024: DOE announces the public release of a report on "AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy" that summarizes the potential of AI to assist in providing clean, affordable, resilient, and secure electric power to all Americans and the role AI can play in building an innovative clean energy economy. DOE also published a report on "Advanced Research Directions in AI For Energy" that identifies key challenges for harnessing the potential transformative power of AI for energy over the next decade.
Implemented

03/01/2024: DOE issuance of
RFI indicates that implementation of this requirement is in progress.

04/29/2024: DOE public release of its
report satisfies this requirement.
62
Sec. 5.2(g)(ii)InnovationResearch & DevelopmentSecretary of EnergyDepartment of EnergyIn consultation with: Chair of the Federal Energy Regulatory Commission, Director of OSTP, Chair of the Council on Environmental Quality, Assistant to the President and National Climate Advisor, and the heads of other relevant agencies deemed appropriate by Secretary of Energy"(ii) [Shall] develop tools that facilitate building foundation models useful for basic and applied science, including models that streamline permitting and environmental reviews while improving environmental and social outcomes"Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Energy "Launched pilots, partnerships, and new AI tools to address energy challenges and advance clean energy. For example, DOE is piloting AI tools to streamline permitting processes and improving siting for clean energy infrastructure, and it has developed other powerful AI tools with applications at the intersection of energy, science, and security."
Implemented

04/29/2024: DOE announces a new $13 million VoltAIc Initiative to "use AI to help streamline siting and permitting at the Federal, state, and local level." The initiative will "build AI-powered tools to improve siting and permitting of clean energy infrastructure and has partnered with Pacific Northwest National Laboratory (PNNL) to develop PolicyAI, a policy-specific Large Language Model test bed that will be used to develop software to augment National Environmental Policy Act and related reviews." A DOE web portal highlights some of the DOE's AI tools, foundation models, and partnerships for applications in science, energy, climate, and security.
Implemented

04/29/2024: DOE's VoltAIc Initiative and development of other AI tools satisfy this requirement.
63
Sec. 5.2(g)(iii)InnovationResearch & DevelopmentSecretary of EnergyDepartment of EnergyIn consultation with: Chair of the Federal Energy Regulatory Commission, Director of OSTP, Chair of the Council on Environmental Quality, Assistant to the President and National Climate Advisor, and the heads of other relevant agencies deemed appropriate by Secretary of Energy"(iii) [Shall] collaborate, as appropriate, with private sector organizations and members of academia to support development of AI tools to mitigate climate change risks"Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Energy "Collaborated with private-sector actors to develop AI tools that can address climate change and other challenges."
In progress

04/29/2024: DOE announces plans to "expand its engagement with energy sector partners on AI, from a security and resilience perspective, over the course of 2024" and to host listening sessions on AI with energy sector partners and technical experts this summer.
In progress

04/29/2024: DOE announcement indicates this requirement is in progress.
64
Sec. 5.2(g)(iv)InnovationResearch & DevelopmentSecretary of EnergyDepartment of EnergyIn consultation with: Chair of the Federal Energy Regulatory Commission, Director of OSTP, Chair of the Council on Environmental Quality, Assistant to the President and National Climate Advisor, and the heads of other relevant agencies deemed appropriate by Secretary of Energy"(iv) [Shall] take steps to expand partnerships with industry, academia, other agencies, and international allies and partners to utilize the Department of Energy’s computing capabilities and AI testbeds to build foundation models that support new applications in science and energy, and for national security, including partnerships that increase community preparedness for climate-related risks, enable clean-energy deployment (including addressing delays in permitting reviews), and enhance grid reliability and resilience"Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Energy "Launched partnerships to expand the use of Department of Energy testbeds and computing capabilities for new applications in science, energy, and national security." It also states that the DOE "Initiated a sustained effort to analyze the potential risks that deployment of AI may pose to the grid. DOE has started the process of convening energy stakeholders and technical experts over the coming months to collaboratively assess potential risks to the grid, as well as ways in which AI could potentially strengthen grid resilience and our ability to respond to threats—building off a new public assessment."
Implemented

01/24/2024: DOE announces that it will "offer access to the Argonne Leadership Computing Facility’s AI Testbed, a growing collection of some of the world’s most advanced AI accelerators for open scientific research" housed at DOE’s Argonne National Laboratory. The testbed will "enable researchers to explore and accelerate next-generation applications to advance the use of AI for science and discovery."

05/09/2024: DOE's report titled "AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy" states, "DOE’s Argonne National Lab already supports AI-specific hardware through its AI testbed, which gives U.S. researchers early access to next-generation hardware while also supporting early-stage AI hardware companies which also offer energy efficiency improvements." More information about the AI testbed can be found on the Argonne National Lab website.
Implemented

05/09/2024: The DOE report and information found on the Argonne National Lab website satisfy this requirement. The website states "The AI Testbed aims to help evaluate the usability and performance of machine learning-based high-performance computing applications running on these accelerators. The goal is to better understand how to integrate with existing and upcoming supercomputers at the facility to accelerate science insights. We are currently offering allocations on our Groq, Graphcore Bow IPUs, Cerebras CS-2, and SambaNova DataScale systems."
65
Sec. 5.2(g)(v)InnovationResearch & DevelopmentSecretary of EnergyDepartment of EnergyIn consultation with: Chair of the Federal Energy Regulatory Commission, Director of OSTP, Chair of the Council on Environmental Quality, Assistant to the President and National Climate Advisor, and the heads of other relevant agencies deemed appropriate by Secretary of Energy"(v) [Shall] establish an office to coordinate development of AI and other critical and emerging technologies across Department of Energy programs and the 17 National Laboratories."Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

01/29/2024: White House Fact Sheet states: Department of Energy "Established an office to coordinate development of AI and other critical and emerging technologies across the agency."
Implemented

12/12/2023: Department of Energy issues press release announcing the launch of the Office of Critical and Emerging Technology, which is to ensure investment in areas including AI.

01/17/2024: Director of new office discusses its responsibilities.
Implemented

12/12/2023: DOE announcement on the creation of the office indicates that this requirement has already been satisfied, though additional details on its next steps and activities could not be found.
66
Sec. 5.2(h)InnovationResearch & DevelopmentPresident’s Council of Advisors on Science and TechnologyExecutive Office of the PresidentTo "understand AI’s implications for scientific research"

"[S]hall submit to the President and make publicly available a report on the potential role of AI, especially given recent developments in AI, in research aimed at tackling major societal and global challenges. The report shall include a discussion of issues that may hinder the effective use of AI in research and practices needed to ensure that AI is used responsibly for research."
Public report on the potential role of AI in research that tackles major societal and global challengesTime-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The President’s Council of Advisors on Science and Technology "Authored a report on AI’s role in advancing scientific research to help tackle major societal challenges."
Implemented

04/29/2024: PCAST releases a report on Supercharging Research: Harnessing AI to Meet Global Challenges that recommends new actions to help the United States responsibly harness the power of AI to accelerate scientific discovery. The report provides examples of specific research areas in which AI is already having important impacts and also discusses practices needed to ensure effective and responsible use of AI technologies.
Implemented

04/29/2024: PCAST report on Supercharging Research: Harnessing AI to Meet Global Challenges satisfies this requirement.
67
Sec. 5.3(a)InnovationCompetitionThe head of each agency developing policies and regulations related to AIOther federal entities"[S]hall use their authorities, as appropriate and consistent with applicable law, to promote competition in AI and related technologies, as well as in other markets. Such actions include addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs. In particular, the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI."Federal Trade Commission Act, 15 U.S.C. 41 et seq.Ongoing requirementUnspecifiedN/AN/A

No statement on implementation as of 05/31/2024.
In progress

01/25/2024: FTC launches inquiry into generative AI investments and partnerships, requiring five companies to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers.
In progress

01/25/2024: FTC inquiry into generative AI investments and partnerships does not explicitly reference EO 14110 but signals that implementation of this ongoing requirement has started. FTC also holds a virtual Tech Summit focused on AI that features discussions on AI competition and innovation. Note, however, that the FTC is not the only agency required to implement this ongoing requirement.
68
Sec. 5.3(b)(i)InnovationCompetitionSecretary of CommerceDepartment of CommerceTo "promote competition and innovation in the semiconductor industry, recognizing that semiconductors power AI technologies and that their availability is critical to AI competition"

"[S]hall, in implementing division A of Public Law 117-167, known as the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022, promote competition by:
(i) implementing a flexible membership structure for the National Semiconductor Technology Center that attracts all parts of the semiconductor and microelectronics ecosystem, including startups and small firms;
Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022Ongoing requirementUnspecifiedN/A
69
Sec. 5.3(b)(ii)InnovationCompetitionSecretary of CommerceDepartment of CommerceShall [...] promote competition by:

"(ii) implementing mentorship programs to increase interest and participation in the semiconductor industry, including from workers in underserved communities"
Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022Ongoing requirementUnspecifiedN/A
70
Sec. 5.3(b)(iii)-(iv)InnovationCompetitionSecretary of CommerceDepartment of CommerceShall [...] promote competition by:

"(iii) increasing, where appropriate and to the extent permitted by law, the availability of resources to startups and small businesses, including:
(A) funding for physical assets, such as specialty equipment or facilities, to which startups and small businesses may not otherwise have access;
(B) datasets — potentially including test and performance data — collected, aggregated, or shared by CHIPS research and development programs;
(C) workforce development programs;
(D) design and process technology, as well as IP, as appropriate; and
(E) other resources, including technical and intellectual property assistance, that could accelerate commercialization of new technologies by startups and small businesses, as appropriate; and

(iv) considering the inclusion, to the maximum extent possible, and as consistent with applicable law, of competition-increasing measures in notices of funding availability for commercial research-and-development facilities focused on semiconductors, including measures that increase access to facility capacity for startups or small firms developing semiconductors used to power AI technologies."
Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022Ongoing requirementUnspecifiedN/A
71
Sec. 5.3(c)(i)InnovationCompetitionAdministrator of the Small Business AdministrationSmall Business AdministrationTo "support small businesses innovating and commercializing AI, as well as in responsibly adopting and deploying AI"

"(i) [Shall] prioritize the allocation of Regional Innovation Cluster program funding for clusters that support planning activities related to the establishment of one or more Small Business AI Innovation and Commercialization Institutes that provide support, technical assistance, and other resources to small businesses seeking to innovate, commercialize, scale, or otherwise advance the development of AI"
Ongoing requirementUnspecifiedN/A
72
Sec. 5.3(c)(ii)InnovationCompetitionAdministrator of the Small Business AdministrationSmall Business Administration"(ii) [Shall] prioritize the allocation of up to $2 million in Growth Accelerator Fund Competition bonus prize funds for accelerators that support the incorporation or expansion of AI-related curricula, training, and technical assistance, or other AI-related resources within their programming"Growth Accelerator Fund Competition bonus prize fundsOngoing requirementUnspecifiedN/AImplemented

01/29/2024: White House Fact Sheet states: Small Business Administration "Defined AI as a focus area for prize funds through the 2024 Growth Accelerator Fund Competition."
Implemented

01/08/2024: Small Business Administration defines AI as an area of focus in a press release announcing the opening of the Growth Accelerator Fund Competition.
Implemented

01/08/2024: SBA's launch of its Growth Accelerator Fund Competition lists AI as a focus area. While it does not specify the amount of funding allocated specifically to AI-related organizations, the EO's language is broad enough for this requirement to be considered satisfied.
73
Sec. 5.3(c)(iii)InnovationCompetitionAdministrator of the Small Business AdministrationSmall Business Administration"(iii) [Shall] assess the extent to which the eligibility criteria of existing programs, including the State Trade Expansion Program, Technical and Business Assistance funding, and capital-access programs — such as the 7(a) loan program, 504 loan program, and Small Business Investment Company (SBIC) program — support appropriate expenses by small businesses related to the adoption of AI and, if feasible and appropriate, revise eligibility criteria to improve support for these expenses."State Trade Expansion Program, 7(a) loan program, 504 loan program, Small Business Investment Company (SBIC) programOngoing requirementUnspecifiedN/AImplemented

01/29/2024: White House Fact Sheet states: Small Business Administration "Confirmed the eligibility of AI-related expenditures for support via key programs that benefit small businesses."
N/A

No statement on implementation as of 05/31/2024.
Not verifiably implemented

No evidence independent from the White House statement found as of 05/31/2024.
74
Sec. 5.3(d)InnovationCompetitionAdministrator of the Small Business Administration, in coordination with resource partnersSmall Business Administration"[S]hall conduct outreach regarding, and raise awareness of, opportunities for small businesses to use capital-access programs described in subsection 5.3(c) of this section for eligible AI-related purposes, and for eligible investment funds with AI-related expertise — particularly those seeking to serve or with experience serving underserved communities — to apply for an SBIC license."SBIC programOngoing requirementUnspecifiedN/A
75
Sec. 6. Supporting Workers
76
Sec. 6(a)(i)WorkersWorkforce DisruptionChairman of the Council of Economic AdvisersExecutive Office of the President"To advance the Government’s understanding of AI’s implications for workers, the following actions shall be taken"

"(i) [Shall] prepare and submit a report to the President on the labor-market effects of AI."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Council of Economic Advisers "Submitted a report to the President on AI's labor-market effects."
Implemented

03/21/2024: The Council of Economic Advisers published an Economic Report of the President that includes a chapter on the effects of AI development on labor markets.
Implemented

03/21/2024: The Council of Economic Advisers' publication of its report satisfies this requirement.
77
Sec. 6(a)(ii)WorkersWorkforce DisruptionSecretary of Labor Department of LaborIn consultation with: Secretary of Commerce, Secretary of Education"(ii) To evaluate necessary steps for the Federal Government to address AI-related workforce disruptions, [...] shall submit to the President a report analyzing the abilities of agencies to support workers displaced by the adoption of AI and other technological advancements. The report shall, at a minimum:
(A) assess how current or formerly operational Federal programs designed to assist workers facing job disruptions — including unemployment insurance and programs authorized by the Workforce Innovation and Opportunity Act (Public Law 113-128) — could be used to respond to possible future AI-related disruptions; and
(B) identify options, including potential legislative measures, to strengthen or develop additional Federal support for workers displaced by AI and, in consultation with the Secretary of Commerce and the Secretary of Education, strengthen and expand education and training opportunities that provide individuals pathways to occupations related to AI."
Workforce Innovation and Opportunity ActTime-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Labor "Submitted a report to the President evaluating policy options for supporting workers displaced by the adoption of AI and other technologies -- including assessments of current and former federal programs."
N/A

No statement on implementation as of 05/31/2024.
Not verifiably implemented

No evidence independent from the White House statement found as of 05/31/2024.
78
Sec. 6(b)(i)WorkersWorkforce DisruptionSecretary of LaborDepartment of LaborIn consultation with: other agencies and with outside entities, including labor unions and workers, deemed appropriate by Secretary of Labor"To help ensure that AI deployed in the workplace advances employees’ well-being:"

"(i) [Shall] develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits. The principles and best practices shall include specific steps for employers to take with regard to AI, and shall cover, at a minimum:
(A) job-displacement risks and career opportunities related to AI, including effects on job skills and evaluation of applicants and workers;
(B) labor standards and job quality, including issues related to the equity, protected-activity, compensation, health, and safety implications of AI in the workplace; and
(C) implications for workers of employers’ AI-related collection and use of data about them, including transparency, engagement, management, and activity protected under worker-protection laws."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: "Developed bedrock principles and practices for employers and developers to build and deploy AI safely and in ways that empower workers. Agencies all across government are now starting work to establish these practices as requirements, where appropriate and authorized by law, for employers that receive federal funding."
Implemented

05/09/2024: DOL's OFCCP published an FAQ on its website that shares promising practices to clarify federal contractors’ legal obligations, promote EEO, and mitigate the potentially harmful impacts of AI in employment decisions.
Implemented

05/09/2024: DOL's publication of the FAQ satisfies this requirement, though there is no mention of job-displacement risks.
79
Sec. 6(b)(ii)WorkersWorkforce DisruptionThe heads of agenciesOther federal entitiesIn consultation with the Secretary of Labor "(ii) [S]hall consider [...] encouraging the adoption of these guidelines in their programs to the extent appropriate for each program and consistent with applicable law."Open-ended requirementUnspecified

[After principles and best practices are developed pursuant to subsection (b)(i)]
N/A
80
Sec. 6(b)(iii)WorkersWorkforce DisruptionSecretary of Labor Department of Labor"(iii) To support employees whose work is monitored or augmented by AI in being compensated appropriately for all of their work time, [...] shall issue guidance to make clear that employers that deploy AI to monitor or augment employees’ work must continue to comply with protections that ensure that workers are compensated for their hours worked, as defined under the Fair Labor Standards Act of 1938, 29 U.S.C. 201 et seq., and other legal requirements."Fair Labor Standards Act of 1938, 29 U.S.C. 201 et seq.Ongoing requirementUnspecifiedN/AImplemented

04/29/2024: White House Fact Sheet states: "Released guidance to assist federal contractors and employers comply with worker protection laws as they deploy AI in the workplace." The Department of Labor also "provided guidance regarding the application of the Fair Labor Standards Act and other federal labor standards as employers increasingly use AI and other automated technologies in the workplace."
Implemented

04/29/2024: The Department of Labor issues a memorandum providing guidance regarding the application of the Fair Labor Standards Act and other federal labor standards as employers increasingly use AI and other automated systems in the workplace. It highlights that the federal laws administered and enforced by the Wage and Hour Division continue to apply, and employees are entitled to the protections these laws provide, regardless of the tools and systems used in their workplaces.
Implemented

04/29/2024: The Department of Labor's memorandum satisfies this requirement.
81
Sec. 6(c)WorkersWorkforce DisruptionDirector of NSFNational Science FoundationIn consultation with: agencies"To foster a diverse AI-ready workforce, [...] shall prioritize available resources to support AI-related education and AI-related workforce development through existing programs. The Director shall additionally consult with agencies, as appropriate, to identify further opportunities for agencies to allocate resources for those purposes. The actions by the Director shall use appropriate fellowship programs and awards for these purposes."Ongoing requirementUnspecifiedN/AIn progress

01/29/2024: White House Fact Sheet states: National Science Foundation "Began the EducateAI initiative to help fund educators creating high-quality, inclusive AI educational opportunities at the K-12 through undergraduate levels. The initiative’s launch helps fulfill the Executive Order’s charge for NSF to prioritize AI-related workforce development—essential for advancing future AI innovation and ensuring that all Americans can benefit from the opportunities that AI creates."
In progress

12/01/2023: NSF issues Dear Colleague Letter announcing the launch of the EducateAI initiative "to support educators to make state-of-the-art, inclusive AI educational experiences available nationwide." The letter encourages the Directorates for Computer and Information Science and Engineering and STEM Education to submit proposals that "advance inclusive computing education that prepares preK-12 and undergraduate students for the AI workforce."
In progress

12/01/2023: NSF announced the launch of its EducateAI initiative, but full implementation requires ongoing action (e.g., with respect to "prioritiz[ing] available resources to support AI-related education and AI-related workforce development through existing programs").
82
Sec. 7. Advancing Equity and Civil Rights
83
Sec. 7.1(a)(i)Civil RightsEquity & Civil RightsThe Attorney GeneralDepartment of Justice"To address unlawful discrimination and other harms that may be exacerbated by AI"

"(i) [Shall] consistent with Executive Order 12250 of November 2, 1980 (Leadership and Coordination of Nondiscrimination Laws), Executive Order 14091, and 28 C.F.R. 0.50-51, coordinate with and support agencies in their implementation and enforcement of existing Federal laws to address civil rights and civil liberties violations and discrimination related to AI;"
Executive Order 12250, Executive Order 14091Ongoing requirementUnspecifiedN/A
84
Sec. 7.1(a)(ii)Civil RightsEquity & Civil RightsThe Attorney GeneralDepartment of Justice"(ii) [Shall] direct the Assistant Attorney General in charge of the Civil Rights Division to convene a meeting of the heads of Federal civil rights offices — for which meeting the heads of civil rights offices within independent regulatory agencies will be encouraged to join — to discuss comprehensive use of their respective authorities and offices to: prevent and address discrimination in the use of automated systems, including algorithmic discrimination; increase coordination between the Department of Justice’s Civil Rights Division and Federal civil rights offices concerning issues related to AI and algorithmic discrimination; improve external stakeholder engagement to promote public awareness of potential discriminatory uses and effects of AI; and develop, as appropriate, additional training, technical assistance, guidance, or other resources"Time-boxed requirementWithin 90 daysJanuary 28, 2024Implemented

01/29/2024: White House Fact Sheet states: White House Chief of Staff's Office "Convened federal agencies' civil rights offices to discuss the intersection of AI and civil rights."
Implemented

01/11/2024: DOJ website provides a readout from the meeting: "The Justice Department’s Civil Rights Division convened a meeting yesterday with the heads of civil rights offices and senior officials from multiple federal agencies to discuss the critical intersection of [AI] and civil rights as directed by President Biden’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence."
Implemented

01/11/2024: DOJ readout provides verification that the meeting took place and addressed the topics the EO mandated.
85
Sec. 7.1(a)(iii)Civil RightsEquity & Civil RightsThe Attorney GeneralDepartment of Justice" (iii) [Shall] consider providing, as appropriate and consistent with applicable law, guidance, technical assistance, and training to State, local, Tribal, and territorial investigators and prosecutors on best practices for investigating and prosecuting civil rights violations and discrimination related to automated systems, including AI."Ongoing requirementUnspecifiedN/A
86
Sec. 7.1(b)Civil RightsEquity & Civil RightsThe Attorney GeneralDepartment of JusticeIn consultation with: Secretary of Homeland Security, Director of OSTP"To promote the equitable treatment of individuals and adhere to the Federal Government’s fundamental obligation to ensure fair and impartial justice for all, with respect to the use of AI in the criminal justice system"

"(i) [Shall] submit to the President a report that addresses the use of AI in the criminal justice system, including any use in:
(A) sentencing;
(B) parole, supervised release, and probation;
(C) bail, pretrial release, and pretrial detention;
(D) risk assessments, including pretrial, earned time, and early release or transfer to home-confinement determinations;
(E) police surveillance;
(F) crime forecasting and predictive policing, including the ingestion of historical crime data into AI systems to predict high-density “hot spots”;
(G) prison-management tools; and
(H) forensic analysis;

(ii) [Shall] within the report set forth in subsection 7.1(b)(i) of this section:
(A) identify areas where AI can enhance law enforcement efficiency and accuracy, consistent with protections for privacy, civil rights, and civil liberties; and
(B) recommend best practices for law enforcement agencies, including safeguards and appropriate use limits for AI, to address the concerns set forth in section 13(e)(i) of Executive Order 14074 as well as the best practices and the guidelines set forth in section 13(e)(iii) of Executive Order 14074; and

(iii) [Shall] supplement the report set forth in subsection 7.1(b)(i) of this section as appropriate with recommendations to the President, including with respect to requests for necessary legislation."
Executive Order 14074Time-boxed requirementWithin 365 daysOctober 29, 2024
87
Sec. 7.1(c)(i)Civil RightsEquity & Civil RightsThe interagency working group created pursuant to section 3 of Executive Order 14074Other federal entities"To advance the presence of relevant technical experts and expertise (such as machine-learning engineers, software and infrastructure engineering, data privacy experts, data scientists, and user experience researchers) among law enforcement professionals"

"(i) [Shall] identify and share best practices for recruiting and hiring law enforcement professionals who have the technical skills mentioned in subsection 7.1(c) of this section, and for training law enforcement professionals about responsible application of AI."
Executive Order 14074Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Office of Personnel Management "Identified -- and shared with appropriate agencies -- best practices for federal law enforcement agencies to hire professionals with technical skills and train professionals in the responsible use of AI."
Not verifiably implemented

No evidence independent from the White House statement found as of 05/31/2024.
Not verifiably implemented

No evidence independent from the White House statement found as of 05/31/2024.
88
Sec. 7.1(c)(ii)Civil RightsEquity & Civil RightsThe Attorney GeneralDepartment of JusticeIn consultation with: Secretary of Homeland Security"(ii) [Shall] consider those best practices and the guidance developed under section 3(d) of Executive Order 14074 and, if necessary, develop additional general recommendations for State, local, Tribal, and territorial law enforcement agencies and criminal justice agencies seeking to recruit, hire, train, promote, and retain highly qualified and service-oriented officers and staff with relevant technical knowledge. In considering this guidance, the Attorney General shall consult with State, local, Tribal, and territorial law enforcement agencies, as appropriate."Executive Order 14074Time-boxed requirementWithin 270 daysJuly 26, 2024
89
Sec. 7.1(c)(iii)Civil RightsEquity & Civil RightsThe Attorney GeneralDepartment of Justice"(iii) [Shall] review the work conducted pursuant to section 2(b) of Executive Order 14074 and, if appropriate, reassess the existing capacity to investigate law enforcement deprivation of rights under color of law resulting from the use of AI, including through improving and increasing training of Federal law enforcement officers, their supervisors, and Federal prosecutors on how to investigate and prosecute cases related to AI involving the deprivation of rights under color of law pursuant to 18 U.S.C. 242."Executive Order 14074

18 U.S.C. 242 - Deprivation of rights under color of law
Time-boxed requirementWithin 365 daysOctober 29, 2024
90
Sec. 7.2(a)Civil RightsEquity & Civil RightsFederal agenciesAll federal agencies"To advance equity and civil rights, consistent with the directives of Executive Order 14091, and in addition to complying with the guidance on Federal Government use of AI issued pursuant to section 10.1(b) of this order"

"[S]hall use their respective civil rights and civil liberties offices and authorities — as appropriate and consistent with applicable law — to prevent and address unlawful discrimination and other harms that result from uses of AI in Federal Government programs and benefits administration. This directive does not apply to agencies’ civil or criminal enforcement authorities. Agencies shall consider opportunities to ensure that their respective civil rights and civil liberties offices are appropriately consulted on agency decisions regarding the design, development, acquisition, and use of AI in Federal Government programs and benefits administration. To further these objectives, agencies shall also consider opportunities to increase coordination, communication, and engagement about AI as appropriate with community-based organizations; civil-rights and civil-liberties organizations; academic institutions; industry; State, local, Tribal, and territorial governments; and other stakeholders."
Executive Order 14091Ongoing requirementUnspecifiedN/A
91
Sec. 7.2(b)(i)Civil RightsEquity & Civil RightsSecretary of HHSDepartment of Health and Human ServicesIn consultation with relevant agencies"To promote equitable administration of public benefits:"

"(i) [Shall] publish a plan, informed by the guidance issued pursuant to section 10.1(b) of this order, addressing the use of automated or algorithmic systems in the implementation by States and localities of public benefits and services administered by the Secretary, such as to promote: assessment of access to benefits by qualified recipients; notice to recipients about the presence of such systems; regular evaluation to detect unjust denials; processes to retain appropriate levels of discretion of expert agency staff; processes to appeal denials to human reviewers; and analysis of whether algorithmic systems in use by benefit programs achieve equitable and just outcomes."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Health and Human Services "Published guidance and principles that set guardrails for the responsible and equitable use of AI in administering public benefits programs" that provide guidelines on managing risks for uses of AI and automated systems in benefits programs it oversees.
Implemented

04/29/2024: HHS shares its plan for promoting responsible use of AI in automated and algorithmic systems by state, local, tribal, and territorial governments in the administration of public benefits that "provides more detail about how the rights-impacting and/or safety-impacting risk framework established in OMB Memorandum M-24-10 applies to public benefits delivery, provides information about existing guidance that applies to AI-enabled systems, and lays out topics that HHS is considering providing future guidance on."
Implemented

04/29/2024: HHS plan satisfies this requirement.
92
Sec. 7.2(b)(ii)Civil RightsEquity & Civil RightsSecretary of AgricultureDepartment of Agriculture"(ii) [Shall], as informed by the guidance issued pursuant to section 10.1(b) of this order, issue guidance to State, local, Tribal, and territorial public-benefits administrators on the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs administered by the Secretary, to ensure that programs using those systems:
(A) maximize program access for eligible recipients;
(B) employ automated or algorithmic systems in a manner consistent with any requirements for using merit systems personnel in public-benefits programs;
(C) identify instances in which reliance on automated or algorithmic systems would require notification by the State, local, Tribal, or territorial government to the Secretary;
(D) identify instances when applicants and participants can appeal benefit determinations to a human reviewer for reconsideration and can receive other customer support from a human being;
(E) enable auditing and, if necessary, remediation of the logic used to arrive at an individual decision or determination to facilitate the evaluation of appeals; and
(F) enable the analysis of whether algorithmic systems in use by benefit programs achieve equitable outcomes."
Time-boxed requirementWithin 180 daysApril 27, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Agriculture "Published guidance and principles that set guardrails for the responsible and equitable use of AI in administering public benefits programs" that "explains how State, local, Tribal, and territorial governments should manage risks for uses of AI and automated systems in benefits programs such as SNAP."
Implemented

04/29/2024: USDA issues a framework for state, local, tribal, and territorial use of AI in public benefits administration, outlining its principles and approach to supporting these governments in responsibly using AI to implement and administer USDA’s nutrition benefits and services.
Implemented

04/29/2024: USDA framework satisfies this requirement.
93
Sec. 7.3(a)Civil RightsEquity & Civil RightsSecretary of LaborDepartment of LaborTo "prevent unlawful discrimination from AI used for hiring"

"[S]hall publish guidance for Federal contractors regarding nondiscrimination in hiring involving AI and other technology-based hiring systems."
Time-boxed requirementWithin 365 daysOctober 29, 2024Implemented

04/29/2024: White House Fact Sheet states: The Department of Labor "Released guidance to assist federal contractors and employers comply with worker protection laws as they deploy AI in the workplace." It "developed a guide for federal contractors and subcontractors to answer questions and share promising practices to clarify federal contractors’ legal obligations, promote equal employment opportunity, and mitigate the potentially harmful impacts of AI in employment decisions."
Implemented

04/29/2024: The Department of Labor issues guidance on "Artificial Intelligence and Equal Employment Opportunity for Federal Contractors" to clarify federal contractors' legal obligations, promote EEO, and mitigate the potentially harmful impacts of AI in employment decisions.
Implemented

04/29/2024: The Department of Labor guidance satisfies this requirement.
94
Sec. 7.3(b)Civil RightsEquity & Civil RightsDirector of the Federal Housing Finance Agency, Director of the Consumer Financial Protection BureauFederal Housing Finance Agency, Consumer Financial Protection BureauTo "address discrimination and biases against protected groups in housing markets and consumer financial markets"

"[E]ncouraged to consider using their authorities, as they deem appropriate, to require their respective regulated entities, where possible, to use appropriate methodologies including AI tools to ensure compliance with Federal law and:
(i) evaluate their underwriting models for bias or disparities affecting protected groups; and
(ii) evaluate automated collateral-valuation and appraisal processes in ways that minimize bias."
Ongoing requirementUnspecifiedN/A
95
Sec. 7.3(c)Civil RightsEquity & Civil RightsSecretary of Housing and Urban Development, Director of Consumer Financial Protection BureauDepartment of Housing and Urban Development, Consumer Financial Protection Bureau"To combat unlawful discrimination enabled by automated or algorithmic tools used to make decisions about access to housing and in other real estate-related transactions, the Secretary of Housing and Urban Development shall, and the Director of the Consumer Financial Protection Bureau is encouraged to, issue additional guidance:

(i) addressing the use of tenant screening systems in ways that may violate the Fair Housing Act (Public Law 90-284), the Fair Credit Reporting Act (Public Law 91-508), or other relevant Federal laws, including how the use of data, such as criminal records, eviction records, and credit information, can lead to discriminatory outcomes in violation of Federal law; and

(ii) addressing how the Fair Housing Act, the Consumer Financial Protection Act of 2010 (title X of Public Law 111-203), or the Equal Credit Opportunity Act (Public Law 93-495) apply to the advertising of housing, credit, and other real estate-related transactions through digital platforms, including those that use algorithms to facilitate advertising delivery, as well as on best practices to avoid violations of Federal law."
Fair Housing Act, Fair Credit Reporting Act, Consumer Financial Protection Act of 2010, Equal Credit Opportunity Act, other relevant Federal lawsTime-boxed requirementWithin 180 daysApril 27, 2024Implemented

01/29/2024: White House Fact Sheet states: Consumer Financial Protection Bureau "issued an advisory opinion to highlight that false, incomplete, and old information must not appear in background check reports, including for tenant screening."

04/29/2024: White House Fact Sheet states: The Department of Housing and Urban Development "Issued guidance on AI’s nondiscriminatory use in the housing sector." The two guidance documents "affirmed that existing prohibitions against discrimination apply to AI’s use for tenant screening and advertisement of housing opportunities, and it explained how deployers of AI tools can comply with these obligations."
Implemented

01/11/2024: CFPB announces the issuance of two new guidance documents for consumer reporting companies to address "inaccurate background check reports, as well as sloppy credit file sharing practices": "Fair Credit Reporting; Background Screening" and "Fair Credit Reporting; File Disclosure."

05/02/2024: HUD announces the issuance of two new guidance documents on AI's nondiscriminatory use in the housing sector: "Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing" and "Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms."
Implemented

01/11/2024: CFPB issuance of two guidance documents satisfies the first half of this requirement.

05/02/2024: HUD issuance of two guidance documents satisfies the second half of this requirement.
96
Sec. 7.3(d)Civil RightsEquity & Civil RightsArchitectural and Transportation Barriers Compliance BoardArchitectural and Transportation Barriers Compliance Board"To help ensure that people with disabilities benefit from AI’s promise while being protected from its risks, including unequal treatment from the use of biometric data like gaze direction, eye tracking, gait analysis, and hand motions [...]"

"[E]ncouraged, as it deems appropriate, to solicit public participation and conduct community engagement; to issue technical assistance and recommendations on the risks and benefits of AI in using biometric data as an input; and to provide people with disabilities access to information and communication technology and transportation services."
Ongoing requirementUnspecifiedN/A
97
Sec. 8. Protecting Consumers, Patients, Passengers, and Students
98
Sec. 8(a)ConsumersConsumer ProtectionIndependent regulatory agenciesIndependent regulatory agencies"[E]ncouraged, as they deem appropriate, to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI, including risks to financial stability, and to consider rulemaking, as well as emphasizing or clarifying where existing regulations and guidance apply to AI, including clarifying the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use, and emphasizing or clarifying requirements and expectations related to the transparency of AI models and regulated entities’ ability to explain their use of AI models."Ongoing requirementUnspecifiedN/AIn progress

01/29/2024: White House Fact Sheet states: Federal Trade Commission "proposed changes to a privacy rule that would further limit companies' ability to monetize children's data, including by limiting targeted advertising."

03/28/2024: White House Fact Sheet states: FTC "proposed a new rule to provide for penalties and redress when AI is used to impersonate an individual for commercial purposes."
In progress

12/20/2023: FTC issues a notice of proposed rulemaking and request for comment on its proposed changes to the Children’s Online Privacy Protection Rule (COPPA Rule) that "would place new restrictions on the use and disclosure of children’s personal information and further limit the ability of companies to condition access to services on monetizing children’s data."

03/01/2024: FTC issues a new Trade Regulation Rule on Impersonation of Government and Businesses that prohibits impersonation, including AI-generated impersonation.
In progress

12/20/2023: FTC's notice of proposed rulemaking signals that implementation of this ongoing requirement has begun, though a final rule has not yet been issued and the proposed rule relates narrowly to children's privacy. Additionally, the FTC is not the only agency responsible for implementing this requirement on an ongoing basis.

03/01/2024: FTC rule on impersonation indicates that another agency is taking action under this ongoing requirement.
99
Sec. 8(b)(i)ConsumersConsumer ProtectionSecretary of HHSDepartment of Health and Human ServicesIn consultation with: Secretary of Defense, Secretary of Veterans Affairs"To help ensure the safe, responsible deployment and use of AI in the healthcare, public-health, and human-services sectors"

"(i) [Shall] establish an HHS AI Task Force [...]"
Time-boxed requirementWithin 90 daysJanuary 28, 2024Implemented

01/29/2024: White House Fact Sheet states: "Established an AI Task Force at the Department of Health and Human Services to develop policies to provide regulatory clarity and catalyze AI innovation in health care. The Task Force will, for example, develop methods of evaluating AI-enabled tools and frameworks for AI’s use to advance drug development, bolster public health, and improve health care delivery. Already, the Task Force coordinated work to publish guiding principles for addressing racial biases in healthcare algorithms."
Implemented

12/19/2023: The co-lead of HHS's AI efforts stated in oral testimony before the House Energy and Commerce Committee that HHS has launched a department-wide task force examining how AI will affect eight areas of healthcare, including healthcare delivery, human services, and R&D.
Implemented

12/19/2023: A senior HHS official's oral testimony before the House Energy and Commerce Committee indicates that implementation is in progress, though the task force was not cited in the written testimony.

02/01/2024: Media reports and Stanford HAI interactions with HHS confirm that the task force is active, led by Micky Tripathi and Syed Mohiuddin (since replaced by Erin Szulman). While a formal announcement of the task force's leadership, members, and next steps has not yet been made, this requirement can be confirmed as implemented.
100
Sec. 8(b)(i)ConsumersConsumer ProtectionHHS AI Task Force established in subsection 8(b)(i)Department of Health and Human Services(i) cont'd

"[S]hall develop a strategic plan that includes policies and frameworks — possibly including regulatory action, as appropriate — on responsible deployment and use of AI and AI-enabled technologies in the health and human services sector (including research and discovery, drug and device safety, healthcare delivery and financing, and public health), and identify appropriate guidance and resources to promote that deployment, including in the following areas:
(A) development, maintenance, and use of predictive and generative AI-enabled technologies in healthcare delivery and financing — including quality measurement, performance improvement, program integrity, benefits administration, and patient experience — taking into account considerations such as appropriate human oversight of the application of AI-generated output;
(B) long-term safety and real-world performance monitoring of AI-enabled technologies in the health and human services sector, including clinically relevant or significant modifications and performance across population groups, with a means to communicate product updates to regulators, developers, and users;
(C) incorporation of equity principles in AI-enabled technologies used in the health and human services sector, using disaggregated data on affected populations and representative population data sets when developing new models, monitoring algorithmic performance against discrimination and bias in existing models, and helping to identify and mitigate discrimination and bias in current systems;
(D) incorporation of safety, privacy, and security standards into the software-development lifecycle for protection of personally identifiable information, including measures to address AI-enhanced cybersecurity threats in the health and human services sector;
(E) development, maintenance, and availability of documentation to help users determine appropriate and safe uses of AI in local settings in the health and human services sector;
(F) work to be done with State, local, Tribal, and territorial health and human services agencies to advance positive use cases and best practices for use of AI in local settings; and
(G) identification of uses of AI to promote workplace efficiency and satisfaction in the health and human services sector, including reducing administrative burdens."
Time-boxed requirementWithin 365 daysOctober 29, 2024Implemented

04/29/2024: White House Fact Sheet states: HHS "Developed a strategy for ensuring the safety and effectiveness of AI deployed in the health care sector. The strategy outlines rigorous frameworks for AI testing and evaluation, and it outlines future actions for HHS to promote responsible AI development and deployment."
N/A

No statement on implementation as of 05/31/2024.
Not verifiably implemented

05/30/2024: The Administration for Children and Families (ACF), a division of HHS, issues a request for information on "Enhancing AI Integration in Human and Health Services," which references the EO and seeks input on industry offerings and practices for enhancing human services delivery with AI-enabled technologies. However, this is not an HHS-wide initiative and does not yet constitute a strategy. No additional evidence independent of the White House statement found as of 05/31/2024.