I have started a new dynamic document to track legislative efforts from the 118th Congress:

Federal Legislative Proposals Pertaining to Generative AI (118th)

On June 14, 2023, the open-source list of bills from the 117th Congress (2021–2022) was closed, and Anna Lenhart released a report that includes the bills originally circulated along with analysis of Congress's progress and areas for further consideration.

Please reference the final report from here on out.

I know several people have cited the Google Doc, so I will keep it up, but here is the citation for the final report.

Lenhart, A. (2023). Federal AI Legislation: An Analysis of Proposals from the 117th Congress Relevant to Generative AI tools. Institute for Data, Democracy & Politics, The George Washington University. https://iddp.gwu.edu/federal-ai-legislation

Thanks everyone!

ARCHIVED

Compiled and open for crowdsourcing by Anna Lenhart, Policy Fellow, Institute for Data, Democracy & Politics (IDDP) at George Washington University (annalenhart@gwu.edu). First Circulated: April 12, 2023. Last Updated: May 10, 2023

Federal Legislative Proposals Pertaining to Generative AI

There is a narrative floating around that Congress has yet to propose legislation “to protect individuals or thwart the development of A.I.’s potentially dangerous aspects.” While it is true that the potential harms (and benefits) from generative AI are vast and arguably no bill clearly covers the full range, the misperception of the state of AI policy has led people to overlook the wealth of proposals for legislation on AI that already exist. This chart aims to outline important provisions that have been drafted at the federal level.

The bills listed below represent “proposals” from the 117th Congress: they did not make it to the President’s desk and have not been subject to legal interpretation in a court of law. The interpretations below represent the best understanding based on congressional intent.

For the sake of this analysis, we consider generative AI to be a subset of artificial intelligence algorithms that are used to create new content based on patterns in large amounts of existing content, including chatbots such as ChatGPT and video synthesis and image generators such as DALL-E, whether as stand-alone tools or integrated into information technology (search engines, email, productivity tools, etc.). A primary part of this analysis is understanding which definitions of covered platforms/covered entities include generative AI tools. Definitions in legislative text often include cross-references and carve-outs. I did my best to simplify in the chart below, but termtabs.com is an interactive tool for understanding these definitions more deeply (Marissa Gerchick is also open to feedback on this tool).

Many regulators in the executive branch have highlighted that existing laws are applicable to these tools today; it is therefore important for policy analysis to focus on the specific authorities regulators may need to guide the proper use and development of increasingly advanced AI technologies.

How to contribute:

If you believe a piece of legislation (117th Congress) is missing from the list, or you believe an interpretation listed below is incorrect or incomplete, please send me an email with your suggestion. If you would like to be listed as a contributor, include your name and affiliation in the email.

Includes contributions from: B Cavello, Former TechCongress Fellow; Eleanor Tursman, Former TechCongress Fellow

Acknowledgments: All the Hill staffers who worked tirelessly on drafting these proposals while navigating political pressures and power dynamics that most simply could not tolerate. Thank you.

Creation of New Agencies

The United States has a long history of regulating innovative technologies ranging from airplanes to medical devices to advanced energy generation through dedicated regulatory agencies. Members of Congress have proposed a variety of regulatory agencies solely dedicated to the digital market (including generative AI tools). These proposals include two notable features: a) the ability to hire technical staff and b) the ability to promulgate rules using the Administrative Procedure Act (APA), colloquially referred to as notice-and-comment. Lawmakers argue that this approach creates capacity that will allow the government to keep up with new technologies and their impact on society (...at least faster than Congress can).

Digital Platform Commission Act (H.R.7858) (S.4201)

Rep. (now Sen.) Welch (D-VT), Sen. Bennet (D-CO)

Creates a new agency with commissioners to oversee digital platforms where “digital platform means an online service that serves as an intermediary facilitating interactions—(i) between users; and (ii) between users and—(I) entities offering goods and services through the online service; or (II) the online service with respect to goods and services offered directly by the online service.” (likely generative AI)

The Agency can promulgate rules to protect consumers and initiate investigations.

Data Protection Act (S.2134)

Sen. Gillibrand (D-NY), Sen. Brown (D-OH)

Creates a director-led agency with a range of rulemaking authority. The bill specifically denotes “high risk data practices” as including “a systematic processing of publicly accessible data on a large scale.”

Online Privacy Act (H.R. 6027)

Rep. Eshoo (D-CA), Rep. Lofgren (D-CA)

Establishes a director-led Digital Privacy Agency with a range of rulemaking authority related to data protections.

Covered Entities “shall not process personal information or contents of communication for advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for employment, finance, health care, credit, insurance, housing, or education opportunities in a manner that discriminates against or otherwise makes opportunities unavailable on the basis of an individual’s protected class status.”

Note: Senators Graham and Warren have announced that they plan to release legislation to create a new digital regulator in the 118th Congress.

Risk Assessment & Transparency

Lawmakers often describe tools like generative AI as a “black box,” highlighting that it is difficult to understand why the tools respond in specific ways and how they were designed. Members of Congress have proposed a variety of legislation aimed at mandating that entities (private companies, public organizations, and/or governments) assess the risks posed by information technologies (civil rights being the most common to date), mitigate those risks (disparate impact tests, red teaming, etc.), and document that process. Some proposals include third-party auditing of these assessments. Additionally, lawmakers have considered a range of public-facing disclosures and labels to help users better understand technical interfaces and have proposed the disclosure of data sets to enable deeper research into digital platforms.

Algorithmic Accountability Act (H.R.6580) (S.3572)

Rep. Clarke (D-NY), Sen. Wyden (D-OR), Sen. Booker (D-NJ)

Requires generative AI systems that are involved in “critical decisions” (e.g., education, employment, essential utilities, family planning, financial services, healthcare, housing, legal services, etc.) to have their impacts assessed by a covered entity both prior to and after deployment.

The requirements for assessments are incredibly detailed and broadly require covered entities to:

In the case of a new augmented critical decision process, evaluate any previously existing critical decision-making process used for the same critical decision prior to the deployment of the new augmented critical decision process, along with any related documentation or information

Identify and describe any consultation with relevant stakeholders as required

In accordance with any relevant National Institute of Standards and Technology or other Federal Government best practices and standards, perform ongoing testing and evaluation of the privacy risks and privacy-enhancing measures of the automated decision system or augmented critical decision process

Perform ongoing testing and evaluation of the current and historical performance of the automated decision system or augmented critical decision process using measures such as benchmarking datasets, representative examples from the covered entity’s historical data, and other standards, including by documenting

Support and perform ongoing training and education for all relevant employees, contractors, or other agents regarding any documented material negative impacts on consumers from similar automated decision systems or augmented critical decision processes and any improved methods of developing or performing an impact assessment for such system or process based on industry best practices and relevant proposals and publications from experts, such as advocates, journalists, and academics.

Assess the need for and possible development of any guard rail for or limitation on certain uses or applications of the automated decision system or augmented critical decision process, including whether such uses or applications ought to be prohibited or otherwise limited through any terms of use, licensing agreement, or other legal agreement between entities.

Maintain and keep updated documentation of any data or other input information used to develop, test, maintain, or update the automated decision system or augmented critical decision process

Evaluate the rights of consumers

Identify any likely material negative impact of the automated decision system or augmented critical decision process on consumers and assess any applicable mitigation strategy

Describe any ongoing documentation of the development and deployment process with respect to the automated decision system or augmented critical decision process

Identify any capabilities, tools, standards, datasets, security protocols, improvements to stakeholder engagement, or other resources that may be necessary or beneficial to improving the automated decision system, augmented critical decision process, or the impact assessment of such system or process

Document any of the impact assessment requirements described in paragraphs [above] that were attempted but were not possible to comply with because they were infeasible, as well as the corresponding rationale for not being able to comply with such requirements

DEEP FAKES Accountability Act (H.R.2395)

Rep. Clarke (D-NY)

Requires “any person who, using any means or facility of interstate or foreign commerce, produces an advanced technological false personation record with the intent to distribute such record over the internet or knowledge that such record shall be so distributed” to embed a “digital watermark” and additional disclosures.

Additionally, “any manufacturer of software, who in the course of conducting such business produces software, in or affecting interstate or foreign commerce, which such manufacturer reasonably believes, in the context of their intended distribution of the product, will be used to produce deep fakes…shall—

ensure such software has the technical capability to insert watermarks and disclosures of the nature described in such section into such deep fakes”

Digital Services Oversight and Safety Act (H.R. 6796)

Rep. Trahan (D-MA), Rep. Schiff (D-CA), Rep. Casten (D-IL)

Outlines a comprehensive set of disclosures for interactive computer services (ICS).

[IF a generative AI tool does not meet the ICS definition]

Sec 7 mandates that large covered platforms (aka large social media sites) conduct comprehensive risk assessment and risk mitigation audits, meaning social media companies would need to assess and mitigate (subject to audit) systemic risks including:

(A) The dissemination of illegal content or illegal goods, or the facilitation of illegal activity, through a hosting service.

(B) Discrimination against individuals based on race, color, religion or creed, national origin or ancestry, sex (including gender, pregnancy status, sexual orientation, or gender identity), age, physical or mental disability, veteran status, genetic information, or citizenship by, or resulting from the activities of, a provider of a hosting service.

(C) Any malfunctioning or intentional manipulation of a hosting service, including by means of inauthentic use or coordinated, automated, or other exploitation of the service or risks inherent to the intended operation of the service, including the amplification of illegal content, and of content that is in breach of the community standards of the provider of the service and has an actual or foreseeable negative effect on the protection of public health, minors, civic discourse, electoral processes, public security, or the safety of vulnerable and marginalized communities.

Many platforms will interpret systemic risks to include the spread of dangerous mis/disinformation created at scale by generative AI. Additionally, to the extent that generative AI is used within the hosting service it would be covered in the risk assessments and at least tangentially in the other transparency requirements of the bill.

The first mitigation technique listed: “(A) Integrating threat modeling and red-teaming processes to guard against systemic risks identified under paragraph (2)(A) in the early stages of product design, and to test potential mitigations prior to the release of a product.”

[If Generative AI does meet the legal definition of ICS]

Depending on the size of the generative AI system in question, it may be subject to a range of oversight and reporting requirements, including: clear community standards, transparency reports, internal complaint systems, researcher access, risk assessment and mitigation reports, independent audits for all harms, advertisement libraries, and a high-reach public content stream.

Platform Accountability and Consumer Transparency (PACT) Act (S.797)

Sen. Schatz (D-HI), Sen. Thune (R-SD)

Outlines transparency requirements and narrows Section 230’s safe harbor.

[IF a generative AI tool does not meet the interactive computer service definition]

Platforms that host user generated content will be required to submit biannual transparency reports which would likely be impacted by the amount and types of content created through generative AI.

The bill states that Section 230 shall not “apply to a provider of an interactive computer service, with respect to illegal content shared or illegal activity occurring on the interactive computer service, if the provider—

“(i) has actual knowledge of the illegal content or illegal activity; and

“(ii) does not remove the illegal content or stop the illegal activity—

“(I) within 4 days of acquiring that knowledge, subject to reasonable exceptions based on concerns about the legitimacy of the notice; or

“(II) if the knowledge is acquired from a notice that emanates from a default judgment or stipulated agreement—

“(aa) within 10 days of acquiring that knowledge; or

“(bb) if the provider seeks to vacate the default judgment or stipulated agreement under subparagraph (B)(i)(III) and the proceeding initiated under that subparagraph results in a determination that the default judgment or stipulated agreement should remain intact, within 24 hours of that determination.”

To the extent content created by generative AI is illegal, the interactive computer service hosting the content would need to remove it.

[If a generative AI tool does meet the legal definition of interactive computer service]

The generative AI tool would be required to have a complaint system for “potentially policy-violating content, illegal content, or illegal activity,” to complete transparency reports on the treatment of such content, and would lose Section 230 protections with respect to actions brought by federal agencies.

FTC Whistleblower Act (H.R.6093)

Rep. Schakowsky (D-IL), Rep. Trahan (D-MA)

Allows whistleblowers within companies building generative AI to bring information to the FTC when they notice illegal activity within the FTC’s jurisdiction (structured similarly to the SEC whistleblower program).

Platform Accountability and Transparency Act (S.5339)

Sen. Coons (D-DE), Sen. Portman (R-OH), Sen. Klobuchar (D-MN), Sen. Cassidy (R-LA)

Requires covered platforms to make data available to researchers, journalists and the public.

Covers platforms that “(i) permits a person to become a registered user, establish an account, or create a profile for the purpose of allowing the user to create, share, and view user-generated content through such an account or profile; (ii) enables one or more users to generate content that can be viewed by other users of the platform; and (iii) primarily serves as a medium for users to interact with content generated by other users of the platform and for the platform to deliver ads to users”

This definition will likely not cover stand-alone generative AI tools but would cover social media platforms that integrate generative AI or spread content created by generative AI.

Terms-of-service Labeling, Design, and Readability Act or the TLDR Act (H.R.6407) (S.3501)

Rep. Trahan (D-MA), Sen. Cassidy (R-LA), Sen. Lujan (D-NM)

Mandates summary statements and structured data formats for terms of service. To the extent generative AI tools have privacy statements and terms pertaining to acceptable use, those would need to be structured and easy to read.

Kids Online Safety Act (KOSA) (S.3663)

Sen. Blumenthal (D-CT), Sen. Blackburn (R-TN)

Requires platforms to disclose clear terms (for minors and parents), how algorithms are used, advertising labels, transparency reports, systemic risk assessments and mitigation (description of safeguards), and third-party audit of these reports.

The text also creates a program for eligible researchers to get access to platform (generative AI tools) data to study harms to minors.

The provisions in this bill are tied to covered platforms, where “covered platform means a social media service, social network, video game, messaging application, video streaming service, educational service, or an online platform that connects to the internet and that is used, or is reasonably likely to be used, by a minor.” I think it is safe to assume “an online platform that connects to the internet” would cover most generative AI tools.

Algorithmic Justice and Online Platform Transparency Act (H.R.3611) (S.1896)

Rep. Matsui (D-CA), Sen. Markey (D-MA)

Requires online platforms to release disclosures and assessments to ensure the product does not discriminate.

Generative AI tools would likely be included under the definition of “Algorithmic Process” in the bill; however, the definition of “Online Platform” includes the phrase “and provides a community forum for user generated content.” Community forum is not defined in the text, and I can imagine many stand-alone generative AI tools would argue they are not providing a community forum.

To the extent an “Online Platform” utilizes an “Algorithmic Process” (including but not limited to generative AI) to “withhold, amplify, recommend, or promote content (including a group) to a user of the online platform” they would need to provide a notice regarding personal information and its use in the algorithmic process, content moderation transparency reports, advertisement libraries, and data portability.

Additionally, “if the online platform (except for a small business) utilizes an algorithmic process that relates to opportunities for housing, education, employment, insurance, credit, or the access to or terms of use of any place of public accommodations, an assessment of whether the type of algorithmic process produces disparate outcomes on the basis of an individual’s or class of individuals’ actual or perceived race, color, ethnicity, sex, religion, national origin, gender, gender identity, sexual orientation, familial status, biometric information, or disability status.”

Section 6 of the bill prohibits conduct related to discrimination in public accommodations, equal opportunity, voting rights, and discriminatory advertising.

Notably, the text also includes this safety provision:

(e) Safety and effectiveness of algorithmic processes.—

(1) IN GENERAL.—It shall be unlawful for an online platform to employ an algorithmic process in a manner that is not safe and effective.

(2) SAFE.—For purposes of paragraph (1), an algorithmic process is safe—

(A) if the algorithmic process does not produce any disparate outcome as described in the assessment conducted under section 4(a)(2)(A)(iv); or

(B) if the algorithmic process does produce a disparate outcome as described in the assessment conducted under section 4(a)(2)(A)(iv), any such disparate outcome is justified by a non-discriminatory, compelling interest, and such interest cannot be satisfied by less discriminatory means.

(3) EFFECTIVE.—For purposes of paragraph (1), an algorithmic process is effective if the online platform employing or otherwise utilizing the algorithmic process has taken reasonable steps to ensure that the algorithmic process has the ability to produce its desired or intended result.

Stopping Unlawful Negative Machine Impacts through National Evaluation Act (S.5351)

Sen. Portman (R-OH)

Clarifies that “A covered entity that uses artificial intelligence to make or inform a decision that has an impact on a person that is addressed by a covered civil rights law, including whether to provide a program or activity or accommodation to a person, shall be liable for a claim of discrimination under the corresponding covered civil rights law in the same manner and to the same extent (including being liable pursuant to that law’s standard of culpability) as if the covered entity had made such decision without the use of artificial intelligence.”

The definition of “Artificial Intelligence System” in the bill would include generative AI tools, and “Covered Entity” would include companies deploying generative AI.

The bill also directs NIST to “establish a program for conducting technology evaluations to assess and assist in mitigating bias and discrimination in artificial intelligence systems of covered entities with respect to race, sex, age, disability, and other classes or characteristics protected by covered civil rights laws. In establishing such program, the Director shall ensure that such evaluations effectively approximate real-world applications of artificial intelligence systems.”

Advancing American AI Act (S.1353) [Passed as part of the 2023 NDAA]

Sen. Peters (D-MI), Sen. Portman (R-OH)

The definition of AI would include generative AI tools. Today, most use of generative AI by government agencies occurs through procurement.

This bill requires specified federal agencies to take steps to promote artificial intelligence (AI) while aligning with U.S. values, such as the protection of privacy, civil rights, and civil liberties.

Specifically, the bill directs the Office of Management and Budget (OMB), in developing “the guidance required under section 104(a) of the AI in Government Act of 2020 (title I of division U of Public Law 116–260),” to “consider—

  1. the considerations and recommended practices identified by the National Security Commission on Artificial Intelligence in the report entitled Key Considerations for the Responsible Development and Fielding of AI, as updated in April 2021;
  2. the principles articulated in Executive Order 13960 (85 Fed. Reg. 78939; relating to promoting the use of trustworthy artificial intelligence in Government); and
  3. the input of—(A) the Privacy and Civil Liberties Oversight Board; (B) relevant interagency councils, such as the Federal Privacy Council, the Chief Information Officers Council, and the Chief Data Officers Council; (C) other governmental and nongovernmental privacy, civil rights, and civil liberties experts; and (D) any other individual or entity the Director determines to be appropriate.”

The bill also directs Homeland Security to “issue policies and procedures for the Department related to—(A) the acquisition and use of artificial intelligence; and (B) considerations for the risks and impacts related to artificial intelligence-enabled systems, including associated data of machine learning systems, to ensure that full consideration is given to—(i) the privacy, civil rights, and civil liberties impacts of artificial intelligence-enabled systems; and (ii) security against misuse, degradation, or rending inoperable of artificial intelligence-enabled systems;”

Section 5 requires the head of each agency to “prepare and maintain an inventory of the artificial intelligence use cases of the agency, including current and planned uses”

Artificial Intelligence for Agency Impact Act (H.R. 4468)

Rep. Maloney (D-NY)

Directs the head of select agencies to “establish an AI Strategy, Objectives, and Metrics Plan that contains strategies, objectives, and metrics for the trustworthy adoption of artificial intelligence by the agency to better achieve the mission of the agency to serve the people of the United States”

“Promoting Responsibility Over Moderation In the Social-media Environment Act” or the “PROMISE Act” (S.427) (H.R.5803)

Sen. Lee (R-UT), Sen. Moran (R-KS), Sen. Braun (R-IN), Rep. Rice (R-SC), Rep. Joyce (R-OH), Rep. Norman (R-SC)

[In the case that generative AI tools are considered by the courts to be an ICS] 

Requires covered entities to “implement and operate in accordance with an information moderation policy…disclose such information moderation policy in a publicly available and easily accessible manner; and shall not make a deceptive policy statement with respect to such information moderation policy.”

Where an “information moderation policy” includes “a policy that accurately describes, in plain, easy to understand language, information regarding the business practices of a covered entity with respect to the standards, processes, and policies of the covered entity on moderating information provided by a user or other information content provider…”

[If generative AI does not meet the legal definition of ICS]

Any platform that hosts user generated content, including content created with generative AI tools, would likely need to disclose how it moderates such content.

Congress has successfully directed and funded NIST to publish standards and frameworks regarding AI, most notably in Division E of the National Defense Authorization Act for Fiscal Year 2021 and in the CHIPS and Science Act. Expect to see continued work related to risk assessment, reporting, and testing over the next few years.

Data Protection

The interaction between data protection/privacy laws and generative AI is perhaps the most challenging to assess because the proposals’ definitions of covered data, personal data, and even public data matter tremendously. Regulators in Italy are testing the bounds of the General Data Protection Regulation (GDPR) now with their ChatGPT ban. For the analysis below, I looked at the definitions for categories of data and considered the data being collected by most generative AI tools: device IDs, query inputs (which can be argued to be reasonably linkable to a user), and inferences made from public data. I believe the proposals below will capture the data practices of most generative AI tools. To the extent a generative AI tool can truly separate user queries from the user (no collection of device IDs), it may be able to escape coverage of these provisions depending on the exact language of the text.

Many of the proposals below include a right to access and/or a right to deletion/erasure. Unfortunately, it is not always clear whether those rights would extend to information generated about an individual (true or false), especially if the generative AI tool’s training data is entirely “public data.”

Some of the proposals also include provisions such as duty of care and obligations to test for and uphold civil rights that will likely extend to generative AI tools and require companies building these tools to carefully consider and test the outputs generated by their technology (aka the ways these tools process covered data/personal data).

American Data Privacy and Protection Act (ADPPA) (H.R. 8152)

Rep. Pallone (D-NJ), Rep. McMorris Rodgers (R-WA), Rep. Schakowsky (D-IL), Rep. Bilirakis (R-FL)

Outlines a comprehensive set of obligations for covered entities and protections for consumers.

Generative AI tools would be considered covered entities and would have a range of obligations under the bill.

Includes a clarification that covered data would include “any inference made exclusively from multiple independent sources of publicly available information that reveals sensitive covered data with respect to an individual.” Therefore, to the extent public data used to train a generative AI tool reveals sensitive data about an individual, that output is protected.

Additionally, Sec. 207 would clarify that a generative AI tool “may not collect, process, or transfer covered data in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability.”

Similar to the Algorithmic Accountability Act, depending on the size of the generative AI platform, the company may have to conduct algorithm impact assessments that outline the steps the company “has taken or will take to mitigate potential harms from the covered algorithm to an individual or group of individuals, including related to—

(I) covered minors;

(II) making or facilitating advertising for, or determining access to, or restrictions on the use of housing, education, employment, healthcare, insurance, or credit opportunities;

(III) determining access to, or restrictions on the use of, any place of public accommodation, particularly as such harms relate to the protected characteristics of individuals, including race, color, religion, national origin, sex, or disability;

(IV) disparate impact on the basis of individuals’ race, color, religion, national origin, sex, or disability status; or

(V) disparate impact on the basis of individuals’ political party registration status.”

To the extent that a generative AI tool uses data that is “licensed” or otherwise non-public, it would potentially be considered a “third party” or a “third-party collecting entity” and would be implicated by several provisions of the bill.

The bill also outlines mandates for data deletion and portability and includes important text acknowledging that exemptions may be needed for some technologies.

(D) FURTHER EXCEPTIONS.—The Commission may, by regulation as described in subsection (g), establish additional permissive exceptions necessary to protect the rights of individuals, alleviate undue burdens on covered entities, prevent unjust or unreasonable outcomes from the exercise of access, correction, deletion, or portability rights, or as otherwise necessary to fulfill the purposes of this section. In establishing such exceptions, the Commission should consider any relevant changes in technology, means for protecting privacy and other rights, and beneficial uses of covered data by covered entities.

Data Care Act (S.919)

Sen. Schatz (D-HI)

Mandates that online services abide by a duty of care and a duty of loyalty.

Most generative AI tools would meet the definition of “online service provider” because they collect what the bill refers to as “individual identifying data.”

This means they would be held to a “duty of care”: “An online service provider shall—

(A) reasonably secure individual identifying data from unauthorized access; and

(B) subject to subsection (d), promptly inform an end user of any breach of the duty described in subparagraph (A) of this paragraph with respect to sensitive data of that end user.”

And a “duty of loyalty”:

“An online service provider may not use individual identifying data, or data derived from individual identifying data, in any way that—

(A) will benefit the online service provider to the detriment of an end user; and

(B) (i) will result in reasonably foreseeable and material physical or financial harm to an end user; or

(ii) would be unexpected and highly offensive to a reasonable end user.”

My best interpretation is that a query entered into a generative AI tool would count as “individual identifying data” and therefore could not be used in a way that “will result in reasonably foreseeable and material physical or financial harm to an end user,” meaning output from, say, a chatbot that met this level of harm would be prohibited.

Consumer Online Privacy Rights Act (COPRA) (S.3195)

Sen. Cantwell (D-WA)

Outlines a comprehensive set of obligations for covered entities and protections for consumers. Generative AI tools would be considered covered entities and would have a range of obligations under the bill.

The covered entity definition will likely include generative AI tools for two reasons: they collect query data, which I interpret to be “covered data,” and their training data, while presumably “public data,” would be captured by the limitation on “publicly available information” if it is combined to display personal data, since that limitation does not include “information derived from publicly available information.”

As a covered entity, generative AI tools would have a duty of care and loyalty similar to that described in S.919, but slightly broader: “A covered entity shall not— (1) engage in a deceptive data practice or a harmful data practice…”

The bill defines a “deceptive data practice” as “an act or practice involving the processing or transfer of covered data in a manner that constitutes a deceptive act or practice in violation of section 5(a)(1) of the Federal Trade Commission Act (15 U.S.C. 45(a)(1))” and a “harmful data practice” as “the processing or transfer of covered data in a manner that causes or is likely to cause any of the following: (A) Financial, physical, or reputational injury to an individual. (B) Physical or other offensive intrusion upon the solitude or seclusion of an individual or the individual’s private affairs or concerns, where such intrusion would be offensive to a reasonable person. (C) Other substantial injury to an individual.” This would presumably cover generated responses that, say, persuade someone to leave their wife.

The bill includes civil rights protections:

“A covered entity shall not process or transfer covered data on the basis of an individual’s or class of individuals’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability—

(A) for the purpose of advertising, marketing, soliciting, offering, selling, leasing, licensing, renting, or otherwise commercially contracting for a housing, employment, credit, or education opportunity, in a manner that unlawfully discriminates against or otherwise makes the opportunity unavailable to the individual or class of individuals; or

(B) in a manner that unlawfully segregates, discriminates against, or otherwise makes unavailable to the individual or class of individuals the goods, services, facilities, privileges, advantages, or accommodations of any place of public accommodation.”

The bill also requires algorithmic decision-making impact assessments:

“a covered entity engaged in algorithmic decision-making, or in assisting others in algorithmic decision-making for the purpose of processing or transferring covered data, solely or in part to make or facilitate advertising for housing, education, employment or credit opportunities, or an eligibility determination for housing, education, employment or credit opportunities or determining access to, or restrictions on the use of, any place of public accommodation, must annually conduct an impact assessment of such algorithmic decision-making that—

(A) describes and evaluates the development of the covered entity’s algorithmic decision-making processes including the design and training data used to develop the algorithmic decision-making process, how the algorithmic decision-making process was tested for accuracy, fairness, bias and discrimination; and

(B) assesses whether the algorithmic decision-making system produces discriminatory results on the basis of an individual’s or class of individuals’ actual or perceived race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability.”

Under the bill, covered entities have a range of other duties and obligations.

Children and Teens’ Online Privacy Protection Act (S.1628)

Sen. Markey (D-MA), Sen. Cassidy (R-LA)

Amends the Children’s Online Privacy Protection Act of 1998.

Generative AI tools that are “directed to children or minors” (under age 17), either as demonstrated by a set of criteria related to the marketing and appearance of the tool or because the tool is “used or reasonably likely to be used by children or minors” (this last part will likely capture many of the generative AI tools out right now), would be subject to data collection and processing provisions related to “personal information.”

“The term "personal information" means individually identifiable information about an individual collected online, including-(A) a first and last name; (B) a home or other physical address including street name and name of a city or town; (C) an e-mail address; (D) a telephone number; (E) a Social Security number; (F) any other identifier that the Commission determines permits the physical or online contacting of a specific individual; or (G) information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.”

Protecting the Information of our Vulnerable Children and Youth Act (Kids PRIVCY) (H.R. 4801)

Rep. Castor (D-FL)

Amends the Children’s Online Privacy Protection Act of 1998. Includes a full set of product design and data collection obligations.

A generative AI tool that processes covered information (defined as “any information, linked or reasonably linkable to a specific teenager [under age 18] or child, or specific consumer device of a teenager or child”) and is “directed to children” (“targeted to or attractive to children”) would likely be a children’s service.

Operators of a children’s service have several obligations under the bill related to data minimization, transparency, consent, data retention, sharing of data with third parties, and rights to access, correct, and delete covered information.

Additionally, the bill outlines “prohibited practices with respect to teenagers and children” which includes:

“An operator of a children’s service may not—

“(i) process any covered information in a manner that is inconsistent with what a reasonable teenager or parent of a child would expect in the context of a particular transaction or the teenager’s or parent’s relationship with such operator, or seek to obtain verifiable consent for such processing;

“(ii) process any covered information in a manner that is harmful or has been shown to be detrimental to the well-being of children or teenagers;

“(iii) process covered information for the purpose of providing for targeted personalized advertising or engage in other marketing to a specific child or teenager or group of children or teenagers based on—

“(I) using the covered information, online behavior, or group identifiers of such child or teenager or of the children or teenagers in such group; or

“(II) using the covered information or online behavior of children or teenagers who share characteristics with such child or teenager or with the children or teenagers in such group, including income level or protected characteristics or proxies thereof;

“(iv) condition the participation of a child or teenager in a game, sweepstakes, or other contest on consenting to the processing of more covered information than is necessary for such child or teenager to participate;

“(v) engage in cross-device tracking of a child or teenager unless the child or teenager is logged-in to a specific service, for the sole purpose of facilitating the primary purpose of the good or service or a specific feature thereof;

“(vi) engage in algorithmic processes that discriminate on the basis of race, age, gender, ability, or other protected characteristics;

“(vii) disclose biometric information;

“(viii) disclose geolocation information; or

“(ix) collect geolocation information by default or without making it clear to a user when geolocation tracking is in effect”

Point (ii) highlights that generative AI tools are responsible for their outputs in response to a child’s query. Point (vii) covers “biometric information,” which the bill does not define but which in other bills includes voice prints and facial mapping; generative AI tools would be prohibited from disclosing (which likely includes generating) this information.

Information Transparency and Personal Data Control Act (H.R.1816)

Rep. DelBene (D-WA)

Outlines conditions under which consumers are offered opt-in and opt-out controls over data collection. Generative AI tools that collect “sensitive personal information” would be considered controllers.

Most generative AI tools will likely collect “sensitive personal information” because of the following:

“(xvi) web browsing history, application usage history, and the functional equivalent of either that is data described in this subparagraph that is not aggregated data.”

There are carve-outs for de-identified information and publicly available information that may limit a generative AI tool’s responsibility for training data.

Social Media Privacy Protection and Consumer Rights Act (S. 1667)

Sen. Klobuchar (D-MN), Sen. Kennedy (R-LA)

Mandates that online platforms implement certain privacy protections pertaining to transparency and terms of service, access rights, and actions when a violation of privacy occurs.

This bill uses the following definition of “online platform” and would likely capture most generative AI tools, although the “and” may mean that generative AI tools are only covered if the courts determine they are “a search engine.”

“The term “online platform”—

(A) means any public-facing website, web application, or digital application (including a mobile application); and

(B) includes a social network, an ad network, a mobile operating system, a search engine, an email service, or an internet access service.”

Balancing the Rights Of Web Surfers Equally and Responsibly Act of 2021 (S.113, H.R.4659)

Sen. Blackburn (R-TN)

Requires that “edge services” obtain opt-in approval from a user before using, disclosing, or permitting access to the sensitive user information of the user.

Sensitive user information includes any of the following: (A) Financial information. (B) Health information. (C) Information pertaining to children under the age of 13. (D) Social Security number. (E) Precise geolocation information. (F) Content of communications. (G) Web browsing history, history of usage of a software program (including a mobile application), and the functional equivalents of either

The “functional equivalents” of browsing history would likely cover user queries into a generative AI tool.

Clean Slate for Kids Online Act of 2021 (S.1423)

Sen. Durbin (D-IL), Sen. Markey (D-MA), Sen. Blumenthal (D-CT), Sen. Hirono (D-HI)

Provides deletion rights for personal information regarding children under the age of 13.

This bill would require “the operator of any website or online service directed to children” to give an “individual over the age of 13, or a legal guardian of an individual over the age of 13 acting with the knowledge and consent of the individual,” the ability to request deletion of “all personal information in the possession of the operator that was collected from or about the individual when the individual was a child notwithstanding any parental consent that may have been provided when the individual was a child,” where personal information has the definition in Section 1302 of the Children’s Online Privacy Protection Act of 1998 (below).

Personal information as defined in COPPA:

The term "personal information" means individually identifiable information about an individual collected online, including-

(A) a first and last name;

(B) a home or other physical address including street name and name of a city or town;

(C) an e-mail address;

(D) a telephone number;

(E) a Social Security number;

(F) any other identifier that the Commission determines permits the physical or online contacting of a specific individual; or

(G) information concerning the child or the parents of that child that the website collects online from the child and combines with an identifier described in this paragraph.

First, it is interesting to note that the definition of personal information in COPPA does not have a carve-out for public data. In fact, in order to host an interactive computer service for children, the operator is responsible for deleting “all individually identifiable information from postings by children before they are made public, and also delet[ing] such information from the operator’s records” (Federal Register / Vol. 78, No. 12 / Thursday, January 17, 2013 / Rules and Regulations).

Regarding a generative AI tool’s training data, there should not be any public data posted by a child (assuming websites have been following COPPA); however, it is possible that a generative AI tool used data posted by adults that contains personal information belonging to a child, and as an operator under this law it may need to process deletion requests. If this bill intends to cover any and all personal information belonging to children under 13, including in training data sets based on public information created by adults, the text would benefit from clarification.

To the extent a generative AI tool is “directed to children,” it would also likely be responsible under this proposal for deleting any query data stored from its young users.

Consumer Data Privacy and Security Act of 2021 (S.1494)

Sen. Moran (R-KS)

Outlines a comprehensive set of obligations for covered entities and protections for consumers.

Companies building generative AI tools would likely be considered covered entities because they “alone, or jointly with others, determine the purpose and means of collecting or processing personal data,” where “personal data” means “information that identifies or is linked or reasonably linkable to a specific individual,” which I interpret to include query data linked to an account.

The proposal does exclude “publicly available information” from the definition of personal data, where “The term “publicly available information” means any information that a covered entity or service provider has a reasonable basis to believe is lawfully made available to the general public from– (i) a Federal, State, or local government record; (ii) widely distributed media; or (iii) a disclosure to the general public that is made voluntarily by an individual, or required to be made by a Federal, State, or local law.” If the inferences made by a generative AI tool are derived from publicly available information, that sensitive information may fall outside the bounds of this bill.

Generative AI tools, or technologies that incorporate generative AI tools, covered by this bill would also be required to provide data access, portability, and the ability to correct and erase data. Due to the publicly available information exclusion, it is unclear if a provider of a generative AI tool would be required to provide erasure rights for information (true or false) generated about an individual.

Setting an American Framework to Ensure Data Access, Transparency, and Accountability Act or the SAFE DATA Act (S.2499)

Sen. Wicker (R-MS), Sen. Blackburn (R-TN)

Outlines a comprehensive set of obligations for covered entities and protections for consumers.

This is a comprehensive data protection proposal, and most companies building generative AI tools would be considered covered entities because each “collects, processes, or transfers covered data; and determines the purposes and means of such collection, processing, or transfer,” where covered data is data “linked or reasonably linkable to an individual,” which I interpret to include query data linked to an account.

This proposal does exclude “publicly available information” from the definition of personal data where “the term “publicly available information” means any information that a covered entity has a reasonable basis to believe—(I) has been lawfully made available to the general public from Federal, State, or local government records;(II) is widely available to the general public, including information from—(aa) a telephone book or online directory;(bb) television, internet, or radio content or programming; or(cc) the news media or a website that is lawfully available to the general public on an unrestricted basis (for purposes of this subclause a website is not restricted solely because there is a fee or log-in requirement associated with accessing the website); or (III) is a disclosure to the general public that is required to be made by Federal, State, or local law.” This definition quite clearly includes any data posted publicly on a social media site or online forum regardless of sensitivity.

Generative AI tools, or technologies that incorporate generative AI tools, covered by this bill would also be required to provide “Access to, and correction, deletion, and portability of, covered data.” It is unclear if a provider of a generative AI tool would be required to provide erasure rights for information (true or false) generated about an individual.

Note: There are several sector-specific data protection bills (health data, student data, etc.) that would likely be relevant to generative AI tools but are outside the scope of this list.

Product Design Considerations

Many lawmakers have begun to view digital platforms through the lens of protecting consumers from faulty product design and/or business models. Members of Congress have proposed prohibitions on targeted advertising and on dark patterns or manipulative interfaces (endless scroll, autoplay, badges, etc.) and have outlined requirements for parental controls. The proposals below mostly target platforms that are understood to be protected from product liability cases under Section 230 of the Communications Decency Act, although many of them do not directly rely on the definition of Interactive Computer Service (ICS) in CDA 230, meaning generative AI tools would be covered regardless of whether courts consider them to be an ICS.

Banning Surveillance Advertising Act (H.R.6416) (S.3520)

Rep. Eshoo (D-CA), Sen. Booker (D-NJ)

Bans targeted advertising.

To the extent a generative AI tool uses “personal information with respect to the dissemination of the advertisement,” it would be banned from targeted advertising and allowed to use only contextual advertising.

“The term “target” means, with respect to the dissemination of an advertisement, to perform or cause to be performed any computational process designed to select an individual, connected device, or group of individuals or connected devices to which to disseminate the advertisement based on personal information pertaining to the individual or connected device or to the individuals or connected devices that make up the group.”

Deceptive Experiences to Online Users Reduction (DETOUR) Act (H.R.6083) (S.3330)

Rep. Blunt Rochester (D-DE), Rep. Gonzalez (R-OH), Sen. Warner (D-VA), Sen. Fischer (R-NE)

Prohibits certain manipulative user interfaces.

The bill covers large online services, defined as “a website or a service, other than an internet access service, that is made available to the public over the internet, including a social network, a search engine, or an email service” with “more than 100,000,000 authenticated users of an online service in any 30-day period.”

It would be unlawful for large online services:

(1) to design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision making, or choice to obtain consent or user data;

(2) to subdivide or segment consumers of online services into groups for the purposes of behavioral or psychological experiment or research of users of an online service, except with the informed consent of each user involved; or

(3) to design, modify, or manipulate a user interface on a website or online service, or portion thereof, that is directed to an individual under the age of 13, with the purpose or substantial effect of causing, increasing, or encouraging compulsive usage, inclusive of video auto-play functions initiated without the consent of a user.

Kids Online Safety Act (KOSA) (S.3663)*

*Also listed in risk assessment & transparency

Sen. Blumenthal (D-CT), Sen. Blackburn (R-TN)

Outlines a duty of care, mandates certain product design features and parental controls.

The provisions in this bill are tied to covered platforms, where “the term “covered platform” means a social media service, social network, video game, messaging application, video streaming service, educational service, or an online platform that connects to the internet and that is used, or is reasonably likely to be used, by a minor.”

I think it is safe to assume “an online platform that connects to the internet” would cover most generative AI tools.

Regarding design and product limitations, the bill includes a duty of care that would impact the design and testing of generative AI tools:

“(a) Best interests.—A covered platform shall act in the best interests of a minor that uses the platform's products or services, as described in subsection (b).

(b) Prevention of harm to minors.—In acting in the best interests of minors, a covered platform shall take reasonable measures in its design and operation of products and services to prevent and mitigate—

(1) mental health disorders or associated behaviors, including the promotion or exacerbation of self-harm, suicide, eating disorders, and substance use disorders;

(2) patterns of use that indicate or encourage addiction-like behaviors;

(3) physical violence, online bullying, and harassment of a minor;

(4) sexual exploitation, including enticement, grooming, sex trafficking, and sexual abuse of minors and trafficking of online child sexual abuse material;

(5) promotion and marketing of narcotic drugs (as defined in section 102 of the Controlled Substances Act (21 U.S.C. 802)), tobacco products, gambling, or alcohol; and

(6) predatory, unfair, or deceptive marketing practices, or other financial harms.”

Additionally, platforms must design in safeguards for minors and parental tools:

“(a) Safeguards for minors.—

(1) IN GENERAL.—A covered platform shall provide a minor with readily-accessible and easy-to-use safeguards to, as applicable—

(A) limit the ability of other individuals to contact or find a minor, in particular individuals aged 17 or over with no relationship to the minor;

(B) prevent other users, whether registered or not, from viewing the minor’s personal data collected by or shared on the covered platform, in particular restricting public access to personal data;

(C) limit features that increase, sustain, or extend use of the covered platform by a minor, such as automatic playing of media, rewards for time spent on the platform, notifications, and other features that result in compulsive usage of the covered platform by a minor;

(D) control algorithmic recommendation systems that use a minor’s personal data, including the right to—

(i) opt out of such algorithmic recommendation systems; or

(ii) limit types or categories of recommendations from such systems;

(E) delete the minor's account and delete their personal data;

(F) restrict the sharing of the geolocation of a minor and provide notice regarding the tracking of a minor’s geolocation; and

(G) limit the amount of time spent by a minor on the covered platform.

(b) Parental tools.—

(1) TOOLS.—A covered platform shall provide readily-accessible and easy-to-use tools for parents to supervise the use of the covered platform by a minor.

(2) REQUIREMENTS.—The tools provided by a covered platform shall include—

(A) the ability to control privacy and account settings, including the safeguards established [above];

(B) the ability to restrict purchases and financial transactions by a minor, where applicable;

(C) the ability to track metrics of total time spent on the platform; and

(D) control options that allow parents to address the harms described in [harm to minors section]”

Kids Internet Design and Safety Act (KIDS Act) (S.2918) (H.R.5439)

Sen. Markey (D-MA), Rep. Castor (D-FL)

Prohibits certain interface elements within online platforms directed to children.

Most generative AI tools would meet the definition of an online platform.

To the extent they are “directed to children,” they would face a “prohibition on certain interface elements.” Notably, this would include “(iv) Any interface element or setting that unfairly encourages a covered user, due to their age or inexperience, to share personal information, submit content, or spend more time engaging with the platform.”

Additionally, the text includes language that would make it unlawful for an online platform directed to children to use an:

“algorithmic process that amplifies, promotes, or encourages covered users' consumption of videos and other forms of content that—

(A) are of a non-educational nature (as determined by the Commission); and

(B) involve—

(i) sexual material;

(ii) promotion of physical or emotional violence or activities that can reasonably be assumed to result in physical or emotional harm, including self-harm, use of weapons, and bullying;

(iii) activities that are unlawful for covered users to engage in or the promotion of such activities; or

(iv) wholly commercial content that is not reasonably recognizable as such to a covered user.”

Where

“The term “algorithmic process” means a computational process, including one derived from machine learning or other artificial intelligence techniques, that processes personal information or other data for the purpose of determining the order or manner that a set of information is provided to a user of an online platform, including the provision of commercial content, the display of social media posts, or any other method of automated decision making, content selection, content recommendation, or content amplification.”

The phrase “determining the order or manner that a set of information is provided to a user of an online platform” makes me believe many generative AI tools would be covered. It is less clear to me whether a response from a generative AI tool to a user’s query would qualify as an “algorithmic process that amplifies, promotes, or encourages covered users' consumption of videos and other forms of content.”

“Nudging Users to Drive Good Experiences on Social Media Act” or the “Social Media NUDGE Act” (S.3608)

Sen. Klobuchar (D-MN), Sen. Lummis (R-WY)

The bill covers platforms, defined to include “any public-facing website, desktop application, or mobile application that—

(A) is operated for commercial purposes;

(B) provides a forum for user-generated content;

(C) is constructed such that the core functionality of the website or application is to facilitate interaction between users and user-generated content; and

(D) has more than 20,000,000 monthly active users in the United States for a majority of the months in the previous 12-month period.”

This definition will likely not cover standalone generative AI tools but would cover social media platforms that integrate generative AI or spread content created by generative AI.

The bill directs the National Science Foundation (NSF) to work with the National Academies of Sciences, Engineering, and Medicine (NASEM) to conduct a study identifying “content-neutral interventions” aimed at reducing “harms related to algorithmic amplification and social media addiction.”

The bill directs the FTC to conduct rulemaking on how covered platforms should apply the findings from the study.

“Promoting Rights and Online Speech Protections to Ensure Every Consumer is Heard Act” or the “PRO-SPEECH Act” (S. 2031)

Sen. Wicker (R-MS)

Prohibits internet platforms from preventing access to lawful content.

Generative AI tools would likely be considered internet platforms under this bill because each “enables a user to initiate a search query for particular information using the internet and…[is] capable of returning at least 1 search result unaffiliated with the owner or operator of the search engine.”

Among other outlawed practices, the bill states that an internet platform may not “[b]lock[] or otherwise prevent[] a user or entity from accessing any lawful content, application, service, or device that does not interfere with the internet platform’s functionality or pose a data privacy or data security risk to a user.” This type of language means that creators of generative AI tools would have to think about the types of queries they block or discourage and consider whether those restrictions prevent users’ access to lawful content.

The bill also includes transparency requirements “an internet platform shall disclose, on a publicly available and easily accessible website, accurate information regarding the platform management practices, performance characteristics, and commercial terms of service of its app store, cloud computing service, operating system, search engine, or social media network sufficient to enable a reasonable user to make an informed choice regarding the purchase or use of such service and to develop, market, and maintain a product or service on the internet platform.”

Note: There are many bills that amend Section 230; in the case that a generative AI tool is deemed an Interactive Computer Service by the courts, those bills would be relevant. Additionally, bills that amend Section 230 may also affect the way social media companies treat content created by generative AI tools.

AI for the Public Benefit

Many lawmakers have come to understand that artificial intelligence capabilities could offer benefits for the public and that the government plays a role in funding research or applications that otherwise may not be addressed by the private market.

National Artificial Intelligence Research Resource (NAIRR)

(Initiated through National Defense Authorization Act for Fiscal Year 2021)

In January 2023, the NAIRR taskforce submitted its report to Congress and the President. The report includes an implementation roadmap which may inform a series of legislative proposals in the 118th Congress.

“The NAIRR is envisioned as a shared computing and data infrastructure that will provide AI researchers and students across scientific fields and disciplines with access to compute resources and high-quality data, along with appropriate educational tools and user support. The goal for such a national resource is to democratize access to the cyberinfrastructure that fuels AI research and development, enabling all of America’s diverse AI researchers to participate in exploring innovative ideas for advancing AI, including communities, institutions, and regions that have been traditionally underserved.”

Advancing American Artificial Intelligence Innovation Act (S.3175)

Sen. Rosen (D-NV), Sen. Portman (R-OH)

Encourages the Department of Defense to “carry out a pilot program to assess the feasibility and advisability of establishing data libraries for developing and enhancing artificial intelligence capabilities to ensure that the Department of Defense is able to procure optimal artificial intelligence and machine learning software capabilities to meet Department requirements and technology development goals.”

Consumer Safety Technology Act (H.R.3723)

Rep. McNerney (D-CA), Rep. Burgess (R-TX)

Directs the Consumer Product Safety Commission to “establish a pilot program to explore the use of artificial intelligence by the Commission in support of the consumer product safety mission of the Commission.”

Example uses include: “(A) Tracking trends with respect to injuries involving consumer products. (B) Identifying consumer product hazards. (C) Monitoring the retail marketplace (including internet websites) for the sale of recalled consumer products (including both new and used products). (D) Identifying consumer products required by section 17(a) of the Consumer Product Safety Act (15 U.S.C. 2066(a)) to be refused admission into the customs territory of the United States.”

Competition in Digital Markets

Several lawmakers have highlighted the sheer amount of resources, such as computational power, required to build generative AI tools, and the concentration of those resources in a handful of companies (see the House Judiciary Digital Markets Investigation’s Cloud Computing, Voice Assistants, and Amazon Web Services chapters). Some of the competition policy proposals focus broadly on oxygenating the digital market space by providing antitrust regulators more resources to bring cases and scrutinize mergers, while others specifically prohibit anticompetitive practices. I did my best to highlight a few proposals that would have clear implications for companies deploying generative AI tools today.

American Innovation and Choice Online Act (S.2992, H.R.3816)

Sen. Klobuchar (D-MN), Sen. Grassley (R-IA), Rep. Cicilline (D-RI), Rep. Gooden (R-TX)

This bill covers only very large companies, for which market cap is a key metric (and one that has fluctuated over the last few years); for the sake of analyzing generative AI, I am assuming Amazon, Google, and Microsoft are covered.

Makes a series of anticompetitive discriminatory practices unlawful, including:

“(1) preference the products, services, or lines of business of the covered platform operator over those of another business user on the covered platform in a manner that would materially harm competition;

(2) limit the ability of the products, services, or lines of business of another business user to compete on the covered platform relative to the products, services, or lines of business of the covered platform operator in a manner that would materially harm competition;

(3) discriminate in the application or enforcement of the terms of service of the covered platform among similarly situated business users in a manner that would materially harm competition;

(4) materially restrict, impede, or unreasonably delay the capacity of a business user to access or interoperate with the same platform, operating system, or hardware or software features that are available to the products, services, or lines of business of the covered platform operator that compete or would compete with products or services offered by business users on the covered platform;

(5) condition access to the covered platform or preferred status or placement on the covered platform on the purchase or use of other products or services offered by the covered platform operator that are not part of or intrinsic to the covered platform;

(6) use nonpublic data that are obtained from or generated on the covered platform by the activities of a business user or by the interaction of a covered platform user with the products or services of a business user to offer, or support the offering of, the products or services of the covered platform operator that compete or would compete with products or services offered by business users on the covered platform;

(7) materially restrict or impede a business user from accessing data generated on the covered platform by the activities of the business user, or through an interaction of a covered platform user with the products or services of the business user, such as by establishing contractual or technical restrictions that prevent the portability by the business user to other systems or applications of the data of the business user;

(8) materially restrict or impede covered platform users from uninstalling software applications that have been preinstalled on the covered platform or changing default settings that direct or steer covered platform users to products or services offered by the covered platform operator, unless necessary—

…(9) in connection with any covered platform user interface, including search or ranking functionality offered by the covered platform, treat the products, services, or lines of business of the covered platform operator more favorably relative to those of another business user than under standards mandating the neutral, fair, and nondiscriminatory treatment of all business users; or…”

The provisions could be interpreted by the courts to cover situations such as:

  • A user enters a query into Bing’s generative AI chatbot (“what video games should I play this weekend”) that is trained to disproportionately respond with games produced by Activision Blizzard (an example of a Microsoft product preferencing another line of business)
  • Applications using Bard working twice as fast on the Android operating system
  • A startup company building a generative AI tool uses AWS for compute power; AWS could not intentionally slow service for the startup or use the metadata generated by the startup’s use of AWS to compete against it, etc.
  • [send me your examples, it is a fun game]

Ending Platform Monopolies Act (H.R.3825)

Rep. Jayapal (D-WA), Rep. Gooden (R-TX)

This bill covers only very large companies, for which market cap is a key metric (and one that has fluctuated over the last few years); for the sake of analyzing generative AI, I am assuming Amazon, Google, and Microsoft are covered.

Makes it unlawful for these companies to own lines of business that give rise to a conflict of interest:

“it shall be unlawful for a covered platform operator to own, control, or have a beneficial interest in a line of business other than the covered platform that—

(1) utilizes the covered platform for the sale or provision of products or services;

(2) offers a product or service that the covered platform requires a business user to purchase or utilize as a condition for access to the covered platform, or as a condition for preferred status or placement of a business user’s product or services on the covered platform; or

(3) gives rise to a conflict of interest.

(b) Conflict of interest.—For purposes of this section, the term “conflict of interest” includes the conflict of interest that arises when—

(1) a covered platform operator owns or controls a line of business, other than the covered platform; and

(2) the covered platform’s ownership or control of that line of business creates the incentive and ability for the covered platform to—

(A) advantage the covered platform operator’s own products, services, or lines of business on the covered platform over those of a competing business or a business that constitutes nascent or potential competition to the covered platform operator; or

(B) exclude from, or disadvantage, the products, services, or lines of business on the covered platform of a competing business or a business that constitutes nascent or potential competition to the covered platform operator.”

Given that data storage and compute are primary inputs into generative AI tools, and that the companies covered by this bill both build generative AI tools and compete against other companies building them, cloud computing business lines (AWS, Google Cloud, Azure) would, at a minimum, likely need to be structurally separated from their parent companies.

Not included for analysis at this time:

  • Legislation related to hardware (semiconductors)
  • Legislation related to intellectual property law
  • Legislation related to media literacy
  • Legislation related to financial services
  • Legislation to create “task forces” on various data topics (there are dozens of these, often floating in omnibus bills)
  • Legislation related to workforce development (including immigration reform)
  • Legislation related to worker protections
  • Legislation related to national security (use in cyber attacks or weapon development)

Beyond the US Congressional level:

Other organizations have been tracking developments regarding AI at the international and state levels. To understand the implications for generative AI, I suggest interrogating the definitions in those measures, similar to the analysis above.