Last month, OpenAI sent a letter to Governor Gavin Newsom essentially advocating that California give up its current effort to regulate AI along the lines recommended by the expert panel he convened (i.e. SB 53), and instead treat either signing the EU AI Act’s General-Purpose AI Code of Practice or collaborating with the US government agency CAISI (formerly the US AI Safety Institute) as sufficient to count as compliance with state regulations.

Their letter was misleading at best. It is copied below (original text here), with commentary added in places where, without additional context, a reader might come to incorrect conclusions.

OpenAI lobbies heavily, and will soon be putting substantial amounts of money into what amounts to threatening politicians who disagree with its take on AI policy (an executive recently launched a new $100M PAC in partnership with a16z with the goal of stopping what they consider excessive AI regulation, and the author of the letter below was reportedly involved in the PAC’s formation). It’s sometimes unclear what OpenAI says behind the scenes, or what policies it will require politicians to subscribe to in order to keep OpenAI-affiliated PACs from funding their election opponents. Letters like this one give some sense of what that agenda is.

While SB 53 is not mentioned by name, it is the clear subtext of the letter. An annotated review of SB 53 can be found here (note that a few amendments were passed to weaken it just in the past few days, perhaps spurred in part by industry pressure).

---

Re: Recognition of International and Federal AI Safety Frameworks for State Law Compliance

Dear Governor Newsom,

We want to begin by thanking you and your Administration for your robust support for the state's technology sector. As we outlined recently in our California Economic Impact Report, the AI sector is driving tremendous innovation in our state’s economy while adding billions of dollars of revenue to the budget — and has the potential to drive even more economic opportunity, more jobs, and more revenue. As you have recognized, California is incredibly well positioned to maintain and strengthen its status as the fourth largest economy in the world because it is home to the AI builders, entrepreneurs, and researchers who will shape the Intelligence Age.

We are writing today to recommend that California take the lead in harmonizing state-based AI regulation with emerging global standards that will spur adoption and innovation, and drive huge leaps in productivity and prosperity for Californians.

In particular, we recommend policies that avoid duplication

⚠️Note: SB 53 contains provisions that aren’t present in the EU AI Act’s General-Purpose AI Code of Practice. For example, SB 53 requires certain safety documentation to be published, full stop, whereas the General-Purpose AI Code of Practice requires this only insofar as it is “necessary” for certain purposes. It is very plausible that companies like OpenAI will lobby for weak interpretations of what this means, or non-enforcement of certain provisions.

and inconsistencies between state requirements and those of similar democratic regimes governing frontier model safety.

Last week, we became the first US AI company to announce our intent to sign the EU AI Act Code of Practice (CoP), which already creates requirements similar to many being contemplated in California.

⚠️Note: While it is good that OpenAI signed earlier rather than later and didn’t, e.g., try to work with the White House to encourage the EU to stand down from enforcement (yet), signing is still hardly an act of benevolence demonstrating proactive AI governance. In order to serve products in the EU, OpenAI will have to comply with the AI Act one way or another, to the extent it is enforced at all, and following the Code of Practice is the most legally safe way to do so, since it is specified in more detail than the AI Act itself (the Code of Practice was intended to tie up loose ends left open in the original Act, specifically those related to general-purpose AI systems).

OpenAI also formally committed to working with the US federal government and its Center for AI Standards and Innovation (CAISI) to conduct evaluations of frontier models’ national security-related capabilities.

⚠️Note: What OpenAI is referring to here is a voluntary agreement allowing US CAISI to evaluate its models before deployment. OpenAI could exit this agreement whenever it wanted, and there is no guarantee OpenAI would listen to CAISI if CAISI’s evaluations turned up something concerning. It is also unlikely that CAISI would try to exert pressure on OpenAI in any way: it is not a regulatory agency, it has no director under the new administration, and more generally it is seen as being in a politically precarious position that would be imperiled by fights with industry.

Similar to the EU Code of Practice, we were one of the first large language model companies to partner with the US federal government.

We believe that California’s leadership in technology regulation is most effective when it complements effective global and federal safety ecosystems.

⚠️Note: OpenAI often complains about state-based AI regulation because it purportedly prefers global and federal regulation. And a few years ago, OpenAI did sometimes talk about binding international regulation. However, OpenAI hasn’t proposed or supported any substantive global or federal regulation that is comparable to the EU AI Act or SB 53 in ambition and is reasonably well specified. It has instead called for the federal government to “Work with both large AI companies and start-ups on a purely voluntary and optional basis” and has more generally given the impression that industry has things under control, other than needing help building datacenters.

By integrating the provisions of the EU CoP and agreements with the US Center for AI Standards and Innovation (CAISI) into any compliance pathway, the state can protect residents, uphold democratic values, and promote innovation on a global scale.

In order to make California a leader in global, national and state-level AI policy, we encourage the state to consider frontier model developers compliant with its state requirements when they sign onto a parallel regulatory framework like the CoP or enter into a safety-oriented agreement with a relevant US federal government agency.

⚠️Note: Signing onto the EU AI CoP does not necessarily mean OpenAI would actually comply with the Code of Practice. It could violate it in ways the EU chooses to ignore (and which OpenAI might encourage the EU to ignore, since the EU has a lot of leeway in enforcement and the US government could bring significant pressure to bear, as it is currently doing with many countries). In such a scenario, California would have no way to verify or enforce compliance. OpenAI could also sign onto the CoP and serve models that don’t meet its standards to people outside the EU (such as Californians), as it has threatened to do in the past in response to the AI Act. OpenAI has delayed EU product launches before and could return to doing so to pressure the EU over the implementation of the AI Act, which leaves a lot of room for interpretation (hence the CoP, and hence the CoP likely being updated over time).

At the same time, we also encourage the state to continue supporting smaller developers and startups to ensure that they do not face challenges, such as a liability regime that pushes them out of California, and by exempting them from state regulations with compliance requirements that larger companies are in a better position to bear. We can all agree that we don’t want to inadvertently create a “California Environmental Quality Act (CEQA) for AI innovation” that would result in California dropping from leading in AI to lagging behind other states or even countries.

⚠️Note: This appears to be an implicit threat that OpenAI would leave California if SB 53 were passed. SB 53 is not particularly burdensome (it requires companies to implement and adhere to a safety policy, publish some documentation on deployed systems, and maintain specific whistleblower protections). It also only applies to companies that train very expensive models and make significant revenue (both thresholds were over $100M when this letter was written, and the revenue threshold has since been raised even higher). It seems very unlikely that any AI company would find it easier to move than to follow these requirements.

Moreover, national frameworks such as the CAISI or CoP

⚠️Note: CAISI is not a “national framework.” It is a government institution with strong incentives to be friendly to industry. Right now OpenAI voluntarily allows CAISI to conduct evaluations, but it could change its mind at any time. And if OpenAI thought this kind of testing were important, it would presumably want its competitors to be required to meet the same requirements.

have the capacity for the kind of safety reviews and testing

⚠️Note: While CAISI and the EU AI Office have talented staff, they are small government organizations and do not necessarily have the capacity to evaluate every model regularly; indeed, when the Code of Practice was published, its authors called attention to this capacity issue. CAISI has ~12 technical staff and is struggling to hire more due to the government’s hiring freeze. OpenAI seems to be implying that the existence of these agencies makes further steps unnecessary, such as the requirement for third-party auditing of companies’ safety claims, a provision which has been stripped from SB 53 since this letter (and would not have taken effect until 2030).

that a state simply cannot do, including testing that requires access to sensitive information and national security expertise. We believe that states, if genuinely interested in safety, should encourage and incentivize frontier model developers to partner with those agencies best equipped to conduct the most sophisticated and advanced reviews.

Finally, aligning California with the global standards being adopted by the leading democracies, including the US and EU, will help ensure that California is supporting the imperative to build on democratic AI as opposed to autocratic AI.

Companies operating in the communist-led People’s Republic of China are unlikely to abide by US state laws, and in fact will benefit from regulations that burden their US competitors with inconsistent standards. Imagine how hard it would have been during the Space Race had California’s aerospace and technology industries been encumbered by regulations that impeded rapid innovation and technology transition, instead of strengthening national security and economic competitiveness. Given our fervent belief that it is critical that US-led democratic AI prevails, we also believe that as California considers its approach to AI regulation, it must continue to be the engine of US economic competitiveness and play to win when it comes to national security interests.

As we have seen many times, California is most effective when it sets the path for other states and countries on major issues. This path allows the state to ensure that companies adhere to global and federal safety ecosystems while creating a national model for states to follow.

Since OpenAI is a non-profit dedicated to building AI that benefits all of humanity, we think that building democratic AI anchored in democratic values, inclusive of safety standards, is foundational to our ability to deliver on our mission.

⚠️Note: OpenAI has attempted to convert into a for-profit and only backed down from these plans after a large amount of backlash (it still intends to make some kind of related transition, but the details remain opaque).

Also, this is not OpenAI’s non-profit mission – their actual mission is to ensure that AGI benefits all of humanity.

OpenAI generally, and the Global Affairs team specifically, have a long track record of rephrasing the mission in ways that make it easier to follow and more consistent with OpenAI’s commercial activities. As just one example, making “building” the focus implies that shipping products is a sufficient way to achieve the mission, whereas “ensuring” would imply that, e.g., OpenAI has a responsibility to advocate for policies that would create strong incentives for safety, rather than opposing or weakening such policies.

Thank you for your consideration of this proposal. I would welcome the opportunity to discuss this further with your staff. Should California be interested in pursuing this approach, then similar to how OpenAI has been among the first to sign onto the safety frameworks established by the world’s leading democracies, we would be excited to consider being the first to sign onto a “California Approach” that reinforces those standards. We believe this would be a powerful win for California, a win for global safety standards, and a win for democracy.

Sincerely,

Christopher Lehane

Chief Global Affairs Officer, OpenAI