
How California and the EU are working together to regulate AI


In summary

California’s policies could have a huge impact on the future of artificial intelligence. The EU wants to advise and coordinate.

While the federal government seems content to sit and wait, more than 40 U.S. states are considering hundreds of bills to regulate artificial intelligence.

California, with its status as a technology hub and a huge economy, has a chance to take the lead. So much so that the European Union is trying to coordinate AI regulations with the state. The EU opened an office in San Francisco in 2022 and dispatched a technology envoy, Gerard de Graaf, to better communicate about artificial intelligence laws and regulations.

We live in what de Graaf calls “the year of artificial intelligence.” De Graaf and Joanna Smolinska, deputy head of the EU’s San Francisco office, told CalMatters that if California lawmakers pass AI regulations in the coming months, the state could become the standard-bearer of AI regulation in the United States. In other words: California’s laws could shape the future of artificial intelligence as we know it.

Last month, de Graaf traveled to Sacramento to talk with several state lawmakers key to regulating artificial intelligence, including Assemblymember Rebecca Bauer-Kahan, Senator Tom Umberg and Assemblymember Buffy Wicks.

The meeting to discuss the bills was at least the sixth trip by de Graaf or other EU officials to Sacramento in two months. EU officials who helped write the AI law and European Commission Vice President Josep Fontelles have also made visits to Sacramento and Silicon Valley in recent weeks.

This week, EU leaders concluded a multi-year process by passing the Artificial Intelligence Act, which regulates the use of artificial intelligence across the bloc’s 27 member states. It bans emotion recognition in schools and workplaces, prohibits social scoring systems such as those used in China to reward or punish certain kinds of behavior, and restricts some forms of predictive policing. The Artificial Intelligence Act also applies a high-risk label to AI used in health care, employment and government benefits.

There are some notable differences between EU law and what California lawmakers are considering. The Artificial Intelligence Act regulates how law enforcement can use AI, while the Bauer-Kahan bill does not, and the Wicks watermarking bill may prove stronger than the Artificial Intelligence Act’s requirements. Still, the California bills and the Artificial Intelligence Act both take a risk-based approach to regulation, both call for further testing and evaluation of forms of AI considered high risk, and both call for watermarking of AI-generated outputs.

“If you take these three bills together, we probably cover 70-80% of what the AI bill covers,” de Graaf said. “It’s a very solid relationship that we both benefit from.”

During the meeting, de Graaf said, the discussions covered draft AI laws, bias and risk assessments for AI, advanced AI models, the status of watermarking for AI-generated images and videos, and which issues should be prioritized. The San Francisco office works under the authority of the EU Delegation in Washington to promote EU technology policy and strengthen cooperation with influential technology and political figures in the United States.

AI can make predictions about people, such as what movies they want to watch on Netflix or the next words in a sentence, but without high standards and constant testing, AI making critical decisions about people’s lives could automate discrimination. Artificial intelligence has a history of harming people of color, such as when police use facial recognition or when algorithms decide who gets an apartment or a home loan. The technology has been shown to have the potential to negatively affect the lives of nearly everyone, including women, people with disabilities, the young and the old, and people claiming government benefits.

In a recent interview with KQED, Umberg talked about the importance of finding balance, emphasizing that “We can get it wrong.” Too little regulation could have disastrous consequences for society, and too much could “strangle the artificial intelligence industry” that calls California home.

The coordination between California and EU officials aims to combine regulatory initiatives in two uniquely influential markets.

Gerard de Graaf, senior digital envoy to the U.S. and head of the European Union office in San Francisco. Photo via de Graaf’s X account. Illustration: Adriana Heldiz, CalMatters; iStock

Most of the top AI companies are based in California, and over the past eight months, San Francisco Bay Area companies have raised more money for AI investments than the rest of the world combined, according to Crunchbase, a startup tracker.

The General Data Protection Regulation, better known as GDPR, is the European Union’s best-known privacy legislation. It also gave rise to the term “Brussels effect,” describing how one jurisdiction’s law ends up shaping rules and corporate behavior in other countries. In this case, EU law forced tech companies to adopt stricter user protections if they wanted access to the region’s 450 million residents. The law went into effect in 2018, the same year California passed a similar law. More than a dozen U.S. states have since followed suit.

Definition of artificial intelligence

Coordination is necessary, de Graaf said, because technology is a global industry, and it is important to avoid conflicting policies that make it hard for companies to comply with regulations around the world.

One of the first steps to cooperation is to jointly define artificial intelligence to agree on what technology is covered by the law. De Graaf said his office worked with Bauer-Kahan and Umberg on the definition of artificial intelligence “because if we have very different definitions to start with, convergence or harmonization is almost impossible.”

Given the recent passage of the AI Act, the lack of federal action and the complexity of AI regulation, Senate Judiciary lawyers have held numerous meetings with EU officials and staff, Umberg told CalMatters in a statement. The California Senate Judiciary Committee’s definition of artificial intelligence is informed by multiple voices, including federal agencies, the Organization for Economic Co-operation and Development, and the EU.

“I firmly believe that we can learn from each other and regulate AI responsibly without harming innovation in this dynamic and rapidly changing environment,” Umberg told CalMatters in a written statement.

The three bills discussed with de Graaf in April were passed by their respective chambers this week. He expects the questions California lawmakers ask to become more specific as the bills get closer to passage.

During the current legislative session, California lawmakers have proposed over 100 bills to regulate artificial intelligence.

“I think the imperative now for the Legislature is to reduce the number of bills to a more manageable number,” he said. “I mean, there are over 50 of them, so we focused specifically on bills from just a few Assembly members or senators.”

The state agency also seeks to protect Californians’ privacy

Elected officials and their staff are not the only ones talking to EU officials. The California Privacy Protection Agency – the state agency charged with protecting people’s privacy and requiring companies to comply with deletion requests – also regularly talks to EU officials, including de Graaf.

Most states with privacy laws rely on state attorneys general to enforce them. California is the only state with an independent agency that has enforcement powers to inspect companies, impose fines or bring legal action, said the agency’s executive director, Ashkan Soltani. Key elements of EU privacy law influenced the shaping of California’s privacy law. De Graaf and Soltani testified about the similarities between California and EU definitions of artificial intelligence during a hearing before the Assembly’s privacy committee in February.

“The agency’s roots were largely inspired by the General Data Protection Regulation (GDPR),” Soltani said. “There is an interest and a purpose, and in fact our statute directs us to make sure, wherever possible, that our approach is harmonious with the frameworks in place in other jurisdictions, not just in the states but also internationally.”

Soltani was hired after the agency was created in 2021. He told CalMatters that international coordination plays a big part of that work. After hiring staff and lawyers, one of his first assignments was to join the Global Privacy Assembly, a group of 140 data protection authorities from around the world. California is the only U.S. state that is a member of the group.

Alignment is important for setting the rules of the road for businesses, but also for consumers trying to protect themselves and their communities in a digital world where boundaries are blurring.

“They don’t think about whether they’re doing business with a California company, a European company or an Asian company, especially if everything is in English, they just think they’re interacting online, so having a consistent protection framework ultimately benefits consumers,” Soltani said.

Like California lawmakers, the California Privacy Protection Agency is in the process of developing rules governing companies’ use of artificial intelligence and protections for consumers, students and workers. As with the Artificial Intelligence Act, the draft regulations require impact assessments. Its five-member board will consider adopting the regulations in July.

The last day on which California lawmakers can pass a bill this legislative year is August 31.