
Solondais

Where news breaks first, every time


Oxford summit focuses on regulating generative AI

I spent a few days last week at the University of Oxford in the UK, where I spoke at and attended the Oxford Generative AI Summit. This multi-stakeholder event brought together elected and appointed officials from the UK and other countries, along with academics, executives and scientists from technology and media companies.

Other speakers included Michael Kratsios, who served as U.S. chief technology officer during the Trump administration; Michael Bronstein, DeepMind Professor of Artificial Intelligence at the University of Oxford; Dame Wendy Hall DBE, a professor of computer science who serves on the United Nations’ high-level advisory body on artificial intelligence; and Baroness Joanna Shields OBE, who served as UK Minister for Internet Safety and Security under David Cameron and Theresa May. There were also executives from Google, TikTok, OpenAI and other tech companies.

GenAI explained

As a reminder, generative AI (or GenAI) is artificial intelligence capable of creating “original” content, including text, images, videos, audio and software code, in response to a prompt, request or question entered by a human. It has been around for several years but has grown in prominence recently thanks to major players like OpenAI, Google, Microsoft and Meta devoting massive resources to its development. I put “original” in quotes because, although the AI model generates the content, it is based on training data gathered online and from other sources. So even though the wording is original, the information comes from many other places. Of course, the same is true of human-created content, but reputable journalists and academics usually cite their sources, which is not necessarily the case with AI systems.
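The point that “original” output is really a recombination of training data can be seen in miniature with a toy example. The sketch below is not a real GenAI model, just a two-word Markov chain (my own illustrative construction, far simpler than the neural networks behind ChatGPT): every word it emits was seen in its training text, yet the sequences it produces can be new.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` consecutive words to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=12, seed=0):
    """Walk the chain to emit 'original' text built only from observed words."""
    rng = random.Random(seed)
    state = rng.choice(list(chain))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:  # dead end: no continuation was ever observed
            break
        out.append(rng.choice(followers))
    return " ".join(out)

training = ("the summit focused on regulating generative AI and "
            "the summit focused on the risks of generative AI")
print(generate(build_chain(training)))
```

The generated sentence may never appear verbatim in the training text, but every word and every two-word transition does, which is the sense in which such systems remix rather than invent their source material.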

Regulation is needed

My panel focused on AI regulation. I was joined by Markus Anderljung of the Centre for the Governance of AI, Rafaela Nicolazzi of OpenAI, Joslyn Barnhart, senior research scientist at Google DeepMind, and moderator Keegan McBride of the Oxford Internet Institute.

There was near-unanimous consensus among my panel and other speakers that regulation of AI is inevitable and necessary. Most people seemed to agree with my comment that regulation should be targeted and nuanced, avoiding negative consequences without hindering the potential benefits of generative AI, which, at least as far as consumer products are concerned, is still in its infancy. It must focus on real harms and be flexible enough to accommodate inevitable technological change. As we’ve seen over the past few decades, the tech industry evolves faster than governments, so it is important that governments provide general guidelines without trying to micromanage the technology.

The risk of conflicting laws across jurisdictions

A few speakers expressed concern about the balkanization of AI regulation as several countries and U.S. states consider or adopt laws that sometimes conflict with regulations in other jurisdictions.

In an interview at the conference, Linda Lurie, who worked in the Biden White House’s Office of Science and Technology Policy and now works at WestExec Advisors, told me: “What’s going to happen is that any company will have to comply with the strictest regulations, which is rather unfair and undemocratic.” She argued that many jurisdictions already have laws that could protect against the misuse of AI. “We don’t need to put the AI stamp on every other law in a country. Look at what is already on the books to see where the gaps are, and fill them at a harmonized level. That requires input not only from governments but also from businesses and civil society. Only then can you get real regulation that will be effective and will not kill AI.”

Risks

A number of people expressed concern that large companies, primarily based in the United States, are dominating generative AI in a way that could exclude other countries, notably in Africa, Latin America and other regions where the economy and technological infrastructure are not as developed as in the US, the UK and much of Europe.

The risks are not only the exclusion of these regions from the economic and social gains of GenAI, but also the biases that can be built into AI models, particularly those trained on internet data originating primarily from wealthier countries and from dominant groups within those countries. Don’t just take my word for it. ChatGPT itself admits: “Countries with less internet infrastructure or lower rates of digital content creation (e.g., in media, academia, or user-generated platforms) contribute less to the training data sets for AI models.” I guess I should be happy that even a robot can be self-critical when forced to question its own potential biases.

Optimism

Most speakers expressed cautious optimism. One British politician explained how generative AI can help level the playing field, not only for adults but also for young people. When I asked her whether she was concerned that big companies would dominate generative AI as they have dominated social media, search and other aspects of the internet, she expressed hope that regulation could prevent that from happening. I hope she’s right, but I’m not convinced.

Although many participants and speakers expressed concerns about negative consequences, including job disruption, bias, misinformation, deepfakes, privacy and security issues, lack of accountability and intellectual property disputes, almost everyone agreed that generative AI can offer humanity enormous benefits, including potential economic growth.

Oxford Ph.D. student Nathan Davies, who moderated the event’s panels, said: “It’s rare to bring policy makers, academics and business people together in one space.”

Even though disagreements were expected, I came away with a strong sense of hope about some shared values, which is impressive given that conference participants ranged from Donald Trump’s former CTO to current Labour Party lawmakers.

After the conference, I walked around the 1,000-year-old campus. I’m sure its founders had no idea about artificial intelligence, but they helped lay the foundation for the human intelligence that got us to this point.