
New surveys suggest that CEOs of large companies are rethinking digital regulation

At a pivotal moment for new forms of artificial intelligence – and for the Internet itself – an overwhelming number of CEOs appear ready to re-evaluate how these technologies are governed. Who would lead that re-evaluation, however, and how it would happen, remains an open question.

That’s the key takeaway from Tuesday’s online forum hosted by Jeffrey Sonnenfeld, the longtime Yale School of Management professor and convener of chief executives, to commemorate the 50th anniversary of the birth of the Internet.

In three striking survey results, the 200 CEOs of large companies in the audience expressed remarkable unanimity about reassessing the legal protections that underpin the success of “Big Tech” platforms like Google, Facebook and TikTok.

About 85 percent of respondents said they strongly agreed or agreed with the statement: “I support stronger government regulation of social media platforms.” A full 100 percent of respondents supported the Kids Online Safety Act sponsored by Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT).

Perhaps most important, on the question of whether tech companies should remain protected under Section 230 of the landmark Communications Decency Act of 1996 – the provision that gives platforms a “safe harbor” from liability for what their users post – some 96 percent of CEOs said they believe the law is “outdated” and requires reconsideration by Congress.

The event, a biannual gathering of CEOs, authors, investors, scientists, policymakers and technology pioneers, focused much of its attention on the question of how best to regulate – and not regulate – the digital world as it enters a new phase of its existence.

But who should undertake a reassessment of how the government patrols cyberspace, how it should proceed, and how broad it should be remains a very open question – as it has been for much of the last 50 years – one that increasingly turns on data ownership and protection and on the rapid development of generative artificial intelligence.

Anne Neuberger, deputy national security adviser for cybersecurity and emerging technologies, reminded the group that, at least when it comes to cybersecurity, the private sector must work more closely with the federal government to combat adversaries in an era of rising threats, if only because most critical infrastructure in the US is in private hands. She cited the example of a partnership with Google and Microsoft that provides free cybersecurity training to 1,800 hospitals in rural America – the kind of effort, she said, that should be expanded in the era of artificial intelligence.

“We must ensure that before AI is deployed in critical water systems, pipelines and railways, we build in safeguards such as transparency about the data models are trained on, red-teaming models appropriately, keeping humans involved in key decisions, and making sure that before operational systems are connected to AI models, we have tested them and built in guardrails as well,” she said. “So in some ways, cybersecurity is really sobering, and social media is sobering, when we think about regulation of artificial intelligence and the need for responsible regulation to ensure that as a country we can benefit from the enormous innovations artificial intelligence will bring – but that we not wait and bolt on controls later, which is more expensive and more difficult.”

Tom Bossert, President Donald Trump’s former homeland security adviser, largely agreed, though he noted that new and proposed cybersecurity regulations for companies have done little to stop “coordinated nation-state intrusions” into U.S. companies. “I already see compliance costs going up,” he said, “and I don’t see them translating into greater safety outcomes.”

The most skeptical voice of the morning session came from longtime Silicon Valley investor Roger McNamee, who has grown increasingly concerned about the tech industry’s ability to police itself in the decades since he was an early investor in Facebook. Over the past year, he has argued that the unregulated gold rush around generative AI is not only a terrible long-term bet for investors that makes little economic sense, but could also pose risks to society that potentially dwarf the unforeseen challenges created by the rise of social media.

“My biggest warning to all the CEOs in this conversation is to stop,” he said. “There is no rush to deploy artificial intelligence. In fact, after some analysis, one could reasonably conclude that the technology is not yet ready for prime time, and applying it to productivity-enhancing applications in corporations may actually produce the opposite results, similar to what we have seen with other Internet technologies. So I encourage everyone to realize that not only is this battle not over, but we are barely joining it.”