
The global AI landscape is changing as regulations evolve

The global artificial intelligence (AI) landscape is undergoing significant change as regulators grapple with the technology’s rapid advances.

As the United States and Europe consider tightening regulations on artificial intelligence, Argentine President Javier Milei is positioning his country as a potential haven for technology investment. Meanwhile, the U.S. legal system is treading cautiously, with federal appeals courts hesitant to adopt AI-specific rules.

Industry leaders are also calling on the U.S. Food and Drug Administration (FDA) to strike a balance in its approach to regulating artificial intelligence in the pharmaceutical and medical device sectors.

Regulatory changes could drive AI investment to Argentina

After six months in office, President Milei is seizing on global regulatory shifts to position Argentina as the world’s fourth major artificial intelligence hub. According to a report by the Financial Times, Milei’s economic adviser, Demian Reidel, highlighted Argentina’s potential as a strategic destination for technology investment, given increasing regulatory pressure in the U.S. and Europe.

Reidel, who has organized Milei’s recent meetings with tech giants such as OpenAI, Google, Apple and Meta, said tight regulations in other regions make Argentina an attractive alternative.

“Extremely restrictive” regulations “have killed AI in Europe,” Reidel said. He added that discussions taking place in the US, particularly in California, showed that US lawmakers could follow a similar path, further encouraging companies to seek more favorable terms.

In May, Milei and Reidel held private meetings in California with key industry figures, including OpenAI’s Sam Altman and Apple’s Tim Cook. They also attended a summit with AI investors and thinkers, including venture capitalist Marc Andreessen and sociologist Larry Diamond. Additionally, Milei met twice with Tesla CEO Elon Musk.

Filing a lawsuit? Better bring a human

In a move that could have set a digital precedent, the Fifth U.S. Circuit Court of Appeals in New Orleans has decided to keep its courtrooms exclusively human for now. As Reuters reported on Tuesday (June 11), the court opted not to adopt what would have been the country’s first rule governing lawyers’ use of generative artificial intelligence.

The proposed rule, introduced last November, would have required lawyers who file AI-generated documents – produced with tools such as OpenAI’s ChatGPT – to certify that the documents had been carefully reviewed for accuracy. Failure to comply could have resulted in sanctions or the striking of erroneous filings from the court record.

The court’s decision came after an outpouring of public comments, mostly from skeptical lawyers. The legal community has raised concerns about the reliability of AI, citing cases where AI “hallucinations” resulted in fictitious case citations.

Had the Fifth Circuit gone ahead, it would have been the only one of the 13 federal appeals courts to apply such a rule. Other federal appeals courts are also weighing AI rules, echoing the Fifth Circuit’s concerns.

Meanwhile, a recent survey by Thomson Reuters found that U.K. lawyers are divided on AI regulation: 44% of in-house lawyers want government oversight, while 50% prefer self-regulation. Law firms echo this divide, with 36% favoring regulation and 48% favoring a laissez-faire approach, leaving regulators in a difficult position.

Experts are calling on the FDA to strike a balance in its AI regulations

Industry leaders at the RAPS Regulatory Intelligence Conference emphasized the need for a balanced approach in future FDA AI regulations, advocating for flexibility and collaboration over rigid rules, Reguly News reported on Monday (June 10).

Moderated by Chris Whalley, director of regulatory intelligence at Pfizer, the panel included Bradley Thompson, a lawyer at the firm Epstein, Becker and Green; Sam Kay, vice president of pharmacy at AI health data company Basil Systems; Gopala Abbineni, director of global regulatory strategy at pharmaceutical company Bayer; and Elizabeth Rosenkrands Lange, head of U.S. global regulatory and science policy at EMD Serono/Merck. Together, they warned that overly prescriptive regulations could hamper innovation.

The panel highlighted the importance of clearly defining the goals of artificial intelligence in the pharmaceutical and medical device industries. Bayer’s use of AI was highlighted as an example of AI integration with medical devices and regulatory information. Merck’s AI tools and pilot projects were also noted, emphasizing the need for partnerships with suppliers given current technology limitations.

Thompson noted that AI’s ability to analyze massive volumes of information points to untapped data that could improve regulatory processes.

Opinions on AI readiness varied among panelists.

Some expressed skepticism about AI’s current capabilities and advised against large investments without clear goals, noting that such efforts often fail within months due to poor planning. Others were more optimistic, emphasizing AI’s ability to accelerate product development while cautioning that it is only a first step and needs further refinement.

The panel concluded with the consensus that precise goals and strategic investments are critical to realizing the full potential of AI in the pharmaceutical and medical device sectors, while successfully navigating the regulatory environment.