
AI regulations are taking shape around the world

New regulations regarding artificial intelligence (AI) are being introduced around the world.

Colorado has become the first U.S. state to regulate artificial intelligence, aiming to prevent harm and discrimination to consumers. New requirements for high-risk AI systems will come into force in 2026.

Meanwhile, the EU’s comprehensive Artificial Intelligence Act will soon come into force and is expected to influence regulation worldwide, while Japan is considering requiring major AI developers to disclose information for security and integrity reasons.

Colorado becomes the first state to regulate artificial intelligence

Colorado became the first U.S. state to create a regulatory framework for artificial intelligence last week when Gov. Jared Polis signed a bill establishing guardrails for companies developing and using the technology.

The law, which will come into force in 2026, aims to prevent consumer harm and discrimination from artificial intelligence systems increasingly deployed in sensitive areas such as employment, banking and housing. It imposes new requirements on creators and users of “high-risk” AI.

Polis, a Democrat, expressed reservations when he signed the bill into law.

“While the guardrails, long implementation timeline, and limitations contained in the final version are enough for me to sign this legislation into law today, I am concerned about the impact this law may have on an industry that drives critical technological advances,” he wrote in a signed statement.

The governor expressed hope that the conversation about artificial intelligence regulation will continue at the state and federal levels, warning against a patchwork of state policies. A similar bill in Connecticut was not passed during that state’s last legislative session.

The Colorado bill was sponsored by Democratic lawmakers and passed in the final days of the legislative session, which ended on May 8. It comes as the rapid development of artificial intelligence – from OpenAI’s ChatGPT to Google’s Gemini – triggers a global reckoning with the technology.

EU passes groundbreaking artificial intelligence legislation

Pioneering European Union legislation regulating artificial intelligence has cleared the final hurdle, paving the way for the law to come into force within weeks.

The Council of Ministers, one of the EU’s central legislative bodies, approved the Artificial Intelligence Act after the measure was adopted by the European Parliament in March. The legislation, dubbed the world’s first comprehensive artificial intelligence law, comes three years after it was proposed by the European Commission.

“The EU Artificial Intelligence Act does not randomly select specific areas of AI, such as the regulation of generative AI, but rather takes a comprehensive approach, trying to set the stage for developers, implementers and those affected by the use of AI,” Nils Rauer, an expert in technology law at Pinsent Masons in Frankfurt, Germany, said in a statement.

The Artificial Intelligence Act will enter into force 20 days after its publication in the Official Journal of the EU, although most of its provisions will not begin to apply for up to two years. It sets stringent requirements for high-risk artificial intelligence systems and completely prohibits certain applications.

Experts said the legislation is likely to have an impact on artificial intelligence regulation in other jurisdictions.

“The introduction of data governance for training, validation and test datasets for high-risk AI systems is a positive development, as it will impact the maturity and hygiene of those using software and AI in their operations,” said Wouter Seinen, an Amsterdam-based lawyer at Pinsent Masons.

The passage of the Artificial Intelligence Act comes as concerns about the technology have increased following the release of chatbots such as ChatGPT. Policymakers struggled with balancing innovation and mitigating potential risks.

“EU lawmakers decided to combine two structural concepts in one legal act,” Rauer said. “There is a risk-based approach to AI systems and a separate approach that applies to general-purpose AI. Time will tell how these two concepts will work together.”

The European Union has taken a significant step forward in regulating artificial intelligence by approving the EU’s Artificial Intelligence Act, British artificial intelligence lawyer Matt Holman told PYMNTS.

“The approval of the EU AI Act is an extremely important step forward in AI regulation because it is unlike any other law in the world,” Holman said. “For the first time, it creates a detailed regulatory regime for artificial intelligence.”

The new law, which is technology- and sector-agnostic, will require anyone who develops, creates, uses or resells artificial intelligence in the EU to comply with its rules. The law aims to control artificial intelligence during its development, training and implementation.

US tech giants are watching the development of this law closely, especially as they have committed significant resources to publicly available generative artificial intelligence systems that will need to comply with the new regulations, which Holman says are quite burdensome in places.

“They will need to ensure AI literacy among their employees and transparency with users about what AI does and how it uses their data,” he said.

The implementation of the law will be staggered, with different provisions entering into force at different stages. Rules banning prohibited AI practices will come into force first, followed by rules on penalties, transparency, AI literacy and CE-marking obligations. Finally, regulations on high-risk artificial intelligence systems will take effect. AI products considered high risk include tools used in recruitment decisions or in law enforcement.

Holman said the law includes GDPR-style penalties, with the highest tier reaching 35 million euros ($37.9 million) or 7% of global turnover for using prohibited AI. Companies violating procedural rules face fines of up to 15 million euros ($16.3 million) or 3% of global turnover, and providing misleading information can result in fines of up to 7.5 million euros ($8.1 million) or 1% of global turnover.
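The article does not spell out how the fixed amounts and the turnover percentages interact; in GDPR-style regimes the applicable ceiling for companies is typically the higher of the two. The following is a minimal sketch of that arithmetic under that assumption only; the tier names and the max_fine_eur function are hypothetical, for illustration, and the Act itself defines the actual rules.

# Hypothetical sketch of GDPR-style fine ceilings as described above.
# Assumes each tier caps the fine at the greater of a fixed amount and a
# share of worldwide annual turnover; consult the Act for the actual rules.

TIERS = {
    "prohibited_ai":   (35_000_000, 0.07),  # 35M EUR or 7% of turnover
    "procedural":      (15_000_000, 0.03),  # 15M EUR or 3% of turnover
    "misleading_info": (7_500_000, 0.01),   # 7.5M EUR or 1% of turnover
}

def max_fine_eur(violation: str, worldwide_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a given violation tier."""
    fixed_cap, turnover_share = TIERS[violation]
    return max(fixed_cap, turnover_share * worldwide_turnover_eur)

# Example: a company with 1 billion euros in annual turnover that uses
# prohibited AI faces up to max(35M, 70M) = 70 million euros.
print(f"{max_fine_eur('prohibited_ai', 1_000_000_000):,.0f}")  # 70,000,000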

Japan is considering legal requirements for AI developers to disclose information

According to the Japanese news agency Kyodo News, the Japanese government is considering requiring major artificial intelligence developers to disclose certain information as part of its basic principles for regulating artificial intelligence.

According to the draft policy, the government is also examining controls on artificial intelligence from a security perspective, in line with future technological developments and international discussions. The draft highlights the importance of a transparent approach to ensuring fairness and the need to regulate artificial intelligence.

The draft also calls on the government to consider the types of regulation needed to control artificial intelligence systems that could be linked to crimes and human rights violations.

Additionally, the government is expected to discuss creating a framework requiring major AI operators whose systems significantly impact society to make certain adjustments and disclose appropriate information. The purpose of this policy is to ensure that developers share security information with the government.