
Tech Matters: What could the EU AI law foretell?



Leslie Meredith

The European Union has taken a groundbreaking step by introducing the EU AI Act, the first legislation of its kind in the world. The act establishes comprehensive regulations for the development and implementation of artificial intelligence technologies. Its main goal is to ensure that AI systems used in the EU are safe and transparent and that they respect fundamental rights.

The law classifies AI systems into four risk levels: unacceptable, high, limited and minimal. Unacceptable AI systems, such as those used for social scoring by governments, are banned outright. The most famous example is China’s Social Credit System, which uses financial records, social behavior, and compliance with laws and regulations to assign points to citizens and organizations to assess their “trustworthiness.” Those with high scores may have easier access to loans and travel, while those with low scores may face travel bans, restricted access to social services, and public shaming.

High-risk systems, such as those used in critical infrastructure, healthcare, and law enforcement, are subject to strict requirements for transparency, oversight, and accountability. Facial recognition systems used to identify people in public spaces and systems that assess student performance and make decisions about students’ educational paths fall into this category. Critical infrastructure includes water supply and energy grid management systems, both essential to society.

Limited-risk AI systems, including chatbots and deepfakes, are subject to less stringent regulations but still have to meet certain transparency obligations: they must be clearly labeled or otherwise identified as AI-generated. At the lower end of the spectrum are minimal-risk systems, such as AI-enabled video games, inventory management systems, and email spam filters. Because they don’t interact directly with humans, or have very little impact when they do, these products can be freely developed and used.

Penalties for non-compliance can be staggering and are set according to the severity of the violation. Fines are expressed as a monetary amount or a percentage of the company’s global annual turnover, whichever is higher. At the highest level, violating the ban on unacceptable-risk systems can bring fines of up to €35 million or 7% of global annual turnover. Other violations, including failure to meet the requirements for high-risk systems, can result in fines of up to €15 million or 3% of annual turnover, while supplying incorrect or misleading information to regulators can cost up to €7.5 million or 1% of annual turnover.
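For readers who like to see the math, here is a minimal sketch in Python of how the “whichever is higher” rule plays out. The tier names and the helper function are illustrative inventions for this column, not anything prescribed by the act itself.

# Illustrative sketch of the AI Act's fine ceilings. The tier names
# and this helper are hypothetical, not part of the regulation.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),   # banned, unacceptable-risk uses
    "obligation_violation": (15_000_000, 0.03),  # e.g., high-risk requirements
    "incorrect_information": (7_500_000, 0.01),  # misleading regulators
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling: the fixed amount or the turnover
    percentage, whichever is higher."""
    fixed_amount, turnover_pct = FINE_TIERS[tier]
    return max(fixed_amount, turnover_pct * global_annual_turnover_eur)

# For a company with 2 billion euros in global turnover, 7% is 140
# million, which exceeds the 35 million floor, so that is the ceiling.
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0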

Like the EU’s General Data Protection Regulation (GDPR), the AI Act applies to any company doing business in the EU, meaning that U.S. companies will have to comply if they operate in European markets. With GDPR, we saw many companies adopt a single policy that meets EU requirements rather than maintain separate policies for different markets. You can expect the same thinking from tech companies like Alphabet (Google), Microsoft, Apple, and Meta as they work with EU regulators. As a result, we should see a standard level of transparency and privacy protection in these companies’ products.

While U.S. lawmakers are unlikely to pass AI regulations as stringent as those in the EU, that doesn’t mean the U.S. will remain a free-for-all. There are already ongoing discussions and proposals to regulate AI in the U.S., signaling a move toward more regulated and accountable AI innovation.

Utah lawmakers have been pioneers of AI legislation at the state level. On the same day the European Parliament passed the EU AI Act (March 13), Utah Governor Spencer Cox signed the state’s Artificial Intelligence Policy Act into law, which went into effect May 1 and was incorporated into Utah’s consumer protection laws. Key elements of the bill include establishing liability for insufficient or inadequate disclosure of generative AI (when consumers interact with a chatbot) and creating an Office of Artificial Intelligence Policy to administer the state’s AI learning laboratory program.

Companies or individuals regulated by Utah’s Division of Consumer Protection must tell a person that they are interacting with a chatbot, not a human, but only if the person asks. Health care companies are held to a higher standard: they must disclose, before the interaction begins, that the person will be communicating with a chatbot and not a real person.

The liability provision is important: it holds the company accountable if the chatbot makes a mistake, and we all know how generative AI can make mistakes. Remember, these large language models are trained to predict the next most likely word and can be prone to making things up, or “hallucinating.” If a covered company violates the law, penalties will be imposed. The Utah Division of Consumer Protection can issue administrative fines of up to $2,500 per violation, and if the company violates an administrative or court order, fines can be as high as $5,000 per violation.

AI-powered technology is evolving at a rapid pace, and we are pleased to see our legislators paving the way for a safer future for Utahns.

Leslie Meredith has been writing about technology for over a decade. As a mother of four, she prioritizes value, usability, and safety online. Have a question? Email Leslie at [email protected].


