
Key concepts of the Colorado Artificial Intelligence Act

The Colorado AI Act (CAIA) will go into effect on February 1, 2026, making it the first comprehensive, risk-based artificial intelligence (AI) regulation to take effect in the United States. The legislation regulates the use of AI systems in certain applications by private-sector developers and implementers, with the stated aims of ensuring transparency, consumer rights, and accountability.

Scope

CAIA primarily regulates the development and implementation of artificial intelligence systems in specific applications; namely, what CAIA defines as “high-risk artificial intelligence systems.” According to the Colorado General Assembly’s summary of the bill: “The bill requires a developer of a high-risk artificial intelligence system (high-risk system) to exercise reasonable care to avoid algorithmic discrimination in a high-risk system. There is a rebuttable presumption that the developer exercised reasonable care if the developer complied with the provisions specified in the bill.”

Key definitions

Algorithmic discrimination: The use of an artificial intelligence system that results in unlawful differential treatment or an impact that disfavors an individual or group of individuals on the basis of a status protected under Colorado or federal law. Algorithmic discrimination does not, however, include “expanding a pool of applicants, customers, or participants for the purpose of increasing diversity or redressing historical discrimination.”

High-risk artificial intelligence system (HRAIS): Any artificial intelligence system that, when implemented, makes, or is a substantial factor in making, a consequential decision.

Consequential decision: A decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, educational opportunities, employment opportunities, financial or lending services, essential government services, health care services, housing, insurance, or legal services.

Developer: Any person or entity doing business in Colorado that develops or significantly modifies an artificial intelligence system.

Implementer: Any person or entity doing business in Colorado that implements a high-risk artificial intelligence system.

Substantial factor: A factor that is generated by an artificial intelligence system and that assists in making, or is capable of altering the outcome of, a consequential decision.

Key provisions of the law

Algorithmic discrimination: CAIA prohibits the use of high-risk artificial intelligence systems in a manner that results in unlawful differential treatment based on protected classes.

Risk management: CAIA requires implementers to adopt and regularly update risk management policies and programs to reduce the risk of algorithmic discrimination.

Transparency and accountability: CAIA’s goal is to ensure that both developers and implementers remain transparent about the use and impact of high-risk AI systems.

Responsibilities of developers and implementers

Generally speaking, CAIA imposes the following obligations on developers and implementers:

Duty of care: Both developers and implementers must exercise reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.

Documentation and disclosure: Developers must provide implementers with detailed documentation, including intended uses, known risks, data summaries, and mitigation measures. This documentation must also be made available upon request by the Attorney General.

Public statements: Implementers must maintain clear summaries of their high-risk AI systems on their websites, including their risk management strategies for algorithmic discrimination, as well as “details” of the nature, source, and extent of the information collected and used by the implementer. The implementer must update this information periodically.

Impact assessments: Implementers must conduct annual impact assessments detailing the purpose of the AI system, risks of algorithmic discrimination, data use, performance metrics, and post-deployment monitoring. These assessments must be kept for at least three years.

Consumer rights

CAIA provides consumers with the following rights:

Notice before implementation: Consumers must be informed when a high-risk AI system is used to make decisions about them. Interestingly, the implementer must provide notice “no later than the time the HRAIS is implemented,” yet the notice must inform the consumer “that the implementer has implemented HRAIS.”

Right to explanation: If a high-risk AI system makes an adverse decision, consumers have the right to an explanation detailing the system’s role in the decision, the data used, and its sources.

Right to correction and appeal: Consumers can correct any inaccurate personal data used by the AI system and, if possible, appeal the decision for human review.

Form of notice: Notice must be provided directly to the consumer, in plain language, in all languages in which the implementer conducts its ordinary business, and in a format accessible to consumers with disabilities.

Enforcement and compliance

Attorney General: The Attorney General has exclusive authority to enforce the CAIA, including making regulations and ensuring compliance.

Incident reporting: Developers and implementers must report any discovered algorithmic discrimination to the Attorney General without undue delay.

Defenses and safe harbors: Developers and implementers may raise compliance with nationally recognized risk management frameworks as a defense against enforcement actions.

Exclusions and special provisions

Federally approved systems: Artificial intelligence systems approved by federal agencies, such as the U.S. Food and Drug Administration or the Federal Aviation Administration, are exempt from certain CAIA requirements.

Trade secret: The CAIA includes an exception providing that notice and disclosure requirements do not require an implementer to disclose trade secrets or information protected from disclosure under state or federal law. However, if an implementer withholds information under this exception, it must “notify the consumer and provide a basis for the withholding.”

Small businesses: Small businesses (those employing 50 or fewer full-time employees) are exempt from maintaining a risk management program and conducting impact assessments, but must still comply with the duty of care and consumer notification requirements.

Given the scope and breadth of CAIA, it is likely to impose significant compliance costs and to prompt similar legislation in other U.S. states unless or until a federal law is passed that expressly preempts such laws. Given the current state of play of federal legislation, that is unlikely to happen soon.

Johnathan H. Taylor, Joseph “Joe” Damon, Leslie Green, Jackson Parese, Richard B. Levin, Kevin Tran and Bobby Wenner contributed to this article.