
Colorado's New AI Consumer Protection Act and Its Consequences for Health Care

As the first state law to regulate the outcomes of artificial intelligence systems (AI systems), Colorado's SB24-205, Concerning Consumer Protections in Interactions with Artificial Intelligence Systems (the Act), has generated significant cross-industry interest, and for good reason. In some respects similar to the risk-based approach taken by the European Union (EU) in the EU AI Act, the Act aims to regulate the activities of developers and deployers of AI systems, which the Act defines as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The law is set to go into effect on February 1, 2026, and is limited in scope to entities operating in Colorado, doing business in Colorado, or whose operations affect Colorado residents. It primarily focuses on regulating “high-risk” AI systems, defined as any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. A “consequential decision” means a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, among other things, health care services.

Requirements for developers and deployers

Both developers and deployers of high-risk AI systems must exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.”(1) The Act also imposes certain obligations on developers of high-risk AI systems, including disclosing information to deployers; publishing summaries of the types of high-risk AI systems the developer has developed and how it manages foreseeable risks; and disclosing to the Colorado Attorney General (AG) “any known or reasonably foreseeable risks” of algorithmic discrimination arising from the intended uses of the high-risk AI system within 90 days of discovery. Deployers are required to implement risk management policies and programs to govern the deployment of high-risk AI systems; complete impact assessments for high-risk AI systems; provide notices to consumers when high-risk AI systems are deployed to make, or play a substantial role in making, consequential decisions about them; and notify the AG within 90 days of discovering that a high-risk AI system has resulted in algorithmic discrimination.

Coverage of and exceptions for health care services

The Act defines “health care services” by reference to the definition in the Public Health Service Act.(2) While this is a broad definition that may cover a wide range of services, the drafters also built in exceptions that apply to certain health care providers, to systems that are not high risk, and to activities that are already regulated, or in the process of being regulated, by the federal government.

HIPAA Covered Entities

The Act will not apply to deployers, developers, or other persons who are HIPAA covered entities and provide health care recommendations that: (i) are generated by an AI system; (ii) require a health care provider to take action to implement the recommendations; and (iii) are not considered high risk (as defined in the Act). This exception appears to be directed at health care providers because it requires the health care provider to take action to implement the recommendations made by the AI system, rather than covering recommendations that the system implements automatically. However, the scope is not limited to providers, as covered entities may include health care providers, health plans, or health care clearinghouses. There are a number of possible uses of AI systems by HIPAA covered entities, including, but not limited to, disease diagnosis, treatment planning, clinical outcome prediction, outreach, diagnostics and imaging, clinical research, and population health management. Depending on the circumstances, many of these uses could be considered “high risk.” Examples of AI system uses in health care that are not “high risk” and therefore could potentially qualify for this exception include administrative tasks such as clinical documentation and note-taking, billing, and appointment scheduling.

FDA-approved systems

Deployers, developers, or others who deploy, develop, put into service, or intentionally and substantially modify high-risk AI systems that have been approved, authorized, certified, cleared, developed, or granted by a federal agency such as the Food and Drug Administration (FDA) are not required to comply with the Act. Because the FDA has deep expertise in AI and machine learning (ML) and has, as of May 13, 2024, authorized 882 AI/ML-enabled medical devices, this is a welcome clarification for entities that have already developed or are working on FDA-authorized AI/ML-enabled medical devices. Additionally, deployers, developers, or others conducting research to support an application for approval or certification by a federal agency such as the FDA, or research to support an application otherwise subject to review by the agency, are not required to comply with the Act. The use of AI systems is common in drug development and, to the extent these activities are approved by the FDA, the development and deployment of AI systems pursuant to those approvals is not subject to the Act.

Compliance with ONC standards

Also exempt from the requirements of the Act are deployers, developers, and others who deploy, develop, put into service, or intentionally and substantially modify a high-risk AI system that complies with standards established by federal agencies such as the Office of the National Coordinator for Health Information Technology (ONC). This exemption helps avoid potential regulatory uncertainty for certified health IT developers, and for health care providers using certified health IT, under ONC's HTI-1 Final Rule, which imposes certain disclosure and risk management obligations on developers of certified health IT. Not all developers of high-risk AI systems in health care are developers of certified health IT, but the vast majority are, which is an important distinction for developers that already comply, or are working toward compliance, with the HTI-1 Final Rule.

Key takeaways

Using a risk-based approach to reviewing the use of an AI system may be a new practice for developers and deployers directly or indirectly involved in the delivery of health care services. In particular, deployers will want to have processes in place to determine whether they are required to comply with the Act and to document the results of any relevant analyses. These analyses will include determining whether their AI system serves as a substantial factor in consequential decision making (and therefore whether the system is “high risk”) with respect to the delivery of health care services. If they determine that they are using high-risk AI systems and none of the above exceptions apply, they will need to begin taking actions such as developing the required risk management policies and procedures, conducting impact assessments for these systems, and setting up mechanisms to notify consumers and the AG. It will likely take some time for organizations to integrate these new responsibilities into their relevant policies, procedures, and risk management systems, and they will want to ensure that they include the right people in these conversations and decisions.

(1) The Act defines algorithmic discrimination as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status,” or any other classification protected under Colorado or federal law.

(2) The PHSA defines health care services as “any services provided by a health care professional or any person working under the supervision of a health care professional that relate to: (A) the diagnosis, prevention, or treatment of any disease, illness, or impairment of a person; or (B) the assessment of, or care for, the health of people.”