Colorado AI Regulations: What Health Care Deployers and Developers Need to Know | Mintz – Health Care Viewpoints

As the first state law to regulate the effects of artificial intelligence (AI) systems, Colorado SB24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act), has generated a great deal of cross-industry interest, and with good reason. Somewhat similar to the risk-based approach taken by the European Union (EU) under the EU Artificial Intelligence Act, the Act seeks to regulate developers and deployers of AI systems, which it defines as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

The Act is set to take effect on February 1, 2026, and will be limited to businesses located in Colorado, entities doing business in Colorado, or entities whose business involves Colorado residents. It focuses primarily on regulating “high-risk” AI systems, defined as any AI system that, when deployed, makes, or is a substantial factor in making, a consequential decision. A “consequential decision” is a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of, among other things, health care services.

Requirements for deployers and developers

Both developers and deployers of high-risk AI systems must use reasonable care to protect consumers from any known or reasonably foreseeable risks of “algorithmic discrimination.” The Act also imposes specific obligations on developers of high-risk AI systems, including disclosing certain information to deployers; publishing summaries of the types of high-risk AI systems the developer offers and how it manages any reasonably foreseeable risks; and disclosing to the Colorado Attorney General (AG) “any known or reasonably foreseeable risks” of algorithmic discrimination arising from the intended uses of a high-risk AI system within 90 days of discovery. Deployers will need to implement risk management policies and programs to govern their deployment of high-risk AI systems; conduct impact assessments for those systems; notify consumers when a high-risk AI system is used to make, or is a substantial factor in making, a consequential decision about them; and notify the AG within 90 days of discovering that a high-risk AI system has caused algorithmic discrimination.
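
For teams that want to track these obligations alongside an inventory of their AI systems, the following is a minimal sketch, in Python, of a record a deployer might keep for each high-risk system. The class, field names, and deadline helper are illustrative assumptions for organizing the obligations described above, not language from the Act.

    from dataclasses import dataclass, field
    from datetime import date, timedelta
    from typing import List, Optional

    # Hypothetical compliance record a deployer might keep for each high-risk
    # AI system; field names are illustrative, not drawn from the Act's text.
    @dataclass
    class HighRiskSystemRecord:
        system_name: str
        consequential_decision: str                   # e.g., "denial of health care services"
        risk_management_policy_in_place: bool = False
        last_impact_assessment: Optional[date] = None
        consumer_notice_provided: bool = False
        discrimination_discovery_dates: List[date] = field(default_factory=list)

        def ag_notice_deadline(self) -> Optional[date]:
            """Most recent discovery of algorithmic discrimination plus the
            Act's 90-day window for notifying the AG (illustrative only)."""
            if not self.discrimination_discovery_dates:
                return None
            return max(self.discrimination_discovery_dates) + timedelta(days=90)

A record like this only organizes the obligations listed above for internal tracking; it is not a substitute for the risk management policies, impact assessments, and notices the Act itself requires.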

Scope and exceptions for health care services

The Act defines “health care services” by referring to the definition set forth in the federal Public Health Service Act. While this is a broad definition that can cover a wide range of services, the drafters also accounted for AI systems that do not present high risk, and for work that has already been or is being done at the federal level, through exceptions that apply to certain health care entities.

HIPAA Covered Entities

The Act will not apply to deployers, developers, or other persons that are HIPAA covered entities providing health care recommendations that: (i) are generated by an AI system; (ii) require a health care provider to take action to implement the recommendations; and (iii) are not considered to be high risk (as defined in the Act). This exception appears to be directed at health care providers because it requires a health care provider to take action to implement the recommendations generated by the AI system, rather than covering recommendations that the system implements automatically. However, the exception is not limited to providers, as covered entities may be health care providers, health plans, or health care clearinghouses. HIPAA covered entities have a number of potential uses for AI systems, including, but not limited to, disease diagnosis, treatment planning, clinical outcome prediction, outreach, diagnostics and imaging, clinical research, and population health management. Examples of AI system uses that are not “high risk” in the health care context, and that therefore could potentially fall within this exception, include administrative tasks such as clinical documentation and note-taking, billing, and appointment scheduling.

FDA-approved systems

Deployers, developers, and other persons that deploy, develop, put into service, or substantially modify high-risk AI systems that have been approved, authorized, certified, cleared, developed, or granted by a federal agency such as the Food & Drug Administration (FDA) are not required to comply with the Act. Because the FDA has extensive experience with AI and machine learning (ML) and had authorized 882 AI/ML-enabled medical devices as of May 13, 2024, this is an expected and welcome clarification for those who have already developed, or are working with, FDA-authorized AI/ML-enabled medical devices. Additionally, deployers, developers, or other persons conducting research to support an application for approval or certification from a federal agency such as the FDA, or research to support an application otherwise subject to review by the agency, are not required to comply with the Act. AI systems are commonly used in drug development, and to the extent those activities are approved by the FDA, the development and deployment of AI systems under those approvals are not subject to the Act.

Compliance with ONC standards

Also exempt from the Act’s requirements are deployers, developers, and others that deploy, develop, put into service, or intentionally and substantially modify a high-risk AI system that is in compliance with standards established by federal agencies such as the Office of the National Coordinator for Health Information Technology (ONC). This exemption helps avoid potential regulatory uncertainty for certified health IT developers and for health care providers using certified health IT, and it is consistent with ONC’s HTI-1 Final Rule, which imposes certain disclosure and risk management obligations on developers of certified health IT. Not all developers of high-risk health care AI systems are developers of certified health IT, but the vast majority are, which is an important distinction for developers that already meet, or are working to meet, the HTI-1 Final Rule requirements.

Key takeaways

Taking a risk-based approach to evaluating the use of an AI system may be a new practice for developers and deployers directly or indirectly involved in the delivery of health care services. In particular, deployers will want to have processes in place to determine whether they are required to comply with the Act and to document the results of any related analyses. Those analyses will include determining whether an AI system serves as a substantial factor in making a consequential decision (and is therefore “high risk”) with respect to the provision of health care services. Deployers that determine they are using high-risk AI systems, and that none of the exceptions described above apply, will need to begin taking steps such as developing the required risk management policies and procedures, conducting impact assessments for those systems, and setting up mechanisms to notify consumers and the AG. It will likely take some organizations time to integrate these new responsibilities into their relevant policies, procedures, and risk management programs, and they will want to make sure the right people are included in these conversations and decisions.
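
As a rough illustration of that triage, the sketch below documents whether the Act’s deployer obligations appear to attach to a given system. The function and flag names are hypothetical, and the legal analysis behind each flag is exactly the work that requires careful review and documentation.

    def act_obligations_appear_to_apply(
        does_business_in_colorado: bool,
        substantial_factor_in_consequential_decision: bool,
        hipaa_recommendation_exception: bool,    # covered entity; provider must act; not high risk
        fda_authorization_or_research_exception: bool,
        onc_standards_exception: bool,
    ) -> bool:
        """Hypothetical triage: True when a system looks like an in-scope
        high-risk AI system with no exception applying; document the result
        either way."""
        if not does_business_in_colorado:
            return False
        if not substantial_factor_in_consequential_decision:
            return False  # not a "high-risk" system under the Act
        return not (
            hipaa_recommendation_exception
            or fda_authorization_or_research_exception
            or onc_standards_exception
        )

    # Example: a Colorado health system whose AI tool is a substantial factor
    # in decisions about the provision of health care services, with no
    # exception applying, would appear to be in scope.
    print(act_obligations_appear_to_apply(True, True, False, False, False))  # True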
