Regulators are struggling to keep pace with the rapid and pervasive evolution of artificial intelligence. A major question persists: will AI regulation be managed at the state level, the federal level, or some combination of the two? Colorado staked out a state-level position in May 2024, when Governor Jared Polis signed Senate Bill 24-205, the Colorado Artificial Intelligence Act. That raises the question: what exactly is the Colorado AI Act, and what are its key takeaways for you?
The Colorado AI Act: A New Frontier in AI Regulation
Colorado is forging ahead with AI regulation through the Colorado Artificial Intelligence Act, effective February 1, 2026. It is one of the first comprehensive state laws in the U.S. to impose a duty of reasonable care on both developers and deployers of high-risk AI systems to protect consumers from algorithmic discrimination.
Why Colorado? The State’s Ambitious Stance
States like Colorado have begun enacting their own AI legislation, creating a regulatory patchwork. Colorado’s AI Act is part of a growing wave of state regulation, with other states such as Utah (through its AI Policy Act) and Texas (via TRAIGA) also passing laws focused on specific aspects of AI technology. This rise in state-level AI regulation has sparked debate over the future of AI regulation in the U.S.: will it remain a state-by-state patchwork, or will Congress establish a unified federal framework?
On a recent GRC & Me podcast episode, “CISO to CISO: Let’s Get Real About AI,” LogicGate CISO Nick Kathman and Anecdotes CISO Jake Bernardes dive into AI regulation and touch on the state level. Nick kicks off the discussion with the belief that most states will try to implement their own regulation. Get the full conversation here.
What Exactly Does the Colorado AI Act Aim to Do?
The goal of the Colorado AI Act is to protect consumers from algorithmic discrimination by requiring developers and deployers to use reasonable care. The act targets the unfair, biased, or unequal outcomes that can arise when algorithms learn from historical data containing human biases and discriminatory patterns, outcomes that can negatively affect opportunities such as employment.
The legal framework requires:
- Deployers to implement risk management programs, conduct annual impact assessments, and provide essential consumer rights, such as the ability to correct inaccurate personal data and appeal adverse decisions.
- Developers to publish publicly available statements summarizing their high-risk artificial intelligence systems and risk-mitigation strategies, and to proactively disclose any known or reasonably foreseeable risks of algorithmic discrimination to both the Attorney General and their deployers within 90 days of discovery. This mandated transparency is intended to safeguard AI usage and protect consumers.
Defining “High-Risk AI Systems”
The Colorado AI Act regulates only AI systems that pose significant risk, defined as “high-risk AI systems.” These are AI tools that make, or substantially influence, a consequential decision about an individual, such as whether they receive a job offer, a loan, health insurance, or housing. If an AI program is involved in these types of decisions, it is deemed high-risk, and companies using it are required by law to follow the act’s safety and transparency rules. A rough screening check is sketched below.
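To make that classification concrete, here is a minimal Python sketch of the screening logic described above. Everything in it (`AISystem`, `CONSEQUENTIAL_AREAS`, `is_high_risk`) is a hypothetical illustration rather than statutory language; the act’s actual definitions and exclusions control.

```python
# Minimal sketch of a "high-risk" screening check, based on the act's idea
# that an AI system is high-risk when it makes, or substantially influences,
# a consequential decision about an individual. All names here are
# hypothetical illustrations, not terms drawn from the statute's text.

from dataclasses import dataclass

# Decision areas the act treats as consequential (employment, lending,
# health care, housing, and so on).
CONSEQUENTIAL_AREAS = {
    "employment", "lending", "health_care", "housing",
    "insurance", "education", "legal_services",
}

@dataclass
class AISystem:
    name: str
    decision_area: str          # what kind of decision the tool touches
    influences_decision: bool   # does it make or substantially influence it?

def is_high_risk(system: AISystem) -> bool:
    """Flag a system as high-risk if it makes or substantially
    influences a consequential decision about an individual."""
    return (
        system.influences_decision
        and system.decision_area in CONSEQUENTIAL_AREAS
    )

resume_screener = AISystem("resume-screener", "employment", True)
print(is_high_risk(resume_screener))  # True -> safety/transparency rules apply
```

In practice, a legal review of each tool would replace the simple set lookup, but the shape of the question stays the same: does this tool substantially influence a consequential decision about a person?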
Key Provisions and Requirements for Developers
Companies building these high-risk AI tools are responsible for ensuring fairness and transparency. That responsibility falls mainly on the developer, who must actively use “reasonable care” to prevent discrimination. Developers must provide their customers (the deployers) with the detailed documentation and information needed to conduct risk and impact assessments. They are also required to publicly share high-risk system summaries and mitigation strategies. If a developer discovers that its AI is likely causing discrimination, it must alert both the Attorney General and its deployer customers within 90 days. According to the National Association of Attorneys General, “violations of the Colorado AI Act requirements are deemed to be an unfair trade practice under the Colorado Consumer Protection Act, with penalties of up to $20,000 per violation”.
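As a back-of-the-envelope illustration, the 90-day disclosure window and the per-violation penalty translate into simple arithmetic. The function names below are assumptions for illustration only; just the 90-day and $20,000 figures come from the article.

```python
# Toy calculation of the developer's 90-day disclosure window and the
# potential exposure under the Colorado Consumer Protection Act's
# per-violation penalty. Function names are illustrative assumptions.

from datetime import date, timedelta

def disclosure_deadline(discovered: date) -> date:
    """Developers must notify the Attorney General and their deployers
    within 90 days of discovering likely algorithmic discrimination."""
    return discovered + timedelta(days=90)

def max_exposure(violations: int, per_violation: int = 20_000) -> int:
    """Penalties run up to $20,000 per violation."""
    return violations * per_violation

print(disclosure_deadline(date(2026, 3, 1)))  # 2026-05-30
print(f"${max_exposure(5):,}")                # $100,000
```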
Key Provisions and Requirements for Deployers
Strict regulation does not fall solely on the companies (developers) who build and provide high-risk AI tools; it also falls on the companies (deployers) who purchase and use them. Deployers are required to implement formal risk management programs and conduct regular assessments to catch and stop discriminatory bias. If an AI tool reaches an adverse decision, such as denying health insurance, the deployer must disclose that AI was used in the decision-making process, provide an opportunity to correct any inaccurate personal data, and allow the consumer to appeal the decision, typically with human review.
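A deployer’s consumer-facing obligations can be pictured as a small data flow: disclose AI involvement, offer a correction channel, and route appeals to a human. The sketch below uses hypothetical names (`AdverseDecisionNotice`, `handle_appeal`) and is a simplification, not a compliance template.

```python
# Hypothetical sketch of what a deployer owes a consumer after an adverse
# AI-assisted decision: disclosure of AI use, a way to correct inaccurate
# data, and an appeal path that ends with a human reviewer.

from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    consumer_id: str
    decision: str                             # e.g. "health insurance denied"
    ai_was_used: bool = True                  # must be disclosed
    correction_url: str = "/my-data/correct"  # channel to fix inaccurate data
    appeal_url: str = "/decisions/appeal"     # channel to contest the decision

def handle_appeal(notice: AdverseDecisionNotice) -> str:
    # Appeals should leave the automated pipeline entirely, since the act
    # contemplates human review of contested decisions.
    return f"Appeal for {notice.consumer_id} queued for human review"

notice = AdverseDecisionNotice("c-1042", "health insurance denied")
print(handle_appeal(notice))  # Appeal for c-1042 queued for human review
```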
Who Does the Colorado AI Act Affect?
So who exactly does the Colorado AI Act affect? It applies broadly to any company or person conducting business in Colorado that is involved with high-risk AI systems, regardless of whether they have a physical presence in the state. The law divides the responsibility between two key parties:
- Developers: those who build or substantially modify high-risk AI systems.
- Deployers: those who use high-risk AI systems to make consequential decisions.
The Impact on Consumers
The Colorado AI Act significantly benefits consumers by expanding their rights, increasing transparency, and protecting them from unfair practices involving automated systems. Consumers will be notified when AI is involved in a critical decision affecting their employment, health care, loans, and more, and they are given the right to correct inaccurate data and appeal adverse decisions. The regulation addresses the growing use of AI tools by ensuring consumers retain the right to human review, a provision designed to safeguard against algorithmic discrimination.
Navigating Compliance: Practical Steps for Businesses
To comply with the Colorado Artificial Intelligence Act before it takes effect on February 1, 2026, companies should adopt a proactive strategy. These measures should include:
- Understand Your AI’s Risk Profile: Identify and formally document as “high-risk” every AI tool your organization uses for “consequential decisions,” such as hiring and lending (see the inventory sketch after this list).
- Implement Robust Risk Management Frameworks: Create a continuous risk management program to actively manage discrimination risk and bias.
- Conduct Regular Audits: Perform annual impact assessments to evaluate AI tools for algorithmic discrimination.
- Ensure Consumer Rights: Notify consumers before a high-risk AI tool is used in a decision about them, and give them the right to correct inaccurate data and appeal adverse decisions, with human review where possible.
- Stay Agile and Adaptable: Regularly update contracts and ensure developers provide you with all necessary compliance check documentation.
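To show what the first and third steps above might look like in practice, here is a toy AI inventory with a check for overdue annual assessments. All field names and dates are illustrative assumptions, not a prescribed format.

```python
# Toy AI inventory / risk register: catalog each AI tool, note whether it
# touches a consequential decision, and track when its last impact
# assessment ran. High-risk tools need an assessment at least annually.

from datetime import date

ai_inventory = [
    {
        "tool": "resume-screener",
        "consequential_decision": "employment",
        "high_risk": True,
        "last_impact_assessment": date(2024, 12, 1),
    },
    {
        "tool": "marketing-copy-assistant",
        "consequential_decision": None,
        "high_risk": False,
        "last_impact_assessment": None,
    },
]

def assessments_due(inventory, today=date(2026, 2, 1)):
    """Return the high-risk tools whose annual assessment is overdue."""
    due = []
    for entry in inventory:
        if not entry["high_risk"]:
            continue
        last = entry["last_impact_assessment"]
        if last is None or (today - last).days >= 365:
            due.append(entry["tool"])
    return due

print(assessments_due(ai_inventory))  # ['resume-screener'] is overdue
```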
Potential Challenges and Criticisms
The Colorado AI Act faces two primary criticisms: its broad and vague language and its potential negative economic impact.
Vagueness and Broad Scope: Key terms within the act, such as “algorithmic discrimination” and “consequential decisions,” are considered too open to interpretation. This vagueness could leave companies unsure whether they are in compliance with the regulation.
Negative Economic Impact: Critics also point to the negative economic consequences of the law. A study by the Common Sense Institute of Colorado projects significant economic loss, estimating that the act could result in approximately 40,000 job losses across the six key sectors studied (finance, housing, healthcare, education, insurance, and legal services) and a loss of nearly $7 billion in economic output by 2030.
The Burden on Small Businesses
State-level AI regulation, often described as a “patchwork,” is an attempt to address the growing need for oversight. However, this approach, with different AI laws and regulations across state and local levels, can lead to increased litigation and compliance expenses. The burden is arguably heaviest for small businesses, which must navigate complex regulatory environments to survive.
While the act offers some exemptions, such as relief from certain requirements like impact assessments for deployers with fewer than 50 employees, these small businesses lose that exemption if they use their own data to train or customize a high-risk AI system. The net effect is elevated legal and compliance costs for those smaller entities. The conditional logic is sketched below.
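The exemption rule described above reduces to a simple conditional, sketched here with a hypothetical helper. The statute contains further conditions this toy check ignores, so treat it purely as an illustration of the article’s description.

```python
# Sketch of the small-deployer exemption logic: deployers with fewer than
# 50 employees may skip certain duties (such as impact assessments), unless
# they train or customize a high-risk system with their own data.
# Hypothetical helper, not statutory text.

def impact_assessment_required(employees: int, trains_with_own_data: bool) -> bool:
    small_deployer = employees < 50
    exempt = small_deployer and not trains_with_own_data
    return not exempt

print(impact_assessment_required(30, False))  # False: exemption applies
print(impact_assessment_required(30, True))   # True: exemption is lost
```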
The Bigger Picture: Colorado’s Influence on Future AI Legislation
The signing of the Colorado AI Act is a significant moment in the history of American AI regulation. As the first U.S. state to pass a comprehensive, risk-based AI law, Colorado has set a precedent that directly influences the national regulatory landscape. At the same time, critics argue that this fragmented state-level approach is creating a “patchwork” of conflicting laws that tech advocates claim will stifle innovation and competition. Whether federal legislation arrives or the state-level patchwork continues, complying with the Colorado AI Act is a concrete step toward an AI-regulated future.