
AI laws in the U.S. vary by state, as there is no federal regulation comparable to the EU’s AI Act. States have taken markedly different approaches: Colorado adopted a strict framework for high-risk AI that has drawn criticism for its complexity, while Utah opted for a lighter touch to encourage innovation. The discussion below explores these regulatory differences, their effectiveness, and key insights for AI stakeholders and states considering AI legislation.
This blog is authored by Genny Ngai, a partner at Morrison Cohen LLP in New York. Genny is a former federal prosecutor with over 10 years of criminal and civil litigation experience. As a federal prosecutor, Genny prosecuted a wide variety of white-collar crimes, including crimes relating to the misuse of artificial intelligence. Now in private practice, Genny focuses on advising and defending companies and individuals in government investigations and prosecutions, regulatory inquiries, and in civil disputes. In particular, Genny advises clients in the digital assets and innovative technology space and helps them navigate legal concerns to mitigate civil and criminal risk.
Trying to understand artificial intelligence (AI) laws in the United States is no easy task. While Europe has a comprehensive risk-based AI law (the European Union Artificial Intelligence Act), the U.S. government has no equivalent federal law that governs AI use in all 50 states. As a result, each state has had to decide whether to regulate AI in its jurisdiction, and if so, how, when, and why. Many states have tackled this challenge head-on, including Colorado, Utah, California, and New York. Two distinct regulatory approaches have emerged from this effort. On one hand, states like Colorado have imposed a complex tort regime on developers and users of “high-risk” AI systems that has generated criticism from industry participants and complicated the law’s own implementation. In contrast, states like Utah have taken a lighter regulatory touch to avoid chilling innovation. Below is a discussion of the differing approaches, the viability of these laws, and some key takeaways for AI players and other states contemplating AI legislation.
A “Tougher” Approach: The Colorado Artificial Intelligence Act (CAIA)
On May 17, 2024, Colorado enacted its Artificial Intelligence Act (CAIA). The CAIA is the country’s first comprehensive risk-based AI law, and it seeks to regulate “high-risk artificial intelligence systems.” These systems are defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making a consequential decision.” (1) A “consequential decision” is, in turn, defined as “any decision that has a material, legal, or similarly significant effect on the provision or denial to any consumer of, or the cost or terms” of services in the areas of education, employment, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services. (2) In short, the CAIA regulates the use of automated decision-making tools in these key industries to mitigate the risk of algorithmic discrimination to consumers.
Notably, the CAIA requires “developers” and “deployers” of these high-risk AI systems to use “reasonable care” to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination from the intended and contracted uses” of the high-risk AI system. The CAIA also imposes a number of obligations (e.g., transparency, risk management, consumer rights) on developers and deployers. If developers and deployers comply with these statutory obligations, they benefit from a rebuttable presumption that they used reasonable care. The CAIA also requires that anyone deploying or using an AI system to interact with consumers disclose to the consumer that they are engaging with an AI system. (3)
The CAIA is technically scheduled to go into effect on February 1, 2026. But the rollout of the CAIA has not been smooth, and it is almost certain that the law will be substantially revised before then. Almost immediately after signing the bill into law, Colorado’s governor launched a task force comprising policymakers, industry insiders, and experts, and directed it to meet with relevant stakeholders and revise the CAIA. In February 2025, the task force released a Report and Recommendation, which identified, among other things, stakeholders’ “firm disagreement” with the scope of AI technologies that would be subject to the CAIA as well as with the “duty of care” standard – essentially, the core concepts of the CAIA. (4) The report even called into question whether the CAIA should include the concept of a duty of care at all, as well as the timing of the law’s implementation. (5)
A “Lighter” Approach: Utah Artificial Intelligence Policy Act (UAIP)
In contrast, on March 13, 2024, Utah passed the Artificial Intelligence Policy Act (UAIP), which went into effect on May 1, 2024. (6) Unlike the CAIA’s risk-based focus on predictive AI use, the UAIP focuses solely on increasing transparency when individuals and businesses use generative AI to interact with the public.
The UAIP is comparatively straightforward. Essentially, if a business or individual uses generative AI to provide services in a “regulated occupation” (i.e., an occupation that requires a person to obtain a license or state certification), then they must “prominently” disclose that a consumer is interacting with generative AI, or with materials created by generative AI, at the beginning of any communication. In all other instances, anyone using generative AI to interact with an individual need only disclose the use of the AI technology when asked or prompted by the individual. The UAIP also makes explicitly clear that anyone using AI is responsible for any resulting AI-related consumer protection violations and cannot blame the AI as a defense. (7)
The UAIP, which has already gone into effect, has had a smoother rollout than Colorado’s AI law, which is mostly attributable to its less stringent regulatory approach. Utah specifically designed the law to avoid chilling innovation (8) and even created the Office of Artificial Intelligence Policy to offer AI companies the equivalent of a regulatory sandbox program. In fact, Utah calls the Office the “first-in-the-nation office for AI policy, regulation and innovation” and has indicated that the Office reflects the state’s “commitment to being at the forefront of AI policy and collaborative regulation.” Through that Office, participating companies can work with Utah to receive regulatory exemptions, capped penalties for violations, cure periods to address compliance issues, and tailored mitigation agreements. (9)
There are also signs that the UAIP will become even less stringent in the future. Utah’s legislature is currently considering a substitute bill (S.B. 226) that would narrow the scope of businesses in “regulated occupations” required to automatically and prominently disclose when their customers are interacting with generative AI. (10)
Key Takeaways from States’ Dueling Approaches
The U.S. currently has a patchwork of AI legislation that varies state by state and, in some cases, even by locality. Colorado and Utah are just two examples of states at opposite ends of the AI regulatory spectrum, and they therefore serve as good test cases. Below are a few lessons from these states’ approaches:
The more complex or onerous an AI law is, the harder it will be to implement. Many have heralded Colorado’s AI law as the country’s first comprehensive AI law and a major breakthrough in the U.S. But since day one, the CAIA has been mired in challenges, and it is doubtful that it will even be implemented in its original form. States that have tried to enact their own versions of the CAIA – like Connecticut – have also failed: Connecticut’s own governor refused to sign the bill into law for fear that it would stymie innovation. (11) As a result, it is more likely that states will pass piecemeal AI legislation to test the waters.
Relatedly, until there is a federal solution, AI law in the U.S. will only become more disparate and will depend on states’ interest in attracting AI investments and jobs. States that want to recruit and retain AI innovation at home will be less inclined to pass stringent AI regulations.
Finally, despite the various challenges legislators face, more AI laws will be passed in the U.S. AI participants that are currently unregulated in their home jurisdictions now have roadmaps (i.e., the enacted AI laws) that forecast what their compliance obligations, or even a federal AI law, may look like in the future. At a minimum, companies should consider disclosing when they use generative AI to interact with the public, because transparency-focused AI laws like the UAIP have been successfully implemented and other states are likely to take note and follow.
References:
(1) CAIA, Sec. 6-1-1701(9)(a).
(2) CAIA, Sec. 6-1-1701.
(3) CAIA, Sec. 6-1-1704.
(4) Artificial Intelligence Impact Task Force, Report and Recommendations, February 2025, at pp.4-5, available at https://leg.colorado.gov/sites/default/files/images/report_and_recommendations_0.pdf
(5) Id.
(6) S.B. 149 Artificial Intelligence Amendments, available at https://le.utah.gov/~2024/bills/static/SB0149.html
(7) Id. § 13-2-12.
(8) IAPP, “Private-sector AI bill clears Utah Legislature,” March 6, 2024, available at https://iapp.org/news/a/utah-brings-gen-ai-into-consumer-protection-realm-with-bill-passage.
(9) Utah Department of Commerce, Office of Artificial Intelligence Policy, available at https://ai.utah.gov/.
(10) S.B. 226, Artificial Intelligence Consumer Protection Amendments, available at https://le.utah.gov/~2025/bills/static/SB0226.html
(11) CT Insider, “Proposed bill on artificial intelligence regulation in CT dies after Gov. Ned Lamont threatens veto,” May 7, 2024, available at https://www.ctinsider.com/politics/article/if-bill-ai-survives-ct-house-vote-lamont-19444053.php
Collaborate with us!
As always, we appreciate you taking the time to read our blog post.
If you have news relevant to our global WAI community or expertise in AI and law, we invite you to contribute to the WAI Legal Insights Blog. To explore this opportunity, please contact Silvia A. Carretta, WAI Chief Legal Officer (via LinkedIn or silvia@womeninai.co), or Dina Blikshteyn (dina@womeninai.co).
Silvia A. Carretta and Dina Blikshteyn
- Editors