AI Regulations: What’s Happening at the State Level?
As AI becomes more ingrained in everyday life, questions persist about how the technology will be regulated to address its potential security, privacy, and ethical concerns. When President Trump took office, he quickly revoked the Executive Order President Biden signed to establish new standards for AI security and privacy practices within the public sector and to set the tone for the private sector. Trump appears to be taking a hands-off approach to AI, issuing his Removing Barriers to American Leadership in Artificial Intelligence Executive Order. With federal regulation unlikely over the next several years, states will have to take action to establish AI guardrails. State lawmakers have encountered resistance to AI regulations, with governors vetoing or threatening to veto bills in some cases.
In 2024, Colorado and Utah became the first states to pass comprehensive laws regulating the development and deployment of AI systems. Colorado’s Governor signed the law but had reservations, asking lawmakers to reexamine it before its effective date of February 1, 2026. Utah’s law went into effect on May 1, 2024, and while narrower than Colorado’s, it brings AI squarely into the scope of Utah’s consumer protection laws and sets disclosure obligations for any AI that consumers may interact with.
California also passed a series of AI-related laws we’ll detail later in the article.
Lawmakers in other states are attempting to follow Colorado and Utah’s lead, introducing their own AI regulation bills. Here’s a look at where efforts in other states stand.
Virginia Lawmakers Pass AI Bill, Governor’s Signature in Question
The Virginia Legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act on February 20, 2025. Virginia Governor Glenn Youngkin has been tight-lipped about whether he will sign or veto the bill. He has until March 24 to sign, veto, or return the bill with amendments.
If the Governor does sign the bill, the law will establish these requirements:
- Developers of high-risk artificial intelligence systems must protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the system’s intended and contracted uses. A high-risk artificial intelligence system is defined as any AI system specifically intended to autonomously make consequential decisions, meaning decisions that affect any form of incarceration, educational opportunities, employment, financial services, access to healthcare, housing, insurance, marital status, or legal services.
- High-risk AI system developers must provide deployers of their systems with documentation detailing the intended use of the system, any risks of algorithmic discrimination, and how the system was evaluated for performance.
- Deployers must have a risk management program in place to use a high-risk AI system. Acceptable frameworks include ISO 42001 and the NIST AI Risk Management Framework.
- Deployers must complete an impact assessment before using a high-risk AI system.
- When using high-risk AI systems to interact with a consumer, deployers must disclose:
- The purpose of the high-risk artificial intelligence system
- The nature of the consequential decision
- Contact information for the deployer
- A description of the artificial intelligence system in plain language
If the bill is signed into law, it will take effect on July 1, 2026. The Attorney General would be responsible for enforcing the law, with civil penalties ranging from $1,000 to $10,000.
California Has Passed Multiple AI Laws, More Could Be Coming
The California Legislature passed more than a dozen laws that deal with AI in some fashion. You can learn more about the most significant California AI laws here.
Governor Gavin Newsom vetoed what was arguably the most notable AI bill passed by the legislature. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act aimed to prevent AI from causing critical harms, including the creation or use of weapons that could result in mass casualties, endanger public infrastructure, or enable severe cyber-attacks. In a letter explaining his reasons for vetoing the bill, Newsom wrote, “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions.”
Bills introduced in the California Legislature in 2025 include AB 1018. This bill establishes regulations for automated decision systems (ADS) that make consequential decisions affecting employment, education, housing, healthcare, financial services, and more. If passed, developers of covered ADS would be required to conduct performance evaluations assessing the system’s accuracy, potential disparate impacts, and potential disparate treatment of individuals based on protected characteristics. The bill requires third-party independent audits for ADS used to make decisions affecting 6,000 or more people over a three-year period, and it allows public officials such as the Attorney General to bring civil actions for non-compliance, with potential penalties of up to $25,000 per violation.
Bill Introduced in New Mexico
The Artificial Intelligence Act was introduced in the New Mexico Legislature. The bill is similar to those in Virginia and Colorado, requiring disclosure of algorithmic discrimination risks, risk management policies, impact assessments, and more. It is in the early stages of the legislative process, and it’s unclear whether it will pass.
Second Attempt at an AI Bill in Connecticut
A 2024 AI bill in Connecticut died after the Governor threatened to veto it. A new AI bill was introduced in 2025. It is also early in the legislative process, and we will continue to monitor its progress.
CompliancePoint has helped organizations across various industries comply with privacy and cybersecurity laws and frameworks. To learn more about how our services can help your business, contact us at connect@compliancepoint.com.
Finding a credible expert with the appropriate background, expertise, and credentials can be difficult. CompliancePoint is here to help.