WASHINGTON, D.C. – Even as the Trump administration lowers some artificial intelligence guardrails in hopes of boosting innovation, states continue to establish policies for the safe use of AI.
During his first week in office, President Donald Trump signed an executive order revoking some Biden-era programs promoting the safe use of artificial intelligence.
A Biden administration order had directed more than 50 federal entities to implement guidance on AI safety and security. Some agencies, including the U.S. Department of Justice, were tasked with studying the effects of AI bias and how the technology could affect civil rights.
Besides rescinding that policy, Trump’s order also calls for the development of an AI Action Plan, which will outline policies to “enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements.”
But states are still pursuing legislation that aims to keep residents safe. The measures range from requiring companies to implement consumer protections to outlawing fake photos and videos to regulating the use of AI in health care decisions.
States will need to take a bigger role in regulating artificial intelligence, said Serena Oduro, a senior policy analyst at Data & Society. The nonprofit research institute studies the social implications of data-centric technologies, including AI.
“If we continue with the road that Trump is on, I think states will have to step up because they’re going to need to protect their constituents,” Oduro said. “What’s unfortunate is people are already scared.”
In 2024, 31 states adopted resolutions or enacted legislation regarding artificial intelligence, according to a database from the National Conference of State Legislatures, a nonpartisan public officials’ association. This year, nearly every state has introduced AI legislation.
Colorado last year became the first state to implement sweeping AI regulations. Virginia this year became the second state to pass comprehensive AI anti-discrimination legislation, which would make companies responsible for protecting consumers from bias in areas such as hiring, housing and health care. If signed by Republican Gov. Glenn Youngkin, the new law would go into effect in July 2026.
The legislation also would require companies developing and using “high-risk” AI systems, such as those used for employment decisions or financial services, to conduct risk assessments and document their intended uses.
Many states are hoping to curb the rise of deepfakes — digitally altered photos and videos — on the internet.
Lawmakers in some states, including Montana and South Dakota, are aiming to deter people from using political deepfakes during elections. Bills in other states, including Hawaii and New Mexico, would establish civil and criminal penalties for sharing sexually explicit deepfake images without the subject’s consent.
Lawmakers in a number of states, including Arkansas, California and Maryland, also introduced legislation that would regulate the use of artificial intelligence in health care and insurance decisions.
The Utah legislature, for instance, passed a bill last week that would provide protections for mental health patients interacting with chatbots that use AI. The measure is currently awaiting action from Republican Gov. Spencer Cox.
California Assemblymember Rebecca Bauer-Kahan, a Democrat, is helping lead the state’s efforts to create a framework for AI regulation.
After her successful legislation last year defined “artificial intelligence” within the state’s code, Bauer-Kahan is now working on six AI-related bills. They would require generative AI developers to publicly document the materials used to train their systems, crack down on deepfake pornography services, regulate the deployment of automated decision systems and more.
She told Stateline that more people are aware of AI now that it’s widely available to the public. Generative AI tools, such as OpenAI’s free chatbot, ChatGPT, allow anyone to analyze data, create weekly meal plans, organize grocery lists and more in a matter of seconds.
“I actually think one of the things generative AI has done is brought AI into public consciousness in a really powerful way, and it’s leading legislators to want to learn about it and understand it,” she said.
In Washington, Republican state Rep. Michael Keaton said he filed legislation to help small businesses that want to invest in AI innovation.
After retiring from active duty in the Air Force, Keaton began working for the service as a contractor. Collaborating with engineers, Keaton said he learned the importance of striking a balance between tasks for humans and tasks that can be automated, and how that balance can serve the public’s interest.
Earlier this month, the Washington state House approved Keaton’s bill, which would create a grant program for small businesses that use artificial intelligence for projects with statewide impact, such as wildfire tracking, cybersecurity or health care advancements. The bill now sits in a Senate committee.
While the bill promotes innovation, it also requires applicants to commit to ethical uses of AI and analyze the risks that could come with their product.
“We’re driving for innovation and we’re trying to get the monies appropriated to be able to take advantage of that innovation,” Keaton said, “but we want to do it in a smart way.”
With the emerging patchwork of AI legislation across the states, it could be challenging for AI developers to keep up, said Paul Lekas, the senior vice president and head of global public policy and government affairs at the Software & Information Industry Association, a trade association representing the digital content industry.
Not only are states creating their own definitions of artificial intelligence, but they’re also outlining different rules for different actors, such as AI developers, distributors or consumers, he added.
“I think the industry is struggling to figure out how to comply with all of these laws were they to pass,” Lekas said.
Stateline reporter Madyson Fitzgerald can be reached at mfitzgerald@stateline.org.
Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger for questions: info@stateline.org.