Senator Cynthia Lummis introduced the Responsible Innovation and Safe Expertise (RISE) Act to protect AI developers from civil liability.

According to Lummis, the bill, if passed, would legally obligate professionals who use AI tools to perform due diligence and validate the technology’s outputs.

In a Thursday X post, the Republican senator commented:

“Today, I introduced the RISE Act of 2025 — legislation to protect innovation, empower professionals, and bring real transparency to powerful AI systems.”

~ Senator Cynthia Lummis

Lummis’ RISE Act would require developers to disclose AI model specifications

In a string of X posts, Lummis argued that artificial intelligence is progressing quickly and is now used across multiple fields, including medicine, law, and finance. However, there is still little clarity about who is accountable when these AI tools are used.

In her view, current liability laws put developers at risk even when licensed professionals use the tools responsibly and within their expertise. Lummis claimed her bill would change that by protecting AI developers who meet transparency and documentation standards.

In a press release, Lummis clarified that the RISE Act does not offer “blanket immunity” for AI; rather, it requires developers to disclose model specifications so that professionals can make informed choices about the tools they use. That means licensed professionals remain ultimately responsible for the advice and decisions they make.

If the bill passes, developers must disclose how the AI was trained and tested, its strengths and limitations, and the prompts and constraints that guide its behavior. Therefore, if a licensed professional uses an AI tool with a clear understanding of its capabilities and an issue arises, the developer would be shielded from civil lawsuits, provided they met their disclosure obligations and acted responsibly.

The Republican senator maintained that developers must be transparent and professionals must exercise sound judgment; if both parties fulfill their obligations, innovation should not be punished when mistakes happen.

The House of Representatives approved a 10-year moratorium on states enforcing their own AI laws

The House of Representatives recently passed the tax and spending bill, which includes a 10-year moratorium on states enforcing their own artificial intelligence laws. The bill is still under consideration in the Senate, but if lawmakers approve it, US states would be unable to enforce their individual AI regulations.

Before the bill passed in the House, Representative David Joyce of Ohio had pushed for the provision, arguing that multiple AI bills varying in definitions, requirements, and enforcement mechanisms had been introduced since January, creating uncertainty. He hoped the moratorium could pave the way for a national AI framework that would provide more clarity for the industry.

He remarked, “This law is a prime example of targeting a specific harm with a narrowly tailored law to fill a gap that has been identified in existing law.” 

However, some Democrats opposed the moratorium, saying it would be a giveaway to tech giants. Representative Lori Trahan, for instance, argued that while a patchwork of different state laws can be chaotic, the moratorium is still not good policy, as it would prevent states from acting promptly when necessary.

On June 4, House Speaker Mike Johnson defended the moratorium when Representative Marjorie Taylor Greene threatened to vote against the package because of the provision’s inclusion.

Greene believed the moratorium would infringe on states’ rights, adding that she had been unaware of its inclusion in the bill. Her resistance could easily jeopardize the bill’s final passage, since it cleared the House by just one vote.

Johnson said he liked the bill in its current form and argued that having 50 different states regulating AI would have serious national security implications.
