Lawmakers in New York have passed a bill aimed at limiting disasters caused by artificial intelligence. The measure seeks to prevent AI models created by firms like OpenAI, Google, and Anthropic from contributing to disaster scenarios.

According to the bill, these scenarios include the death or injury of more than 100 people, or more than $1 billion in damages or losses. The bill, known as the RAISE Act, represents a win for the AI safety movement, which has lost steam in recent years as Silicon Valley and the Trump administration have continued to prioritize speed and innovation.

Safety-focused advocates, including Nobel laureate Geoffrey Hinton and AI research pioneer Yoshua Bengio, are behind the RAISE Act. If eventually signed into law, the bill would establish the first set of legally mandated transparency standards for leading artificial intelligence labs in the United States.

New York considers RAISE Act to limit AI-fueled disasters

The RAISE Act has some of the same provisions and goals as the controversial AI safety bill SB 1047 in California, which was eventually vetoed.

However, a co-sponsor of the RAISE Act, New York State Senator Andrew Gounardes, said in an interview that he designed the bill so that it does not stifle innovation among startups or academic researchers, a common criticism leveled at SB 1047. “The window to put in place guardrails is rapidly shrinking given how fast this technology is evolving,” said Senator Gounardes.

The senator also said that most people well-versed in the AI sector have recognized these risks, a development he called “alarming.” The RAISE Act is now headed to the desk of New York Governor Kathy Hochul, who could sign it into law, send it back for amendments, or veto it, with a veto seen as the likely outcome.

If it is eventually signed into law, the RAISE Act will require some of the world's biggest AI labs to publish safety and security reports on their frontier AI models. The bill also mandates that AI labs report safety incidents, such as concerning AI model behavior or the theft of a model by bad actors, when they occur.

If tech companies fail to live up to these standards, the RAISE Act empowers New York’s attorney general to bring civil penalties of up to $30 million against them.

RAISE Act seeks to regulate AI labs

The RAISE Act was designed to regulate the world's largest AI firms, including those based in California, like OpenAI and Google, and those based in China, like DeepSeek and Alibaba. The bill's requirements apply to companies that spent more than $100 million in computing resources to train their AI models and make those models available to New York residents.

Although similar to SB 1047 in some ways, the RAISE Act was designed to address criticisms leveled at previous AI safety bills.

For example, no clause requires AI model developers to include a kill switch in their models, nor does the bill hold companies that post-train their models accountable for critical harms. Nevertheless, the New York bill has faced pushback, according to its co-sponsor, New York State Assembly member Alex Bores. He called the resistance unsurprising but added that the RAISE Act will not limit the developmental prowess of tech companies in any way.

“The NY RAISE Act is yet another stupid, stupid state-level AI bill that will only hurt the US at a time when our adversaries are racing ahead,” said Andreessen Horowitz general partner Anjney Midha in a Friday post on X.

Andreessen Horowitz and startup incubator Y Combinator fiercely opposed SB 1047. Anthropic co-founder Jack Clark has also shared his grievances over how broad the RAISE Act is, noting that it could present risks to smaller firms.
