California and New York AI safety laws take effect amid federal pushback
New AI safety laws in California and New York have come into effect while federal AI legislation remains unsettled, and a December executive order and a new task force have signaled renewed federal scrutiny of state-level rules. California's SB‑53, which took effect on January 1, requires model developers to disclose how they will mitigate major AI risks and to report safety incidents, with fines of up to $1 million for noncompliance.
New York's RAISE Act, passed in December, imposes similar reporting requirements on models of all sizes and sets a fine ceiling of $3 million after a company's first violation. The two laws differ on timing: SB‑53 requires state notification within 15 days of an incident, while the RAISE Act requires notification within 72 hours.
Both laws target companies with more than $500 million in annual revenue, a threshold critics say exempts many smaller startups. SB‑53 also includes whistleblower protections, and New York's law mandates annual third‑party audits; as of this writing, however, neither statute generally requires third‑party model testing.
Legal and industry observers quoted in the coverage said SB‑53 emphasizes transparency and reporting rather than the heavier safety mandates proposed in earlier bills, such as a failed predecessor that would have required costly safety testing and a shutdown mechanism.
Key Topics
Tech, RAISE Act, California, New York, AI Task Force, Whistleblower Protections