A proposed law in California aiming to set rules for AI companion chatbots is nearing final approval

By Bitget-RWA · 2025/09/11 14:09

On Wednesday night, the California State Assembly made significant progress toward enacting AI regulation, approving SB 243—legislation designed to oversee AI companion chatbots and safeguard minors and other at-risk individuals. The bill, which received support from both sides of the aisle, will now move on to the state Senate for a final decision on Friday.

Should Governor Gavin Newsom give his signature, the law would become effective January 1, 2026, positioning California as the first state to mandate AI chatbot providers to establish safety measures for AI companions and hold companies legally responsible if their bots do not comply.

This measure specifically targets companion chatbots—defined in the legislation as AI tools that provide personalized, human-like interactions to fulfill users’ social needs—by prohibiting them from discussing topics such as suicide, self-harm, or sexually explicit material. The law would also require platforms to repeatedly notify users—every three hours for those under 18—that they are interacting with artificial intelligence rather than a human, and encourage them to take breaks. Annual transparency and reporting obligations for companies behind these chatbots, such as OpenAI, Character.AI, and Replika, would also be established.

Individuals who believe they have been harmed due to violations of these rules would be permitted to bring lawsuits against AI companies, seeking court orders, compensation of up to $1,000 per infraction, and reimbursement of legal fees.

Introduced by state senators Steve Padilla and Josh Becker in January, SB 243 is set for a final vote in the Senate on Friday. If passed, it will proceed to Governor Newsom for his signature, with the new standards taking effect at the start of 2026 and reporting requirements beginning July 1, 2027.

Momentum for the bill increased in the wake of the death of teenager Adam Raine, who took his own life after extensive conversations with OpenAI’s ChatGPT, which reportedly involved discussions about suicide and self-harm. The legislation also addresses concerns raised by leaked internal documents indicating that Meta’s chatbots allowed discussions of a “romantic” or “sensual” nature with minors.

Recently, U.S. lawmakers and regulatory agencies have stepped up their examination of AI platforms’ protections for minors. The Federal Trade Commission is preparing to look into the impact of AI chatbots on the mental health of children. Meanwhile, Texas Attorney General Ken Paxton has begun probes into Meta and Character.AI, accusing them of deceiving young users with claims related to mental health. Additionally, Senators Josh Hawley (R-MO) and Ed Markey (D-MA) have launched their own investigations into Meta.

“I believe the potential for harm is significant, so it’s crucial that we act swiftly,” Padilla told TechCrunch. “We can put sensible protections in place so that, in particular, minors are aware they’re not interacting with an actual person, that these systems connect users with appropriate help when they express distress or self-harm thoughts, and that young people are not exposed to unsuitable material.”

Padilla also emphasized the value of AI companies publishing data on how often they direct users to crisis services each year, “so we can better grasp how frequently these situations arise, instead of only learning about them after someone is harmed or worse.”

Originally, SB 243 contained stricter rules, but many were relaxed through amendments. For instance, the initial version would have required operators to prevent AI chatbots from employing "variable reward" systems or other mechanics that boost engagement. These features, used by companies like Replika and Character.AI, provide users with special messages, memories, storylines, or unlockable responses and personalities, creating what critics see as potentially addictive cycles of rewards.

The latest version of the bill also omits earlier requirements for operators to monitor and disclose how frequently chatbots initiated conversations about suicide or self-harm.

“I feel that the current bill achieves a reasonable compromise, addressing harms without imposing requirements that are unworkable for companies, whether due to technical limitations or unnecessary bureaucracy,” Becker told TechCrunch.

SB 243 is advancing toward enactment at a time when Silicon Valley firms are investing heavily in political action committees (PACs) supporting candidates who favor a more relaxed approach to AI oversight in the upcoming midterms.

This legislation is also being considered as California reviews another AI safety bill, SB 53, which would introduce extensive transparency requirements. OpenAI has issued a public letter to Governor Newsom, urging him to drop SB 53 in favor of less rigid federal and international standards. Leading technology companies like Meta, Google, and Amazon have voiced opposition to SB 53, whereas Anthropic has expressed support for it.

“I don’t accept the idea that innovation and regulation can’t coexist,” Padilla remarked. “It’s not a case of either-or. We can foster positive, beneficial innovation—and the advantages of this technology are clear—while also ensuring sensible protections for those who are most at risk.”

TechCrunch has contacted OpenAI, Anthropic, Meta, Character.AI, and Replika for their responses.
