California recently passed a law that will, in practice, cause AI chatbots to respond to any hint of emotional distress by spamming users with 988 crisis line numbers, or by cutting off the conversation entirely. The law requires chatbot providers to implement “a protocol for preventing the production of suicidal ideation” if they’re going to engage in mental health conversations at all, with liability waiting for any provider whose conversation is later linked to harm. New York is considering going further, with a bill that would simply ban chatbots from engaging in discussions “suited for licensed professionals.” Similar proposals are moving in other states.
If you’ve been reading Techdirt for any length of time, you know exactly what’s happening here. It’s the same moral panic playbook we’ve seen deployed against cyberbullying, then against social media, and now against generative AI. Something terrible happens. A handful of tragic stories emerge. Lawmakers, desperate to show they’re doing something, reach for the most visible technology in the room and start passing laws designed to stop it from doing whatever it was supposedly doing. The possibility that the technology might actually be helping more people than it’s hurting, or that the proposed fix might make things worse, rarely enters the conversation.
Professor Jess Miers and her student Ray Yeh had a terrific piece at Transformer last month that actually engages with the data and the incentive structures here, and their central argument may seem counterintuitive to many: the way to make AI chatbots safer for people in mental health distress might be to reduce liability for providers. For many people, I’m sure, that will sound backwards. That is, until you actually think through how the current liability regime shapes behavior — as well as reflect on what we know about Section 230’s liability regime in a different context.
As predicted, Mike Masnick is the author. Mike has a conflict of interest when it comes to reporting on platforms’ responsibilities, because he’s on the board of Bluesky… the social media company.
And he’s trying to argue that chatbots are good for mental health, actually. Never mind actual healthcare; he praises chatbots instead.
Yet chatbots have emerged as first aid for people experiencing mental health issues, providing genuine benefit to those who aren’t in crisis but are not OK either. Heavy-handed legislation risks derailing this breakthrough in support, creating more problems than it solves.
The proof? Self-reports. Including people who use the Replika Girlfriend-bot.
At this point, I consider anything on Mike’s website that’s related to social media to be compromised, and this is yet another example of that disappointing pattern.
The comments in the article are actually pretty good. Like this one.
I love how on Techdirt, when it comes to LLMs, the entire concept of product liability just goes right out the window. If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it’d be off the shelves in a heartbeat, no matter how useful some people thought it was, and the manufacturer would be rightly sued into the ground. But according to Techdirt, because it’s software, it is now and forever a permanent and untouchable part of the internet landscape and regulating it is impossible and undesirable.
I’m (cautiously) interested in the concept of built-for-purpose chatbots being used therapeutically, although I expect the providers to fail horribly at not abusing the massive trove of personal data they’ll gain access to. But if a corporation can’t produce a general purpose chatbot that won’t help people kill themselves, they have no intrinsic right to just dump it on the internet and say “it’s not our fault.” If that’s a bet they want to make, then they need to accept that they’re going to take their lumps.
ETA: the comment above ended up causing a Mike freakout. It was written by user TheKilt, who is exceptionally friendly and willing to concede points to Mike. Mike responds by accusing TheKilt of lying, and then proceeds to respond to other people in the same thread who are merely insulting him. TheKilt tries to get Mike’s attention one last time, but Mike keeps ignoring him.
Mike is picking the lowest-hanging fruit and ignoring substantive criticism. It’s embarrassing.
If this were a physical object that, ha ha, occasionally convinced people to commit suicide or murder, or spiral off into other delusions, it’d be off the shelves in a heartbeat
I want to gently push back on this. There are medications that can cause psychological symptoms and suicidal ideation as side effects and they’re still prescribed. They are, however, controlled, people who take them have to be informed of the side effects, and they’re managed by a trained physician. I absolutely think LLMs need to be more tightly regulated, and we need to have a much better idea of how they work and how to deploy them safely and in contexts where they are actually useful and won’t cause harm. But we do manage known risks with other products.
The difference between the medical industry and the AI industry is like night and day. Medications are tested by professionals, side effects are documented, and professionals recommend them.
The AI/wellness industry, by comparison, grabs people who should be treated by the medical system. AI is the medicine equivalent of a weirdo in an alleyway promising that they’re a doctor and handing you some random pills whose ingredients are unknown even to them, but which they know for sure have driven people to kill themselves before. And the weirdo’s only goal is to make you feel right about taking that medicine.
Regulation would be great, though. In fact, the product should be pulled until that regulation is in place.
Counterpoint, do ~~magic eightballs~~ “AI” chatbots prevent suicide in the first place? We know human-staffed phone helplines can, which is why users will be “spammed” with those numbers. They verifiably do more good than talking to a goddamn bullshit generator.

Liability matters where human lives are concerned.
It shouldn’t be a chatbot that prevents suicide in the first place. Something has gone horribly wrong with society – and it has already been normalized, too.
Isn’t that what the bill says, too? Stop the conversation, talk to a professional.
It goes deeper than that, though. Why is the person talking about this with a chatbot in the first place, rather than with some professional?
Money, accessibility, shame, no results from past experiences with a professional, curiosity, control …
“Don’t talk about your mental health with an AI” seems like pretty basic common sense to me, as someone who interacts with them constantly.
This article seems like it was written by a PR firm for big AI.
Like a director of a social media company that is allergic to taking responsibility for its actions, for example?
“Don’t talk about your mental health with an AI” seems like pretty basic common sense to me
And as we all know, people in the middle of a mental health crisis always act logically and make good decisions.
Right, which makes it all the worse that the editor-in-chief of a major tech blog would validate the behavior.
(I hope the grandparent comment wasn’t meant to accuse people of knowingly doing the wrong thing, especially since so many AI companies intentionally mislead potential customers.)
moral panic
That sums it up nicely. I’ve got nothing to add.
Can you explain why you believe it’s a “moral panic” when we identify software that has been caught encouraging suicide and homicide in real life…
Edit: While pushing baseless conspiracies that AI will kill everyone?