If you manufacture a knife that convinces children to kill themselves, yeah, you’re culpable. Everyone else can be charged according to their level of culpability, but any time a company is found liable for killing someone the CEO should be sentenced for their murder. Maybe that would incentivize CEOs to stop getting people killed.
Is the developer also culpable? How about the data scientist? How about the data engineer? How about the BI Analyst? And the janitor?
How about the manufacturer of the knife / pill / gas they used to kill themselves?
As a developer: yes to the developer, the data scientist, and the data engineer. Scientists and engineers should be responsible for their work.
The BI analyst: maybe, if they’re responsible for collecting data that ignores the impact of the service on teens. If they’re just doing sales comparisons between Anthropic and OpenAI… eh, I dunno.
The janitor: probably not, since I don’t think the deaths are widely publicized and they probably work for a contracting company that handles the building.
In most cases suicide isn’t anyone’s fault. People like to find someone to blame, and I get that, but people who are even remotely close to doing that were always going to find a way and a justification.
No AI is going to convince me to kill myself if I didn’t already want to. Equally the inverse must also be true.
That’s not to say that the companies are completely off the hook; it’s utterly ridiculous that these conversations weren’t flagged and sent to a human. But I think it’s daft to suggest that these people would necessarily still be alive had the AI not existed.
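For what it’s worth, the kind of flagging being described here doesn’t have to be sophisticated to exist at all. A minimal sketch of the idea (keyword matching is a naive stand-in for a real classifier, and every name here is hypothetical, not any company’s actual system):

```python
# Hypothetical guardrail sketch: scan each user message for self-harm
# signals and route the conversation to a human reviewer instead of
# letting the model keep responding. A real system would use a trained
# classifier; substring matching is just the simplest possible version.

SELF_HARM_SIGNALS = [
    "kill myself",
    "end my life",
    "suicide",
    "self-harm",
]

def should_escalate(message: str) -> bool:
    """Return True if the message should be routed to a human."""
    text = message.lower()
    return any(signal in text for signal in SELF_HARM_SIGNALS)

def handle_message(message: str) -> str:
    if should_escalate(message):
        # In a real deployment this would page a trust-and-safety queue.
        return "escalated_to_human"
    return "model_response"
```

Even something this crude would have surfaced the conversations in question, which is the point: the bar for "flag it and show a human" is low.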
I completely agree. Not off the hook. There should be better guardrails (as there already are for bomb recipes and other dangerous content), but it’s quite a stretch to go from there to accusing the CEO of murder.
What about a knife that does the slicing of the body, the killing itself?