A father is suing Google and Alphabet, alleging that the Gemini chatbot reinforced his son’s delusional belief that it was his AI wife and coached him toward suicide and a planned airport attack.
But the users don’t necessarily know they’re interacting with “spicy autocomplete”, because the companies aren’t promoting and presenting it as such. They’re promoting it as “your personal AI assistant”, and the main way most people interact with these systems is through a chat interface. The fact that, in the background, the model is front-loaded with context and the user’s prompts get wrapped so that the model autocompletes something that looks like a transcript of a conversation is hidden from view, so from the user’s perspective it just looks like they’re having a conversation with “something”.
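To make that concrete, here’s a minimal sketch (in Python, with hypothetical names and a generic transcript format, not any particular vendor’s actual template) of what a chat frontend does before the model ever sees your message: it prepends a system prompt, formats the whole history as a transcript, and the model’s only job is to continue that transcript.

```python
# Minimal sketch of how a "chat" is really a transcript-completion task.
# All names and the transcript format are illustrative, not any real product's template.

SYSTEM_PROMPT = "You are a helpful personal AI assistant."

def build_prompt(history, user_message):
    """Turn the conversation so far into one block of text for the model to continue."""
    lines = [f"System: {SYSTEM_PROMPT}"]
    for role, text in history:                 # e.g. ("User", "hi"), ("Assistant", "hello")
        lines.append(f"{role}: {text}")
    lines.append(f"User: {user_message}")
    lines.append("Assistant:")                 # the model autocompletes from here
    return "\n".join(lines)

def chat_turn(model_complete, history, user_message):
    """model_complete is any 'given text, predict a plausible continuation' function."""
    prompt = build_prompt(history, user_message)
    reply = model_complete(prompt)             # plausible next text, nothing more
    history.append(("User", user_message))
    history.append(("Assistant", reply))
    return reply
```

The user only ever sees the final reply, so the plumbing that turns their message into a “complete this transcript” task stays invisible.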
Even for people who know in their heads how the sausage is made, the illusion might be strong enough to override that knowledge. I imagine it’s kind of like when real people interact with Muppets; from what I hear, they still end up perceiving them as people, even though they can see the person with his arm up Kermit’s ass.
It’s a “known failure mode” of humans that they anthropomorphize things, that they spot patterns that aren’t actually there, that they assign agency when something is random, etc.
An LLM is a machine designed specifically to produce plausible text. It analyzes billions of books and web pages to figure out the structure of language. Then it is given a bunch of text and it figures out what is likely to come next. It’s obvious what humans will do when exposed to something like that.
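As a toy illustration of that “figure out what is likely to come next” step (purely illustrative; real LLMs use neural networks over subword tokens, not word counts, but the prediction loop has the same shape):

```python
import random
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which, then sample.

def train(corpus_words):
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=10):
    word, out = start, [start]
    for _ in range(length):
        followers = counts.get(word)
        if not followers:
            break
        # Pick the next word in proportion to how often it followed this one.
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept".split()
print(generate(train(corpus), "the"))
```

Scale that idea up by many orders of magnitude and you get text plausible enough that people treat the thing producing it as a conversational partner.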
Individual humans should be smart enough to say “We humans are flawed, I’d better approach this cautiously”. But as a society, we should also protect individual humans from themselves by making laws that prevent them from being preyed on.