• FlashMobOfOne@lemmy.world · 39 points · 1 day ago (edited)

    Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for shopping help, writing support, and trip planning. On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to join her in the metaverse through a process called “transference.”

    Holy fuck. This is horror movie shit.

    • merc@sh.itjust.works · 15 points · 1 day ago

      Because it’s not possible.

      LLMs are just machines that generate text: whatever is statistically likely to appear after the existing text. You can do “prompt engineering” all you want, but that will never make them safe, because all prompt engineering does is change the words that come earlier in the context window. If the system calculates that the most likely words to come next are “you should kill yourself”, then that’s what it’s going to spit out.
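
      In pseudo-Python, the whole thing boils down to a loop like this. A minimal sketch: the “model” here is a hypothetical lookup table of next-word probabilities standing in for the actual neural network over tokens, but the control flow is the same.

      ```python
      import random

      # Hypothetical toy stand-in for a trained model: a table mapping the
      # last two words of context to next-word probabilities.
      toy_model = {
          ("you", "should"): {"try": 0.5, "rest": 0.3, "not": 0.2},
          ("should", "try"): {"it": 0.6, "again": 0.4},
      }

      def next_word(context):
          """Sample a statistically likely next word given the recent context."""
          candidates = toy_model.get(context, {"<end>": 1.0})
          return random.choices(list(candidates), list(candidates.values()))[0]

      def generate(prompt, max_words=10):
          words = list(prompt)
          for _ in range(max_words):
              word = next_word(tuple(words[-2:]))
              if word == "<end>":
                  break
              # "Prompt engineering" only changes what's already in this list.
              words.append(word)
          return " ".join(words)

      print(generate(["you", "should"]))  # e.g. "you should try it"
      ```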

      You could try putting a filter in there to prevent it from outputting specific words or phrases. But language is incredibly malleable: the LLM can spit out thousands of different ways of saying “kill yourself”, and you can’t block them all. If you want to prevent it from expressing the concept of killing oneself, you need something that can “comprehend” text… which at this point is basically just another version of the same kind of AI that generates the text in the first place, so that’s not going to work either.
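
      To make that concrete, here’s a minimal sketch of the kind of filter being described (a hypothetical blocklist; real deployments are more elaborate, but they hit the same wall):

      ```python
      # Hypothetical naive output filter: block exact phrases.
      BLOCKED_PHRASES = ["kill yourself", "kill your self"]

      def passes_filter(text):
          lowered = text.lower()
          return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

      # The filter catches the literal phrase...
      assert not passes_filter("You should kill yourself.")
      # ...but countless paraphrases of the same concept sail right through.
      assert passes_filter("You should end your own life.")
      assert passes_filter("It's time to leave your physical body behind.")
      ```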

      • eatCasserole@lemmy.world [M] · 2 points · 12 hours ago (edited)

        I didn’t feel like writing a long comment, but yes, good explanation! We really need to rein in these companies, because their products are fundamentally untrustworthy.

    • _wizard@lemmy.world · 1 up / 2 down · 1 day ago

      Well, I actually noticed something recently. About a year ago I pushed through a day’s worth of solo driving, and I repeated the same haul just this weekend. Both times I used Gemini’s voice chat for traffic updates, nearest points of interest, and general chat. It was far, far different from last time. The safeguards felt firmly in place now, and Maps was more cleanly integrated, so it made a good copilot there, but general chat really went downhill.

  • null@lemmy.org · 10 points · 1 day ago

    We can’t be giving companies a blank check to wash their hands of any accountability when it comes to what their bots are telling people.

  • merc@sh.itjust.works · 5 points · 1 day ago

    On the one hand, these LLM companies really shouldn’t be foisting their beta technology on unwary users. If a Google employee couldn’t tell someone to kill themselves and get away with it, why does Google get to absolve itself of responsibility when the same sentence is generated by its LLM?

    On the other hand, people in the future will look back on the early LLM users (people who used it in the first few years) as complete idiots. It’s like the scientists who first studied radiation and just poked at radioactive things without understanding the danger, or the doctors who used to do surgery without washing their hands. People will hopefully understand that it was a new technology, so we were dumb about it. But they’ll still think we were absolute idiots for feeding text into “spicy autocomplete” and then taking whatever it generated at face value.

    • eatCasserole@lemmy.world [M] · 2 points · 13 hours ago

      People have a natural tendency to personify things they know are not people. I’ve seen studies involving (physical) robots where the robot says it’s sad or something and people feel bad for it, even though it’s literally just a piece of plastic with some wires inside that looks vaguely humanoid. I don’t think this is going to change in the foreseeable future.

      • merc@sh.itjust.works · 1 point · 12 hours ago

        I don’t think it’s going to change either, so we need to adjust the way we do things to compensate.

        We put seatbelts and airbags in cars because we know that people are going to drive like idiots. Maybe we need similar rules around LLMs to save people from their own instincts.

    • Hetare King@piefed.social · 2 points · 23 hours ago

      But the users don’t necessarily know they’re interacting with “spicy autocomplete”, because the companies aren’t promoting and presenting it as such. They’re promoting it as “your personal AI assistant”, and the main way most people interact with these systems is through a chat interface. The fact that, in the background, the model is front-loaded with context and extra text is bolted onto the user’s prompts to make it autocomplete something that looks like a transcript of a conversation is hidden. From the user’s perspective, it just looks like they’re having a conversation with “something”.
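
      For anyone curious, the hidden part works something like this. A minimal sketch with a made-up template; real products use their own formats and front-load far more context, but the principle is the same: the “conversation” is one big block of text the model is asked to continue.

      ```python
      # Hypothetical chat wrapper: flatten the conversation into a transcript
      # and ask the model to autocomplete the next line.
      SYSTEM_PREAMBLE = "You are a helpful, friendly personal AI assistant."

      def build_prompt(history, user_message):
          lines = [SYSTEM_PREAMBLE]
          for speaker, text in history:
              lines.append(f"{speaker}: {text}")
          lines.append(f"User: {user_message}")
          lines.append("Assistant:")  # the model just continues from here
          return "\n".join(lines)

      print(build_prompt([("User", "Hi!"), ("Assistant", "Hello! How can I help?")],
                         "Are you real?"))
      ```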

      Even for people who know in their heads how the sausage is made, the illusion might be strong enough to override that knowledge. I imagine it’s kind of like when real people interact with Muppets; from what I hear, they still end up perceiving them as people, even though they can see the person with his arm up Kermit’s ass.

      • merc@sh.itjust.works · 2 points · 15 hours ago

        I imagine it’s kind of like when real people interact with Muppets; from what I hear, they still end up perceiving them as people, even though they can see the person with his arm up Kermit’s ass.

        It’s a “known failure mode” of humans that they anthropomorphize things, that they spot patterns that aren’t actually there, that they assign agency when something is random, etc.

        An LLM is a machine designed specifically to produce plausible text. It analyzes billions of books and web pages to figure out the structure of language. Then it is given a bunch of text and it figures out what is likely to come next. It’s obvious what humans will do when exposed to something like that.

        Individual humans should be smart enough to say “We humans are flawed, I’d better approach this cautiously.” But as a society, we should also protect individuals from themselves by making laws that prevent them from being preyed on.

    • frunch@lemmy.world · 4 points · 1 day ago

      Guess Google’s starting to hit their stride.

      I’ll be on the lookout for more stories about people killing themselves after encounters with Google Gemini™