Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.

LINKS:

  • Whats_your_reasoning@lemmy.world · 2 days ago

    They hit upon a strong point in comparing chatbots to talking with a psychopath (around 56–58 minutes in). Discouraging someone from talking to other people is a classic method of increasing one’s control over someone else.

    It bears repeating that the chatbots’ sycophantic nature isn’t there to help you; it serves their owners’ goal of keeping you coming back. It’s quite like grooming, if you think about it, with the current end goal of getting users addicted.

    The future end goals? Still to be determined. If enshittification has taught us anything, it should be that any technology (in the current framework, at least) that gains significant adoption can and will eventually be used to exploit its users.

    • DriftingLynx@lemmy.ca · 2 days ago
      2 days ago

      Ah, but this tech is ahead of the exploiting-their-users curve.

      By using them now you’re opening yourself to psychosis, yes, but your conversations are also being used to further train the models. I do agree we can assume we’re at the high point and these tools are on the same downward slide as every other big tech project. It’s going to happen quickly, considering the mind-boggling levels of debt they are carrying.

      • CubitOom@infosec.pub (OP) · 2 days ago

        They aren’t about to squander having insights into the deepest recesses of their most loyal users.

        It’s an advertising wet dream.

    • amino@lemmy.blahaj.zone · 2 days ago
      2 days ago

      > Discouraging someone from talking to other people is a classic method of increasing one’s control over someone else.

      You’ve just described what the average parent, teacher, priest, doctor, or other authority figure tells children to do if they want to “stay safe.” Where AI comes in is automating those preexisting systems of domination to hide the underlying social harms and naturalize child abuse: “the AI isn’t a person, therefore it can’t groom my kids.”

      I’d argue that when the majority of adults engage in abuse, that behavior can’t be called psychopathic, because doing so shifts the blame from abolishing childism onto people with personality disorders.