Zak Stein is a researcher focused on child development, education, and existential risk. He joins the podcast to discuss the psychological harms of anthropomorphic AI. We examine attention and attachment hacking, AI companions for kids, loneliness, and cognitive atrophy. Our conversation also covers how we can preserve human relationships, redesign education, and build cognitive security tools that keep AI from undermining our humanity.

LINKS:

  • Whats_your_reasoning@lemmy.world · 10 points · 2 days ago

    They hit upon a strong point in comparing chatbots to talking with a psychopath (about 56–58 minutes in). Discouraging someone from talking to other people is a classic method of increasing one’s control over someone else.

    It bears repeating that the chatbots’ sycophantic nature isn’t there to help you; it serves their owners’ goal of keeping you coming back. It’s quite like grooming, if you think about it, with the current end goal being to get users addicted.

    The future end goals? Still to be determined. If enshittification has taught us anything, it should be that any technology (in the current framework, at least) that gains significant adoption can and will eventually be used to exploit its users.

    • DriftingLynx@lemmy.ca · 4 points · 2 days ago

      Ah, but this tech is ahead of the exploiting-its-users curve.

      By using them now you’re opening yourself up to psychosis, yes, but your conversations are also being used to further train the models. I do agree we can assume we’re at the high point and that these tools are on the same downward slide as all big tech projects. It’s going to happen quickly, considering the mind-boggling levels of debt they are carrying.

      • CubitOom@infosec.pubOP · 2 points · 2 days ago (edited)

        They aren’t about to squander having insights into the deepest recesses of their most loyal users.

        It’s an advertising wet dream.

    • amino@lemmy.blahaj.zone · 1 point · 2 days ago

      Discouraging someone from talking to other people is a classic method of increasing one’s control over someone else.

      You’ve just described what the average parent, teacher, priest, doctor, or other authority figure tells children to do if they wanna “stay safe”. Where AI comes in is in automating these preexisting systems of domination, hiding the underlying social harms and naturalizing child abuse: “the AI isn’t a person, therefore it can’t groom my kids”.

      I’d argue that when the majority of adults engage in abuse, that behavior can’t be called psychopathic, because doing so shifts the blame away from abolishing childism and onto people with personality disorders.

  • rafoix@lemmy.zip · 27 points · 3 days ago

    Seems like AI regulation is becoming more necessary by the minute.

    Ban it in schools. Ban it for children. We’re letting billionaires destroy a whole generation of children.

    • KelvarCherry [They/Them]@piefed.blahaj.zone · 19 points · 3 days ago

      Removing AI LLM slop from schools is absolutely the first step, and an impactful one at that. These kids are being forced onto AI through their schools. If we win this battle, one day we’ll look back at these “historical figure” chatbots the same way we now think of candy cigarettes.

      I want to reiterate that school curricula are incredibly controllable at the local level. Check your school board. Rally for a no-AI policy in lessons, and perhaps in teaching materials. This, at least, is one step we can take.

    • CubitOom@infosec.pubOP · 15 points · 2 days ago

      1. Gen AI should be proven safe before deployment
      2. Gen AI needs to be opt-in by default
      3. Models/agents must transparently show what they are doing
      4. Gen AI should not be an anthropomorphized sycophant designed to keep users in the chat, isolated from other humans
      5. Gen AI profiteers should be held accountable
  • Mothra@mander.xyz · 3 points · 2 days ago

    Do you guys think some forms or applications of AI will eventually be outright banned, in a similar fashion to how mercury, cocaine, and heroin were initially used as medicines for all sorts of ailments and later withdrawn?

    • JustTesting@lemmy.hogru.ch · 4 points · 2 days ago

      Not really, or it will take a long time. We already struggle to do this with social media, knowing full well there have been issues since 2010–2016. And politicians and governments still use Twitter, a nonconsensual-porn-generator platform, for communicating with the public.

      • Mothra@mander.xyz · 3 points · 2 days ago

        It’s usually the case, at least with scientific knowledge, that it takes around 15 to 30 years for any given stance to be accepted or rejected and enter the mainstream, at which point society starts to take measures with it in mind. For example, paleontologists had been arguing that some dinosaurs were feathered since the ’80s, but the idea was challenged for a long time until more evidence surfaced. By the 2000s it was gaining traction, and by 2015 or so you would be hard-pressed to find reconstructions that didn’t include feathers.

        I don’t know the timeframes for the drugs I mentioned, but I would expect something similar. You mentioned that issues with social media have been known since 2010–2016, and we’re now starting to see pushback from the general public, with demands to ban infinite scrolling, debates over whether teenagers should be allowed to use social media, etc. So maybe in another ten years… but AI moves so fast I wonder if we’ll ever catch up.