• Tattorack@lemmy.world · 20 hours ago (edited)

    Sounds like you’re anthropomorphising. To you it might not have been the logical response based on its training data, but given the chaos you describe, it sounds more like just a statistical fluke.

    • kromem@lemmy.world · 8 hours ago

      You do realize the majority of the training data the models were trained on was anthropomorphic data, yes?

      And that there’s a long line of replicated and followed-up research, starting with Li’s Emergent World Representations paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?

      Because if you didn’t know what I just said to you (or still don’t understand it), maybe it’s a bit more complicated than your simplified perspective can capture?

      • Tattorack@lemmy.world · 2 hours ago

        It’s not a perspective. It just is.

        It’s not complicated at all. The AI hype is just surrounded by heaps of wishful thinking, like the paper you mentioned. (Side note: do you know how many papers on string theory there are? And how many of them are actually substantial? Yeah, exactly.)

        A computer is incapable of becoming your new self-aware, evolved best friend simply because you turned Moby Dick into a bunch of numbers.