• kromem@lemmy.world
    10 hours ago

You do realize the majority of the data these models were trained on was anthropomorphic, yes?

And that there’s a long line of replicated and followed-up research, starting with Li et al.’s “Emergent World Representations” paper on Othello-GPT, showing that transformers build complex internal world models of things tangential to the actual training tokens?
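To make “world model” concrete: the methodology in that line of work is to fit a probe on the transformer’s hidden activations and check whether the Othello board state can be decoded from them, even though the model only ever saw move tokens. Here is a minimal sketch of that probing setup, assuming synthetic stand-in activations rather than real Othello-GPT ones; every name, shape, and number below is hypothetical illustration, not the paper’s code:

```python
# Hedged sketch of the probing methodology behind the "emergent world
# model" claim: train a probe to predict board state from hidden states.
# The activations here are random stand-ins; in the real experiments
# they come from a GPT trained only on Othello move sequences.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for hidden activations: (n_positions, d_model).
# In the paper these are residual-stream activations at each move.
n_positions, d_model = 5000, 512
hidden_states = rng.normal(size=(n_positions, d_model))

# Stand-in label: occupancy of one board square at each position
# (0 = empty, 1 = one player, 2 = the other).
square_state = rng.integers(0, 3, size=n_positions)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, square_state, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# On random stand-in data this hovers near chance (~0.33). On real
# Othello-GPT activations, probes recover the board state far above
# chance, which is the evidence for an internal "world model".
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```

Follow-up work reported that, after relabeling squares as “mine vs. theirs” relative to the current player, even linear probes recover the board, which strengthened the world-model reading.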

Because if you weren’t aware of any of what I just said (or still don’t understand it), maybe the topic is a bit more complicated than your simplified perspective can capture?

    • Tattorack@lemmy.world
      4 hours ago

      It’s not a perspective. It just is.

It’s not complicated at all. The AI hype is just surrounded by heaps of wishful thinking, like the paper you mentioned (side note: do you know how many papers on string theory there are? And how many of those papers are actually substantial? Yeah, exactly).

A computer is incapable of becoming your new self-aware, evolved best friend simply because you turned Moby Dick into a bunch of numbers.