• merc@sh.itjust.works

    I imagine it’s kind of like when real people interact with Muppets; from what I hear, they still end up perceiving the Muppets as people, even though they can plainly see the puppeteer with his arm up Kermit’s ass.

    It’s a “known failure mode” of humans that they anthropomorphize things, spot patterns that aren’t actually there, assign agency to random events, and so on.

    An LLM is a machine designed specifically to produce plausible text. It analyzes billions of books and web pages to learn the statistical structure of language; then, given a prompt, it predicts what text is likely to come next. It’s obvious what humans will do when exposed to something like that.
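    If you want the intuition in code, here’s a toy sketch of that “what comes next” objective: a bigram model that counts which word follows which in a made-up corpus, then samples continuations. Real LLMs learn these statistics with huge neural networks over billions of documents, but the training objective is the same idea; the corpus and function names here are purely illustrative.

    ```python
    import random
    from collections import Counter, defaultdict

    # Toy "predict the next word" model: count which word follows
    # which in a tiny made-up corpus, then sample continuations.
    # (Real LLMs learn these statistics with neural networks over
    # billions of documents, but the objective is the same.)
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def continue_text(word, length=6):
        out = [word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            words, counts = zip(*options.items())
            # Sample the next word in proportion to how often it
            # followed the current word in the corpus.
            out.append(random.choices(words, weights=counts)[0])
        return " ".join(out)

    print(continue_text("the"))  # e.g. "the cat sat on the rug ."
    ```

    Even something this crude produces text that “sounds” vaguely right, which is exactly the trap: fluency reads as understanding.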

    Individual humans should be smart enough to say “we humans are flawed, I’d better approach this cautiously.” But as a society, we should also protect people from themselves by passing laws that prevent them from being preyed on.