• Iconoclast@feddit.uk · 35 points · 10 hours ago

    It’s a Large Language Model designed to generate natural-sounding language based on statistical probabilities and patterns - not knowledge or understanding. It doesn’t “lie” and it doesn’t have the capability to explain itself. It just talks.

    That speech being coherent is by design; the accuracy of the content is not.

    This isn’t the model failing. It’s just being used for something it was never intended for.
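
    To make “statistical probabilities and patterns, not knowledge” concrete, here’s a minimal sketch of next-token sampling. The words and probabilities are invented for illustration; a real LLM learns billions of weights instead of a lookup table, but the generation loop is the same idea.

    ```python
    import random

    # Toy bigram "model": nothing but conditional probabilities
    # estimated from training text (the numbers here are made up).
    PROBS = {
        ("the", "sky"): {"is": 0.9, "was": 0.1},
        ("sky", "is"): {"blue": 0.7, "falling": 0.2, "green": 0.1},
        ("sky", "was"): {"blue": 0.8, "falling": 0.2},
    }

    def next_token(context):
        # Sample the next word purely by probability; note there is
        # no fact-checking step anywhere in this loop.
        dist = PROBS[context]
        words = list(dist)
        return random.choices(words, weights=[dist[w] for w in words])[0]

    tokens = ["the", "sky"]
    for _ in range(2):
        tokens.append(next_token((tokens[-2], tokens[-1])))
    print(" ".join(tokens))  # e.g. "the sky is blue", or "the sky is falling"
    ```

    Fluent output falls out of good probability estimates; factual output would require something this loop simply doesn’t contain.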

    • THB@lemmy.world · 18 points · 10 hours ago

      I puke a little in my mouth every time an article humanizes LLMs, even if it’s critical. Exactly as you said: they do not “lie”, nor are they “trying” to do anything. It’s literally word salad that’s organized to look like language.

  • FancyPantsFIRE@lemmy.world · 41 points · 14 hours ago

    The thing I find amusing here is the direct quoting of Gemini’s analysis of its own interactions, as if it were actually able to give real insight into its behaviors, along with the assertion that there’s a simple fix for hallucination, which, sycophantic or otherwise, is a perennial problem.

    • jeeva@lemmy.world · 3 points · 4 hours ago

      This mischaracterisation really struck me during the coverage and commentary of the recent “AI blogged about my rejection” story, as if that weren’t something a human prompted it to do.

    • CosmoNova@lemmy.world · 7 points · 7 hours ago

      That’s what annoys me the most about all of this. The LLM’s stated reasoning doesn’t matter, because that’s not actually why it happened. Once again, bad journalism falls on its face by talking about word salad as if it were a person.

    • MolochHorridus@lemmy.ml · 7 points · 9 hours ago (edited)

      There is no hallucination problem, just design flaws and errors. The so-called AI bots are not sentient and cannot hallucinate.

      • FancyPantsFIRE@lemmy.world · 4 points · 4 hours ago

        My gut response is that everyone understands the models aren’t sentient, and that hallucination is shorthand for the false information that LLMs inevitably and apparently inescapably produce. But taking a step back, you’re probably right: for anyone who doesn’t understand the technology, it’s a very anthropomorphic term that adds to the veneer of sentience.

      • draco_aeneus@mander.xyz · 6 points · 7 hours ago

        It’s not really even errors. It’s well suited to what it was designed for: it produces pretty good text. It’s just that we’re using it for stuff it’s not suited for. Like digging a hole with a spoon, then complaining that your hands hurt.

        • Silver Needle@lemmy.ca · 1 point · 3 hours ago (edited)

          It’s a convenient way of looking at things: saying it’s good at one thing and bad at others. What I have come to realize with LLMs is that wherever experts deal with them, those experts are very aware of the shortcomings within their own area of expertise. Sure, you might say they’re good at producing text, yet a journalist, or anyone who simply writes a ton, can spot generated text in an instant, the same way a photographer or painter can spot these statistical methods instantly. Rinse and repeat for coding, translation, medicine and all the other tasks tied to current societal roles.

          That is not to say you need to be an expert to spot LLMs or other generative ANNs; it comes down to attention and what you condition yourself to be attentive to. Of course pictures, code, or whatever else will be convincing if you treat them as secondary, the way a doctor treats creative writing as secondary to their job (though necessary), or a biologist treats writing Python scripts.

          • Iconoclast@feddit.uk · 1 point · 2 hours ago

            Saying that it’s good at one thing and bad at others.

            But that’s exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be “superhuman” at one specific task - like generating natural-sounding language - but that doesn’t automatically carry over to other tasks.

            People give LLMs endless shit for getting things wrong, but they should actually get credit for how often they get it right too. That’s a pure side effect of their training - not something they were ever designed to do.

            It’s like cruise control that’s also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it’s still just cruise control, not a full autopilot.

    • THX-1138@lemmy.ml · 7 points · 14 hours ago

      “Daisy, Daisy, give me your answer do. I’m half crazy all for the love of you. It won’t be a stylish marriage, I can’t afford a carriage. But you’ll look sweet upon the seat of a bicycle built for two…”