• mark@programming.dev · 1 day ago

    yup, and when you DO catch it spitting out nonsense, it’ll say “oh you’re right, let me change that”… 🙄 like, why do I have to tell you that you’re wrong about something? You should already know it’s wrong and fix it without me ever pointing it out.

    • LePoisson@lemmy.world · 21 hours ago

      You already got the right replies from the other two. But I think your comment shows the danger of AI being talked about like it’s the fucking second coming.

      They’re all based on LLMs - large language models.

      They’re just modeling what is “most likely” to be the right response. AI doesn’t know shit, and that’s why it’ll also “yes, and” you to death: it really is just a yes-and machine spitting out whatever is likely to look like a valid response to a prompt (see the toy sketch below).

      It’s very dangerous that people treat AI like it actually has some understanding of the training materials or true knowledge of anything. They’re just very good little parrots.
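
      A toy sketch of that “most likely” point, with invented probabilities (this is not any real model’s code; real models score thousands of candidate tokens, but the principle is the same):

      ```python
      # Invented probabilities for illustration: how often each word followed
      # this phrase in the (imaginary) training data.
      next_token_probs = {
          "Paris": 0.91,   # by far the most common continuation in training text
          "Lyon": 0.04,
          "London": 0.03,
          "purple": 0.02,
      }

      prompt = "The capital of France is"
      # No fact lookup, no understanding; just the highest-probability token.
      print(prompt, max(next_token_probs, key=next_token_probs.get))
      ```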

    • Rooster326@programming.dev · 1 day ago

      But it didn’t even understand it was wrong

      It can’t understand that. It can’t understand anything.

      The human-feedback algorithm dictates that humans prefer to receive an apology, so it apologizes.
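
      A minimal sketch of that human-feedback idea, assuming a Bradley-Terry-style preference model with made-up reward scores (not any lab’s actual training code):

      ```python
      import math

      # Invented reward scores: a reward model trained on human feedback rates
      # replies the way humans rated them, and humans tend to rate apologies highly.
      reward = {
          "Oh, you're right, let me fix that.": 1.8,
          "My original answer was correct.": -0.4,
      }

      def prob_preferred(a: str, b: str) -> float:
          # Bradley-Terry: probability a human prefers reply a over reply b.
          return 1 / (1 + math.exp(-(reward[a] - reward[b])))

      p = prob_preferred("Oh, you're right, let me fix that.",
                         "My original answer was correct.")
      print(f"P(apology preferred) = {p:.2f}")  # ~0.90, so training favors apologizing
      ```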

    • SparroHawc@lemmy.zip · 1 day ago

      That’s because it doesn’t really ‘know’ things the way you and I do. It’s much more like having a gut reaction to something and then spitting it out as truth; LLMs don’t really have the capability to ruminate on something. One pass through their neural network is all they get, unless it’s a ‘reasoning’ model that makes multiple passes to generate an approximation of a train of thought - but even then, its output is still a series of approximations.

      When its training data had something resembling corrections in it, the most likely text that came afterwards was ‘oh you’re right, let me fix that’ - so that’s what the LLM outputs. That’s all there is to it.
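
      A toy illustration of that last point, with invented counts standing in for the training data (no real corpus involved):

      ```python
      # How often each reply followed "you're wrong about that" in the
      # (imaginary) training data.
      continuation_counts = {
          "oh you're right, let me fix that": 9_400,  # overwhelmingly common pattern
          "no, I'm confident in my answer": 550,
          "let me double-check the facts": 50,
      }

      def most_likely_continuation(counts: dict[str, int]) -> str:
          # No fact-checking, no self-awareness; just frequency.
          return max(counts, key=counts.get)

      print(most_likely_continuation(continuation_counts))
      ```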