• kadu@scribe.disroot.org · 2 days ago

    People thinking they’re AI experts because of prompts is like claiming to be an aircraft engineer because you booked a ticket.

    • jj4211@lemmy.world · 2 days ago

      I have had in-person conversations with multiple people who swear they have fixed the AI hallucination problem the same way: “I always include the words ‘make sure all of the response is correct and factual without hallucinating’.”

      These people think they are geniuses thanks to just telling the AI not to mess up.

      Because these were in-person conversations with a rather significant running context, I know they are being dead serious, and no one will dissuade them from thinking their “one weird trick” works.

      All the funnier when, inevitably, they get a screwed-up response one day and feel betrayed because they explicitly told it not to screw up…

      But yes, people take “prompt engineering” very seriously. I have seen people proudly display massively verbose prompts that often looked like more work than just doing the thing themselves without the LLM. They really think it’s a very sophisticated and hard-to-acquire skill…

      • ebc@lemmy.ca · 2 days ago

        “Do not hallucinate”, lol… The best way to get a model to not hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
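
        A minimal sketch of that idea (the facts and the helper name are made up for illustration, not any particular API):

        ```python
        # Grounding sketch: paste facts you already trust into the prompt and
        # tell the model to answer only from them. FACTS and
        # build_grounded_prompt() are hypothetical.
        FACTS = [
            "Order #1234 shipped on 2024-03-02 via DHL.",
            "The return window is 30 days from delivery.",
        ]

        def build_grounded_prompt(question: str, facts: list[str]) -> str:
            context = "\n".join(f"- {fact}" for fact in facts)
            return (
                "Answer using only the facts below. "
                "If they don't cover the question, say you don't know.\n"
                f"Facts:\n{context}\n"
                f"Question: {question}"
            )

        print(build_grounded_prompt("When did order #1234 ship?", FACTS))
        ```

        The catch is exactly the one above: someone has to already know, and have verified, those facts before they go into the prompt.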

          • flying_sheep@lemmy.ml · 2 days ago

            That’s incorrect, because in order to lie, one must know that they’re not telling the truth.

            LLMs don’t lie, they bullshit.

            • Danquebec@sh.itjust.works · 1 day ago

              It’s incredible how many LLM users still don’t know that it merely predicts the next most probable words. It doesn’t know anything. It doesn’t know that it’s hallucinating, or even what it is saying at all.
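
              A toy illustration of that point (a hand-written word table standing in for a real model): generation is just “pick a plausible next word given the words so far”, repeated, with no store of facts anywhere.

              ```python
              import random

              # Toy "model": which words tend to follow which. Invented for illustration.
              NEXT = {
                  "the": ["cat", "dog", "moon"],
                  "cat": ["sat", "ran"],
                  "sat": ["on"],
                  "on": ["the"],
                  "dog": ["barked"],
              }

              def generate(prompt: str, max_words: int = 8) -> str:
                  words = prompt.split()
                  for _ in range(max_words):
                      choices = NEXT.get(words[-1])
                      if not choices:
                          break
                      words.append(random.choice(choices))  # pick a plausible next word
                  return " ".join(words)

              print(generate("the"))  # e.g. "the cat sat on the moon" -- fluent-ish, knows nothing
              ```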

              • jj4211@lemmy.world · 23 hours ago

                One thing that is enlightening is why the seahorse emoji LLM confusion happens.

                The model has one thing to predict: can it produce the specified emoji, yes or no? Well, some Reddit threads (among others) swore there was a seahorse emoji, so it decided “yes”, and then easily predicted the next words to be “here it is:”. At that point, and not an instant before, it actually tries to generate the indicated emoji, and here, and only here, it fails to find anything of sufficient confidence. But the preceding words demand an emoji, so it generates the wrong one. Then, knowing the previous token wasn’t a match, it generates a sequence of words to try again and again…

                It has no idea what it is building toward; it builds the result one token at a time. It’s wild how well that works, but it frequently lands in territory where the previously generated tokens have backed it into a corner and the best fit for the subsequent tokens is garbage.
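
                A toy sketch of that lock-in (invented probabilities, not a real model): the loop always has to emit something, even when every candidate scores badly, because the already-generated prefix can’t be taken back.

                ```python
                # Invented next-token distributions keyed by the text so far. Once the
                # prefix says "here it is:", every candidate is low-confidence, but the
                # loop still has to pick one; there is no "undo the prefix" move.
                DIST = {
                    "Is there a seahorse emoji?": {" Yes,": 0.7, " No,": 0.3},
                    "Is there a seahorse emoji? Yes,": {" here it is:": 0.9, " actually": 0.1},
                    "Is there a seahorse emoji? Yes, here it is:": {" 🐠": 0.04, " 🐴": 0.03},
                }

                text = "Is there a seahorse emoji?"
                while text in DIST:
                    token, score = max(DIST[text].items(), key=lambda kv: kv[1])
                    print(f"picked {token!r} at confidence {score}")
                    text += token

                print(text)  # ends in some wrong emoji: the prefix demanded one anyway
                ```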

      • skisnow@lemmy.ca · 1 day ago

        I didn’t think prompt engineering was a skill until I read some of the absolute garbage some of my ostensibly degree-qualified colleagues were writing.

    • CharlesDarwin@lemmy.world · 2 days ago

      Reminds me of the very early days of the web, when you had people with the title “webmaster”. When you looked deeper into the supposed skillset, it was people who knew a bare minimum of HTML and how to manage a tree of files.

      I’ll never forget being at an ATM and overhearing a conversation between two women in their 30s behind me. One tells the other, “I’ve been thinking about what I want to do and I think I want to be a webmaster”. It just sounded like a very casual choice, one about making money and not much deeper than that.

      This was in 1999 or so. I thought: man, this industry is so fucked right now. We have hiring managers, recruiters, etc. who have almost no idea of the difference in skillsets between what I do (programming, architecture, networking, databases, then trying to QA all of that and keep it running in production, etc.) and people calling themselves “webmasters”.

      Sure enough, not long after, the dotcom bubble popped. It was painful for everyone without question, even people who had kept some distance from the dotcom thing, whether you had skills or not. But I don’t think roles like “webmaster” did very well…