• JustTesting@lemmy.hogru.ch · 19 hours ago

    It’s always funny to me when people do add ‘confidence scores’ to LLMs, because it always amounts to just adding ‘say how confident you are, low, medium or high, in your response’ to the prompt, and then you have made-up confidences for made-up replies. And you can tell clients that it’s just made up and not actual confidence, but they will insist that they need it anyways…
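
    Concretely, the whole ‘feature’ tends to be nothing more than something like this (a rough sketch; `call_llm` is a made-up stand-in for whatever chat-completion client a given project actually uses):

    ```python
    # Sketch of a prompt-based "confidence score": the score is just
    # another instruction in the prompt, so the model invents the
    # confidence label the same way it invents the answer.
    def ask_with_confidence(question: str) -> str:
        prompt = (
            f"{question}\n\n"
            "At the end of your response, state how confident you are "
            "in your answer: low, medium, or high."
        )
        return call_llm(prompt)  # hypothetical client call, not a real API
    ```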

    • Eggyhead@lemmings.world · 7 hours ago

      And you can tell clients that it’s just made up and not actual confidence, but they will insist that they need it anyways…

      That doesn’t justify flat-out making shit up to everyone else, though. If a client is told the information is made up but uses it anyway, that’s on the client. Although I’d argue that an LLM shouldn’t be in the business of making shit up unless specifically instructed to do so by the client.

      • JustTesting@lemmy.hogru.ch · 2 hours ago

        I’m not really sure I follow.

        Just to be clear, I’m not justifying anything, and I’m not involved in those projects. But the examples I know of concern LLMs customized/fine-tuned for specific client projects (so not used by anyone else). The clients ask for confidence scores, people on our side explain that it’s possible but that the score wouldn’t say anything about actual confidence/certainty, since the models have no confidence metric beyond “how likely is the next token given these previous tokens”, and the clients go “that’s fine, we want it anyways”.
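
        For reference, that next-token likelihood is the only ‘confidence’ that actually exists in these models. A minimal sketch of reading it out, assuming a Hugging Face causal LM (gpt2 here purely as a stand-in):

        ```python
        # The one real "confidence" signal: P(token_i | tokens_<i).
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tok = AutoTokenizer.from_pretrained("gpt2")       # stand-in model
        model = AutoModelForCausalLM.from_pretrained("gpt2")

        ids = tok("The capital of France is Paris", return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits                    # (1, seq_len, vocab)

        # log-probability of each actual token given everything before it
        logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
        token_lp = logprobs.gather(2, ids[:, 1:].unsqueeze(-1)).squeeze(-1)

        print(token_lp.exp())         # per-token probabilities
        print(token_lp.mean().exp())  # geometric-mean "confidence" of the text
        ```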

        And if you ask me, LLMs shouldn’t be used for any of the stuff they’re used for there. It just cracks me up when the solution to “the lying machine is lying to me” is to ask the lying machine how much it’s lying. And when you tell them “it’ll lie about that too”, they go “yeah, ok, that’s fine”.

        And making shit up is the whole functionality of LLMs; there’s nothing there other than that. They just sometimes make shit up pretty well.