• JustTesting@lemmy.hogru.ch · 7 hours ago

    I’m not really sure I follow.

    Just to be clear, I’m not justifying anything, and I’m not involved in those projects. But the examples I know concern LLMs customized/fine-tuned for specific client projects (so not used by others). Those clients ask for confidence scores; people on our side say it’s possible, but that the number wouldn’t actually say anything about real confidence/certainty, since the models don’t have any confidence metric beyond “how likely is the next token given these previous tokens” — and the clients go “that’s fine, we want it anyway”.
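To make that concrete: the only number a model natively produces at each step is a probability distribution over possible next tokens, obtained by a softmax over its raw logits. A toy sketch (all values invented, no real model involved) of where such a “confidence score” would come from:

```python
import math

def softmax(logits):
    # Turn raw logits into a probability distribution over the vocabulary.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented logits for a tiny 4-token vocabulary at one generation step.
logits = [2.0, 1.0, 0.5, -1.0]
probs = softmax(logits)

# The "confidence" a client could be shown is just the probability of the
# chosen token. It measures how likely the token is given the preceding
# tokens -- it says nothing about whether the statement is true.
confidence = max(probs)
```

The point: reporting `confidence` back to a user is perfectly doable, it’s just that the number is a statement about token likelihood, not about factual accuracy.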

    And if you ask me, LLMs shouldn’t be used for any of the stuff they’re used for there. It just cracks me up when the solution to “the lying machine is lying to me” is to ask the lying machine how much it’s lying. And when you tell them “it’ll lie about that too”, they go “yeah, ok, that’s fine”.

    And making shit up is the whole functionality of LLMs; there’s nothing there other than that. It’s just that they can make shit up pretty well sometimes.