• FishFace@piefed.social
    3 days ago

    Non-deterministic software is fine and we’ve been using it for ages. It’s usable when any of the following hold:

    • The base error rate is low enough
    • Accuracy is not important
    • The outcome is cheap to verify by some other means

    That rules out several applications of current LLMs, but it rules in several others.

    • Echo Dot@feddit.uk
      2 days ago

      If I have to verify the output of an AI, it’s only useful when verification takes me 30 seconds but doing the work myself would somehow take hours. I can’t think of many scenarios in which verification is fast but the work itself is slow.

      • FishFace@piefed.social
        1 day ago

        This can be the case for coding. A good example is when the change is simple but involves a library you’re unfamiliar with. You can set the AI off without having to read any docs, and it’s easy to check whether it got the API right.

        Elsewhere I gave the example of copyediting: it’s a lot quicker to check the output than to do the refinement yourself.

        Easy-to-verify tasks are everywhere, I think. Not at the scale of seconds versus hours, but seconds versus minutes.