• Pennomi@lemmy.world · 2 months ago

      I suspect it’s more like “use the tool correctly or it will give bad results”.

      Like, LLMs are a marvel of engineering! But they’re also completely unreliable for use cases where you need consistent, logical results. So maybe we shouldn’t use them in places where we need consistent, logical results. That makes them unsafe for use in most businesses.

        • Sekoia@lemmy.blahaj.zone · 2 months ago

          Not even that: they should be used to interpret/process natural language and maybe generate some filler (smart defaults, etc.; a good use is generating titles for things). They’re also very good at translation.

          The more text an LLM has to generate, the worse it gets, and the less real text it has to base itself on, the less likely it is to get things right.

          • SoleInvictus@lemmy.blahaj.zone · 2 months ago

            I’ll admit I once used an LLM to generate a comparison between the specs of three printers. It did a great job, but doing it myself is still faster and doesn’t make me feel dirty.

          • Swedneck@discuss.tchncs.de · 2 months ago

            LLMs are basically optimized for making newspapers to put in the background of games: put some relevant stuff in the prompt and they’ll shit out text that’s sensible enough that players can skim things in the world and actually feel immersed.
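
            A minimal sketch of what that could look like, assuming the OpenAI Python client; the model name, the lore string, and the prompt wording are all just illustrative, not anything the games actually use:

                # Sketch: generate a throwaway in-game newspaper article from a bit of world lore.
                # Everything game-specific comes in through the prompt; the model only dresses it up.
                from openai import OpenAI

                client = OpenAI()  # expects OPENAI_API_KEY in the environment

                lore = "The city of Vessmark just lost its harbor to a storm; the fishing guild is furious."

                response = client.chat.completions.create(
                    model="gpt-4o-mini",  # placeholder; any chat model would do for filler text
                    messages=[
                        {"role": "system",
                         "content": "You write short, period-appropriate newspaper articles for a fantasy game. "
                                    "Keep it under 120 words and never contradict the facts you are given."},
                        {"role": "user", "content": f"Write a front-page article about this event: {lore}"},
                    ],
                )

                print(response.choices[0].message.content)

            Nobody reads it closely, which is exactly why it works: the output only has to survive a skim.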