• gens@programming.dev · 21 hours ago

    It’s not just far. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.

    There are some (non-LLM, but still neural network) tools that can be somewhat useful, but a real doctor needs to do the job anyway, because all of them have some chance of being wrong.

    • Tja@programming.dev · 15 hours ago

      Not only is there a cure, it’s already available: most models right now provide sources for their claims. Of course, this demands of the user the gargantuan effort of clicking on a link, so most don’t and complain instead.

      • medgremlin@midwest.social · 11 hours ago

        This is stupid. Fully reading and analyzing a source for accuracy and relevance can be extremely time-consuming. That’s why physicians have databases like UpToDate and Dynamed, which have expert (i.e. physician and PhD) analyses and summaries of the studies in the relevant articles.

        I’m a 4th year medical student and I have literally never used an LLM. If I don’t know something, I look it up in a reliable resource, and a huge part of my education is knowing what I need to look up. An LLM can’t do that for me.

        • ByteJunk@lemmy.world · 10 hours ago

          And why are you assuming that a model designed to be used by physicians would not include the very same expert analysis that goes into UpToDate or Dynamed? This is something that is absolutely trivial to do; the only thing stopping it is copyright.

          AI can not only look up reliable sources, it will probably do it much better and faster than you or me or anybody.

          “I’m a 4th year medical student and I have literally never used an LLM”

          It was clear enough from your post, but thanks for confirming. Perhaps you should give it a try so you can understand its limitations and strengths first-hand, no? Grab one of the several generic LLMs available and ask something like:

          Can you provide me with a small summary of the most up-to-date guidelines for the management of fibrodysplasia ossificans progressiva? Please be sure to include references, and only consider sources that are credible, reputable and peer-reviewed whenever possible.

          Let me know how it did. And note that it’s probably a general-purpose model trained on very generic data, not at all optimized for this usage, but it’s impossible to dismiss the capabilities here…

            • gens@programming.dev · 2 hours ago

            It’s called RAG (retrieval-augmented generation), and it’s the only “right” way to get any accurate information out of an LLM. And even that is not perfect. Not by a long shot.

            You can use the retrieval part without an LLM; it’s basically keyword search. You still have to know what you are asking, so you have to study. Study without an imprecise LLM that can feed you false information that sounds plausible.
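            Roughly, the retrieve-then-prompt idea looks like this. A toy sketch only: the tiny corpus, the keyword-overlap scoring, and all the names are made up for illustration, not any real guideline database or the tooling the posters above are talking about.

                # Toy RAG sketch: naive keyword retrieval over an in-memory corpus,
                # then building a prompt that tells the model to answer only from
                # the retrieved passages. Real systems use proper indexes (BM25,
                # embeddings) and real sources; this data is invented.

                corpus = [
                    "Guideline A: flare-ups of FOP are managed with early corticosteroids.",
                    "Guideline B: intramuscular injections should be avoided in FOP patients.",
                    "Unrelated note: hydration advice for marathon runners.",
                ]

                def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
                    """Rank documents by how many query keywords they share (keyword search)."""
                    keywords = set(query.lower().split())
                    scored = sorted(documents,
                                    key=lambda d: len(keywords & set(d.lower().split())),
                                    reverse=True)
                    return scored[:k]

                query = "management of FOP flare-ups"
                passages = retrieve(query, corpus)

                # The retrieved passages become the context the LLM is allowed to use
                # and cite; everything outside them is off-limits to the model.
                prompt = ("Answer using only these sources:\n"
                          + "\n".join(passages)
                          + f"\n\nQuestion: {query}")
                print(prompt)

            The point is that the useful part, finding the relevant passages, is plain search, and you still have to judge whether what comes back is any good.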

            There are other issues with current LLMs that make them problematic for this. Sure, you will catch on to those problems if you use them, but you still have to know more about the topic than they do.

            They are a fun toy and OK for low-stakes knowledge (e.g. cooking recipes). But as a tool in serious work they are a rubber ducky at best.

            PS: What the guy a couple of comments above said about sources, that’s probably about web search. Even when an LLM reads the sources, it can easily misinterpret them. Like how Apple removed their summaries because they were often just wrong.