

It’s not just far. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.
There are some (non-LLM, but still neural network) tools that can be somewhat useful, but a real doctor needs to do the job anyway, because all of them have some chance of being wrong.
It’s called RAG, and it’s the only “right” way to get accurate information out of an LLM. And even that is not perfect. Not by a long shot.
You can use it without an LLM. It’s basically keyword search. You still have to know what you are asking, so you have to study. Study without an imprecise LLM that can feed you false information that sounds plausible.
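To make the “it’s basically keyword search” point concrete, here is a minimal sketch of the retrieval half of RAG on its own, with no LLM anywhere. The toy documents and the TF-IDF setup are made up for illustration; any plain keyword or BM25 index would do the same job.

```python
# Minimal sketch: the retrieval step of RAG without any LLM.
# Plain TF-IDF keyword matching over a toy document set (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Metformin is a first-line medication for type 2 diabetes.",
    "Ibuprofen is a nonsteroidal anti-inflammatory drug.",
    "Amoxicillin is an antibiotic used for bacterial infections.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)

def search(query: str, top_k: int = 2):
    """Return the top_k documents ranked by keyword similarity to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    return sorted(zip(scores, documents), reverse=True)[:top_k]

for score, doc in search("what do you take for type 2 diabetes"):
    print(f"{score:.2f}  {doc}")
```

A RAG setup just pastes the top hits into the model’s prompt, so the quality of the retrieval, and your own ability to judge the hits, still does the heavy lifting.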
There are other issues with current LLMs that make them problematic. Sure, you will catch onto those problems if you use them enough, but you still have to know more about the topic than they do.
They are a fun toy and ok for low-stakes knowledge (e.g. cooking recipes). But as a tool in serious work they are a rubber ducky at best.
PS: What the guy a couple of comments above said about sources is probably about web search. Even when an LLM reads the sources, it can easily misinterpret them. Like how Apple pulled their notification summaries because they were often just wrong.