Ok, but my counterargument is that if they pass their exam with GPT, shouldn’t they be allowed to practice medicine with GPT in hand?
Preferably using a model that’s been specifically trained to support physicians.
I’ve seen doctors who are outright hazards to patients; hopefully this would limit the damage from the things they misremember…
EDIT: ITT: a bunch of AI deniers who can’t provide a single valid argument, but that doesn’t matter because they have strong feelings. Be sure to slam the “this doesn’t align with how I want my world to be” button!
Out of curiosity, I put some organic chemistry practice questions into ChatGPT just now; it got 1 out of 5 correct. I’m not an outright hater of AI (I do dislike how it’s being forced into some things and makes the original product worse, and the environmental impact it has), but I’m sure in the future it’ll be able to do some wondrous things.
As it stands though, I would rather my doc do a review of the literature than trust ChatGPT alone.
I love having a doctor who offloaded so much of their knowledge onto a machine that they can’t operate without a cell phone in hand. It’s a good thing hospitals famously have impenetrable security, and have never had network outages. And fuck patient confidentiality, right? My medical issues are between me, my doctor, and Sam Altman
And the people Sam Altman sold your info to.
It is our info now comrade.
Do you realize your argument is basically the same argument people used to make about calculators? That they were evil and should never be used because they make kids stupid and how will their brains develop and yap yap yap.
There is a scenario where doctors are AIDED by AI tools and save more lives. You outright reject this based on the edge case that they lose that tool and have to *checks notes* do what they do right now. How does that even make sense?
Going by that past example, this is how it’ll go: you’ll keep bitching and moaning that it’s useless and evil until your dying breath, an old generation that will never embrace the new tech, while the world around you moves on and figures out how to best take advantage of it. We’re at the stage where its capabilities are overhyped and oversold, as always happens when something is new; eventually we’ll figure out how to best use these tools and when/where to avoid them.
How is this an AI problem? That’s already fucked - 5 million patients’ data breached here, another 4.5M patients there, the massive Brazil one with 250 million patient records, etc. etc. The list is endless. As health data increasingly goes online, you’d best come to terms with the fact that it will be unlawfully accessed sooner rather than later, with or without AI.
EDIT to add: on your point about network outages, do you know what happens right now when there’s a network outage at a hospital? Everything already stops working - you can’t admit patients, you don’t have access to their history or exams, you basically can’t prescribe anything, you can’t process payments. Being unable to access AI tools is the least of the concerns.
That might be okay if what said GPT produces were reliable and reproducible, not to mention backed by valid reasoning. It’s just not there - far from it.
It’s not just far off. LLMs inherently make stuff up (aka hallucinate). There is no cure for that.
There are some (non-LLM, but neural-network) tools that can be somewhat useful, but a real doctor needs to do the job anyway, because all of them have various chances of being wrong.
Not only is there a cure, it’s already available: most models right now provide sources for their claims. Of course, this requires of the user the gargantuan effort of clicking on a link, so most don’t and complain instead.
This is stupid. Fully reading and analyzing a source for accuracy and relevance can be extremely time-consuming. That’s why physicians have databases like UpToDate and DynaMed that have expert (i.e. physician and PhD) analyses and summaries of the studies in the relevant articles.
I’m a 4th year medical student and I have literally never used an LLM. If I don’t know something, I look it up in a reliable resource and a huge part of my education is knowing what I need to look up. An LLM can’t do that for me.
And why are you assuming that a model designed to be used by physicians would not include the very same expert analysis that goes into UpToDate or DynaMed? This is something that is absolutely trivial to do; the only thing stopping it is copyright.
AI can not only look up reliable sources, it will probably be much better and faster at it than you or I or anybody.
I’m a 4th year medical student and I have literally never used an LLM

It was clear enough from your post, but thanks for confirming. Perhaps you should give it a try so you can understand its limitations and strengths first-hand, no? Grab one of the several generic LLMs available and ask something like:
Can you provide me with a small summary of the most up-to-date guidelines for the management of fibrodysplasia ossificans progressiva? Please be sure to include references, and only consider sources that are credible, reputable, and peer-reviewed whenever possible.

Let me know how it did. And note that it’s probably a general-purpose model trained on very generic data, not at all optimized for this usage - but it’s impossible to dismiss the capabilities here…
It’s called RAG, and it’s the only “right” way to get any accurate information out of an LLM. And even that is not perfect. Far from it.
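To make it concrete, here’s a toy sketch of the idea in Python. To be clear, this is purely illustrative: keyword overlap stands in for real embedding/vector search, the three-line corpus is made up, and the final prompt is just printed rather than sent anywhere, since the model call would be whatever API you actually use:

```python
# Toy RAG sketch: retrieve passages, then ground the model in them.
# Keyword overlap stands in for real vector search; corpus is made up.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by how many query words they contain."""
    words = set(query.lower().split())
    return sorted(corpus,
                  key=lambda p: len(words & set(p.lower().split())),
                  reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Tell the model to answer ONLY from the retrieved passages."""
    sources = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return ("Answer using only the sources below and cite them as [n]. "
            "If they don't contain the answer, say so.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {query}")

corpus = [
    "Guideline X (2023): first-line management of condition Y is drug Z.",
    "Review A (2021): drug Z is contraindicated alongside condition W.",
    "Unrelated passage about hospital billing codes.",
]
question = "What is the first-line management of condition Y?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)  # send this to whatever LLM you actually use
```

The retrieval step is the whole point: the model answers from passages you can actually read and check, instead of from whatever it half-remembers from training. That’s what makes the citations verifiable at all.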
You can use the retrieval part without an LLM - it’s basically keyword search. You still have to know what you’re asking, so you have to study. Study without an imprecise LLM that can feed you false information that sounds plausible.
There are other problems with current LLMs that make them problematic. Sure, you will catch on to those problems if you use them, but you still have to know more about the topic than they do.
They are a fun toy and OK for low-stakes knowledge (e.g. cooking recipes). But as a tool for serious work they are a rubber ducky at best.
PS: What the guy a couple of comments above said about sources is probably about web search. Even when an LLM reads the sources, it can misinterpret them easily - like how Apple removed their summaries because they were often just wrong.
Why bother going to the doctor then? Just use WebMD.
For what it’s worth, I was recently urged by ChatGPT to go to the hospital after explaining my symptoms, and it turns out I had appendicitis.
Do you think the doctor needed to check ChatGPT to see if that was true?
“just replace developers with AI”
You bother going to the doctor because an expert using a tool is different than Karen using the same tool.