Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • WorldsDumbestMan@lemmy.today · 17 hours ago

    I see it like programming randomly until you get something that's accidentally right, then you rate it, and from then on it shows up every time. I think that's roughly how it works, something like the toy sketch below. True about the prompt wording, too; that can be somewhat limited, thanks to the army of idiot beta testers who will throw every kind of prompt at it.
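
    In toy form (every name here is made up for illustration, and real LLM training is vastly more involved than this), the rate-and-reinforce loop would look something like:

    ```python
    import random

    # Toy sketch: generate answers more or less at random, let users rate
    # them, and from then on serve whichever answer has rated best.
    # Hypothetical data and helpers; not how any real chatbot is trained.
    CANDIDATES = ["lie down in a dark room", "seek emergency care"]
    ratings: dict[str, list[int]] = {c: [] for c in CANDIDATES}

    def respond(prompt: str) -> str:
        """Serve the best-rated answer so far, or a random one if none rated yet."""
        scored = {c: sum(r) / len(r) for c, r in ratings.items() if r}
        if scored:
            return max(scored, key=scored.get)
        return random.choice(CANDIDATES)

    def rate(answer: str, score: int) -> None:
        """A thumbs-up/down feeds back into what gets served next time."""
        ratings[answer].append(score)

    # Two users with the same symptoms can get opposite answers until
    # enough ratings accumulate to pin one response in place.
    first = respond("sudden, severe headache")
    rate(first, 1 if first == "seek emergency care" else -1)
    ```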

    Having said that, uh… it's not much better than just straight-up programming the thing yourself. It's like programming, but extra lazy, right?