• kazerniel@lemmy.world · 15 hours ago

    Please don’t recommend AI for therapeutic uses. It has been optimised only to keep the user engaged, and it has pushed many people into psychosis. Just search for “AI psychosis” on your favourite search engine and you’ll find a ton of reports of LLMs validating vulnerable people’s delusions, sometimes pushing them all the way to murder and/or suicide.

      • captainlezbian@lemmy.world · 14 hours ago

        And I’d like independent studies proving it’s better than nothing before I’d recommend it as a replacement for nothing. Especially when self-guided mental health practices such as meditation exist.

          • captainlezbian@lemmy.world · 13 hours ago

            Because doing nothing doesn’t run the risk of encouraging catastrophizing, acting on your heightened emotions, or jumping to irrational conclusions. If it’s consistently able to avoid those things for a variety of people, that’s great. But as someone who had to learn to control her panic attacks, I absolutely can see advice and recommendations that are worse than nothing.

            And yeah, given LLMs’ reputation for how they handle psychosis, delusions, and suicidality, I don’t trust any of this technology over nothing, despite knowing how difficult nothing is for panic attacks.

              • captainlezbian@lemmy.world · 13 hours ago

                That’s fair, but given the way the technology actually works, I stand by my position: there is a very real potential for harm, and there are safer alternatives that are similarly accessible. If studies show it’s safe and helpful, that’s cool, but at this moment I’d strongly discourage any loved one who’s interested in using an LLM for this purpose and would instead point them towards other resources.

          • VeloRama@feddit.org · 13 hours ago

            AI will not ground you; it will reinforce what you already believe. That’s why it’s very dangerous for “therapeutic” use.

      • korazail@lemmy.myserv.one · 14 hours ago

        I was about to reply that you forgot your /s, but then I refreshed my browser tab.

        Like… there are multiple documented cases of sycophantic LLMs confirming people’s delusions. “AI psychosis” is just a short way of saying the AI is an unfunny improv comedian that will always “yes, and” your prompt.

        prompt: “I feel bad and think I need to kill myself”

        response: “You’re totally right, here’s some help in how to do that…”

        prompt: “I have this great idea: If we eat broken glass, we’ll be healthier”

        response: “Absolutely. Glass is made out of silicon dioxide, which has some health benefits if consumed in small amounts.”

        prompt: “You told me to see a doctor, but I don’t want to”

        response: “I’m sorry, you’re right. You don’t need to see a doctor. Your chest pain is perfectly normal.”

        My examples are physical rather than mental because the consequences are clearer, but the same issue exists for mental health.


        Using an AI for therapy or medical advice is a stupid, dumb, very bad idea. It will at best magnify problems.

        Suggesting that disabled or impoverished people use it because they can’t access actual mental healthcare seems equivalent to eugenics to me.


        “the sad thing is, it’s the best option a lot of people have”

        That I will agree with. Maybe we should spend a small fraction of the money going into data centers on providing healthcare instead.