It really doesn’t. Pretty much all models so far lose their guardrails once you are deep enough in the conversation. There were multiple news articles about AI giving someone the go-ahead to off themselves.
and how it’s used
No matter which way you use it, it’s bad. If you ask it for tips, you are essentially asking the average redditor for mental health advice. If you use it for conversations, you are forming a parasocial relationship with an AI that will constantly misremember things you told it before while reinforcing whatever worldview you have. The only thing that would slightly help is supervision by a human, but that would make the whole exercise redundant.
Do you think that comment should be applied to disabled people who can’t access any other form of therapy?
If they were desperate enough to be forced into using AI, then the comment above wouldn’t apply to them, but instead to the ones responsible for the broken system in the first place.
I see it differently, but thanks for chatting with me.