I knew from the moment I deeply interacted with AI that this would be an issue. It’s wild, and AI is by far the coolest invention since the Internet. It’s a philosopher’s dream. It’s a playground for your brain! It’s a sparring partner for stress-testing your thoughts and ideas.
Having said that, I can easily see some people losing their grip depending on how they interact with it.
I can’t run local models bigger than 7B Q4_K_M or so, so I’m safe for now. The idea of revealing my deeper personality to corporate LLMs is horrifying.
No, because I don’t use AI slop.
By far the easiest solution.
With so many services forcing it upon us, I’d have to disagree.
It’s also getting to be a bit of a chore to block AI elements on all the various websites implementing them, and a few of the worst offenders (Google is one that I know does this) add a random string of characters to the element that serves as a unique identifier and periodically changes, which requires me to re-add them to my uBO blocklist. On each device…
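For what it’s worth, a hedged sketch of what a more change-resistant filter can look like, keying on a stable attribute or the visible text instead of the rotating class name. The selectors below are made up for illustration; only the uBO syntax itself (cosmetic `##` filters and the procedural `:has-text()` operator) is real:

```
! Hypothetical examples only; swap in whatever stable hook the element actually exposes
! Match on an attribute that doesn't rotate, instead of the randomized class
example.com##div[aria-label="AI Overview"]
! Or match on the visible text, scoped to a container, with a procedural filter
example.com##div[role="complementary"]:has-text(AI Overview)
```

No guarantee it survives every redesign, but it at least doesn’t break every time the random identifier changes.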
It is the most effective solution for sure, though.
You don’t have to block them. Just don’t use them.
I put in an IT ticket the other day over the fucking Copilot button on my work-issued Surface laptop. They actually told me to install PowerToys. So I did. And disabled that fucking button.
I’m not immune to it, but it’ll have to fight all the other psychoses I have.
You might be entitled to compensation for all the money you spent.
ouija board lied to me
Nope
One dude dies on his way to meet his catfishing AI girlfriend and every news outlet pretends it’s Rise Of The Terminators.
The American Psychological Association met with the FTC in February to urge regulators to address the use of AI chatbots as unlicensed therapists.
Protect our revenue, er, patients!
AI chatbots are terrible therapists. What the fuck are you even implying?
I think that’s a little cynical. I know a few people who work in psych, some in ERs, and it’s becoming more common to hear about people following advice they got via ChatGPT and harming themselves. One particularly egregious case was a patient who was using the program for therapy, then suddenly pivoted to asking what the highest buildings were locally, which, of course, the program answered.
The highest building will just make you regret your action for longer while falling. May I suggest this building close to your location that is exactly as tall as it needs to be to do the job? ChatGPT, probably.
Funny, but the reality is even darker. There are zero safeguards built into the program for these scenarios, so it draws absolutely no connection between the two topics, something even a self-styled, unlicensed “life coach” would easily do.