Jailbreaking is an inherent problem with LLMs that can never be solved. Any safeguard has to be less capable than the LLM it protects, else you can just target that. So there will always be a way to communicate with the LLM in a way that bypasses the safeguard.
It’s like trying to sanitise user input from SQL injections, except the database speaks every form of communication documented by humanity.
All this is to say, I’m glad I’m not responsible for any of these systems.
Please stop using Amazon.
They’re evil, we all know they’re evil, why do we bother?
I still have gift cards for Amazon. I have to use them or else Amazon gets free money. I buy stupid shit like wine gummies with it usually.
That’s fair and acceptable.
Buy something with free returns. Return it. Repeat.
from an ecological standpoint, though, it’s also a total disaster
Okay but the main alternative where I live is Walmart…
I do my best to get speciality items from specific sellers but there aren’t any supercenters left except WalMart, Sam’s, and Amazon. There is a Costco but it’s a hour round trip.
It’s okay. People who have choice don’t understand people who don’t.
The call for action is for folks who have options. You don’t, do what you can when you can <3
point out the corporation that isn’t evil
so that Amazon can buy them
tbh the only service i still use that is owned by Amazon is Twitch(to only watch one streamer).
Everyone is evil under capitalism
People are not evil for not being able to change the system they live under.
Bro… It only works for people from rich countries. People from shitholes like Russia or Iran have what they deserve
But can it be my waifu?
One thing that gets me about AI chat agents is the idea of attack surface. If you have a clearly defined protocol you can curtail most of the possible attacks by narrowing things, only accepting well formed requests, and validating both on the user end and then on the server end before processing anything. An LLM is inherently wide in attack surface given the way it is structured. It can take a prompt which can be any set of characters connected together into tokens. These tokens can’t easily be filtered for intent or goal and yet they can get the LLM to drop other rules or restrictions because they are just other prompts.
A simple coded padlock is not very secure, but a door with no walls is less secure.
Oh yeah. Aight, I put on my robe and wizard hat.
Is there a place that collects which prompts can be used for these things in an up to date way?
So prompts that define THE GenAI personality?
Did just amazon and more using know names to fire them of knowledge. Because Rufus is the tool to burn CDs and record operative systems
Come again?
Hoe many CDs did you burn last year that they might want to steal that business away from you?
Rufus is one of the most popular programs for writing.iso files to flash drives to make bootable usbs.
I don’t think there is a conspiracy here but it is interesting
The Gemini one is the one I find interesting. Protocol to put up pages that aren’t indexed by Google? Well now it’s harder to get information about.
I don’t think it’s a conspiracy either. It’s just interesting.
Yeah I just used Rufus last week lol







