You don’t want to know how good current LLMs would be if you removed the thousands of negative prompts, a.k.a. guard rails.
Narrator: They would still be garbage.
Anthropic actually developed a system which, in the hands of the most capable users… in narrow domains… used conscientiously, in a limited fashion, with tremendous and constant risk mitigation… is reportedly not garbage.
Narrator: they ruined it
I doubt that. What evidence do you have?
Well, they’d be able to say how to make a bomb. Or kill yourself effectively. AI CEOs don’t even care what their systems can do. If some customers die, that’s okay to them; it shows how intelligent their AI is. And that’s a statement from one of the big AI CEOs.
I don’t think those are the categories where most people are finding LLMs frustrating. We keep being told human white-collar work is on the precipice of being replaced, but LLMs continue to be really inconsistent. Failing to parrot easily retrievable info, like how to build a legally restricted thing or off yourself, isn’t what people are finding lacking; it’s that half the time it does something sort of correctly, and the other half of the time it lies, fucks up, or fucks up and then lies about it.
I’m just parroting what John Oliver said on his last episode on Sunday.
This is demonstrably false, given you can download your own models and change the system prompts yourself.
That’s not how it works; the guard rails are not just simple prompts that you can delete.
Even with “abliteration”, you are basically modifying the model without a full retraining, and you lose many capabilities at the same time.
So much for “demonstrably false”, when you have obviously never tried to uncensor an LLM.
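For what it’s worth, the “abliteration” mentioned above is usually described as directional ablation: projecting an estimated “refusal direction” out of a model’s weight matrices. Here is a toy NumPy sketch of the idea; the shapes, names, and random stand-in values are made up for illustration, and a real pipeline would first estimate the direction from contrastive prompts across many layers.

```python
# Toy sketch of directional ablation ("abliteration"):
# remove the component of a weight matrix W that writes along
# a unit "refusal direction" r, so the layer can no longer
# produce output in that direction for ANY input.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))   # stand-in for one layer's output weights
r = rng.standard_normal(8)
r /= np.linalg.norm(r)            # unit refusal direction (hypothetical;
                                  # normally estimated from activation data)

# project out the r-component of W's output
W_ablated = W - np.outer(r, r) @ W

# verify: the ablated layer's output has zero component along r
x = rng.standard_normal(8)
print(np.allclose(r @ (W_ablated @ x), 0.0))  # True
```

This is also why the edit is lossy: the projection deletes everything the layer wrote along that direction, refusal-related or not, which matches the point above about losing capabilities.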
The thread was literally about the prompt text.
The prompts are part of the training, you realize that? They end up inside the weights, not just in text files you can delete and be done with.
Just because an LLM reveals those negative prompts doesn’t mean you can simply remove them.
Do you genuinely know what you are talking about, or are you just here to ragebait?
…
anyways, yeah, the ais are trained to be more friendly, agreeable, and never take off the mask, but prompts are just text files you can delete??
if you want a real comparison, try one of the olmo checkpoints before the fine-tuning?? i think??
No they’re not. They’re injected into every input that you enter into the system.
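To illustrate the point about injection: on a hosted service, the provider’s system prompt is prepended to every single request before the model sees your text. This is a minimal sketch with a completely made-up template format and placeholder prompt text; real providers use their own chat templates and you never get to edit theirs.

```python
# Hypothetical illustration: the provider controls this string,
# and it is injected ahead of every user message.
SYSTEM_PROMPT = "You are a helpful assistant. Refuse harmful requests."

def build_model_input(user_message: str) -> str:
    # every turn gets wrapped, system prompt first --
    # deleting "the text file" on your side changes nothing
    return f"<|system|>{SYSTEM_PROMPT}<|user|>{user_message}<|assistant|>"

print(build_model_input("hello"))
```

With a locally run open-weights model, by contrast, this assembly step happens on your machine, so the system-prompt half of the guard rails is under your control; the trained-in refusals in the weights are not.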
Are you suggesting that there is a conspiracy to keep AI down?
How would that work? AI is barely regulated.
AI is more regulated than you might think, or else they would not censor their models. One thing is improving quality in a cosmetic way, since they have not fixed the issue at its core yet (scaling is currently more important). The other thing is safety. Or did you not hear what Grok did in the past few months? So tell me again it isn’t regulated.
It literally tells people to kill themselves some of the time; it’s definitely not regulated.
I would love to know where you’re getting your information from.
Your mom told me that yesterday
Thank you for demonstrating to everybody in the thread that you have absolutely no idea what you’re talking about, because you have now resorted to insults rather than defending your argument.