Seems like user error. I’m no programmer, but even I know you don’t give an agent access to critical things, and Claude is very insistent about asking for permission at every step.
Seems like user error. I’m no programmer, but even I know you don’t give an agent access to critical things
Yes.
But these models have (largely correctly) learned from Stack Overflow that, on average, every problem is due to not having enough permissions.
Someone fully relying on an agentic AI model is essentially destined to give it full control (or close enough), eventually.
At some point, a tool like these LLMs either needs to not be marketed to that kind of user, or needs stupid levels of safety warnings.
My money is on neither solution happening, and this kind of result continuing for the foreseeable future, until those of us doing the cleanup instigate Dune’s Butlerian Jihad to stop the damage and save our own sanity.