osanna@thebrainbin.org to Fuck AI@lemmy.world · 3 days ago
haha that's what you get (techcrunch.com)
8 comments · cross-posted to: technology@lemmy.world
Almacca@aussie.zone · 3 days ago

"As several others on X pointed out, prompts can't be trusted to act as security guardrails. Models may misconstrue or ignore them."

It's not a bug; it's a feature. People who say they are using them successfully are cobbling together methods to protect themselves. Such a great product! Goodness knows many of us would love help with email, grocery orders, and scheduling dentist appointments. If you can't do that shit on your own, just shoot yourself, or get an AI to do it if you're that useless.