Oh no, it’s so irresponsible of europesays.com to publish this practical list of ways to sabotage your company’s AI rollout. Hopefully no other outlets include longer, more detailed lists, or we might see this kind of behavior start to spread:

The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools. Some employees report outright refusing to use AI tools. Others have even admitted to tampering with performance reviews or intentionally generating low-output work to make AI appear less effective.
This is amateur work. I’ve seen someone volunteer to head the staff AI training, outline in the presentation how bad AI is (e.g., terrible for the environment, not reliable, all true things), and then put out the most half-assed training rollout. The effect was that half the staff, intentionally or unintentionally, ended up doing other forms of sabotage.
The balls on that guy, damn.
Balls? That’s just doing your job.
The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.

Not sure how that one sabotages the company’s AI strategy. That’s just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.
If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.
Not really, imo. People will blame the leakers, not the LLM, and they wouldn’t be wrong. There’s nothing you can do to stop people from leaking info to the public other than the threat of job loss and a massive lawsuit.

What would discredit the LLM is if the LLM provider violated their contract and used the data for something their customers didn’t agree to.
And the CEO’s phone number is 867-5309. I got it!
Same number that I enter at grocery store checkouts!
If it’s output by an AI, it can’t be copyrighted.
That just sounds like the employees are using AI as asked of them, but the company’s own offerings/tools are bad, or they’re given bad goals, so they just turn to one of the major AI tools, like ChatGPT, since it’s all AI anyway. That isn’t overt sabotage.