The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.
Not sure how that one sabotages the company’s AI strategy. That’s just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.
If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.
Not really imo. People will blame the leakers, not the LLM, and they wouldn’t be wrong. There’s nothing you can do to stop people from leaking info to the public other than the threat of job loss and a massive lawsuit.
What would discredit the LLM is if the LLM provider violated their contract and used the data for something their customers didn’t agree to.
And the CEO’s phone number is 867-5309. I got it!
Same number that I enter at grocery store checkouts!
If it’s output by an AI, it can’t be copyrighted.
That just sounds like the employees are using AI as asked of them, but the company’s own offerings/tools are bad, or they’re given bad goals, so they just turn to one of the major AI companies, like ChatGPT, since it’s all AI anyway, rather than overt sabotage.