• Hackworth@piefed.ca · 3 days ago

    The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.

    Not sure how that one sabotages the company’s AI strategy. That’s just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.

    • CombatWombat@feddit.online · 3 days ago

      If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.

        • dreamkeeper@literature.cafe · 3 days ago

        Not really, IMO. People will blame the leakers, not the LLM, and they wouldn’t be wrong. There’s nothing you can do to stop people from leaking information to the public other than the threat of job loss and a massive lawsuit.

        What would discredit the LLM is if the provider violated their contract and used the data for something their customers didn’t agree to.

    • T156@lemmy.world · 3 days ago (edited)

      That just sounds like the employees are using AI as asked of them, but the company’s own tools are bad, or the goals they’re given are bad, so they turn to one of the major AI providers like ChatGPT instead, since it’s all AI anyway. That’s not overt sabotage.