• CombatWombat@feddit.online · 3 days ago

    Oh no, it’s so irresponsible of europesays.com to publish this practical list of ways to sabotage your company’s AI rollout. Hopefully no other outlets include longer, more detailed lists, or we might see this kind of behavior start to spread:

    The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools. Some employees report outright refusing to use AI tools. Others have even admitted to tampering with performance reviews or intentionally generating low-output work to make AI appear less effective.

    • searabbit@piefed.social · 3 days ago

      This is amateur work. I’ve seen someone volunteer to head the staff AI training, outline in the presentation how bad AI is (e.g., terrible for the environment, not reliable — all true things), and then put out the most half-assed training rollout. The effect was that half the staff, intentionally or unintentionally, ended up doing other forms of sabotage.

    • Hackworth@piefed.ca · 3 days ago

      The sabotage entails entering proprietary information into public AI tools, or using unapproved AI tools.

      Not sure how that one sabotages the company’s AI strategy. That’s just plain old data insecurity. Posting the same information to a forum would accomplish the same harm.

      • CombatWombat@feddit.online · 3 days ago

        If the data leaks via an LLM, it discredits the LLM. If it leaks via a forum, it discredits the forum.

        • dreamkeeper@literature.cafe · 3 days ago

          Not really, imo. People will blame the leakers, not the LLM, and they wouldn’t be wrong. There’s nothing you can do to stop people from leaking info to the public other than the threat of job loss and a massive lawsuit.

          What would discredit the LLM is if the LLM provider violated their contract and used the data for something their customers didn’t agree to.

      • T156@lemmy.world · edited · 3 days ago

        That just sounds like the employees are using AI as asked of them, not committing overt sabotage. The company’s own offerings and tools are bad, or the goals they’re given are bad, so they just turn to one of the major AI companies, like ChatGPT, since it’s all AI anyway.