• Cherries@lemmy.world
    10 hours ago

An intern probably would not go on a mass deletion spree. Also, an intern doesn’t eat a billion GPUs.

    • Buddahriffic@lemmy.world
      7 hours ago

      It doesn’t happen often, but there were horror stories like this before AI was a thing, and not just from interns. One that comes to mind was a guy running two terminals: one for the production db, one for the dev environment. He wanted to delete the dev db to start fresh but accidentally ran the command in the production terminal.

      Can’t remember if that was the GitLab one, but the GitLab incident also had issues where multiple backup options were never tested and none except the longest-interval one worked (or maybe one did work but the initial command nuked it too, either directly or via mechanisms that “backed up” the deletion).

      Not that that makes these any less stupid. LLMs aren’t genies that must follow your orders to the letter. They are text prediction engines that use statistics from their training data to determine the most likely next token. Any instructions you give are just part of the context preceding the tokens it needs to predict, and any other part of the context could be weighted as more important or forgotten entirely. That goes especially for agents intended to work on their own, which might have conflicting instructions: ask before doing something dangerous, but also get things done without human input.
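      The “instructions are just more context tokens” point can be illustrated with a deliberately toy predictor — nothing like a real transformer, just bigram counts over a made-up corpus (all names and data here are hypothetical). The model has no obedience mechanism; the “instruction” at the front of the prompt is just tokens it may effectively ignore:

```python
from collections import Counter, defaultdict

# Toy illustration (NOT a real LLM): a bigram "model" that predicts the most
# likely next token purely from counts in its training text.
def train_bigrams(corpus: str) -> dict:
    tokens = corpus.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts: dict, context: str) -> str:
    # A bigram model only looks at the last token of the context; everything
    # before it -- including any "instruction" -- is ignored. An extreme
    # version of earlier context being outweighed or forgotten.
    last = context.split()[-1]
    if last not in counts:
        return "<unk>"
    return counts[last].most_common(1)[0][0]

corpus = "delete the dev db delete the dev db keep the prod db"
model = train_bigrams(corpus)
# The "instruction" at the start changes nothing; prediction follows counts.
print(predict_next(model, "never touch prod ; delete the"))  # prints "dev"
```

      Real LLMs attend over the whole context rather than just the last token, but the same principle holds: instructions compete statistically with everything else in the window, they don’t bind.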

      Frameworks like Claude Code help set up a good context for the LLM to work in, but that isn’t perfect (and might never be).