• merc@sh.itjust.works · 4 hours ago

    Crane lists five things that need to change as the AI industry scales faster than it builds a worthwhile safety architecture. Specifically, he calls for stricter confirmations, scopable API tokens, proper backups, simple recovery procedures, and AI agents operating within proper guardrails.

    “I hooked up spicy autocomplete to our production systems and it nuked them. What have I learned from this? Here are some bullet points for how the spicy autocomplete industry needs to do better.”
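    The "stricter confirmations" item from that list can be sketched in a few lines. This is purely illustrative (the names and logic are not from Crane's post): a wrapper that refuses to run destructive SQL unless the caller explicitly confirms it.

```python
import re

# Illustrative guardrail (not from the article): block destructive SQL
# statements unless the caller has explicitly confirmed the operation.
DESTRUCTIVE = re.compile(r"^\s*(DROP|DELETE|TRUNCATE|ALTER)\b", re.IGNORECASE)

def guarded_execute(sql: str, confirmed: bool = False) -> str:
    """Pretend to run `sql`; refuse destructive statements without confirmation."""
    if DESTRUCTIVE.match(sql) and not confirmed:
        raise PermissionError(f"refusing destructive statement: {sql!r}")
    return f"executed: {sql}"  # stand-in for a real database call
```

    Here `guarded_execute("DROP TABLE users")` raises `PermissionError`, while passing `confirmed=True` lets it through: the "stricter confirmations" idea in miniature.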

    • zalgotext@sh.itjust.works · 31 minutes ago

      To be fair, those bullet points are pretty standard security best practices that any software company should be following.

      But like, at the same time, even if AI companies were doing those best practices, I still wouldn’t let their products loose on production systems.

  • FellowEnt@sh.itjust.works · 8 hours ago

    Seems like user error. I’m no programmer, but even I know you don’t give an agent access to critical things, and Claude is very insistent about asking for permission at every step.

    • pinball_wizard@lemmy.zip · 2 hours ago (edited)

      Seems like user error. I’m no programmer, but even I know you don’t give an agent access to critical things

      Yes.

      But these models have (largely correctly) learned from Stack Overflow that, on average, every problem is due to not enough permissions.

      Someone fully relying on an agentic AI model is essentially destined to give it full control (or close enough), eventually.

      At some point, a tool like these LLMs either needs to not be marketed to that user, or needs stupid levels of safety warnings.

      My money is on neither solution happening, and this kind of result continuing for the foreseeable future, until the rest of us doing cleanup instigate Dune’s Butlerian Jihad to stop the damage and save our own sanity.

  • Aceticon@lemmy.dbzer0.com · 10 hours ago

    Thus completely eliminating all bugs and data corruption in Production!

    That right there is the sweet smell of Victory!

  • rtxn@lemmy.world · 16 hours ago (edited)

    PocketOS founder blames ‘Cursor running Anthropic’s flagship Claude Opus 4.6’

    Fuck that. I’m blaming the PocketOS founder and every person in the chain of decisions that led to a clanker being given this level of unrestricted access to the database and the backups.

  • Chakravanti@monero.town · 11 hours ago

    Don’t blame. Credit. Credit CrowdStrike.

    Dance, rich. Dance and don’t stop cuz I ain’t a single Arnold Schwarzenegger saving your ass.

  • deadbeef79000@lemmy.nz · 1 day ago

    Man who shit his own pants horrified that his pants are full of shit.

    Demands explanation from pants vendor.

  • Avicenna@programming.dev · 18 hours ago

    If you’re going to give an LLM a free pass to your whole prod database, the least you should do is take weekly (or daily, if feasible) offline backups of it. A hard limit against deleting stuff would be better.
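    As a toy version of that offline-snapshot idea (SQLite here purely for illustration; a real prod setup would use the vendor’s dump tooling, e.g. pg_dump, and store the files somewhere the application’s credentials cannot reach):

```python
import sqlite3
from datetime import date

def snapshot(db_path: str, dest_dir: str) -> str:
    """Write a consistent, dated copy of the database into dest_dir."""
    src = sqlite3.connect(db_path)
    dest_path = f"{dest_dir}/backup-{date.today().isoformat()}.db"
    dest = sqlite3.connect(dest_path)
    with dest:
        src.backup(dest)  # SQLite's online backup API: full, consistent copy
    dest.close()
    src.close()
    return dest_path
```

    The point of writing the copy to a separate location is exactly the "hard limit" above: an agent with database credentials alone can’t touch it.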

  • gravitas_deficiency@sh.itjust.works · 1 day ago

    If you are giving your codegen LLM (the model involved truly, genuinely doesn’t matter) admin access to your prod env, all I’m going to do is point and laugh.

  • Raven@lemmy.org · 20 hours ago

    This is fun to read. I hope this activates people’s actual intelligence.

    • chatokun@lemmy.dbzer0.com · 1 hour ago

      They blame AI, but what if this had been one of those aggressive encrypt-everything malware attacks? Why are the backups accessible at any time other than during a backup? I know, I know, convenience. But all of the backups?

      • jafra@slrpnk.net · 19 hours ago

        Actually, this is how AI should be viewed: under the right circumstances it may save lots of time, but it might also destroy things, so treat it like you would an intern…