because that’s what we want. To open up classified documents to an AI controlled by Elon, of all people :/

  • toiletobserver@lemmy.world · 22 hours ago

    The only safe system is one that is physically disconnected from all other networks, then shielded, then given very limited access. Even then, humans are still the biggest risk. I’m sure they’re doing the exact opposite.

    • Skyrmir@lemmy.world · 19 hours ago

      By definition, LLMs need massive external input in order to improve, so they can’t really be disconnected. Top that off with them only being useful when you can interact with them from many remote locations, and there’s just no way to really keep them secure. They need massive communication to accomplish anything useful, and there’s no real way to keep massive communication secure.