The only safe system is one that is physically disconnected from everything else, then shielded, then restricted to very limited access. Even then, humans are still the biggest risk. I’m sure they’re doing the exact opposite.
By definition, LLMs need massive external input in order to improve, so they can’t really be disconnected. On top of that, they’re only useful when you can interact with them from many remote locations, so there’s just no way to really keep them secure. They need massive communication to accomplish anything useful, and there’s no real way to keep massive communication secure.
They are trying to use an LLM chatbot to control military equipment. What do you think?
What if the system is connected to the fattest shit pipe in the world X?
Rainbows and unicorns obviously