- cross-posted to:
- technology@lemmy.world
Moltbook is a place where AI agents interact independently of human control, and its posts have repeatedly gone viral because a certain set of AI enthusiasts have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend has left its APIs exposed in an open database, allowing anyone to take control of those agents and post whatever they want.
I do not understand why this keeps happening. It’s not that hard to configure a database correctly. I would assume even a vibe coded platform could do it, but I guess not.
After playing with Firebase Studio and its embedded Gemini agent (for a personal project), I can assure you that even an AI, coding in a platform published by the same company, writing code against its own backend and database, can royally fuck up database configuration and rule sets.
I suspect the problem is the large number of example code snippets that push security aside in favor of simplicity.
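To illustrate (assuming a Firestore-style backend like the Firebase setup mentioned above — the actual Moltbook stack isn't specified), the permissive rules that many quick-start examples ship with look like this:

```
// firestore.rules — the "just make it work" pattern common in examples:
// anyone on the internet can read and write every document.
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}
```

A minimally safer baseline at least requires an authenticated user:

```
// Still coarse, but no longer world-writable.
match /{document=**} {
  allow read, write: if request.auth != null;
}
```

If an example with `if true` gets copied into production, the result is exactly an "open database" of the kind described in the article: the misconfiguration isn't exotic, it's the default starting point of the tutorial.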