Anthropic is probably on the leading edge of vibe coding, and in the past month, they’ve had major server uptime issues, they accidentally released the source code for their biggest product, and as of this week, their latest model was accidentally posted on an open URL for anyone to download. In the last 6 months, people have actually switched from vibe coding small apps to vibe coding major sections of their core infrastructure, and all evidence so far suggests that the consequences will come.
I’m vibe coding a fairly complicated bash script to fully automate upgrading a web server at work. For context, I have over two decades of experience in programming/data analytics/tech, but I’m a Linux and server admin newbie.
It’s comically bad at it. I had to tell it not to print the production database passwords to the console and to plaintext log files; then, about a dozen prompts later, it did it again. The restore script rm-ed things (as sudo) before checking that it had a valid backup file to restore from. It keeps deleting the comments in the code snippets I send it to update or fix, even when explicitly told to keep them. I asked it to prepend time to the live commands (i.e., the ones that weren’t “dry run” echoes), and then it deleted them all again when I asked it to refactor something unrelated.
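For anyone wondering what the restore-script fix looks like: the core rule is to validate the backup before deleting anything. Here’s a minimal sketch of that ordering; the function name, paths, and the tar-based backup format are my own illustrative assumptions, not the actual script.

```shell
#!/usr/bin/env bash
set -euo pipefail

# safe_restore: verify the backup BEFORE removing anything it replaces.
# (Illustrative sketch; names and tar format are assumptions.)
safe_restore() {
  local backup="$1" target="$2"

  # 1. Refuse to proceed unless the backup exists, is non-empty,
  #    and is a readable tar archive.
  [ -s "$backup" ] || { echo "backup missing or empty" >&2; return 1; }
  tar -tzf "$backup" > /dev/null || { echo "backup is corrupt" >&2; return 1; }

  # 2. Only now is it safe to remove the old contents and restore.
  rm -rf "$target"
  mkdir -p "$target"
  tar -xzf "$backup" -C "$target"
}

# Demo with throwaway paths (illustrative only):
workdir="$(mktemp -d)"
mkdir -p "$workdir/site"
echo "index" > "$workdir/site/index.html"
tar -czf "$workdir/backup.tar.gz" -C "$workdir/site" .
safe_restore "$workdir/backup.tar.gz" "$workdir/restored"
cat "$workdir/restored/index.html"
```

The point is the ordering: the rm only happens after both checks pass, so a missing or truncated backup aborts the run instead of leaving you with nothing.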
It’s been great learning for me, and I’m definitely getting this job done faster and to a higher quality than I could on my own, but holy hell these scripts would have been a disaster if someone just ran them “as is”. I’ve needed to fix dozens of errors that could have really screwed things up.
I wonder how often people go through their vibe-coded outputs with the careful attention they need. I’m guessing infrequently. LLMs are just word prediction machines; they don’t understand anything.
I saw one person who, despite getting three warnings from the chatbot (one in the chat, two in the file itself) not to put a plaintext API key into a version-controlled env file, did so anyway. It’s not just about the AI; it’s also about the people using it. Someone with experience can exploit the speed of an AI while also catching its mistakes. A “vibe coder” won’t know the difference.
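The env-file fix is a couple of lines: tell git to ignore the file that holds real secrets and commit only an empty template. A minimal sketch, with an illustrative repo path and variable name:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Throwaway repo for the demo (path is illustrative).
repo="$(mktemp -d)"
git init -q "$repo"
cd "$repo"

# Real secrets live in .env, which git never sees...
echo "DB_PASSWORD=supersecret" > .env
echo ".env" > .gitignore

# ...while a committed template documents which variables are needed.
echo "DB_PASSWORD=" > .env.example

# Confirm git will refuse to track the secret file.
git check-ignore -q .env && echo ".env is ignored"
```

This doesn’t protect a key that was already committed, of course; once a secret has landed in history it should be rotated, not just deleted.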