As a software developer, I’ve found some free LLMs provide real productivity boosts. It’s a hair-pulling experience when you try too hard to get a bad LLM to correct itself, and learning to switch away from bad LLMs quickly is a key skill in using them. A good model is one whose broken code you can fix, and which can then understand why the fix you gave it works. They need a long context window so they don’t repeat their mistakes; Qwen 3 is very good at this. Open source also means a future of customizing to a domain (e.g. language-specific optimizations), plus privacy and unlimited use if you have enough local RAM, with some confidence that the AI is working for you rather than collecting data for others. Claude Sonnet 4 is stronger, but free access is limited.
The permanent downside of the high-market-cap US AI industry is that it will always be a vector for NSA/fascist empire supremacy and Skynet-style goals, on top of potentially stealing your input/output streams. The future for users who need to opt out of these threats is local inference, and open source that can be customized to the domains that matter to users and organizations. From my own investigations, open models are already at close parity, and customization is relatively low-hanging fruit and a certain path to exceeding parity for most applications.
No LLM can be trusted to let you do something you have no expertise in. That capability will remain an optimistic future for longer than you hope.
I think the key to good LLM usage is a light touch. Let the LLM know what you want, maybe refine it if you see where the result went wrong. But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product. Just abandon it and try to do the thing yourself or get someone who knows what you want.
They get confused easily, and despite what is being pitched, they don’t really learn very well. So if they get something wrong the first time they aren’t going to figure it out after another hour or two.
In my experience, they’re better at poking holes in code than writing it, whether that’s greenfield or brownfield.
I’ve tried to get it to make sections of changes for me, and it feels very productive, but when I time myself I find I spend probably more time correcting the LLM’s work than if I’d just written it myself.
But if you ask it to judge a refactor, then you might actually get one or two good points. You just have to really be careful to double check its assertions if you’re unfamiliar with anything, because it will lead you to some real boners if you just follow it blindly.
At work we’ve got CodeRabbit set up on our GitHub and it has found bugs that I wrote. Sometimes the thing drives me insane with pointless comments, but just today it found a spot that would have been a big bug in prod in like 3 months.
But if you find yourself deep in conversation trying to explain to the LLM why it’s not getting your idea, you’re going to wind up with a bad product.
Yes. Kind of. It takes (a couple of days of) experience with LLMs to know that failing to understand your corrections means immediate delete and try another LLM. The only OpenAI LLM I tried was their 120B open source release. It insisted that it was correct in its stupidity. That’s worse than LLMs that forget the corrections from 3 prompts ago, though I also learned that’s also grounds for deletion over any hope of usefulness.
It is not useless. You should absolutely continue to vibes code. Don’t let a professional get involved at the ground floor. Don’t inhouse a professional staff.
Please continue paying me $200/hr for months on end debugging your Baby’s First Web App tier coding project long after anyone else can salvage it.
And don’t forget to tell your investors how smart you are by Vibes Coding! That’s the most important part. Secure! That! Series! B! Go public! Get yourself a billion dollar valuation on these projects!
Keep me in the good wine and the nice car! I love vibes coding.
Not me, I’d rather work on a clean code base without any slop, even if it pays a little less. QoL > TC
I’m not above slinging a little spaghetti if it pays the bills.
I’m sure it’s fun to see a series of text prompts turn into an app, but if you don’t understand the code and can’t fix it when it doesn’t work without starting over, you’re going to have a bad time. Sure, it takes time and effort to learn to program, but it pays off in the end.
Clearly satire
It’s kind of hard for me to tell on this one. Maybe the boomer lead is seeping into my brain.
Nah, it’s the microplastics.
Microplastics are stored in the balls.
Why not both ™?
With a pinch of PFAS for good measure?
So there are multiple people in this thread who state their job is to unfuck what the LLMs are doing. I have a family member who graduated in CS a year ago and is having a hell of a time finding work, how would he go about getting one of these “clean up after the model” jobs?
Has he tried being a senior developer? He should really try being a senior developer.
He needs at least a decade of industry experience. That helps me find jobs.
It would be nice if software development were a real profession and people could get that experience properly.
No idea, but I am not sure your family member is qualified. I would estimate that a coding LLM can code as well as a fresh CS grad. The big advantage that fresh grads have is that after you give them a piece of advice once or twice, they stop making that same mistake.
Where is this coming from? I don’t think an LLM can code at the level of a recent cs grad unless it’s piloted by a cs grad.
Maybe you’ve had much better luck than me, but coding LLMs seem largely useless without prior coding knowledge.
What’s this based on? Have you met a fresh CS graduate and compared them to an LLM? Does it not vary person to person? Or fuck it, LLM to LLM? Calling them not qualified seems harsh when it’s based on sod all.
It makes me so mad that there are CS grads who can’t find work at the same time as companies are exploiting the H1B process saying “there aren’t enough applicants”. When are these companies going to be held accountable?
Never, they donate to get the politicians reelected.
This is in no way new. 20 years ago I used to refer to some job postings as H1Bait because they’d have requirements that were physically impossible (like having 5 years experience with a piece of software <2 years old) specifically so they could claim they couldn’t find anyone qualified (because anyone claiming to be qualified was definitely lying) to justify an H1B for which they would be suddenly way less thorough about checking qualifications.
Yeah companies have always been abusing H1B, but it seems like only recently is it so hard for CS grads to find jobs. I didn’t have much trouble in 2010 and it was easy to hop jobs for me the last 10 years.
Now, not so much.
After they fill up on H1B workers and find out that only 1/10 is a good investment.
H1B development work has been a thing for decades, but there’s a reason why there are still high-paying development jobs in the US.
The difficult part is going to be that new engineers are not generally who people turn to to unfuck code. Even before the LLMs, junior engineers were generally the people fucking things up.
It’s through fucking lots of stuff up and unfucking that stuff up and learning how not to fuck things up in the first place that you go from being a junior engineer to a more senior engineer. Until you land in a lofty position like staff engineer and your job is mostly to listen to how people want to fuck everything up and go “maybe let’s try this other way that won’t fuck everything up instead”
Tell your family member to network, that’s the best way to get a job. There are discord servers for every programming language and most projects. Contribute to open source projects and get to know the people.
Build things, write code, open source it on GitHub.
Drill on LeetCode questions; they aren’t super useful, but in any interview at least part of the assessment is going to be how well they can do on those.
There are still plenty of places hiring. AI has just made it so that most senior engineers have access to a junior-engineer-level programmer they can give tasks to at all times: the AI. So anything you can do to stand out is an advantage.
My path was working for a consulting firm (Accenture) for a few years, making friends with my clients, and then jumping to freelance work a few years later, once I could get paid my contract rate directly rather than letting Accenture take a big chunk of it.
Working with Accenture
I am so sorry…
It was a wild ride
Answer is probably the same as before AI: build a portfolio on GitHub. These days maybe try to find repos that have vibe code in them and make commits that fix the AI garbage.
Answer is probably the same as before AI: build a portfolio on GitHub
You really think that using GitHub falls in the usual vibecoding toolbox? As in: would they even know where/how to look?
You think vibe coders don’t love the smell of their own shit enough to show it to the world?
a coding LLM can code as well as a fresh CS grad.
For a couple of hundred lines of code, they might even be above average. When you split that into a couple of files or start branching out, they usually start to struggle.
after you give them a piece of advice once or twice, they stop making that same mistake.
That’s a damn good observation. Learning only happens with re-training and that’s wayyy cheaper when done in meat.
God bless vibe coders, because of them I’m buying a new PC build this week AND I’ve decided to get a PS5.
Thank you Vibe Coders, your laziness and sheer idiocy are padding my wallet nicely.
But I thought armies of teenagers were starting tech businesses?!
My boss is literally convinced we can now basically make programs that take rockets to Mars, and that it’s literally just clicks away. For the life of me, it is impossible to convince him that this is, in fact, not the case. Whoever fired developers because ‘AI could do it’ is going to regret it.
Maybe try convincing him in terms he would understand. If it were really that good, it wouldn’t be public. They’d just use it internally to replace every proprietary piece of software in existence. They’d be shitting out their own browser, office suite, CAD, OS, etc. Microsoft would be screwing themselves by making ChatGPT public. Microsoft could replace all the Adobe products and drive them out of business tomorrow.
Yea, it’s that lack of critical thinking that’s the reason why MLMs and get-rich-quick courses still exist.
I mean … the first moon landings took a very low number of clicks to make the calculations, technically speaking
Lots of clacks, though.
it is impossible to convince him that this is, in fact, not the case
He’s probably an investor.
The tech economy is struggling. Every company needs 20% more every year, or it’s considered a failure. The big fish have bought up every promising property on the map in search of this. It’s almost impossible to go from small to large without getting gobbled up, and the guys gobbling up already have 7 different flavors of what you’re trying to make on ice in a repo somewhere. There’s no new venture capital flowing into conventional work.
AI has all the venture capitalists buzzing, handing over money like it’s 1999. Investors are hopping on every hype train because each one has the chance of getting gobbled up and making a good return on investment.
These mega CEOs have moved their personal portfolios into AI funding, and their companies pushing the product will line their pockets indirectly.
At some point, that $200/pp/m price will shoot up. They’re spending billions on datacenters, and eventually those investments will be called in for returns.
When they hit the wall for training-based improvement, things got slippery. Current models cost exponentially more to run, making several calls for every request. The market’s not going to bear that without an exponential price increase, even if they’re getting good work done.
Vibe coding tools are very useful when you want to make a tech movie but the hollywood command just does not cut it.
No way. Youtube ad told me a different story the other day. Could that be a… lie? (shocked_face.jpg)
Like trying to write a book just using auto complete
Vibe coding is useful for super basic bash scripting and that’s about it. Even that it will mess up, but usually in a super easily fixed way.
I don’t think it has much to do with how “complex or not” it is, but rather how common it is.
It can completely fail on very simple things that are just a bit obscure, so it has too little training data.
And it can do very complex things if there’s enough training data on those things.
Making some simple Excel macros when I want to be lazy is about the most I’ve trusted it with that it manages to do without fucking up and taking more time than just doing it myself.
I’ve also found it useful for simple Python scripts when I need to analyze data. I don’t use pandas/scipy/numpy/matplotlib enough to remember the syntax and library functions. By vibe coding it, I can have a script in minutes that reads a CSV with weird timestamps, scales some of the channels, filters out noise or detrends, performs a Fourier transform, and does a curve fit against a model.
But then obviously I know every intermediate step I want to do.
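Roughly the kind of throwaway script I mean (just a sketch; the file name, column names, timestamp format, and decay model below are made up for illustration):

```python
import numpy as np
import pandas as pd
from scipy.signal import detrend
from scipy.optimize import curve_fit

df = pd.read_csv("log.csv")  # hypothetical data file
# Parse the weird timestamps explicitly instead of hoping pandas guesses right.
df["time"] = pd.to_datetime(df["time"], format="%d.%m.%Y %H:%M:%S,%f")
t = (df["time"] - df["time"].iloc[0]).dt.total_seconds().to_numpy()

# Scale one channel and remove a linear trend before looking at the spectrum.
y = detrend(df["ch1"].to_numpy() * 9.81)

# Fourier transform, assuming a roughly uniform sample rate.
dt = np.median(np.diff(t))
freqs = np.fft.rfftfreq(len(y), d=dt)
spectrum = np.abs(np.fft.rfft(y))

# Curve fit towards a simple exponential-decay model (made up for the example).
def model(x, a, tau, c):
    return a * np.exp(-x / tau) + c

popt, _ = curve_fit(model, t, y, p0=(y.max(), 1.0, 0.0))
print(popt)
```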
The AI Fix podcast had a piece about how someone let an AI agent do the coding for them but had a disaster because he gave it access to the production database.
Very funny.
https://theaifix.show/61-replit-panics-deletes-1m-project-ai-gets-gold-at-math-olympiad/
AI used extremely sparingly is sometimes helpful to an experienced coder. “Multivac, generate a set of unit tests for this function.” Okay, some of these are dumb, but it’s easier getting started on this mess than just looking at a blank buffer. Helps get the juices flowing a bit. But man, you try to actually do anything with it, and suddenly you’re lost chasing a will-o’-wisp.
Oh man, I love ChatGPT for one thing in particular: “Hey chatbot, is there some library or standard library function for that very specific, yet still kinda generic thing I’m trying to do, so that I don’t have to write it myself?”
It does frequently give a helpful answer. That is, it doesn’t give me working code, but a helpful pointer to some manual where I can find good instructions for how to use the thing to solve my problem.
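A made-up but typical exchange: I vaguely describe “find the closest match to a user-typed name” and get pointed at difflib in the standard library instead of hand-rolling a fuzzy matcher.

```python
# The pointer, not finished code, is the useful part here.
from difflib import get_close_matches

names = ["Alice", "Alicia", "Bob", "Roberta"]
print(get_close_matches("Alise", names, n=2, cutoff=0.6))  # -> ['Alice']
```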
I will usually google that kind of thing first (to save the rainforests)… Often I can find something that way, otherwise I might try an LLM
True that. But I often find that the search engine is not very good at giving me a solution if I don’t know the name of a problem and only have my spaghetti thoughts on what the thing is supposed to do, and translating spaghetti thoughts into something a search engine can find is where the chatbot excels.
Part of your trouble there is that Google is absolute dog shit these days. It used to be like magic; simple search terms would find you exactly what you were looking for in the first handful of results. Now you’re lucky to find it on the second page.
I don’t want to dismiss your point overall, but I see that example so often and it irks me so much.
Unit tests are your specification. So, 1) ideally you should write the specification before you implement the functionality. But also, 2) this is the one part where you really should be putting in your critical thinking to work out what the code needs to be doing.
An AI chatbot or autocomplete can aid you in putting down some of the boilerplate to have the specification automatically checked against the implementation. Or you could try to formulate the specification in plaintext and have an AI translate it into code. But an AI with no knowledge of the context and no critical thinking cannot write the specification for you.
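To make that concrete, here’s a tiny made-up example (billing.round_cents is hypothetical): an AI can type out the parametrize boilerplate in seconds, but the expected values are the specification, and they have to come from someone who knows the business rules.

```python
import pytest
from billing import round_cents  # hypothetical function under test

@pytest.mark.parametrize("raw, expected", [
    (2.4, 2),
    (2.5, 3),   # business rule: half-cents round up, never to even
    (-0.1, 0),  # business rule: a charge is never negative
])
def test_round_cents(raw, expected):
    assert round_cents(raw) == expected
```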
Tests are probably both the best and worst things to use LLMs for.
They’re the best because of all the boilerplate. Unit tests tend to have so much of that, setting things up and tearing it down. You want that to be as consistent as possible so that someone looking at it immediately understands what they’re seeing.
OTOH, tests are also where you figure out how to attack your code from multiple angles. You really need to understand your code to think of all the ways it could fail. LLMs don’t understand anything, so I’d never trust one to come up with a good set of things to test.
Unit tests become the specification once they are written. ChatGPT can easily write unit tests from whatever your specification is before that – such as documentation, a bunch of comments and stubs, or even a first draft of the function itself, given enough context from the rest of the project.
Unit tests are too clunky to think in. You don’t prototype the specification by implementing unit tests. And you really only lay down a few critical paths even if you “write the tests first”, because code paths always come up during implementation that demand more test coverage anyway.
Also after your 4th proper project you won’t be as confused about “how to get started” anymore anyway.
You have a language of choice, a GUI framework, and a build system you are comfortable with in mind already; just start making folders.
Feels like there are some fine people here only working on new projects. Getting started could mean breaking down a 20-year-old program, written in some weird manner because the original developer used to do functional programming but was told to use Java and OOP. No comments, no tests, no normal patterns, no documentation.
The argument was “AI helps when starting up new projects by making unit tests etc.”
Also, never mind 20: even building 10-year-old libraries using AI is unhelpful. It just keeps hallucinating non-existent packages and functions.
At this point, just drive yourself crazy while promising to become a goose farmer and commenting every single line in your own words, like God intended programmers to do.
You read “new projects” in, actually. And the whole unit test thing was just an example demonstrating how AI use has to be tightly bounded to be arguably useful.
A buddy of mine is into vibe coding, but he actually does know how to code as well. He will iterate through the code with the LLM until he thinks it will work. I can believe it saves time, but you still have to know what you are doing.
The most amazing thing about vibe coding is that in my 20 odd years of professional programming the thing I’ve had to beg and plead for the most was code reviews.
Everyone loves writing code; no one, it seems, much enjoys reading other people’s code.
Somehow, though, vibe coding (and other LLM-guided coding) has made people go “I’ll skip the part where I write code and let an LLM generate a bunch of code that I’ll review.”
Either people have fundamentally changed (unlikely), or there are just a lot more people willing to skim over a pile of autogenerated code, go “yea, I’m sure it’s fine”, and open a PR.
I suspect it’s a bit of both. With agents the review size can be pretty small and easier to digest which leads to more people reviewing, but I suspect it is still more surface level.
As someone whose job currently is to undo the “time” it “saves”, I don’t see how it would save time. You can give Claude Code the most fantastic and accurate prompt in the world, but you’re still going to have to explain to it how something actually works when it gets to the point, and it will, that it starts contradicting itself and overcomplicating things.
You said yourself he has to iterate through the code with the LLM to get something that works. If he already knows it, he could just write it. Having to explain to something HOW to write what you ALREADY know can’t possibly be saving time. It’s coding with extra steps.
I don’t think it saves time. You spend more time trying to explain why it’s wrong and how the llm should take the next approach, at which point it actually would’ve been faster to read documentation and do it yourself. At least then you’ll understand what the code is even further.
I do the same; I am not sure if it saves time. Sometimes not. Other times, if it is a task I really don’t want to work on, this helps me get started and break through procreation
Lol, work as your coitus interruptus.
I know you meant procrastination btw.
Maybe they meant procrasturbation
Agree, my spouse and I do the same. You need to know how to code and understand the basic principles, otherwise it’s a bit like the Chinese room thing where you may or may not operate correctly but have no actual clue what you’re doing. You need to be able to see when LLMs follow their hobby and blow three lines of code unnecessarily out of proportion by adding 60 lines of unneeded shit that opens the door to more bugs.