

demonstrating your power level
lolwut? I’m so tired of tech people acting like they’re the next Genghis Khan or Julius Caesar…
Before LLMs, people often said this about anyone smarter than the rest of the group.
Smarter by whose metric? If you can’t write software that meets the bare minimum of comprehensibility, you’re probably not as smart as you think you are.
Software engineering is an engineering discipline, and conformity is exactly what you want in engineering — because in engineering you don’t call it ‘conformity’, you call it ‘standardization’. Nobody wants to hire a maverick bridge-builder, they wanna hire the guy who follows standards and best practices because that’s how you build a bridge that doesn’t fall down. The engineers who don’t follow standards and who deride others as being too stupid or too conservative to understand their vision are the ones who end up crushed to death by their imploding carbon fiber submarine at the bottom of the Atlantic.
AI has exactly the same “maverick” tendencies as human developers (because, surprise surprise, it’s trained on human output), and until that gets ironed out, it’s not suitable for writing anything more than the most basic boilerplate — which is stuff you can usually just copy-paste together in five minutes anyway.
The company I work for has recently mandated that we must start using AI tools in our workflow and is tracking our usage, so I’ve been experimenting with it a lot lately.
In my experience, it’s worse than useless when it comes to debugging code. The class of errors that it can solve is generally simple stuff like typos and syntax errors — the sort of thing that a human would solve in 30 seconds by looking at a stack trace. The much more important class of problem, errors in the business logic, it really really sucks at solving.
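To make that concrete, here's a toy example I just made up (nothing from an actual codebase):

    # The kind of bug an LLM fixes instantly: a typo the stack trace
    # already points at, e.g. `retrun total` instead of `return total`.

    # The kind it flails on: code that runs clean but violates the spec.
    # Hypothetical spec: 10% loyalty discount on the portion of an
    # order ABOVE $100.
    def discounted_total(order_total: float, is_loyal: bool) -> float:
        if is_loyal and order_total > 100:
            # BUG: discounts the whole order, not just the part above $100
            return order_total * 0.9
        return order_total

    # Runs fine, no stack trace. Per the spec, $150 should come out
    # to $145, but this prints 135.0. Nothing in the code itself tells
    # an LLM (or anyone without the spec) that it's wrong.
    print(discounted_total(150.0, True))

The first bug announces itself; the second one only exists relative to requirements the model has never seen.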
For those problems, it very confidently identifies the wrong answer about 95% of the time. And if you’re a dev who’s desperate enough to ask AI for help debugging something, you probably don’t know what’s wrong either, so it won’t be immediately clear whether the AI just gave you garbage or whether its suggestion has any real merit. So you go check and manually confirm that the LLM is full of shit, which costs you time… then you go back to the LLM with more context and ask it to try again. Its second suggestion will sound even more confident than the first (“Aha! I see the real cause of the issue now!”), but it will still be nonsense. You waste more time ruling out the second suggestion, then go back to the AI to scold it for being wrong again.
Rinse and repeat this cycle until your manager is happy you’ve hit the desired usage metrics, then go open your debugging tool of choice and do the actual work.
Who wrote this headline, Colin Mochrie?