“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite: developers have to go in and find where the problems are, resulting in a net slowdown of development rather than a productivity gain.

  • Lettuce eat lettuce@lemmy.ml · ↑1 · 3 minutes ago

    You mean relying blindly on a statistical prediction engine to produce sophisticated software without any understanding of the underlying principles or concepts doesn’t magically replace years of actual study and real-world experience?

    But trust me, bro, the singularity is imminent, LLMs are the future of human evolution, true AGI is nigh!

    I can’t wait for this idiotic “AI” bubble to burst.

  • altphoto@lemmy.today · ↑4 · 3 hours ago

    It’s great for stupid boobs like me, but only to get you going. It regurgitates old code; it cannot come up with new stuff. Lately there have been fewer Python errors, but again, the stuff you can do is limited. At least for the free stuff you can get without signing up.

    • Corhen@lemmy.world · ↑1 · 24 minutes ago

      Yea, I use it for Home Assistant, it’s amazingly powerful… and so incredibly dumb.

      It will take my if statements and shrink them to a third of the length, while being twice as robust… while missing that one of the arguments is in entirely the wrong place.

  • Aljernon@lemmy.today · ↑7 · 6 hours ago

    Senior Management in much of Corporate America is like a kind of modern Nobility in which looking and sounding the part is more important than strong competence in the field. It’s why buzzwords catch like wildfire.

  • MrSulu@lemmy.ml · ↑4 · 5 hours ago

    Perhaps it should read “All AI is overhyped, overdone, and we should be over it”.

  • Chaotic Entropy@feddit.uk · ↑9 · edited · 6 hours ago

    Are you trying to tell me that the people wanting to sell me their universal panacea for all human endeavours were… lying…? Say it ain’t so.

    • SparroHawc@lemmy.zip (OP) · ↑1 · 6 hours ago

      I mean, originally they thought they had come upon a magic bullet. Turns out it wasn’t the case, and now they’re going to suffer for it.

  • Sadness Nexus@lemmy.ml · ↑4 ↓1 · 5 hours ago

    I’m not a programmer in any sense. Recently, I made a project where I used Python and a Raspberry Pi and had to train some small models on the KITTI dataset. I used AI to write the broad structure of the code, but in the end it took me a lot of time going through the Python documentation, as well as the documentation of the specific tools/modules I used, to actually get the code working. Would an experienced programmer get the same work done in an afternoon? Probably. But the code the AI output still had a lot of flaws. Someone who knows more than me would probably write better prompts and better follow-up requirements, and probably get a better structure from the AI, but I doubt they’d get complete code. In the end, you have to know what you’re doing to use AI efficiently, and you still have to polish the code into something that actually works.

    • Spice Hoarder@lemmy.zip · ↑1 · edited · 2 hours ago

      From my experience, AI just seems to be a lesson in overfitting. You can’t use it to do things nobody has done before. Furthermore, you only really get good responses from prompts related to JavaScript.

  • Corridor8031@lemmy.ml · ↑1 · 5 hours ago

    I think a good comparison is to written papers/assignments. It can generate those just like it can generate code.

    But it is not about the words themselves; it is about the content.

  • ready_for_qa@programming.dev · ↑4 ↓6 · 4 hours ago

    These types of articles always fail to mention how well trained the developers were on techniques and tools. In my experience that makes a big difference.

    My employer mandates we use AI and provides us with any model, IDE, or service we ask for. But where it falls short is providing training or direction on ways to use it. Most developers seem to go for results prompting and get a terrible experience.

    I, on the other hand, provide a lot of context through documents and various MCP tooling. I talk about the existing patterns in the codebase and provide other repositories as examples; then we come up with an implementation plan and execute on it with a task log to stay on track. I spend very little time fixing bad code because I spent the setup time nailing down context.

    So if a developer is just prompting “Do XYZ”, it’s no wonder they’re spending more time untangling a random mess.

    Another aspect is that everyone seems to always be working under the gun and they just don’t have the time to figure out all the best practices and techniques on their own.

    I think this should be considered when we hear things like this.

    • korazail@lemmy.myserv.one · ↑5 · 3 hours ago

      I have 3 questions, and I’m coming from a heavily AI-skeptic position, but am open:

      1. Do you believe that providing all that context, describing the existing patterns, creating an implementation plan, etc., lets the AI write better code, faster, than if you just did it yourself? To me this just seems like you have to re-write your technical documentation in prose each time you want to do something. You say this is better than “Do XYZ”, but how much twiddling of your existing codebase do you need to do before an AI can understand the business context of it? I don’t currently do development on an existing codebase, but every time I try to get these tools to do something fairly simple from scratch, they just flail. Maybe I’m just not spending the hours to build my AI-parsable functional spec. Every time I’ve tried this, asking something as simple as (paraphrased for brevity) “write an Asteroids clone using JavaScript and HTML5 Canvas” results in a full failure, even with multiple retries chasing errors. I wrote something like that a few years ago to learn JavaScript, and it took me a day-ish to get something that mostly worked.

      2. Speaking of that context. Are you running your models locally, or do you have some cloud service? If you give your entire codebase to a 3rd party as context, how much of your company’s secret sauce have you disclosed? I’d imagine most sane companies are doing something to make their models local, but we see regular news articles about how ChatGPT is training on user input and leaking sensitive data if you ask it nicely and I can’t imagine all the pro-AI CEOs are aware of the risks here.

      3. How much pen-testing time are you spending on this code: error handling, edge cases, race conditions, data sanitization? An experienced dev understands these things innately, having fixed these kinds of issues in the past, and knows the anti-patterns and how to avoid them. In all seriousness, I think this is going to be the thing that actually kills AI vibe coding, but it won’t be fast enough. There will be tons of new exploits in what used to be solidly safe places. Your new web front-end? It has a really simple SQL injection attack (see the sketch after this list). Your phone app? You can tell it your username is admin’joe@google.com and it’ll let you order stuff for free since you’re an admin.
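
      To make that front-end example concrete, here is a minimal sketch of the injection anti-pattern next to the parameterized fix (the users table and function names are hypothetical):

      ```python
      import sqlite3

      conn = sqlite3.connect("app.db")  # hypothetical app database

      def find_user_unsafe(username: str):
          # Vibe-code style: user input is spliced straight into the SQL,
          # so a username like "admin' --" rewrites the query itself.
          return conn.execute(
              f"SELECT * FROM users WHERE name = '{username}'"
          ).fetchall()

      def find_user_safe(username: str):
          # Parameterized query: the driver treats the value as data,
          # so "admin' --" stays a literal string.
          return conn.execute(
              "SELECT * FROM users WHERE name = ?", (username,)
          ).fetchall()
      ```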

      I see a place for AI-generated code: for instant functions that do something blending simple and complex. “Hey Claude, write a function to take a string and split it at the end of every sentence containing an uppercase A.” I had to write weird functions like that constantly as a sysadmin, and transforming data seems like a thing an AI could help me accelerate. I just don’t see that working on a larger scale, though, or trusting an AI enough to let it integrate a new function like that into an existing codebase.
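
      For illustration, a quick sketch of what that kind of throwaway transform might look like (a minimal sketch; the naive regex sentence split is an assumption, and real text with abbreviations or quotes would need more care):

      ```python
      import re

      def split_after_sentences_with_A(text: str) -> list[str]:
          # Naive segmentation: treat ., ! or ? followed by whitespace
          # as a sentence boundary.
          sentences = re.split(r"(?<=[.!?])\s+", text)
          chunks: list[str] = []
          current: list[str] = []
          for sentence in sentences:
              current.append(sentence)
              if "A" in sentence:  # sentence contains an uppercase A: split here
                  chunks.append(" ".join(current))
                  current = []
          if current:  # trailing sentences without an uppercase A
              chunks.append(" ".join(current))
          return chunks
      ```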

  • badgermurphy@lemmy.world · ↑9 · 11 hours ago

    I work adjacent to software developers, and I have been hearing a lot of the same sentiments. What I don’t understand, though, is the magnitude of this bubble then.

    Typically, bubbles seem to form around some new market phenomenon or technology that threatens to upset the old paradigm and usher in a new boom. Those market phenomena then eventually take their place in the world based on their real value, which is nowhere near the level of the hype, but still substantial.

    In this case, I am struggling to find examples of the real benefits of a lot of these AI assistant technologies. I know that there are a lot of successes in the AI realm, but not a single one I know of involves an LLM.

    So, I guess my question is, “What specific LLM tools are generating profits or productivity at a substantial level well exceeding their operating costs?” If there really are none, or if the gains are only incremental, then my question becomes an incredulous, “Is this biggest in history tech bubble really composed entirely of unfounded hype?”

    • JcbAzPx@lemmy.world · ↑5 · 5 hours ago

      This strikes at one of the greatest wishes of all corporations: a way to get work without having to pay people for it.

    • SparroHawc@lemmy.zip (OP) · ↑16 · 10 hours ago

      From what I’ve seen and heard, there are a few factors to this.

      One is that the tech industry right now is built on venture capital. In order to survive, they need to act like they’re at the forefront of the Next Big Thing in order to keep bringing investment money in.

      Another is that LLMs are uniquely suited to extending the honeymoon period.

      The initial impression you get from an LLM chatbot is significant. This is a chatbot that actually talks like a person. For a VC mogul, sitting down to have a conversation with ChatGPT when it was new was a mind-blowing experience. This is a computer program that, at first blush, appears to be able to do most things humans can do, as long as those things primarily consist of reading things and typing things out, which VCs and mid/upper management do a lot of. This gives the impression that AI is capable of automating a lot of things that previously needed a live, thinking person, which means a lot of savings for companies that can shed expensive knowledge workers.

      The problem is that the limits of LLMs are STILL poorly understood by most people. Despite constructing huge data centers and gobbling up vast amounts of electricity, LLMs are still bad at actually being reliable. This makes an LLM worse at practically any knowledge work than the lowest, greenest intern, because at least the intern can be taught to say they don’t know something instead of feeding you BS.

      It was also assumed that bigger, hungrier LLMs would provide better results. Although they do, the gains are getting harder and harder to reach. There needs to be an efficiency breakthrough (and a training breakthrough) before the wonderful world of AI can actually come to pass, because as it stands, prompts are still getting more expensive to run for higher-quality results. It took a while to make that discovery, so the hype train was able to keep building steam for the last couple of years.

      Now, tech companies are doing their level best to hide these shortcomings from their customers (and possibly even themselves). The longer they keep the wool over everyone’s eyes, the more money continues to roll in. So, the bubble keeps building.

      • badgermurphy@lemmy.world · ↑5 · 7 hours ago

        The upshot of this and a lot of the other replies here and elsewhere seems to be that one big difference between this bubble and past ones is that so much of the global economy is now tied to its fate that the entire financial world is colluding to delay the inevitable, given the expected severity of the consequences.

    • leastaction@lemmy.ca · ↑7 · 8 hours ago

      AI is a financial scam. Basically, companies that are already mature promise great future profits thanks to this new technological miracle, which makes their stock more valuable than it otherwise would be. Cory Doctorow has written eloquently about this.

    • TipsyMcGee@lemmy.dbzer0.com · ↑6 ↓1 · 10 hours ago

      When the AI bubble bursts, even janitors and nurses will lose their jobs. Financial institutions will go bust.

  • arc99@lemmy.world · ↑7 · edited · 11 hours ago

    I have never seen AI-generated code that is correct. Not once. I’ve certainly seen it broadly correct and used it for the gist of something. But normally it fucks something up: imports, dependencies, logic, API calls, or a combination of all of them.

    I sure as hell wouldn’t trust it without reviewing it thoroughly. And anyone stupid enough to use it blindly through “vibe” programming deserves everything they get. Most likely that will be a massive bill and code that is horribly broken in some serious and subtle way.

    • Terrasque@infosec.pub · ↑4 ↓1 · edited · 6 hours ago

      I’ve used Claude code to fix some bugs and add some new features to some of my old, small programs and websites. Not things I can’t do myself, but things I can’t be arsed to sit down and actually do.

      It’s actually gone really well, with clean and solid code: easily readable, correct, with error handling and even comments explaining things. It even took a GUI stream-processing program I had and wrote a server/webapp with the same functionality, and it was able to extend it with a few new features I’d been thinking of adding.

      These are not complex things, but a few of them were 20+ files big, and it managed not only to navigate the code but to understand it well enough to add features whose changes touched multiple files (model, logic, and view layers, for example), or to refactor a too-big class and update all references to use the new classes.

      So it’s absolutely useful and capable of writing good code.

      • chicagohuman@lemmy.zip · ↑2 · 6 hours ago

        This is the truth. It has tremendous value but it isn’t a solution – it’s a tool. And if you don’t know how to code or what good code looks like, then it is a tool you can’t use!

    • ikirin@feddit.org · ↑5 · edited · 9 hours ago

      I’ve seen and used AI for snippets of code and it’s pretty decent at that.

      With my colleagues I always compare it to a battery-powered drill. It’s very powerful and can make shit a lot easier. But you wouldn’t try to build furniture from scratch with only a battery-powered drill.

      You need the knowledge to use it - and also saws, screws, the proper bits for those screws and so on and so forth.

      • setsubyou@lemmy.world · ↑3 · edited · 7 hours ago

        What bothers me the most is the amount of tech debt it adds by using outdated approaches.

        For example, recently I used AI to create some Python scripts that use polars and altair to parse some data and draw charts. It kept insisting on bringing in pandas so it could convert the polars dataframes to pandas dataframes just to pass them to altair. When I told it that altair can use polars dataframes directly, that helped, but two or three prompts later it would try to solve problems by adding the conversion again.

        This makes sense too, because the training material, on average, is probably older than the change that enabled altair to use polars dataframes directly. And a lot of code out there just only uses pandas in the first place.
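
        For anyone curious, a minimal sketch of the difference (assuming a recent altair 5.x, which accepts polars DataFrames directly; the data here is made up):

        ```python
        import altair as alt
        import polars as pl

        df = pl.DataFrame({"x": [1, 2, 3], "y": [4.0, 1.5, 6.2]})

        # What the model kept generating: a pointless pandas round trip.
        # chart = alt.Chart(df.to_pandas()).mark_line().encode(x="x", y="y")

        # What recent altair supports: the polars frame goes in directly,
        # with no pandas dependency at all.
        chart = alt.Chart(df).mark_line().encode(x="x", y="y")
        chart.save("chart.html")
        ```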

        The result is that in all these cases, someone who doesn’t know this would probably be impressed that the scripts worked, and just not notice the extra tech debt from that unnecessary dependency on pandas.

        It sounds like it’s not a big deal, but these things add up and eventually, our AI enhanced code bases will be full of additional dependencies, deprecated APIs, unnecessarily verbose or complicated code, etc.

        I feel like this is one aspect that gets overlooked a bit when we talk about productivity gains. We don’t necessarily realize right away how much of that extra LoC/time goes into outdated code and old-fashioned verbosity. But it will eventually come back to bite us.

    • Auli@lemmy.ca · ↑1 · edited · 8 hours ago

      Eh, I had it write a program that finds my PC’s IP and sends it to the UniFi gateway to change a rule. It worked fine, but I guess technically it is mostly using Go libraries written by someone else.

    • hietsu@sopuli.xyz · ↑1 ↓4 · 10 hours ago

      How is it not correct if the code successfully does the very thing that was prompted?

      For example, in my company we don’t have any real programmers, but we have built a handful of useful tools (approx. 400-1600 LOC, mainly Python): data analysis, regex stuff to clean up some output files, indexing files and analyzing/checking their contents for certain mistakes, dashboards to display certain data, etc.

      Of course the apps may not have been perfect after the very first prompt, or may not even have compiled, but after iterating on an error or two, and explaining an edge case or two, they’ve started to perform flawlessly, saving tons of work hours per week. So how is this not useful? If the code produces results that are correct, doesn’t that make the app itself technically “correct” too, albeit likely not nearly as optimized as the equivalent human code would be?

      • maskofdaisies@lemmy.dbzer0.com · ↑3 · 6 hours ago

        To add on to what others have said, vibe coding is ushering in a new golden age for black-hat hackers. If someone relies entirely on AI to generate code, they likely don’t understand what the code they have is actually doing. This tends to lead to an app that works correctly for what the prompt specified but behaves badly the instant it has to handle anything outside of the prompt, like a malformed request or data outside the prompted parameters. As a result these apps tend to be easy for malicious actors to exploit, often in ways the original prompter never thought of.

        • korazail@lemmy.myserv.one · ↑1 · 3 hours ago

          I think this is what will kill vibe coding, but not before there’s significant damage done. Junior developers will be let go and senior devs will be told they have to use these tools instead and to be twice as efficient. At some point enough major companies will have had data breaches through AI-generated code that they all go back to using people, but there will be tons of vulnerable code everywhere. And letting Cursor touch your codebase for a year, even with oversight, will make it really tricky to find all the places it subtly fucked up.

      • arc99@lemmy.world · ↑3 · 9 hours ago

        If the code doesn’t compile, or is badly mangled, or uses the wrong APIs/imports, or forgets something really important, then it’s broken. I can use AI to inform my opinion and sometimes make use of what it outputs, but critically, I know how to program and I know how to spot good and bad code.

        I can’t speak for how you use it, but if you don’t have any real programmers and you’re iterating until something works, then you could be producing junk and not know it. Maybe it doesn’t matter in your case if it’s a bunch of throwaway scripts and helpers, but if you have actual code in production where money, lives, reputation, safety, or security are at risk, then it absolutely does.

      • LaMouette@jlai.lu · ↑2 · 10 hours ago

        It’s not bad for your use case, but going beyond that without issues, and without actual developers to fix the vibe code, is not yet possible for LLMs.

    • Alaknár@sopuli.xyz · ↑4 · 10 hours ago

      There already are. People all over LinkedIn are changing their titles to “AI Code Cleanup Specialist”.

    • aidan@lemmy.world · ↑3 · 12 hours ago

      I mean, largely for most of us, I hope. But I feel like the tech sector was oversaturated because of all the hype of it being an easy get-rich-quick job. Which for some people it was.

    • ceiphas@feddit.org · ↑1 · 11 hours ago

      I am a software architect, and I mainly use it to refactor my own old code… But I am maybe not a typical architect…

      • JackbyDev@programming.dev · ↑1 · 7 hours ago

        I don’t really care if people use it, it’s more that it feels like a quarter of our architect meeting presentations are about something AI related. It’s just exhausting.

  • Deflated0ne@lemmy.world · ↑6 · edited · 7 hours ago

    According to Deutsche Bank, the AI bubble is ~~a~~ the pillar of our economy now.

    So when it pops, I guess that’s kinda apocalyptic.

    Edit - strikethrough