• schema@lemmy.world

    Similar for me. What I find ironic is that AI has already run into a brick wall. Its inherent statelessness by design means that AI is unlikely to be suited for anything more than isolated, well-defined tasks in the near future. It’s still usable as a tool, but without someone who is actually experienced, it will end in disaster.
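    To make the statelessness point concrete, here’s a minimal sketch (all names are illustrative, `call_model` is a stand-in for a real inference API): each call only sees what the client sends it, so any “conversation” has to be resent in full every time.

    ```python
    # Sketch of why an LLM API call is stateless: each request is independent,
    # and nothing from previous calls survives on the model side.

    def call_model(messages):
        # A real model only sees exactly what is in `messages` for this call.
        return f"(reply based on {len(messages)} messages)"

    # First call: the model sees one message.
    history = [{"role": "user", "content": "Write a parser."}]
    call_model(history)

    # A second call with a fresh list: the model has no idea the first call happened.
    call_model([{"role": "user", "content": "Now fix the bug."}])

    # To get continuity, the client must resend everything itself.
    history.append({"role": "user", "content": "Now fix the bug."})
    call_model(history)  # now it sees both messages
    ```

    All the “memory” lives on the client side, in that ever-longer `history` list.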

    And even on smaller tasks it can fuck up, especially if the person prompting it is incapable of writing the code themselves, since they don’t know how to properly design it and won’t spot the issues. Like everything with AI, it looks impressive at first glance until you look at it for more than 10 seconds and spot the metaphorical sixth finger.

    What we currently see with AI getting “better” at coding is more or less duct tape to make it work. Basically, they create agents to bolt on state: more layers between user and model, iterative processes to improve the answers, and “memory”, which in essence is just an ever-growing prompt managed by the agent. But in the end this doesn’t fix the inherent problem, so it only goes so far and is already hitting another ceiling: it introduces state decay. With the agent approach it’s not really possible to “take away” memory, so if you gave it multiple versions of the same code (as you inevitably do when working with AI), the AI never really forgets the old code. It can suppress it through agent instructions (more duct tape), but the more there is, the more it bleeds through, which can make the AI reintroduce old code or base assumptions on outdated things.
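    The “memory is just a growing prompt” mechanism can be sketched like this (purely illustrative class and method names, not any real agent framework): entries are append-only, and “forgetting” is just another instruction stacked on top, so the old code is still physically in the prompt the model reads.

    ```python
    # Sketch of agent-style "memory": an append-only transcript the agent
    # assembles into one ever-growing prompt. Old code versions are never
    # removed, only suppressed by later instructions.

    class AgentMemory:
        def __init__(self):
            self.entries = []  # append-only transcript

        def add(self, label, text):
            self.entries.append((label, text))

        def suppress(self, label):
            # "Forgetting" is itself just another appended entry;
            # the suppressed content remains in the prompt below it.
            self.entries.append(("instruction", f"ignore earlier '{label}' entries"))

        def build_prompt(self):
            return "\n".join(f"[{label}] {text}" for label, text in self.entries)

    mem = AgentMemory()
    mem.add("code v1", "def parse(s): return s.split(',')")
    mem.add("code v2", "def parse(s): return [x.strip() for x in s.split(',')]")
    mem.suppress("code v1")

    prompt = mem.build_prompt()
    # v1 is still present in what the model sees, ready to bleed through:
    assert "def parse(s): return s.split(',')" in prompt
    ```

    The prompt only ever grows, and the suppressed version is still there for the model to latch onto, which is exactly the state decay described above.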

    There is no fix without changing the inherent way models work, which would introduce complexity beyond what is currently feasible in computing (and current AI is already gobbling up all available computing resources as it is).