• pixxelkick@lemmy.world
    2 days ago

    Very true, though there's a threshold you can get past where the context becomes usable in size: the machine can hold enough data at once for common tasks.

    One of the pieces of tech we're really missing atm is automated filtering of info.

    Specifically, the LLM should be able to "release" info as it goes, flagging it as unimportant and forgetting it, or at least moving it into some form of long-term storage it can use a tool to look up later.

    For a given convo the LLM can do a lot of reasoning, but all that reasoning takes up context.

    It'd be nice if, after it reasons, it could discard a bunch of that data and only keep what matters.

    This would tremendously lower context pressure and let the LLM last way longer memory-wise.
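    A minimal sketch of that discard-after-reasoning idea, with all names hypothetical (this isn't any real framework's API): after each reasoning step, only a short summary stays in the active context, while the full trace is parked in a long-term store the model could re-fetch via a lookup tool.

    ```python
    # Hypothetical sketch: evict full reasoning traces from the active
    # context, keep only summaries, and expose a lookup tool for recall.

    class ContextManager:
        def __init__(self):
            self.active = []       # what stays in the prompt window
            self.long_term = {}    # key -> full reasoning trace

        def commit_reasoning(self, key, full_trace, summary):
            # Park the full trace; only the summary occupies context.
            self.long_term[key] = full_trace
            self.active.append(f"[{key}] {summary}")

        def lookup(self, key):
            # The tool an LLM could call to re-fetch evicted detail.
            return self.long_term.get(key, "(not found)")

        def context_text(self):
            return "\n".join(self.active)

    mgr = ContextManager()
    mgr.commit_reasoning(
        "step-1",
        full_trace="...hundreds of tokens of chain-of-thought...",
        summary="Chose algorithm A because it fits in memory.",
    )
    print(mgr.context_text())    # only the one-line summary
    print(mgr.lookup("step-1"))  # full trace still retrievable
    ```

    The hard part, of course, is the piece this sketch fakes: getting the model itself to decide what's safe to evict and to write a faithful summary.
    
    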

    I think tooling needs to approach LLM context management in a very different way to make further advances.

    LLMs would have to be trained to produce different types of output that control whether they actually remember it or not.