• Bob Robertson IX @discuss.tchncs.de · 1 day ago

    I agree with you that it needs to be local and self-hosted… I currently have an incredible AI assistant running locally using Qwen3-Coder-Next. It is fast, smart, and very capable. However, I could not have gotten it set up as well as I have without the help of Claude Code… and even now, as great as my local model is, it still isn’t at the point where it can modify its own code as well as Claude can. The future is local, but a powerful cloud-based AI adds a lot of value in helping us get there.
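
    For anyone curious what "running locally" looks like in practice, local models like this are typically exposed through an OpenAI-compatible endpoint (llama.cpp’s llama-server and Ollama both provide one). A minimal sketch, with the port and model name as placeholders rather than this commenter’s actual setup:

    ```python
    from openai import OpenAI

    # Talk to a locally served model; no real API key is needed.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

    reply = client.chat.completions.create(
        model="qwen3-coder-next",  # placeholder; use the name your server reports
        messages=[{"role": "user", "content": "Explain what this function does."}],
    )
    print(reply.choices[0].message.content)
    ```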

    • SuspciousCarrot78@lemmy.world · 1 day ago

      Thank you for honestly stating that. I’m in a similar position myself.

      How do you like Qwen3 Next? With only 8GB of VRAM I’m limited in what I can self-host (maybe the Easter bunny will bring me a Strix lol).

      • Bob Robertson IX @discuss.tchncs.de · 12 hours ago

        Yeah, some communities on Lemmy don’t like it when you have a nuanced take on something, so I’m pleasantly surprised by the upvotes I’ve gotten.

        I’m running a Framework Desktop with a Strix Halo and 128GB RAM and up until Qwen3 Next I was having a hard time running a useful local LLM, but this model is very fast, smart and capable. I’m currently building a frontend for it to give it some structure and make it a bit autonomous so it can monitor my systems and network and help keep everything healthy. I’ve also integrated it into my Home Assistant and it does great there as well.