• Rimu@piefed.social
    21 hours ago

    That’s interesting.

    How much RAM did it use while running?

    If you used a GPU, how much does it cost in today’s prices?

    • MagicShel@lemmy.zip
      17 hours ago

      It’s a MacBook Pro with 36 GB of RAM. I’m sure Macs have some kind of GPU, and I understand it somehow combines GPU memory with system memory, but I don’t really know Mac hardware very well.

      It’s beefy for a laptop, but the desktop I built for myself several years ago had 32 GB of RAM and a GTX 1660, so I’m guessing they are similar in capability. I gave that to my daughter, so I can’t run a comparison right now.

      EDIT: After doing just a bit of research, I’ve learned that the unified memory architecture Macs use, while not ideal for many purposes, is actually a big advantage for running larger inference models. So it’s possible this particular model wouldn’t run at all on my Linux box, or would run much slower, because the full model wouldn’t fit in the 6 GB of VRAM and would cause a lot of memory thrashing.

      • boonhet@sopuli.xyz
        8 hours ago

        Yup, you want memory accessible to the GPU for local AI. AMD Strix Point and Mac devices are popular options. A CPU can run LLMs, but very slowly. I’ve got 32 GB of RAM and 8 GB of VRAM, and it’s borderline useless for models that don’t fit in the VRAM.

      • SabinStargem@lemmy.today
        16 hours ago

        You can use something like KoboldCPP on Linux, which lets you combine RAM and VRAM to run a model. O’course, not as fast as pure VRAM or the Mac approach, but it is an option. I use my 128 GB of RAM with some GPUs for running models.
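
The RAM+VRAM split works by putting as many model layers as fit on the GPU and spilling the rest to system RAM. A minimal back-of-the-envelope sketch of that greedy split — all numbers here (layer count, per-layer size, VRAM overhead) are illustrative assumptions, not measurements from any real model:

```python
# Sketch of the layer-offload split tools like KoboldCPP perform:
# fill VRAM with as many transformer layers as fit, run the rest on CPU/RAM.
def plan_offload(n_layers: int, layer_size_gb: float,
                 vram_gb: float, overhead_gb: float = 1.0) -> tuple[int, int]:
    """Return (layers_on_gpu, layers_on_cpu) for a simple greedy split."""
    usable = max(vram_gb - overhead_gb, 0.0)  # reserve some VRAM for context/scratch
    on_gpu = min(n_layers, int(usable // layer_size_gb))
    return on_gpu, n_layers - on_gpu

# Hypothetical 27B-class model: ~60 layers of ~0.45 GB each (4-bit-ish), 8 GB VRAM
gpu_layers, cpu_layers = plan_offload(n_layers=60, layer_size_gb=0.45, vram_gb=8.0)
print(gpu_layers, cpu_layers)  # → 15 45
```

The more layers land on the CPU side, the slower generation gets, which is why pure-VRAM or unified-memory setups come out ahead.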

          • SabinStargem@lemmy.today
            1 minute ago

            Speed depends on how much of the model is in VRAM, and on the dense/MoE architecture of that model. The RAM’s benefit is more about being able to run the model in the first place. In any case, a dense Qwen3.6 27b would take up about 27–33 GB of memory, plus whatever context size you set.

            The upcoming implementation of MTP (multi-token prediction) will increase the size of models, but in exchange they will also run faster. About a 30%-ish boost for dense models, a bit less for Mixture-of-Experts varieties, from the looks of it.
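
The sizing arithmetic above (weights at a given quantization, plus a KV cache that grows with context length) can be sketched as follows. The bytes-per-weight figures are standard; the KV-cache shape (layer count, KV heads, head dimension) is an assumed 27B-class configuration, not the real config of any particular model:

```python
# Rough memory-footprint estimate: weights + KV cache for the context window.
BYTES_PER_WEIGHT = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def weights_gb(n_params_b: float, quant: str) -> float:
    # 1B params at 1 byte/weight is ~1 GB, so billions * bytes-per-weight ≈ GB
    return n_params_b * BYTES_PER_WEIGHT[quant]

def kv_cache_gb(ctx: int, n_layers: int = 48, n_kv_heads: int = 8,
                head_dim: int = 128, bytes_per: int = 2) -> float:
    # K and V per layer per token: 2 * kv_heads * head_dim * bytes each
    return ctx * n_layers * 2 * n_kv_heads * head_dim * bytes_per / 1e9

print(round(weights_gb(27, "q8"), 1))                      # → 27.0 (8-bit weights)
print(round(weights_gb(27, "q8") + kv_cache_gb(8192), 1))  # → 28.6 (with 8k context)
```

That lands right in the quoted 27–33 GB range: 8-bit weights alone are ~27 GB, and the context cache adds a couple more GB on top.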