• brucethemoose@lemmy.world · 12 days ago

    I’m self-hosting LLMs for family use (cause screw OpenAI and corporate, closed AI), and I am dying for more VRAM and RAM now.

    Seriously looking at replacing my 7800X3D with Strix Halo when it comes out, maybe a 128GB board if they sell one. Or a 48GB Intel Arc, if Intel is smart enough to sell that. And I would use every last megabyte, even if I had a 512GB board (which is the bare minimum to host DeepSeek V3).

    • Altima NEO@lemmy.zip · 12 days ago

      I’ve got a 3090, and I feel ya. Even 24 gigs is hitting the cap pretty often and slowing to a crawl once system RAM starts being used.

      • brucethemoose@lemmy.world · 12 days ago

        You can’t let it overflow if you’re using LLMs on Windows. There’s a toggle for it in the Nvidia settings (the CUDA sysmem fallback policy), and you can get llama.cpp to offload through its own settings (or better yet, use exllama instead).
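
        Roughly, the llama.cpp side of that looks like this with llama-cpp-python (a minimal sketch; the model filename is made up):

        ```python
        # Minimal sketch: keep the whole model in VRAM with llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="models/qwen2.5-32b-instruct-q4_k_m.gguf",  # hypothetical local GGUF
            n_gpu_layers=-1,  # -1 = offload every layer to the GPU, so nothing spills to system RAM
            n_ctx=8192,
        )
        print(llm("Q: Why keep the model fully in VRAM?\nA:", max_tokens=64)["choices"][0]["text"])
        ```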

        But… yeah. Qwen 32B fits in 24GB perfectly, and it’s great, but 72B really feels like the intelligence tipping point where I can dump so many API models, and that just barely won’t fit in 24GB.

    • uis@lemm.ee · 12 days ago

      Aren’t LLMs external-memory algorithms at this point? As in, all the data will not fit in RAM.

      • brucethemoose@lemmy.world · 12 days ago

        No, all the weights, all the “data”, essentially have to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet; it is making a pass through all the weights every time a word is generated.

        There are systems to augment the prompt with external data (RAG is one term for this), but fundamentally the system is closed.
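
        To make that concrete, here is a deliberately toy sketch (random numbers standing in for a real transformer): all the weights sit in local memory, every generated token is one pass through them, and RAG would only change what gets prepended to the prompt.

        ```python
        # Toy illustration only: the "weights" stay resident, and each new token is one pass through them.
        import numpy as np

        rng = np.random.default_rng(0)
        vocab, dim = 1000, 64
        embed = rng.standard_normal((vocab, dim))    # stand-in for the model's embedding table
        weights = rng.standard_normal((dim, vocab))  # stand-in for all the other weights, held in RAM/VRAM

        def next_token(token_ids):
            # one full "pass through the weights" per generated token, no network calls
            hidden = embed[token_ids[-1]]
            logits = hidden @ weights
            return int(np.argmax(logits))

        def generate(prompt_ids, n_tokens=5):
            out = list(prompt_ids)
            for _ in range(n_tokens):
                out.append(next_token(out))
            return out

        # RAG only changes the prompt: retrieved text gets prepended before generation starts.
        retrieved = [42, 7]  # stand-in for tokens of a retrieved document
        print(generate(retrieved + [1, 2, 3]))
        ```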

        • Hackworth@lemmy.world · 12 days ago

          Yeah, I’ve had decent results running the 7B/8B models, particularly the ones fine-tuned for specific use cases. But as ya mentioned, they’re only really good in their scope for a single prompt or maybe a few follow-ups. I’ve seen little improvement with the 13B/14B models and find them mostly not worth the performance hit.

          • brucethemoose@lemmy.world · 12 days ago

            Depends which 14B. Arcee’s 14B SuperNova Medius model (which is a Qwen 2.5 with some training distilled from larger models) is really incredible, but old Llama 2-based 13B models are awful.

            • Hackworth@lemmy.world · 12 days ago

              I’ll try it out! It’s been a hot minute, and it seems like there are new options all the time.

              • brucethemoose@lemmy.world · 12 days ago

                Try a new quantization as well! Like an IQ4-M GGUF, depending on the size of your GPU, or even better, a 4.5bpw exl2 with Q6 cache if you can manage to set up TabbyAPI.
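
                If you do get TabbyAPI running with an exl2 quant, it serves an OpenAI-compatible endpoint, so querying it is roughly this (the port, key, and model name are assumptions):

                ```python
                # Rough sketch of talking to a local TabbyAPI instance serving a 4.5bpw exl2 quant.
                from openai import OpenAI

                client = OpenAI(base_url="http://localhost:5000/v1", api_key="not-needed-locally")  # port is an assumption
                resp = client.chat.completions.create(
                    model="SuperNova-Medius-exl2-4.5bpw",  # hypothetical local model name
                    messages=[{"role": "user", "content": "Why does a Q6 KV cache save VRAM?"}],
                    max_tokens=128,
                )
                print(resp.choices[0].message.content)
                ```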

    • repungnant_canary@lemmy.world · 12 days ago

      I don’t know how the pricing is, but maybe it’s worth building a separate server with a second-hand TPU. Used server CPUs and RAM are apparently quite affordable in the US (assuming you live there), so maybe that’s the case for TPUs as well. And commercial GPUs/TPUs have more VRAM.

      • brucethemoose@lemmy.world · 12 days ago

        second-hand TPU

        From where? I keep a lookout for used Gaudi/TPU setups, but they’re like impossible to find, and usually in huge full-server configs. I can’t find Xeon Max CPUs or Intel Max GPUs either.

        Also, Google’s software stack isn’t really accessible. TPUs are made for internal use at Google, not for resale.

        You can find used AMD MI100s or MI210s, sometimes, but the go-to used server card is still the venerable Tesla P40.

    • rebelsimile@sh.itjust.works · 12 days ago

      I know it’s a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is that you can think of the VRAM like an aperture: less VRAM closes off how much you can do in a single pass.

      • thebestaquaman@lemmy.world · 12 days ago

        Not running any LLMs, but I do a lot of mathematical modelling, and my 32 GB M1 Pro MacBook is compiling code and crunching numbers like an absolute champ! After about a year, most of my colleagues ditched their old laptops for a MacBook themselves, after noticing that my machine outperformed theirs every day and saved me a bunch of time day-to-day.

        Of course, be a bit careful when buying one: Apple cranks up the price like hell if you start speccing out the machine a lot. Especially for RAM.

      • brucethemoose@lemmy.world · 12 days ago

        The issue with Macs is that Apple does price gouge for memory, your software stack is effectively limited to llama.cpp or MLX, and 70B-class LLMs do start to chug, especially at high context.
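
        For reference, the MLX path looks roughly like this with the mlx-lm package (a rough sketch; the model repo here is just an example):

        ```python
        # Minimal sketch of running a quantized model through Apple's MLX stack (pip install mlx-lm).
        from mlx_lm import load, generate

        model, tokenizer = load("mlx-community/Qwen2.5-32B-Instruct-4bit")  # example 4-bit MLX quant
        print(generate(model, tokenizer, prompt="Explain unified memory in one sentence.", max_tokens=64))
        ```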

        Diffusion is kinda a different duck. It’s more compute-heavy, yes, but the “generally accessible” software stack is also much less optimized for Macs than it is for transformer LLMs.

        I view AMD Strix Halo as a solution to this, as it’s a big IGP with a wide memory bus like a Mac, but it can use the same software stacks that discrete GPUs use (ROCm rather than CUDA, in AMD’s case) for that speed/feature advantage… albeit with some quirks. But I’m willing to put up with that if AMD doesn’t price gouge it.

        • rebelsimile@sh.itjust.works · 12 days ago

          Apple price gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities, then I think it’s one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), as I use the Mac for personal use also.

          I’ll look into the AMD Strix though.

          • brucethemoose@lemmy.world · 12 days ago

            GDDR is actually super cheap! I think it would only be like another $75 on paper to double the 4090’s VRAM to 48GB (like they do for pro cards already).

            Nvidia just doesn’t do it because of market segmentation. AMD doesn’t do it for… honestly, I have no idea why? They basically have no pro market to lose; the only explanation I can come up with is that their CEOs are colluding because they are cousins. And Intel doesn’t do it because they didn’t make a (consumer) GPU that was really worth it until the B580.

            • rebelsimile@sh.itjust.works · 12 days ago

              Oh, I didn’t mean “should cost $4000”, just “would cost $4000”. I wish that the VRAM on video cards was modular; there’s so much e-waste generated by these bottlenecks.

              • brucethemoose@lemmy.world · 12 days ago

                Oh I didn’t mean “should cost $4000” just “would cost $4000”

                Ah, yeah. Absolutely. The situation sucks though.

                I wish that the VRAM on video cards was modular; there’s so much e-waste generated by these bottlenecks.

                Not possible; the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too, unfortunately. SO-DIMMs have already topped out at 5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.

                But look into CAMM modules and LPCAMMs. My hope is that we will get modular LPDDR5X-8533 on AMD Strix Halo boards.

      • uis@lemm.ee · 12 days ago

        You can always use system memory too. Not exactly UMA, but close enough.

        Or just use iGPU.

          • brucethemoose@lemmy.world · 12 days ago

            You don’t want it to anyway, as “automatic” spillover with an LLM is painfully slow.

            The RAM/VRAM split is manually configurable in llama.cpp, but if you have at least 10GB VRAM, generally you want to keep the whole model within that.
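
            If it doesn’t fit, the manual split looks roughly like this with llama-cpp-python (a rough sketch; the layer count and filename are guesses, tune them to your VRAM):

            ```python
            # Rough sketch of a manual RAM/VRAM split; anything not offloaded runs on the CPU from system RAM.
            from llama_cpp import Llama

            llm = Llama(
                model_path="models/llama-3.1-70b-instruct-q4_k_m.gguf",  # hypothetical local GGUF
                n_gpu_layers=40,  # keep ~40 layers in VRAM, leave the rest in system RAM
                n_ctx=4096,
            )
            ```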

              • brucethemoose@lemmy.world · 12 days ago

                Oh, 16GB should be plenty for SDXL.

                For Flux, I actually use a script that quantizes it down to 8-bit (not FP8, but true quantization with Hugging Face’s quanto), but I would also highly recommend checking this project out. It should fit everything in VRAM and be dramatically faster: https://github.com/mit-han-lab/nunchaku
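
                Not my exact script, but roughly that kind of quanto int8 quantization looks like this with diffusers (the model repo, prompt, and which modules get quantized are assumptions):

                ```python
                # Sketch: true int8 weight quantization of Flux with optimum-quanto, then generation on the GPU.
                import torch
                from diffusers import FluxPipeline
                from optimum.quanto import quantize, freeze, qint8

                pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)

                # Quantize the big transformer and the T5 text encoder to int8 (not FP8)
                quantize(pipe.transformer, weights=qint8)
                freeze(pipe.transformer)
                quantize(pipe.text_encoder_2, weights=qint8)
                freeze(pipe.text_encoder_2)

                pipe.to("cuda")
                image = pipe("a watercolor fox", num_inference_steps=4, guidance_scale=0.0).images[0]
                image.save("fox.png")
                ```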

                • rebelsimile@sh.itjust.works · 12 days ago

                  I just run SD1.5 models; my process involves a lot of upscaling since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves, and halves again, the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL’s general capabilities, but I really rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me, don’t know what you kids are doing with your fancy pony diffusions 😃)