  • magikmw@lemm.ee · 12 days ago

    The heck is HHD+? Is this some newfangled storage tech I’m too SSD to understand?

  • Rimu@piefed.social · 12 days ago

    YouTube is usually the first thing I open on first boot of a new machine. That way I know if the sound is working, the network is working and the video drivers are OK, all at once.

  • friend_of_satan@lemmy.world · 12 days ago

    On the opposite end, what is the cheapest device that you could watch YT on? I’m thinking one of those retro game consoles, which are like $60, run Linux, and have WiFi.

    • pumpkinseedoil@mander.xyz · edited · 12 days ago

      Runs flawlessly on my Raspberry Pi 4 (2 GB RAM, bought new for 28€)

      *requires a keyboard and screen

    • daniskarma@lemmy.dbzer0.com · 12 days ago

      I used to be able to watch YouTube (on Linux) on a 30-buck Android TV device on which I installed CoreELEC.

      Sadly, the YouTube apps on there stopped working for me a while ago due to the war on ad blockers. But the device was perfectly capable of playing YouTube.

      I suppose that with TubeArchivist and Jellyfin you could still somehow watch YouTube.

  • Onno (VK6FLAB)@lemmy.radio · 12 days ago

    I’m all for doing this, but I’m not particularly interested in compiling kernel modules to make my base hardware work, which is why I used VMware until June when my iMac died. This worked for me for 15 years. My Mac had 64 GB of RAM and was plenty fast to run my main Debian desktop inside a VM with several other VMs doing duty as Docker hosts, client test environments, research environments and plenty more.

    Now I’m trying to figure out which bare bones modern hardware I can buy in Australia that will run Debian out of the box with no surprises.

    I’ve started investigating EC2 instances, but the desktop UI experience seems pretty crap.

    • computergeek125@lemmy.world · 12 days ago

      If you’re trying to do VDI in the cloud, that can get expensive fast on account of the GPU processing needed.

      Most of the protocols I know of that run CPU-only (and I’m perfectly happy to be proven wrong and introduced to something new) tend to fray at high latency or high resolution. The usual top two I’ve seen are VNC and RDP (the XRDP project on Linux), with NoMachine and plain X11 over SSH right behind them. I think NoMachine had the best performance of those, but it’s been a hot minute since I’ve personally used it. XRDP is the one I’ve used the most often; getting login/lock/unlock working was fiddly at first, but it seems to be holding stable now.

      Jumping from “basic connection, maybe but not always suitable for video” to “ultra high grade, high speed”, we have Parsec and Sunshine+Moonlight. Parsec is currently limited to Windows/Mac hosting (with a Linux client available), and both Parsec and Sunshine require or recommend a reasonable GPU to handle the encoding stage (although I believe Sunshine may support an x264 software encoder, which can exert a heavy CPU tax depending on your resolution). The specific problem of sourcing a GPU in the cloud (since you mention EC2) becomes the expensive part. This class of remote access tends to fray less at high resolution and frame rate because it’s designed to transport video and games, rather than taking shortcuts to get a minimum desktop visible.
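
      Rough numbers on why the encoding stage is the crux, as a back-of-the-envelope sketch (the 8 Mbps figure is just an assumed “typical” H.264 desktop bitrate, not a measurement):

      ```python
      # Back-of-the-envelope only: raw vs. encoded bandwidth for a 1080p60 desktop.
      # The 8 Mbps H.264 figure is an assumed typical bitrate, not a benchmark.
      width, height, fps = 1920, 1080, 60
      bytes_per_pixel = 3                                    # 24-bit colour, no alpha

      raw_bps = width * height * bytes_per_pixel * fps * 8   # uncompressed, bits/s
      encoded_bps = 8_000_000                                # assumed H.264 stream

      print(f"raw:     {raw_bps / 8 / 1e6:6.1f} MB/s")       # ~373 MB/s
      print(f"encoded: {encoded_bps / 8 / 1e6:6.1f} MB/s")   # ~1 MB/s
      print(f"ratio:   ~{raw_bps / encoded_bps:.0f}x")
      ```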

      • Onno (VK6FLAB)@lemmy.radio · 12 days ago

        Yeah, I was getting ready to use NoMachine on a recommendation, until I saw the macOS uninstall script and the lack of any progress by the development team, which has gone so far as deleting knowledge base articles and promising updates “in the next release” three versions ago.

        An added wrinkle is getting local USB devices visible in a VDI session, like say a local thumb drive (in this case it’s a Zoom H5 audio recorder), so I can edit audio. Not to mention getting actual audio across the network at all, let alone keeping it synchronised.

        It’s not trivial :)

        At the moment I’m experimenting with a Proxmox cluster, but VMs from VMware don’t just run, so for ancient operating systems in a VM like Win98SE you need drivers which are no longer available … odd, since that’s precisely why I run it in a VM. Not to mention that the Proxmox UI expects you to run a series of commands in the console every time you want to add a drive, something which happens fairly often.

        For shits and giggles, try finding a way to properly shut down a cluster without having to write scripts or shut each node down individually.

        As I said, not trivial :)

    • curbstickle@lemmy.dbzer0.com · 12 days ago

      Pretty much anything… I haven’t compiled a kernel module in quite a few years on any Debian system, and that’s basically all I run. Was 15 years ago the last time you tried installing Linux on bare metal? Because things have definitely changed since 2009.

      If you want to avoid GPU hassles, go with Intel or AMD. Everything will autodetect.

        • curbstickle@lemmy.dbzer0.com · 12 days ago

          To be fair, I’d just recommend avoiding WiFi in general.

          Intel WiFi would be on my recommended list, or anything Atheros. I can’t understand self-hosting over WiFi though.

      • Onno (VK6FLAB)@lemmy.radio · edited · 12 days ago

        I’ve installed Debian on several bits of bare-metal hardware since. A Raspberry Pi that suddenly doesn’t detect the USB WiFi dongle that worked in the previous release. Or the hours spent trying to get an extended Mac USB keyboard to work properly.

        Supermicro servers that didn’t support the onboard video card in VGA mode (for a text console).

        Then there was a solid-state “terminal” device which didn’t have support for the onboard ethernet controller.

        It’s not been without challenge, hence my reluctance. I moved to VMware to stabilise the experience and it was the best decision I’ve ever made, other than standardising on Debian.

        I note that I’ve been installing Debian for a while. This is me in 2000:

        https://lists.debian.org/debian-sparc/2000/09/msg00038.html

        • curbstickle@lemmy.dbzer0.com · 12 days ago

          Now that’s a message that makes me miss my pizza box (and my DEC space heater).

          Choosing a Mac keyboard, I’m unsurprised you had to put in some extra work. The Supermicro is odd, unless you got that one board from some years back - I think it was an X9DRi? - which was all kinds of finicky, even with VMware, where I had to disable some power management features or it broke USB.

          Pretty much any standard hardware will do. I’d also mention you don’t really need server-grade hardware at this point; a cluster of desktop-grade machines will outperform it for the price (unless you’ve got heavy and sustained loads, which is a different story, but that’s not the majority of self-hosters).

          I’m running Proxmox nodes on tiny/mini/micros for the most part, where all my self-hosting happens, plus a couple of Ryzen machines with Arch or Debian, an OL box for some work stuff, etc. Less power use with T/M/M than my server-grade hardware (which I still have, sometimes for work stuff and testing), and performance with my cluster is on par or better IMO.

          • Onno (VK6FLAB)@lemmy.radio · 12 days ago

            I miss my SPARC; it had to be given away when I started travelling around Australia for five years. The last IBM ThinkPad replaced it - anyone remember recompiling kernels to support the PATA/SATA driver so you could boot the thing? I never did get all the onboard hardware to work, and one day someone in the Debian X11 team decided that using multiple monitors as a single desktop wasn’t required any longer.

            I bought a 17” MacBook Pro and installed VMware on it, never looked back.

            I take your point on not needing server hardware. The Proxmox cluster was a gift on its way to landfill when my iMac died. I’m using it to figure out which platform to migrate to after Broadcom bought VMware.

            I think it would be irresponsible to go back to it in light of the developments since the purchase.

            • curbstickle@lemmy.dbzer0.com · 12 days ago

              I think it would be irresponsible to go back to it in light of the developments since the purchase.

              Absolutely agree. I’m actually shifting client hardware over from VMware; the last one is slated for the end of Jan, actually.

              Laptops, I’d say, are more problematic because the hardware choices are usually less standard stuff and more whatever cheap bits they can shove in. I think the worst recent issue, though, with a (Lenovo) ThinkPad was the brightness controls not working. So I used ahkx11 - AFAIK no Wayland support yet, but that’s fine for the 8 or 9 year old laptop it’s on (now my wife’s laptop).

              I have a tendency to stick to the CLI for… just about everything tbh. But regarding the shutdown bit: startup order and delay run in reverse for the shutdown process, so no scripting is needed if your issue is just proper sequencing.

              And I get it - a bunch of my hardware has come from decommissioned gear one way or the other! I mostly just take home the little desktops most places buy these days (can’t wait to get a couple of the slightly fat ones for my rack - those little guys are monsters).

  • brucethemoose@lemmy.world · edited · 12 days ago

    I’m self-hosting LLMs for family use (cause screw OpenAI and corporate, closed AI), and I am dying for more VRAM and RAM now.

    Seriously looking at replacing my 7800X3D with Strix Halo when it comes out, maybe a 128GB board if they sell one. Or a 48GB Intel Arc if Intel is smart enough to sell that. And I would use every last megabyte, even if I had a 512GB board (which is about the bare minimum to host DeepSeek V3).
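
    Rough arithmetic behind that 512GB figure (671B is the published total parameter count for DeepSeek V3; the bits-per-weight values and overhead allowance are assumptions):

    ```python
    # Back-of-the-envelope: memory needed just to hold DeepSeek V3's weights.
    # 671B total parameters is the published figure; bits/weight are assumptions.
    params = 671e9

    for label, bpw in [("FP16", 16), ("FP8", 8), ("~4.5-bit quant", 4.5)]:
        print(f"{label:>15}: ~{params * bpw / 8 / 1e9:,.0f} GB for weights alone")

    # Even at ~4.5 bits/weight that's roughly 380 GB before KV cache and OS
    # overhead, which is why ~512 GB is about the practical floor.
    ```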

    • uis@lemm.ee · 12 days ago

      Aren’t LLMs external algorithms at this point? As in, all the data will not fit in RAM.

      • brucethemoose@lemmy.world · edited · 12 days ago

        No, all the weights, all the “data”, essentially have to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet, but it is making a pass through all the weights every time a word is generated.

        There are systems to augment the prompt with external data (RAG is one term for this), but fundamentally the system is closed.
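
        A minimal sketch of what that per-word pass looks like, using the Hugging Face transformers library (the small Qwen model here is just a stand-in so it fits on nearly any GPU):

        ```python
        # Minimal greedy decoding loop: every generated token is one full forward
        # pass through all of the loaded weights. No network calls involved.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen2.5-0.5B-Instruct"   # stand-in model, any causal LM works
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.float16
        ).to("cuda")

        ids = tok("Self-hosting is", return_tensors="pt").input_ids.to("cuda")
        with torch.no_grad():
            for _ in range(20):                    # one iteration per generated token
                logits = model(ids).logits         # full pass over every weight in VRAM
                next_id = logits[0, -1].argmax()   # greedy: pick the most likely token
                ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
        print(tok.decode(ids[0]))
        ```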

        • Hackworth@lemmy.world · 12 days ago

          Yeah, I’ve had decent results running the 7B/8B models, particularly the fine-tuned ones for specific use cases. But as ya mentioned, they’re only really good within their scope for a single prompt or maybe a few follow-ups. I’ve seen little improvement with the 13B/14B models and find them mostly not worth the performance hit.

          • brucethemoose@lemmy.world · 12 days ago

            Depends which 14B. Arcee’s 14B SuperNova Medius model (which is a Qwen 2.5 with some training distilled from larger models) is really incredible, but old Llama 2-based 13B models are awful.

            • Hackworth@lemmy.world · 12 days ago

              I’ll try it out! It’s been a hot minute, and it seems like there are new options all the time.

              • brucethemoose@lemmy.world · edited · 12 days ago

                Try a new quantization as well! Like an IQ4-M depending on the size of your GPU, or even better, a 4.5bpw exl2 with Q6 cache if you can manage to set up TabbyAPI.
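
                If you go the GGUF route, the llama-cpp-python bindings make trying a quant pretty painless; a sketch, with the filename as a placeholder for whichever IQ4-class quant you grab:

                ```python
                # Sketch: load an IQ4-class GGUF quant fully onto the GPU with
                # llama-cpp-python. The filename is a placeholder, not a specific rec.
                from llama_cpp import Llama

                llm = Llama(
                    model_path="SuperNova-Medius-IQ4_XS.gguf",  # whatever quant you downloaded
                    n_gpu_layers=-1,   # -1 = offload every layer to VRAM
                    n_ctx=8192,        # context length; lower it if you run out of VRAM
                )

                out = llm("Explain what a 4-bit quant trades away:", max_tokens=128)
                print(out["choices"][0]["text"])
                ```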

    • repungnant_canary@lemmy.world · 12 days ago

      I don’t know what the pricing is like, but maybe it’s worth building a separate server with a second-hand TPU. Used server CPUs and RAM are apparently quite affordable in the US (assuming you live there), so maybe it’s the case for TPUs as well. And commercial GPUs/TPUs have more VRAM.

      • brucethemoose@lemmy.world · edited · 12 days ago

        second-hand TPU

        From where? I keep a look out for used Gaudi/TPU setups, but they’re like impossible to find, and usually in huge full-server configs. I can’t find Xeon Max GPUs or CPUs either.

        Also, Google’s software stack isn’t really accessible. TPUs are made for internal use at Google, not for resale.

        You can find used AMD MI100s or MI210s, sometimes, but the go-to used server card is still the venerable Tesla P40.

    • Altima NEO@lemmy.zip · 12 days ago

      I’ve got a 3090, and I feel ya. Even 24 gigs is hitting the cap pretty often and slowing to a crawl once system RAM starts being used.

      • brucethemoose@lemmy.world · edited · 12 days ago

        You can’t let it overflow if you’re using LLMs on Windows. There’s a toggle for that in the Nvidia settings, and you can get llama.cpp to offload through its settings (or better yet, use exllama instead).

        But… yeah. Qwen 32B fits in 24GB perfectly, and it’s great, but 72B really feels like the intelligence tipping point where I can dump so many API models, and that just barely won’t fit in 24GB.
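
        The rough math on why 32B squeaks in and 72B doesn’t (the bits-per-weight and overhead numbers are ballpark assumptions, not measurements):

        ```python
        # Ballpark VRAM math for a 24 GB card. The ~4.5 bits/weight and the
        # cache/overhead allowance are rough assumptions, not measurements.
        vram_gb = 24
        bpw = 4.5                 # typical "good" 4-bit-ish quant
        overhead_gb = 3           # KV cache, CUDA context, desktop, etc.

        for params_b in (32, 72):
            weights_gb = params_b * 1e9 * bpw / 8 / 1e9
            fits = weights_gb + overhead_gb <= vram_gb
            print(f"{params_b}B: ~{weights_gb:.0f} GB weights + ~{overhead_gb} GB overhead "
                  f"-> {'fits' if fits else 'does not fit'} in {vram_gb} GB")
        ```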

    • rebelsimile@sh.itjust.works · 12 days ago

      I know it’s a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is that you can think of the VRAM like an aperture: less VRAM closes off how much you can do in a single pass.

      • uis@lemm.ee · 12 days ago

        You can always use system memory too. Not exactly UMA, but close enough.

        Or just use the iGPU.

          • brucethemoose@lemmy.world · edited · 12 days ago

            You don’t want it to anyway, as “automatic” spillover with an LLM is painfully slow.

            The RAM/VRAM split is manually configurable in llama.cpp, but if you have at least 10GB of VRAM, you generally want to keep the whole model within that.
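
            If it truly doesn’t fit, the manual split looks something like this with the llama-cpp-python bindings (the filename and layer count are placeholders you’d tune per model and card):

            ```python
            # Sketch of a manual RAM/VRAM split: put as many layers as fit on the
            # GPU and let the rest run (slowly) from system RAM.
            from llama_cpp import Llama

            llm = Llama(
                model_path="some-70b-model-Q4_K_M.gguf",  # placeholder filename
                n_gpu_layers=45,   # tune: as many layers as your VRAM actually holds
                n_ctx=4096,
            )
            print(llm("Hello", max_tokens=32)["choices"][0]["text"])
            ```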

              • brucethemoose@lemmy.world · edited · 12 days ago

                Oh, 16GB should be plenty for SDXL.

                For Flux, I actually use a script that quantizes it down to 8 bit (not FP8, but true quantization with Hugging Face quanto), but I would also highly recommend checking this project out. It should fit everything in VRAM and be dramatically faster: https://github.com/mit-han-lab/nunchaku
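
                The quanto route looks roughly like this; a sketch assuming the diffusers FluxPipeline and optimum-quanto, with the model ID and settings as examples rather than my exact script:

                ```python
                # Sketch: 8-bit weight quantization of Flux's transformer with
                # optimum-quanto. Model ID and settings are examples only.
                import torch
                from diffusers import FluxPipeline
                from optimum.quanto import freeze, qint8, quantize

                pipe = FluxPipeline.from_pretrained(
                    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
                )
                quantize(pipe.transformer, weights=qint8)  # true int8 weights, not FP8 casting
                freeze(pipe.transformer)                   # bake the quantized weights in
                pipe.to("cuda")
                # (for really tight VRAM you'd also quantize the T5 text encoder
                # or enable CPU offload)

                image = pipe(
                    "a watercolor fox in a pine forest",
                    num_inference_steps=28,
                    guidance_scale=3.5,
                ).images[0]
                image.save("fox.png")
                ```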

                • rebelsimile@sh.itjust.works · 12 days ago

                  I just run SD1.5 models; my process involves a lot of upscaling since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves, and halves again, the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL’s general capabilities, but I really rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me; don’t know what you kids are doing with your fancy pony diffusions 😃)
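
                  For anyone curious, that generate-small-then-upscale loop is basically a manual hires fix; a sketch with diffusers, where the checkpoint path, strength and steps are placeholders rather than my actual settings:

                  ```python
                  # Sketch: SD1.5 base render at 512, then an img2img pass at 1024 to
                  # add detail. Checkpoint path, strength and steps are placeholders.
                  import torch
                  from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

                  pipe = StableDiffusionPipeline.from_pretrained(
                      "path/to/your-sd15-checkpoint", torch_dtype=torch.float16
                  ).to("cuda")

                  prompt = "a lighthouse at dusk, oil painting"
                  base = pipe(prompt, width=512, height=512, num_inference_steps=30).images[0]

                  img2img = StableDiffusionImg2ImgPipeline(**pipe.components).to("cuda")
                  upscaled = base.resize((1024, 1024))    # cheap resize; img2img re-adds detail
                  final = img2img(
                      prompt, image=upscaled, strength=0.4, num_inference_steps=30
                  ).images[0]
                  final.save("lighthouse_1024.png")
                  ```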

      • brucethemoose@lemmy.world · edited · 12 days ago

        The issue with Macs is that Apple does price gouge for memory, your software stack is effectively limited to llama.cpp or MLX, and 70B-class LLMs do start to chug, especially at high context.

        Diffusion is kinda a different duck. It’s more compute heavy, yes, but the “generally accessible” software stack is also much less optimized for Macs than it is for transformers LLMs.

        I view AMD Strix Halo as a solution to this, as it’s a big iGPU with a wide memory bus like a Mac, but it can use the same sort of software stacks that discrete GPUs use (ROCm rather than CUDA) for that speed/feature advantage… albeit with some quirks. But I’m willing to put up with that if AMD doesn’t price gouge it.

        • rebelsimile@sh.itjust.works · 12 days ago

          Apple price gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities then I think it’s one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), as I use the Mac for personal use also.

          I’ll look into the AMD Strix though.

          • brucethemoose@lemmy.world · edited · 12 days ago

            GDDR is actually super cheap! I think it would only be like another $75 on paper to double the 4090’s VRAM to 48GB (like they do for pro cards already).

            Nvidia just doesn’t do it, for market segmentation. AMD doesn’t do it for… honestly I have no idea why? They basically have no pro market to lose; the only explanation I can come up with is that their CEOs are colluding because they are cousins. And Intel doesn’t do it because they didn’t make a (consumer) GPU that was really worth it until the B580.

            • rebelsimile@sh.itjust.works · 12 days ago

              Oh I didn’t mean “should cost $4000” just “would cost $4000”. I wish that the vram on video cards was modular, there’s so much ewaste generated by these bottlenecks.

              • brucethemoose@lemmy.world · edited · 12 days ago

                Oh I didn’t mean “should cost $4000” just “would cost $4000”

                Ah, yeah. Absolutely. The situation sucks though.

                I wish that the vram on video cards was modular, there’s so much ewaste generated by these bottlenecks.

                Not possible: the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too, unfortunately. SO-DIMMs have already topped out at 5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.

                But look into CAMM modules and LPCAMMs. My hope is that we will get modular LPDDR5X-8533 on AMD Strix Halo boards.
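
                For a sense of why bus width and transfer rate matter so much (the 256-bit figure for Strix Halo is the widely reported spec; treat it as an assumption):

                ```python
                # Peak memory bandwidth = bus width (bytes) x transfer rate (MT/s).
                # Strix Halo's 256-bit LPDDR5X bus is the widely reported figure, an
                # assumption here. The 4090 numbers are its published spec.
                def peak_bandwidth_gbs(bus_bits: int, mt_per_s: float) -> float:
                    return bus_bits / 8 * mt_per_s * 1e6 / 1e9   # GB/s

                configs = {
                    "Dual-channel DDR5-5600 (desktop)":  (128, 5600),
                    "Strix Halo LPDDR5X-8533 (256-bit)": (256, 8533),
                    "RTX 4090 GDDR6X (384-bit, 21000)":  (384, 21000),
                }
                for name, (bits, rate) in configs.items():
                    print(f"{name}: ~{peak_bandwidth_gbs(bits, rate):.0f} GB/s")
                ```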

      • thebestaquaman@lemmy.world · edited · 12 days ago

        Not running any LLMs, but I do a lot of mathematical modelling, and my 32 GB RAM M1 Pro MacBook compiles code and crunches numbers like an absolute champ! After about a year, most of my colleagues ditched their old laptops for a MacBook themselves, after noticing that my machine out-performed theirs every day and saved me a bunch of time day-to-day.

        Of course, be a bit careful when buying one: Apple cranks up the price like hell if you start speccing out the machine a lot. Especially for RAM.

  • gubblebumbum@lemm.ee · 12 days ago

    I just upgraded from a Core 2 Duo with 2GB of DDR2 to a 7th-gen i3 with 8GB of DDR4, and for the first time in my life an actual GPU (Nvidia K620).

    • Cort@lemmy.world · 12 days ago

      Heh, I did something similar for my dad. He went from 2x Core 2 Quad with 24GB DDR2 to a 12th-gen i5 with 32GB DDR5. Something like triple the compute power, at under $500, when he paid ~$5k for the original.

    • Zetta@mander.xyz · 12 days ago

      That’s crazy! If you don’t mind me asking what do you typically do on your machine and how much time do you spend on it?

      I’m just curious because I spend a lot of time on my PC and can only imagine how horrible all the stuff I do would be on that hardware lol

      • gubblebumbum@lemm.ee · 12 days ago

        I have only used it for doomscrolling and watching videos so far. I spend at least 12 hours on it; I’m disabled and live in the third world, so I don’t have a lot going on in my life lol.

  • ekZepp@lemmy.world · 12 days ago

    I have a smart TV but it’s kinda old, so when I want to watch a bit of YouTube without ads I connect my Steam Deck and open it in Firefox. The ad-free experience is well worth the time to do it.

    • HandBash@lemmy.world · 12 days ago

      I think I will start doing that. Just dock it and use KDE on my phone to wirelessly control it, I’m thinking.

  • Toribor@corndog.social · 12 days ago

    I upgraded to a new GPU a few weeks ago, but all I’ve been doing is playing Factorio, which would run just fine on 15-year-old hardware.