• Onno (VK6FLAB)@lemmy.radio · 12 days ago

    I’m all for doing this, but I’m not particularly interested in compiling kernel modules just to make my base hardware work, which is why I used VMware until June, when my iMac died. That setup worked for me for 15 years. My Mac had 64 GB of RAM and was plenty fast enough to run my main Debian desktop inside a VM, with several other VMs doing duty as Docker hosts, client test environments, research environments and plenty more.

    Now I’m trying to figure out which bare bones modern hardware I can buy in Australia that will run Debian out of the box with no surprises.

    I’ve started investigating EC2 instances, but the desktop UI experience seems pretty crap.

    • curbstickle@lemmy.dbzer0.com · 12 days ago

      Pretty much anything… I haven’t compiled a kernel module in quite a few years on any Debian system, and that’s basically all I run. Was 15 years ago the last time you tried installing Linux on bare metal? Because things have definitely changed since 2009.

      If you want to avoid GPU hassles, go with Intel or AMD. Everything will autodetect.

        • curbstickle@lemmy.dbzer0.com · 12 days ago

          To be fair, I’d just recommend avoiding WiFi in general.

          Intel WiFi would be on my recommended list, or anything Atheros. I can’t understand self-hosting over WiFi though.

      • Onno (VK6FLAB)@lemmy.radio · edited · 12 days ago

        I’ve installed Debian on several bits of bare-metal hardware since: a Raspberry Pi that suddenly doesn’t detect the USB WiFi dongle that worked in the previous release, or the hours spent trying to get an extended Mac USB keyboard to work properly.

        Supermicro servers that didn’t support the on-board video card in VGA mode (for a text console).

        Then there was a solid-state “terminal” device which had no support for its onboard Ethernet controller.

        It’s not been without challenges, hence my reluctance. I moved to VMware to stabilise the experience, and it was the best decision I’ve ever made, other than standardising on Debian.

        I note that I’ve been installing Debian for a while. This is me in 2000:

        https://lists.debian.org/debian-sparc/2000/09/msg00038.html

        • curbstickle@lemmy.dbzer0.com · 12 days ago

          Now that’s a message that makes me miss my pizza box (and my DEC space heater).

          With a Mac keyboard I’m unsurprised you had to put in some extra work. The Supermicro is odd, unless you got that one board from some years back - I think it was an X9DRi? - which was all kinds of finicky; even with VMware I had to disable some power management features or it broke USB.

          Pretty much any standard hardware will do. I’d also mention you don’t really need server-grade hardware at this point; a cluster of desktop-grade machines will outperform it for the price (unless you’ve got heavy, sustained loads, which is a different story, but that’s not the majority of self-hosters).

          I’m running Proxmox nodes on tiny/mini/micros for the most part, which is where all my self-hosting happens, plus a couple of Ryzen machines running Arch or Debian, an OL box for some work stuff, etc. The T/M/M machines use less power than my server-grade hardware (which I still keep around for work stuff and testing), and performance with the cluster is on par or better IMO.

          • Onno (VK6FLAB)@lemmy.radio · 12 days ago

            I miss my SPARC; it had to be given away when I started travelling around Australia for five years. The last IBM ThinkPad replaced it. Anyone remember recompiling kernels to support the PATA/SATA driver so you could boot the thing? I never did get all the onboard hardware to work, and one day someone on the Debian X11 team decided that using multiple monitors as a single desktop wasn’t required any longer.

            I bought a 17” MacBook Pro and installed VMware on it, never looked back.

            I take your point on not needing server hardware. The Proxmox cluster was a gift on its way to landfill when my iMac died. I’m using it to figure out which platform to migrate to after Broadcom bought VMware.

            I think it would be irresponsible to go back to it in light of the developments since the purchase.

            • curbstickle@lemmy.dbzer0.com · 12 days ago

              I think it would be irresponsible to go back to it in light of the developments since the purchase.

              Absolutely agree. I’m actually shifting client hardware over from VMware; the last one is slated for the end of January.

              Laptops I’d say are more problematic, because the hardware choices are usually less standard stuff and more whatever cheap bits they can shove in. The worst recent issue, though, with a (Lenovo) ThinkPad was brightness controls not working. So I used ahkx11 - AFAIK there’s no Wayland support yet, but that’s fine for the 8-or-9-year-old laptop it’s on (now my wife’s laptop).

              I have a tendency to stick to the CLI for… just about everything, tbh. But regarding the shutdown bit: startup order and delay are applied in reverse for the shutdown process, so no scripting is needed if your issue is just proper sequencing.
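
              For anyone who wants the exact knobs, a minimal sketch with Proxmox’s qm tool (the VM IDs and timings below are made up):

              ```
              # Start the storage VM first and wait 30s before the next one;
              # on node shutdown the same order runs in reverse, and 'down' is
              # how long Proxmox waits before forcing the guest off.
              qm set 101 --startup order=1,up=30,down=120   # e.g. a NAS VM
              qm set 102 --startup order=2,up=15,down=60    # e.g. a Docker host
              ```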

              And I get it, a bunch of my hardware has been decommissioned gear one way or another! I just mostly take home the little desktops most places buy these days (can’t wait to get a couple of the slightly fat ones for my rack; those little guys are monsters).

    • computergeek125@lemmy.world · 12 days ago

      If you’re trying to do VDI in the cloud, that can get expensive fast on account of the GPU processing needed.

      Most of the protocols I know of that run CPU-only (and I’m perfectly happy to be proven wrong and introduced to something new) tend to fray at high latency or high resolution. The usual top two I’ve seen are VNC and RDP (the XRDP project on Linux), with NoMachine and plain X11 over SSH right behind. I think NoMachine had the best performance of those, but it’s been a hot minute since I’ve personally used it. XRDP is the one I’ve used most often; getting login/lock/unlock working was fiddly at first, but it seems to be holding stable.
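
      If anyone wants to try the XRDP route, the Debian-side setup is short; a sketch, with the hostname and username as placeholders:

      ```
      # On the Debian box you want to reach:
      sudo apt install xrdp
      sudo adduser xrdp ssl-cert         # Debian: let xrdp read the TLS key
      sudo systemctl enable --now xrdp   # listens on TCP 3389

      # From a client (FreeRDP shown, but any RDP client works):
      xfreerdp /v:debian-box.local /u:youruser
      ```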

      Jumping from “basic connection, maybe barely but not always suitable for video” to “ultra high grade high speed”, we have Parsec and Sunshine+Moonlight. Parsec is currently limited to Windows/Mac hosting (with a Linux client available), and both Parsec and Sunshine require or recommend a reasonable GPU to handle the encoding stage (although I believe Sunshine may support an x264 software encoder, which can exert a heavy CPU tax depending on your resolution). The specific problem of sourcing a GPU in the cloud (since you mention EC2) becomes the expensive part. This class of remote access frays less at high resolution and frame rate because it’s designed to transport video and games, rather than taking shortcuts to get a minimum desktop visible.

      • Onno (VK6FLAB)@lemmy.radio · 12 days ago

        Yeah, I was getting ready to use NoMachine on a recommendation, until I saw the macOS uninstall script and the lack of any progress by the development team, who went so far as to delete knowledge-base articles and have been promising updates “in the next release” since three versions ago.

        An added wrinkle is getting local USB devices visible in a VDI session - say, a local thumb drive (in this case a Zoom H5 audio recorder) so I can edit audio - not to mention getting actual audio across the network at all, let alone keeping it synchronised.

        It’s not trivial :)
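
        For what it’s worth, RDP at least offers client-side redirection for drives and audio; a rough sketch with FreeRDP, where the host, share path and USB IDs are all placeholders:

        ```
        # Forward a local folder (e.g. the H5's card once mounted) and audio:
        xfreerdp /v:vdi-host.example /u:onno \
          /drive:h5,/media/onno/H5 \
          /sound

        # Raw USB redirection, if the client build includes the urbdrc channel
        # (the vendor:product ID below is a placeholder):
        xfreerdp /v:vdi-host.example /u:onno /usb:id,dev:0123:4567
        ```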

        At the moment I’m experimenting with a Proxmox cluster, but VMs from VMware don’t just run, so for ancient operating systems in a VM, like Win98SE, you need drivers which are no longer available … odd, since that’s precisely why I run it in a VM. Not to mention that the Proxmox UI expects you to run a series of commands in the console every time you want to add a drive, something which happens fairly often.
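
        The sort of thing the console expects looks like this; a sketch assuming a VM ID of 100 and local-lvm storage:

        ```
        # Import a whole VM from its OVF export:
        qm importovf 100 Win98SE.ovf local-lvm

        # ...or pull a single VMware disk into an existing VM, then attach
        # it from the VM's Hardware tab:
        qm importdisk 100 Win98SE.vmdk local-lvm
        ```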

        For shits and giggles, try finding a way to properly shut down a cluster without having to write scripts or shut each node down individually.

        As I said, not trivial :)