• kalleboo@lemmy.world · 2 months ago

    All my stuff is running on a 6-year-old Synology D918+ that has a Celeron J3455 (4-core 1.5 GHz) but upgraded to 16 GB RAM.

    Funnily enough, my router is far more powerful (a Core i3-8100T), but I was picking from the ThinkCentre Tiny options and was paranoid about the performance needed for a 10 Gbit internet connection.

  • Deway@lemmy.world · 2 months ago

    My first @home server was an old, defective iMac G3, but it did the job (and then died for good). A while back I got a Raspberry Pi 3, and then a small thin client with some small AMD CPU. They (barely) got the job done.

    I replaced them with an HP EliteDesk G2 micro with an i5-6500T. I don’t know what to do with the extra power.

      • Deway@lemmy.world · 2 months ago

        Prosody (XMPP server), a Git instance, a SearXNG instance, Tandoor (recipe manager), Nextcloud, and Syncthing for my phone and my partner’s (one could say Nextcloud should be enough, but I use them for different purposes), plus a few other things.

        It doesn’t even use an eighth of its total RAM, and I’ve never seen the CPU go past 20%. But it uses a lot less power than the thin client it replaced, so it was not a bad investment, especially considering its price.

  • Rooty@lemmy.world · 2 months ago

    Enterprise-level hardware costs a lot, is noisy, and needs a dedicated server room; old laptops cost nothing.

    • pixelscript@lemm.ee · 2 months ago

      I got a 1U rack server for free from a local business that was upgrading their entire fleet. Would’ve been e-waste otherwise, so they were happy to dump it off on me. I was excited to experiment with it.

      Until I got it home and found out it was as loud as a vacuum cleaner with all those fans. Oh, god no…

      I was living with my parents at the time, and they had a basement I could stick it in where its noise pollution was minimal. I mounted it up to a LackRack.

      Since moving out to a 1 bedroom apartment, I haven’t booted it. It’s just a 70 pound coffee table now. :/

  • jws_shadotak@sh.itjust.works · 2 months ago

    I was for a while. Hosted a LOT of stuff on an i5-4690K overclocked to hell and back. It did its job great until I replaced it.

    Now my servers don’t lag anymore.

  • seathru@lemmy.sdf.org · 2 months ago

    Yup. Gateway E-475M. It has trouble transcoding some Plex streams, but it keeps chugging along. $5 well spent.

    • adarza@lemmy.ca · 2 months ago

      it can do it!

      … just not today

      got a ripping and converting pc that ain’t any better. it’s all it does, so speed don’t matter any. HandBrake has a queue, so it’s no big deal. i just let it go… and go… and go…

    • bdonvr@thelemmy.club · 2 months ago

      I had quite a few Docker containers going on a Raspberry Pi 4. Worked fine, though it did have 8 GB of RAM, to be fair.

  • robalees@lemmy.world · 2 months ago

    2012 Mac Mini with a fucked NIC, because I manhandled it putting in an SSD. Those things are tight inside!

        • Cort@lemmy.world · 2 months ago

          Lol, I used to have an ’08 Mac Mini, and that required a razor blade and putty knives to open. I got pretty good at it after separately upgrading the RAM, adding an SSD, and swapping out the CPU for the most powerful option, one Apple didn’t even offer.

          • robalees@lemmy.world · 2 months ago

            When I used to work at the “Fruit Stand” I thankfully never had to repair those white-backed Minis, but I do remember the putty knives being around. The unibody iMac was the worst: you had to pizza-cutter the whole LCD off the frame to replace anything, then glue it back on!

            • Cort@lemmy.world · 2 months ago

              Lol by the time I actually needed to upgrade from that mini, all the fruit stand stuff wasn’t really upgradable anymore. It was really frustrating, so I jumped ship to Windows.

              Those iMac screens seemed so fiddly to remove just to get access to the drives. Why wouldn’t they just bolt them in instead of using glue? (I know why, but I still don’t like it.)

  • empireOfLove2@lemmy.dbzer0.com · 2 months ago

    your hardware ain’t shit until it’s a first-gen Core 2 Duo in a random Dell office PC with 2 GB of memory, which you use specifically because it’s a cheaper way to get x86 than your Raspberry Pi.

    Also, spec sheets lie most of the time: a machine may technically run fine with more memory, especially an older one from when DIMM capacities were a lot lower than they can be now. It just won’t be “supported”.

  • dan@upvote.au · 2 months ago

    You can do quite a bit with 4 GB of RAM. A lot of people use VPSes with 4 GB (or less) for web hosting, small database servers, backups, etc. Big providers like DigitalOcean tend to offer 1 GB of RAM in their lowest plans.

  • ipkpjersi@lemmy.ml · 2 months ago

    Not anymore. My main self-hosting server is an i7 5960x with 32GB of ECC RAM, RTX 4060, 1TB SATA SSD, and 6x6TB 7200RPM drives.

    I did use to host some services on a $5 or $10 a month VPS, though, and eventually a $40 a month dedicated server.

      • ipkpjersi@lemmy.ml · 2 months ago

        I use it for Plex/Jellyfin; it’s the cheapest NVIDIA GPU that supports both AV1 encoding and decoding. Even though Plex doesn’t support AV1 yet (IIRC), it’s still more futureproof that way. I picked it up for around $200 on a sale, and it was well worth it IMO.
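
        As a concrete (hypothetical) example of putting that encoder to work, transcoding a file to AV1 with ffmpeg’s NVENC backend would look roughly like this; the filenames and quality settings here are made up:

```shell
# Decode the input on the GPU, re-encode the video track to AV1 in
# hardware (av1_nvenc) in constant-quality mode; copy audio untouched.
ffmpeg -hwaccel cuda -i input.mkv \
    -c:v av1_nvenc -preset p5 -cq 30 \
    -c:a copy output.mkv
```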

    • ripcord@lemmy.world · 2 months ago

      Yeah, not here either. I’m now at the point where I keep wanting to replace my last host that’s limited to 16 GB. All the others (at least the ones I care about RAM on) support 64 GB or more now.

      • ipkpjersi@lemmy.ml · 2 months ago

        64 GB would be a nice amount of memory to have. I’ve been okay with 32 GB so far, thankfully.

  • SolaceFiend@lemmy.world · 2 months ago

    I’m still interested in self-hosting, and I actually tried getting into it a year or so ago. I bought a s***** desktop computer from Walmart and installed Windows Server 2020 on it to practice.

    I thought I could use it to put some bullet points on my resume, and maybe get into self-hosting later with Nextcloud. I ended up not fully following through, because I felt like I first needed to buy new editions of the server administration and network infrastructure textbooks I had learned from a decade prior before I could continue with giving it an FQDN, setting it up as a primary DNS server (or pointing it at one), etc.

    So it was only accessible on my LAN, because I was afraid of making it remotely accessible until I knew I had good firewall rules and had set up the primary DNS server correctly, and I ultimately just never finished setting it up. The most I ever accomplished was getting it working as a file server for personal storage and creating local accounts for myself and my mom, whom I was living with at the time. It could authenticate access over our local Wi-Fi, but I never got further.

    • PeaceFrog@feddit.nl · 2 months ago

      Hard to understand why that was difficult. For some reason Windows admins are afraid of experimenting and breaking things. I practically became a sysadmin by drinking beer and playing with Linux, containers, etc.

  • biscuitswalrus@aussie.zone · 2 months ago

    3x Intel NUC 6th-gen i5 (2 cores), 32 GB RAM each. Proxmox cluster with Ceph.

    I just ignored the stated limitation and once tried a single 32 GB SO-DIMM (out of a laptop), and it worked fine, but I went back to 2x 16 GB DIMMs since the real bottleneck was still the 2-core CPU. Lol.

    I’ve been running that cluster for 7 or so years now, since I bought them new.

    I suggest only running off shit-tier hardware like this, since three nodes gives you redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers, RD Gateway, connection broker, session hosts, FSLogix, etc., back when Microsoft had only just bought that tech. Meanwhile my home “arr” stack just chugs along in Docker containers. Even my OPNsense router is virtual, running on them. Just get a proper managed switch and take the internet in on a VLAN to the guest VM via a separate virtual NIC.

    Point is, it’s still capable today.

    • renzev@lemmy.world (OP) · 2 months ago

      How is Ceph working out for you, btw? I’m looking into distributed storage solutions right now. My use case is to have a single unified filesystem/index, but to store the contents of the files on different machines, possibly with redundancy. In particular, I want to be able to upload some files to the cluster and still see them (the directory structure and filenames) even when the underlying machine storing their content goes offline. Is that a valid use case for Ceph?

      • biscuitswalrus@aussie.zone · 2 months ago

        I’m far from an expert, sorry, but my experience is so far so good (literally wizard-configured in Proxmox, set and forget), even through the loss of a single disk. Performance for VM disks has been great.

        I can’t see why regular files would be any different.

        I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.
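
        For reference, a sketch of the equivalent CLI setup (Proxmox’s wizard did all of this for me; the pool name here is hypothetical):

```shell
# Create a replicated pool with 128 placement groups, keep 2 copies of
# every object, and stay writable with 1 copy during a disk outage.
ceph osd pool create vmpool 128
ceph osd pool set vmpool size 2
ceph osd pool set vmpool min_size 1
ceph osd pool application enable vmpool rbd   # mark it for RBD/VM disks
```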

        I’m not sure about seeing the file system while the hosts are all offline, but if you’ve got any one system with a valid copy online, you should be able to see it. I do. But my emphasis is generally on getting the host back online.

        I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work. Syncthing alone might even do the job without Ceph.

        I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and my media server to pull it off. That’s just TrueNAS SCALE, so it handles the data similarly. ZFS is also very good, but until SCALE came out it wasn’t really possible to “add a compute node to expand your storage pool”, which is how I want my VM hosts to work. Scaling ZFS out that way looks much harder than Ceph.

        Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware: see how it goes on dummy data, then blow it away and try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you’ve learned works best.

        • renzev@lemmy.world (OP) · 2 months ago

          Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away and try something else.

          This is good advice, thanks! Pretty much what I’m doing right now. I already tried IPFS and found that it didn’t meet my needs. Currently setting up a Tahoe-LAFS grid to see how it works. Will try out Ceph after this.

  • ebc@lemmy.ca · 2 months ago

    Running a bunch of services here on an i3 PC I built for my wife back in 2010. I’ve since upgraded the RAM to 16 GB, added as many hard drives as there are SATA ports on the mobo, re-seated the heatsink, etc.

    It’s pretty much always run Debian, but all the services are in Docker these days, so the base distro doesn’t matter as much as it used to.

    I’d like to get a good backup solution going for it so I can actually use it for important data, but realistically I’m probably just going to replace it with a NAS at some point.

    • N0x0n@lemmy.ml · 2 months ago

      A NAS is just a small desktop computer. If you have a motherboard, CPU, RAM, Ethernet, a case, and a bunch of SSDs/HDDs, you’re good to go.

      Just don’t bother buying something marketed as a NAS. It’s expensive and less modular than any desktop PC.

      Just my opinion.

  • lnxtx (xe/xem/xyr)@feddit.nl · 2 months ago

    Maybe not shit, but exotic at the time (2012): the first Raspberry Pi, a Model B with 512 MB of RAM, plus an external 40 GB 3.5″ HDD connected over USB 2.0.

    It was running ARM Arch BTW.

    Next, a cheap, second-hand Asus Eee Box mini desktop.
    A 32-bit Intel Atom (an N270, I think), max 1 GB of DDR2 RAM.
    Real metal under the plastic shell.
    It could even run without active cooling (I broke a fan connector).

      • lnxtx (xe/xem/xyr)@feddit.nl · 2 months ago

        Mainly telemetry, like temperature inside and outside.
        A script read the data and pushed it into an RRD, later PostgreSQL.
        lighttpd served static content, later PHP.

        Once it served as a bridge, between LAN and LTE USB modem.
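
        A minimal sketch of what such a telemetry push might look like (the sensor path and RRD filename are made up):

```shell
# Convert a sysfs-style thermal reading (millidegrees C) to degrees C.
millideg_to_deg() {
    awk "BEGIN { printf \"%.1f\", $1 / 1000 }"
}

# On a real board the raw value would come from something like
# /sys/class/thermal/thermal_zone0/temp; it is faked here.
raw=47200
echo "rrdtool update /var/lib/rrd/inside.rrd N:$(millideg_to_deg "$raw")"
```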

    • ThunderLegend@sh.itjust.works · 2 months ago

      This was my media server and Kodi player for like 3 years… I still have my Pi 1 lying around. Now I have a shitty Chinese desktop I built this year with a 3rd-gen i5 and 8 GB of RAM.

    • Dave@lemmy.nz · 2 months ago

      I have one of these that I use for Pi-hole. I bought it as soon as they were available. Didn’t realise that was 2012; it seemed earlier than that.

  • FrederikNJS@lemm.ee · 2 months ago

    My home Kubernetes cluster started out on a Core i7-920 with 8 GB of memory.

    Upgraded to 16 GB memory

    Upgraded to a Core i5-2400S

    Upgraded to a Core i7-3770

    Upgraded to 32 GB memory

    Recently Upgraded to a Core i5-7600K

    I think I’ll stay with that for quite a while…

    I did however add 2 Intel NUCs (gen 6 and gen 8) to the cluster to have a distributed control plane and some distributed storage.