• wreckedcarzz@lemmy.world · 7 days ago

      I have a ThinkServer with a similar Xeon, running proxmox -> Debian, so I was looking like “huh, interesting” until I saw the internals.

      Fuuuuuuuuuuuuuuuuuck all that. Damn it Dell, quit your weird bullshit. It’s just a motherboard, cpu, cooler, and ram. Slap in intake and exhaust fans. Figure it the fuck out.

      E: and it better have a goddamn standard psu, too. Fuck yourself, Dell. I’ve seen your shit.

      • NaibofTabr@infosec.pub · 6 days ago

        Hmm, I don’t have direct experience with ThinkServers, but what I see on eBay looks like standard ATX hardware… which is not really what you want in a server.

        The Dell motherboard has dual CPU sockets and 8 RAM slots. The PSUs are not the common ATX desktop format because there are 2 of them and they are hot swappable. This is basically a rack server repacked into a desktop tower case, not an ATX desktop with a server CPU socket.

      • Benjaben@lemmy.world · 7 days ago

        The one saving grace is that their one-off custom damn shit always feels well designed, and they move a lotta units (which helps with repairs when everything is GD custom). Dunno if that’s changed in recent years.

        With that said, I usually avoid them for personal use for the same reason: why have a desktop if you don’t get the benefit of parts compatibility?!

    • Evil_Shrubbery@lemm.ee · 7 days ago

      Isn’t that a bit like buying an old truck instead of a year old Miata?

      Afaik those CPUs use so much juice when idling … sure, you don’t get all them lanes or ECC, but a PC at the same price with a few-year-old CPU outclasses that CPU by a lot & at a fraction of the running cost (also quietly).

      Just something to keep in mind as an alternative, especially when you don’t intend to fill the whole PCIe bus (several users with several intensive tasks that benefit from a wider bus to RAM & PCIe, even with a slow CPU).
      Ok, and you miss out on some fancy admin stuff, but … it’s just for home use …

      • NaibofTabr@infosec.pub · 6 days ago

        I always recommend buying enterprise grade hardware for this type of thing, for two reasons:

        1. Consumer-grade hardware is just that - it’s not built for long-term, constant workloads (that is, server workloads). It’s not built for redundancy. The Dell PowerEdge has hot-swappable drive bays, a hardware RAID controller, dual CPU sockets, 8 RAM slots, dual built-in NICs, the iDRAC interface, and redundant hot-swappable PSUs. It’s designed to be on all the time, reliably, and can be remotely managed.

        2. For a lot of people who are interested in this, a homelab is a path into a technology career. Working with enterprise hardware is better experience.

        Consumer CPUs won’t perform server tasks like server CPUs. If you want to run a server, you want hardware that’s built for server workloads - stability, reliability, redundancy.

        So I guess yes, it is like buying an old truck? Because you want to do work, not go fast.

      • lud@lemm.ee · 7 days ago

        Yeah, server hardware isn’t the most efficient if you want to save power. It’s probably better to get a NUC or something.

        With that said, my old Dell PowerEdge R730 only uses around 84 watts (running around 5 VMs that are doing pretty much nothing). The server runs Proxmox and has 128 GB of RAM, two Xeon E5-2667 v4 CPUs, 4 old used 1 TB HDDs I bought for cheap, and 4 old used 128 GB SATA SSDs I also bought for cheap (all storage is 2.5″ drives).

        All I had to do was change a few BIOS settings to prioritize efficiency over performance. 84 watts is obviously still not great but it’s not that bad.
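To put that 84 watts in perspective, here's a back-of-the-envelope sketch of the yearly running cost (the electricity rate is an assumed example value, not from the thread - plug in your own tariff):

```python
# Rough yearly energy cost of an always-on server.
# The price-per-kWh below is an assumed example, not a figure from the thread.

def yearly_cost(watts: float, price_per_kwh: float) -> float:
    """Cost of running a constant load for one year (8760 hours)."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

idle_draw_w = 84   # the R730's idle draw reported above
rate = 0.30        # assumed example electricity price per kWh

print(f"~{yearly_cost(idle_draw_w, rate):.2f} per year")  # ~220.75 per year
```

So even an "efficient" 84 W idle adds up over a year of 24/7 operation, which is why the NUC comparison keeps coming up.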

        • Evil_Shrubbery@lemm.ee · 6 days ago

          Sounds nice, but yes, uses quite a bit of power.

          I should measure mine - I have a Ryzen 5900 (24t, 64 MB … some 20k Cinebench score) as the main, and a Core 12700 (16+4t, 12 MB).
          (And Intel gen 7 and gen 2 at my parents’. All of them proxmoxed.)

          Never ever managed to bottleneck anything on them, not really, but got them super cheap used.

          Buying anything server/enterprise that powerful would cost me a lot of moneys. And it’d prob have two CPUs, which doubles a lot of power-hungry bits.

          • lud@lemm.ee · 6 days ago

            The only reason that I have measured my server is that it has that feature built into the iDRAC. I have been thinking of buying an external power meter for years but have never bothered to do that.

            Luckily I got my server for free from work. It was part of an old SAN, so it came with 4 dual 16 Gbit Fibre Channel cards and 2 dual 10 gigabit Ethernet cards. Before I took those out of the server, it consumed around 150 watts at idle, which is crazy.
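For anyone who wants that same power reading without clicking through the iDRAC web UI: iDRAC also speaks IPMI, so something like the following should work over the network. This is a sketch, not a tested recipe - the hostname and credentials are placeholders, and IPMI-over-LAN has to be enabled on the iDRAC first.

```shell
# Query the current power draw from the iDRAC over the network.
# <idrac-host>, <user>, <password> are placeholders for your own setup;
# requires ipmitool installed locally and IPMI-over-LAN enabled on the iDRAC.
ipmitool -I lanplus -H <idrac-host> -U <user> -P <password> dcmi power reading
```

The `dcmi power reading` output includes instantaneous, minimum, maximum, and average draw, which makes it easy to spot things like those FC cards adding tens of watts at idle.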