A year ago I built a NAS to reduce my reliance on cloud services, and set up an arr stack. I went with TrueNAS Scale, which was on Bluefin at the time. In the past 12 months, TrueNAS Scale has been through FOUR major OS versions, with a fifth already announced. At least one of those involved a release train switch, so despite diligently checking for updates in the dashboard, I was left in the dust with an obsolete OS and didn’t find out until it was already a huge hassle to upgrade.

I’ve been really happy with the utility and benefit of having this tool, but holy smokes how is anybody supposed to keep up with all of this? This is far from my only hobby, and I simply do not have the time, patience, or interest for a constant race to keep up with vetting new release versions and fixing what breaks every 3 weeks. I have enough tinkering hobbies as it is.

On top of that, there’s the whole blow up with TrueCharts, which has also left me with an entire suite of obsolete albatrosses around my NAS that I need to deal with. Am I still waiting for them to figure out an upgrade path? I don’t even know anymore.

Sorry for the rant, but I guess what I’m looking for is: how do you keep up with the constant maintenance and updates, and where do I go from here, in February 2025, with a system running Bluefin 22.12, a 32TB ZFS pool (RAIDZ1) that has to remain intact, and a handful of TrueCharts apps that I don’t want to lose the data from (e.g. Jellyfin configs/watch history)?

  • MangoPenguin@lemmy.blahaj.zone · 15 days ago

    I run Proxmox on the host with Docker in a VM for 90% of my stuff. OS updates I do maybe every 6 months; I’ve done one major version upgrade on Proxmox with no issues at all.

    The docker containers auto-update via Komodo, and nothing really ever breaks anymore other than the occasional container error that needs a simple fix.

    Everything important is backed up nightly, both to Proxmox Backup Server and to Backblaze B2 with restic.
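
    For reference, restic’s Backblaze B2 backend only needs a few environment variables; a minimal sketch (bucket name, paths, and retention are placeholders, not necessarily the exact setup):

    ```sh
    # Credentials come from a B2 application key; repo format is b2:<bucket>:<path>
    export B2_ACCOUNT_ID=<key-id>
    export B2_ACCOUNT_KEY=<application-key>
    export RESTIC_REPOSITORY=b2:my-nas-backups:nightly
    export RESTIC_PASSWORD_FILE=/root/.restic-pass

    restic init                                  # once, to create the repository
    restic backup /srv/appdata /srv/documents    # run nightly from cron/systemd
    restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
    ```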

    • Pika@sh.itjust.works · 15 days ago

      I’ve never heard of Komodo. I’ve heard a lot about Watchtower, but I found it more annoying to set up due to its labeling system. Is there any added benefit to Komodo over a standard Watchtower setup?

      I haven’t set up either of them, but my main concern is having a breaking change be automatically updated.

      • MangoPenguin@lemmy.blahaj.zone · 15 days ago

        Komodo is a full management setup, similar to Portainer, Dockge, etc… It works reasonably well.

        Watchtower doesn’t require any labeling unless you want to exclude a container.

        “but my main concern is having a breaking change be automatically updated”

        Pinning to a major version usually solves this; i.e., instead of using postgres:latest, use postgres:14, which will give you updates only from version 14.
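
        In compose terms the pin is just the image tag; a quick sketch (the service and paths are made up for illustration):

        ```yaml
        # docker-compose.yml: pin the image to a major version instead of :latest
        services:
          db:
            image: postgres:14          # follows 14.x patch releases, never jumps to 15
            volumes:
              - ./pgdata:/var/lib/postgresql/data
        # an auto-updater (or a plain re-pull) will then only ever fetch new 14.x builds
        ```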

        But also have backups in place; worst case, you just roll back to before it updated.

        • Pika@sh.itjust.works · 15 days ago

          Oh ok, thank you. I already use Portainer for my existing setup, so it wouldn’t make much sense to fully rework it. I hadn’t thought of version pinning though, so I may implement that instead; it makes sense that “breaking changes” wouldn’t happen within the same major version.

  • Darkassassin07@lemmy.ca · 17 days ago

    OS updates I only bother with every 6–12 months, though I also use Debian, which doesn’t push major updates all that regularly.

    As far as software goes, pretty much everything is in a Docker container, with Watchtower automatically pulling new updates nightly at 4am. It sends me email notifications, so it’ll tell me if an update fails; combined with uptime-kuma notifying me if any of my services are unavailable for whatever reason.
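
    If it helps, Watchtower’s schedule and email alerts are all environment variables; a rough sketch (SMTP host and addresses are placeholders — check the Watchtower docs for the full option list):

    ```yaml
    # compose sketch for containrrr/watchtower: one nightly run at 04:00 + email alerts
    services:
      watchtower:
        image: containrrr/watchtower
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
        environment:
          WATCHTOWER_SCHEDULE: "0 0 4 * * *"      # 6-field cron (with seconds)
          WATCHTOWER_CLEANUP: "true"              # prune old images after updating
          WATCHTOWER_NOTIFICATIONS: email
          WATCHTOWER_NOTIFICATION_EMAIL_FROM: watchtower@example.com
          WATCHTOWER_NOTIFICATION_EMAIL_TO: me@example.com
          WATCHTOWER_NOTIFICATION_EMAIL_SERVER: smtp.example.com
        restart: unless-stopped
    ```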

    The rest I’ll usually do with the OS updates. Just because an update was released doesn’t mean you’ve gotta drop everything and install it right this moment.

  • drkt@scribe.disroot.org · 17 days ago

    For one, I don’t use software that updates constantly. If I had to log in to a container more than once a year to fix something, I’d figure out something else. My NAS is just hard drives on a Debian machine.

    Everything I use runs either Debian or some form of BSD.

    • sugar_in_your_tea@sh.itjust.works · 17 days ago

      Same, but openSUSE. Tumbleweed on my desktop and laptop, Leap on my servers.

      And yeah, if I need to babysit something, I’ll use an alternative. I’ll upgrade when I’m ready to, which is usually over holidays when I’m bored and looking for a project.

  • ragebutt@lemmy.dbzer0.com · 17 days ago

    Is it exposed to the internet?

    Mine is local only, so I’m not as diligent with updates. I push them like once every 2–3 weeks. Some containers automatically update, but some don’t, because in the past that has broken associated scripts.

  • irish_link@lemmy.world · 17 days ago

    Similar to the others although I have messed with Ubuntu, CentOS, Fedora, and even a few others for like a day or two each.

    At the moment I am using Fedora. My drives are in RAID and my main storage has all the data and the Docker config directories.

    Using Docker for everything, Watchtower for updates, and Portainer to manage the containers with a GUI. All the containers are pointed at /mnt/drive/allMyData. In there are my data folders: shows, movies, Plex configs for recording over the air, ebooks, documents, etc.

    Mainly I set it up this way so I can easily change distros if I wanted to and have all my services back up in an hour or so.

    I started a text file that contains the command lines I have used to start all of my docker containers. This way, if I need to, I can reference it and use the exact same commands, with volumes mapped to the same folders, and be back up and running in a few clicks. No need to back up the container if all the data in it is kept in folders in my main data directory.
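
    Something like the following, as a sketch (the folder names under /mnt/drive/allMyData are illustrative, not the exact layout):

    ```sh
    # containers.txt: one docker run per service, all state under /mnt/drive/allMyData
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /mnt/drive/allMyData/jellyfin/config:/config \
      -v /mnt/drive/allMyData/Shows:/data/shows \
      -v /mnt/drive/allMyData/Movies:/data/movies \
      --restart unless-stopped \
      jellyfin/jellyfin
    ```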

    However, I am running a separate hardware RAID setup underneath the OS. This way all my data stays safe as a separate volume.

  • catloaf@lemm.ee · 17 days ago

    I have automatic updates on everything. If it breaks, I fix it when I have time. If I don’t, it remains broken.

    I could also just not do updates, but I like new features.

  • Shimitar@downonthestreet.eu · 17 days ago

    Gentoo.

    Daily automatic updates of the OS.

    Services and containers are updated at random when I have time.

    It’s been many years; I have fun doing it.

    Not a chore.

  • Matt The Horwood@lemmy.horwood.cloud · 17 days ago

    First off, backups: the configs and any user data that you can’t torrent, should the inevitable happen.

    Then set time aside to do updates; I spend Wednesday evenings updating and improving my setup.

    Then find a way to track update announcements; I use both an RSS reader and newreleases.io to know when something I run gets an update.
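
    For anything hosted on GitHub, each repo already exposes a releases feed you can drop straight into an RSS reader, for example:

    ```
    https://github.com/jellyfin/jellyfin/releases.atom
    https://github.com/containrrr/watchtower/releases.atom
    ```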

  • DontTakeMySky@lemmy.world · 17 days ago

    I run Debian on most of my systems and run all of my services in docker (with rare exceptions for node_exporter or stable core tools). My base systems get automatic security upgrades, and then I’ll manually check in every few weeks whenever I feel like it.

    My services in Docker are version-locked to a specific major version (when there’s a tag available), so I can usually re-pull to get minor version updates freely without breaking anything. My few more finicky services only get manual upgrades from me every 6 months or so.

    I usually stay on an OS version for as long as I can, and to that end I stick to LTS versions with long support windows.

    4 major versions in 12mo is…a lot. Especially if those include breaking changes for you. Yikes

  • InnerScientist@lemmy.world · 17 days ago

    I have RSS feeds for my main service updates so I know what new features I’m getting. The services mostly run in Podman containers and update automatically each Monday. I also have daily backups (timed to run just before the update on Monday) in case anything does break.
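
    For anyone on plain systemd rather than NixOS, the Podman side of this is roughly as follows (a sketch; the image and the Monday 04:00 time are just examples):

    ```sh
    # Opt a container into auto-updates; the label tells podman to check the registry.
    # (podman auto-update restarts containers through their systemd units, so the
    # container also needs a unit, e.g. via `podman generate systemd` or a quadlet.)
    podman run -d --name jellyfin \
      --label io.containers.autoupdate=registry \
      docker.io/jellyfin/jellyfin

    # Move the bundled update timer from its default to Monday mornings
    sudo systemctl edit podman-auto-update.timer
    #   [Timer]
    #   OnCalendar=
    #   OnCalendar=Mon *-*-* 04:00:00
    sudo systemctl enable --now podman-auto-update.timer
    ```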

    If it breaks, I fix it depending on how much I want/need it; mostly it’s a matter of half an hour. With my current NixOS/Podman system I haven’t yet needed to fix anything this year, so it breaks infrequently.

    Also why are you using Kubernetes on a single host if you want minimal maintenance? XD

    My recommendation is to switch to just managing containers. You should be able to export the volumes out of Kubernetes and import them as normal volumes; as long as they’re mounted in the right place, you keep your data, and if it doesn’t work, just try again. It’s not like you need to destroy the current system to slowly replace it.
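
    Roughly, and heavily hedged since I can’t see your pool layout (the namespace, pod name, and paths below are placeholders; on TrueNAS SCALE the apps typically live in ix-<appname> namespaces under the built-in k3s):

    ```sh
    # Find the app's pod, copy its config out of the k3s volume, then run it as a
    # plain container against that directory (names/paths are placeholders).
    sudo k3s kubectl get pods -n ix-jellyfin
    sudo k3s kubectl cp ix-jellyfin/<jellyfin-pod>:/config /mnt/pool/appdata/jellyfin

    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /mnt/pool/appdata/jellyfin:/config \
      -v /mnt/pool/media:/media \
      jellyfin/jellyfin
    ```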

    Edit: I also recommend to update and reboot frequently, this stops updates and unstable configurations from piling up.

  • Azzu@lemm.ee · 17 days ago

    I’ve got backups. Haven’t updated or looked at my server in months. If I’m ever compromised by missing security updates, I just load a backup and regenerate all keys.

    I don’t put any critical data on public facing servers.

  • Avid Amoeba@lemmy.ca · 17 days ago

    Use Debian LTS or Ubuntu LTS (10 years support with free Ubuntu Pro). Turn on automatic unattended updates. Upgrade OS when you’re bored one of those years.
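
    On Debian/Ubuntu that’s essentially the standard unattended-upgrades package:

    ```sh
    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # answers the enable prompt for you

    # equivalently, /etc/apt/apt.conf.d/20auto-upgrades should contain:
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";
    ```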

    Keywords:

    • Debian
    • Ubuntu
    • LTS
    • ZFS
    • Docker (compose)

  • 31337@sh.itjust.works · 17 days ago

    If it works, I don’t update unless I’m bored or something. I also spread things out over multiple machines, so there’s less chance of something happening like what you describe with the charts feature going away. My NAS is pretty much just a NAS now.

    You can probably back up your configs/data, upgrade, then deploy Jellyfin again, restore, and reconfigure. You should probably back up the data on your ZFS pool too. But I recently updated to the latest TrueNAS Scale from a ~5-year-old FreeBSD version of TrueNAS and the pools still worked fine (none of the “apps” or jails worked, obviously). The upgrade process even ported my service configurations over. I didn’t care about much of the data in the pools, so I only backed up the most important stuff.

    • Onomatopoeia@lemmy.cafe · 17 days ago

      “I don’t update unless I’m bored”

      Hahahaha, one of my kind!

      My upgrades usually occur because I’m setting up a new system anyway; that way my effort is building for tomorrow in addition to the upgrades, and I get testing time to ensure the changeover is pretty smooth.

  • hperrin@lemmy.ca · 17 days ago

    You might want to think about running a “stable” or “LTS” OS and spinning things up in Docker instead. That way you only have to do OS-level updates very rarely.
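
    For example, a Debian/Ubuntu LTS host that does nothing but run compose stacks, with each service keeping its state in a bind mount so OS reinstalls and upgrades don’t touch it (the paths and the Jellyfin service are just a sketch):

    ```yaml
    services:
      jellyfin:
        image: jellyfin/jellyfin
        ports:
          - "8096:8096"
        volumes:
          - /srv/appdata/jellyfin:/config   # survives OS reinstalls/upgrades
          - /srv/media:/media
        restart: unless-stopped
    ```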

    • HeyJoe@lemmy.world · 17 days ago

      I learned this the hard way as well… I did a big OS update on mine once and it broke almost every application running on it. Docker still worked perfectly. I transferred everything I could to Docker after that.

    • Zink@programming.dev · 16 days ago

      Thanks for this. I’ve recently been recreating my home server on good hardware and have been thinking it’s time to jump into selfhosting more stuff. I’ve used Docker a bit, so I guess I’ll have to do it the right way. It’s always good to know what choices now will avoid future issues.