I currently have a 1 TiB NVMe drive that has been hovering at 100 GiB left for the past couple months. I’ve kept it down by deleting a game every couple weeks, but I would like to play something sometime, and I’m running out of games to delete if I need more space.

That’s why I’ve been thinking about upgrading to a 2 TiB drive, but I just saw an interesting forum thread about LVM cache. The promise of having the storage capacity of an HDD with (usually) the speed of an SSD seems very appealing, but is it actually as good as it seems to be?

And if it is possible, which software should be used? LVM cache seems like a decent option, but I’ve seen people say it’s slow. bcache is also sometimes mentioned, but apparently that one can be unreliable at times.

Beyond that, what method should be used? The Arch Wiki page for bcache mentions several options. Some only seem to cache writes, while some aim to keep the HDD idle as long as possible.

Also, does anyone run a setup like this themselves?
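
For context, this is roughly what an lvmcache setup would look like. Device names are hypothetical and this is a sketch rather than a tested recipe; the cache mode choice is the part I'm unsure about:

```shell
# Hypothetical devices: /dev/sdb is the big HDD, /dev/nvme0n1p3 a spare SSD partition.
pvcreate /dev/sdb /dev/nvme0n1p3
vgcreate vg_games /dev/sdb /dev/nvme0n1p3

# Origin LV on the HDD, cache volume on the SSD partition
lvcreate -n games -l 100%PVS vg_games /dev/sdb
lvcreate -n gamescache -l 90%PVS vg_games /dev/nvme0n1p3

# Attach the cache. "writethrough" only accelerates reads but is safe if the
# SSD dies; "writeback" also caches writes, at the cost of data loss risk.
lvconvert --type cache --cachevol vg_games/gamescache \
          --cachemode writethrough vg_games/games
```
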

  • Rimu@piefed.social · 8 points · 10 hours ago

    Back when SSDs were expensive and tiny they used to sell hybrid drives which were a normal sized HDD with a few gigs of SSD cache built in. Very similar to your proposal. When I upgraded from a HDD to a hybrid it was like getting a new computer, almost as good as a real SSD would have been.

    I say go for it.

    If it’s all Steam games then you could just move games around as needed, no need for a fancy automatic solution.

  • Deckweiss@lemmy.world · 11 points · edited · 13 hours ago

    For many games, the loading times are not thaaaat different when comparing HDD vs SSD vs NVME. (Depends on how impatient you are tbh.) And it barely affects FPS.

    The biggest appeal of NVME/SSD for me is having a snappy OS.

    So I would put your rarely played games on a cheap, big HDD and keep your OS and a couple of the most frequent games on the NVME. (In the Steam interface you can easily move the games to a new drive)

    I find it to be a much simpler solution than setting up a multi tiered storage system.


    Some sources:

    https://www.legitreviews.com/game-load-time-benchmarking-shootout-six-ssds-one-hdd_204468

    https://www.phoronix.com/review/linux-gaming-disk/3

    https://www.pcgamer.com/anthem-load-times-tested-hdd-vs-ssd-vs-nvme/

  • schizo@forum.uncomfortable.business · 8 points · edited · 15 hours ago

    …depends what your use pattern is, but I doubt you’d enjoy it.

    The problem is the cached data will be fast, but the uncached will, well, be on a hard drive.

    If the cache is big enough to hold your OS and your working set, it’s great, but if your SSD is big enough to hold your OS and working set on its own, why are you doing this in the first place?

    If you don’t have enough cache drive to keep your commonly used data on it, then it’s going to absolutely perform worse than just buying another SSD.

    So I guess if this is ‘I keep my whole steam library installed, but only play 3 games at a time’ kinda usecase, it’ll probably work fine.

    For everything else, eh, I probably wouldn’t.

    Edit: a good usecase for this is more the ‘I have 800TB of data, but 99% of it is historical and the daily working set of it is just a couple hundred gigs’ on a NAS type thing.

    • tiddy@sh.itjust.works · 3 points · 14 hours ago

      I’m curious what type of workflow you have that consistently touches mainly the same data. I’m probably biased because I like to try software out, but outside of office use I can’t imagine a working set that would remain this closed.

      • schizo@forum.uncomfortable.business · 6 points · 14 hours ago

        It is mostly professional/office use where this makes sense. I’ve implemented this (well, a similar thing that does the same thing) for clients that want versioning and compliance.

        I’ve worked with/for a lot of places that keep everything because disks are cheap enough that they’ve decided it’s better to have a copy of every git version than not have one and need it some day.

        Or places that have compliance reasons to have to keep copies of every email, document, spreadsheet, picture and so on. You’ll almost never touch “old” data, but you have to hold on to it for a decade somewhere.

        It’s basically cold storage that can immediately pull the data into a fast cache if/when someone needs the older data, but otherwise it just sits there forever on a slow drive.

  • majestictechie@lemmy.fosshost.com · 4 points · 15 hours ago

    I used to run an HDD with an SSD cache. It’s deffo not as fast as a normal SSD. NVMe storage is also very cheap these days; you can get a 2 TB NVMe drive for the same price as a SATA SSD.

    In all honesty, I’d just keep things simple and go for an SSD.

  • Eager Eagle@lemmy.world · 4 points · 15 hours ago

    How much storage do you need? If it’s just 2 TB - or if you’re just future-proofing and don’t anticipate needing more than 2 TB within the next few years - I’d pick a single SSD. Even the cheaper ones will give you better performance than an SSD + HDD combo.

    • qaz@lemmy.world (OP) · 1 point · 13 hours ago

      Thanks for the advice, I’m probably going to need about 2-3 TiB. I guess I’ll just need to figure out a way to offload some more data to my server to keep it under 2 TiB.

  • tiddy@sh.itjust.works · 2 points · 14 hours ago

    Currently have two 1 TB NVMe drives in front of around 6 TB of HDDs. Works really nicely for keeping a personal Steam cache on the HDDs in case I pick up an old game with friends, or want to play a large game but only use part of it (e.g. CoD zombies).

    It’s also super helpful for shared filesystems (Syncthing or NFS), as it’s able to support peripheral computers a lot more dynamically than I’d ever care to personally configure. (If that’s unclear: I use it for a Jellyfin server, a Crafty instance, and some coding projects, things that see heavy use in bursts but tend to have a short attention lifespan.)

    Using bcachefs with backups myself, and after a couple of months my biggest worry is the kernel drama more than the fs itself.
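
    In case it helps: a tiered bcachefs setup along these lines is a single format command. Device names here are hypothetical, so treat it as a sketch rather than a recipe:

    ```shell
    # Hypothetical devices: NVMe partition as the fast tier, HDD as the slow tier.
    bcachefs format \
        --label=ssd.ssd1 --discard /dev/nvme0n1p4 \
        --label=hdd.hdd1 /dev/sdc \
        --foreground_target=ssd \
        --promote_target=ssd \
        --background_target=hdd

    # Mount the multi-device filesystem as one pool
    mount -t bcachefs /dev/nvme0n1p4:/dev/sdc /mnt/pool
    ```

    Writes land on the SSD first (foreground target), hot reads get promoted to it, and data migrates to the HDD in the background, which is what keeps the HDD mostly idle.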

  • signofzeta@lemmygrad.ml · 1 point · 14 hours ago

    Apple tried it a decade ago. It was called the Fusion Drive. It performed about as well as you’d expect. macOS saw the combined storage, but the hardware and OS managed the pair as a single unit.

    If there’s a good tiered storage daemon on your OS of choice, go for it!