I'm planning to build a home server with either 2 (RAID1) or 3 (RAID5) HDDs as bulk storage and 1 SSD as a bcache caching device.

The question is: what filesystem should I use on the HDDs? I'm thinking of ext4 or XFS, as I've heard btrfs isn't recommended for this kind of setup, though I'm not sure why.

Do you all have any advice on which filesystem to use, or other tips?
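For reference, the bcache part of my plan would look something like this, a rough sketch using bcache-tools, with placeholder device names for whatever my array and SSD end up being:

```shell
# All device names here are placeholders for my eventual setup.
# Format the SSD as the cache device and the array as the backing
# device in one step; this also attaches them to each other:
make-bcache -C /dev/sdX -B /dev/md0

# The combined device appears as /dev/bcache0. Whatever filesystem
# I pick goes there, not directly on the array:
mkfs.ext4 /dev/bcache0

# Writeback caching is optional (the default is writethrough):
echo writeback > /sys/block/bcache0/bcache/cache_mode
```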

  • Svinhufvud@sopuli.xyz (OP) · 11 days ago

    Could you elaborate on btrfs on top of md RAID?

    That seems like the most likely solution for me.

    • Limonene@lemmy.world · 10 days ago

      Sure. First you set up a RAID5/6 array with mdadm. This is purely software RAID, built into the Linux kernel; it doesn't require any hardware RAID controller. With 3–4 drives RAID5 is probably best, and with 5+ drives RAID6 is probably best.

      If your three blank partitions are /dev/sdb1, /dev/sdc1, and /dev/sdd1, run this:

      mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

      This will create a block device called /dev/md0 that you can use as if it were a single large hard drive.

      mkfs.btrfs /dev/md0

      That will make the filesystem on the block device.

      mkdir /mnt/bigraid
      mount /dev/md0 /mnt/bigraid
      

      This creates a mount point and mounts the filesystem.

      To get it to mount every time you boot, add an entry for this filesystem to /etc/fstab.
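      For example, a minimal sketch of the fstab step (the UUID below is a placeholder; use the one blkid prints for your array):

      ```shell
      # Find the filesystem's UUID, which is more robust than /dev/md0
      # in case the array name changes across reboots:
      blkid /dev/md0

      # Then add a line like this to /etc/fstab (placeholder UUID):
      # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/bigraid  btrfs  defaults  0  0

      # Verify the entry works without rebooting:
      mount -a
      ```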

      • Svinhufvud@sopuli.xyz (OP) · 10 days ago

        Do you need to do any maintenance to keep the data in the array intact?

        I've read about btrfs scrub commands, md checks, and such, but I'm not sure how often to run them or what they actually do.

        • Limonene@lemmy.world · 8 days ago

          On my system, the RAID arrays seem to do periodic data scrubbing automatically. Maybe it's something that's part of Debian (the mdadm package ships a monthly checkarray cron job), or maybe it's a default kernel setting. I don't think it helps much with data integrity – I think it helps more just by ensuring the continued functionality of the drives.

          While a check is running, you can run cat /proc/mdstat to see its progress.

          That command will also show you if a drive is failing, so that you can replace it.
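          If you want to trigger the maintenance by hand, a sketch of both kinds of check, assuming the array is /dev/md0 mounted at /mnt/bigraid as in the commands above:

          ```shell
          # Start an md consistency check: reads every drive and
          # compares the parity blocks against the data:
          echo check > /sys/block/md0/md/sync_action

          # Watch the check's progress:
          cat /proc/mdstat

          # Start a btrfs scrub, which verifies the filesystem's own
          # data and metadata checksums, then check its status:
          btrfs scrub start /mnt/bigraid
          btrfs scrub status /mnt/bigraid
          ```

          The two checks are complementary: the md check only verifies that the drives agree with each other, while the btrfs scrub can tell whether the data itself matches its checksums.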