About a year ago I switched to ZFS for Proxmox so that I wouldn’t be running a technology preview.
Btrfs gave me no issues for years, and I even replaced a dying disk without trouble. I use RAID 1 for my Proxmox machines. Anyway, I moved to ZFS and it has been a less than ideal experience. The separate kernel modules mean that I can’t downgrade the kernel, and the performance on my hardware is abysmal: I get only around 50-100 MB/s versus the several hundred I would get with btrfs.
Any reason I shouldn’t go back to btrfs? There seems to be a community fear of btrfs eating data or throwing unexplainable errors. That is sad to hear, as btrfs has had plenty of time to mature over the last 8 years. I would never have considered it 5-6 years ago, but now it seems like a solid choice.
Anyone else pondering a switch, or already using btrfs?
The question is: how are you getting such bad performance with ZFS?
I just tried reading a large file and got an uncached 280 MB/s from two mirrored HDDs.
The fourth run (obviously cached) gave me over 3.8 GB/s.
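For anyone who wants to reproduce this, here is a minimal sketch of the test (the path `/tank/bench.bin` and the 50 GB size are placeholders; adjust to your pool and RAM). One ZFS-specific gotcha: `echo 3 > /proc/sys/vm/drop_caches` does not evict the ZFS ARC, so either use a test file larger than the ARC (roughly half of RAM by default) or temporarily set `primarycache=metadata` on the dataset:

```shell
# write a test file larger than the ARC (~50 GB here; scale to your RAM)
dd if=/dev/urandom of=/tank/bench.bin bs=1M count=51200 status=progress
sync
# read it back; with the file larger than the ARC this approximates uncached throughput
dd if=/tank/bench.bin of=/dev/null bs=1M status=progress
rm /tank/bench.bin
```

Repeating the read a few times shows the cached number once the file (or a smaller one) fits in ARC.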
I have never heard of anyone getting those speeds without dedicated high-end hardware.
Also, writes will always be your bottleneck.
This is an old PC (Intel i7-3770K) with 2 HDDs (16 TB) attached to the onboard SATA3 controller, 16 GB RAM and 1 SSD (120 GB). Nothing special. And it’s quite busy, because it’s my home server running a VM and containers.
I’m seeing very similar speeds on my two-HDD RAID 1. The computer has an AMD 8500G CPU, but the load from ZFS is minimal. Reading/writing a 50 GB file generated from /dev/urandom (larger than the cache) gives me:
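One caveat with this method: /dev/urandom itself can top out at only a few hundred MB/s depending on the CPU, so on the write side you may be measuring the kernel RNG rather than the pool. A quick sanity check (sketch) is to time the RNG alone:

```shell
# measure raw /dev/urandom throughput; if this number is close to your
# "write" result, the RNG, not the pool, is the bottleneck
dd if=/dev/urandom of=/dev/null bs=1M count=1024 status=progress
```

If the RNG is the limit, pre-generate the test file once and time only the re-reads, or copy an existing large file instead.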
What’s your setup?
Maybe I am CPU-bottlenecked. I have a mix of i5-8500 and i7-6700K machines.
The drives are a mix, but I get almost the same performance across machines.
I get similar speeds on a TrueNAS box that I installed on a simple i3-8100.
How much RAM, and what is the drive size?
I suspect this could also be an issue with SSDs. I have seen a lot of posts around describing similar performance on SSDs.