I have a ZFS RAIDZ2 array made of 6x 2TB disks with power-on hours between 40,000 and 70,000. This is used just for data storage of photos and videos, not OS drives. Part of me is a bit concerned at those hours considering they’re a right old mix of desktop drives and old WD Reds. I keep them on 24/7 so they’re not too stressed in terms of power cycles, but they have in the past been through a few RAID5 rebuilds.
Considering swapping to 2x ‘refurbed’ 12TB enterprise drives and running ZFS RAIDZ1. So even though they’d have a decent amount of hours on them, they’d be better quality drives, and fewer disks means less chance of any one failing (I have good backups).
I don’t feel like staying with my current setup will be worth it the next time one of my current drives dies, so I may as well change over now before it happens?
Also, the 6x disks I have at the moment are really crammed into my case in a hideous way, so from an aesthetic POV (not that I can actually see inside the solid case in a rack in the garage), it’ll be nicer.
Don’t fill a copy-on-write fs more than about 80%; it really slows down and struggles because new data is written to a new place before the old stuff is returned to the pool. Just sayin’.
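If you want something that nags you about that, a minimal sketch like this (assuming the `zpool` binary is on the PATH and the script has permission to run it) will flag any pool over the threshold:

```python
#!/usr/bin/env python3
"""Warn when any ZFS pool is above a capacity threshold (illustrative sketch)."""
import subprocess

THRESHOLD = 80  # percent; the rough point where CoW pools start to struggle

# -H: no header, tab-separated output; -o: only the properties we need
out = subprocess.run(
    ["zpool", "list", "-H", "-o", "name,capacity"],
    check=True, capture_output=True, text=True,
).stdout

for line in out.strip().splitlines():
    name, capacity = line.split("\t")
    used = int(capacity.rstrip("%"))
    if used >= THRESHOLD:
        print(f"WARNING: pool '{name}' is {used}% full (>= {THRESHOLD}%)")
```

Run it from cron now and then (or just eyeball `zpool list` occasionally) and you’ll know before performance falls off a cliff.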
I wouldn’t worry if you’re backed up. The SMART values and daemon will tell you if one is about to die.
6 years old and running perfectly fine.
I have 5 WD Red disks in a RAIDZ1 config. In the first year I was experimenting with the sleep or spindown options. Then I read that drives live longer if they run constantly. Now they are spinning 24/7.
The additional SSD has broken and been replaced 2x during these years.
Yeah flat out spinning is definitely better for reliability.
About 10k power on hours. That’s honestly a little surprising since I’ve had them for 7 years or so, but it’s only been on 24/7 for the last year or two (used to just turn on when watching a movie or something).
From those hours, I should expect a few more trouble free years.
My OS drive is >30k hours since it used to be my desktop boot drive (tiny 120GB SATA SSD). I’ve been thinking about upgrading to NVMe, since my desktop NVMe drive is getting a little full (500GB), and it could also make for a nice cache. It’s nowhere near dying though, with ~16TBW, so I’m in no hurry.
Mine are 3x 27k and 1x 47k. I just started replacing them… not because they’re old or have any issues, just because they’re becoming too small. Going from 4TB to 8TB disks and transferring the old ones to an external RAID enclosure for backups.
Actually brings up a question I had… what do people think about refurbished drives for a NAS?
I just went all refurbished on my new drives. Time will tell. Oldest one has about 8 months runtime on it.
I went with 5x recertified Seagate Exos 20TB, and one recertified IronWolf Pro 20TB.
Nice, we’ll all look out for an update in a year!
I try to mix brands and lots (buy a few from one retailer and some from another). I used to work for a storage/NAS company and we had many incidents where we’d fill a 12- or 24-drive RAID with drives straight from the same order and have multiple drives die within hours of each other, which usually isn’t enough time for replacement/resilvering.
Yep, seen a similar thing with servers…
A few years ago I built up a system with ~20 servers. Powered them all up and did all the RAID initialisation (RAID5 across 6-8 disks per server, IIRC).
One server basically needed all its disks replacing, and some of the others needed a disk or two replaced - within a month!
Since replacing those disks and rebuilding all those arrays, I’m happy to build a NAS/server, let it bed in for a while, and if nothing fails I’ll just keep powering my NAS up and down as needed and run the drives until they die…
I recently decommissioned my old PowerEdge T620. Beast of a thing, 5U, heavy af. It had 8x 10TB drives and was the primary media server.
Now that it is replaced I bought 2x Synology RS822+ and filled them with the old disks. Using SHR2. They are mixed brands bought at different times so I’ve made sure each NAS has a mix of disks.
Lowest is 33k hours, highest is 83k.
I’m glad you asked because I’ve sort of been meaning to look into that.
I have 4 8TB drives that have ~64,000 hours (7.3 years) powered on.
I have 2 10TB drives that have ~51,000 hours (5.8 years) powered on.
I have 2 8TB drives that have ~16,800 hours (1.9 years) powered on.

Those 8 drives make up my ZFS pool. Eventually I want to ditch them all and create a new pool with fewer drives. I’m finding that 45TB is overkill, even when storing lots of media. The most data I’ve had is 20TB and it was a bit overwhelming to keep track of it all, even with the *arrs doing the work.
To rebuild it with 4 x 16TB drives, I’d have half as many drives, reducing power consumption. It’d cost about $1300. With double parity I’d have 27TB usable. That’s the downside to larger drives, having double parity costs more.
To rebuild it with 2 x 24TB drives, I’d have 1/4 as many drives, reducing power consumption even more. It’d cost about $960. I would only have single parity with that setup, and only 21TB usable.
Increasing to 3 x 24TB drives, the cost goes to $1437 with the only benefit being double parity. Increasing to 4 x 24TB gives double parity, 41TB, and costs almost $2k. That would be overkill.
Eventually I’ll have to decide which road to go down. I think I’d be comfortable with single parity, so 2 very large drives might be my next move, since my price per kWh is really high, around $0.33.
Edit: one last option, and a really good one, is to keep the 10TB drives, ditch all of the 8TB drives, and add 2 more 10TB drives. That would only cost $400 and leave me with 4 x 10TB drives. Double parity would give me 17TB. I’ll have to keep an eye on things to make sure it doesn’t get full of junk, but I have a pretty good handle on that sort of thing now.
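For what it’s worth, here’s a quick back-of-envelope comparison of those options. Purely illustrative: the cost and usable-capacity numbers are your own estimates from above, and the 3 x 24TB option is left out since you didn’t give a usable figure for it.

```python
# Back-of-envelope cost comparison of the rebuild options above.
# Costs and usable-TB figures are the estimates from the comment
# ("almost $2k" rounded to $2000); 3 x 24TB omitted (no usable figure given).
options = [
    # (description,                     cost_usd, usable_tb, parity)
    ("4 x 16TB RAIDZ2",                     1300,        27, "double"),
    ("2 x 24TB single parity",               960,        21, "single"),
    ("4 x 24TB RAIDZ2",                     2000,        41, "double"),
    ("4 x 10TB RAIDZ2 (keep 2, buy 2)",      400,        17, "double"),
]

for desc, cost, usable, parity in options:
    print(f"{desc:33} ${cost:>4}  {usable:>2} TB usable  {parity} parity  "
          f"${cost / usable:>3.0f} per usable TB")
```

By that rough measure the 4 x 10TB route is by far the cheapest per usable TB, though of course it only works out that way because you already own two of the drives.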
Second hand so I’m sure ancient.
I tend to buy two at a time. Some are months old, others three years old.
Professionally, I have seen drives over 10 years always on at low utilization without issue. (The data was easily replaceable.)
crammed into my case in a hideous way
Heat is a killer. Check them regularly.
They’re in a drafty garage. This time of year I keep them spinning to stop them freezing 🤣
As someone who runs 3 large arrays with 8TB, 16TB, and 21TB drives respectively, know that:
- RAIDZ1 will cause tons of fear when a disk fails if you’re used to Z2. Don’t change.
- When a disk goes, the larger the disk, the slower the rebuild, and the more taxing it is on the other disks. With Z1, if another fails during the rebuild, you’re SOL.
Fewer disks is simpler, but more disks is safer. 6 disks is the perfect-sized array IMO. If you don’t need more space, I’d buy a 2TB hot spare and call it a day. But if space is a concern, Z2 with 4 disks.
Edit: Those three arrays mirror each other in different locations, and the fear was still there when the Z1 had an issue. Mostly due to the headache, but still.
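To put a rough number on the rebuild point: even a best-case resilver can’t go faster than rewriting the whole disk sequentially, so bigger disks mean a longer window in which a second failure hurts. A toy estimate, where the 180 MB/s throughput is an assumption rather than a measurement:

```python
# Lower bound on resilver time: a full sequential rewrite of one drive.
# Real resilvers are slower (pool fullness, fragmentation, and ongoing load all matter).
ASSUMED_MB_PER_S = 180  # assumed average drive throughput

for size_tb in (2, 12, 20):  # drive sizes mentioned in this thread
    seconds = (size_tb * 1e12) / (ASSUMED_MB_PER_S * 1e6)
    print(f"{size_tb:>2} TB drive: at least {seconds / 3600:.0f} hours of rebuild window")
```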
The reason I went RAIDZ2 in my current setup was because the number of disks increases the chance of multiple failures. But with fewer disks that goes down. I’m not at all worried about data loss; as I said, I have good backups so I can always restore. So if the remaining disk dies during a rebuild, that’s unfortunate, but it only affects my uptime, not my data.
Hate to be that guy, but those maths aren’t mathing.
Fewer drives does not equal less chance of multiple failures. The statistical failure rate of one drive has no impact on another. In fact, analysis of Backblaze’s data showed that larger drives were more prone to failure (platter density vs platter count).
Who has more chance of a single disk failing today: me with 6 disks, or Backblaze with their 300,000 drives?
Same thing works with 6 vs 2.
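That’s the intuition: each drive’s odds don’t change, but more drives means more chances for something to fail in a given window. A toy calculation, assuming independent failures and a made-up 2% annual failure rate per drive:

```python
# Chance that AT LEAST ONE drive fails in a year, for different array sizes.
# The 2% AFR is purely illustrative, and the model assumes independent failures
# (the same-batch stories elsewhere in this thread suggest they often aren't).
AFR = 0.02

for n in (2, 6, 300_000):
    p_any = 1 - (1 - AFR) ** n
    print(f"{n:>6} drives: P(at least one failure this year) = {p_any:.1%}")
```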
Mine are only 25k hours or so, around 3 years. My prior set of disks had a single failure at 6 years, but I replaced them all and went to bigger capacity. There’s also the power-saving aspect of going down to 2 drives; it definitely saves some power not spinning 4 extra drives all the time.
I have a Samsung HD103UJ 1TB I’m about to retire, not because it’s bad, only because it’s being replaced with other, bigger HDDs. It’s a bit sad to be honest; these Samsungs are rock solid!
142578 hours (16 years)! 🤘
Wow. W. Bush was president (or Obama depending on month).
Edit: yep, W. Bush. Oct 6th 2008, so Obama hadn’t even been elected yet.
4x 8TB. They had 8.5k hours on them when I got them four years ago; they’ve worked non-stop since.
Ultimately it’s a matter of personal choice and risk tolerance.
The Z1 will be simpler and have larger capacity, but if you have a drive fail you’ll need to get it replaced quickly or risk having to rebuild/restore if the remaining drive follows the first one to the grave.
Your Z2 setup right now can have two drives fail and still be online, and having a wider spread of power-on hours is usually a good thing in terms of failure probability.
I manage a large number (~14,000) of on-site RAID1 arrays in various environments and there is definitely a trend for drives shipped at the same time to fail at roughly the same time. It’s common enough that we often intentionally swap drives out before shipping a new unit to the customer site.
On my homelab, I’m much more tolerant of risk since I have trust in my 3-2-1 backup solution and if my NAS goes down it’s not going to substantially affect anything while I wait for a drive replacement.
According to my Synology:
- WD40EFRX-68WT0N0 - 86,272 hours
- WD40EFRX-68WT0N0 - 86,207 hours
- WD40EFRX-68N32N0 - 34,417 hours
- ST4000VN006-3CW104 - 10,054 hours
According to my Synology:
Where are you finding this data? It’s not Info Center -> Storage…
Look into the S.M.A.R.T. reports of each drive.
On Synology DSM 7.x: Storage Manager › HDD/SSD › Health Info › S.M.A.R.T. › S.M.A.R.T. Attribute › Details › Power_On_Hours
The power-on hours are shown directly on the Health Info page, no need to click through to the SMART attributes.
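If you’d rather pull the number over SSH than click through DSM, something along these lines should work on any box with smartmontools installed (the device path is just an example, and it usually needs root):

```python
#!/usr/bin/env python3
"""Print Power_On_Hours for a drive via smartctl (sketch; needs smartmontools and root)."""
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"  # example device path

# -A prints the SMART attribute table, which includes Power_On_Hours on most drives
out = subprocess.run(
    ["smartctl", "-A", device],
    check=True, capture_output=True, text=True,
).stdout

for line in out.splitlines():
    if "Power_On_Hours" in line:
        fields = line.split()
        # RAW_VALUE is the 10th column; some vendors append extra text after the hours
        print(f"{device}: {fields[9]} hours powered on")
        break
```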
My first batch (6x 20TB) is at 8611 hours.
The 2nd batch (3x 20TB) is at 5612 hours.
So a year ago you spent over 3k on disks?
Yeah, and a new 12-bay NAS to put them all in. I had a 2-bay that I expanded with a bunch of USB drives before, but that was starting to get really messy. Basically took my entire thirteenth-month salary.