Linux
Welcome to c/linux!
Welcome to our thriving Linux community! Whether you're a seasoned Linux enthusiast or just starting your journey, we're excited to have you here. Explore, learn, and collaborate with like-minded individuals who share a passion for open-source software and the endless possibilities it offers. Together, let's dive into the world of Linux and embrace the power of freedom, customization, and innovation. Enjoy your stay and feel free to join the vibrant discussions that await you!
Rules:
- Stay on topic: Posts and discussions should be related to Linux, open source software, and related technologies.
- Be respectful: Treat fellow community members with respect and courtesy.
- Quality over quantity: Share informative and thought-provoking content.
- No spam or self-promotion: Avoid excessive self-promotion or spamming.
- No NSFW or adult content.
- Follow the general Lemmy guidelines.
At what point do you consider replacing a drive?
When I worked at a data center, I noticed drives tended to die around 50k hours. Some last a lot longer, but when you're testing hundreds of drives you start to see the patterns. So when my drives get to 50k hours I replace them preemptively, just to avoid data loss. I might still keep them in a redundant backup or something like that.
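If you want to see where your own drives stand against that 50k-hour mark, SMART attribute 9 (Power_On_Hours, as reported by `smartctl -A /dev/sdX` from smartmontools) gives the runtime. A minimal sketch of the comparison; `check_drive` and the hours-per-year conversion are my own hypothetical helper, not anything from smartmontools:

```shell
#!/bin/sh
# Hypothetical helper: decide whether a drive is past the ~50,000-hour
# replacement threshold discussed above. Pass it the Power_On_Hours
# value read from `smartctl -A` (SMART attribute 9).
THRESHOLD=50000

check_drive() {
    hours="$1"
    years=$((hours / 8766))   # ~8,766 hours per year of continuous spin
    if [ "$hours" -ge "$THRESHOLD" ]; then
        echo "replace: ${hours}h (~${years}y) >= ${THRESHOLD}h"
    else
        echo "ok: ${hours}h (~${years}y)"
    fi
}

check_drive 52340   # past the threshold
check_drive 18000   # still comfortably young
```

In practice you would feed it something like `smartctl -A /dev/sda | awk '/Power_On_Hours/ {print $10}'`, but the exact column can vary by drive model, so check your own output first.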
When they fail, or when the capacity becomes a hindrance. Other than that, if you follow the 3-2-1 backup rule, you shouldn't lose data.
Replacing after 50,000 hours makes sense in an enterprise data center setting. At home, it's not much of an issue for me to have a day of downtime replicating data back across drives; it just costs me my time. In an enterprise setting it also costs you money, possibly enough or more to justify retiring drives at 50,000 hours. Then again, if you have a RAID setup with spare drives, you can just keep running while the array rebuilds itself, only replacing a drive when it goes bad, or starts acting up on its way to going bad.
It all honestly depends on your IT department's budget, competence, and staffing. It's not wrong to replace drives after 50,000 hours, but it could be wasteful. There are, after all, people like myself who buy those drives and run them for years without incident.
Vibrational-mode failure is more of a thing in large SAS-backplane enterprise JBOD rack-mount deployments. Small workstation/NAS setups with three to five drives, mounted on rubber grommets and such, shouldn't see many vibration-induced failures. However, a large bay full of drives spinning up and down and hitting harmonics can absolutely tear itself apart over time.