I'm currently scanning a number of old drives to detect errors.
If you google detecting bad sectors on a mechanical spinning disk (rather than an SSD) you'll usually come across:
Windows:
chkdsk /r drive:
Linux (arguments vary; note -w is a destructive write test that erases the disk, -n is the non-destructive alternative):
badblocks -wsv /dev/drive > file
and then passing that file to the filesystem so those blocks aren't used (e.g. e2fsck -l file on ext filesystems).
But a modern hard drive keeps a pool of spare sectors and will automatically remap bad sectors to them.
So am I right in saying that, if the disk is doing its job, these bad blocks won't show up in the badblocks or chkdsk results anyway, as they'll have been reallocated transparently? The tests still serve a purpose in prompting the drive to find and remap the blocks, but they won't show anything helpful until the drive has run out of spare sectors.
You should really be keeping an eye on reallocated sectors in the SMART information for the drive.
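On Linux that attribute is exposed by smartctl (from smartmontools) as attribute 5, Reallocated_Sector_Ct. A minimal sketch of reading it, assuming the classic `smartctl -A` table layout and a placeholder device name:

```python
import subprocess

def reallocated_sectors(smart_output: str) -> int:
    """Parse the raw value of Reallocated_Sector_Ct (attribute 5)
    from `smartctl -A` output. Returns -1 if the attribute is absent."""
    for line in smart_output.splitlines():
        fields = line.split()
        if fields and fields[0] == "5" and "Reallocated_Sector_Ct" in line:
            # The raw value is the last column of the attribute row.
            return int(fields[-1])
    return -1

if __name__ == "__main__":
    # /dev/sda is a placeholder; this needs root and smartmontools installed.
    out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                         capture_output=True, text=True).stdout
    print(reallocated_sectors(out))
```

The same approach works for attribute 197 (Current_Pending_Sector), which is worth watching alongside 5: pending sectors are ones the drive wants to remap but hasn't yet.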
But is there any way to know:
- How many spare sectors the drive is keeping back for this reallocation
- Similarly, how many reallocations are acceptable. I guess you're looking for a rate of increase here to show problems?
- If you were scripting some monitoring of the reallocations, how you'd set those parameters.
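Drive vendors don't publish spare-pool sizes, so a common heuristic is to alert on any increase in the count and escalate when the rate of increase is high. A sketch of that classification logic, where the rate threshold is an assumption to tune, not a manufacturer figure:

```python
def check_reallocations(previous: int, current: int,
                        hours_between: float,
                        warn_rate: float = 1.0) -> str:
    """Classify a change in Reallocated_Sector_Ct between two polls.

    warn_rate is new sectors per hour above which growth is treated
    as a likely failing drive (an assumed threshold, tune to taste).
    """
    if current < previous:
        return "error: counter decreased, SMART data suspect"
    delta = current - previous
    if delta == 0:
        return "ok"
    rate = delta / hours_between
    if rate >= warn_rate:
        return "critical: %d new reallocations (%.2f/hour)" % (delta, rate)
    return "warning: %d new reallocations" % delta
```

In practice you'd run this from cron, keep the previous count in a state file, and send mail on anything other than "ok".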
Or have I missed the point here?
TL;DR: Given that a drive will run out of spare sectors at some point, how would you script a warning for when this occurs, so you can start telling the filesystem to take account of bad blocks (assuming the rate of change isn't high enough to indicate imminent failure)?