Tuesday, July 7, 2015

mdadm - Currently unreadable sectors on RAID 5 linux drive



Every 30 minutes I get smartd messages in /var/log/messages:



smartd[3588]: Device: /dev/sdc, 176 Currently unreadable (pending) sectors




This drive (sdc) is part of a RAID 5 array configured with mdadm.
The mdadm monitor says the RAID is OK, but I want to know whether I need to replace the drive or not. Also, is it necessary to mark these sectors as bad, or has the OS already done that?
If I need to replace the drive, how do I choose the replacement? I can't find the number of blocks in hard drive specifications, so if I choose one with fewer blocks than the original, I will be in trouble.
Thanks.


Answer



Yes, change the drive.



Unreadable (pending) sectors are sectors whose contents could not be read. In a normal non-RAID situation that would result in either a read error, or a long delay while the drive attempts to read the sector again and again until it succeeds (or until it eventually gives up).
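
You can inspect the raw counter yourself. A quick check, assuming smartmontools is installed and the drive is /dev/sdc as in the question:

  # Attribute 197 (Current_Pending_Sector) is the counter smartd
  # is reporting on; 198 (Offline_Uncorrectable) is worth a look too.
  smartctl -A /dev/sdc | grep -i -E 'pending|uncorrectable'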



With RAID, two things happen:





  1. Your disk is probably configured with a short TLER value. Thus it will give up its attempts to read that sector within a reasonable time, preventing long hangs. (You can verify this with the check shown after this list.)

  2. Your RAID array notices the failure and reads the data from another disk. This is the advantage of RAID 5; you have a spare copy.
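
You can verify the drive's error-recovery timeout yourself. A minimal check, assuming a SATA drive that supports SCT Error Recovery Control (many desktop drives do not):

  # Query the current ERC timeouts (reported in tenths of a second).
  smartctl -l scterc /dev/sdc

  # Optionally set 7-second read/write timeouts, a common value for
  # RAID members. Note: this setting is usually lost on power cycle.
  smartctl -l scterc,70,70 /dev/sdc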



What you want to do is:




  1. Check your backups. You should not need them if all goes well.


  2. Fetch a replacement disk of equal or larger size. You can check the size with smartctl -a /dev/sdc. Do not assume all drives of size X have the same capacity. Manufacturers like round numbers; one 500 GB drive might well be smaller than another 500 GB drive.

  3. Take the problem disk off-line. (mdadm --manage /dev/mdX --fail /dev/sdc, then mdadm --manage /dev/mdX --remove /dev/sdc; a member must be marked failed before mdadm will remove it.)

  4. Replace the disk with new hardware and let the array rebuild itself. (mdadm --manage /dev/mdX --add /dev/sdc; a combined sketch of steps 2-4 follows this list.)
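
Putting steps 2-4 together: a sketch of the whole swap, assuming the array is /dev/md0 (substitute your own md device and disk names):

  # Record the exact capacity of the failing disk, in bytes, so the
  # replacement can be checked against it.
  blockdev --getsize64 /dev/sdc

  # Mark the disk as failed, then remove it from the array.
  mdadm --manage /dev/md0 --fail /dev/sdc
  mdadm --manage /dev/md0 --remove /dev/sdc

  # After physically swapping the drive, add the new one.
  # The rebuild starts automatically.
  mdadm --manage /dev/md0 --add /dev/sdc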



If you use large disks then this will take a lot of time. Sometimes it is faster to just rebuild the RAID array from scratch and restore from backups. (TEST those backups first!)
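
You can watch the rebuild while it runs. Assuming the array is /dev/md0:

  # Shows progress, speed, and an estimated finish time.
  cat /proc/mdstat

  # More detail, including the array state and rebuild percentage.
  mdadm --detail /dev/md0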



While the RAID is rebuilding you have no redundancy. Thus if another disk fails (e.g. due to the stress of rebuilding) then you have a problem. This sometimes happens with large disks (long rebuild times) and batches of drives from the same date.
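
One way to lower that risk is to scrub the array before swapping the disk, so latent read errors on the remaining members surface while you still have redundancy. A sketch, assuming /dev/md0; the md 'check' action reads every sector and repairs what it can from parity:

  # Start a full read-check of the array.
  echo check > /sys/block/md0/md/sync_action

  # Watch progress; sync_action returns to 'idle' when finished.
  cat /sys/block/md0/md/sync_action
  cat /proc/mdstat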

