In this tutorial we will learn how to replace a failed drive in a software RAID 1 setup on a Linux system. First of all, let us assume we have the following RAID setup.
There are four RAID arrays (md devices) in total:
/dev/md0 - swap
/dev/md1 - /boot
/dev/md2 - /
/dev/md3 - /home
As long as both drives are healthy, cat /proc/mdstat shows every array as [UU]:
Code:
root@hoststud [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[2]
4190208 blocks super 1.2 [2/2] [UU]
md2 : active raid1 sdb3[1] sda3[2]
524156928 blocks super 1.2 [2/2] [UU]
bitmap: 4/4 pages [16KB], 65536KB chunk
md1 : active raid1 sdb2[1] sda2[2]
523712 blocks super 1.2 [2/2] [UU]
md3 : active raid1 sdb4[1] sda4[2]
3377878848 blocks super 1.2 [2/2] [UU]
bitmap: 0/26 pages [0KB], 65536KB chunk
unused devices: <none>
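If you want to see at a glance how the physical partitions map onto the md devices and their mount points, lsblk (part of util-linux, so available on practically any modern distribution) gives a compact overview; the column list here is just a suggestion:
Code:
# lsblk -o NAME,SIZE,TYPE,MOUNTPOINT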
Now, let us assume that /dev/sda got corrupted and failed. A missing or defective drive is shown as [U_] or [_U]; if the RAID array is intact, it shows [UU].
Code:
root@hoststud [~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[2](F)
4190208 blocks super 1.2 [2/1] [U_]
md2 : active raid1 sdb3[1] sda3[2](F)
524156928 blocks super 1.2 [2/1] [U_]
bitmap: 4/4 pages [16KB], 65536KB chunk
md1 : active raid1 sdb2[1] sda2[2](F)
523712 blocks super 1.2 [2/1] [U_]
md3 : active raid1 sdb4[1] sda4[2](F)
3377878848 blocks super 1.2 [2/1] [U_]
bitmap: 0/26 pages [0KB], 65536KB chunk
unused devices: <none>
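For more detail than /proc/mdstat provides (which member failed, the array state, and so on), you can also query an individual array with mdadm, for example:
Code:
# mdadm --detail /dev/md0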
Now we have to remove the failed drive's partitions from the arrays before the new drive can be added. This is easily done with the commands below:
Code:
# mdadm /dev/md0 -r /dev/sda1
# mdadm /dev/md1 -r /dev/sda2
# mdadm /dev/md2 -r /dev/sda3
# mdadm /dev/md3 -r /dev/sda4
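Note that mdadm will only remove a member that is already marked as failed. If the kernel has not flagged the partitions as faulty yet (for example, when you are swapping a drive preventively), mark them as failed first and then remove them, e.g. for md0:
Code:
# mdadm /dev/md0 --fail /dev/sda1
# mdadm /dev/md0 -r /dev/sda1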
Next, have the failed drive physically replaced - contact your system administrator or datacenter support and wait for confirmation that the new drive is in place. The new drive then has to be prepared for the RAID: both drives in the array need exactly the same partitioning, so we copy the partition table from the intact drive sdb to the new sda.
Code:
# sgdisk --backup=sdb_parttable_gpt.bak /dev/sdb
# sgdisk --load-backup=sdb_parttable_gpt.bak /dev/sda
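Before going any further it is worth double-checking that the new drive now carries exactly the same layout as the old one, for example by printing both partition tables and comparing them:
Code:
# sgdisk -p /dev/sda
# sgdisk -p /dev/sdb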
The new sda drive then needs to be assigned new random GUIDs, because the restored backup still carries the identifiers copied from sdb:
Code:
# sgdisk -G /dev/sda
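If you want to confirm that the GUIDs really were randomized and no longer collide with the ones on sdb, sgdisk can print the details per partition (-i takes a partition number; partition 1 is used here only as an example):
Code:
# sgdisk -i 1 /dev/sda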
After that, the new drive's partitions are added back into the RAID arrays:
Code:
# mdadm /dev/md0 -a /dev/sda1
# mdadm /dev/md1 -a /dev/sda2
# mdadm /dev/md2 -a /dev/sda3
# mdadm /dev/md3 -a /dev/sda4
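The arrays will now start rebuilding onto the new drive. The resync of the larger arrays can take quite a while; you can follow the progress in /proc/mdstat, for example with:
Code:
# watch -n 5 cat /proc/mdstat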
Finally, as the serial number of the sda disk changed with the replacement, we need to generate a new device map and reinstall the bootloader with GRUB2:
Code:
# grub2-install /dev/sda

Now you are done. Enjoy!