RAID stands for Redundant Array of Inexpensive Disks; the word "disks" is often replaced with "drives," giving Redundant Array of Inexpensive Drives. In the early days, even a small amount of disk capacity was expensive to buy; today, a large amount of disk is available for the same cost. RAID pools a collection of disks together to form one logical volume.
In this article, let us see how to remove a RAID array in Linux.
Suppose you previously created a software RAID 5 array in Linux and mounted it on a directory for storage. Removing the RAID is simply a matter of deactivating it.
First, let us look at the status of the setup:
Code:
Operating system : CentOS release 6.5 (Final)
RAID device      : /dev/md0
md0 : active raid5 sdd1[4] sdc1[3] sdb1[1] sda1[0]
Four disks are active in the md0 device:
Code:
[root@hoststud ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sdc1[3] sdb1[1] sda1[0]
      4189184 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
[root@hoststud ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Apr 21 01:47:10 2021
Raid Level : raid5
Array Size : 4189184 (4.00 GiB 4.29 GB)
Used Dev Size : 2094592 (2045.84 MiB 2144.86 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Wed Apr 21 21:59:48 2021
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : srv6:0 (local to host srv6)
UUID : 4e7c1751:cd467d3f:8e86a6a1:3c88f6a4
Events : 139
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
3 8 33 2 active sync /dev/sdc1
4 8 49 3 active sync /dev/sdd1
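As a side note, the array-to-member mapping shown by /proc/mdstat can also be pulled out with a short awk script. This is a sketch added for illustration, not part of the original walkthrough; it reads sample text from a here-document so it runs without a live array (on a real system, pipe /proc/mdstat into it instead):

```shell
# Print each md array, its RAID level, and its member devices from
# mdstat-format text. The sample input mirrors the output shown above.
awk '/^md[0-9]+ :/ {
    printf "%s (%s):", $1, $4          # array name and RAID level
    for (i = 5; i <= NF; i++) {        # fields 5..NF are member devices
        dev = $i
        sub(/\[[0-9]+\]/, "", dev)     # strip the [n] slot suffix
        printf " %s", dev
    }
    print ""
}' <<'EOF'
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[4] sdc1[3] sdb1[1] sda1[0]
EOF
# -> md0 (raid5): sdd1 sdc1 sdb1 sda1
```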
Now let us start the process of removing the RAID 5 array. It follows the usual steps: take a backup of the RAID data if needed, then unmount the filesystem and remove its entry from /etc/fstab:
Code:
[root@hoststud ~]# df -hTP /raid5_disk/
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md0       ext4  5.9G  213M  5.4G   4% /raid5_disk
[root@srv6 ~]# umount /raid5_disk
[root@srv6 ~]# sed -i '/md0/d' /etc/fstab
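The sed command above deletes every /etc/fstab line that mentions md0, editing the file in place. It is safest to rehearse it on a throwaway copy first; a minimal sketch (the sample entries and /tmp path are assumptions, not from the article):

```shell
# Build a throwaway fstab with an md0 entry plus one unrelated line,
# run the same sed delete, and confirm only the md0 line disappears.
printf '%s\n' '/dev/md0 /raid5_disk ext4 defaults 0 0' \
              'proc /proc proc defaults 0 0' > /tmp/fstab.test
sed -i '/md0/d' /tmp/fstab.test   # same expression as used on /etc/fstab
cat /tmp/fstab.test               # only the proc line remains
```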
Next, stop the RAID device:
Code:
[root@hoststud ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
Then remove the device:
Code:
mdadm --remove /dev/md0
In most cases, however, you cannot run --remove at all, because the md device is already gone once it has been stopped:
Code:
[root@hoststud ~]# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
[root@hoststud ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
[root@hoststud ~]# ls -l /dev/md0
ls: cannot access /dev/md0: No such file or directory
[root@hoststud ~]# mdadm --remove /dev/md0
mdadm: error opening /dev/md0: No such file or directory
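One way to sidestep that error is to check whether the device node still exists before calling --remove. This guard is a sketch added here, not part of the original article:

```shell
# Only attempt --remove when the md node is still present; after
# --stop the node is usually gone already.
dev=/dev/md0
if [ -e "$dev" ]; then
    mdadm --remove "$dev"
else
    echo "$dev already gone, skipping --remove"
fi
```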
Finally, remove the RAID superblocks from all of the member disks:
Code:
[root@hoststud ~]# mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
As you can see, removing a RAID 5 array from Linux is not a difficult task: just a few commands remove the RAID from the associated disks.