How to create RAID 5 (Striping with Distributed Parity)


RAID 5 (Redundant Array of Independent Disks, level 5) is a popular disk or solid-state drive (SSD) subsystem that increases safety by computing parity data and increases speed by interleaving data across three or more drives (striping).

 

About Parity

Parity is the simplest common method of detecting errors in data storage. In RAID 5, parity information is distributed across all member disks. Say we have 4 disks: the equivalent of one disk's capacity is spread across all four drives to hold the parity information. If any single disk fails, we can still recover the data by rebuilding it from the parity information after replacing the failed disk.

 

About RAID 5

RAID 5 is mostly used at the enterprise level. It works on the distributed-parity method: when a drive fails, the parity information left on the remaining good drives is used to rebuild the data, which protects the array against a single drive failure.

Assume we have 4 drives: if one drive fails, we can replace it and rebuild the new drive from the parity information, which is stored across all 4 drives. With 4 × 1 TB drives, about 256 GB of each drive holds parity and the remaining 768 GB of each drive is available for user data. RAID 5 can survive a single drive failure; if more than one drive fails, data is lost.
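The capacity arithmetic above can be checked with a quick calculation (assuming 1 TB ≈ 1024 GB, which is what the 256 GB figure implies):

```shell
# Usable space of an n-drive RAID 5 is (n - 1) x drive size;
# one drive's worth of capacity is consumed by parity, spread evenly.
n=4
size_gb=1024                           # one 1 TB drive, in GB
usable=$(( (n - 1) * size_gb ))        # space available to users
parity_per_drive=$(( size_gb / n ))    # parity share held on each drive
echo "usable=${usable}GB parity-per-drive=${parity_per_drive}GB"
```

This prints `usable=3072GB parity-per-drive=256GB`, matching the figures above.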

  1. Excellent performance.

  2. Read speed is very good.

  3. Write speed is average, and slow without a hardware RAID controller.

  4. Data can be rebuilt from the parity information spread across all drives.

  5. Tolerates a single drive failure.

  6. One disk's worth of capacity is used for parity.

  7. Suitable for file servers, web servers, and important backups.

 

Requirements:

A minimum of 3 hard drives is required to create RAID 5, but you can add more disks if you have a dedicated multi-port hardware RAID controller. Here, we are using software RAID and the ‘mdadm‘ package to create the array.

To create RAID 5, follow the next steps:

  • Update the system and install “mdadm” package:

    The mdadm utility is a small program that allows us to configure and manage RAID devices in Linux.
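The exact install command depends on your distribution; here is a minimal sketch that picks the matching package manager (the package is named ‘mdadm‘ on both families):

```shell
# Detect the package manager and print the matching install command.
# Run the printed command as root to actually install mdadm.
if command -v yum >/dev/null 2>&1; then
  pkg_cmd="yum -y update && yum -y install mdadm"
elif command -v apt-get >/dev/null 2>&1; then
  pkg_cmd="apt-get update && apt-get -y install mdadm"
else
  pkg_cmd="<use your distribution's package manager to install mdadm>"
fi
echo "$pkg_cmd"
```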

After installing the ‘mdadm‘ package, let’s list the three 20 GB disks we have added to our system using the ‘fdisk‘ command.

# fdisk -l | grep sd

Now examine the three attached drives for any existing RAID blocks using either of the following commands.

# mdadm -E /dev/sd[b-d]
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd

 

 

Creating partitions for RAID 5

First and foremost, we have to partition the disks (/dev/sdb, /dev/sdc, and /dev/sdd) before adding them to the RAID. So let us define the partitions using the ‘fdisk‘ command before moving to the next steps.

# fdisk /dev/sdb
# fdisk /dev/sdc
# fdisk /dev/sdd

 

Create /dev/sdb Partition

Follow the instructions below to create a partition on the /dev/sdb drive.

  1. Press ‘n‘ to create a new partition.

  2. Then choose ‘p‘ for a primary partition. We choose primary because there are no partitions defined yet.

  3. Then choose ‘1‘ as the partition number. By default, it will be 1.

  4. For the cylinder size we don’t need to specify anything, because we want the whole disk for RAID, so just press Enter twice to accept the default full size.

  5. Next press ‘p‘ to print the created partition.

  6. Press ‘t‘ to change the partition type. To list every available type, press ‘L‘.

  7. Here we select ‘fd‘ (Linux raid autodetect) as the type.

  8. Press ‘p‘ again to verify the change we have made.

  9. Finally, use ‘w‘ to write the changes.
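The interactive steps above can also be scripted. A sketch using ‘parted‘ instead of ‘fdisk‘ follows; the device names are the ones used in this guide, so double-check yours, and leave DRY_RUN at 1 until you are sure:

```shell
# For each disk: create an MSDOS label, one primary partition spanning the
# whole disk, and set its raid flag (equivalent to fdisk type 'fd').
DRY_RUN=1   # set to 0 to really run the commands (destructive!)
for disk in /dev/sdb /dev/sdc /dev/sdd; do
  cmd="parted -s $disk mklabel msdos mkpart primary 0% 100% set 1 raid on"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"
  else
    $cmd
  fi
done
```

With DRY_RUN=1 this only prints the three parted commands, one per disk.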

 

Create /dev/sdc Partition

Now partition the sdc drive by following the same steps used above for sdb.

# fdisk /dev/sdc

 

Create /dev/sdd Partition

Likewise, partition the sdd drive by following the same steps.

# fdisk /dev/sdd

After creating the partitions, check for the changes on all three drives sdb, sdc, and sdd.

# mdadm --examine /dev/sdb /dev/sdc /dev/sdd

or

# mdadm -E /dev/sd[b-d]

Now check for RAID blocks on the newly created partitions. If no super-blocks are detected, we can move forward and create the new RAID 5 setup on these drives.

 

Creating RAID 5 md device md0

Now create the RAID device ‘md0‘ (i.e. /dev/md0) with RAID level 5 across the newly created partitions (sdb1, sdc1, and sdd1) using either of the commands below.

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1

or

# mdadm -C /dev/md0 -l 5 -n 3 /dev/sd[b-d]1

After creating the raid device, check and verify the RAID level and the included devices from the mdstat output.

# cat /proc/mdstat

If you want to monitor the current building process, you can use the ‘watch‘ command: pass ‘cat /proc/mdstat‘ to watch and it will refresh the screen every second.

# watch -n1 cat /proc/mdstat

After the raid has been created, verify the raid devices using the following command.

# mdadm -E /dev/sd[b-d]1

Next, verify the RAID array to confirm that the devices we included are running and have started to re-sync.

# mdadm --detail /dev/md0

 

Creating file system for md0

Create a file system for the ‘md0‘ device using ext4 before mounting.

# mkfs.ext4 /dev/md0

Now create a directory under ‘/mnt‘, mount the created filesystem under /mnt/raid5, and check the files under the mount point; you will see the lost+found directory.

# mkdir /mnt/raid5
# mount /dev/md0 /mnt/raid5/
# ls -l /mnt/raid5/

Create a few files under the mount point /mnt/raid5 and append some text to one of them to verify the content.

# touch /mnt/raid5/raid5_nexonhost_{1..5}
# ls -l /mnt/raid5/
# echo "nexonhost raid setups" > /mnt/raid5/raid5_nexonhost_1
# cat /mnt/raid5/raid5_nexonhost_1
# cat /proc/mdstat

We need to add an entry in fstab, otherwise the mount point will not come back after a system reboot. To add an entry, edit the fstab file and append the following line. The mount point will differ according to your environment.

# vim /etc/fstab

/dev/md0                /mnt/raid5              ext4    defaults        0 0

Next, run the ‘mount -av‘ command to check for any errors in the fstab entry.

# mount -av
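As an alternative to ‘/dev/md0‘ in fstab, you can mount by filesystem UUID, which survives the md device being renumbered. A sketch follows; the fallback UUID is a placeholder for systems where /dev/md0 doesn't exist yet:

```shell
# Look up the filesystem UUID of the array; fall back to a placeholder
# if the device isn't present (e.g. when testing this snippet elsewhere).
uuid=$(blkid -s UUID -o value /dev/md0 2>/dev/null || true)
uuid=${uuid:-XXXX-PLACEHOLDER}
line="UUID=$uuid /mnt/raid5 ext4 defaults 0 0"
echo "$line"    # append this line to /etc/fstab instead of the /dev/md0 entry
```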

 

Save Raid 5 Configuration

By default, RAID doesn’t have a config file; we have to save it manually. If this step is skipped, the RAID device will not come up as md0 after a reboot; it will be assembled under some other random number.

So we must save the configuration before the system reboots. If the configuration is saved, it will be loaded by the kernel during the system boot and the RAID will also get loaded.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf
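Note that the config file path differs by distribution: ‘/etc/mdadm.conf‘ on RHEL/CentOS versus ‘/etc/mdadm/mdadm.conf‘ on Debian/Ubuntu. A small sketch that picks the right path and prints the matching save command:

```shell
# Choose the mdadm config path for this system and print the save command.
conf=/etc/mdadm.conf
[ -d /etc/mdadm ] && conf=/etc/mdadm/mdadm.conf
echo "mdadm --detail --scan --verbose >> $conf"
```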

 

Adding Spare Drives

What is the use of adding a spare drive? It is very useful: if any one of the disks in our array fails, the spare drive becomes active automatically, the rebuild process starts, and the data is synced from the other disks, so redundancy is restored.

For more instructions on how to add a spare drive and check RAID 5 fault tolerance, read Step 6 and Step 7 in the following article.
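As an illustration (the device name is an assumption; prepare ‘/dev/sde1‘ the same way as the other partitions first), attaching a hot spare to the running array looks like this. The commands are only printed here; remove the ‘echo‘ wrappers to run them for real:

```shell
# Print the commands that attach /dev/sde1 as a hot spare and verify it.
spare=/dev/sde1
echo "mdadm --add /dev/md0 $spare"     # the device joins the array as a spare
echo "mdadm --detail /dev/md0"         # it should be listed with state 'spare'
```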