How To Create RAID 6 (Striping With Double Distributed Parity)

On Linux-based operating systems (OS), software RAID functionality is provided with the help of the md(4) (Multiple Device) driver and managed by the mdadm(8) utility. “md” and “mdadm” in RHEL 6 support RAID levels 0, 1, 4, 5, 6, and 10. This document uses RAID 6 as an example while working with software RAID.


About RAID 6

RAID 6 is an upgraded version of RAID 5 with two sets of distributed parity, which provides fault tolerance even after two drives fail. Mission-critical systems remain operational in case of two concurrent disk failures. It is similar to RAID 5, but more robust, because it uses one more disk for parity.

RAID 6 comes with double distributed parity. Don’t expect it to outperform other RAID levels; for that, we would have to install a dedicated RAID controller as well. Here in RAID 6, even if we lose 2 disks, we can get the data back by replacing them with spare drives and rebuilding from parity.

  1. Read performance is good.

  2. RAID 6 is expensive, as it requires two independent drives for parity functions.

  3. We will lose two disks' worth of capacity to parity information (double parity); see the example after this list.

  4. No data loss, even after two disks fail. We can rebuild from parity after replacing the failed disks.

  5. Reading will be better than in RAID 5, because it reads from multiple disks, but writing performance will be very poor without a dedicated RAID controller.
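
For example, with four 20GB disks, the usable capacity is (4 − 2) × 20GB = 40GB; the capacity of the other two disks is consumed by parity information.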


Requirements

A minimum of 4 disks is required to create a RAID 6. You can add more disks if you want, but for that you should have a dedicated RAID controller. With software RAID, we won’t get better performance in RAID 6, so we need a physical RAID controller for that.

To create RAID 6, follow the next steps:

  • Update the system and install the “mdadm” package:

    mdadm is a small program that allows us to configure and manage RAID devices in Linux.
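
On a RHEL/CentOS 6 system, which this document assumes, both steps can be done with yum (Debian-based systems would use apt-get instead):

# yum update -y
# yum install mdadm -y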

After the ‘mdadm‘ package installation, let’s list the four 20GB disks which we have added to our system using the ‘fdisk‘ command.

We will set up software RAID 6, or Striping with Double Distributed Parity, on a Linux system or server using four 20GB disks named /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde.
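
A quick way to list the attached disks (device names as in this example; output will vary) is:

# fdisk -l | grep -i sd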


Creating partitions for RAID 6

Now create partitions for RAID on ‘/dev/sdb‘, ‘/dev/sdc‘, ‘/dev/sdd‘ and ‘/dev/sde‘ with the help of the following fdisk command. Here, we will show how to create a partition on the sdb drive; the same steps are to be followed for the rest of the drives.

Create /dev/sdb Partition
# fdisk /dev/sdb

Please follow the instructions shown below to create the partition.

  1. Press ‘n‘ to create a new partition.

  2. Then choose ‘p‘ for a Primary partition.

  3. Next choose the partition number as 1.

  4. Accept the default values by just pressing the Enter key twice.

  5. Next press ‘p‘ to print the defined partition.

  6. Type ‘t‘ to change the partition type.

  7. Press ‘l‘ to list all available types.

  8. Choose ‘fd‘ for Linux raid autodetect and press Enter to apply.

  9. Then again use ‘p‘ to print the changes we have made.

  10. Use ‘w‘ to write the changes.

Create /dev/sdc Partition
# fdisk /dev/sdc
Create /dev/sdd Partition
# fdisk /dev/sdd
Create /dev/sde Partition
# fdisk /dev/sde
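
If you prefer to script the remaining drives rather than repeating the interactive steps, the following one-liner is a minimal sketch that feeds the same key sequence (n, p, 1, Enter, Enter, t, fd, w) to fdisk. It assumes the classic fdisk prompts found on RHEL 6; test it on a disposable disk first.

# for DEV in /dev/sdc /dev/sdd /dev/sde; do echo -e "n\np\n1\n\n\nt\nfd\nw" | fdisk $DEV; done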

After creating the partitions, it’s always a good habit to examine the drives for existing RAID superblocks. If no superblocks exist, we can go ahead and create a new RAID setup.

# mdadm -E /dev/sd[b-e]1


or

# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1


Creating RAID 6 md device

Now it’s time to create the RAID device ‘md0‘ (i.e. /dev/md0), apply the RAID level to all the newly created partitions, and confirm the RAID using the following commands.

# mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# cat /proc/mdstat

You can also watch the current progress of the RAID build using the watch command as shown below.

# watch -n1 cat /proc/mdstat

Verify the RAID devices using the following command.

# mdadm -E /dev/sd[b-e]1

Next, verify the RAID array to confirm that re-syncing has started.

# mdadm --detail /dev/md0


Creating Filesystem on RAID Device

Create an ext4 filesystem on ‘/dev/md0‘ and mount it under /mnt/raid6. Here we’ve used ext4, but you can use any filesystem type of your choice.

# mkfs.ext4 /dev/md0
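
For instance, to use XFS instead (assuming the xfsprogs package is installed), the command would be:

# mkfs.xfs /dev/md0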

Mount the created filesystem under /mnt/raid6 and verify the files under the mount point; we can see the lost+found directory.

# mkdir /mnt/raid6
# mount /dev/md0 /mnt/raid6/
# ls -l /mnt/raid6/

Create some files under the mount point and append some text to one of the files to verify the content.

# touch /mnt/raid6/raid6_test.txt
# ls -l /mnt/raid6/
# echo "nexonhost raid setups" > /mnt/raid6/raid6_test.txt
# cat /mnt/raid6/raid6_test.txt

To auto-mount the device at system startup, open /etc/fstab and append the entry below; the mount point may differ according to your environment.

# vim /etc/fstab

/dev/md0                /mnt/raid6              ext4    defaults        0 0
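
As an optional alternative not used in this walkthrough, you can mount by UUID, which is more robust if device names change across reboots. Get the UUID with blkid and substitute it for the device name (the <uuid-from-blkid> placeholder below is hypothetical):

# blkid /dev/md0

UUID=<uuid-from-blkid>  /mnt/raid6              ext4    defaults        0 0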

Next, execute the ‘mount -av‘ command to verify whether there are any errors in the fstab entry.

# mount -av


Save RAID 6 Configuration

Please note that by default RAID doesn’t have a config file. We have to save it manually using the command below, and then verify the status of the device ‘/dev/md0‘.

# mdadm --detail --scan --verbose >> /etc/mdadm.conf
# mdadm --detail /dev/md0

Now it has 4 disks, with two sets of parity information available. Even if two of the disks fail, we can recover the data, thanks to the double parity in RAID 6.

If a second disk fails, we can add a new one before losing a third. It is possible to define a spare drive while creating the RAID set, but I have not done so here; a spare can also be added after a drive failure or at any time after the RAID set is created. Since our RAID set already exists, let me add a spare drive for demonstration.

For demonstration purposes, I’ve hot-plugged a new HDD (i.e. /dev/sdf); let’s verify the attached disk.

# ls -l /dev/ | grep sd

Now confirm whether any RAID is already configured on the newly attached disk using the same mdadm command, then partition it.

# mdadm --examine /dev/sdf
# fdisk /dev/sdf

After creating a new partition on /dev/sdf, confirm that there is no RAID on the partition, add the spare drive to the /dev/md0 RAID device, and verify the added device.

# mdadm --examine /dev/sdf
# mdadm --examine /dev/sdf1
# mdadm --add /dev/md0 /dev/sdf1
# mdadm --detail /dev/md0


Check RAID 6 Fault Tolerance

Now, let us check whether the spare drive works automatically if any one of the disks fails in our array. For testing, I’ve manually marked one of the drives as failed.

Here, we’re going to mark /dev/sdd1 as the failed drive.

# mdadm --manage /dev/md0 --fail /dev/sdd1

Let me get the details of the RAID set now and check whether our spare has started to sync.

# mdadm --detail /dev/md0
# cat /proc/mdstat
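
Once the spare has finished rebuilding, the failed partition can be removed from the array. This follow-up step is not shown in the original demonstration, but the standard mdadm manage-mode command for it is:

# mdadm --manage /dev/md0 --remove /dev/sdd1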