RAID 1+0 with LVM

Understanding RAID 1+0 at CDOT

WARNING: I should mention that everything I do below is dangerous to your data. I recommend using a virtual machine if you care about your files or don’t know what you are doing.

I have recently started working at CDOT for a co-op work term at Seneca College. My first task is to understand RAID: how it works and how best to implement it on our systems, in combination with LVM. Since there are tons of sources for RAID and LVM out there, I’m only going to talk about a more advanced (and specific) setup and how I created it. I would appreciate any comments on how to do this more easily or efficiently.

I chose to do RAID 1+0 because it seemed more versatile and powerful than RAID 0+1. I don’t believe this is always the case; it depends on the setup you have. There is also the single-level “raid10” personality in mdadm, which does a lot of configuration behind the scenes, so research them all and the situations in which each works best.

This setup was run inside a Fedora 18 beta virtual machine (KVM).
Note: This setup does not contain the root file system; it is just a RAID 1+0 array for data storage built on solid state drives.

Pretend I have 3 solid state drives and 1 hard disk drive.
SSD: /dev/vdb /dev/vdc /dev/vdd
HDD: /dev/vde

WARNING: Remember that doing all this stuff without understanding it will probably break everything ever…

Step 1:
Remove all partitions on each device with fdisk and create new partitions.

fdisk /dev/vdb
> d          # d deletes a partition
> 1          # selects partition 1; if there is only one partition, fdisk deletes it without asking
> w          # w writes the changes to disk

Note: if the drive was part of a RAID array at some point before, it might still carry an old RAID superblock. This means it might be re-assembled into that array during a reboot. To remove the RAID superblock and stop this from happening, use:

mdadm --zero-superblock /dev/vdb

Create the new partition:

fdisk /dev/vdb
> n          # n creates a new partition
> p          # p selects primary
> 1          # selects partition number 1
> [enter]    # press enter to accept the default first sector
> +2G        # make this partition 2GB; change the size to whatever you need
> w          # w writes the changes to disk

Redo the above steps for each SSD you would like in the array. In this case I’m using /dev/vdb /dev/vdc /dev/vdd.
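If you’d rather not retype the keystrokes for every drive, the same fdisk recipe can be piped in non-interactively. This is only a sketch using the device names from this article; the DRY_RUN guard is my own addition so the script prints what it would do instead of touching anything until you set DRY_RUN=0.

```shell
#!/bin/sh
# partition_ssds: apply the same fdisk recipe (n, p, 1, default start,
# +2G, w) to each device given on the command line.
# DRY_RUN=1 (the default) only prints what would run.
partition_ssds() {
    for dev in "$@"; do
        if [ "${DRY_RUN:-1}" = 1 ]; then
            echo "would run: fdisk $dev  (keystrokes: n p 1 <enter> +2G w)"
        else
            printf 'n\np\n1\n\n+2G\nw\n' | fdisk "$dev"
        fi
    done
}

partition_ssds /dev/vdb /dev/vdc /dev/vdd
```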

Step 2:
Create the partition for the HDD. The HDD partition should be as large as all the SSD partitions combined. (Delete existing partitions first, as above, if you need to.)

fdisk /dev/vde
> n          # n creates a new partition
> p          # p selects primary
> 1          # selects partition number 1
> [enter]    # press enter to accept the default first sector
> +6G        # make this partition 6GB: one 2GB backing volume per SSD
> w          # w writes the changes to disk
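The +6G above isn’t arbitrary: the HDD partition has to hold one backing volume per SSD partition, so its size is just the SSD partition size times the number of SSDs. A quick shell sanity check:

```shell
# 3 SSD partitions of 2GB each -> the HDD partition needs 6GB
SSD_PART_GB=2
NUM_SSDS=3
HDD_PART_GB=$((SSD_PART_GB * NUM_SSDS))
echo "+${HDD_PART_GB}G"    # the size to type into fdisk for the HDD
```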

Step 3:
Fedora comes with LVM installed and set up. I’m putting the HDD inside the default volume group (“fedora”), but if you want you can make your own volume group: vgcreate volgroupname /dev/vde1

In order to add the hard drive to the volume group and carve out one backing volume per SSD, I used the commands:

pvcreate /dev/vde1
vgextend fedora /dev/vde1
lvcreate --name backup-0 --size 2G fedora
lvcreate --name backup-1 --size 2G fedora
lvcreate --name backup-2 --size 2G fedora

This creates a physical volume, adds it to the “fedora” volume group, and then creates three 2GB logical volumes named backup-0 through backup-2.
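Since the three lvcreate calls differ only in the number suffix, they can also be collapsed into a loop. The sketch below echoes the commands rather than running them; drop the echo to create the volumes for real (fedora is the default volume group name used above).

```shell
# Create one 2GB backing LV per SSD in volume group "fedora".
# The echo makes this a dry run; remove it to actually create the LVs.
for i in 0 1 2; do
    echo lvcreate --name "backup-$i" --size 2G fedora
done
```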

Step 4:
Now that we have created all the partitions and logical volumes, we will create the RAID 1 devices.

mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/vdb1 --write-mostly /dev/fedora/backup-0
mdadm --create /dev/md1 --level=raid1 --raid-devices=2 /dev/vdc1 --write-mostly /dev/fedora/backup-1
mdadm --create /dev/md2 --level=raid1 --raid-devices=2 /dev/vdd1 --write-mostly /dev/fedora/backup-2

With these commands we have now created RAID 1 mirrors pairing each of our 3 SSDs with a logical volume on the HDD. The --write-mostly option marks the HDD side of each mirror; it basically tells md to read from the SSD instead of the HDD, because reading from the SSD is much faster. The kernel will only read from a write-mostly device if the SSD half of the mirror fails. This setup has created 3 new RAID devices: /dev/md0 /dev/md1 /dev/md2. You can view information on these RAID devices inside the file /proc/mdstat.
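The three mdadm calls above follow one pattern (SSD number n paired with backup-n), so the pairing can also be written as a loop. Echoed here as a dry run for safety; the device names are the ones used throughout this article.

```shell
# Pair SSD partition /dev/vdX1 with HDD-backed LV backup-$i in /dev/md$i.
# The echo makes this a dry run; remove it to create the arrays.
i=0
for ssd in vdb vdc vdd; do
    echo mdadm --create "/dev/md$i" --level=raid1 --raid-devices=2 \
        "/dev/${ssd}1" --write-mostly "/dev/fedora/backup-$i"
    i=$((i + 1))
done
```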

You can see the progress of their rebuilds, and what the status of each is:

cat /proc/mdstat

You may notice something weird in here. The labels may look something like this:

md0 : active raid1 dm-2[1](W) vdb1[0]

A quick explanation: md0 is the RAID device name; dm-2 is the device-mapper name of the logical volume; [1] or [0] is the device number, which increments as devices are added (even if you remove the same device and add it back, this number will increment); and the (W) stands for the --write-mostly option.

You can match a dm-# name to its logical volume with the following command:

ls -l /dev/disk/by-id | grep dm-
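To pick the write-mostly members out of an mdstat line mechanically, you can lean on the (W) suffix. A small sketch using the sample line above; it is pure string parsing, so it is safe to run anywhere.

```shell
# Sample line copied from /proc/mdstat above; members whose name
# ends in (W) are the --write-mostly halves of the mirror.
line='md0 : active raid1 dm-2[1](W) vdb1[0]'
for field in $line; do
    case $field in
        *'(W)') echo "write-mostly member: ${field%%\[*}" ;;
    esac
done
```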

Step 5:
Finally, we will create the RAID 0 array using the three mirrors we have set up.

mdadm --create /dev/md3 --level=raid0 --raid-devices=3 /dev/md0 /dev/md1 /dev/md2

We can view our new raid device the same way as before.

cat /proc/mdstat

It might also be a good idea to make a file system on it.

mkfs -t ext4 /dev/md3
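If you want the new file system mounted automatically at boot, an /etc/fstab entry along these lines would do it (the mount point /mnt/raid10 is my own choice, not something from the setup above; create it first with mkdir -p /mnt/raid10):

```
/dev/md3   /mnt/raid10   ext4   defaults   0 2
```

Note that for the array to assemble reliably at boot you may also need its definition in /etc/mdadm.conf; mdadm --detail --scan can generate one.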

Conclusion and removal of everything we’ve done:
Now we have a raid 1+0 setup. I chose the raid 1+0 because it’s fun to play with it, change it, and mess it up. Something interesting to try: Increase the size of one of the mirror raid 1s by failing a single drive, removing it, changing it, and then adding it again. Here are some commands that may help people play around with it.

The first command marks the drive as failed, which means it will no longer be used. The second removes the drive from the array. The third adds the device back to the array and syncs the mirror back together.

mdadm --manage /dev/md0 --fail /dev/vdb1
mdadm --manage /dev/md0 --remove /dev/vdb1
mdadm --manage /dev/md0 --add /dev/vdb1
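Those three steps are the same per-device dance each time, so when experimenting it can be handy as a loop. Echoed here for safety; /dev/md0 and /dev/vdb1 are from this article’s layout.

```shell
# Cycle one member out of and back into a mirror.
# The echo makes this a dry run; remove it to really do it.
md=/dev/md0
part=/dev/vdb1
for action in --fail --remove --add; do
    echo mdadm --manage "$md" "$action" "$part"
done
```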

Lastly, to delete the arrays and undo everything we have done so far:

mdadm --stop /dev/md3
mdadm --stop /dev/md2
mdadm --stop /dev/md1
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/vdb1
mdadm --zero-superblock /dev/vdc1
mdadm --zero-superblock /dev/vdd1
mdadm --zero-superblock /dev/fedora/backup-0
mdadm --zero-superblock /dev/fedora/backup-1
mdadm --zero-superblock /dev/fedora/backup-2
lvremove /dev/fedora/backup-0
lvremove /dev/fedora/backup-1
lvremove /dev/fedora/backup-2
vgreduce fedora /dev/vde1
pvremove /dev/vde1

This should have removed everything we’ve done… I think. If I have missed something, hopefully someone else or myself will catch it at some point. Have fun!


About oatleywillisa

Computer Networking Student
This entry was posted in SBR600. Bookmark the permalink.

One Response to RAID 1+0 with LVM

  1. Alessandro says:

    Thank you for this guide, very helpful and very interesting.
    However, you should correct one part of this post.
    The third fdisk is done on /dev/vde, not /dev/vdb … this could lead to mistakes.
    After making a logical schema I understood what you have done, but someone with less practice in software RAID configuration might be caught out.
    Practically, you configure a volume group on the HDD, and then you create three logical volumes in that volume group.
    Then you declare the first RAID 1 md using the physical partition of the first SSD and the logical volume backup-0, the second one with the physical partition of the second SSD and the logical volume backup-1, and so on for the third SSD and backup-2.
    The final step was to create a RAID 0 configuration over these three RAID 1 mds.

    Sorry for my rusty English.
    I hope this has been helpful to someone.

