Linux Create Software RAID 1 (Mirror) Array


How do I create software RAID 1 arrays on Linux systems without using GUI tools or installer options? How do I set up a RAID 1 array under Linux?

You need to install mdadm which is used to create, manage, and monitor Linux software MD (RAID) devices. RAID devices are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions) to be combined into a single device to hold (for example) a single filesystem. Some RAID levels include redundancy and can survive some degree of device failure.

Linux Support For Software RAID

Currently, Linux supports the following RAID levels (quoting from the man page):

  1. LINEAR (concatenation)
  2. RAID0 (striping)
  3. RAID1 (mirroring)
  4. RAID4
  5. RAID5
  6. RAID6
  7. RAID10

MULTIPATH is not a Software RAID mechanism, but does involve multiple devices: each device is a path to one common physical storage device. FAULTY is also not true RAID, and it only involves one device. It provides a layer over a true device that can be used to inject faults.

Install mdadm

Type the following command under RHEL / CentOS / Fedora Linux:
# yum install mdadm
Type the following command under Debian / Ubuntu Linux:
# apt-get update && apt-get install mdadm
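Either way, you can confirm the tool is available before touching any disks:

```shell
# Print the installed mdadm version; exits non-zero if mdadm is missing
mdadm --version
```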

How Do I Create RAID1 Using mdadm?

Type the following commands to create RAID1 using /dev/sdc1 and /dev/sdd1 (20GB each). First run fdisk on /dev/sdc and /dev/sdd and create one partition on each disk with the “Linux raid autodetect” partition type, i.e. type 0xfd:
# fdisk /dev/sdc
# fdisk /dev/sdd
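If you prefer a scripted, non-interactive setup, a recent util-linux sfdisk can do the same in one step. This is a sketch, not part of the original article, and it destroys any existing partition table on the target disks:

```shell
# DANGER: wipes the partition table on each disk.
# With no start/size given, sfdisk creates a single partition
# spanning the whole disk; type=fd marks it "Linux raid autodetect".
echo 'type=fd' | sfdisk /dev/sdc
echo 'type=fd' | sfdisk /dev/sdd
```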

See the fdisk(8) man page for how to set the partition type. Do not format the partitions; just create them. Now create the RAID1 array as follows.

If a device already contains a valid md superblock from a previous array, erase it first (the superblock is overwritten with zeros):

# mdadm --zero-superblock /dev/sdc1 /dev/sdd1

Create RAID1 using /dev/sdc1 and /dev/sdd1

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1

Format /dev/md0 as ext3:

# mkfs.ext3 /dev/md0

Mount /dev/md0

# mkdir /raid1
# mount /dev/md0 /raid1
# df -H

Edit /etc/fstab

Make sure the RAID1 array gets mounted automatically at boot. Edit /etc/fstab and append the following line:

/dev/md0 /raid1 ext3 noatime,rw 0 0

Save and close the file.
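One caveat not covered above: md device names can change across reboots (an unrecognized array often comes up as /dev/md127). A more robust fstab entry references the filesystem UUID instead; the UUID shown below is a placeholder, read the real one with blkid:

```shell
# Print the filesystem UUID of the array
blkid /dev/md0
# Then use it in /etc/fstab in place of /dev/md0, e.g.:
# UUID=aaaabbbb-cccc-dddd-eeee-ffff00001111 /raid1 ext3 noatime,rw 0 0
```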

How Do I See RAID Array Building Progress and Current Status?

Type the following command:
# watch -n 2 cat /proc/mdstat
# cat /proc/mdstat

Note that /proc/mdstat is a virtual file, so tail -f will not show live updates; use watch instead.

Update /etc/mdadm.conf File

Update or edit /etc/mdadm/mdadm.conf or /etc/mdadm.conf (distro specific location) file as follows:

ARRAY /dev/md0 devices=/dev/sdc1,/dev/sdd1 level=1 num-devices=2 auto=yes

This config file lists the devices that may be scanned to see if they contain an MD superblock, and gives identifying information (e.g. the UUID) about known MD arrays. Please note that Linux kernel v2.6.xx and above can use both /dev/mdX and /dev/md/XX names. You can also create partitions on a partitionable array, e.g. /dev/md/d1p2.
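Rather than writing the ARRAY line by hand, you can let mdadm generate it (the config file path below is the Debian/Ubuntu one; adjust for your distro):

```shell
# Print ARRAY lines for all currently running arrays
mdadm --detail --scan
# Append them to the config file so the array is assembled at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```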

How Do I Get Information On Existing Array?

Type the following command:
# mdadm --query /dev/md0
This will find out whether a given device is a raid array, or is part of one, and will print brief information about it.
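For a fuller report than --query gives, try:

```shell
# Detailed state of the array itself (size, members, sync status)
mdadm --detail /dev/md0
# Examine the md superblock on an individual member device
mdadm --examine /dev/sdc1
```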


  • See man pages: mdadm(8) and mdadm.conf(5)
  • RAID 5 vs RAID 10: Recommended RAID For Safety and Performance
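Before trusting the mirror with real data, it is worth rehearsing a disk failure on a scratch array. A minimal sketch (do not run this on an array holding data you care about):

```shell
# Mark one half of the mirror as failed, remove it, then re-add it;
# the re-add triggers a resync, visible in /proc/mdstat
mdadm --fail /dev/md0 /dev/sdd1
mdadm --remove /dev/md0 /dev/sdd1
mdadm --add /dev/md0 /dev/sdd1
cat /proc/mdstat
```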


Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

32 comments

  1. This is a nice article. Unfortunately it does not seem to give complete information. I have followed the steps, and have a /dev/md0 created, but I can’t format it. I get an error from mkfs.ext3 saying “Device size reported to be zero.” When I do mdadm --query /dev/md0 I get the following response: “/dev/md0: is an md device which is not active”. Googling these errors has not produced anything helpful.

  2. I got the following error,
    mdadm: Cannot open /dev/sda1: Device or resource busy

    Can any one help me? I am New to Linux

    1. In rescue mode, with all drives unmounted, I successfully mirrored my /, /home, and /usr using mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1.
      But the system is unable to boot from md0 ( / ). I updated /etc/fstab before I proceeded with array creation. I also made changes in grub.conf, but the system is unable to boot successfully; during the boot process it asks about a file system check on md0. I used the e2fsck command but there is no result.

      1. two steps more, 1st is to create an initrd (initial ramdisk) with the RAID1 module (/sbin/mkinitrd /boot/grub/initrdname `uname -r` --preload raid1 --with=raid1 --fstab=/etc/fstab)
        2nd step: resize2fs and fsck all the md devices…

        and it is bootable

  3. Thank you very much for the detailed tutorials. Your site is the best place for Linux beginners, intermediate users, and even experts.

  4. Hi, many thanks for this.

    I got as far as creating my array using 2 HDDs (2TB each) already in the system. (fedora 14)

    I created /dev/md1 mounted at /raid1. (md0 is the operating system array which I created when I set up the system)

    After I rebooted I could no longer mount /dev/md1, so I looked at “disk utility” and found that the raid array was “degraded” and called /dev/md127 (which I CAN mount)

    cat /proc/mdstat shows that the array is only using one HDD: /dev/sdd[1]

    The first time this happened I started from the beginning, but now that it’s happened again, I wonder how to repair the raid array.

    Any help would be much appreciated.


    1. Hi,
      Before rebooting your machine, try to mount md1 at any mount point (don’t make changes to the fstab file until you can successfully mount it).
      It may not mount because you have to resize md1 before you mount it.

      First you have to add the second drive to the array, then fsck, and then resize.
      After you are done, make the changes to the fstab file (if not done before):
      mdadm --add /dev/md1 /dev/sdx (here sdx is the secondary drive you need to add to the array because it is degraded)
      After it has synced, first run e2fsck /dev/md1,
      then resize2fs /dev/md1.
      If any error occurs you need to start from the beginning; if one does, kindly let me know.


      Owais Hyder
      Sr.Technical Consultant
      Synergy Computers (Pvt.) Ltd.

  5. I was searching for a way to make software RAID on my Redhat Linux Advanced Server 5.5 machine and this article helped me a lot. Very warm thanks to the author.

  6. A million thanks. I was trying about 100 times; everything went right except for the mdadm.conf file, which I did not know what to do with,
    and the /etc/fstab.
    Thanks for the article.
    Thank you very very much

  7. Awesome, this doc definitely helps when the partitioner in the Fedora 16 installer fails (for software raid that you want unmounted during install). Thanks!!

  8. In my experience the mdadm.conf array definition needs the level spelled out.

    Not this:
    ARRAY /dev/md0 devices=/dev/sdc1,/dev/sdd1 level=1 num-devices=2 auto=yes

    But rather this:
    ARRAY /dev/md0 devices=/dev/sdc1,/dev/sdd1 level=raid1 num-devices=2 auto=yes

    This is on Debian. For me, the first variation results in the array definition not being recognized, and the array being auto-mounted as /dev/md128 instead of /dev/md0. Your mileage, as always, may vary.

  9. I have server with
    HardRAID : LSI MegaRAID 9271 6 Gbps FastPath
    Hard drive : 2 x 300GB SSD
    When I check with df -h:

    Filesystem            Size  Used Avail Use% Mounted on
    rootfs                9.8G  605M  8.7G   7% /
    /dev/root             9.8G  605M  8.7G   7% /
    devtmpfs              126G  348K  126G   1% /dev
    /dev/sda2             265G   60M  251G   1% /home
    tmpfs                 126G     0  126G   0% /dev/shm
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/etc/named
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/var/named
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/etc/named.conf
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/etc/named.rfc1912.zones
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/etc/rndc.key
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/usr/lib64/bind
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/etc/named.iscdlv.key
    /dev/root             9.8G  605M  8.7G   7% /var/named/chroot/etc/named.root.key

    Is it RAID or not?
    Why so many /dev/root entries?

    1. Why so many /dev/root entries?

      The first /dev/root points to the root file system mounted at /. The rest of the /dev/root entries are used by the named server to secure and chroot the server. Use the mount command to verify this. Use the megactl or megaclisas command to get more info about your LSI raid device.


  10. Finally, I managed to do this, thanks for the info. I just skipped the # mkdir /raid1 because I have to do another PV, VG and 7 LVs on this disk… anyway, thanks for sharing

  11. hi
    this is karthik. My problem: on my laptop the base OS is Windows, and I installed RHEL6 in a VM. I added two hard disks, /dev/sda and /dev/sdb, with one partition from /dev/sda and two partitions from /dev/sdb. I want to create a raid but I get these errors; please tell me what to do:
    error:
    mdadm: super1.x cannot open /dev/sdb6: No such file or directory
    mdadm: /dev/sdb6 is not suitable for this array.
    mdadm: super1.x cannot open /dev/sdb7: No such file or directory
    mdadm: /dev/sdb7 is not suitable for this array.
    mdadm: create aborted

  12. hi,
    you write such good articles.
    I want to understand “multipathing” properly; I have not been able to understand it from various articles on the internet.
    Could you please send me any doc that can help me understand this topic: how to configure it, what multipathing is, the types of multipathing, and what it actually means…

    Thanks in advance….

  13. I’m not able to find /etc/mdadm.conf or /etc/mdadm/mdadm.conf. Why is that?

    Version : 1.2
    Creation Time : Tue Mar 29 21:16:39 2016
    Raid Level : raid1
    Array Size : 20948288 (19.98 GiB 21.45 GB)
    Used Dev Size : 20948288 (19.98 GiB 21.45 GB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Mar 29 21:30:07 2016
    State : active
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0

    Name : localhost.localdomain:0 (local to host localhost.localdomain)
    UUID : ff2aa97d:6aae2570:0d44f49a:735d94b9
    Events : 18

    Number Major Minor RaidDevice State
    0 8 17 0 active sync /dev/sdb1
    1 8 33 1 active sync /dev/sdc1

    [root@localhost etc]# ls -ld /etc/md*
    ls: cannot access /etc/md*: No such file or directory
    [root@localhost etc]#

    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0]
    20948288 blocks super 1.2 [2/2] [UU]

    unused devices:

  14. I only ran into two problems. One, I had two different size HDs, so I took the smallest one in the fdisk readout and copied the ‘end’ to the larger one when fdisking it, and magic, the sizes are now the same.
    Second, I could not set the partition id to type fd in fdisk. I found out that I had to totally delete the existing partition and use the ‘Create a new label’ option, ‘o’, to create a new empty DOS partition table, as the partition I was creating was a ‘g’ GPT partition for some unknown reason. Once I got past that, all was well. The finished partition in fdisk should look similar to this:
    “Device Boot Start End Sectors Size Id Type
    /dev/sda1 2048 1953458142 1953456095 931.5G fd Linux raid autodetect”
    After getting the raid set up, I was then able, successfully for the first time ever, to get DLNA working, woohoo!

    1. Oh, and thanks for the instructions, it would have taken me quite a long time, if ever, to figure it all out. I did this on a Raspberry Pi via a Win7 PuTTY client.

  15. Hello,

    If I do RAID configuration using Setup Utility of my server, will I lose any existing data on the server?
