Linux/mdadm
mdadm
"mdadm is a Linux utility by Neil Brown that is used to manage RAID devices, previously known as mdctl. Besides managing, it can create, delete, or monitor Linux software RAIDs. Available under version 2 or later of the GNU General Public License, mdadm is free software. Mdadm derives its name from the “md” (multiple disk) device nodes it manages." [1]
Commands
Show the status of all md devices:
cat /proc/mdstat
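Typical output looks roughly like this (hypothetical example; device names, levels, and sizes are illustrative only):
Personalities : [raid1] [raid10]
md0 : active raid10 sde1[3] sdd1[2] sdc1[1] sdb1[0]
      3906764800 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
md1 : active raid1 sdg1[1] sdf1[0]
      976630464 blocks super 1.2 [2/2] [UU]
unused devices: <none>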
Show md device details (requires the full device path):
mdadm --detail /dev/md0
mdadm -D /dev/md/thebig
Create RAID 0 (stripe):
mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=3 /dev/hd[cde]1
Create RAID 5 (stripe with parity):
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[cde]1
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[cdef]1
Create RAID 10:
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 --spare-devices=0 /dev/sd[bcde]1
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 /dev/sd[bcdef]1
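Once an array is created, it still needs a filesystem and a mount point before use; a minimal sketch (ext4 and /mnt/raid are arbitrary choices here):
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
# add an /etc/fstab entry if it should mount at boot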
Add partition to md device:
# mdadm /dev/md0 --add /dev/sdb1
mdadm md0 --add /dev/sdb1
Fail and remove a partition from an md device:
# mdadm /dev/md0 --fail /dev/sdb1
# mdadm /dev/md0 --remove /dev/sdb1
mdadm md0 --fail /dev/sdb1
mdadm md0 --remove /dev/sdb1
Stop and re-assemble an array without a resync:
# mdadm -S /dev/md0                    # --stop
# mdadm -A --assume-clean /dev/md0     # --assemble
mdadm -S md0                           # --stop
mdadm -A --assume-clean md0            # --assemble
Create /etc/mdadm.conf (variants are listed from most to least detail; any of them will work) [2]
# Ubuntu 24
mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
# make the Ubuntu 24 path compatible with the Ubuntu 20 path below:
ln -s /etc/mdadm/mdadm.conf /etc/mdadm.conf
# Ubuntu 20 (?)
mdadm --detail --scan --verbose >> /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf
# mdadm --examine --scan >> /etc/mdadm.conf
# Ubuntu uses /etc/mdadm/mdadm.conf
sudo mdadm --examine --scan --config=/etc/mdadm/mdadm.conf
sudo mdadm --examine --scan --config=/etc/mdadm/mdadm.conf >> /etc/mdadm/mdadm.conf
# sudo mdadm -A /dev/md0   # --assemble
# sudo mdadm -A md0        # --assemble
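On Debian/Ubuntu the initramfs carries its own copy of mdadm.conf (see the !NB! note in the default config further down), so changing the file is usually followed by:
update-initramfs -u
# then confirm the arrays still assemble
cat /proc/mdstat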
Stop array:
# mdadm --stop /dev/md0
mdadm --stop md0
mdadm -S /dev/md/thebig
Start (assemble) array:
# mdadm --assemble /dev/md0
mdadm --verbose --assemble md0
mdadm -v -A md0
mdadm -v -A thebig
Add disk to RAID 5:
umount /dev/md1
mdadm --add /dev/md1 /dev/sdb3
mdadm --grow --raid-devices=4 /dev/md1
# wait for the rebuild to finish...
watch -dc cat /proc/mdstat
e2fsck -f /dev/md1
resize2fs /dev/md1
Set the partition type to 'Linux raid autodetect' with fdisk:
type 'fd'
Device Boot      Start         End     Blocks  Id System
/dev/sdb1         2048  1953525167  976761560  fd Linux raid autodetect
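The raid type/flag can also be set non-interactively with parted (the same tool used for GPT disks below); a sketch assuming /dev/sdb1 already exists:
parted -s /dev/sdb set 1 raid on
parted -s /dev/sdb print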
Sample mdadm.conf
#DEVICE partitions containers
HOMEHOST <system>
MAILADDR root
ARRAY /dev/md/md0 level=raid10 num-devices=6 metadata=1.2 spares=1 UUID=fc5714ab:ecb661c5:5abf4c99:ea07a6ba devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1
ARRAY /dev/md/md1 level=raid10 num-devices=4 metadata=1.2 spares=1 UUID=b14087e3:cf905322:9434e4af:6362e366 devices=/dev/nvme0n1p1,/dev/nvme1n1p1,/dev/nvme2n1p1,/dev/nvme3n1p1
or using descriptive names:
#DEVICE partitions containers
HOMEHOST <system>
MAILADDR root
# will create symlink /dev/md/fast
ARRAY /dev/md/fast level=raid10 num-devices=6 metadata=1.2 spares=1 UUID=fc5714ab:ecb661c5:5abf4c99:ea07a6ba devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1
# will create symlink /dev/md/big
ARRAY big level=raid10 num-devices=4 metadata=1.2 spares=1 UUID=b14087e3:cf905322:9434e4af:6362e366 devices=/dev/nvme0n1p1,/dev/nvme1n1p1,/dev/nvme2n1p1,/dev/nvme3n1p1
default mdadm.conf
# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This configuration was auto-generated on Tue, 07 Jan 2020 19:16:23 +0000 by mkconf
Common mdadm commands
Generate mdadm.conf
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.save
/usr/share/mdadm/mkconf --generate > /etc/mdadm/mdadm.conf
Create RAID
mdadm --create /dev/md2 --raid-devices=3 --spare-devices=0 --level=5 --run /dev/sd[cde]1
Remove disk from RAID
mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
Copy the partition structure (when replacing a failed drive)
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm --zero-superblock /dev/sdb
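On GPT disks (or where the installed sfdisk does not understand GPT), sgdisk from the gdisk package is a commonly used alternative; an untested sketch, where -G gives the copy fresh GUIDs so it does not clash with the source disk:
sgdisk /dev/sda -R /dev/sdb   # replicate sda's partition table onto sdb
sgdisk -G /dev/sdb            # randomize the disk and partition GUIDs on sdb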
Add a disk to a RAID array (to replace a removed failed drive)
mdadm --add /dev/md0 /dev/sdf1
Check RAID status
cat /proc/mdstat
mdadm --detail /dev/md0
Reassemble a group of RAID disks
# This works to move an assembly from one physical machine to another.
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
Steps to emulate mdrun (which has been deprecated)
# haven't tested this. Use with care
mdadm --examine --scan --config=partitions > /tmp/mdadm.conf
mdadm --assemble --scan --config=/tmp/mdadm.conf
Add a disk to an existing RAID and resize the filesystem
mdadm --add /dev/md0 /dev/sdg1
mdadm --grow /dev/md0 -n 5
e2fsck -f /dev/md0
resize2fs /dev/md0
e2fsck -f /dev/md0
Replace all disks in an array with larger drives and resize
# For each drive in the existing array
mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
# physically replace the drive
mdadm --add /dev/md0 /dev/sda1
# now, wait until md0 is rebuilt (a polling sketch follows below).
# this can literally take days

# All drives have been replaced and sync'd, but they still use the original size.
# Issue the following command to use all available space:
mdadm --grow /dev/md0 --size=max

# Do not forget to resize the file system which sits on the RAID set:
# for ext2/3/4
e2fsck -f /dev/md0 && resize2fs /dev/md0 && e2fsck -f /dev/md0
# for an LVM PV
pvresize /dev/md0
# for NTFS
ntfsresize /dev/md0
# note, most likely NTFS is NOT exported as a single partition. In the case
# of a Xen HVM machine, it is a "disk device", so you will need to resize the
# partition itself, then resize NTFS.
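The rebuild wait can be scripted instead of watched; a rough polling sketch (the 600-second interval is arbitrary):
while grep -qE 'recovery|resync' /proc/mdstat; do sleep 600; done
echo "md0 rebuild finished"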
Stop and remove the RAID device
mdadm --stop /dev/md0
mdadm --remove /dev/md0
Destroy an existing array
mdadm --manage /dev/md2 --fail /dev/sd[cde]1
mdadm --manage /dev/md2 --remove /dev/sd[cde]1
mdadm --manage /dev/md2 --stop
mdadm --zero-superblock /dev/sd[cde]1
Speed up a sync (after drive replacement)
cat /proc/sys/dev/raid/speed_limit_max
200000
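These limits are in KB/s per device; raising them (especially speed_limit_min, which usually defaults to 1000) makes the kernel sync more aggressively even when the array is busy. The figures below are arbitrary examples:
echo 50000 > /proc/sys/dev/raid/speed_limit_min
echo 500000 > /proc/sys/dev/raid/speed_limit_max
watch -dc cat /proc/mdstat   # the sync speed should go up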
Rename an existing array
mdadm --stop /dev/md127
mdadm -A /dev/md0 -m127 --update=super-minor /dev/sd[bcd]
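--update=super-minor only applies to old 0.90 metadata; with 1.x metadata the array name lives in the superblock, so the usual approach is --update=name. An untested sketch (newname and the member partitions are placeholders):
mdadm --stop /dev/md127
mdadm --assemble /dev/md/newname --name=newname --update=name /dev/sd[bcd]1
# then fix the matching ARRAY line in mdadm.conf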
Source: Linux Server Tech FAQ - Common mdadm commands - http://wiki.linuxservertech.com/index.php?action=artikel&cat=7&id=11&artlang=en
LVM
See LVM
Larger than 2TB RAID
To build an array larger than 2 TB, the member disks need GPT partition tables (MBR partitions top out at 2 TiB).
parted /dev/sdb
mklabel gpt
print free
mkpart primary 1M 4001GB
p
set 1 raid on
align-check optimal 1
q
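The same partitioning can be scripted in one non-interactive call; a sketch that uses 100% of the disk instead of a fixed 4001GB:
parted -s -a optimal /dev/sdb mklabel gpt mkpart primary 1MiB 100% set 1 raid on
parted -s /dev/sdb align-check optimal 1
parted -s /dev/sdb print free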
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
mkfs.ext4 /dev/md0 -L /ci
cat /proc/mdstat
parted -a optimal /dev/sdf
Ref:
- server - How can I create a RAID array with >2TB disks? - Ask Ubuntu - https://askubuntu.com/questions/350266/how-can-i-create-a-raid-array-with-2tb-disks
- A guide to mdadm - Linux Raid Wiki - https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
- Using parted to create a RAID primary partition — Lucid Solutions - https://plone.lucidsolutions.co.nz/linux/io/using-parted-to-create-a-raid-primary-partition
Handle Drive Failure
Fail device:
mdadm /dev/md0 --fail /dev/sdb1   # -f
mdadm: set /dev/sdb1 faulty in /dev/md0
Remove failed device:
mdadm /dev/md0 --remove /dev/sdb1
Verify failed device:
mdadm --detail /dev/md0
Add device to md:
mdadm /dev/md0 --add /dev/sdb1 # -a
Verify rebuild:
mdadm --detail /dev/md0
cat /proc/mdstat
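Rather than polling by hand, mdadm can also run as a monitoring daemon and send mail when a device fails (Debian/Ubuntu normally start this via the mdadm service, using MAILADDR from mdadm.conf); a sketch:
mdadm --monitor --scan --daemonise --mail=root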
References:
- Handling a Drive Failure - http://www.gagme.com/greg/linux/raid-lvm.php#failure
Force Partial Failure Recovery
An inactive array (missing 2 devices due to I/O failures):
md4 : inactive sdf3[5] sde3[4] sdd3[3] sdc3[2]
      5780957696 blocks
/etc/mdadm.conf
ARRAY /dev/md4 level=raid5 num-devices=6 UUID=f0ce1e02:2cd38a68:6ffb704c:fe6e32b0
Stop array:
mdadm --stop /dev/md4
Forcefully Rebuild Array:
mdadm -A --force /dev/md4 /dev/sd[acdef]3
# notice 'b' is missing as it is completely dead,
# but 'a' is only partially dead (random io errors)
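Before forcing the assemble it helps to compare the event counters on the surviving members; devices whose counts are close together usually re-assemble with minimal loss. A sketch:
mdadm --examine /dev/sd[acdef]3 | grep -E '/dev/|Events'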
References:
- linux - How to get an inactive RAID device working again? - Super User - http://superuser.com/questions/117824/how-to-get-an-inactive-raid-device-working-again
Bring an auto-read-only array back online
cat /proc/mdstat
Personalities : [raid1]
md1 : active (auto-read-only) raid1 sde1[1] sdd1[0]
Bring online:
mdadm --readwrite /dev/md1
Ref:
- raid - New md array is auto-read-only and has resync=PENDING - Unix & Linux Stack Exchange - https://unix.stackexchange.com/questions/101072/new-md-array-is-auto-read-only-and-has-resync-pending
- linux - How do I reactivate my MDADM RAID5 array? - Super User - https://superuser.com/questions/603481/how-do-i-reactivate-my-mdadm-raid5-array
Pause Sync
checkarray -x --all
echo 0 > /proc/sys/dev/raid/speed_limit_max
echo 0 > /proc/sys/dev/raid/speed_limit_min
echo frozen > /sys/block/md0/md/sync_action
echo none > /sys/block/md0/md/resync_start
echo idle > /sys/block/md0/md/sync_action
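To resume afterwards, restore the speed limits (1000 and 200000 are the usual kernel defaults); the final 'echo idle' above already lets md restart any pending resync. A sketch:
echo 1000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max
cat /proc/mdstat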
ref [3]
Notes
See mdadm/Notes