Linux/mdadm

mdadm

"mdadm is a Linux utility by Neil Brown that is used to manage RAID devices, previously known as mdctl. Besides managing, it can create, delete, or monitor Linux software RAIDs. Available under version 2 or later of the GNU General Public License, mdadm is free software. Mdadm derives its name from the “md” (multiple disk) device nodes it manages." [1]

Commands

Show all md devices status

cat /proc/mdstat

Show md device details

(needs full path)

mdadm --detail /dev/md0
mdadm -D /dev/md/thebig

Create RAID 0 (stripe)

mdadm --create --verbose /dev/md0 --level=raid0 --raid-devices=3 /dev/hd[cde]1

Create RAID 1 (mirror)

mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 --spare-devices=0 /dev/hd[bc]1
mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 --spare-devices=1 /dev/hd[bce]1

Create RAID 5 (stripe with parity)

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=0 /dev/sd[bcd]1
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sd[bcde]1

Create RAID 10 (striped mirrors)

mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 --spare-devices=0 /dev/sd[bcde]1
mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 --spare-devices=1 /dev/sd[bcdef]1
# RAID10 in mdadm is built from RAID1 mirrors that are then striped across, so it is
# technically striped mirrors: each stripe element is a mirrored set of disks.
# This differs from some hardware RAID implementations where RAID10 is built as a
# mirror of stripes, i.e. two RAID0 arrays mirrored against each other.
# Offers both redundancy and performance.
# Unlike traditional RAID10, mdadm's raid10 can be created with an odd number of
# disks (e.g. 3 or 5); see the example below.
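As a sketch of the odd-disk case (device names are placeholders), a 3-disk RAID10 using the default near-2 layout:

mdadm --create --verbose /dev/md0 --level=10 --layout=n2 --raid-devices=3 /dev/sd[bcd]1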

Add partition to md device

# mdadm /dev/md0 --add /dev/sdb1
mdadm md0 --add /dev/sdb1

Fail and remove partition of md device

# mdadm /dev/md0 --fail /dev/sdb1
# mdadm /dev/md0 --remove /dev/sdb1
mdadm md0 --fail /dev/sdb1
mdadm md0 --remove /dev/sdb1

Start drive without resync

# mdadm -S /dev/md0    # --stop
# mdadm -A --assume-clean /dev/md0    # --assemble
mdadm -S md0    # --stop
mdadm -A --assume-clean md0    # --assemble

Create /etc/mdadm.conf

(high to low detail, any one of these will work) [https://serverfault.com/questions/170817/software-mdadm-raid-5-inactive-md0-showing]

# Ubuntu 24
mdadm --detail --scan --verbose >> /etc/mdadm/mdadm.conf
mdadm -Dsv >> /etc/mdadm/mdadm.conf
# make the Ubuntu 24 path also work for the Ubuntu 20 commands below:
ln -s /etc/mdadm/mdadm.conf /etc/mdadm.conf
# Ubuntu 20 ??
mdadm --detail --scan --verbose >> /etc/mdadm.conf
# mdadm --detail --scan >> /etc/mdadm.conf
# mdadm --examine --scan >> /etc/mdadm.conf
# ubuntu uses /etc/mdadm/mdadm.conf
sudo mdadm --examine --scan --config=/etc/mdadm/mdadm.conf
sudo mdadm --examine --scan --config=/etc/mdadm/mdadm.conf >> /etc/mdadm/mdadm.conf
# sudo mdadm -A /dev/md0    # --assemble
# sudo mdadm -A md0    # --assemble
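On Debian/Ubuntu, rebuild the initramfs after changing mdadm.conf so the arrays assemble correctly at boot (see the note in the default mdadm.conf further below):

sudo update-initramfs -u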

Change Drive Name

The /dev/mdX node is usually created based on the array's "name" number (e.g. the name myserver:2 will create /dev/md2).

Change the name (here, to /dev/md1) with: [https://superuser.com/questions/346719/how-to-change-the-name-of-an-md-device-mdadm]

mdadm -A --update=name --name=1 /dev/md1 /dev/nvme0n1p1 /dev/nvme1n1p1 /dev/nvme2n1p1 /dev/nvme3n1p1

If the "name" number collides with another array, it will pick some high number like /dev/md127

Stop array

# mdadm --stop /dev/md0
mdadm --stop md0
mdadm -S /dev/md/thebig

Start (assemble) array

# mdadm --assemble /dev/md0
mdadm --verbose --assemble md0
mdadm -v -A md0
mdadm -v -A thebig

Add disk to RAID 5

umount /dev/md1
mdadm --add /dev/md1 /dev/sdb3
mdadm --grow --raid-devices=4 /dev/md1
# wait for rebuild...
watch -dc cat /proc/mdstat
e2fsck -f /dev/md1
resize2fs /dev/md1
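Depending on the mdadm version and layout, the RAID 5 reshape may require a backup file; a hedged variant (the backup path is only an example):

mdadm --grow --raid-devices=4 --backup-file=/root/md1-grow.backup /dev/md1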

fdisk to 'Linux raid'

type 'fd'
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048  1953525167   976761560   fd  Linux raid autodetect
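A non-interactive alternative with sfdisk (assumes util-linux 2.26 or newer; device and partition number are examples):

sfdisk --part-type /dev/sdb 1 fd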

Sample mdadm.conf

#DEVICE partitions containers
HOMEHOST <system>
MAILADDR root

ARRAY /dev/md/md0 level=raid10 num-devices=6 metadata=1.2 spares=1 UUID=fc5714ab:ecb661c5:5abf4c99:ea07a6ba
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1

ARRAY /dev/md/md1 level=raid10 num-devices=4 metadata=1.2 spares=1 UUID=b14087e3:cf905322:9434e4af:6362e366
   devices=/dev/nvme0n1p1,/dev/nvme1n1p1,/dev/nvme2n1p1,/dev/nvme3n1p1

descriptive names mdadm.conf

or using descriptive names:

#DEVICE partitions containers
HOMEHOST <system>
MAILADDR root

# will create symlink /dev/md/fast
ARRAY /dev/md/fast level=raid10 num-devices=6 metadata=1.2 spares=1 UUID=fc5714ab:ecb661c5:5abf4c99:ea07a6ba
   devices=/dev/sdb1,/dev/sdc1,/dev/sdd1,/dev/sde1,/dev/sdf1,/dev/sdg1,/dev/sdh1

# will create symlink /dev/md/big
ARRAY /dev/md/big level=raid10 num-devices=4 metadata=1.2 spares=1 UUID=b14087e3:cf905322:9434e4af:6362e366
   devices=/dev/nvme0n1p1,/dev/nvme1n1p1,/dev/nvme2n1p1,/dev/nvme3n1p1

default mdadm.conf

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This configuration was auto-generated on Tue, 07 Jan 2020 19:16:23 +0000 by mkconf

Common mdadm commands

Generate mdadm.conf

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.save
/usr/share/mdadm/mkconf --generate > /etc/mdadm/mdadm.conf

Create RAID

mdadm --create /dev/md2 --raid-devices=3 --spare-devices=0 --level=5 --run /dev/sd[cde]1

Remove disk from RAID

mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1

Copy the partition structure (when replacing a failed drive)

sfdisk -d /dev/sda | sfdisk /dev/sdb 
mdadm --zero-superblock /dev/sdb
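For GPT-partitioned drives, a hedged sgdisk variant (assumes gdisk is installed; verify source/target order before running):

# copy the partition table of /dev/sda onto /dev/sdb, then randomize /dev/sdb's GUIDs
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb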

Add a disk to a RAID array (to replace a removed failed drive)

mdadm --add /dev/md0 /dev/sdf1

Check RAID status

cat /proc/mdstat
mdadm --detail /dev/md0

Reassemble a group of RAID disks

#This works to move an assembly from one physical machine to another.
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

Steps to emulate mdrun (which has been deprecated)

# haven't tested this. Use with care
mdadm --examine --scan --config=partitions > /tmp/mdadm.conf
mdadm --assemble --scan --config=/tmp/mdadm.conf

Add a disk to an existing RAID and resize the filesystem

mdadm --add /dev/md0 /dev/sdg1
mdadm --grow /dev/md0 -n 5
e2fsck -f /dev/md0
resize2fs /dev/md0
e2fsck -f /dev/md0

Replace all disks in an array with larger drives and resize

# For each drive in the existing array
mdadm --fail /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
# physically replace the drive
mdadm --add /dev/md0 /dev/sda1
# now, wait until md0 is rebuilt.
# this can literally take days
# All drives have been replaced and sync'd, but they still use the original size.
# Issue the following command to use all available space:
mdadm --grow /dev/md0  --size=max
# Do not forget to resize the file system which sits on the RAID set:
# for ext2/3/4
e2fsck -f /dev/md0 && resize2fs /dev/md0 && e2fsck -f /dev/md0
# for lvm pv
pvresize /dev/md0
# for ntfs
ntfsresize /dev/md0
# note, most likely ntfs is NOT exported as a single partition. In the case
# of a Xen hvm machine, it is a "disk device" so you will need to resize the
# partition itself, then resize ntfs.

Stop and remove the RAID device

mdadm --stop /dev/md0
mdadm --remove /dev/md0

Destroy an existing array

mdadm --manage /dev/md2 --fail /dev/sd[cde]1
mdadm --manage /dev/md2 --remove /dev/sd[cde]1
mdadm --manage /dev/md2 --stop
mdadm --zero-superblock /dev/sd[cde]1

Speed up a sync (after drive replacement)

# show the current maximum sync speed in KB/s
cat /proc/sys/dev/raid/speed_limit_max
200000
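To actually push a rebuild faster, raise the minimum speed as well (KB/s; the value is only an example):

echo 100000 > /proc/sys/dev/raid/speed_limit_min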

Rename an existing array

mdadm --stop /dev/md127
mdadm -A /dev/md0 -m127 --update=super-minor /dev/sd[bcd]

Source: Linux Server Tech FAQ - Common mdadm commands - http://wiki.linuxservertech.com/index.php?action=artikel&cat=7&id=11&artlang=en

LVM

See LVM

Larger than 2TB RAID

To build arrays larger than 2 TB you need GPT partition tables (MBR partitions are limited to 2 TiB).

parted /dev/sdb
 mklabel gpt
 print free
 mkpart primary 1M 4001GB
 p
 set 1 raid on
 align-check
    optimal
    1
 q
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sd[bcd]1
mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
mkfs.ext4 /dev/md0 -L /ci
cat /proc/mdstat
parted -a optimal /dev/sdf
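The same partitioning can be scripted non-interactively (a sketch; adjust the device and sizes):

parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100% set 1 raid on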



Handle Drive Failure

Fail device:

mdadm /dev/md0 --fail /dev/sdb1    # -f
  mdadm: set /dev/sdb1 faulty in /dev/md0

Remove failed device:

mdadm /dev/md0 --remove /dev/sdb1

Verify failed device:

mdadm --detail /dev/md0

Add device to md:

mdadm /dev/md0 --add /dev/sdb1    # -a

Verify rebuild:

mdadm --detail /dev/md0
cat /proc/mdstat


Force Partial Failure Recovery

Inactive array (missing 2 devices due to I/O failure):

md4 : inactive sdf3[5] sde3[4] sdd3[3] sdc3[2]
      5780957696 blocks

/etc/mdadm.conf

ARRAY /dev/md4 level=raid5 num-devices=6 UUID=f0ce1e02:2cd38a68:6ffb704c:fe6e32b0

Stop array:

mdadm --stop /dev/md4

Forcefully Rebuild Array:

mdadm -A --force /dev/md4 /dev/sd[acdef]3
# notice 'b' is missing as it is dead dead
# but 'a' is only partially dead (random io errors)
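Before forcing, it can help to compare the event counters of the surviving members (device names match the example above):

mdadm --examine /dev/sd[acdef]3 | grep -E 'Events|^/dev'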


Bring online auto-read-only

cat /proc/mdstat 
Personalities : [raid1] 
md1 : active (auto-read-only) raid1 sde1[1] sdd1[0]

Bring online:

mdadm --readwrite /dev/md1


Pause Sync

checkarray -x --all
echo 0 > /proc/sys/dev/raid/speed_limit_max
echo 0 > /proc/sys/dev/raid/speed_limit_min
echo frozen > /sys/block/md0/md/sync_action
echo none > /sys/block/md0/md/resync_start
echo idle > /sys/block/md0/md/sync_action

ref [3]
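To let the sync run again at full speed afterwards, restore the limits (the values shown are the usual kernel defaults):

echo 1000 > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max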

Notes

See mdadm/Notes
