mdadm notes
Notes
Copy partition table:
sfdisk -d /dev/sda | sfdisk /dev/sdb
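If you want a restorable backup of the source disk's layout, a hedged variation on the same sfdisk command (the backup file name is illustrative):

sfdisk -d /dev/sda > sda-table.backup   # save the partition table to a file
sfdisk /dev/sdb < sda-table.backup      # replay it onto the target disk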
Create md device:
mknod /dev/md<number> b <MAJOR> <MINOR>
References
- Managing RAID and LVM with Linux
- Root-on-LVM-on-RAID HOWTO
- HOWTO Install on Software RAID
- help interpreting MDADM readouts
- mdadm not stinking syncing
- error opening /dev/md2: No such file or directory
Build a RAID 5 device
Check for existing md devices:
cat /proc/mdstat
Check whether the unused md device already exists:
ls /dev/md1
If you need to create the device:
# look at existing md devices for MAJOR and MINOR values
file /dev/md*

# Create the md device
# where the 9 matches the MAJOR of the already created devices
# and the 1 is the next MINOR up from the already created devices
mknod /dev/md1 b 9 1
Create 3 RAID-type partitions (type fd):
fdisk /dev/sda
...
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Create RAID 5 md device:
/sbin/mdadm --create --verbose /dev/<md_device_name> --level=5 --raid-devices=3 <partitions...>

/sbin/mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 \
    /dev/hdc1 /dev/hde1 /dev/hdg1
# or
/sbin/mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 \
    /dev/hd[ceg]1
Add device:
mdadm /dev/md0 --add /dev/sdc1
Add spare: (devices added beyond the array's --raid-devices count automatically become spares)
mdadm /dev/md0 --add /dev/sdc1
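You can also set up a spare at creation time instead of adding it afterward; a sketch assuming a fourth partition is available (device names are illustrative):

mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # the 4th device becomes the hot spare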
Wait for RAID to be rebuilt:
watch cat /proc/mdstat
Show MD details:
mdadm --detail <md_device_name>
mdadm --detail /dev/md1
BASIC:
Format with file system:
mkfs.ext3 /dev/md1
Mount new device:
mkdir /data
mount /dev/md1 /data
Make mount permanent:
vi /etc/fstab
#device              mount point  fs    options           fs_freq  fs_passno
/dev/md1             /data        ext3  defaults,noatime  1        2
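To confirm the fstab entry works without rebooting, a quick sanity-check sketch:

umount /data    # unmount the manual mount from above
mount -a        # remount everything listed in /etc/fstab
df -h /data     # confirm /dev/md1 came back on /data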
LVM:
Create Volume Group:
pvcreate /dev/md1
vgcreate lvm-raid /dev/md1
"The default value for the physical extent size can be too low for a large RAID array. In those cases you'll need to specify the -s option with a larger than default physical extent size. The default is only 4MB as of the version in Fedora Core 5. The maximum number of physical extents is approximately 65k so take your maximum volume size and divide it by 65k then round it to the next nice round number. For example, to successfully create a 550G RAID let's figure that's approximately 550,000 megabytes and divide by 65,000 which gives you roughly 8.46. Round it up to the next nice round number and use 16M (for 16 megabytes) as the physical extent size and you'll be fine:" [1]
vgcreate -s 16M <volume group name> <physical volume>
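A rough shell version of the arithmetic from the quote above (the values are approximate, just to show the rounding):

ARRAY_MB=550000                 # ~550 GB array expressed in megabytes
MAX_PE=65000                    # approximate physical extent limit
echo $(( ARRAY_MB / MAX_PE ))   # prints 8 -> round up to a nice value and use -s 16M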
"Ok, you've created a blank receptacle but now you have to tell how many Physical Extents from the physical device (/dev/md0 in this case) will be allocated to this Volume Group. In my case I wanted all the data from /dev/md0 to be allocated to this Volume Group. If later I wanted to add additional space I would create a new RAID array and add that physical device to this Volume Group.
To find out how many PEs are available to me use the vgdisplay command to find out how many are available and now I can create a Logical Volume using all (or some) of the space in the Volume Group. In my case I call the Logical Volume lvm0." [2]
vgdisplay lvm-raid
...
  Free  PE / Size       57235 / 223.57 GB
...

lvcreate -l 57235 lvm-raid -n lvm0
This will create the following partition to use:
/dev/lvm-raid/lvm0
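If you would rather not count extents by hand, newer LVM2 releases accept a percentage for -l (check that your version supports it); a hedged alternative to the lvcreate above:

lvcreate -l 100%FREE -n lvm0 lvm-raid   # use every free extent in the volume group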
Format with file system:
mkfs.ext3 /dev/lvm-raid/lvm0
Mount new device:
mkdir /data
mount /dev/lvm-raid/lvm0 /data
Make mount permanent:
vi /etc/fstab
#device              mount point  fs    options           fs_freq  fs_passno
/dev/lvm-raid/lvm0   /data        ext3  defaults,noatime  1        2
Configuration File
Generally you do not need a configuration file for mdadm devices, but it comes in handy during a rebuild.
Configuration file:
/etc/mdadm.conf
# from man mdadm
# This will create a prototype config file that describes currently active arrays that are known to be made
# from partitions of IDE or SCSI drives. This file should be reviewed before being used as it may contain
# unwanted detail.
echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
mdadm --detail --scan >> mdadm.conf
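The commands above write the prototype file into the current directory; a small follow-up sketch to review it and install it where mdadm looks for it:

less mdadm.conf                 # review, strip anything unwanted
cp mdadm.conf /etc/mdadm.conf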
Fix Broken RAID
mdadm --stop /dev/md3
mdadm --create /dev/md3 --verbose --level=raid5 --raid-devices=3 --spare-devices=0 /dev/hd[ceg]1
mdadm --detail /dev/md3
cat /proc/mdstat
mdadm /dev/md3 --fail /dev/hdc1 --remove /dev/hdc1
mdadm --stop /dev/md3
mdadm --assemble /dev/md3
Reassemble array manually:
# Find the UUID with examine (--examine or -E):
mdadm --examine /dev/sda1

# Assemble (--assemble or -A) by UUID (--uuid or -u):
mdadm --assemble /dev/md1 --uuid [UUID]

# Combined:
mdadm --assemble /dev/md1 --uuid $( mdadm --examine /dev/sda1 | grep UUID | awk '{print $3}' )
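With a populated /etc/mdadm.conf (see the Configuration File section above), mdadm can also work out the members on its own; a sketch:

mdadm --assemble --scan   # assemble every array listed in the config file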
See also Linux Recovery
TODO
- Will LVM survive reboot?
- Will MD survive reboot?
- Will LVM survive reinstall of OS?
- Will md device survive reinstall of OS? Create a pure MD device and LVM device, and test on OS reinstall.
RAID 5 Setup
Using this as a reference.
Another good tutorial: Root-on-LVM-on-RAID HOWTO
[root@fileserver ~]# fdisk /dev/hdc

The number of cylinders for this disk is set to 30401.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-30401, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-30401, default 30401):
Using default value 30401

Command (m for help): p

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1       30401   244196001   83  Linux

Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1       30401   244196001   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
Repeated for /dev/hde and /dev/hdg...
[root@fileserver ~]# fdisk -l

Disk /dev/hda: 61.4 GB, 61492838400 bytes
255 heads, 63 sectors/track, 7476 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        7476    59946547+  8e  Linux LVM

Disk /dev/hdc: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdc1               1       30401   244196001   fd  Linux raid autodetect

Disk /dev/hde: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hde1               1       30401   244196001   fd  Linux raid autodetect

Disk /dev/hdg: 250.0 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hdg1               1       30401   244196001   fd  Linux raid autodetect

Disk /dev/dm-0: 59.2 GB, 59257126912 bytes
255 heads, 63 sectors/track, 7204 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 2080 MB, 2080374784 bytes
255 heads, 63 sectors/track, 252 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-1 doesn't contain a valid partition table
Create RAID 5 array
[root@fileserver ~]# /sbin/mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 \
    /dev/hdc1 /dev/hde1 /dev/hdg1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/hdc1 appears to contain an ext2fs file system
    size=104388K  mtime=Sat Nov 18 05:47:04 2006
mdadm: size set to 244195904K
Continue creating array? y
mdadm: array /dev/md2 started.
Examine the array
[root@fileserver ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 00.90.03
  Creation Time : Thu Nov 23 23:24:27 2006
     Raid Level : raid5
     Array Size : 488391808 (465.77 GiB 500.11 GB)
    Device Size : 244195904 (232.88 GiB 250.06 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Thu Nov 23 23:24:27 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : e5748042:19bf53d2:3d646fcb:1293ea5d
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/hdc1
       1      33        1        1      active sync   /dev/hde1
       0       0        0  9923072      removed
       3      34        1        3      active sync   /dev/hdg1
Curious what this line indicates...
0 0 0 9923072 removed
Then follow the initial setup of LVM on top of RAID described above.
RAID 5 Performance
/dev/hdc1 and /dev/hde1 and /dev/hdg1 in RAID 5 configuration
[root@fileserver ~]# hdparm -Tt /dev/lvm-raid/lvm0

/dev/lvm-raid/lvm0:
 #1
 Timing cached reads:   3332 MB in  2.00 seconds = 1665.86 MB/sec
 Timing buffered disk reads:  380 MB in  3.01 seconds = 126.18 MB/sec
 #2
 Timing cached reads:   3328 MB in  2.00 seconds = 1662.68 MB/sec
 Timing buffered disk reads:  378 MB in  3.00 seconds = 125.98 MB/sec
 #3
 Timing cached reads:   3356 MB in  2.00 seconds = 1677.87 MB/sec
 Timing buffered disk reads:  380 MB in  3.01 seconds = 126.32 MB/sec
 #4
 Timing cached reads:   3348 MB in  2.00 seconds = 1674.49 MB/sec
 Timing buffered disk reads:  380 MB in  3.01 seconds = 126.18 MB/sec
It appears that the RAID 5 has about the same read performance as the two-disk striped test.
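hdparm only exercises reads; for a rough feel of sequential write throughput, a hedged sketch (the test file path is illustrative, and oflag=direct needs a dd that supports it):

dd if=/dev/zero of=/data/ddtest bs=1M count=1024 oflag=direct   # write 1 GB bypassing the page cache
rm /data/ddtest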
Broken RAID 5?
[root@fileserver ~]# mdadm --stop /dev/md3
[root@fileserver ~]# mdadm --create /dev/md3 --verbose --level=raid5 --raid-devices=3 --spare-devices=0 /dev/hd[ceg]1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/hdc1 appears to contain an ext2fs file system
    size=240974720K  mtime=Wed Dec 31 17:00:00 1969
mdadm: /dev/hdc1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Fri Nov 24 12:27:47 2006
mdadm: /dev/hde1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Fri Nov 24 12:27:47 2006
mdadm: /dev/hdg1 appears to contain an ext2fs file system
    size=240974720K  mtime=Wed Dec 31 17:00:00 1969
mdadm: /dev/hdg1 appears to be part of a raid array:
    level=raid5 devices=3 ctime=Fri Nov 24 12:27:47 2006
mdadm: size set to 120487360K
Continue creating array? y
mdadm: array /dev/md3 started.
[root@fileserver ~]# mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90.03
  Creation Time : Fri Nov 24 12:38:19 2006
     Raid Level : raid5
     Array Size : 240974720 (229.81 GiB 246.76 GB)
    Device Size : 120487360 (114.91 GiB 123.38 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 3
    Persistence : Superblock is persistent

    Update Time : Fri Nov 24 12:38:19 2006
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 70b25481:ebbf8e5b:c8c3a366:13d2ddcd
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0      22        1        0      active sync   /dev/hdc1
       1      33        1        1      active sync   /dev/hde1
       0       0        0        0      removed
       3      34        1        3      active sync   /dev/hdg1
[root@fileserver ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md3 : active raid5 hdg1[3] hde1[1] hdc1[0]
      240974720 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

md4 : active raid5 hdg2[3] hde2[1] hdc2[0]
      247416832 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>
[root@fileserver ~]# mdadm /dev/md3 --fail /dev/hdc1 --remove /dev/hdc1
mdadm: set /dev/hdc1 faulty in /dev/md3
mdadm: hot removed /dev/hdc1

[root@fileserver ~]# mdadm --stop /dev/md3

[root@fileserver ~]# mdadm --assemble /dev/md3
mdadm: device 3 in /dev/md3 has wrong state in superblock, but /dev/hdg1 seems ok
mdadm: /dev/md3 assembled from 1 drive and 1 spare - not enough to start the array.
It appears that other people have had similar issues:
- http://www.linuxquestions.org/questions/showthread.php?t=434067
- http://www.linuxquestions.org/questions/showthread.php?t=491325
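A commonly suggested (and somewhat risky) next step in this situation is to force assembly from the members that still agree; a hedged sketch, not something tried in these notes:

mdadm --stop /dev/md3
mdadm --assemble --force /dev/md3 /dev/hd[ceg]1   # --force tells mdadm to overlook some superblock inconsistencies
mdadm --detail /dev/md3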
Fixed RAID 5
After discussing the problem with Clint, we concluded that when the array was created, the system did not process the spare and fold it into the array. We assumed there was a bug in the raid5 module or the kernel. I rebuilt the system using Fedora Core 5 64-bit edition, and upon creating the array, the spare was processed correctly. I was also pleased to see that the RAID 5 (even in its broken state) and the RAID 0 with LVM survived the reinstall of the OS.
Every 2.0s: cat /proc/mdstat                                Fri Nov 24 19:04:03 2006

Personalities : [raid0] [raid6] [raid5] [raid4]
md3 : active raid5 hdg1[3] hde1[1] hdc1[0]
      240974720 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.8% (969652/120487360) finish=43.1min speed=46173K/sec

md4 : active raid0 hdg2[2] hde2[1] hdc2[0]
      371125248 blocks 64k chunks

unused devices: <none>
Creating md device
[root@hal ~]# mdadm --create --verbose /dev/md2 --level=raid0 --raid-devices=3 /dev/hd[dfh]1
mdadm: error opening /dev/md2: No such file or directory

# error opening /dev/md2: No such file or directory
# http://www.issociate.de/board/post/145249/error_opening_/dev/md2:_No_such_file_or_directory.html

[root@hal ~]# file /dev/md*
/dev/md0: block special (9/0)
/dev/md1: block special (9/1)

# where the 9 matches the others
# and the 2 is the next increment above the existing
mknod /dev/md2 b 9 2

[root@hal ~]# file /dev/md*
/dev/md0: block special (9/0)
/dev/md1: block special (9/1)
/dev/md2: block special (9/2)

[root@hal ~]# mdadm --create --verbose /dev/md2 --level=raid0 --raid-devices=3 /dev/hd[dfh]1
mdadm: chunk size defaults to 64K
mdadm: array /dev/md2 started.

[root@hal ~]# mkfs.ext3 /dev/md2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
73269248 inodes, 146518752 blocks
7325937 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
4472 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 39 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Chunk Size
The chunk-size deserves an explanation. You can never write completely parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk, actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn't support that. Instead, we choose some chunk-size, which we define as the smallest "atomic" mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB, will cause the first and the third 4 kB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size.
Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.
For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.
The argument to the chunk-size option in /etc/raidtab specifies the chunk-size in kilobytes. So "4" means "4 kB".
"A reasonable chunk-size for RAID-5 is 128 kB, but as always, you may want to experiment with this." [3]
To Read
- http://www.gagme.com/greg/linux/raid-lvm.php
- http://scotgate.org/?p=107
- http://www.networknewz.com/2003/0113.html
- http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html
- http://man-wiki.net/index.php/5:mdadm.conf
- http://acd.ucar.edu/~fredrick/linux/fedoraraid/
- http://xtronics.com/reference/SATA-RAID-debian-for-2.6.html
- http://dev.riseup.net/grimoire/storage/software-raid/
Virtual Play
My posting to PLUG 2009.07.08:
Mike,

By the way, if you want to play around with mdadm without using real drives, you can set up a few virtual devices and play with mdadm to your heart's content without destroying real disks:

dd if=/dev/zero of=/root/vd1 bs=1M count=100   # create virtual disk 1
dd if=/dev/zero of=/root/vd2 bs=1M count=100   # create virtual disk 2
dd if=/dev/zero of=/root/vd3 bs=1M count=100   # create virtual disk 3
dd if=/dev/zero of=/root/vd4 bs=1M count=100   # create virtual disk 4

losetup -a                      # show currently used loop devices
losetup /dev/loop1 /root/vd1    # use an unused loop device
losetup /dev/loop2 /root/vd2    # use an unused loop device
losetup /dev/loop3 /root/vd3    # use an unused loop device
losetup /dev/loop4 /root/vd4    # use an unused loop device

mdadm --create /dev/md2 --level raid10 --raid-devices 4 /dev/loop[1234]   # create md device (use an unused /dev/md?)
mkfs.ext3 /dev/md2              # format as ext3
mount /dev/md2 /mnt/md2         # mount if you want

Now you can fail virtual disks, stop the array, reassemble, fail disks, etc. to your heart's content.

# example rebuild with one less disk:
mdadm --stop /dev/md2
mdadm --assemble /dev/md2 /dev/loop[123]

When you are done, to clean up:

umount /dev/md2                 # if mounted
mdadm --stop /dev/md2
losetup -d /dev/loop1
losetup -d /dev/loop2
losetup -d /dev/loop3
losetup -d /dev/loop4
rm /root/vd1
rm /root/vd2
rm /root/vd3
rm /root/vd4
Odd Failure on Rebuild
Aug 18 19:17:53 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Aug 18 19:17:53 prime kernel: ata1.00: irq_stat 0x40000001
Aug 18 19:17:53 prime kernel: ata1.00: failed command: READ DMA EXT
Aug 18 19:17:53 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 4 dma 421888 in
Aug 18 19:17:53 prime kernel:          res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)
Aug 18 19:17:53 prime kernel: ata1.00: status: { DRDY ERR }
Aug 18 19:17:53 prime kernel: ata1.00: error: { UNC }
Aug 18 19:17:53 prime kernel: ata1.00: configured for UDMA/133
Aug 18 19:17:53 prime kernel: ata1: EH complete
Aug 18 19:17:56 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Aug 18 19:17:56 prime kernel: ata1.00: irq_stat 0x40000001
Aug 18 19:17:56 prime kernel: ata1.00: failed command: READ DMA EXT
Aug 18 19:17:56 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 5 dma 421888 in
Aug 18 19:17:56 prime kernel:          res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)
Aug 18 19:17:56 prime kernel: ata1.00: status: { DRDY ERR }
Aug 18 19:17:56 prime kernel: ata1.00: error: { UNC }
Aug 18 19:17:56 prime kernel: ata1.00: configured for UDMA/133
Aug 18 19:17:56 prime kernel: ata1: EH complete
Aug 18 19:17:59 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Aug 18 19:17:59 prime kernel: ata1.00: irq_stat 0x40000001
Aug 18 19:17:59 prime kernel: ata1.00: failed command: READ DMA EXT
Aug 18 19:17:59 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 6 dma 421888 in
Aug 18 19:17:59 prime kernel:          res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)
Aug 18 19:17:59 prime kernel: ata1.00: status: { DRDY ERR }
Aug 18 19:17:59 prime kernel: ata1.00: error: { UNC }
Aug 18 19:17:59 prime kernel: ata1.00: configured for UDMA/133
Aug 18 19:17:59 prime kernel: ata1: EH complete
Aug 18 19:18:01 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Aug 18 19:18:01 prime kernel: ata1.00: irq_stat 0x40000001
Aug 18 19:18:01 prime kernel: ata1.00: failed command: READ DMA EXT
Aug 18 19:18:01 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 7 dma 421888 in
Aug 18 19:18:01 prime kernel:          res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)
Aug 18 19:18:01 prime kernel: ata1.00: status: { DRDY ERR }
Aug 18 19:18:01 prime kernel: ata1.00: error: { UNC }
Aug 18 19:18:02 prime kernel: ata1.00: configured for UDMA/133
Aug 18 19:18:02 prime kernel: ata1: EH complete
Aug 18 19:18:04 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Aug 18 19:18:04 prime kernel: ata1.00: irq_stat 0x40000001
Aug 18 19:18:04 prime kernel: ata1.00: failed command: READ DMA EXT
Aug 18 19:18:04 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 8 dma 421888 in
Aug 18 19:18:04 prime kernel:          res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)
Aug 18 19:18:04 prime kernel: ata1.00: status: { DRDY ERR }
Aug 18 19:18:04 prime kernel: ata1.00: error: { UNC }
Aug 18 19:18:04 prime kernel: ata1.00: configured for UDMA/133
Aug 18 19:18:04 prime kernel: ata1: EH complete
Aug 18 19:18:07 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Aug 18 19:18:07 prime kernel: ata1.00: irq_stat 0x40000001
Aug 18 19:18:07 prime kernel: ata1.00: failed command: READ DMA EXT
Aug 18 19:18:07 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 9 dma 421888 in
Aug 18 19:18:07 prime kernel:          res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)
Aug 18 19:18:07 prime kernel: ata1.00: status: { DRDY ERR }
Aug 18 19:18:07 prime kernel: ata1.00: error: { UNC }
Aug 18 19:18:07 prime kernel: ata1.00: configured for UDMA/133
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Unhandled sense code
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
Aug 18 19:18:07 prime kernel: Descriptor sense data with sense descriptors (in hex):
Aug 18 19:18:07 prime kernel:         72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
Aug 18 19:18:07 prime kernel:         75 43 04 ba
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] CDB: Read(10): 28 00 75 43 01 b3 00 03 38 00
Aug 18 19:18:07 prime kernel: md/raid:md4: Disk failure on sda3, disabling device.
Aug 18 19:18:07 prime kernel: md/raid:md4: Operation continuing on 4 devices.
Aug 18 19:18:07 prime kernel: ata1: EH complete
Aug 18 19:18:07 prime kernel: md: md4: recovery done.
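The "Unrecovered read error - auto reallocate failed" lines point at a failing sector on /dev/sda rather than an md problem; a hedged sketch for checking the drive itself (requires smartmontools):

smartctl -H /dev/sda   # overall health self-assessment
smartctl -A /dev/sda   # attribute table; watch Reallocated_Sector_Ct and Current_Pending_Sector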