<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://aznot.com/index.php?action=history&amp;feed=atom&amp;title=Linux%2Fmdadm%2FNotes</id>
	<title>Linux/mdadm/Notes - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://aznot.com/index.php?action=history&amp;feed=atom&amp;title=Linux%2Fmdadm%2FNotes"/>
	<link rel="alternate" type="text/html" href="https://aznot.com/index.php?title=Linux/mdadm/Notes&amp;action=history"/>
	<updated>2026-05-08T15:24:26Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://aznot.com/index.php?title=Linux/mdadm/Notes&amp;diff=1898&amp;oldid=prev</id>
		<title>Kenneth: /* Virtual Play */</title>
		<link rel="alternate" type="text/html" href="https://aznot.com/index.php?title=Linux/mdadm/Notes&amp;diff=1898&amp;oldid=prev"/>
		<updated>2015-03-08T07:10:27Z</updated>

		<summary type="html">&lt;p&gt;&lt;span dir=&quot;auto&quot;&gt;&lt;span class=&quot;autocomment&quot;&gt;Virtual Play&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== mdadm notes ==&lt;br /&gt;
&lt;br /&gt;
== Notes ==&lt;br /&gt;
&lt;br /&gt;
Copy partition table:&lt;br /&gt;
 sfdisk -d /dev/sda | sfdisk /dev/sdb&lt;br /&gt;
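For reference, the dump sfdisk emits is plain text, roughly of this shape (values illustrative, old-style sfdisk sector units; the partition size matches the 244196001-block partitions shown later on this page):&lt;br /&gt;

```text
# partition table of /dev/sda
unit: sectors

/dev/sda1 : start=       63, size= 488392002, Id=fd
```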
&lt;br /&gt;
Create md device:&lt;br /&gt;
 mknod /dev/md&amp;lt;number&amp;gt; b &amp;lt;MAJOR&amp;gt; &amp;lt;MINOR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
*[http://www.gagme.com/greg/linux/raid-lvm.php#failure Managing RAID and LVM with Linux]&lt;br /&gt;
*[http://www.midhgard.it/docs/lvm/html/install.disks.html Root-on-LVM-on-RAID HOWTO]&lt;br /&gt;
*[http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID HOWTO Install on Software RAID]&lt;br /&gt;
*[http://www.linuxquestions.org/questions/linux-software-2/help-interpreting-mdadm-readouts-434067/ help interpreting MDADM readouts]&lt;br /&gt;
*[http://www.linuxquestions.org/questions/linux-server-73/mdadm-not-stinking-syncing-491325/ mdadm not stinking syncing]&lt;br /&gt;
*[http://www.issociate.de/board/post/145249/error_opening_/dev/md2:_No_such_file_or_directory.html error opening /dev/md2: No such file or directory]&lt;br /&gt;
&lt;br /&gt;
=== Build a RAID 5 device ===&lt;br /&gt;
&lt;br /&gt;
Check for existing md devices:&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
&lt;br /&gt;
Check whether the next unused md device node exists:&lt;br /&gt;
 ls /dev/md1&lt;br /&gt;
&lt;br /&gt;
If you need to create the device:&lt;br /&gt;
 # look at existing md devices for MAJOR and MINOR values&lt;br /&gt;
 file /dev/md*&lt;br /&gt;
 &lt;br /&gt;
 # Create md device&lt;br /&gt;
 # where 9 is the major number shown for the existing md devices&lt;br /&gt;
 # and 1 is the next minor number after the existing devices&lt;br /&gt;
 mknod /dev/md1 b 9 1&lt;br /&gt;
&lt;br /&gt;
Create 3 RAID-type partitions (type fd), one per disk:&lt;br /&gt;
 fdisk /dev/sda&lt;br /&gt;
 ...&lt;br /&gt;
 Command (m for help): t&lt;br /&gt;
 Selected partition 1&lt;br /&gt;
 Hex code (type L to list codes): fd&lt;br /&gt;
 Changed system type of partition 1 to fd (Linux raid autodetect)&lt;br /&gt;
&lt;br /&gt;
Create RAID 5 md device:&lt;br /&gt;
 /sbin/mdadm --create --verbose /dev/&amp;lt;md_device_name&amp;gt; --level=5 --raid-devices=3 &amp;lt;partitions...&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 /sbin/mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 \&lt;br /&gt;
    /dev/hdc1 /dev/hde1 /dev/hdg1&lt;br /&gt;
 #or&lt;br /&gt;
 /sbin/mdadm --create --verbose /dev/md1 --level=5 --raid-devices=3 \&lt;br /&gt;
    /dev/hd[ceg]1&lt;br /&gt;
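As a sanity check on the numbers mdadm reports, a RAID 5 array keeps one member worth of space for parity, so usable capacity is (raid-devices - 1) times the member size. A quick sketch using the sizes from this page:&lt;br /&gt;

```shell
members=3
member_kb=244195904   # per-member size mdadm reports for these 250 GB partitions
array_kb=$(( (members - 1) * member_kb ))   # one member is consumed by parity
echo "$array_kb"
```

This matches the Array Size of 488391808 shown by mdadm --detail later on this page.&lt;br /&gt;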
&lt;br /&gt;
Add device:&lt;br /&gt;
 mdadm /dev/md0 --add /dev/sdc1&lt;br /&gt;
&lt;br /&gt;
Add a spare (devices added beyond the raid-devices count automatically become spares):&lt;br /&gt;
 mdadm /dev/md0 --add /dev/sdc1&lt;br /&gt;
&lt;br /&gt;
Wait for RAID to be rebuilt:&lt;br /&gt;
 watch cat /proc/mdstat&lt;br /&gt;
&lt;br /&gt;
Show MD details:&lt;br /&gt;
 mdadm --detail &amp;lt;md_device_name&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
 mdadm --detail /dev/md1&lt;br /&gt;
&lt;br /&gt;
BASIC:&lt;br /&gt;
&lt;br /&gt;
Format with file system:&lt;br /&gt;
 mkfs.ext3 /dev/md1&lt;br /&gt;
&lt;br /&gt;
Mount new device:&lt;br /&gt;
 mkdir /data&lt;br /&gt;
 mount /dev/md1 /data&lt;br /&gt;
&lt;br /&gt;
Make mount permanent:&lt;br /&gt;
 vi /etc/fstab&lt;br /&gt;
&lt;br /&gt;
 #device		mount point	fs	options			fs_freq fs_passno&lt;br /&gt;
 /dev/md1		/data		ext3	defaults,noatime	1 2&lt;br /&gt;
&lt;br /&gt;
LVM:&lt;br /&gt;
&lt;br /&gt;
Create Volume Group:&lt;br /&gt;
 pvcreate /dev/md1&lt;br /&gt;
 vgcreate lvm-raid /dev/md1&lt;br /&gt;
&lt;br /&gt;
&amp;quot;The default value for the physical extent size can be too low for a large RAID array. In those cases you&amp;#039;ll need to specify the -s option with a larger than default physical extent size. The default is only 4MB as of the version in Fedora Core 5. The maximum number of physical extents is approximately 65k so take your maximum volume size and divide it by 65k then round it to the next nice round number. For example, to successfully create a 550G RAID let&amp;#039;s figure that&amp;#039;s approximately 550,000 megabytes and divide by 65,000 which gives you roughly 8.46. Round it up to the next nice round number and use 16M (for 16 megabytes) as the physical extent size and you&amp;#039;ll be fine:&amp;quot; [http://www.gagme.com/greg/linux/raid-lvm.php]&lt;br /&gt;
 vgcreate -s 16M &amp;lt;volume group name&amp;gt; &amp;lt;physical volume&amp;gt;&lt;br /&gt;
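The arithmetic in the quoted advice can be sketched in shell (numbers taken from the quote itself; the roughly 65k extent limit applied to old LVM1-style volume groups):&lt;br /&gt;

```shell
max_pe=65536      # approximate physical-extent limit per volume
vol_mb=550000     # target volume size, roughly 550 GB in MB, as in the quote
min_mb=$(( (vol_mb + max_pe - 1) / max_pe ))   # smallest extent size that fits: 9 MB
pe_mb=1
while [ "$pe_mb" -lt "$min_mb" ]; do pe_mb=$(( pe_mb * 2 )); done   # next power of two
echo "vgcreate -s ${pe_mb}M"   # gives 16M, matching the quoted recommendation
```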
&lt;br /&gt;
&amp;quot;Ok, you&amp;#039;ve created a blank receptacle but now you have to tell how many Physical Extents from the physical device (/dev/md0 in this case) will be allocated to this Volume Group. In my case I wanted all the data from /dev/md0 to be allocated to this Volume Group. If later I wanted to add additional space I would create a new RAID array and add that physical device to this Volume Group.&lt;br /&gt;
&lt;br /&gt;
To find out how many PEs are available to me use the vgdisplay command to find out how many are available and now I can create a Logical Volume using all (or some) of the space in the Volume Group. In my case I call the Logical Volume lvm0.&amp;quot; [http://www.gagme.com/greg/linux/raid-lvm.php]&lt;br /&gt;
 vgdisplay lvm-raid&lt;br /&gt;
  ...&lt;br /&gt;
  Free  PE / Size       57235 / 223.57 GB&lt;br /&gt;
 lvcreate -l 57235 lvm-raid -n lvm0&lt;br /&gt;
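The PE count maps back to the size vgdisplay reported; a sketch assuming the default 4 MB extent size mentioned above:&lt;br /&gt;

```shell
free_pe=57235
pe_mb=4                          # default physical extent size
size_mb=$(( free_pe * pe_mb ))   # 228940 MB
echo "$(( size_mb / 1024 )) GiB"   # about 223 GiB, matching the 223.57 GB shown
```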
&lt;br /&gt;
This creates the following logical volume device node to use:&lt;br /&gt;
 /dev/lvm-raid/lvm0&lt;br /&gt;
&lt;br /&gt;
Format with file system:&lt;br /&gt;
 mkfs.ext3 /dev/lvm-raid/lvm0&lt;br /&gt;
&lt;br /&gt;
Mount new device:&lt;br /&gt;
 mkdir /data&lt;br /&gt;
 mount /dev/lvm-raid/lvm0 /data&lt;br /&gt;
&lt;br /&gt;
Make mount permanent:&lt;br /&gt;
 vi /etc/fstab&lt;br /&gt;
&lt;br /&gt;
 #device		mount point	fs	options			fs_freq fs_passno&lt;br /&gt;
 /dev/lvm-raid/lvm0	/data		ext3	defaults,noatime	1 2&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*[http://www.gagme.com/greg/linux/raid-lvm.php Managing RAID and LVM with Linux]&lt;br /&gt;
&lt;br /&gt;
=== Configuration File ===&lt;br /&gt;
&lt;br /&gt;
Generally you do not need a configuration file for mdadm devices, but one comes in handy during a rebuild.&lt;br /&gt;
&lt;br /&gt;
Configuration file:&lt;br /&gt;
 /etc/mdadm.conf&lt;br /&gt;
&lt;br /&gt;
 # from man mdadm&lt;br /&gt;
 # This will create a prototype config file that describes currently active arrays that are known to be made&lt;br /&gt;
 # from partitions of IDE or SCSI drives.  This file should be reviewed before being used as it may  contain&lt;br /&gt;
 # unwanted detail.&lt;br /&gt;
 echo &amp;#039;DEVICE /dev/hd*[0-9] /dev/sd*[0-9]&amp;#039; &amp;gt; mdadm.conf&lt;br /&gt;
 mdadm --detail --scan &amp;gt;&amp;gt; mdadm.conf&lt;br /&gt;
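The resulting file is just the DEVICE line plus one ARRAY line per array, roughly this shape (the ARRAY line is illustrative, reusing the UUID reported for /dev/md2 in the RAID 5 Setup section below):&lt;br /&gt;

```text
DEVICE /dev/hd*[0-9] /dev/sd*[0-9]
ARRAY /dev/md1 level=raid5 num-devices=3 UUID=e5748042:19bf53d2:3d646fcb:1293ea5d
```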
&lt;br /&gt;
=== Fix Broken RAID ===&lt;br /&gt;
&lt;br /&gt;
 mdadm --stop /dev/md3&lt;br /&gt;
 mdadm --create /dev/md3 --verbose --level=raid5 --raid-devices=3  --spare-devices=0 /dev/hd[ceg]1&lt;br /&gt;
 mdadm --detail /dev/md3&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
 mdadm /dev/md3 --fail /dev/hdc1 --remove /dev/hdc1&lt;br /&gt;
 mdadm --stop /dev/md3&lt;br /&gt;
 mdadm --assemble /dev/md3&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Reassemble array manually:&lt;br /&gt;
 # Find UUID with examine (--examine or -E):&lt;br /&gt;
 mdadm --examine /dev/sda1&lt;br /&gt;
&lt;br /&gt;
 # Assemble (--assemble or -A) by UUID (--uuid or -u)&lt;br /&gt;
 mdadm --assemble /dev/md1 --uuid [UUID]&lt;br /&gt;
&lt;br /&gt;
 # Combined&lt;br /&gt;
 mdadm --assemble /dev/md1 --uuid $( mdadm --examine /dev/sda1 | grep UUID | awk &amp;#039;{print $3}&amp;#039; )&lt;br /&gt;
&lt;br /&gt;
See also [[Linux Recovery]]&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
*[http://tldp.org/HOWTO/Software-RAID-HOWTO.html The Software-RAID HOWTO]&lt;br /&gt;
&lt;br /&gt;
=== TODO ===&lt;br /&gt;
&lt;br /&gt;
*Will LVM survive reboot?&lt;br /&gt;
*Will MD survive reboot?&lt;br /&gt;
*Will LVM survive reinstall of OS?&lt;br /&gt;
*Will md device survive reinstall of OS?  Create a pure MD device and LVM device, and test on OS reinstall.&lt;br /&gt;
&lt;br /&gt;
=== RAID 5 Setup ===&lt;br /&gt;
&lt;br /&gt;
Using [http://www.gagme.com/greg/linux/raid-lvm.php this] as a reference.&lt;br /&gt;
&lt;br /&gt;
Another good tutorial: [http://www.midhgard.it/docs/lvm/html/index.html Root-on-LVM-on-RAID HOWTO]&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# fdisk /dev/hdc&lt;br /&gt;
 &lt;br /&gt;
 The number of cylinders for this disk is set to 30401.&lt;br /&gt;
 There is nothing wrong with that, but this is larger than 1024,&lt;br /&gt;
 and could in certain setups cause problems with:&lt;br /&gt;
 1) software that runs at boot time (e.g., old versions of LILO)&lt;br /&gt;
 2) booting and partitioning software from other OSs&lt;br /&gt;
    (e.g., DOS FDISK, OS/2 FDISK)&lt;br /&gt;
 &lt;br /&gt;
 Command (m for help): p&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hdc: 250.0 GB, 250059350016 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 30401 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 &lt;br /&gt;
 Command (m for help): n&lt;br /&gt;
 Command action&lt;br /&gt;
    e   extended&lt;br /&gt;
    p   primary partition (1-4)&lt;br /&gt;
 p&lt;br /&gt;
 Partition number (1-4): 1&lt;br /&gt;
 First cylinder (1-30401, default 1):&lt;br /&gt;
 Using default value 1&lt;br /&gt;
 Last cylinder or +size or +sizeM or +sizeK (1-30401, default 30401):&lt;br /&gt;
 Using default value 30401&lt;br /&gt;
 &lt;br /&gt;
 Command (m for help): p&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hdc: 250.0 GB, 250059350016 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 30401 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 /dev/hdc1               1       30401   244196001   83  Linux&lt;br /&gt;
 &lt;br /&gt;
 Command (m for help): t&lt;br /&gt;
 Selected partition 1&lt;br /&gt;
 Hex code (type L to list codes): fd&lt;br /&gt;
 Changed system type of partition 1 to fd (Linux raid autodetect)&lt;br /&gt;
 &lt;br /&gt;
 Command (m for help): p&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hdc: 250.0 GB, 250059350016 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 30401 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 /dev/hdc1               1       30401   244196001   fd  Linux raid autodetect&lt;br /&gt;
 &lt;br /&gt;
 Command (m for help): w&lt;br /&gt;
 The partition table has been altered!&lt;br /&gt;
 &lt;br /&gt;
 Calling ioctl() to re-read partition table.&lt;br /&gt;
 Syncing disks.&lt;br /&gt;
&lt;br /&gt;
Repeated for /dev/hde and /dev/hdg...&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# fdisk -l&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hda: 61.4 GB, 61492838400 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 7476 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 /dev/hda1   *           1          13      104391   83  Linux&lt;br /&gt;
 /dev/hda2              14        7476    59946547+  8e  Linux LVM&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hdc: 250.0 GB, 250059350016 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 30401 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 /dev/hdc1               1       30401   244196001   fd  Linux raid autodetect&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hde: 250.0 GB, 250059350016 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 30401 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 /dev/hde1               1       30401   244196001   fd  Linux raid autodetect&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/hdg: 250.0 GB, 250059350016 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 30401 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
    Device Boot      Start         End      Blocks   Id  System&lt;br /&gt;
 /dev/hdg1               1       30401   244196001   fd  Linux raid autodetect&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/dm-0: 59.2 GB, 59257126912 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 7204 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/dm-0 doesn&amp;#039;t contain a valid partition table&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/dm-1: 2080 MB, 2080374784 bytes&lt;br /&gt;
 255 heads, 63 sectors/track, 252 cylinders&lt;br /&gt;
 Units = cylinders of 16065 * 512 = 8225280 bytes&lt;br /&gt;
 &lt;br /&gt;
 Disk /dev/dm-1 doesn&amp;#039;t contain a valid partition table&lt;br /&gt;
&lt;br /&gt;
Create RAID 5 array&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# /sbin/mdadm --create --verbose /dev/md2 --level=5 --raid-devices=3 \&lt;br /&gt;
     /dev/hdc1 /dev/hde1 /dev/hdg1&lt;br /&gt;
 mdadm: layout defaults to left-symmetric&lt;br /&gt;
 mdadm: chunk size defaults to 64K&lt;br /&gt;
 mdadm: /dev/hdc1 appears to contain an ext2fs file system&lt;br /&gt;
     size=104388K  mtime=Sat Nov 18 05:47:04 2006&lt;br /&gt;
 mdadm: size set to 244195904K&lt;br /&gt;
 Continue creating array? y&lt;br /&gt;
 mdadm: array /dev/md2 started.&lt;br /&gt;
&lt;br /&gt;
Examine the array&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# mdadm --detail /dev/md2&lt;br /&gt;
 /dev/md2:&lt;br /&gt;
         Version : 00.90.03&lt;br /&gt;
   Creation Time : Thu Nov 23 23:24:27 2006&lt;br /&gt;
      Raid Level : raid5&lt;br /&gt;
      Array Size : 488391808 (465.77 GiB 500.11 GB)&lt;br /&gt;
     Device Size : 244195904 (232.88 GiB 250.06 GB)&lt;br /&gt;
    Raid Devices : 3&lt;br /&gt;
   Total Devices : 3&lt;br /&gt;
 Preferred Minor : 2&lt;br /&gt;
     Persistence : Superblock is persistent&lt;br /&gt;
 &lt;br /&gt;
     Update Time : Thu Nov 23 23:24:27 2006&lt;br /&gt;
           State : clean&lt;br /&gt;
  Active Devices : 3&lt;br /&gt;
 Working Devices : 3&lt;br /&gt;
  Failed Devices : 0&lt;br /&gt;
   Spare Devices : 0&lt;br /&gt;
 &lt;br /&gt;
          Layout : left-symmetric&lt;br /&gt;
      Chunk Size : 64K&lt;br /&gt;
 &lt;br /&gt;
            UUID : e5748042:19bf53d2:3d646fcb:1293ea5d&lt;br /&gt;
          Events : 0.1&lt;br /&gt;
 &lt;br /&gt;
     Number   Major   Minor   RaidDevice State&lt;br /&gt;
        0      22        1        0      active sync   /dev/hdc1&lt;br /&gt;
        1      33        1        1      active sync   /dev/hde1&lt;br /&gt;
        0       0        0    9923072      removed&lt;br /&gt;
 &lt;br /&gt;
        3      34        1        3      active sync   /dev/hdg1&lt;br /&gt;
&lt;br /&gt;
Curious what this line indicates...&lt;br /&gt;
&lt;br /&gt;
         0       0        0    9923072      removed&lt;br /&gt;
&lt;br /&gt;
Following [http://www.gagme.com/greg/linux/raid-lvm.php#lvm Initial setup of LVM on top of RAID].&lt;br /&gt;
&lt;br /&gt;
=== RAID 5 Performance ===&lt;br /&gt;
&lt;br /&gt;
/dev/hdc1 and /dev/hde1 and /dev/hdg1 in RAID 5 configuration&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# hdparm -Tt /dev/lvm-raid/lvm0&lt;br /&gt;
 /dev/lvm-raid/lvm0:&lt;br /&gt;
 #1&lt;br /&gt;
  Timing cached reads:   3332 MB in  2.00 seconds = 1665.86 MB/sec&lt;br /&gt;
  Timing buffered disk reads:  380 MB in  3.01 seconds = 126.18 MB/sec&lt;br /&gt;
 #2&lt;br /&gt;
  Timing cached reads:   3328 MB in  2.00 seconds = 1662.68 MB/sec&lt;br /&gt;
  Timing buffered disk reads:  378 MB in  3.00 seconds = 125.98 MB/sec&lt;br /&gt;
 #3&lt;br /&gt;
  Timing cached reads:   3356 MB in  2.00 seconds = 1677.87 MB/sec&lt;br /&gt;
  Timing buffered disk reads:  380 MB in  3.01 seconds = 126.32 MB/sec&lt;br /&gt;
 #4&lt;br /&gt;
  Timing cached reads:   3348 MB in  2.00 seconds = 1674.49 MB/sec&lt;br /&gt;
  Timing buffered disk reads:  380 MB in  3.01 seconds = 126.18 MB/sec&lt;br /&gt;
&lt;br /&gt;
It appears that the RAID 5 has the same performance as the 2-disk striped test.&lt;br /&gt;
&lt;br /&gt;
=== Raid build failure ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@hal ~]# mdadm --create --verbose /dev/md2 --level=raid0 --raid-devices=3 /dev/hd[dfh]1&lt;br /&gt;
mdadm: error opening /dev/md2: No such file or directory&lt;br /&gt;
&lt;br /&gt;
error opening /dev/md2: No such file or directory&lt;br /&gt;
http://www.issociate.de/board/post/145249/error_opening_/dev/md2:_No_such_file_or_directory.html&lt;br /&gt;
&lt;br /&gt;
# where 9 matches the major number of the existing md devices&lt;br /&gt;
# and 2 is the next minor number after the existing&lt;br /&gt;
mknod /dev/md2 b 9 2&lt;br /&gt;
&lt;br /&gt;
[root@hal ~]# file /dev/md*&lt;br /&gt;
/dev/md0: block special (9/0)&lt;br /&gt;
/dev/md1: block special (9/1)&lt;br /&gt;
&lt;br /&gt;
[root@hal ~]# file /dev/md*&lt;br /&gt;
/dev/md0: block special (9/0)&lt;br /&gt;
/dev/md1: block special (9/1)&lt;br /&gt;
/dev/md2: block special (9/2)&lt;br /&gt;
&lt;br /&gt;
[root@hal ~]# mdadm --create --verbose /dev/md2 --level=raid0 --raid-devices=3 /dev/hd[dfh]1&lt;br /&gt;
mdadm: chunk size defaults to 64K&lt;br /&gt;
mdadm: array /dev/md2 started.&lt;br /&gt;
&lt;br /&gt;
[root@hal ~]# mkfs.ext3 /dev/md2&lt;br /&gt;
mke2fs 1.39 (29-May-2006)&lt;br /&gt;
Filesystem label=&lt;br /&gt;
OS type: Linux&lt;br /&gt;
Block size=4096 (log=2)&lt;br /&gt;
Fragment size=4096 (log=2)&lt;br /&gt;
73269248 inodes, 146518752 blocks&lt;br /&gt;
7325937 blocks (5.00%) reserved for the super user&lt;br /&gt;
First data block=0&lt;br /&gt;
Maximum filesystem blocks=4294967296&lt;br /&gt;
4472 block groups&lt;br /&gt;
32768 blocks per group, 32768 fragments per group&lt;br /&gt;
16384 inodes per group&lt;br /&gt;
Superblock backups stored on blocks:&lt;br /&gt;
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,&lt;br /&gt;
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,&lt;br /&gt;
        102400000&lt;br /&gt;
&lt;br /&gt;
Writing inode tables: done&lt;br /&gt;
Creating journal (32768 blocks): done&lt;br /&gt;
Writing superblocks and filesystem accounting information: done&lt;br /&gt;
&lt;br /&gt;
This filesystem will be automatically checked every 39 mounts or&lt;br /&gt;
180 days, whichever comes first.  Use tune2fs -c or -i to override.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Broken RAID 5? ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# mdadm --stop /dev/md3&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# mdadm --create /dev/md3 --verbose --level=raid5 --raid-devices=3  --spare-devices=0 /dev/hd[ceg]1&lt;br /&gt;
 mdadm: layout defaults to left-symmetric&lt;br /&gt;
 mdadm: chunk size defaults to 64K&lt;br /&gt;
 mdadm: /dev/hdc1 appears to contain an ext2fs file system&lt;br /&gt;
     size=240974720K  mtime=Wed Dec 31 17:00:00 1969&lt;br /&gt;
 mdadm: /dev/hdc1 appears to be part of a raid array:&lt;br /&gt;
     level=raid5 devices=3 ctime=Fri Nov 24 12:27:47 2006&lt;br /&gt;
 mdadm: /dev/hde1 appears to be part of a raid array:&lt;br /&gt;
     level=raid5 devices=3 ctime=Fri Nov 24 12:27:47 2006&lt;br /&gt;
 mdadm: /dev/hdg1 appears to contain an ext2fs file system&lt;br /&gt;
     size=240974720K  mtime=Wed Dec 31 17:00:00 1969&lt;br /&gt;
 mdadm: /dev/hdg1 appears to be part of a raid array:&lt;br /&gt;
     level=raid5 devices=3 ctime=Fri Nov 24 12:27:47 2006&lt;br /&gt;
 mdadm: size set to 120487360K&lt;br /&gt;
 Continue creating array? y&lt;br /&gt;
 mdadm: array /dev/md3 started.&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# mdadm --detail /dev/md3&lt;br /&gt;
 /dev/md3:&lt;br /&gt;
         Version : 00.90.03&lt;br /&gt;
   Creation Time : Fri Nov 24 12:38:19 2006&lt;br /&gt;
      Raid Level : raid5&lt;br /&gt;
      Array Size : 240974720 (229.81 GiB 246.76 GB)&lt;br /&gt;
     Device Size : 120487360 (114.91 GiB 123.38 GB)&lt;br /&gt;
    Raid Devices : 3&lt;br /&gt;
   Total Devices : 3&lt;br /&gt;
 Preferred Minor : 3&lt;br /&gt;
     Persistence : Superblock is persistent&lt;br /&gt;
 &lt;br /&gt;
     Update Time : Fri Nov 24 12:38:19 2006&lt;br /&gt;
           State : clean&lt;br /&gt;
  Active Devices : 3&lt;br /&gt;
 Working Devices : 3&lt;br /&gt;
  Failed Devices : 0&lt;br /&gt;
   Spare Devices : 0&lt;br /&gt;
 &lt;br /&gt;
          Layout : left-symmetric&lt;br /&gt;
      Chunk Size : 64K&lt;br /&gt;
 &lt;br /&gt;
            UUID : 70b25481:ebbf8e5b:c8c3a366:13d2ddcd&lt;br /&gt;
          Events : 0.1&lt;br /&gt;
 &lt;br /&gt;
     Number   Major   Minor   RaidDevice State&lt;br /&gt;
        0      22        1        0      active sync   /dev/hdc1&lt;br /&gt;
        1      33        1        1      active sync   /dev/hde1&lt;br /&gt;
        0       0        0        0      removed&lt;br /&gt;
 &lt;br /&gt;
        3      34        1        3      active sync   /dev/hdg1&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# cat /proc/mdstat&lt;br /&gt;
 Personalities : [raid6] [raid5] [raid4]&lt;br /&gt;
 md3 : active raid5 hdg1[3] hde1[1] hdc1[0]&lt;br /&gt;
       240974720 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]&lt;br /&gt;
 &lt;br /&gt;
 md4 : active raid5 hdg2[3] hde2[1] hdc2[0]&lt;br /&gt;
       247416832 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]&lt;br /&gt;
 &lt;br /&gt;
 unused devices: &amp;lt;none&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 [root@fileserver ~]# mdadm /dev/md3 --fail /dev/hdc1 --remove /dev/hdc1&lt;br /&gt;
 mdadm: set /dev/hdc1 faulty in /dev/md3&lt;br /&gt;
 mdadm: hot removed /dev/hdc1&lt;br /&gt;
 [root@fileserver ~]# mdadm --stop /dev/md3&lt;br /&gt;
 [root@fileserver ~]# mdadm --assemble /dev/md3&lt;br /&gt;
 mdadm: device 3 in /dev/md3 has wrong state in superblock, but /dev/hdg1 seems ok&lt;br /&gt;
 mdadm: /dev/md3 assembled from 1 drive and 1 spare - not enough to start the array.&lt;br /&gt;
&lt;br /&gt;
It appears that other people have had similar issues:&lt;br /&gt;
# http://www.linuxquestions.org/questions/showthread.php?t=434067&lt;br /&gt;
# http://www.linuxquestions.org/questions/showthread.php?t=491325&lt;br /&gt;
&lt;br /&gt;
=== Fixed RAID 5 ===&lt;br /&gt;
After discussing the problem with Clint, we concluded that when the array was created, the system never processed the spare to convert it into an active member of the array.  We assumed there was a bug in the raid5 module or the kernel.  I rebuilt the system using Fedora Core 5 64-bit edition, and upon creating the array the spare was processed correctly.  I was also pleased to see that both the RAID 5 (even being broken) and the RAID 0 with LVM survived the reinstall of the OS.&lt;br /&gt;
&lt;br /&gt;
 Every 2.0s: cat /proc/mdstat                                                                         Fri Nov 24  19:04:03 2006&lt;br /&gt;
 &lt;br /&gt;
 Personalities : [raid0] [raid6] [raid5] [raid4]&lt;br /&gt;
 md3 : active raid5 hdg1[3] hde1[1] hdc1[0]&lt;br /&gt;
       240974720 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]&lt;br /&gt;
       [&amp;gt;....................]  recovery =  0.8% (969652/120487360) finish=43.1min speed=46173K/sec&lt;br /&gt;
 &lt;br /&gt;
 md4 : active raid0 hdg2[2] hde2[1] hdc2[0]&lt;br /&gt;
       371125248 blocks 64k chunks&lt;br /&gt;
 &lt;br /&gt;
 unused devices: &amp;lt;none&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Chunk Size ===&lt;br /&gt;
&lt;br /&gt;
[http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO.html#toc5.10 5.10 Chunk sizes]&lt;br /&gt;
&lt;br /&gt;
The chunk-size deserves an explanation. You can never write completely parallel to a set of disks. If you had two disks and wanted to write a byte, you would have to write four bits on each disk, actually, every second bit would go to disk 0 and the others to disk 1. Hardware just doesn&amp;#039;t support that. Instead, we choose some chunk-size, which we define as the smallest &amp;quot;atomic&amp;quot; mass of data that can be written to the devices. A write of 16 kB with a chunk size of 4 kB, will cause the first and the third 4 kB chunks to be written to the first disk, and the second and fourth chunks to be written to the second disk, in the RAID-0 case with two disks. Thus, for large writes, you may see lower overhead by having fairly large chunks, whereas arrays that are primarily holding small files may benefit more from a smaller chunk size.&lt;br /&gt;
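The 16 kB write in the example above lands on the two disks like this (a throwaway sketch of the chunk-to-disk mapping, not an mdadm command):&lt;br /&gt;

```shell
chunk_kb=4; ndisks=2; write_kb=16
off=0
while [ "$off" -lt "$write_kb" ]; do
  chunk=$(( off / chunk_kb ))
  echo "chunk $chunk (offset ${off}K) -> disk $(( chunk % ndisks ))"
  off=$(( off + chunk_kb ))
done
```

Chunks 0 and 2 go to the first disk and chunks 1 and 3 to the second, as the text describes.&lt;br /&gt;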
&lt;br /&gt;
Chunk sizes must be specified for all RAID levels, including linear mode. However, the chunk-size does not make any difference for linear mode.&lt;br /&gt;
&lt;br /&gt;
For optimal performance, you should experiment with the value, as well as with the block-size of the filesystem you put on the array.&lt;br /&gt;
&lt;br /&gt;
The argument to the chunk-size option in /etc/raidtab specifies the chunk-size in kilobytes. So &amp;quot;4&amp;quot; means &amp;quot;4 kB&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;quot;A reasonable chunk-size for RAID-5 is 128 kB, but as always, you may want to experiment with this.&amp;quot; [http://unthought.net/Software-RAID.HOWTO/Software-RAID.HOWTO-5.html]&lt;br /&gt;
&lt;br /&gt;
=== To Read ===&lt;br /&gt;
&lt;br /&gt;
*http://www.gagme.com/greg/linux/raid-lvm.php&lt;br /&gt;
*http://scotgate.org/?p=107&lt;br /&gt;
*http://www.networknewz.com/2003/0113.html&lt;br /&gt;
*http://www.linuxdevcenter.com/pub/a/linux/2002/12/05/RAID.html&lt;br /&gt;
*http://man-wiki.net/index.php/5:mdadm.conf&lt;br /&gt;
*http://acd.ucar.edu/~fredrick/linux/fedoraraid/&lt;br /&gt;
*http://xtronics.com/reference/SATA-RAID-debian-for-2.6.html&lt;br /&gt;
*http://dev.riseup.net/grimoire/storage/software-raid/&lt;br /&gt;
&lt;br /&gt;
=== Virtual Play ===&lt;br /&gt;
&lt;br /&gt;
My posting to PLUG 2009.07.08:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Mike,&lt;br /&gt;
&lt;br /&gt;
By the way, if you want to play around with mdadm without using real drives, you can&lt;br /&gt;
set up a few virtual devices and experiment to your heart&amp;#039;s content without destroying real disks:&lt;br /&gt;
&lt;br /&gt;
dd if=/dev/zero of=/root/vd1 bs=1M count=100   # create virtual disk 1&lt;br /&gt;
dd if=/dev/zero of=/root/vd2 bs=1M count=100   # create virtual disk 2&lt;br /&gt;
dd if=/dev/zero of=/root/vd3 bs=1M count=100   # create virtual disk 3&lt;br /&gt;
dd if=/dev/zero of=/root/vd4 bs=1M count=100   # create virtual disk 4&lt;br /&gt;
&lt;br /&gt;
losetup -a  # show currently used loop devices&lt;br /&gt;
&lt;br /&gt;
losetup /dev/loop1 /root/vd1  # use an unused loop device&lt;br /&gt;
losetup /dev/loop2 /root/vd2  # use an unused loop device&lt;br /&gt;
losetup /dev/loop3 /root/vd3  # use an unused loop device&lt;br /&gt;
losetup /dev/loop4 /root/vd4  # use an unused loop device&lt;br /&gt;
&lt;br /&gt;
mdadm --create /dev/md2 --level raid10 --raid-devices 4 /dev/loop[1234]  # create md devices (use unused /dev/md?)&lt;br /&gt;
mkfs.ext3 /dev/md2  # format as ext3&lt;br /&gt;
mount /dev/md2 /mnt/md2  # mount if you want&lt;br /&gt;
&lt;br /&gt;
Now you can fail virtual disks, stop the array, reassemble it, and so on, to your heart&amp;#039;s content.&lt;br /&gt;
&lt;br /&gt;
# example: reassemble with one disk missing:&lt;br /&gt;
mdadm --stop /dev/md2&lt;br /&gt;
mdadm --assemble /dev/md2 /dev/loop[123]&lt;br /&gt;
&lt;br /&gt;
When you are done, to clean up:&lt;br /&gt;
&lt;br /&gt;
umount /dev/md2   # if mounted&lt;br /&gt;
mdadm --stop /dev/md2&lt;br /&gt;
losetup -d /dev/loop1&lt;br /&gt;
losetup -d /dev/loop2&lt;br /&gt;
losetup -d /dev/loop3&lt;br /&gt;
losetup -d /dev/loop4&lt;br /&gt;
rm /root/vd1&lt;br /&gt;
rm /root/vd2&lt;br /&gt;
rm /root/vd3&lt;br /&gt;
rm /root/vd4&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Odd Failure on Rebuild ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: irq_stat 0x40000001&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: failed command: READ DMA EXT&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 4 dma 421888 in&lt;br /&gt;
Aug 18 19:17:53 prime kernel:         res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: status: { DRDY ERR }&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: error: { UNC }&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1.00: configured for UDMA/133&lt;br /&gt;
Aug 18 19:17:53 prime kernel: ata1: EH complete&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: irq_stat 0x40000001&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: failed command: READ DMA EXT&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 5 dma 421888 in&lt;br /&gt;
Aug 18 19:17:56 prime kernel:         res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: status: { DRDY ERR }&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: error: { UNC }&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1.00: configured for UDMA/133&lt;br /&gt;
Aug 18 19:17:56 prime kernel: ata1: EH complete&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: irq_stat 0x40000001&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: failed command: READ DMA EXT&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 6 dma 421888 in&lt;br /&gt;
Aug 18 19:17:59 prime kernel:         res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: status: { DRDY ERR }&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: error: { UNC }&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1.00: configured for UDMA/133&lt;br /&gt;
Aug 18 19:17:59 prime kernel: ata1: EH complete&lt;br /&gt;
Aug 18 19:18:01 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0&lt;br /&gt;
Aug 18 19:18:01 prime kernel: ata1.00: irq_stat 0x40000001&lt;br /&gt;
Aug 18 19:18:01 prime kernel: ata1.00: failed command: READ DMA EXT&lt;br /&gt;
Aug 18 19:18:01 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 7 dma 421888 in&lt;br /&gt;
Aug 18 19:18:01 prime kernel:         res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)&lt;br /&gt;
Aug 18 19:18:01 prime kernel: ata1.00: status: { DRDY ERR }&lt;br /&gt;
Aug 18 19:18:01 prime kernel: ata1.00: error: { UNC }&lt;br /&gt;
Aug 18 19:18:02 prime kernel: ata1.00: configured for UDMA/133&lt;br /&gt;
Aug 18 19:18:02 prime kernel: ata1: EH complete&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: irq_stat 0x40000001&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: failed command: READ DMA EXT&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 8 dma 421888 in&lt;br /&gt;
Aug 18 19:18:04 prime kernel:         res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: status: { DRDY ERR }&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: error: { UNC }&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1.00: configured for UDMA/133&lt;br /&gt;
Aug 18 19:18:04 prime kernel: ata1: EH complete&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: irq_stat 0x40000001&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: failed command: READ DMA EXT&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: cmd 25/00:38:b3:01:43/00:03:75:00:00/e0 tag 9 dma 421888 in&lt;br /&gt;
Aug 18 19:18:07 prime kernel:         res 51/40:00:ba:04:43/00:00:75:00:00/00 Emask 0x9 (media error)&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: status: { DRDY ERR }&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: error: { UNC }&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1.00: configured for UDMA/133&lt;br /&gt;
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Unhandled sense code&lt;br /&gt;
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE&lt;br /&gt;
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]&lt;br /&gt;
Aug 18 19:18:07 prime kernel: Descriptor sense data with sense descriptors (in hex):&lt;br /&gt;
Aug 18 19:18:07 prime kernel:        72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00&lt;br /&gt;
Aug 18 19:18:07 prime kernel:        75 43 04 ba&lt;br /&gt;
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed&lt;br /&gt;
Aug 18 19:18:07 prime kernel: sd 0:0:0:0: [sda] CDB: Read(10): 28 00 75 43 01 b3 00 03 38 00&lt;br /&gt;
Aug 18 19:18:07 prime kernel: md/raid:md4: Disk failure on sda3, disabling device.&lt;br /&gt;
Aug 18 19:18:07 prime kernel: md/raid:md4: Operation continuing on 4 devices.&lt;br /&gt;
Aug 18 19:18:07 prime kernel: ata1: EH complete&lt;br /&gt;
Aug 18 19:18:07 prime kernel: md: md4: recovery done.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In short: the drive kept returning an uncorrectable read error (UNC) at the same sector on every retry, auto-reallocation failed, and md disabled sda3 on md4 while continuing on the remaining 4 devices.&lt;/div&gt;</summary>
		<author><name>Kenneth</name></author>
	</entry>
</feed>