[BlueOnyx:11430] Re: MD RAID device state="clean, degraded" -- how to fix

Bob Wickline wick at bobwickline.com
Tue Sep 25 21:41:45 -05 2012


On Wed, Aug 29, 2012 at 10:14 AM, Michael Stauber <mstauber at blueonyx.it> wrote:

> Hi Bob,
>
> > I have a Dell server on which I installed 5108.  It has two (2) 1TB drives
> > and one (1) 2TB disk.  When I installed it, I used the option to mirror
> > (soft RAID) across the drives, but when I did, it used all three drives.
> > I used "mdadm" to fail and remove the third (2TB) drive.  (I did not
> > want it to be part of the root mirror.)  Now the device shows "clean, degraded"
> > when I list it with mdadm:
> >
> > # mdadm --detail /dev/md0
> > /dev/md0:
> >          Version : 1.0
> >    Creation Time : Fri Jul 20 07:38:36 2012
> >       Raid Level : raid1
> >       Array Size : 511988 (500.07 MiB 524.28 MB)
> >    Used Dev Size : 511988 (500.07 MiB 524.28 MB)
> >     Raid Devices : 3
> >    Total Devices : 2
> >      Persistence : Superblock is persistent
> >
> >      Update Time : Tue Aug 28 16:25:57 2012
> >            State : clean, degraded
> >   Active Devices : 2
> > Working Devices : 2
> >   Failed Devices : 0
> >    Spare Devices : 0
> >
> >             Name : localhost.localdomain:0
> >             UUID : 8b68d22d:4566147a:621a50a4:6d016578
> >           Events : 325
> >
> >      Number   Major   Minor   RaidDevice State
> >         0       8        1        0      active sync /dev/sda1
> >         3       8       17        1      active sync /dev/sdb1
> >         2       0        0        2      removed
> > #
>
> I don't know the device name of the third disk, but I assume it is /dev/sdc?
>
> If so, did you really do both steps (fail and remove)? I know, you said
> you did, but I'm just making sure. :-)
>
> mdadm --fail /dev/md0 /dev/sdc1
> mdadm --remove /dev/md0 /dev/sdc1
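>
> To double-check that both steps really took effect, something like this
> should show whether /dev/sdc1 is still listed as a member of md0 (just a
> quick sanity check, nothing BlueOnyx specific):
>
> cat /proc/mdstat
> mdadm --detail /dev/md0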
>
> It may also be helpful to zero the superblock of the removed disk, as it
> still contains RAID metadata that mdadm might see:
>
> mdadm --zero-superblock /dev/sdc
>
> If you did all those steps, it still could be that the disk is listed in
> mdadm.conf, so you'd need to update the conf this way:
>
> mdadm --detail --scan >> /etc/mdadm.conf
>
> This appends to /etc/mdadm.conf, so you will have to edit the file
> afterwards to remove duplicate lines. Your new ARRAY lines will be at
> the end of this file.
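>
> For example, something along these lines makes any duplicate ARRAY entries
> easy to spot before you delete the old ones:
>
> grep -n ^ARRAY /etc/mdadm.conf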
>
> For more information you might want to check this URL:
>
> http://www.ducea.com/2009/03/08/mdadm-cheat-sheet/
>
>
Yeah, I'm pretty sure I did both steps.  And yes, the third disk is sdc.  I
tried --zero-superblock, but it didn't recognize the device:

# fdisk -l /dev/sdc

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000c4a89

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      243201  1953512000+  83  Linux
# mdadm --zero-superblock /dev/sdc
mdadm: Unrecognised md component device - /dev/sdc
# mdadm --zero-superblock /dev/sdc1
mdadm: Unrecognised md component device - /dev/sdc1
#
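
Maybe something like this would at least show whether there is any md
metadata left on that disk?  (Just a guess on my part.)

# mdadm --examine /dev/sdc
# mdadm --examine /dev/sdc1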

I compared the contents of mdadm.conf with the output of mdadm --detail --scan,
and they look the same:

# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 metadata=1.0 level=raid1 num-devices=2 UUID=8b68d22d:4566147a:621a50a4:6d016578
ARRAY /dev/md1 metadata=1.1 level=raid1 num-devices=2 UUID=b2bc5b0b:4d3a33ba:39e6a2fb:be3ae14c
# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.1 name=localhost.localdomain:1 UUID=b2bc5b0b:4d3a33ba:39e6a2fb:be3ae14c
ARRAY /dev/md0 metadata=1.0 name=localhost.localdomain:0 UUID=8b68d22d:4566147a:621a50a4:6d016578
#

???
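
One thing I do notice in the --detail output above is that it still says
"Raid Devices : 3", so maybe the array itself still expects three members?
If that is what keeps it degraded, I am guessing something like this would
shrink the expected count to two (untested on my part):

# mdadm --grow /dev/md0 --raid-devices=2
# mdadm --detail /dev/md0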