[BlueOnyx:11115] Re: SL6.2 no boot from degraded RAID1... with fix... BTW 6.3 is OK

Rickard Osser rickard.osser at bluapp.com
Tue Aug 7 02:11:40 -05 2012


Hi,

just a stupid question: is this bug related to the GRUB info not being
written correctly onto both disks?
It wouldn't start from the first disk either if something were wrong
inside the initramfs, and from what I can see the RAID itself seems to
be intact?
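
Just for reference, this is the kind of check I mean -- the device and
array names below are only placeholders, adjust them to the actual setup:

  cat /proc/mdstat                # overall RAID state
  mdadm --detail /dev/md0         # is the boot array clean or degraded?
  # crude check whether a GRUB stage1 sits in each disk's MBR:
  dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub
  dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -i grub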

I wrote a hack for BQ back in the CentOS 4 days because
grub-install/initrd had problems writing the correct GRUB info on some
systems where I was running RAID1 for /boot and RAID5 for the rest.
This hack is included in all BlueOnyx releases and seems to correctly
re-write GRUB on each shutdown/reboot.
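
Roughly, what the hack does on shutdown is re-install the GRUB stage1
into the MBR of every RAID1 member via the legacy grub shell. A minimal
sketch of that idea (not the actual script -- /dev/sda, /dev/sdb and the
(hd0,0) boot partition are just placeholders):

  # re-write GRUB stage1 into the MBR of both RAID1 members,
  # so that either disk can boot on its own
  for disk in /dev/sda /dev/sdb; do
      printf 'device (hd0) %s\nroot (hd0,0)\nsetup (hd0)\nquit\n' "$disk" \
          | /sbin/grub --batch
  done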

I'll run a test later today to see if this is the problem; if it is,
then we already have a fix for it.

Best regards,

Rickard

On Tue, 2012-08-07 at 05:13 +0200, Michael Stauber wrote:
> Hi Gerald,
> 
> > It appears to be fixed in CentOS-6.3 version of BlueOnyx
> 
> Yeah, I have read that. The confusing part is that Scientific Linux is
> doing a staggered release at the moment.
> 
> The official OS version is SL-6.2. SL-6.3 is available for testing as
> release candidate, but it is not finished yet.
> 
> However, many RPMs that have been pushed to the SL-6.2 YUM repository in
> the last couple of weeks are already taken straight from SL-6.3.
> 
> So we have a bit of a hybrid at the moment, containing bits and
> pieces from SL-6.2 and some from SL-6.3.
> 
> To make things more confusing:
> 
> CentOS has 6.3 out, but a fair chunk of their 6.3 RPMs off the mirror
> are older than the corresponding 6.3 RPMs that SL serves right now for
> SL-6.2.
> 
> From what I have gathered so far by looking at the bug tickets and the
> outlined resolutions:
> 
> Yes, this is fixable by using the workaround you outlined in your
> initial posting. However, I dare not publish an automatic fix that
> messes with rebuilding the initramfs. There is just too much that can
> go wrong while running this workaround unobserved somewhere during a
> hastily cobbled together update. In fact, I would still feel
> uncomfortable fixing this in an update of my own even if I had two
> days (and the spare hardware) to test it fully under all imaginable
> scenarios.
> 
> At this time we have about 2500 BlueOnyx 5107R and 5108R installs out
> there. There is no telling how many of them use SL or CentOS, or how
> many use RAID1 to begin with. A fair chunk of these 2500 installs
> could be VPSes and/or could be using no RAID or hardware RAID. So
> determining the actual need is impossible.
> 
> All in all I'd rather wait until SL fixes this during the course of a
> normal update. The release of SL-6.3 is pretty much overdue at this time
> anyway.
> 
> Those who are in a hurry to get their SL-based BlueOnyx 5107R and
> 5108R installs with RAID1 fixed can of course apply the workaround.
> 
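
For anyone applying it by hand: I haven't checked Gerald's exact steps,
but on EL6 rebuilding the initramfs with the local mdadm.conf baked in
usually looks something like the following. Treat it only as a sketch of
the general approach, not necessarily the workaround from the initial
posting, and back up the old image first:

  # keep a copy of the current initramfs in case the new one misbehaves
  cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.bak
  # rebuild it, pulling in /etc/mdadm.conf so the RAID can be assembled at boot
  dracut --force --mdadmconf /boot/initramfs-$(uname -r).img $(uname -r)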





More information about the Blueonyx mailing list