[clue] HDD "shell game" issue

Jim Ockers ockers at ockers.net
Sun Jun 12 14:18:43 MDT 2016


When you boot with the new drive in place, what is its block device?  Is 
it /dev/sdb or some other device name?
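
For example, lsblk should show what name the kernel gave the new drive 
(the SERIAL column helps match device names to physical disks), and 
dmesg shows what happened when you hot-swapped it:

   lsblk -o NAME,SIZE,SERIAL,MOUNTPOINT   # block devices with serials
   dmesg | grep 'sd[a-e]'                 # kernel log lines for the sd drives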

I understand from your email that your server will boot with the new 
drive installed if you comment out the fstab entries that reference 
filesystems on /dev/sdb.  If the new drive becomes /dev/sdb, could you 
boot without the sdb lines in fstab, format the drive the way fstab 
expects while the server is up, uncomment the sdb lines in /etc/fstab, 
and run 'mount -a' to mount them?  Once you're sure everything is OK, 
reboot the server and it should come up normally.  Did you try that?
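
The sequence would look roughly like this -- assuming the new drive 
really is /dev/sdb and that fstab expects a single ext4 filesystem on 
/dev/sdb1 (adjust the partitioning & filesystem type to whatever your 
fstab actually lists):

   # ASSUMPTION: one ext4 partition spanning the whole drive
   parted -s /dev/sdb mklabel gpt
   parted -s /dev/sdb mkpart primary ext4 0% 100%
   mkfs.ext4 /dev/sdb1

   # uncomment the sdb lines in /etc/fstab, then mount everything
   mount -a
   df -h   # confirm the new filesystem mounted where fstab expects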

If the new drive simply won't become /dev/sdb and you need it to, then 
maybe a persistent block device name is pinned somewhere.  You could 
look for & edit the udev configs to get rid of the old sdb device 
config; see 
https://wiki.archlinux.org/index.php/persistent_block_device_naming for 
background.  It seems like I had to do something like this once, but 
it's been so long that I've forgotten the specifics.  I do know I have 
to edit udev configs to make ethernet cards get the right ethX id.
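
Either way, the more durable fix is probably the one you were already 
heading toward: stop depending on the sdX names and switch your fstab 
entries to UUIDs.  A rough sketch -- the udev paths vary by distro, and 
/dev/sdb1 is just my assumption about your partition layout:

   # look for any stale udev rule that still references the old drive
   grep -r sdb /etc/udev/rules.d/

   # get the UUID of the newly created filesystem
   blkid /dev/sdb1

   # then in /etc/fstab, replace the device path with the UUID, e.g.
   # (hypothetical UUID and mount point -- substitute your own):
   # UUID=0a1b2c3d-...  /data  ext4  defaults  0 2

   mount -a   # re-check that everything mounts cleanly

The ethernet naming I mentioned lives in 
/etc/udev/rules.d/70-persistent-net.rules on many distros, if you want 
an example of what a pinned udev name looks like.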

Jim

On 6/12/16 12:48 PM, foo7775 at comcast.net wrote:
> Hi All,
>
>   I've run into a situation that has me puzzled (& a little bit 
> humbled, actually).  I have a server that started out with five 
> drives, sda - sde (no RAID, no LVM).  The sde drive has been throwing 
> alerts for a few days, so I went in to the DC to replace it.  The 
> drives are hot-swappable, so I didn't expect much trouble - but when I 
> inserted the new drive, the server appeared not to detect it - and 
> started to generate alerts for the sdb drive as well.  (I'm sure that 
> at least a couple of you can probably see where this is going...)
>
>    After puzzling over the problem for a little while, I found that 
> the new drive had been assigned to the sdb slot, rather than sde as I 
> had expected.  At that point, I started trying to collect info so that 
> I could recreate the fstab file using mount-by-UUID - but regardless 
> of what configuration I try, I can't seem to get the original sdb 
> drive to reappear:
>
>    If I boot with the drive slot open, I get sda, sdc & sdd.
>
>    If I boot with the drive slot holding the bad/original drive, the 
> server shows sda through sdd.
>
>    If I boot with the drive slot holding the new/unformatted drive, 
> the boot process fails when it's unable to complete the filesystem 
> checks (the server's been up for >450 days), & I had to boot from a 
> rescue drive & comment out the sdb entry in fstab to get the server 
> to finish booting.
>
>   The worst part of dealing with this is that I *know* that it's not a 
> really complex issue - but I am just not seeing how to restore the 
> server to proper function (which has me feeling distinctly inept).  
> I'm sure that there's data that I've forgotten to provide here, so if 
> anyone has any questions, I'll do my best to answer them.
>
> I would really appreciate any thoughts/suggestions that anyone can 
> provide.
>
> Thanks in advance.
>
>
> _______________________________________________
> clue mailing list: clue at cluedenver.org
> For information, account preferences, or to unsubscribe see:
> http://cluedenver.org/mailman/listinfo/clue
