[CLUE-Tech] RAID 1 on Linux

Chris Dos chris at chrisdos.com
Wed Oct 20 10:50:51 MDT 2004


Carl Schelin wrote:
> Does anyone have a pointer to a document that debunks
> this? Can I in fact, add a second disk and make the
> system RAID 1 or do I have to back it off and
> reinstall?
> 

I had a document for doing this for a while.  I've been running Linux 
software RAID 1 and RAID 5 for a long time with excellent results.  Here 
is a snippet of the document that I've been using:

< --- Begin Snippet --- >
You can only use this method on RAID levels 1 and above.  The idea is
  to install a system on a disk which is purposely marked as failed in
  the RAID, then copy the system to the RAID, which will be running in
  degraded mode, and finally make the RAID use the no-longer-needed
  ``install-disk'', zapping the old installation but making the RAID
  run in non-degraded mode.


  o  First, install a normal system on one disk (that will later become
     part of your RAID). It is important that this disk (or partition)
     is not the smallest one. If it is, it will not be possible to add
     it to the RAID later on!

  o  Then, get the kernel, the patches, the tools etc. etc. You know the
     drill. Make your system boot with a new kernel that has the RAID
     support you need, compiled into the kernel.
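  As a quick sanity check before going further, you can confirm the new
  kernel really has RAID-1 support (file paths are typical for most
  distributions, not guaranteed; output will vary):

```
$ grep CONFIG_MD_RAID1 /boot/config-$(uname -r)
CONFIG_MD_RAID1=y
$ cat /proc/mdstat
Personalities : [raid1]
```

  If /proc/mdstat does not exist at all, the md driver is missing.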

  o  Now, set up the RAID with your current root-device as the failed-
     disk in the raidtab file. Don't put the failed-disk as the first
     disk in the raidtab; that will give you problems with starting the
     RAID. Create the RAID, and put a filesystem on it.  Using the
     fdisk command, make sure you change the partition type from "83"
     (plain Linux) to "fd" (Linux RAID autodetect).  You should use
     identical disks and write down the sizes of the original
     partitions on the first drive.
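  As a sketch, the raidtab for this step might look something like the
  following (the device names are assumptions for illustration: the
  running install is on /dev/hda1, so it is listed last and marked as
  the failed-disk):

```
# /etc/raidtab -- illustrative only; adjust devices to your system
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              32
    device                  /dev/hdb1
    raid-disk               0
    device                  /dev/hda1
    failed-disk             1
```

  Then "mkraid /dev/md0" creates the array running degraded on
  /dev/hdb1 alone, and "mke2fs /dev/md0" puts a filesystem on it.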

  o  Try rebooting and see if the RAID comes up as it should.

  o  Copy the system files in single-user mode, and reconfigure the
     system to use the RAID as root-device. You'll have to copy each
     partition separately:
     cd /
     find . -xdev | cpio -pm /mnt/newroot

  o  When your system successfully boots from the RAID, you can modify
     the raidtab file to include the previously failed-disk as a normal
     raid-disk. Now, raidhotadd the disk to your RAID.
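  That final step, sketched out (device names are the same assumptions
     as before):

```
# 1. In /etc/raidtab, change the failed-disk line for /dev/hda1
#    to raid-disk.
# 2. Make sure /dev/hda1 is partition type fd, then hot-add it:
raidhotadd /dev/md0 /dev/hda1
```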

  o  You should now have a system that can boot from a non-degraded
     RAID.

  o  Check /proc/mdstat to find out when the rebuild is completed.
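     The exact /proc/mdstat format varies by kernel version, but
     during a rebuild it looks roughly like this (device and block
     numbers are made up for illustration):

```
Personalities : [raid1]
md0 : active raid1 hda1[2] hdb1[0]
      19550720 blocks [2/1] [U_]
      [===>................]  recovery = 17.3% (3385600/19550720)
```

     When the rebuild finishes, the status line reads [2/2] [UU].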
< --- End Snippet --- >

Now some notes.  Use the "failed-disk" directive in the raidtab when 
making the array so you can build just the first half of the RAID 1 
array.  The "find . -xdev | cpio -pm /mnt/newroot" command will copy 
everything from one partition/slice to another.  I always make a 
separate 50MB /boot partition.
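So a typical layout ends up looking something like this (device names 
and sizes are illustrative, and the second disk is partitioned 
identically):

```
/dev/hda1   50MB   type fd   ->  /dev/md0   /boot
/dev/hda2   rest   type fd   ->  /dev/md1   /
```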

Use LILO as your boot loader, as you can put boot=/dev/mdX, where X is 
the RAID 1 array that contains your kernel, and LILO will boot from 
either drive in case one has failed.  GRUB cannot do this; you must 
specify a particular drive to boot off of.
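A lilo.conf for this might look something like the following (the 
kernel image name and devices are assumptions; check the lilo.conf man 
page for the raid-extra-boot option and its values):

```
# /etc/lilo.conf -- sketch for a RAID 1 /boot and root
boot=/dev/md0
root=/dev/md0
raid-extra-boot=mbr-only   # also write boot records to each member's MBR
image=/boot/vmlinuz
        label=linux
        read-only
```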

BTW, LILO freaks out if it sees SATA drives (at least with Debian 
Sarge).  So if you have SATA, you must use GRUB.

	Chris


