[CLUE-Tech] Help with remote backup problem

Dan Harris dan at drivefaster.net
Tue Jun 1 23:10:19 MDT 2004


I've recently overhauled my backup strategy and I am running into a snag 
with it.  Perhaps someone else here can shed some light on this for me.

I have 3 servers that need to be backed up.  One of them houses a 40/80 
DLT drive.  I was trying to use GNU tar, but with this much data it was 
taking f-o-r-e-v-e-r to finish.  So I discovered 'star'.  Something 
about this program makes it MUCH faster for network backups. 
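
(My guess is that the difference is star's built-in FIFO, which keeps 
the tape streaming.  If I remember right you can even size the FIFO 
explicitly with something like the line below, though I'm not doing 
that in my script and the fs= spelling is from memory, so treat it as 
a guess:

star -c -f=/dev/nst0 -bs=1024k fs=64m /etc    # fs= sets the FIFO size (from memory)
)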

Anyhow, here's the situation.  On the server with the tape drive, I run 
a nightly perl program that first backs up the local files with star, 
then backs up the two remote servers by using rsh to run star on each 
of them and piping the output back into a local dd that writes it to 
the tape drive. 
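
Since I'm writing to the non-rewinding device (/dev/nst0), the three 
archives should simply land as three consecutive files on the tape. 
Stripped down to shell, the nightly job is basically this (the real 
thing is perl, the mt housekeeping is only my rough recollection of it, 
and the exact star/dd commands are listed further down):

mt -f /dev/nst0 rewind
# 1) star the local files straight to /dev/nst0           -> tape file 0
# 2) rsh pikes 'star ...'    | dd of=/dev/nst0 bs=1024k   -> tape file 1
# 3) rsh crestone 'star ...' | dd of=/dev/nst0 bs=1024k   -> tape file 2
mt -f /dev/nst0 offline

To read one of the later archives back I rewind and then skip forward 
with 'mt -f /dev/nst0 fsf N' before running star.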

The problem I'm having is that when I try to un-star either the 2nd or 
3rd file, I get:

harvard# star -tv -f=/dev/nst0 -bs=1024k | tee /root/filelist
star: Input/output error. Error reading '/dev/nst0'.
star: 0 blocks + 0 bytes (total of 0 bytes = 0.00k).


Here are the commands that are being executed in the backup script:

1) ( local server )  star -c -f=/dev/nst0 -b=1024k <directories to backup>
2) ( remote server )  rsh pikes 'star -c -f=- -b=1024k /etc /usr/home /var/www' | dd of=/dev/nst0 bs=1024k
3) ( remote server )  rsh crestone 'star -c -f=- -b=1024k /etc /var/spool /var/lib' | dd of=/dev/nst0 bs=1024k

Notice that in #2 and #3 'star' is being executed on the remote 
machines and the data is then piped into a local instance of 'dd'.  I 
don't know whether this arrangement is confusing the block sizes 
somehow?  I thought that using 1024k consistently between star and dd 
would keep everything in step, but I guess not?  I've tried different 
tapes and the problem happens on all of them.
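
One thing I haven't actually checked is what record size ended up on 
the tape for the remote archives.  Something like this ought to show it 
(untested, just an idea; in variable-block mode the byte count dd 
reports for a single read should be the size of the first record):

mt -f /dev/nst0 rewind
mt -f /dev/nst0 fsf 1                            # position at the 2nd archive
dd if=/dev/nst0 of=/dev/null bs=1024k count=1    # reported byte count = record size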

Maybe there's a more sensible way to do this rather than using dd?  Any 
ideas are appreciated.

-Dan


