[clue-admin] CVS Publishing Script

Jed S. Baer thag at frii.com
Fri Dec 17 00:25:33 MST 2004


On Thu, 16 Dec 2004 19:44:42 -0700
David Anselmi wrote:

> Jed S. Baer wrote:
> [...]
> > This all works, except for the step of moving the symbolic link. I get
> > no errors from the 'ln -sf' command (or 'ln -s -f' either). But the
> > link stays put on the old directory. I can't figure out why this is.
> > 
> > If I copy the ln command from the webserver output, and paste it to
> > the command line, it works.
> 
> On my system I can't reproduce what you describe as "working from the 
> command line".
> 
> Here's what I tried:
> 
> ln -sf A WEBSITE ; ln -sf B WEBSITE
> 
> You'd expect this to make a symbolic link to directory A and then change
> it to point to directory B.  If A and B were regular files that's what 
> it does.  But since they are directories the second command creates a 
> link named B in the directory WEBSITE (which is actually A).  The man 
> page seems clear enough if you realize the second command will 
> dereference WEBSITE.

I'll check that out tomorrow (err, later today).

> Looks like what you want is to add -n to avoid the dereference.

Hmmm.

Except that, as I reported, it does, in fact, replace the symlink when I
do it at the command line. But maybe that has something to do with what
you mention next.
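
For reference, here's a scratch-directory sketch of the dereference behavior David describes; the directory names are made up, and the -n spelling is GNU coreutils (BSD ln calls it -h):

```shell
cd "$(mktemp -d)"          # throwaway sandbox
mkdir A B
ln -sf A WEBSITE           # WEBSITE -> A
ln -sf B WEBSITE           # dereferences WEBSITE: creates A/B, link unchanged
readlink WEBSITE           # still "A"
rm A/B
ln -sfn B WEBSITE          # -n treats the existing link itself as the target
readlink WEBSITE           # now "B"
```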

> The nice thing about repointing the symlink is that it's atomic.

Yeah, part of what I was hoping for.

> But 
> I'm not sure that matters since old files may still be open.  So maybe 
> just renaming directories is good enough (dunno though, I don't see any 
> reason not to do it this way).

Well, with renaming the directories, i.e.:
  mv html old_html
  mkdir html
  cd html
  cvs export ...
  rm -r ../old_html

Hmm. Since ext3 should delete the files only after their reference count
drops to zero, yeah, that should actually work, shouldn't it?
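
If it does work, the sequence could also be reordered so the new tree is built before the swap, shrinking the window where html doesn't exist. A rough sketch, with placeholder paths and the cvs step stubbed out so it runs without a repository:

```shell
set -e                                   # bail out if any step fails
DOCROOT=$(mktemp -d)                     # stand-in for the real web root
cd "$DOCROOT"
mkdir -p html && echo old > html/index.html   # pretend this is the live site
mkdir html.new
# (cd html.new && cvs export -r HEAD website)  # real export would go here
echo new > html.new/index.html           # stand-in for the exported files
mv html html.old                         # live dir is gone only for an instant
mv html.new html
rm -rf html.old     # ext3 frees the blocks once the last open handle closes
```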

> In that case though, I'd export straight
> to $NEWDIR.  What do you gain by exporting to /tmp and then tar'ing 
> across (which could also be a cp it seems)?

The reason for that is that the actual html directory is one-deep in the
website project. If I could do 'cvs export website/html' then I'd get the
files where I want them. Uh, I haven't tried to do that, because the CVS
docs don't indicate that export can be made selective that way. I use tar
because in my experience 'cp -R *' doesn't pick up hidden files -- at
least in the base source dir.
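
For what it's worth, a small sketch of the dot-file difference (throwaway paths; 'cp -R *' misses them because the shell glob skips dot files, while copying the directory's contents via '.' does not):

```shell
src=$(mktemp -d); dst=$(mktemp -d)       # placeholder source and target
touch "$src/.htaccess" "$src/index.html"
cp -R "$src/." "$dst"                    # '.' picks up hidden files too
# a tar pipe should behave the same:
# (cd "$src" && tar cf - .) | (cd "$dst" && tar xf -)
```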

> Do you really want -D today in the export?  Seems like you can only do 
> one update a day then.  Fine if you sync the web site from a daily cron 
> job, but not what you want if you want your change up now.

Well, that's on my list of stuff to experiment with. My initial testing
indicates that it will pick up subsequent updates, or more precisely, very
recent updates on the same day -- which surprised me, since that's not how
I read the CVS docs. But if it winds up not working in all cases, I do
know that '-D tomorrow' also works, even though the CVS man page mentions
that -D takes 'a date in the past'. Needs more testing.

> You could 
> use -r HEAD (or perhaps that's the default).  For a more formal system 
> (with tags on the actual release code) you'd want to accept arbitrary -r
> arguments (probably not necessary here).

Oh, I didn't know I could specify HEAD as a tag. Yeah, I remember reading
about that as a tag, it just didn't click. Thanks.

And, specifying arbitrary tag names is exactly what I did with the
development site, but that has some other considerations.

> Finally, you might want to add some error checking/handling.  You have a
> comment that says "if successful, remove the tmp export" but you didn't 
> actually check that anything succeeded.

Oh yeah. I'm just trying to get past the 'proof of concept' at the moment.
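
When I get there, a minimal version of that check might look like this (made-up placeholder paths, export step stubbed out):

```shell
set -e                                  # abort on any failing command
TMPEXP=$(mktemp -d)                     # stand-in for the temp export dir
DEST=$(mktemp -d)                       # stand-in for the web root
touch "$TMPEXP/file"                    # stand-in for the cvs export step
if cp -R "$TMPEXP/." "$DEST"; then
    rm -rf "$TMPEXP"                    # "if successful, remove the tmp export"
else
    echo "copy failed; leaving $TMPEXP for inspection" >&2
    exit 1
fi
```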

> I use rsync on my web site (and I sync across the network out of a CVS 
> working copy rather than a temp export, with some paranoia checks thrown
> in).  There's an rsync option to delete target files that don't exist in
> the source.  I guess that's a little overkill for this case, though 
> rsync over ssh might give you better security than the .htaccess 
> protected page we used to use (maybe you've already addressed that).

I thought about using rsync. But I think it would wind up with extra
steps, somehow. I'm not sure of that, but I also wonder how it'd work with
a source copy that gets deleted and recreated repeatedly. I haven't looked
at the rsync behavior in detail. It seems like more complexity than we
need.

> I'm interested in hearing other ideas.

So am I.

jed
-- 
http://s88369986.onlinehome.us/freedomsight/

... it is poor civic hygiene to install technologies that could someday
facilitate a police state. -- Bruce Schneier


