[clue-admin] Basement hosting.
David L. Anselmi
anselmi at anselmi.us
Mon Feb 20 21:09:07 MST 2012
So, as far as things kept in git go, it may be very easy to replicate our web site (I'm talking about
what is, not what may be) to anyone who wants it. A process to update our server from git should be
usable by anyone, so all the pieces might happen regardless (making the basement hosting
"application" almost free).
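To make that concrete, here is a minimal sketch of what the "update from git" process could look like on any node, assuming the whole site lives in one repository. The repo URL and mirror path below are made up, not CLUE's actual locations:

```python
#!/usr/bin/env python3
"""Refresh a local mirror of the site from git.
The repo URL is a placeholder, not a real CLUE repository."""
import subprocess
from pathlib import Path

REPO = "git://git.example.org/clue-site.git"   # hypothetical URL
MIRROR = Path.home() / "clue-site"

def update_command(already_cloned: bool) -> list:
    """Return the git command that brings the mirror up to date."""
    if already_cloned:
        # Fast-forward only: a mirror should never diverge from upstream.
        return ["git", "-C", str(MIRROR), "pull", "--ff-only"]
    return ["git", "clone", REPO, str(MIRROR)]

def update_mirror() -> None:
    """Clone on first run, fast-forward pull after that."""
    subprocess.run(update_command((MIRROR / ".git").is_dir()), check=True)
```

Dropped into cron, something like this keeps every basement copy in sync with whatever was last pushed, and anyone can run the same script against their own checkout.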
I already mentioned that using git to distribute copies of data came from here:
http://zgp.org/pipermail/linux-elitists/2009-February/012763.html
And if you read carefully, it's actually using git to replace SMTP (and in our case HTTP as well),
though it hooks into those to support traditional users. But we could put the archives and web
pages in git and make them local for all our members. (So we'd need instructions, a better
architecture, maybe hooks into clients, and a toehold on the web for the uninitiated. But if I
ever start an invite-only group it might work like that.)
But I don't think we have answers for the interesting questions yet. What is actually required to
pull this off? (I wrote up a bunch of history and then realized it wasn't complete. But since it
was already too long I removed it. I'll bring it up if it matters.)
There are some constraints to work with: we didn't (until Dan came along) have much skilled time to
run things. If our list or web pages go down we cease to exist (it could be one or the other, so
we'd "half exist"). If people have to figure out something as difficult as "which is the main list,"
they won't join (we think).
So we think that we don't have enough admin time to maintain a file-based site (hence the Drupal
effort).
I suspect that manually updated round-robin DNS is not resilient enough for what we want. If nodes
are unavailable it would be good to notice sooner than I will, and to update DNS automatically. I'm
also not sure that DNS is the right way to distribute a service like this. Does round-robin work
well enough (given caching behavior we don't control) to balance load and route around failures?
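The checking half of that is cheap even if the DNS update itself is provider-specific. A rough sketch, with made-up node addresses and health-check URLs, of deciding which IPs still belong in the round-robin record:

```python
"""Poll each node over HTTP and report which IPs should stay in the
round-robin A record. Nodes and URLs below are hypothetical; the actual
DNS update (nsupdate, a registrar API, etc.) is left out."""
import urllib.request

NODES = {  # hypothetical basement nodes (TEST-NET addresses)
    "192.0.2.10": "http://192.0.2.10/index.html",
    "192.0.2.20": "http://192.0.2.20/index.html",
}

def healthy(url: str, timeout: float = 5) -> bool:
    """True if the node answers with HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout -- treat all as down.
        return False

def live_ips(nodes: dict) -> list:
    """The IPs that should currently appear in the round-robin record."""
    return sorted(ip for ip, url in nodes.items() if healthy(url))
```

What this doesn't solve is pushing the result back into DNS with a TTL low enough that resolvers actually notice — which is exactly the caching behavior we don't control.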
There ought to be some checking that nodes are serving correct data. There's a lot more that could
be done and we shouldn't try to save the world all at once, but I think we need some mechanism to
ensure we notice unapproved changes to a node.
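One cheap mechanism for that: fetch what each node actually serves and hash it against the approved copy — say, the corresponding file in a trusted git checkout. A sketch (the fetch URL and file layout are assumptions):

```python
"""Compare what a node serves against the approved copy of the same page."""
import hashlib
import urllib.request
from pathlib import Path

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def page_matches(served: bytes, approved: bytes) -> bool:
    # A mismatch means the node changed outside the git workflow, or is
    # lagging behind the latest push -- either way, someone should look.
    return sha256(served) == sha256(approved)

def check_node(url: str, approved_file: Path, timeout: float = 10) -> bool:
    """Fetch one page from a node and verify it against the git copy."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return page_matches(resp.read(), approved_file.read_bytes())
```

Run from a machine the node operators don't control, this turns "notice unapproved changes" into a cron job rather than a hope.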
Probably this sort of thing has been done before. Anyone have any examples? If not we should make
sure we document well enough to write a paper on it.
I saw some things on OASIS (look under 2006 at [1]), which may have gone no further than research.
The first paragraph of this [2] review makes it sound like what we want (but the devil is in the
details).
1) http://www.cs.princeton.edu/~mfreed/publications/
2) http://www.arl.wustl.edu/~jst/reInventTheNet/?p=150
And related to the Freedom Box project, Eben Moglen has some sort of basement-hosted Facebook going
on (he wouldn't call it that, because Freedom Boxes don't (necessarily) go in your basement).
Don't take this as trying to discourage experimentation. I'm trying to discourage excessive
expectations of seeing it in production soon.
And to up the ante: if we can manage this with the web site, I think we can manage it with the
mailing list too (parts are easier, parts are harder), even using the existing Mailman software.
Dave