On Thu, 29 Apr 2010, Cameron Kaiser wrote:
> > "Cory Doctorow tells us that '[i]n 2007, John Goerzen scraped every
> > gopher site he could find (gopher was a menu-driven text-only
> > precursor to the Web; I got my first online gig programming gopher
> > sites). He saved 780,000 documents, totalling 40GB. Today, most of
> > this is offline, so he's making [...]"
> That's heroic and all, but how is a 2007 gophergrab(tm) at all
> representative? I'm surprised there was anything at all left at that
> point (aside from retro-sites).
Actually, there were still some larger academic sites up then. I archived
a few myself (userserve.ucsd.edu was particularly nostalgic for me, since
it was a little SE/30 with a big disk in AP&M, and I managed to archive it
before it was decommissioned -- I used it as an undergrad).
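For anyone who never saw Gopher in action, a scrape like the ones described
above has very little to parse: a Gopher menu is just lines of tab-separated
fields fetched over TCP port 70 (per RFC 1436). A minimal sketch of the menu
format, using a made-up host and selector for illustration:

```python
# A Gopher menu item is one line of the form (RFC 1436):
#   <type char><display string>\t<selector>\t<host>\t<port>
def parse_menu_line(line: str) -> dict:
    """Split one Gopher menu line into its tab-separated fields."""
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}

# Hypothetical directory entry (type "1" = submenu):
item = parse_menu_line("1Example menu\t/docs\tgopher.example.org\t70")
print(item["host"], item["port"])
```

Fetching an item is equally simple: open a TCP connection to the host and
port, send the selector followed by CRLF, and read until the server closes
the connection.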
That reminds me of my efforts to preserve the contents of a BBS at the
tail end of the BBS era. I managed to archive the contents of Da Warren
of Bakersfield, California shortly before the owner left for college.
Part of it can be seen at http://www.crummy.com/warren/. It was a source
of much silliness.
--
David Griffith
dgriffi at cs.csubak.edu
A: Because it fouls the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?