On Thu, Apr 29, 2010 at 7:19 PM, Philip Pemberton
<classiccmp at philpem.me.uk> wrote:
"Cory Doctorow tells us that '[i]n 2007, John
Goerzen scraped every gopher
site he could find (gopher was a menu-driven text-only precursor to the Web;
I got my first online gig programming gopher sites). He saved 780,000
documents, totalling 40GB. Today, most of this is offline, so he's making [...]"
That's heroic and all, but how is a 2007 gophergrab(tm) at all
representative? I'm surprised there was anything left at all by that
point (aside from retro-sites).