It was thus said that the Great Patrick Finnegan once stated:
While at work we might use a cluster of a hundred or so Dell machines to
replace what was done by a single PDP-11 or VAX 11/780 in the late 70s/early
80s, with the stuff we do we get a lot faster turnaround on jobs, and the
jobs have increased in complexity (and usefulness) exponentially along with
the CPU horsepower. In fact, there are people talking about (and a few
actually doing it now) using GPUs on video cards to do some amount of useful
work.
GPUs do pretty much nothing but vector operations on large data sets. So
if your problem matches that domain, you can use a GPU, thus leaving the CPU
to tackle other problems.
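For what it's worth, here's a minimal sketch of the kind of thing a GPU eats
for breakfast - an element-wise operation over a big array - assuming a
CUDA-capable card and toolkit; the array size and names are just made up for
illustration:

  /* Minimal CUDA sketch: element-wise vector add over a large array.
     One GPU thread handles one element; the CPU is free to do other work
     while the kernel runs.  Names and sizes are purely illustrative. */
  #include <cuda_runtime.h>
  #include <stdio.h>

  __global__ void vec_add(const float *a, const float *b, float *c, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n)
          c[i] = a[i] + b[i];
  }

  int main(void)
  {
      const int n = 1 << 20;              /* ~1M elements */
      size_t bytes = n * sizeof(float);
      float *a, *b, *c;

      /* managed memory keeps the example short; a real job would move
         data explicitly with cudaMemcpy */
      cudaMallocManaged(&a, bytes);
      cudaMallocManaged(&b, bytes);
      cudaMallocManaged(&c, bytes);

      for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

      /* 4096 blocks of 256 threads covers all n elements */
      vec_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
      cudaDeviceSynchronize();

      printf("c[0] = %f\n", c[0]);        /* expect 3.000000 */

      cudaFree(a);
      cudaFree(b);
      cudaFree(c);
      return 0;
  }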
In fact, another difference between now and 20 years ago is that we used to
be an 8-5, M-F shop, with users only being able to use our systems for a
subset of 24 hours per day. Now, we have to keep everything running 7x24, or
users start complaining - and with some code that runs for as long as 30 days
at a time (720-hour jobs) across a whole bunch of systems, even a single
weekly (or monthly) fixed downtime period isn't acceptable to our users.
Google could make a killing licensing their GoogleOS (or parts thereof).
They manage to use off-the-shelf PCs (and not even the fastest ones) as
plug-in replacements for their operations. The software technology behind it
is very impressive.
-spc (But I suspect this is getting to be a bit off topic ...)