On Sunday 13 August 2006 09:55, Tim Shoppa wrote:
> Recent posts on the subject of "modern" logic families and PCBs make
> me think of an obvious trend in computing over the past several
> decades: Power density (and the required cooling/heat dissipation)
> has grown greatly.
Sort of. CMOS is approaching the power density level of water-cooled
bipolar logic, but it's still not there yet...
> A desktop PC of 20 years ago often had no fan, or if it had one it
> was just to generally keep air moving through the case and not to
> cool any specific heat-producing sections.
>
> Of course modern desktop PCs (since at least the early/mid '90s)
> have vastly greater heat production and cooling requirements, with
> CPU heat sinks and fans being vital to reliability.
Yes, there's now a lot more FLOPS available to the user/programmer.
> At the same time, and a subject of increasing frustration for me,
> the number of computers required to do a given task has gone up
> exponentially. Tasks that used to (meaning 20 or 30 years ago)
> require a single PDP-8 or PDP-11 class minicomputer now use dozens
> to hundreds of PC clones to do the same functions. The heat
> production (and power and cooling requirements) of all the
> resulting PC clones is hugely higher.
While at work we might use a cluster of a hundred or so Dell machines
to replace what was done by a single PDP-11 or VAX 11/780 in the late
'70s/early '80s, with the stuff we do we get a lot faster turnaround
on jobs, and the jobs have increased in complexity (and usefulness)
exponentially along with the CPU horsepower. In fact, there are
people talking about (and a few actually doing it now) using the GPUs
on video cards to do some amount of useful work.
In some cases, we actually have the same Fortran code that was running on
machines 20-30 years ago still being used on modern stuff (and modified along
the way to deal with each successive new "supercomputing" technology).
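To give a feel for what survives that long, here's a hypothetical
fragment in the FORTRAN 77 style those codes were written in -
everything in it is made up, but a kernel like this compiles
unchanged on compilers decades apart:

      PROGRAM RELAX
C     Hypothetical example of the kind of F77 numerical kernel that
C     outlives its hardware: a simple Jacobi relaxation sweep over a
C     2-D grid.
      INTEGER NX, NY
      PARAMETER (NX=64, NY=64)
      REAL U(NX,NY), V(NX,NY)
      INTEGER I, J, ITER
C     Start from a zero grid with one hot spot in the middle.
      DO 10 J = 1, NY
         DO 10 I = 1, NX
            U(I,J) = 0.0
   10 CONTINUE
      U(NX/2,NY/2) = 100.0
C     Relaxation sweeps: each interior point becomes the average of
C     its four neighbours, then the updated grid is copied back.
      DO 40 ITER = 1, 100
         DO 20 J = 2, NY-1
            DO 20 I = 2, NX-1
               V(I,J) = 0.25*(U(I-1,J)+U(I+1,J)+U(I,J-1)+U(I,J+1))
   20    CONTINUE
         DO 30 J = 2, NY-1
            DO 30 I = 2, NX-1
               U(I,J) = V(I,J)
   30    CONTINUE
   40 CONTINUE
      WRITE(*,*) 'CENTER VALUE:', U(NX/2,NY/2)
      END

The interesting part is what's NOT in it: nothing about the machine
it runs on, which is why the "modified along the way" work tends to
land at the edges (I/O, parallel decomposition) rather than in the
numerics.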
Of course, with more moving parts, all made in eastern Asia, the biggest
challenge I have now is trying to keep the individual machines in a cluster
running as close to 24x7 as possible...
In fact, another difference between now and 20 years ago is that we
used to be an 8-5, M-F shop, with users only able to use our systems
for a subset of 24 hours per day. Now we have to keep everything
running 24x7, or users start complaining - and with some code that
runs for as long as 30 days at a time (720-hour jobs) across a whole
bunch of systems, even a single weekly (or monthly) fixed downtime
period isn't acceptable to our users.
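For jobs that long the usual defense is checkpoint/restart, so that a
rebooted node costs you hours of work and not a month. A hypothetical
sketch of the pattern (the file name, interval, and "state" are all
invented):

      PROGRAM LONGJB
C     Hypothetical checkpoint/restart skeleton for a very long job:
C     periodically dump the loop counter and state to a file, and on
C     startup resume from that file if one exists.
      INTEGER NSTEP, ICKPT, ISTEP, ISTART
      PARAMETER (NSTEP=1000000, ICKPT=10000)
      DOUBLE PRECISION STATE
      LOGICAL THERE
      ISTART = 1
      STATE = 0.0D0
C     If a checkpoint file is present, pick up where we left off.
      INQUIRE(FILE='ckpt.dat', EXIST=THERE)
      IF (THERE) THEN
         OPEN(10, FILE='ckpt.dat', FORM='UNFORMATTED')
         READ(10) ISTART, STATE
         CLOSE(10)
      ENDIF
      DO 20 ISTEP = ISTART, NSTEP
C        Stand-in for the real month-long computation.
         STATE = STATE + 1.0D0/DBLE(ISTEP)
C        Every ICKPT steps, rewrite the checkpoint file.
         IF (MOD(ISTEP,ICKPT) .EQ. 0) THEN
            OPEN(10, FILE='ckpt.dat', FORM='UNFORMATTED')
            WRITE(10) ISTEP+1, STATE
            CLOSE(10)
         ENDIF
   20 CONTINUE
      WRITE(*,*) 'FINAL STATE:', STATE
      END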
> I personally see massive government/military contractor computer
> projects turn into a race to buy the fastest/biggest/best/most
> computers with little regard as to whether you need hundreds of
> blade servers to run a single web site or mail server. It is also
> frustrating to see PeopleSoft/Oracle/Microsoft sell thousands and
> thousands of licenses at a cost of hundreds of millions of dollars
> when the same function used to be done by a single PDP-11 with a
> couple of RK05's!
I'm not sure that it's REALLY the same function as before. While some
stuff (like Windows) seems to get more bloated for no good reason,
there are lots of things that have good reason to be much more
complex than they were decades ago. Tax laws, for instance, aren't
getting any simpler to deal with, which means more complex
software... and having software that's more flexible increases its
complexity.
Pat
--
Purdue University ITAP/RCAC --- http://www.rcac.purdue.edu/
The Computer Refuge         --- http://computer-refuge.org