Sorry if this comes through twice. My mailer is acting up.
I know better than to continue an OT thread, but there are
a few points worth considering here.
Cool down, there. I was merely observing that
connectivity (being
the highway) has trailed Moore's Law (at least in the US) for quite
some time.
True in the way Moore's Law is usually thought of. But when
he stated his "law" Moore wasn't talking about speed; he
was talking about the number of transistors. He foresaw
the rate at which transistor sizes could be shrunk. Now,
for a long time, making the transistors smaller made them
faster and the overall CPU was also faster. That's less the
case than it used to be, in part because the on-chip interconnects
don't benefit as much. On top of that, we just kept making
the discrepancy between CPU and memory speeds greater
and greater, putting more and more demand on the cache.
So rather than using the additional transistors to create an
implementation that's even faster (e.g. increased pipelining),
we're now using them for additional cache and for multiple
cores. And that's where we start to hit Amdahl's Law. As the
parallel guys have known for years, not all tasks parallelize
nicely. There's probably not much to gain from implementing
a word processor with multiple concurrent threads. So
what does all this mean? Even though the physicists are
clever enough to keep the "end of Moore's Law" always
10 years in the future (which I've been hearing for at least
20 years), Moore's Law doesn't necessarily buy us the
performance increase we're used to. And that's before
we even look at software bloat.
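To put a number on the Amdahl's Law point, here's a quick
sketch. The serial fractions below are made-up illustrations,
not measurements of any real program:

```python
# Amdahl's Law: overall speedup is capped by the serial fraction
# of a task, no matter how many cores you throw at it.
# The fractions used here are illustrative assumptions only.

def amdahl_speedup(serial_fraction, n_cores):
    """Speedup when the parallel portion runs on n_cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# A task that is 50% serial (say, that word processor) barely
# benefits from extra cores:
print(amdahl_speedup(0.50, 8))     # ~1.78x on 8 cores
print(amdahl_speedup(0.50, 1000))  # never exceeds 2x

# A mostly parallel task (5% serial) does much better:
print(amdahl_speedup(0.05, 8))     # ~5.93x on 8 cores
```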
Given that the bulk of home PC use is net-based, there
will come a time when there is simply not enough
bandwidth *commonly* available to feed a petaflop machine.
Regardless of the needs of home hobbyists, who don't drive production
of consumer PCs, most people I know still use the box primarily
for email and web browsing. I don't consider amateur astronomy to
be in the "killer app" class.
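For what it's worth, a back-of-envelope sketch of that
bandwidth gap. The bytes-per-flop appetite and the link speed
are assumptions for illustration; real workloads vary widely:

```python
# Rough comparison of a petaflop machine's data appetite vs. a
# home link. Both figures below are illustrative assumptions.

PETAFLOP = 1e15        # floating-point operations per second
BYTES_PER_FLOP = 0.1   # assumed network bytes needed per operation
HOME_LINK_BPS = 10e6   # an optimistic 10 Mbit/s home connection

needed_bps = PETAFLOP * BYTES_PER_FLOP * 8  # bits/s required
shortfall = needed_bps / HOME_LINK_BPS
print(f"Link would need to be about {shortfall:.0e}x faster")
```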
There is one class of user that's large enough, and willing
enough to fork out dollars, that there's still a point in pushing
performance as much as we can. That's the gamers, the
guys that keep moving toward a day when their game is
so immersive and so realistic that they can't always tell
whether they're in the game or the real world. And it turns
out that some of the things we need to do in those games
do parallelize nicely. So I expect the overkill in system
performance will keep growing relative to uses like
e-mail and browsing. But then the people who
took the web and twisted it from a solid means of
information distribution into an entertainment medium will
find ways to use those cycles.
Now I shall slink off and punish myself with many lashes
of a wet noodle for contributing OT material. My only excuse
is that I mentioned hearing about the end of Moore's law
for 20 years :-)
BLS