That's fascinating. Take obsolete hardware and architecture (vax), and keep them running! I guess I will never cease to be amazed at the weird ...
Here's the clue to understanding it: software and business logic.
Sure, if the old stuff works, why change? (Even if it -is- obsolete!)
It does indeed make sense.
I do believe at one point you stated that a dx2-66 could beat a VAX 8650 on any application (I don't think those were your exact words; it was something like "crushing" one). This is what people are reacting to; I don't think anyone is arguing about the difference in integer performance.
Thanks for helping me understand. I certainly believe the dx2/66 would
make the 8650 cringe on any app, but I'd want to collect all the data on
the 8650 (or 6500, for that matter) that would completely describe the I/O
busses, memory busses, etc. before I'd say for -certain- that the dx2/66
would kick old vax butt.
> >These are very different machine configurations, and even the latest
> >PC would have no hope of keeping up to a decade old VAX running
> >a large multiuser application.
I don't believe this.
SGI is not selling any NT at this point, and it's not clear that they ever sold very much. The NT experiment at SGI is over, and there are attempts to sell off what they can of it. SGI has gone back to building big servers, and Unix/Linux-based ones. They realized that they weren't going to be able to scale an NT solution, and their strength was in scaling.
Yeah, this is an interesting gamble for sgi. Microsoft, of course,
heavily believes in Win2k, including its scalability. In fact, I think
W2K is going to do very well, as it is really pretty nice (I hate saying
that, but, again, I'm trying to call 'em like I see 'em...).
BUT, I would like the Vax Lover Crowd to acknowledge that the integer performance of their machine is pathetic.
No one in this group ever said it was good. As I said above, the reaction was to your comment about PCs always being better than a VAX regardless of application.
I said a properly-configured PC would whip Vax butt in every case, yes.
But I -don't- believe that a single PC would be the correct -solution- to
deploy, as it is not scalable or redundant. Clusters make more sense.
It's not the speed of the individual bus, it's the number of busses.
That's of course bull.....
Then why does every large-scale system maker build systems with multiple busses? Name one large-scale system that has only one high-speed bus. Surely they all aren't stupid; there must be some reason for doing this.
> >The more busses, the more parallelism and the less waiting.
>
> -IF- the speed of the busses is high enough!
This is a really simple point here. If I have 10 busses and each one runs at 1 Mb/s, is that any better than my one bus that runs at 100 Mb/s? That's the only little point I was trying to make. (Like the point that you can bolt 16 VAX "CI" busses onto a modern PCI bus and still have leftover bus cycles.)
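Just to put that arithmetic in one place, here's a throwaway C sketch that compares the two cases above (ten 1 Mb/s busses versus one 100 Mb/s bus). It assumes zero contention and zero protocol overhead, so it is nothing more than the raw-bandwidth sum, not a model of real bus behaviour:

/* Back-of-the-envelope comparison of aggregate bus bandwidth, using
 * only the figures quoted above: ten 1 Mb/s busses versus a single
 * 100 Mb/s bus.  Zero contention, zero overhead; just the arithmetic.
 */
#include <stdio.h>

int main(void)
{
    const double slow_bus_mbps  = 1.0;   /* each of the ten slow busses */
    const int    slow_bus_count = 10;
    const double fast_bus_mbps  = 100.0; /* the single fast bus         */

    double aggregate = slow_bus_mbps * slow_bus_count;

    printf("%d x %.0f Mb/s busses = %.0f Mb/s aggregate\n",
           slow_bus_count, slow_bus_mbps, aggregate);
    printf(" 1 x %.0f Mb/s bus   = %.0f Mb/s\n",
           fast_bus_mbps, fast_bus_mbps);
    printf("the single fast bus has %.0fx the raw bandwidth\n",
           fast_bus_mbps / aggregate);
    return 0;
}

Raw bandwidth is only half of the multiple-bus argument, of course; the other half is how many independent transfers can be in flight at once, which is where we seem to disagree.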
>>One fast bus works well until you want to do multiple things, and
>>then it quickly becomes a bottleneck.
>
>Excuse me? Could you please back up this assertion with data? After all,
Which CPU? If I have a high-end system I will be running multiple CPUs.
Yikes! Now we've jumped to Multiple CPUs! Yow!
That's a Whole Different Ballgame!
I've been reading Pfister's "In Search of Clusters" and I gotta say I'm coming to the conclusion that uni-processors tied into loose clusters give the best bang for the buck (in all vectors: reliability, scalability, etc.).
I agree about the problem of getting the data into the CPU, and as I pointed out this is the weak point of PC-based systems. Everything goes into the CPU over a single bus, which has had problems keeping up with processor speed. Look at how many times Intel has changed the speed of the FSB over the past year or so.
Sure. And look at how much bigger L1 caches are getting, and why more and more L2 cache is onboard the PIII modules.
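To see why the growing caches matter when everything else has to cross that single memory bus, here's a rough C sketch. The sizes, pass counts, and use of clock() are my own arbitrary choices, not a serious methodology: it just sums the same total amount of data from a working set that fits in cache and from one that has to stream from main memory.

/* Crude illustration: the same loop runs much faster over a working
 * set that stays in cache than over one that streams over the memory
 * bus.  Sizes are guesses; real measurements need far more care.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum the array repeatedly; 'passes' lets the small and large cases
   touch the same total number of bytes (64 MB each). */
static double sum_array(const double *a, size_t n, int passes)
{
    double s = 0.0;
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            s += a[i];
    return s;
}

/* Return rough MB/s for summing n doubles 'passes' times. */
static double measure(size_t n, int passes)
{
    double *a = malloc(n * sizeof *a);
    if (a == NULL)
        return 0.0;
    for (size_t i = 0; i < n; i++)
        a[i] = 1.0;

    clock_t t0 = clock();
    double s = sum_array(a, n, passes);
    clock_t t1 = clock();
    free(a);

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    double mb   = ((double)n * sizeof(double) * passes) / (1024.0 * 1024.0);
    return (secs > 0.0 && s > 0.0) ? mb / secs : 0.0;
}

int main(void)
{
    /* ~32 KB working set: lives in L1/L2, rarely touches the bus.   */
    /* ~64 MB working set: streams over the memory bus on every pass. */
    printf("cache-resident (32 KB x 2048 passes): %8.1f MB/s\n",
           measure(4 * 1024, 2048));
    printf("memory-bound   (64 MB x 1 pass):      %8.1f MB/s\n",
           measure(8 * 1024 * 1024, 1));
    return 0;
}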
The thing that bothers me, tho, is that it's difficult to use such programs unless the h/w is relatively similar.
Our graphics benchmarks run on everything from low-end PCs right up to high-end workstations. We use lots of PCs, and we use lots of high-end workstations, so I need to know the crossover point in performance. If I can put an application on a PC I definitely will. I gain in price, upgradability and accessibility, but I can't do that with all applications (I wish I could; those large SGI machines are really expensive).
That sounds perfectly reasonable. Can you cluster some PCs to get a
pseudo-parallel machine?
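One crude way to pin down that crossover point you mention: time the same job at several problem sizes on both classes of machine and see where the curves cross. Here's a little C sketch of the bookkeeping; every number in it is an invented placeholder, not a real measurement:

/* Toy illustration of finding the PC/workstation crossover point:
 * given runtimes for the same job at several problem sizes on each
 * machine, report the first size where the workstation wins.
 * All figures below are invented placeholders.
 */
#include <stdio.h>

int main(void)
{
    const int    size[]   = {   1,    2,    4,    8,   16,   32 }; /* e.g. millions of polygons */
    const double pc_sec[] = { 0.8,  1.7,  3.9,  9.5, 24.0, 70.0 };
    const double ws_sec[] = { 2.0,  3.1,  5.0,  8.8, 16.5, 33.0 };
    const int    n = sizeof size / sizeof size[0];

    for (int i = 0; i < n; i++) {
        printf("size %2d: PC %5.1f s   workstation %5.1f s\n",
               size[i], pc_sec[i], ws_sec[i]);
        if (ws_sec[i] < pc_sec[i]) {
            printf("crossover: the workstation pulls ahead at size %d\n",
                   size[i]);
            return 0;
        }
    }
    printf("no crossover in this range; the PC wins throughout\n");
    return 0;
}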
That's the beauty (and downfall?) of benchmarks like Dhrystone 2.1 - it can be run on most any piece of computer h/w ever designed.
Yes, and the results are typically meaningless.
Well.... I'm not quite so sure I'd go that far...
Benchmarking is really hard, and typically the more general the benchmark the more useless it is. Here's a little example. Several years ago we went to two companies to evaluate their high-end machines. According to the specs and benchmarks Machine A was the clear winner, at least 3x faster than Machine B. When we actually ran our programs we found that Machine B was consistently 2 or 3x faster than Machine A, across a pretty wide range of applications. There are several reasons why this happened. One was that Machine B had much better compiler technology. The other was that the specs and benchmarks didn't tell the real story of how the machine really performed. It's easy to tune a machine to look good on standard benchmarks, but it may not run anything else at near that speed.
So that's a spread of maybe 6x or so between A and B? (3x faster on paper, 2-3x slower in practice, so the specs missed by 6x to 9x.) Just curious, what were the Dhrystone 2.1 numbers for Machine A and Machine B? Could you run identical OSes on them? Could you (if you wanted to) run identical compilers on them?
Yes, I acknowledge the difficulty of making good benchmarks, but we should start -somewhere-.
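And a reasonable -somewhere- to start is timing your own workload rather than a synthetic kernel, which is really the moral of your Machine A / Machine B story. A minimal C harness along those lines; run_workload() here is just a placeholder for whatever the real application does:

/* Minimal "benchmark your own workload" harness: time the real job
 * several times and report the best run.  run_workload() is a
 * placeholder; in practice it would be your actual application code.
 */
#include <stdio.h>
#include <time.h>

/* Stand-in workload: substitute the real application kernel here. */
static double run_workload(void)
{
    double s = 0.0;
    for (long i = 1; i <= 5000000L; i++)
        s += 1.0 / (double)i;
    return s;
}

int main(void)
{
    const int runs = 5;
    double best = -1.0;

    for (int r = 0; r < runs; r++) {
        clock_t t0 = clock();
        volatile double result = run_workload();
        clock_t t1 = clock();
        (void)result;

        double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
        printf("run %d: %.3f s\n", r + 1, secs);
        if (best < 0.0 || secs < best)
            best = secs;
    }
    printf("best of %d runs: %.3f s\n", runs, best);
    return 0;
}

Build it with the same compiler and flags on every machine you compare, or you end up measuring the compilers all over again.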
> I know this. But, frankly, -every- bus is limited! Knowing how to "tune"
> a system's architecture is partially what makes computers fascinating to me.
>
> It's also one of the main reasons I enjoy looking at and studying these
> old behemoths, even vaxes? ;-)
Well, the IBM 390 architecture is still in use, and it goes back to the 360 in the early 1960s. That's a pretty long-lived architecture.
That is long-lived. The 360 was the quintessential upward-compatible architecture, right? The "360 degrees" (full circle) of applications. And s/w compatibility was paramount.
My bet is we'll see the PC live for much, much longer than the 360. It's just evolution, eh?
-mac
--
Dr. Mark Green mark(a)cs.ualberta.ca