Mark Green wrote:
>Mike Cheponis wrote:
>>Looks like the 6500 was about 13 VAX-11/780 MIPS. That would make it about
>>2x to 3x slower than a 486DX2/66.
You've got to be kidding!!! Most VAXes out there are running large
multiuser systems. The VAX has not been out of production for very
long, and there are still a considerable number of systems that are
running on them. If you meant it in terms of dozens of users being
too small, I might agree. Many of these systems are running more
like hundreds of users.
That's fascinating. Take obsolete hardware and architecture (vax), and
keep them running! I guess I will never cease to be amazed at the weird
things people do. Heck, I heard the other day that people are -still-
running 1401 emulation mode under a VM/360 simulator on their modern h/w!
Here's the clue to understanding it: software and business logic.
Companies (and other organizations) have software systems that
run on older machines; they run quite well and do the job. Many
of these companies see no reason to rewrite that software, often
at considerable cost, just so it will run on the latest hardware.
There is simply no business case to be made for it (in terms of
profit and loss). These companies aren't full of techies who
want to play with the latest hardware; they just want to get on
with business. Makes perfect sense to me.
>Now, perhaps if we were to port Apache to the VAX, and used that I/O
>bandwidth on multiple DS3s, well, that's great.
The problem is equivalent hardware. You can't configure a PC
like a VAX; they are two different types of machines. A PC
is tuned for a single user, while a VAX is tuned for many users.
Amen! Thank you!
I think everyone in all this flamage back and forth needs to accept
this, and I think Chuck first brought it up. That's right!
Remember, I was just making the observation that the integer
performance of the VAX 8650 is worse than that of a DX2-66. I think
single-user; I run single-user machines. The future is single-user
with vast network-accessed databases.
I do believe at one point you stated that a DX2-66 could beat a
VAX 8650 on any application (I don't think those are your exact
words; it was something like crushing one). This is what people are
reacting to; I don't think anyone is arguing about the difference
in integer performance.
These are very different machine configurations, and even the latest
PC would have no hope of keeping up with a decade-old VAX running
a large multiuser application.
Again, with -equivalent hardware- it certainly would.
Many of the PC manufacturers have tried to scale PCs to this level,
and they have all failed. This is why companies like IBM, HP, Sun
and SGI are still selling big iron, and sales are increasing rapidly.
There is a huge demand for server-style systems, and single-bus
systems like the PC can't scale to the performance required.
What you failed to mention is that SGI is -only- selling NT these days,
having given up on Big Iron. Also, the markets for IBM, HP, and Sun's
"big iron" exist specifically for those back-room servers that can do
lots of disk I/Os per second (the web, eh?).
SGI is not selling any NT at this point, and it's not clear that
they ever sold very much. The NT experiment at SGI is over, and
there are attempts to sell off what they can of it. SGI has
gone back to building big servers, Unix/Linux-based ones.
They realized that they weren't going to be able to scale
an NT solution, and their strength was in scaling.
I fully agree that computer systems designed for specific purposes are
going to do better for those applications than a PC.
BUT, I would like the Vax Lover Crowd to acknowledge that the integer
performance of their machine is pathetic.
No one in this group ever said it was good. As I said above,
the reaction was to your comment about PCs always being better
than a VAX regardless of application.
It's not the speed of the individual bus, but the number of busses.
That's of course bull.....
Then why does every large-scale system maker build systems
with multiple busses? Name one large-scale system that
has only one high-speed bus. Surely they all aren't stupid;
there must be some reason for doing this.
The more busses, the more parallelism and the less
waiting.
-IF- the speed of the busses is high enough!
One fast bus works well until you want to do multiple things, and
then it quickly becomes a bottleneck.
Excuse me? Could you please back up this assertion with data? After all,
at -some- point, all these busses have to get their data into/out of the CPU,
right? And -that- is a "bottleneck" for sure... (Sure, you can have
channel-to-channel I/O, but most apps are not just shuffling bits.)
Which CPU? If I have a high-end system I will be running multiple
CPUs. I agree about the problem of getting the data into the
CPU, and as I pointed out, this is the weak point of PC-based
systems. Everything goes into the CPU over a single bus, which
has had problems keeping up with processor speed. Look at how
many times Intel has changed the speed of the FSB over the past
year or so.
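
To make that concrete, here is the kind of quick test that exposes the
ceiling: a plain copy loop in C. This is only a sketch; the buffer size,
pass count, and use of clock() are arbitrary choices, not anything from
a vendor's test suite. Once the buffers are larger than the caches, the
MB/s it prints is set by the memory bus and FSB, not by how fast the CPU
core is clocked.

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  #define MB        (1024 * 1024)
  #define BUF_BYTES (64 * MB)     /* 64 MB each way, well past any cache */
  #define PASSES    20

  int main(void)
  {
      char *src = malloc(BUF_BYTES);
      char *dst = malloc(BUF_BYTES);
      if (src == NULL || dst == NULL) {
          fprintf(stderr, "out of memory\n");
          return 1;
      }
      memset(src, 1, BUF_BYTES);          /* touch the pages up front */

      clock_t start = clock();
      for (int i = 0; i < PASSES; i++)
          memcpy(dst, src, BUF_BYTES);    /* every pass goes to main memory */
      clock_t end = clock();

      double secs = (double)(end - start) / CLOCKS_PER_SEC;
      double mb   = (double)PASSES * 2.0 * BUF_BYTES / MB;  /* read + write */
      printf("moved %.0f MB in %.2f s -> %.1f MB/s\n", mb, secs, mb / secs);

      free(src);
      free(dst);
      return 0;
  }

Two machines with the same CPU clock but different FSB and memory speeds
will report very different numbers on something like this, which is
exactly the limit that cache-resident benchmarks like Dhrystone tend
never to touch.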
And, if
you don't like Dhrystone 2.1, then what benchmarks -can- we use?
Each application area has its own set of benchmarks that
reflect its use of the machine. For example, a lot of the
supercomputer centers use fluid dynamics programs to evaluate
machines, since that's what takes up most of their cycles. We
take a two-pronged approach to graphics performance. One is
to run some standard visualization and measure frame rate (this
ties us to the real world). The second is to run a set of programs
that measure individual aspects of the graphics pipeline,
such as transformation rate (as a function of vertices/polygon),
fill rates, lighting rates, and texture rates, along with
where the pipeline blocks. This gives us a pretty good picture
of the display, and of how we will need to generate code to
drive it.
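
To give a flavour of what one of those pipeline tests looks like, here
is a bare-bones transform-rate sketch in C. It only pushes vertices
through a 4x4 matrix on the CPU, so it is a stand-in for the idea rather
than a real graphics test; the matrix, vertex count, and batch count are
made-up numbers, and a real version would feed the same batches to the
display hardware and time that instead.

  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>

  #define NVERTS  100000   /* vertices per batch (arbitrary) */
  #define BATCHES 200      /* batches pushed through the "pipeline" */

  typedef struct { float x, y, z, w; } vec4;

  /* multiply one vertex by a 4x4 row-major matrix */
  static vec4 xform(const float m[16], vec4 v)
  {
      vec4 r;
      r.x = m[0]*v.x  + m[1]*v.y  + m[2]*v.z  + m[3]*v.w;
      r.y = m[4]*v.x  + m[5]*v.y  + m[6]*v.z  + m[7]*v.w;
      r.z = m[8]*v.x  + m[9]*v.y  + m[10]*v.z + m[11]*v.w;
      r.w = m[12]*v.x + m[13]*v.y + m[14]*v.z + m[15]*v.w;
      return r;
  }

  int main(void)
  {
      vec4 *in  = malloc(NVERTS * sizeof *in);
      vec4 *out = malloc(NVERTS * sizeof *out);
      if (in == NULL || out == NULL)
          return 1;

      /* a made-up transform and some made-up vertex data */
      float m[16] = { 1,0,0,2,  0,1,0,3,  0,0,1,4,  0,0,0,1 };
      for (int i = 0; i < NVERTS; i++) {
          in[i].x = (float)i;  in[i].y = (float)(i % 7);
          in[i].z = 1.0f;      in[i].w = 1.0f;
      }

      clock_t start = clock();
      for (int b = 0; b < BATCHES; b++)
          for (int i = 0; i < NVERTS; i++)
              out[i] = xform(m, in[i]);
      clock_t end = clock();

      double secs  = (double)(end - start) / CLOCKS_PER_SEC;
      double verts = (double)BATCHES * NVERTS;
      printf("%.0f vertices in %.2f s -> %.2f Mverts/s (check: %g)\n",
             verts, secs, verts / secs / 1e6, out[0].x);

      free(in);
      free(out);
      return 0;
  }

Running it with different vertex counts per batch is what gives the
transformation-rate-versus-batch-size curve; fill, lighting, and texture
rates each get the same treatment with their own small programs.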
Hey, this is very helpful. It makes a lot of sense to use a fluid
dynamics program for supercomputers, as well as frames/second for a
given visualization. And then the specific architectural features,
such as fill rates or polygons/sec, etc.
The thing that bothers me, tho, is that it's difficult to use such
programs unless the h/w is relatively similar.
Our graphics benchmarks run on everything from low-end PCs right
up to high-end workstations. We use lots of PCs, and we use
lots of high-end workstations, so I need to know the crossover
point in performance. If I can put an application on a PC I
definitely will. I gain in price, upgradability and accessibility,
but I can't do that with all applications (I wish I could; those
large SGI machines are really expensive).
That's the beauty (and downfall?) of benchmarks like Dhrystone 2.1 - it can
be run on most any piece of computer h/w ever designed.
Yes, and the results are typically meaningless. Benchmarking
is really hard, and typically the more general the benchmark,
the more useless it is. Here's a little example. Several
years ago we went to two companies to evaluate their high-end
machines. According to the specs and benchmarks, Machine A
was the clear winner, at least 3x faster than Machine B. When
we actually ran our programs we found that Machine B was
consistently 2 or 3x faster than Machine A, across a pretty
wide range of applications. There are several reasons why
this happened. One was that Machine B had much better compiler
technology. The other was that the specs and benchmarks didn't
tell the real story: how the machine really performed. It's
easy to tune a machine to look good on standard benchmarks,
but it may not run anything else at near that speed.
You've been reading too much sales literature. AGP has no
effect on a wide range of graphics applications. The only
place it's a big win is texture mapping. In all other cases
the geometry still has to go through the CPU, which is
FSB-limited, no matter how fast the memory is.
Sure, but AGP is better than -no- AGP, and it does show that there are other
busses available on a PC, yes? (Which was my original point.)
AGP is no worse than no AGP, but it's not clear that it's better
for a lot of applications. But that's not relevant at this
point.
It's interesting to note that SGI fell into the same trap
that you did when they designed their graphics PCs. They
put all sorts of bandwidth in that thing, and then found
that the competition blew them away, which is one of the main
reasons SGI is in so much trouble now! They didn't
realize that the PC is FSB-limited, and all the bandwidth
they had built in couldn't be used.
I know this. But, frankly, -every- bus is limited! Knowing how to "tune"
a system's architecture is partially what makes computers fascinating to me.
It's also one of the main reasons I enjoy looking at and studying these
old behemoths, even vaxes? ;-)
Well, the IBM 390 architecture is still in use, and it goes
back to the 360 in the early 1960s. That's a pretty long-lived
architecture.
--
Dr. Mark Green mark(a)cs.ualberta.ca
Professor (780) 492-4584
Director, Research Institute for Multimedia Systems (RIMS)
Department of Computing Science (780) 492-1071 (FAX)
University of Alberta, Edmonton, Alberta, T6G 2H1, Canada