Mark Green wrote:
Mike Cheponis wrote:
> Looks like the 6500 was about 13 VAX-11/780 MIPS. That would make it about
> 2x to 3x slower than a 486DX2/66.
> > Integer performance is a very misleading
> > measure of performance when you are talking about system performance.
> My main belief is that nobody is going to keep a VAX anything running with
> dozens of simultaneous users. So, if a VAX is to be something close
> to "useful" today, it'll be in single-user mode. In that case, integer
> performance is very important.
You've got to be kidding!!! Most VAXes out there are running large
multiuser systems. The VAX has not been out of production for
very long, and there are still a considerable number of systems
that are running on them. If you meant it in terms of dozens of
users being too small, I might agree. Many of these systems are
running more like hundreds of users.
> Now, perhaps if we were to port Apache to the VAX, and used that I/O
> bandwidth on multiple DS3s, well, that's great.
For example, on all except
the most recent PCs, there is only a single bus. This bus
must be used for all memory transfers, graphics, I/O, etc.
On a single-user system this is sometimes okay, but for
multiple users, forget it.
> Hey, I'm not saying the original IBM PC was going to outperform the VAX 6500;
> but a modern PC will crush any VAX in any application, IMHO, with equivalent
> h/w attached.
The problem is equivalent hardware. You can't configure a PC
like a VAX; they are two different types of machines. A PC
is tuned for a single user, while a VAX is tuned for many users.
These are very different machine configurations, and even the latest
PC would have no hope of keeping up with a decade-old VAX running
a large multiuser application. Many of the PC manufacturers
have tried to scale PCs to this level, and they have all failed.
This is why companies like IBM, HP, Sun and SGI are still selling
big iron, and sales are increasing rapidly. There is a huge
demand for server-style systems, and single-bus systems like
the PC can't scale to the performance required.
> > Most of the VAXes had multiple
> > busses, and each was dedicated to a particular function.
> What are:
> 1) The names of these busses?
> 2) Their uses?
> 3) Their peak and average throughputs?
You've completely missed the point here. It's not the speed
of the individual bus, but the number of busses. The
more busses, the more parallelism and the less waiting. One
fast bus works well until you want to do multiple things, and
then it quickly becomes a bottleneck. I believe Allison has
already given you the speeds of some of these busses, and
at least some of them are faster than any PC bus.
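To make the contention argument concrete, here's a back-of-the-envelope sketch. Every number in it is made up for illustration (none of them describe a real VAX or PC): one fast shared bus serializes all traffic, while several slower dedicated busses let traffic classes overlap.

```python
# Illustrative sketch: one fast shared bus vs. several slower dedicated
# busses. All bandwidths and transfer sizes below are invented numbers.

def shared_bus_time(transfers_mb, bus_mb_s):
    """One bus carries every transfer in turn, so the times simply add up."""
    return sum(transfers_mb) / bus_mb_s

def dedicated_bus_time(transfers_mb, bus_mb_s):
    """One bus per traffic class: transfers overlap, the slowest one wins."""
    return max(size / bus_mb_s for size in transfers_mb)

# Hypothetical per-second workload: memory traffic, graphics, and disk I/O.
workload = [40.0, 40.0, 40.0]   # MB moved by each traffic class

fast_shared = shared_bus_time(workload, bus_mb_s=100.0)      # one 100 MB/s bus
slow_dedicated = dedicated_bus_time(workload, bus_mb_s=50.0) # three 50 MB/s busses

print(f"one fast shared bus:    {fast_shared:.2f} s")    # 1.20 s
print(f"three slower dedicated: {slow_dedicated:.2f} s") # 0.80 s
```

The shared bus is twice as fast as any single dedicated bus, yet it finishes the combined workload later, because it serializes what the dedicated busses do in parallel. That crossover is the whole point: add more simultaneous traffic classes and the single bus falls further behind.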
> I certainly know for a fact that UNIBUS performed very poorly. I don't have
> data at my fingertips, but it seems to me it was around 10 Mb/s (that's
> megabits/sec) peak throughput. [I prefer measuring throughputs in bits/sec
> since that normalizes across different bus widths.]
The UNIBUS was only on the early VAXes, and it was there to
support legacy peripherals. This meant you didn't need to
buy a whole new set of peripherals when you upgraded to a
VAX. Remember that the early VAXes also had PDP-11 compatibility
mode, so you could move your existing applications over to
them without conversion.
> And, if you don't like Dhrystone 2.1, then what benchmarks -can- we use?
Each application area has its own set of benchmarks that
reflect its use of the machine. For example, a lot of the
supercomputer centers use fluid dynamics programs to evaluate
machines, since that's what takes up most of their cycles. We
take a two-pronged approach to graphics performance. One is
to run some standard visualizations and measure frame rate (this
ties us to the real world). The second is to run a set of programs
that measure individual aspects of the graphics pipeline,
such as transformation rate (as a function of vertices/polygon),
fill rates, lighting rates, and texture rates, along with
where the pipeline blocks. This gives us a pretty good picture
of the display, and of how we will need to generate code to
drive it.
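The second kind of benchmark is easy to sketch. Here's a minimal example of isolating one pipeline stage (vertex transformation) and measuring its raw rate; the matrix, vertex count, and timing loop are all illustrative assumptions, not any real test suite.

```python
# Sketch of a single-stage graphics microbenchmark: time a 4x4 matrix
# transform over a batch of vertices and report vertices/second.
import time

def transform(vertices, m):
    """Apply a 4x4 matrix (row-major nested lists) to a list of (x,y,z,w)."""
    out = []
    for x, y, z, w in vertices:
        out.append(tuple(
            m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3]*w for r in range(4)
        ))
    return out

identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
verts = [(1.0, 2.0, 3.0, 1.0)] * 10_000   # arbitrary test batch

start = time.perf_counter()
transform(verts, identity)
elapsed = time.perf_counter() - start
print(f"~{len(verts)/elapsed:,.0f} vertices/s (pure Python, one stage only)")
```

A real harness would repeat this while varying vertices per polygon, vertex format, and lighting state, which is how you find out where the pipeline blocks rather than just how fast its best case is.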
> > One of the main problems with
> > all of the PC chips is the limited speed of the FSB. It's
> > no good having high integer performance if you can't get
> > the data in or out of the CPU.
> Fast dual-port SRAM solves the problem, but commodity PCs aren't designed
> that way. Also, the AGP bus uses mega-RAM to speed up PC graphics, for example.
You've been reading too much sales literature. AGP has no
effect on a wide range of graphics applications. The only
place it's a big win is texture mapping. In all other cases
the geometry still has to go through the CPU, which is FSB
limited, no matter how fast the memory is.
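The arithmetic behind the FSB argument is short enough to write down. All the numbers below are illustrative assumptions about a late-90s PC, not measurements: if geometry must cross the front-side bus every frame, the bus bandwidth alone caps the frame rate, no matter how fast the memory or AGP port is.

```python
# Rough ceiling imposed by the front-side bus on geometry throughput.
# Every figure here is an invented, order-of-magnitude assumption.

FSB_BYTES_PER_S  = 800e6       # e.g. a 100 MHz, 64-bit front-side bus
BYTES_PER_VERTEX = 32          # position + normal + color, single precision
VERTS_PER_FRAME  = 1_000_000   # a heavy geometry scene

# Geometry crosses the FSB at least once per frame, so even with
# infinitely fast memory behind the bus the frame rate is capped here:
fps_cap = FSB_BYTES_PER_S / (VERTS_PER_FRAME * BYTES_PER_VERTEX)
print(f"FSB-imposed ceiling: {fps_cap:.0f} frames/s")
```

Under these assumptions the ceiling works out to 25 frames/s, and nothing on the far side of the bus (AGP, video RAM, texture memory) can raise it; only a faster FSB or fewer bytes per vertex can.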
It's interesting to note that SGI fell into the same trap
that you did when they designed their graphics PCs. They
put all sorts of bandwidth into that thing, and then found
that the competition blew them away, which is one of the main
reasons SGI is in so much trouble now! They didn't
realize that the PC is FSB limited, and all the bandwidth
they had built in couldn't be used.
--
Dr. Mark Green mark(a)cs.ualberta.ca
Professor (780) 492-4584
Director, Research Institute for Multimedia Systems (RIMS)
Department of Computing Science (780) 492-1071 (FAX)
University of Alberta, Edmonton, Alberta, T6G 2H1, Canada