>Mark Green wrote:
>
>>Mike Cheponis wrote:
>>>Looks like the 6500 was about 13 VAX-11/780 MIPS. That would make it about
>>>2x to 3x slower than a 486DX2/66.
>>Integer performance is a very misleading
>>measure of performance when you are talking about system performance.
>My main belief is that nobody is going to keep a
>VAX anything running with
>dozens of simultaneous users. So, if a VAX is to be something close
>to "useful" today, it'll be in single-user mode. In that case, Integer
>performance is very important.
You've got to be kidding!!! Most VAXes out there are running large
multiuser systems. The VAX has not been out of production for
very long, and there are still a considerable number of systems
that are running on them. If you meant it in terms of dozens of
users being too small, I might agree. Many of these systems are
running more like hundreds of users.
That's fascinating. Take obsolete hardware and architecture (vax), and
keep them running! I guess I will never cease to be amazed at the weird
things people do. Heck, I heard the other day that people are -still-
running 1401 emulation mode under a VM/360 simulator on their modern h/w!
>Now, perhaps if we were to port Apache to the VAX,
>and used that I/O bandwidth
>on multiple DS3s, well, that's great.
The problem is equivalent hardware. You can't configure a PC
like a VAX; they are two different types of machines. A PC
is tuned for a single user, while a VAX is tuned for many users.
Amen! Thank you!
I think everyone in all the flamage back and forth needs to accept this, and
I think Chuck was the first to bring it up. That's right!
Remember, I was just making the observation that the integer performance of
the vax 8650 is worse than a dx2-66. I think single-user; I run single-user
machines. The future is single-user with vast network-accessed databases.
These are very different machine configurations, and even the latest
PC would have no hope of keeping up with a decade-old VAX running
a large multiuser application.
Again, with -equivalent hardware- it certainly would.
Many of the PC manufacturers have tried to scale PCs to this level
and they have all failed. This is why companies like IBM, HP, SUN
and SGI are still selling big iron, and sales are increasing rapidly.
There is a huge demand for server-style systems, and single-bus
systems like the PC can't scale to the performance required.
What you failed to mention is that sgi is -only- selling NT these days,
having given up on Big Iron. Also, the markets for IBM, HP, and Sun's
"big iron" exist specifically for those back-room servers that can do lots
of disk I/Os per second (the web, eh?).
I fully agree that computer systems designed for specific purposes are
going to do better for those applications than a PC.
BUT, I would like the Vax Lover Crowd to acknowledge that the integer
performance of their machine is pathetic.
It's not the speed of the individual bus, but the number of busses.
That's of course bull.....
The more busses, the more parallelism and the less waiting.
-IF- the speed of the busses is high enough!
One fast bus works well until you want to do multiple things, and
then it quickly becomes a bottleneck.
Excuse me? Could you please back up this assertion with data? After all,
at -some- point, all these busses have to get their data into/out of the CPU,
right? And -that- is a "bottleneck" for sure... (Sure, you can have
channel-to-channel I/O, but most apps are not just shuffling bits.)
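
For what it's worth, here is the sort of quick-and-dirty probe I'd run to put
a number on what actually gets into and out of the CPU: a big copy loop timed
with nothing but ANSI C, so it builds on just about anything. This is only my
sketch - the buffer size and repeat count are guesses you'd have to tune per
machine - but it measures the bandwidth a program sees rather than what the
spec sheet promises.

/* Rough memory-bandwidth probe using only ANSI C, so it should build
 * on just about anything with a C compiler.  The array size and repeat
 * count are arbitrary guesses -- tune them so the run lasts a few
 * seconds on the machine under test. */
#include <stdio.h>
#include <time.h>

#define N    (1L << 20)   /* 1M doubles per array = 8 MB each; shrink for small machines */
#define REPS 50           /* repeat so clock() resolution matters less */

static double a[N], b[N];

int main(void)
{
    long i, r;
    clock_t t0, t1;
    double secs, mbytes;

    for (i = 0; i < N; i++)           /* touch the memory once up front */
        a[i] = (double)i;

    t0 = clock();
    for (r = 0; r < REPS; r++)
        for (i = 0; i < N; i++)
            b[i] = a[i];              /* one read + one write per element */
    t1 = clock();

    secs   = (double)(t1 - t0) / CLOCKS_PER_SEC;
    mbytes = (double)REPS * N * 2.0 * sizeof(double) / (1024.0 * 1024.0);
    printf("copied %.0f MB in %.2f s -> %.1f MB/s (check %.0f)\n",
           mbytes, secs, mbytes / secs, b[N - 1]);
    return 0;
}

Run it once on the PC and once on the VAX and at least you're holding the
same yardstick, which is more than the bus-counting argument gives us.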
And, if you don't like Dhrystone 2.1, then what benchmarks -can- we use?
Each application area has its own set of benchmarks that
reflect its use of the machine. For example, a lot of the
supercomputer centers use fluid dynamics programs to evaluate
machines, since that's what takes up most of their cycles. We
take a two-pronged approach to graphics performance. One is
to run some standard visualization and measure frame rate (this
ties us to the real world). The second is to run a set of programs
that measure individual aspects of the graphics pipeline,
such as transformation rate (as a function of vertices/polygon),
fill rates, lighting rates, texture rates, along with
where the pipeline blocks. This gives us a pretty good picture
of the display, and how we will need to generate code to
drive it.
Hey, this is very helpful. It makes a lot of sense to use a fluid dynamics
program for supercomputers, as well as frames/second for a given
visualization, and then to measure the specific architectural features,
such as fill rates or polygons/sec, etc.
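
Just to make that concrete, here is roughly what I picture for one of those
single-aspect probes, the fill-rate one: draw a stack of window-sized quads
every frame and report pixels per second. This is purely my own sketch -
GLUT, the 512x512 window, and the 100-quad depth are my assumptions, not
anything from Mark's actual test suite.

/* Crude fill-rate probe: draw LAYERS window-sized quads per frame
 * and report frames/s and Mpixels/s filled.  GLUT is used only to
 * get a window and a draw loop; treat the numbers as relative. */
#include <GL/glut.h>
#include <stdio.h>

#define WIN    512        /* window is WIN x WIN pixels        */
#define LAYERS 100        /* full-window quads drawn per frame */

static long frames = 0;

static void draw(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT);
    for (i = 0; i < LAYERS; i++) {
        glColor3f(i / (float)LAYERS, 0.5f, 1.0f - i / (float)LAYERS);
        glBegin(GL_QUADS);                 /* one quad covering the window */
        glVertex2f(-1.0f, -1.0f);
        glVertex2f( 1.0f, -1.0f);
        glVertex2f( 1.0f,  1.0f);
        glVertex2f(-1.0f,  1.0f);
        glEnd();
    }
    glutSwapBuffers();

    if (++frames % 100 == 0) {             /* report every 100 frames */
        double secs = glutGet(GLUT_ELAPSED_TIME) / 1000.0;
        double pix  = (double)frames * LAYERS * WIN * WIN;
        printf("%.1f frames/s, about %.0f Mpixels/s filled\n",
               frames / secs, pix / secs / 1.0e6);
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(WIN, WIN);
    glutCreateWindow("fill rate probe");
    glutDisplayFunc(draw);
    glutIdleFunc(glutPostRedisplay);       /* redraw as fast as possible */
    glutMainLoop();
    return 0;
}

The same skeleton with lit, textured triangles instead of flat quads would
cover the lighting-rate and texture-rate pieces.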
The thing that bothers me, tho, is that it's difficult to use such programs
unless the h/w is relatively similar.
That's the beauty (and downfall?) of benchmarks like Dhrystone 2.1 - it can
be run on most any piece of computer h/w ever designed.
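
The arithmetic behind "VAX MIPS" is dead simple, too: the 11/780 does about
1757 Dhrystones/sec, so you divide your Dhrystones/sec by 1757. Here's the
shape of the measurement in a few lines - note this is NOT Dhrystone 2.1
(that's a fixed piece of C source you run unmodified); the loop body below is
just my stand-in, so only the method is the point:

/* Not Dhrystone itself -- just the shape of it: time a fixed chunk
 * of integer and string work with nothing but ANSI C, then report
 * loops/sec and the usual "VAX MIPS" figure (loops/sec / 1757, the
 * 11/780's Dhrystone rate).  The loop body here is a stand-in, so
 * the absolute number is meaningless; only the method matters. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define LOOPS 1000000L    /* scale up until the run lasts a few seconds */

int main(void)
{
    char buf[32];
    long i, sum = 0;
    clock_t t0, t1;
    double secs;

    t0 = clock();
    for (i = 0; i < LOOPS; i++) {
        sum += (i % 10) * 7 - (i & 3);      /* integer arithmetic  */
        sprintf(buf, "%ld", i & 1023L);     /* string manipulation */
        sum += (long)strlen(buf);
    }
    t1 = clock();

    secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    printf("%ld loops in %.2f s = %.0f loops/s (sum %ld)\n",
           LOOPS, secs, LOOPS / secs, sum);
    printf("as if it were Dhrystone: %.1f \"VAX MIPS\"\n",
           LOOPS / secs / 1757.0);
    return 0;
}

That portability is exactly why I keep falling back on it, warts and all.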
You've been reading too much sales literature. AGP has no
effect on a wide range of graphics applications. The only
place it's a big win is texture mapping. In all other cases
the geometry still has to go through the CPU, which is FSB
limited, no matter how fast the memory is.
Sure, but AGP is better than -no- AGP, and it does show that there are other
busses available on a PC, yes? (Which was my original point.)
It's interesting to note that SGI fell into the same trap
that you did when they designed their graphics PCs. They
put all sorts of bandwidth in that thing, and then found
that the competition blew them away, one of the main
reasons SGI is in so much trouble now! They didn't
realize that the PC is FSB limited, and all the bandwidth
they had built in couldn't be used.
I know this. But, frankly, -every- bus is limited! Knowing how to "tune"
a system's architecture is partially what makes computers fascinating to me.
It's also one of the main reasons I enjoy looking at and studying these
old behemoths, even vaxes? ;-)
-mac