On 9 May 2010, at 00:02, cctalk-request at
classiccmp.org wrote:
Message: 24
Date: Sat, 8 May 2010 13:37:11 -0700 (PDT)
From: Fred Cisin <cisin at xenosoft.com>
Subject: Re: Greatest videogame device (was Re: An option - Re:
the beginning of
To: "General Discussion: On-Topic and Off-Topic Posts"
<cctalk at classiccmp.org>
Message-ID: <20100508132544.P80526 at shell.lmi.net>
Content-Type: TEXT/PLAIN; charset=US-ASCII
On Sat, 8 May 2010, Tony Duell wrote:
>>> Is the size of the data bus irrelevant?
>>> (There have been people who maintain that THAT is the measure of the
>>> processor!)
>>
>> They're wrong :-). The "size" of the CPU is defined by the size of the
>> internal registers. I am astonished there is actual discussion debating
>> this.
>
> Ah, so a Z80 is a 16 bit processor (IX, IY, SP and PC are all 16 bits,
> and there is no documented way to use half of them (Yes I do know about
> some of the undocumented ways)).
>
>>
>> I think people who maintain the size of the data bus as being the
>> measure of a CPU are hardware people who have never optimized an inner
>> loop in machine code.
No, I am first and foremost a programmer, one who picked up hardware by buying an old
mainframe (before home computers existed) and maintaining it. The 'largest addressable
unit of storage' definition I was taught in my computer science degree is now outdated. I have
read many specifications issued by microprocessor manufacturers (who surely should know what
they are talking about) that define their processors by their data bus width; for instance,
Intel define the 8088 as an 8 bit computer system. If the chip maker says their chip is
an 8 bit processor, why should it become a 16 bit computer when you merely plug that chip
into a motherboard and the marketing people call it 16 bit?
Conversely, I could claim that those who claim the 8088 is a 16 bit
processor have never wire-wrapped the data bus connections to one, and
found there are only 8 to wire up.
The problem remains that we are trying to come up with a single
quantification for measuring something with multiple variable
characteristics.
Yes, and there are other aspects to be considered too. I have always thought of my
ICT 1301 as a 48 bit computer because it has a 48 (+ 2 parity) bit data bus, but the
engineers of the day called it a 4/12 system, meaning 4 bits parallel x 12 digits serial.
The mill (ALU) is only 4 bits wide, but the three arithmetic registers are 48 bits, it
has three 24 bit 'control' registers which hold instructions, and it has no program
counter register at all.
If we were to grossly oversimplify,
and use the most "popular" quantifiers,
we would still have two characteristics to measure.
the 8080 is 8 bit software, 8 bit hardware
the 8088 is 16 bit software, 8 bit hardware.
the 8086 is 16 bit software, 16 bit hardware.
the 80286 is 16 bit software, 16 bit hardware.
the 80386SX is 32 bit software, 16 bit hardware.
the 80386DX is 32 bit software, 32 bit hardware.
the Sentry-70 is unknown.
But this does not invalidate the measuring systems used for different
types, and it is still trivially easy to come up with defensible ways to
measure with different end results.
We should consider what the bit size is used for. Users expect speed to increase
with size. An 8088 and an 8086, or a 68008, 68000 and 68020/30/40, have the same internal
architecture, but the speed of operation is not the same: the CPU has to make more
memory accesses with a narrower data bus. This does affect speed, and to the poster who
referred to "hardware people who have never optimized an inner loop in machine
code" I should point out that if he optimised his loop on a machine with an 8 bit
data bus and expects that the loop will still be optimal on a 16, 32 or 64 bit data bus
processor, then he is almost certainly wrong. It won't be far off, and MIGHT be optimal,
but until you do the optimisation again (as I have done MANY times) you cannot be sure it
could not be tweaked to give a faster loop on a wider data bus.

Even turning off the cache memory can make a big difference, and on one memorable occasion
I found that emulated 68k assembler running on a PowerPC ran quicker than the same code
re-written in native C on the same PowerPC processor. The code was simply for rotating a
one bit deep bitmap 90 degrees, and after looking in great detail at the generated PPC
code the only explanation I could come up with was the possibility that the 68k emulator
turned off the RAM cache. That meant that when my code wanted to read a word, the PPC did
four memory accesses to load up the entire cache line, three of which were pointless
because for a large bitmap they would be purged before they were used, whereas the 68k
emulator just did one memory access. I imagine the emulator locked the cache so its own
code would not be purged.
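Roger's original loops are not shown in the post, but the point about bus width can be
sketched in C. This is only an illustration, not his code: summing n bytes one byte at a
time versus one 16-bit word at a time. On an 8-bit data bus both loops cost one bus cycle
per byte fetched; on a 16-bit bus the word loop halves the number of memory accesses,
which is why an inner loop tuned for one bus width is not automatically optimal on another.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch, not the code from the post. */

uint32_t sum_bytes(const uint8_t *p, size_t n)
{
    uint32_t s = 0;
    for (size_t i = 0; i < n; i++)
        s += p[i];                      /* one 8-bit access per element */
    return s;
}

uint32_t sum_words(const uint8_t *p, size_t n)   /* n assumed even */
{
    uint32_t s = 0;
    for (size_t i = 0; i < n; i += 2) {
        uint16_t v;
        memcpy(&v, p + i, sizeof v);    /* one notional 16-bit access */
        s += (v & 0xff) + (v >> 8);     /* add both bytes, endian-independent */
    }
    return s;
}
```

Both functions compute the same sum; only the number of memory accesses differs, and that
difference is invisible in the source but dominant on a narrow bus.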
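For readers who have not written one, a one-bit-deep 90 degree rotation of the kind
described might look like the following C sketch (again an assumption, since the post
does not include the code). Note the access pattern the cache dislikes: the inner loop
reads along a source row but writes down a destination column, so successive writes land
in different cache lines, and for a large bitmap each line is evicted before it is
revisited.

```c
#include <stdint.h>

/* Hypothetical sketch: rotate a 1-bit-deep, row-major bitmap 90 degrees
 * clockwise. Widths and heights are assumed to be multiples of 8 to keep
 * the byte arithmetic simple. */

static int get_bit(const uint8_t *bm, int stride, int x, int y)
{
    return (bm[y * stride + (x >> 3)] >> (7 - (x & 7))) & 1;
}

static void set_bit(uint8_t *bm, int stride, int x, int y)
{
    bm[y * stride + (x >> 3)] |= (uint8_t)(0x80 >> (x & 7));
}

/* src is w x h; dst must be h x w and zeroed by the caller.
 * Source pixel (x, y) lands at (h - 1 - y, x) in the destination. */
void rotate90_cw(const uint8_t *src, uint8_t *dst, int w, int h)
{
    int src_stride = w / 8, dst_stride = h / 8;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (get_bit(src, src_stride, x, y))
                set_bit(dst, dst_stride, h - 1 - y, x);
}
```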
Roger Holmes.
Now, what are the definitions of "microcomputer", "minicomputer",
"mainframe"?