I think a much larger number of records, and longer records, would give a more
thorough test. On a level playing field, the 6502 will always outrun the Z80 on
strictly 8-bit quantities, and the Z80 may have a bit of an advantage with
strictly 16-bit quantities. With, say, 48- or 64-bit quantities, things begin to
look different, and with 1K, or even 1/2-K, of 32-byte records, things look VERY
different. A simple sort-in-place such as this could be a reasonable test, but
the special cases have to be neutralized by doing something that makes them
insignificant. 16-bit values are an advantage to the processor with 16-bit
registers, while 8-bit values are an advantage to the processor without them.
Most data records, e.g. a customer name, are longer than that, however, and
might be more of a challenge as well as more realistic. The 8-bitter is favored
with bytes, and the one with 16-bit registers has a distinct advantage with
words, but neither of them has a substantial advantage if the records range in
length between 1 and 32 bytes, with a more or less random distribution of
lengths. A larger array of data would also make it easier to measure the
relative performance of the CPUs, particularly if the comparison turns out to be
closer than first anticipated.
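For illustration only, here is a minimal C sketch (mine, not from any of the
original posts) of how such variable-length records might be compared during a
sort; the one-byte length prefix and the 32-byte maximum are assumptions chosen
to match the 1-to-32-byte range described above.

#include <string.h>

/* Hypothetical record layout: a one-byte length (1..32) followed by
 * that many key bytes.  Comparison is byte-by-byte, so neither an
 * 8-bit nor a 16-bit register set gets a built-in advantage. */
typedef struct {
    unsigned char len;        /* 1..32 */
    unsigned char key[32];    /* only the first 'len' bytes are valid */
} record;

/* Compare two records as a sort routine would: byte-wise over the
 * common prefix, with the shorter record ordering first on a tie. */
static int record_cmp(const record *a, const record *b)
{
    unsigned char n = (a->len < b->len) ? a->len : b->len;
    int c = memcmp(a->key, b->key, n);
    return (c != 0) ? c : (int)a->len - (int)b->len;
}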
The relatively inefficient algorithm might actually provide a better test than a
more efficient one, e.g. shell- or quick-sort.
more below ...
Dick
----- Original Message -----
From: <CLeyson(a)aol.com>
To: <classiccmp(a)classiccmp.org>
Sent: Saturday, December 22, 2001 3:22 PM
Subject: Re: 6502/Z80 speed comparison (was MITS 2SIO serial chip?)
In a message dated 12/22/01 Chris Leyson writes:
> How about a straight insertion bubble sort ? A completely useless task but it
> does take a defined number of data moves and compare operations. The array
> to be sorted could be say, 16-bit signed integer, 1k words long and in reverse
> order. (That should take a while for a 6502 to sort out).
In a reply dated 12/22/01 Richard Erlacher writes:
Yes, maybe something of that sort would be appropriate. Testing it on 8-bit,
and then 16-bit quantities might be just the thing for testing the relative
ability, in spite of architectural differences, of handling longer data. I'd
suggest that larger records might be more appropriate, i.e. 32-byte records,
etc.
OK, 8-bit and 16-bit data is appropriate, but would require separate algorithms.
As for the array length, I would suggest at least 1k (1024) entries. (That
eliminates base-page cheating.)
BTW, when I was in college, which I realize was some time ago, but, back then,
Bubble Sort and Insertion Sort were two different algorithms ... I don't
remember the differences, but will check my old texts, though they're in
Sanskrit ...
Apologies, that should have read insertion sort OR bubble sort. Bubble sort
runs through the array comparing adjacent values and swapping them, whereas
insertion sort moves an array member until it's in the right place. In terms of
performance they are both slow algorithms.
Yes, ... order n^2, IIRC.
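For contrast, here is a minimal bubble sort sketch in C (mine, not from the
original post), using the same 1-based arr[1..n] convention as the Numerical
Recipes routine below:

/* Bubble sort sketch: repeatedly compare and swap adjacent elements
 * until a complete pass makes no swaps.  Assumes arr[1..n], 1-based. */
void bubble(int n, float arr[])
{
    int i, swapped;
    float tmp;

    do {
        swapped = 0;
        for (i = 1; i < n; i++) {
            if (arr[i] > arr[i+1]) {
                tmp = arr[i];
                arr[i] = arr[i+1];
                arr[i+1] = tmp;
                swapped = 1;
            }
        }
    } while (swapped);
}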
Here is the code from Numerical Recipes for an insertion sort (Fortran and C).
Sorts an array arr(1:n) into ascending numerical order, by straight insertion.
n is input; arr() is replaced on output by its sorted rearrangement.
      integer n
      real arr(n)
      integer i,j
      real a
      do j=2,n
        a=arr(j)
        do i=j-1,1,-1
          if (arr(i).le.a) goto 10
          arr(i+1)=arr(i)
        end do
        i=0
10      arr(i+1)=a
      end do
void piksrt(int n, float arr[])
{
    int i, j;
    float a;

    for (j = 2; j <= n; j++) {          /* pick out each element in turn */
        a = arr[j];
        i = j - 1;
        while (i > 0 && arr[i] > a) {   /* find where it belongs ... */
            arr[i+1] = arr[i];          /* ... shifting larger elements up */
            i--;
        }
        arr[i+1] = a;                   /* insert it */
    }
}
Ignore the floats and reals for the data; they should be ints or chars for the
purposes of our 6502/Z80 benchmark.
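As a sketch of what that might look like (my own adaptation, not from Numerical
Recipes), here is the same straight insertion on 16-bit signed integers, with a
routine to fill the array in reverse order as suggested above; the names and
the 1024-entry size are assumptions for illustration.

#include <stdint.h>

#define N 1024                 /* 1k entries, as suggested above */

/* Straight insertion on 16-bit signed integers, 1-based arr[1..n],
 * following the Numerical Recipes convention used above.  With
 * 1-based indexing the array needs N+1 elements. */
void piksrt16(int n, int16_t arr[])
{
    int i, j;
    int16_t a;

    for (j = 2; j <= n; j++) {
        a = arr[j];
        i = j - 1;
        while (i > 0 && arr[i] > a) {
            arr[i+1] = arr[i];
            i--;
        }
        arr[i+1] = a;
    }
}

/* Worst-case test data: fill arr[1..N] in descending (reverse) order. */
void fill_reverse(int16_t arr[])
{
    int j;
    for (j = 1; j <= N; j++)
        arr[j] = (int16_t)(N - j);
}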
Just for the hell of it, I will try this out in DSP56300 assembler.
Chris