Well, I recall that someone said, a while back, that the devil's in the
details. What I'm trying to do is place boundaries around this problem for
purposes of understanding its limits. Others who attempt to replicate your
work on other processors will want to know these things. From your
statement, the process produces a result of '*' for an invalid input,
which apparently would include negative values, non-integers, and integers
of 4000 or greater. If the input is presumed to be an unsigned integer,
that solves much of the problem. Now, you want to store the output in
memory, presumably as ascii characters, presumably as a null-terminated
string, and perhaps (optionally) echo it to the screen in the aftermath of
your run. Does that sound like a reasonable thing to do?
How do we tell this program what string of numbers to convert? Is this
something you want to put into memory as a null-terminated string of binary
values, or would you prefer a single word for each value, with a null
terminating the input array or a fixed string length?
It's still simple enough. I can even understand it myself, I think.
Dick
-----Original Message-----
From: Sean 'Captain Napalm' Conner <spc(a)armigeron.com>
To: Discussion re-collecting of classic computers
<classiccmp(a)u.washington.edu>
Date: Sunday, April 18, 1999 12:23 PM
Subject: Re: Program Challenge (was Re: z80 timing... 6502 timing)
It was thus said that the Great Richard Erlacher once
stated:
> There are a few details which have been left out of the specification
> for this task.
>
> Does it require input validation?
I think I specified that. The valid range of Roman numerals is 1 through
3,999 inclusive. The routine does have to check that and construct a
special string ("*") if the input is not in that range.
> Is the binary input pure binary, or is it BCD?
Okay, that might be a valid point, but it's pure binary, not BCD.
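To make the spec concrete: the conversion as stated (pure binary in, a null-terminated ASCII string "printed to memory" out, "*" for anything outside 1..3999) can be sketched in C. This is a hypothetical illustration, not Sean's 6809 routine; the name to_roman and the greedy table-walk are my own choices.

```c
#include <string.h>

/* Hypothetical sketch: convert a pure-binary value (1..3999) to a
 * null-terminated Roman numeral string written into caller-supplied
 * memory.  Out-of-range input yields the special string "*", per the
 * challenge spec.  (Illustration only; not the original 6809 code.) */
void to_roman(unsigned n, char *out)
{
    static const unsigned vals[] = {1000, 900, 500, 400, 100, 90,
                                    50, 40, 10, 9, 5, 4, 1};
    static const char *syms[] = {"M", "CM", "D", "CD", "C", "XC",
                                 "L", "XL", "X", "IX", "V", "IV", "I"};
    if (n < 1 || n > 3999) {       /* input validation */
        strcpy(out, "*");
        return;
    }
    out[0] = '\0';
    for (int i = 0; i < 13; i++)
        while (n >= vals[i]) {     /* greedy: emit largest symbol that fits */
            strcat(out, syms[i]);
            n -= vals[i];
        }
}
```

The worst case, 3888 ("MMMDCCCLXXXVIII"), is 15 characters, so a 16-byte output buffer suffices, null included.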
> Shouldn't it go both ways, i.e. shouldn't we also have to convert ROMAN
> to BINARY as well as BINARY to ROMAN?
One thing at a time, please 8-)
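For what it's worth, the reverse direction Richard asks about is a similarly small table walk: scan the string left to right, and treat a symbol as subtractive when the symbol after it is larger (the C in CM). A hypothetical C sketch, with from_roman as my own name for it and no validation beyond rejecting unknown characters:

```c
/* Hypothetical sketch of ROMAN-to-BINARY: parse a Roman numeral
 * string back to pure binary.  A symbol smaller than its right-hand
 * neighbour is subtractive (e.g. IV = 4, CM = 900).  Returns 0 on an
 * unknown character.  (Illustration only, not part of the challenge
 * as stated.) */
unsigned from_roman(const char *s)
{
    unsigned total = 0;
    for (const char *p = s; *p; p++) {
        unsigned v, next = 0;
        switch (*p) {                    /* value of this symbol */
        case 'I': v = 1;    break;
        case 'V': v = 5;    break;
        case 'X': v = 10;   break;
        case 'L': v = 50;   break;
        case 'C': v = 100;  break;
        case 'D': v = 500;  break;
        case 'M': v = 1000; break;
        default:  return 0;              /* invalid character */
        }
        switch (p[1]) {                  /* value of the next symbol, 0 at end */
        case 'V': next = 5;    break;
        case 'X': next = 10;   break;
        case 'L': next = 50;   break;
        case 'C': next = 100;  break;
        case 'D': next = 500;  break;
        case 'M': next = 1000; break;
        }
        if (v < next) total -= v;        /* subtractive pair */
        else          total += v;
    }
    return total;
}
```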
> What about the console I/O routine? Shouldn't there be some definition of
> how it's to be used? Should it be a call with the I/O character simply
> held in a register before/after the call?
I liked Sam's suggestion of ``printing to memory'' as a way to avoid the
complications of I/O in this, and if I didn't make it clear that the
conversion was to be stored in memory, I'm sorry.
That should work. In fact, input could be done that way as well, placing
the input in memory and then executing the program from a debugger or with
a call from an HLL.
> How much memory is used can be defined in two ways: (a) the number of
> bytes, and (b) how much contiguous memory must be present in order to
> allow the code to be implemented. "It requires 200 bytes of RAM" is not
> a valid statement if that RAM has to be scattered over a 32-KByte range.
Uh ... okay ... gee ... I thought common sense would be enough here.
The problem here is that I could say: Code segment size, data segment
size, bss (dynamic) segment size and stack segment size, but that tends to
lead to certain assumptions about how to code (at least to me). In modern
systems, code and data are kept separate, but there's nothing really
requiring that, and as you can see from my solution, I mix both code and
data together, which was a common trick in the 8-bit era (and maybe used
earlier as well).
This is an issue only because these systems have both ROM and RAM, and using
parts of each can bias the resource tally without really having any meaning.
> If your claim is that your code runs in 200 bytes of memory, it must be
> runnable on a computer having only 200 bytes of memory. If you can't
> figure out how to build a 200-byte RAM, then perhaps it might be more
> appropriate to suggest it requires only 256 bytes of RAM, which you can
> buy.
I'm a software guy---building computers isn't exactly my forte. Besides,
if I say my code only requires 200 bytes of memory, and I can't figure out
how to build a computer with 200 bytes of memory (pretty easy for me 8-)
then that means I have 56 additional bytes to play with, maybe by adding
code to run blinkenlights or something.
Besides, who wants to build a computer for this? Okay, except for Tony?
That's the ultimate test, though, isn't it?
> Was the processor in question available in 1983? As I recall, the 6809
> was, but there are some which weren't.
>
> Now, for the more subjective aspects of the comparison, how was the code
> initially generated? How long did it take to code the problem? How long
> to debug it?
This I'd rather not include, as it's very subjective. It only took me
about an hour or so to code and debug the program, but I'm a software guy
who's been programming for 15 years or so, and the 6809 was the first CPU
I learned assembly language on. It might take Tony four hours to get a
similar program running. By the same token, he could probably get a simple
computer system running in an hour that would take me four hours.
It really depends upon how much experience you have, both in programming
and with the CPU in question. I know that it would take me longer to write
this program for the 6502 or the Z80, neither of which I've ever written
code for (though I can read code for each CPU).
> How is the 6809E relevant to the timing of the Z-80 and 6502?
Nothing at all, except as an outside reference. That, and I don't really
know Z80 or 6502 code (nor do I have development systems for these chips).
It's certainly an outside reference. It may be a challenge for everyone to
improve on it ... We'll see, I guess.
-spc (Gee, I thought it was a pretty simple problem myself ... )