I worked on a Model 1 with 40k memory (my very first computer experience) and floating
point, and later a stripped Model 2. I believe the Model 2 still used table lookup for
multiply.
Floating point on the Model 1 (and I think the Model 2) was limited to a 98-digit
mantissa, still more precision than the hardware in any subsequent computer AFAIK. Since
the exponent range was 10**-99 to 10**99, it was also a broader range than any computer
until many years later, I think.
Fixed-point divide was limited to (?) 10,000 digits because it ended at address x0099. I
set that up once; it took a loonng time for that one instruction to execute. It might
have been only 1,000 digits.
I don't know whether the disk I/O RPQ overlapped; I saw one through a glass window once.
I'll have to look up the 1710. There are emulators for the 1620; I hope there are for the
1710 too. I had forgotten, or never knew, the relationship. Thank you.
--Carey
On 10/03/2024 10:13 AM CDT David Wise via cctalk
<cctalk(a)classiccmp.org> wrote:
The IBM 1710 was a 1620 enhanced for process control. It had interrupts.
Dave Wise
________________________________
From: Paul Koning via cctalk <cctalk(a)classiccmp.org>
Sent: Thursday, October 3, 2024 7:05 AM
To: cctalk(a)classiccmp.org <cctalk(a)classiccmp.org>
Cc: Paul Koning <paulkoning(a)comcast.net>
Subject: [cctalk] Re: Might be antique computer parts
On Oct 2, 2024, at 5:23 PM, Van Snyder via cctalk
<cctalk(a)classiccmp.org> wrote:
On Wed, 2024-10-02 at 16:39 -0400, Paul Koning via cctalk wrote:
For the earlier 1311, lack of overlap made
perfect sense. After all,
the 1620 has no interrupts, no parallelism of any kind: every I/O
operation stalls the CPU until the operation is finished. (That and
the BB instruction are among the reasons why Dijkstra rejected the
1620.)
The 1401 had overlap, but as far as I can tell only for cards and tape. The 1403 printer
had a buffer, and the 1401 had instructions to test whether the printer or carriage was
busy, but that "overlap" didn't work the same way as it did for cards and tape.
I remember the 1620 being called CADET, but not because it was a
beginner computer. It didn't have arithmetic hardware. It was done by
table lookup. CADET meant "Can't Add, Doesn't Even Try." One of my
colleagues exploited the table-based arithmetic to do octal arithmetic
for satellite telemetry processing.
That would be the Model 1, which had "add-multiply tables" in low core. Adding and
multiplying were done by looking up entries in those tables. I assume the address was
formed from the two input digits, producing a two-digit result (or, for add, maybe a
one-digit result plus a carry bit encoded in the sign bit). Core memory is of course
non-volatile, so unless a program scribbled wildly, those tables would carry over from
one run to the next. When in doubt, you could load the add-multiply table card deck to
restore the correct values.
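In modern terms the lookup scheme can be sketched like this. This is my reconstruction, not the actual low-core layout: I index the table by first digit times ten plus second digit, and keep the carry as a separate flag rather than encoding it in a sign bit:

```python
# Sketch of 1620 Model 1 style table-lookup addition (my reconstruction).
# The 100-entry add table is indexed by the two input digits; each entry
# holds the result digit plus a carry flag.
ADD_TABLE = [((a + b) % 10, (a + b) >= 10)
             for a in range(10) for b in range(10)]   # index = a*10 + b

def add_fields(x, y):
    """Add two decimal digit strings of any length, using only table
    lookups on digit pairs (no '+' applied to the digits themselves)."""
    dx = [int(d) for d in x][::-1]        # least significant digit first
    dy = [int(d) for d in y][::-1]
    n = max(len(dx), len(dy))
    dx += [0] * (n - len(dx))
    dy += [0] * (n - len(dy))
    out, carry = [], 0
    for a, b in zip(dx, dy):
        s, c1 = ADD_TABLE[a * 10 + b]       # digit + digit
        s, c2 = ADD_TABLE[s * 10 + carry]   # + incoming carry
        carry = int(c1 or c2)               # at most one carry can occur
        out.append(s)
    if carry:
        out.append(1)
    return "".join(str(d) for d in out[::-1])
```

Because the operands are just digit strings, the same loop handles 5-digit and 5000-digit fields alike; only the running time differs.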
And yes, you could do octal arithmetic that way, in the sense of interpreting a string of
digits in memory as octal rather than decimal digits. Neat hack.
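The octal trick can be illustrated in the same spirit: swap in a base-8 table and the identical digit-serial loop now yields octal results. A toy version (my construction, not the actual telemetry code):

```python
# Toy version of the octal-table hack (my construction): load a base-8
# add table in place of the decimal one, and the same table-driven loop
# then does octal arithmetic on digit strings in memory (the digits 8
# and 9 must simply never appear in the operands).
OCT_TABLE = [((a + b) % 8, (a + b) >= 8)
             for a in range(10) for b in range(10)]   # index = a*10 + b

def oct_add(x, y):
    """Add two octal digit strings using only table lookups."""
    dx = [int(d) for d in x][::-1]
    dy = [int(d) for d in y][::-1]
    n = max(len(dx), len(dy))
    dx += [0] * (n - len(dx))
    dy += [0] * (n - len(dy))
    out, carry = [], 0
    for a, b in zip(dx, dy):
        s, c1 = OCT_TABLE[a * 10 + b]       # digit + digit
        s, c2 = OCT_TABLE[s * 10 + carry]   # + incoming carry
        carry = int(c1 or c2)               # at most one carry occurs
        out.append(s)
    if carry:
        out.append(1)
    return "".join(map(str, out[::-1]))
```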
Ours was a Model 2, which differs in a number of ways; one major difference is that it
has add and multiply hardware. So while we did have an add-multiply card deck sitting
around, it wasn't actually needed.
A fun aspect of both machines is that they would do arithmetic on variable-length
operands, of any length if you were patient enough. So you could add a pair of 5000-digit
numbers. The same was true for floating point (I don't know if that was only in the Model
2): the mantissa was variable length, while the exponent was always two digits. I
remember the Fortran compiler had an option to choose the "field length" (number of
digits) for the integer and real data types, allowing you to pick any length up to 20
digits separately for each.
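For a modern feel of that pick-your-precision idea (an analogy only, not the 1620's representation), Python's decimal module lets the working precision be chosen at run time, much as the Fortran field-length option chose the digit count for reals:

```python
from decimal import Decimal, getcontext

# Analogy only (not the 1620's actual representation): the working
# precision is a run-time setting, much like the 1620 Fortran "field
# length" option that chose the digit count for real variables.
getcontext().prec = 20          # 20-digit reals
a = Decimal(1) / Decimal(3)     # 20 significant digits of 1/3

getcontext().prec = 50          # widen to 50 digits for another run
b = Decimal(1) / Decimal(3)     # 50 significant digits of 1/3
```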
Speaking of interrupts vs. blocking I/O, there's an interesting hybrid technique
I've seen only in one place: the Electrologica X1 will interrupt on completion of an
I/O, but also allows you to start a new I/O on a device that's still busy with the
preceding one. If you do that, the second I/O command blocks the CPU until the first one
is done; then the I/O start is issued and the instruction completes. In
practice, I/O was interrupt driven, but with the help of what is arguably the world's
first BIOS, a set of ROM resident I/O library routines originally written by Dijkstra for
his Ph.D. project. As far as I can tell, the X1 was the world's first production
computer with interrupts as a standard feature.
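That hybrid convention is easy to model. Here is a sketch (my own toy model in Python threads, not X1 code) in which starting an I/O on a busy device stalls the caller until the previous transfer finishes, then issues the new one and returns at once:

```python
import threading
import time

class Device:
    """Toy model of the X1 convention: a start-I/O on a busy device
    stalls the "CPU" until the previous operation completes, then
    issues the new operation and lets the CPU run on."""

    def __init__(self):
        self._done = threading.Event()
        self._done.set()              # device idle at power-up

    def start_io(self, seconds):
        self._done.wait()             # CPU stalls here if device is busy
        self._done.clear()
        def transfer():               # the transfer runs "in hardware"
            time.sleep(seconds)
            self._done.set()          # completion (would raise an interrupt)
        threading.Thread(target=transfer).start()
        # instruction is complete; CPU continues while the I/O proceeds

dev = Device()
t0 = time.monotonic()
dev.start_io(0.2)                     # returns at once: device was idle
first = time.monotonic() - t0
dev.start_io(0.2)                     # blocks until the first one finishes
second = time.monotonic() - t0
```

In the real machine completion raised an interrupt rather than merely setting a flag; the sketch only shows the block-on-busy behavior of the second start.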
paul