Answer: Weekly Classic Computer Trivia Question (20150112)

Doug Ingraham dpi at dustyoldcomputers.com
Wed Jan 14 17:50:21 CST 2015


More discussion ensued than was expected for this question.

On a minimal 4k machine (no extended memory and no EAE), with no data break
devices operating, the longest latency would be that of a memory reference
instruction with a defer (indirect address) cycle.  On a Straight 8 this
would be 4.5 microseconds.  A single-cycle data break device could have
delayed the interrupt indefinitely, although I know of no such devices.  A
three-cycle break could extend the delay by an additional three memory
cycles (4.5 microseconds), for a total of 9 microseconds.
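
For concreteness, here is a hypothetical PAL-style fragment (the labels are
mine) showing that worst case; a pending interrupt is not recognized until
the instruction finishes all of its memory cycles:

        TAD I PTR       / fetch + defer + execute: three 1.5 microsecond
                        / memory cycles, 4.5 microseconds in all, during
                        / which a pending interrupt simply waits
PTR,    BUF             / pointer word for the indirect reference
BUF,    0               / the operand itself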

If the machine was equipped with an EAE, then the divide instruction would
have delayed entry of the interrupt service routine by up to 35
microseconds (on a Straight 8).  I believe this was a 9 microsecond
instruction on the 8/e EAE.
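
If I remember the calling sequence correctly (take the fragment below as a
sketch from memory, not a checked Type 182 listing), a divide looks roughly
like this, and a pending interrupt has to wait out the whole operation:

        CLA             / clear AC
        TAD DVLO        / low half of the 24-bit dividend
        MQL             / move it into MQ, clearing AC
        TAD DVHI        / high half of the dividend into AC
        DVI             / divide; up to about 35 us on the Straight 8 EAE
        12              / the divisor, taken from the word following DVI
        HLT             / quotient is now in MQ, remainder in AC
DVHI,   0               / dividend, high 12 bits
DVLO,   0               / dividend, low 12 bits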

If the machine is equipped with extended memory then there is the
CIF-induced delay.  CIF is the Change Instruction Field instruction.  This
instruction only schedules the change; the actual change does not take
place until the next JMP I or JMS I instruction.  Interrupts are masked
from the CIF until the conclusion of that indirect branch, because if one
were taken in between, the interrupt handler would not be able to restore
the instruction field correctly.  I have been unable to think of a reason
why you would ever want to delay the indirect branch, although you
certainly could do so.  That means the sequence CIF followed by a JMS I
would only see the additional delay from the JMS I, which is the same as
the delay from a memory reference instruction with a defer.  The JMP I
variant would only be 3 microseconds.
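
As a reminder of the idiom (my own sketch, with made-up labels), the usual
cross-field call looks like the fragment below; interrupts are off from the
CIF through the completion of the indirect call, and again across the
return:

        / caller, running in field 0
        CIF 10          / schedule instruction field 1; from here on
                        / interrupts are held off by the CIF...
        JMS I SUBP      / ...until this indirect call completes in field 1
        HLT             / execution resumes here after the return below
SUBP,   SUBR            / pointer to the entry point over in field 1

        FIELD 1
SUBR,   0               / JMS deposits the return address here
        CIF 0           / schedule the switch back to field 0
        JMP I SUBR      / indirect return; interrupts again wait for it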

And the greatest source of interrupt latency on a Straight 8 would have
been the Option 189, the low-cost A/D converter.  This option added the
ability to do 6- to 12-bit A/D conversions as a CPU instruction (6004).  It
performed a typical successive approximation conversion using the MB and AC
registers.  When 12-bit conversion was selected, this instruction would
take 56.6 microseconds to execute, according to the manual.  I don't have a
189 for my Straight 8, but I am planning on hunting down the cards I am
missing.
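
Purely as an illustration (I am going from the manual description above;
the guesses that you clear the AC first and that the converted value is
left in the AC are mine), a sampling loop around the 6004 IOT might look
something like this:

ADCV=6004               / the Type 189 convert IOT
LOOP,   CLA             / start with a clear AC (my assumption)
        ADCV            / do the conversion; with 12 bits selected the
                        / processor is busy here for about 56.6 us
        DCA I 10        / store the sample via auto-index location 10
                        / (which must be set up to point at a buffer)
        JMP LOOP        / go take the next sample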

And the answer Mr. Stearns gave for this question was "No matter what it is
faster than Windows."



On Mon, Jan 12, 2015 at 9:10 AM, Doug Ingraham <dpi at dustyoldcomputers.com>
wrote:

> This week's question comes from Warren Stearns.  It is a little obscure and
> specific to the PDP-8 family of computers.
>
> What is the source of the greatest latency in the interrupt system on a
> PDP-8?
>
> I have three answers; two of them depend on the options fitted to an 8.
>
> Why would this have been an important question?  Interrupt latency would
> be extremely important in the field of data collection, which was one of
> the principal early uses of these machines.  My particular 8 was used for
> exactly this purpose in the summer, when it was hauled to a radar site and
> collected weather research data for the Institute of Atmospheric Sciences.
> I believe it was used for this from 1969 through 1972.  I have several
> hundred DECTapes with some of this data.  The surprising thing is we don't
> see any problems reading DECTapes that haven't been out of their box since
> the early '70s.
>
> Doug Ingraham
> PDP-8 S/N 1175
>

