On 2015-01-13 00:57, Doug Ingraham wrote:
> On Mon, Jan 12, 2015 at 11:10 AM, Johnny Billquist
> <bqt at update.uu.se> wrote:
>> What do you mean by interrupt latency here?
>> The latency before an interrupt is actually propagated to the CPU? The
>> latency before the interrupt is reacted to by the CPU (assuming
>> interrupts are ON, I hope...)? The latency internal to a controller
>> before it even raises the interrupt request?
> By latency I mean the time between when the interrupt source signals the
> presence of an interrupt and when the JMS Z 0 to field 0 takes place.
Ok. Assuming interrupts are on, the longest time would be if we have a
data break going on at the same time, since that is serviced before the
interrupt signal is sampled, combined with an instruction that takes a
long time.
The longest would (I think) be the EAE mode B DVI instruction.
Without the EAE, an indirect memory reference would be the longest
instruction I can think of.
On top of that, you would also have one data break cycle. However, I
wonder if the three-cycle data break (what was it actually called? my
memory is blank right now) actually does three DMA transfers during one
instruction cycle. That is possible, which would then definitely add a
bunch of time.
What that adds up to in time depends on the CPU...
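For a rough feel of how the pieces combine, here is a minimal
back-of-envelope sketch in C. It just adds the cycles of the longest
instruction to the data break cycles stolen before the interrupt is
sampled; every figure in it (memory cycle time, cycle counts) is a
placeholder assumption, not a number from DEC documentation, so plug in
the handbook values for the actual CPU and EAE in question:

/* Back-of-envelope model of the worst case discussed above: the
 * interrupt is only sampled at an instruction boundary, after any
 * pending data break has been serviced, so the worst case is roughly
 * one maximal-length instruction plus the stolen data break cycles.
 * All figures below are assumed placeholders. */
#include <stdio.h>

int main(void)
{
    double mem_cycle_us         = 1.5; /* assumed core memory cycle time */
    int    longest_instr_cycles = 3;   /* assumed: an indirect memory
                                          reference instruction          */
    int    data_break_cycles    = 3;   /* assumed: one three-cycle data
                                          break stolen before sampling   */

    double worst_us = (longest_instr_cycles + data_break_cycles)
                      * mem_cycle_us;

    printf("worst-case latency to the forced JMS: ~%.1f us\n", worst_us);
    return 0;
}

Whether you also count the cycle of the forced JMS 0 itself depends on
where you stop measuring, per Doug's definition above.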
Johnny
--
Johnny Billquist                  || "I'm on a bus
                                  ||  on a psychedelic trip
email: bqt at softjar.se          ||  Reading murder books
pdp is alive!                     ||  tryin' to stay hip" - B. Idol