In effect, what the logic analyser did was to display
the results of
stepping
through about 100 instructions showing the data values and address pointers
that were required as they were used by the program. This is exactly what
a debugging session would be like when a user actually stops the program
and then manually and visually inspects the values of interest. While the logic
analyser is much more restrictive, its great advantage is that it does NOT
I wouldn't necessarily claim it's more restrictive, rather that it's
another tool you can use. Sometimes software debuggers and software
breakpoints (which stop the program under test running when some
condition is met and let you examine memory, CPU registers, etc.) are a
more appropriate way, sometimes they're not.
change the timing of the operation since it captures
any data it collects
in real time and stores it in its own buffers, outside of and apart from
the system being tested, without in any manner disturbing that system by
the observation. The logic analyser can do this because it
Basically a logic analyser is like a multiple-channel storage 'scope for
digital signals. Unlike a normal 'scope it doesn't record the voltages
of the signals, it just records whether they are a logic '0' or a logic
'1' (the logic threshold can generally be set so the instrument can be
used with TTL, CMOS at various supply voltages, ECL, etc).
is able to collect the data in its own buffer as the
data is produced by the
test system without regard to when the trigger location or event in the
system being tested will take place. When the event does occur, the logic
Yes, a logic analyser shouldn't have any effect on the device under test.
analyser then takes a simple action. It stops
accepting any more data and
retains the snapshot of the previous bunch of instructions that preceded
the event. It is like asking the breakpoint to take place at a certain
location,
What happens is that the analyser stores the state of its inputs at
regular time intervals (perhaps every 50ns, or every 1us, or ...). It
stores the last <n> samples in an internal buffer memory (<n> depends on
the instrument; it might be 1024 samples for an old analyser like the one
I use). When it gets a trigger it will then take a certain further number
of samples and then stop recording. If that 'further number' is 0, then
you get 1023 samples of what happened before the trigger event. If it's
1024, then you get the trigger event and what happened after it. If it's
512, then you get a recording with the trigger event in the middle. Some
analysers let you select the exact number of samples to take after the
trigger before stopping, others just give you the 3 choices I've
mentioned (start, end, middle), but that's normally enough.
then asking the system to reverse itself and run
backwards to look at what
HAD JUST HAPPENED during the previous 100 instructions. Obviously
impossible to do with a breakpoint in the normal way, but that is how the
logic analyser functions.
Exactly. And that's why the logic analyser can be more useful than a
breakpoint.
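
For what it's worth, here's a rough sketch in C of that capture scheme
(purely my own illustration, not any real instrument's firmware): the
last samples live in a circular buffer that is overwritten continuously,
and the trigger only decides how many more samples to take before the
recording stops. read_channels() and trigger_condition() are
hypothetical stand-ins for the acquisition front end.

#include <stdint.h>

#define BUF_SIZE 1024            /* e.g. 1024 samples on an old analyser */

static uint16_t buf[BUF_SIZE];   /* one 16-channel sample per entry      */
static int head = 0;             /* next slot to overwrite               */

/* Hypothetical stand-ins for the real acquisition hardware. */
extern uint16_t read_channels(void);         /* latch the probe inputs   */
extern int trigger_condition(uint16_t s);    /* trigger word comparator  */

/* post_trigger = 0 keeps only what led up to the trigger, BUF_SIZE
 * keeps only what followed it, BUF_SIZE/2 puts the trigger in the
 * middle of the record.                                              */
void capture(int post_trigger)
{
    int triggered = 0;
    int remaining = 0;

    for (;;) {
        uint16_t sample = read_channels();
        buf[head] = sample;                  /* overwrite the oldest sample */
        head = (head + 1) % BUF_SIZE;

        if (!triggered && trigger_condition(sample)) {
            triggered = 1;
            remaining = post_trigger;
        }
        if (triggered && remaining-- == 0)
            break;               /* buffer now holds the finished record */
    }
    /* Oldest retained sample is buf[head]; newest is the one before it. */
}

Reading the buffer out oldest-first, starting at head, gives exactly that
'what HAD JUST HAPPENED' view that a breakpoint can't.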
Please correct my memory of how the logic analyser functions if I did
not remember correctly - or if that is just one of the modes of operation.
See above for the basic 'capture modes' -- what the analyser does when
it's recording. When you look at the data, most analysers will display it
as a timing diagram (which is what you want when debugging hardware
problems); often you can also display it as a table of 1's and 0's,
possibly also in hex or octal, or ASCII characters, or... Many
instruments either include a computer or can be interfaced to a computer
so you can run a disassembler on the recorded data and actually see what
instructions were being executed.
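
As a small illustration (assuming, say, eight of the recorded channels
are wired to a data bus), producing that sort of state table is just a
matter of formatting each stored sample as bits and as a hex byte; a
disassembler would take the same bytes as its input:

#include <stdio.h>
#include <stdint.h>

/* Print each recorded sample as a row of 1's and 0's and as hex. */
void print_state_table(const uint8_t *samples, int n)
{
    for (int i = 0; i < n; i++) {
        printf("%4d  ", i);                  /* sample number */
        for (int bit = 7; bit >= 0; bit--)   /* MSB first     */
            putchar(((samples[i] >> bit) & 1) ? '1' : '0');
        printf("  %02X\n", samples[i]);
    }
}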
To accomplish this feat, the logic analyser probably has to run at least
10 times
faster than the system being tested. (Tony, do you have any estimates?)
It depends on the system. If the analyser allows an external clock
signal for recording (many do), and you have a suitable cycle strobe in
the system under test, you can get away with an analyser that's only as
fast as the machine you're working on. For example, I often use my
analyser to trace microcode flow in old minicomputers. I know the
microcode program counter can only change on a particular edge of a
steady clock signal. I use that signal to clock the analyser -- but on
the other edge (so the microcode program counter outputs are stable when
the analyser is trying to record them).
For other systems you haven't got a suitable clock to do that with. You
have to use the built-in clock circuit of the logic analyser, which
won't be synchronised to the system you're working on. So you have to run
the analyser considerably faster than the data rate of the machine under
test. I would think that 10 times was certainly fast enough.
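
Here's a toy simulation (again, just my own illustration) of why that
matters. A counter stands in for the microcode program counter and
changes once per system clock period. Sampling synchronously, half a
period after each change, records every value exactly once; a
free-running analyser clock at roughly the same rate drifts past the
system clock and misses values, which is why you oversample when you
can't clock the analyser from the system under test.

#include <stdio.h>

int main(void)
{
    const double sys_period = 1000.0;    /* ns: counter changes once per period      */
    const double async_period = 1250.0;  /* ns: analyser's own clock, unsynchronised */

    /* Synchronous capture: sample half a period after each change, so
     * the counter outputs are stable and every value is seen once.    */
    printf("synchronous: ");
    for (int i = 0; i < 10; i++) {
        double t = i * sys_period + sys_period / 2.0;
        printf("%d ", (int)(t / sys_period));   /* counter value at time t */
    }

    /* Asynchronous capture at a similar rate: the sample times drift,
     * so some counter values never appear (4 and 9 are skipped here). */
    printf("\nasynchronous: ");
    for (int i = 0; i < 10; i++) {
        double t = i * async_period;
        printf("%d ", (int)(t / sys_period));
    }
    printf("\n");
    return 0;
}

Running the analyser's internal clock ten or more times faster than the
system shrinks that uncertainty to a small fraction of a cycle.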
With the fastest CPUs being used today, that is
becoming increasingly MUCH
more difficult. Tony, is that what you are concerned about as well, how the
logic analysers are able to keep up?
That is certainly a problem (it's one reason I don't have a fast
computer, I can't afford a fast enough logic analyser to maintain it),
but it's not the problem I was thinking of here.
I was thinking of fairly traditional microcontrollers executing perhaps
20 million instructions per second at most. That is a rate that most
logic analysers can keep up with. But the problem is that if the
processor and program memory are on the same chip, there is no way of
connecting the analyser to the address and data buses so it can see
what's going on.
-tony