On 16.11.2011 19:15, Chuck Guzis wrote:
On 15 Nov 2011 at 20:40, Keith Monahan wrote:
FWIW, I love simulation with FPGAs. It usually reveals my beginner mistakes, and is a pretty powerful tool to help test your design.
While I haven't done enough to demonstrate it, there are differences between simulation and real hardware.
This and Tony's comment about using discrete logic rather than FPGA points up an interesting, but important, limitation of FPGAs (and CPLDs): they're clocked designs.
While Tony can use his 7400-series logic to implement an asynchronous design, almost all FPGA implementations must have a clock of some sort.
Not every design employing logic is a CPU and not all employ clocked logic.
Someone who wants to substitute, say, a CPLD for a bunch of unclocked TTL is going to have to come up with a clock--and then determine how that will affect function.
Achronix is the only vendor that I can recall offhand even discussing asynchronous FPGAs--and it's not even clear to me if they're still offering them. Simulation must be a nightmare.
--Chuck
When I was working on my PhD on testing of digital systems 20 years ago, we already had this discussion: asynchronous vs. synchronous design, or, in more academic terms, Moore vs. Mealy finite state machines.
The answer was: always use synchronous, registered designs.
Why? The system clock determines the maximum frequency at which the design can run.
Logic paths can be designed and verified to meet the corresponding maximum propagation delays; they are also loop-free (any feedback loop passes through at least one flip-flop or register), which makes them far more testable.
Asynchronous effects such as glitches are much easier to avoid, because a local instability does not propagate past the next register in the path (this implies latching the outputs as well).
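To make that concrete, here is a minimal VHDL sketch (entity and signal names are purely illustrative, not from any real design): the combinational decode may glitch while its inputs settle, but since the result is only sampled at the clock edge, nothing behind the register ever sees the glitch, provided the path meets the clock period.

  library ieee;
  use ieee.std_logic_1164.all;

  entity decode_reg is
    port (
      clk     : in  std_logic;
      a, b, c : in  std_logic;
      y       : out std_logic    -- registered, therefore glitch-free output
    );
  end entity;

  architecture rtl of decode_reg is
    signal y_comb : std_logic;
  begin
    -- combinational decode: may glitch while a/b/c settle
    y_comb <= (a and b) or ((not a) and c);

    -- output register: a glitch on y_comb is invisible after the clock edge,
    -- as long as static timing analysis confirms the path fits the clock period
    process (clk)
    begin
      if rising_edge(clk) then
        y <= y_comb;
      end if;
    end process;
  end architecture;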
Needless to say, simulation is much simpler, and the toolchain that validates and optimizes the design can produce much better results than for circuits that rely on parasitic effects.
FPGAs are built for exactly this synchronous paradigm; no surprise that they, and their associated tool chains, are barely usable for asynchronous circuitry.
VHDL can, of course, simulate asynchronous designs, and much better than Verilog (which was originally invented for synchronous ASIC design); the difficulty is then mapping such a tricky circuit onto real hardware.
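As a (made-up) example of such a tricky circuit: the classic SR latch built from two cross-coupled NAND gates. A VHDL simulator resolves the combinational feedback via delta cycles without complaint, but FPGA synthesis tools will typically warn about the combinational loop and cannot guarantee its timing on LUT fabric.

  library ieee;
  use ieee.std_logic_1164.all;

  -- Asynchronous SR latch: purely combinational feedback, no clock.
  entity sr_latch is
    port (
      set_n, reset_n : in  std_logic;   -- active-low set / reset
      q, q_n         : out std_logic
    );
  end entity;

  architecture async of sr_latch is
    signal qi, qni : std_logic;
  begin
    -- cross-coupled NAND gates: simulates fine, maps poorly onto FPGAs
    qi  <= set_n   nand qni;
    qni <= reset_n nand qi;
    q   <= qi;
    q_n <= qni;
  end architecture;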
I think the problem here is the viewpoint of the "old farts" who know the hardware hacks in old computers by heart and believe that an asynchronous TTL board is an example of good design.
This point of view is similar to the old "to GOTO or not to GOTO" debate: while GOTO is ubiquitous in assembler, FORTRAN and BASIC, there are very good reasons to avoid it wherever possible. A few applications do exist where GOTO, like asynchronous logic, is appropriate, but it should then be used like salt in a meal, not as an excuse for producing crappy designs.
The main reason older digital designs were full of asynchronous circuitry was transistor, gate, or chip count: it was better and cheaper to add an RC delay line to solve a timing problem than to add another chip to the board. Without high-performance simulation on computers, one was forced to measure the effects with logic analyzers or scopes; it is understandable that Real Engineers(tm) were fonder of playing with real circuitry than of reading sheets of simulation output after hours of batch simulation.
Nowadays a 500-gate FPGA does not differ very much from a 100,000-gate FPGA in terms of physical size, so whether to put a simple 4-bit counter into a 500-gate FPGA or a 100K-gate one becomes mainly a cost question; the development process is exactly the same. In any case, there is no reason to hack it into emulating artificially asynchronous behaviour. Modern circuits are not there to replace a TTL graveyard one-for-one.
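For what it's worth, such a counter is a one-page exercise in synchronous VHDL; this sketch (names again purely illustrative) goes through exactly the same simulation and synthesis flow regardless of the size of the target device.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity counter4 is
    port (
      clk   : in  std_logic;
      rst   : in  std_logic;                     -- synchronous reset
      count : out std_logic_vector(3 downto 0)
    );
  end entity;

  architecture rtl of counter4 is
    signal cnt : unsigned(3 downto 0) := (others => '0');
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          cnt <= (others => '0');
        else
          cnt <= cnt + 1;                        -- wraps from 15 back to 0
        end if;
      end if;
    end process;

    count <= std_logic_vector(cnt);
  end architecture;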
--
Holger