On 01/01/2018 11:26 AM, dwight via cctalk wrote:
One other thing becomes a problem as chips get larger and faster.
That is probability!
We think of computers always making discrete steps. This is not always true. The
processors run so fast that different areas, even using the same clock, have enough skew
that the data has to be treated as asynchronous. Transferring asynchronous information is
always a probability issue. It used to be that 1 part in 2^30 was such a large number that
it could be ignored.
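For scale, a rough Python sketch of what 1 part in 2^30 means at modern transfer rates (the
1 GHz rate below is an assumption for illustration, not a figure from the post):

    p_fail = 1 / 2**30      # assumed per-transfer failure probability
    rate   = 1e9            # assumed transfer rate: 1 GHz, one transfer per ns
    failures_per_second = p_fail * rate
    print(failures_per_second)      # ~0.93, i.e. roughly one failure per second
    print(1 / failures_per_second)  # ~1.07 seconds between failures

At that rate, a 1-in-2^30 event is no longer something you can ignore.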
Well, very few (like NONE) mainstream chips are globally
asynchronous. (Systems often are, but the CHIPS,
themselves, are almost always fully synchronous.) The main
reason for this is that the simulation software has not been
BUILT to handle that. A guy I work with is a partner in a
company working on simulation tools for GALS designs
(Globally Asynchronous, Locally Synchronous), which is a
fairly hot area of research now. And, note, there are
designs for synchronizers that reduce the probability issue
to FF metastability. Xilinx did some extensive work many
years ago on reducing FF metastability, and showed that with
dual-ranked FFs, the sun will burn out before you get an
incorrect event through the 2nd FF.
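To put numbers on that claim, here is a rough sketch of the standard synchronizer MTBF
model, MTBF = exp(t_met / tau) / (T0 * f_clk * f_data). The device constants and clock
rates below are assumed for illustration only; they are not Xilinx's published figures:

    import math

    tau    = 50e-12    # assumed metastability resolution time constant (s)
    T0     = 1e-9      # assumed metastability window constant (s)
    f_clk  = 100e6     # sampling clock (Hz)
    f_data = 10e6      # rate of asynchronous input transitions (Hz)
    t_clk  = 1.0 / f_clk

    def mtbf(t_met):
        """Mean time between failures, given t_met seconds to resolve."""
        return math.exp(t_met / tau) / (T0 * f_clk * f_data)

    # Single FF feeding logic directly: maybe only 1 ns of slack is left
    # for a metastable output to resolve before it gets used.
    print(mtbf(1e-9))            # ~5e2 s, a failure every few minutes

    # Dual-ranked FFs: the second FF gives the first nearly a full clock
    # period (10 ns here, less ~0.5 ns of setup) to settle.
    print(mtbf(t_clk - 0.5e-9))  # ~3e76 s, far longer than the sun will last

The second FF does not remove metastability; it just buys the first FF almost a whole
clock period to resolve, which drives the failure rate to astronomically small values.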
The Macromodule project at Washington University developed
this synchronization scheme and did a whole lot of
theoretical work on how to make it reliable and provable.
That work is the basis for many of the GALS concepts.
Jon