On Saturday 12 August 2006 04:53 pm, Jeff Walther wrote:
Date: Fri, 11 Aug 2006 12:58:56 -0700
From: Brent Hilpert <hilpert at cs.ubc.ca>
Chuck Guzis wrote:
At one time async logic was a hot topic.
The IAS machine (von Neumann/late 1940s) is listed in various places
(under 'clock rate') as being 'async'. (And - annoyingly - those listings
then don't provide an effective instruction rate for the sake of comparison.)
I've been curious as to how, more precisely, the timing was accomplished in
that machine (those machines). Offhand, I suspect you still end up with delay
elements at various points in the design to ensure that some (worst-case)
group of signals/paths are all ready/stable at a given point, so you end up
with a more-or-less 'effective clock rate' anyway and don't gain much.
It all started with ENIAC, didn't it? Based on what I've been able to
find/read, ENIAC itself could be described as an async design.
Was async still being discussed in the 60's?
I worked on some "non-clocked" logic designs for a little company
called Theseus. As far as I know they're still in business. It's
been a while, so my memory is hazy, and it was definitely an
unconventional design.
The basic scheme (IIRC) was to use two wires per bit of information.
Three of the four possible states were used. '0' and '1' were two of
the states and 'ready' was the third state, except I don't think they
called it 'ready' but that'll do for this discussion.
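
To make that concrete, here's a rough sketch in Python of the encoding as
I've described it. The names READY/ZERO/ONE and the particular wire-pair
assignments are just my guesses for illustration, not Theseus's actual
convention:

    # Two wires per bit; three of the four combinations are used.
    READY, ZERO, ONE = (0, 0), (1, 0), (0, 1)   # (1, 1) is unused/illegal

    def decode(wire_pair):
        """Return 0 or 1 once data is present, or None while still 'ready'."""
        if wire_pair == READY:
            return None                  # no data on this bit yet
        if wire_pair == ZERO:
            return 0
        if wire_pair == ONE:
            return 1
        raise ValueError("illegal wire state (1, 1)")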
When you reached a set of registers (flops) in the logic (say a
grouping of 8 bits for a bus) you'd have 'acknowledge' logic which
would signal back upstream that it was ready for the next
computation. It depended on all eight registers reaching a data
state (0 or 1) before it signaled ready back upstream. Then, and
this is where I get hazy, all the registers would get reset to the
ready state before the next set of data was processed. I think. It
really has been a while.
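
Roughly, the acknowledge behaviour I'm remembering would look something like
this in Python (same made-up encoding as in the sketch above, and again just
my reconstruction of the idea, not their actual circuit):

    READY, ZERO, ONE = (0, 0), (1, 0), (0, 1)   # same encoding as above

    def group_complete(register):
        # Completion detection: every bit pair holds data, i.e. none is still READY.
        return all(pair != READY for pair in register)

    def handshake_cycle(register, new_data):
        # Data wavefront arrives: each bit leaves READY for ZERO or ONE.
        register[:] = [ONE if bit else ZERO for bit in new_data]
        ack = group_complete(register)      # 'ready for next' signal back upstream
        # Return-to-ready phase before the next set of data is accepted.
        register[:] = [READY] * len(register)
        return ack

    reg = [READY] * 8                       # an 8-bit register group, all 'ready'
    print(handshake_cycle(reg, [1, 0, 1, 1, 0, 0, 1, 0]))   # -> True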
So, in practice, you have 2 to 4 times as much logic because you have
two wires per bit plus acknowledge logic flowing back upstream.
On the other hand, if nothing is being processed, then your circuitry
is idle and not switching. This can save a bundle of power depending
on the application.
Additionally, the logic pipeline can operate as fast as it possibly
can, without being held back by a clock. So in some cases one gains
speed. And you don't have to worry about routing finicky clocks all
over the chip.
Still, you have the overhead of those acknowledge signals.
Plus, since it's an unconventional logic family, there are no
sophisticated tools and libraries available, so it takes longer to
design for and requires more design discipline from the designer.
If you applied the same amount of effort and discipline to
conventional design, you might end up with something just as good or
better, but the non-clocked logic paradigm forces the extra effort.
This reminds me a bit of stuff that I've seen where signals were fed from one
(custom) chip to the next with only 2 or 3 lines, mostly in some musical
electronic equipment...
Supposedly, non-clocked logic can also offer greater security because
there's no clock signal for remote sensors to key on when trying to
sense what the CPU is doing. This seemed a little odd to me. Do
espionage types really try to sense what a processor is doing
remotely, based on the EM emissions from the chip?
To the best of my understanding of such stuff, most of what's done in that
regard is picking up _video_ signals, though I'm not going to say that it
can't happen some other way as well; I suppose lots of things can be done if
one throws enough money at them.
--
Member of the toughest, meanest, deadliest, most unrelenting -- and
ablest -- form of life in this section of space, a critter that can
be killed but can't be tamed. --Robert A. Heinlein, "The Puppet Masters"
-
Information is more dangerous than cannon to a society ruled by lies.
--James M. Dakin