Computing from 1976
Paul Koning
paulkoning at comcast.net
Mon Jan 1 14:00:24 CST 2018
> On Jan 1, 2018, at 12:26 PM, dwight via cctalk <cctalk at classiccmp.org> wrote:
>
> One other thing becomes a problem as machines get larger and faster: probability!
>
> We think of computers as always making discrete steps. This is not always true. The processors run so fast that different areas, even using the same clock, have enough skew that the data has to be treated as asynchronous. Transferring asynchronous information is always a probability issue. It used to be that 1 part in 2^30 was such a large number that it could be ignored.
>
> Parts often use ECC to account for this, but that only works if the loss is recoverable (not always so).
That doesn't sound quite right.
"Asychronous" does not mean the clock is skewed, it means the system operates without a clock -- instead relying either on worst case delays or on explicit completion signals. That used to be done at times. The Unibus is a classis example of an asynchronous bus, and I suppose there are others from that era. The only asynchronous computer I can think of is the Dutch ARRA 1, which is notorious for only ever executing one significant program successfully for that reason. Its successor (ARRA 2) was a conventional synchronous design.
Some 15 years ago, an ASIC company attempted to build processors with an asynchronous structure. That didn't work out, partly because the design tools didn't exist; I think they ended up building packet switch chips instead.
Clock skew applies to synchronous devices (since "synchronous" means "it has a clock"). It is a real issue in any fast computer, going back at least as far as the CDC 6600. It is handled by analyzing the worst-case skew and designing the logic to operate correctly in that case. (Or, in the case of the 6600, by tweaking until the machine seems to work.) ECC isn't applicable here: computer logic doesn't use ECC, since it doesn't really fit. ECC applies to memory, where it handles the fact that data is not stored with 100% reliability.
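For flavor, here is a back-of-the-envelope Python version of that worst-case analysis (all numbers invented for illustration): a path between two flops meets its setup requirement only if the clock period, less the worst-case skew, covers the launch clock-to-Q delay, the worst-case logic delay, and the capture setup time.

    clock_period_ns = 10.0    # 100 MHz clock
    clk_to_q_ns     = 0.8     # launch flop clock-to-output delay
    logic_max_ns    = 7.5     # worst-case combinational delay
    setup_ns        = 0.6     # capture flop setup requirement
    skew_ns         = 1.2     # worst-case skew between the two clocks

    # Setup check: assume the skew works against us (capture clock arrives early).
    slack_ns = clock_period_ns - skew_ns - (clk_to_q_ns + logic_max_ns + setup_ns)
    print(f"setup slack: {slack_ns:+.1f} ns ->", "OK" if slack_ns >= 0 else "FAIL")

With these numbers the path fails by 0.1 ns: a design that would work with zero skew is broken by the skew, which is why the analysis has to assume the worst case.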
I suppose you can design logic with error correction in it, and indeed you will find this in quantum computers, but I haven't heard of it being done in conventional computers.
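As an illustration of the memory-style ECC mentioned above, here is a minimal Hamming(7,4) sketch in Python: 4 data bits, 3 parity bits, and a syndrome that points directly at a single flipped bit. Real memory ECC uses wider SECDED codes (which also detect double errors), but the principle is the same.

    def encode(d):                      # d: list of 4 data bits
        code = [0] * 8                  # positions 1..7 used, 0 ignored
        code[3], code[5], code[6], code[7] = d
        code[1] = code[3] ^ code[5] ^ code[7]  # parity over positions 1,3,5,7
        code[2] = code[3] ^ code[6] ^ code[7]  # parity over positions 2,3,6,7
        code[4] = code[5] ^ code[6] ^ code[7]  # parity over positions 4,5,6,7
        return code[1:]                 # the 7-bit codeword

    def correct(word):                  # word: 7-bit codeword, possibly hit
        c = [0] + list(word)
        syndrome = 0
        for p in (1, 2, 4):
            if sum(c[i] for i in range(1, 8) if i & p) % 2:
                syndrome += p
        if syndrome:                    # nonzero syndrome = error position
            c[syndrome] ^= 1
        return [c[3], c[5], c[6], c[7]] # the recovered data bits

    data = [1, 0, 1, 1]
    word = encode(data)
    word[4] ^= 1                        # flip one bit "in storage"
    assert correct(word) == data        # the single-bit error is repaired

Note that this corrects any single flipped bit but cannot recover from a double flip, which is the "recoverable" qualification above.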
> Even the flipflops have a probability of losing their value. Flops that are expected to hold state for a long time are designed differently than flops that are only used for transient data.
Could you give an example of that, i.e., the circuit design and where it is used? I have never heard of such a thing and find it rather surprising.
paul