On Jul 14, 2015, at 4:41 PM, Chuck Guzis <cclist at
sydex.com> wrote:
On 07/14/2015 10:29 AM, Paul Koning wrote:
The accuracy of the FPGA depends on the approach.
If it's a
structural (gate level) model, it is as accurate as the schematics
you're working from. And as I mentioned, that accuracy is quite
good; it lets you see obscure details that are not documented and
certainly not visible in a software simulator. The example I like to
point to is the 6000 property that you can figure out a PPU 0 hard
loop by doing a deadstart dump and looking for an unexpected zero in
the instruction area: deadstart writes a zero where the P register
points at that time. But you won't find that documented or explained
anywhere. The FPGA model derived from the schematics reproduces this
behavior, and when you look at how it happens, the explanation
becomes blindingly obvious. *This* is why I feel there's a point in
doing this sort of work.
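The trick described above can be sketched in a few lines: scan the instruction region of a dumped PP memory for an unexpected zero word, since deadstart overwrites the location P pointed to. The function name, the dump layout, and the toy data below are all hypothetical; real CDC 6000 deadstart dump formats differ.

```python
# Hypothetical sketch of the "unexpected zero" diagnostic described above.
# Assumes the PP memory dump is available as a flat list of 12-bit words.

def find_suspect_zeros(ppu_memory, code_start, code_end):
    """Return addresses in [code_start, code_end) holding a zero word.

    Deadstart writes a zero at the location the P register pointed to,
    so an unexpected zero inside a code region hints at where a PPU
    was stuck in a hard loop when the dump was taken.
    """
    return [addr for addr in range(code_start, code_end)
            if ppu_memory[addr] == 0]

# Toy example: 16-word memory, code occupying addresses 2..10,
# with a zero planted at address 7 (where P pointed at deadstart).
mem = [0o0777] * 16
mem[7] = 0
print(find_suspect_zeros(mem, 2, 10))  # [7]
```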
I can agree with some points, but not others. In the 6600, for example, clock
distribution was a big design issue--not so in an FPGA. You had racks of taper-pin mats
of wiring between the cordwood modules extending over (by today's standards) long
distances. Cooling was a huge issue. In those respects, an FPGA hardly resembles a
"real" 6600.
Certainly, the physical aspects are completely different. And clock distribution,
certainly. Not so much between chassis, interestingly enough, but throughout the logic
within a chassis. And wire delays in chassis to chassis cabling are very significant.
Wire delays within the chassis matter in a few cases, but 99% of the wires are short enough
that their delay is not a real consideration in comparison with the logic circuit stage
delay.
The example I gave, and others like it, are properties of the detailed logic design; they
are not dependent on the details of the timing closure, or the physical design of the
machine.
paul