On Jul 14, 2015, at 1:17 PM, Chuck Guzis <cclist at sydex.com> wrote:
> I'm missing something in this discussion, I think.
> HDLs (take your pick) are just programming languages like FORTRAN or C with
> different constraints. What's the point of going to all the trouble of doing an FPGA
> implementation of a slow old architecture, when pretty much the same result could be
> obtained by running a software emulator? Neither accurately reflects the details of the
> real thing--and there will always be the aspect of missing peripherals. ...
> I've run the Cyber emulator as well as various SIMH emulators from time to time, but
> it's just not the same as the real thing--it's not even remotely the same.
One possible answer is "because I can".
As for whether it accurately reflects the details of the real thing, that depends. Not
the peripherals, of course. If the peripherals are much more interesting than the CPU, I
agree there isn't much point. In the case of machines like the CDC 6600, the CPU is very
interesting, the PPUs also, some of the peripheral controllers to some extent, but the
peripheral devices themselves are not interesting at all. An FPGA model can reproduce the
interesting parts.
The accuracy of the FPGA model depends on the approach. If it's a structural (gate-level)
model, it is as accurate as the schematics you're working from. And as I mentioned, that
accuracy is quite good; it lets you see obscure details that are not documented and
certainly not visible in a software simulator. The example I like to point to is the 6000
property that you can figure out a PPU 0 hard loop by doing a deadstart dump and looking
for an unexpected zero in the instruction area: deadstart writes a zero where the P
register points at that time. But you won't find that documented or explained anywhere.
The FPGA model derived from the schematics reproduces this behavior, and when you look at
how it happens, the explanation becomes blindingly obvious. *This* is why I feel there's
a point in doing this sort of work.
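To make the trick concrete, here is a minimal sketch in Python (my own illustration, not
anything from the 6000 documentation) of scanning a PPU 0 deadstart dump for that
telltale zero word. The dump format, and the bounds I use for the instruction area, are
assumptions chosen for the example:

def find_hard_loop(dump, code_start=0o100, code_end=0o7777):
    # Scan a PPU 0 memory dump (here a Python list of 12-bit words,
    # indexed by address) for zero words in the presumed instruction
    # area. Deadstart stores a zero at the word P pointed to, so a
    # zero where an instruction should be marks where the PPU was
    # hard-looping. The code_start/code_end bounds are assumptions.
    return [addr for addr in range(code_start, min(code_end + 1, len(dump)))
            if dump[addr] == 0]

# Example: a fake 4096-word dump with the deadstart zero at 1234 octal.
dump = [0o2400] * 4096
dump[0o1234] = 0
print([oct(a) for a in find_hard_loop(dump)])   # prints ['0o1234']

The same check is easy enough to do by eye on an octal dump listing, of course; the point
of the FPGA model is that it shows *why* the zero appears there at all.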
paul