On 10/27/2017 3:54 AM, Dave Wade via cctech wrote:
Kip,
I think "emulation" and "simulation" get used pretty much
interchangeably.
SIMH is touted as a simulator, Hercules/390 as an emulator, yet they are
both programs that provide a "bare metal" machine via software on which
an operating system can be installed. Neither makes any attempt to
reproduce the speed of the original CPU.
I am going to stick with "emulator", as I think of "simulation" as a
process whereby we model some statistical or mathematical parameters,
e.g. how long the queues are in a supermarket, or what time high tide is
in Boston, using
only mathematics. Note this may involve a general purpose computer, or it
may use specialist machines such as the Doodson-Lege Tidal Predictor
http://www.ntslf.org/about-tides/doodson-machine
So to return to emulating other computers, we have at least five
different flavours...
1. Functional Software Emulation, where we match the functions but not
the speed of operation using a program. SIMH and Hercules are such
beasts. For much work this is fine. Most software emulators take this
approach.
2. Cycle Accurate Software Emulation/Simulation, where we attempt to
match both function and speed of the underlying hardware. This may be
necessary for software which uses software loops to control, say, the
speed of a UART. If you want to use the simulator for historical
research this may also help. Some emulators can be switched to this mode
when software needs it...
David Sharp's SSEM/Baby simulator is such a beast.
http://www.davidsharp.com/baby/
3. Behavioural Hardware Emulation
This is where we build a hardware implementation of a machine, but do not
attempt to duplicate the exact detail of the logic or its speed of
operation. Richard Stofer's IBM1130 in VHDL is such a project.
He doesn't have it available on the Web (I have a copy and have run
it), but there is a Flash video on the IBM1130.org site.
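A minimal VHDL sketch of what this behavioural style looks like
(illustrative only, not taken from Richard's project; the entity and
signal names are invented):

-- Behavioural description of an accumulator add. One line of
-- arithmetic stands in for whatever adder logic the original machine
-- used; the synthesizer chooses the gates, so neither the logic detail
-- nor the timing of the original hardware is reproduced.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity acc_add is
  port (
    clk  : in  std_logic;
    load : in  std_logic;              -- strobe: add operand into ACC
    op   : in  unsigned(15 downto 0);  -- operand from storage
    acc  : out unsigned(15 downto 0));
end entity;

architecture behavioural of acc_add is
  signal acc_r : unsigned(15 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        acc_r <= acc_r + op;  -- function only; gate structure left to the tools
      end if;
    end if;
  end process;
  acc <= acc_r;
end architecture;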
4. Cycle Accurate Behavioural Hardware Emulation
This is probably the most common approach to cycle accurate emulations.
Because FPGAs typically run several times faster than the clock on
legacy hardware, and they may contain high level function blocks, e.g.
multipliers, it's often "relatively easy" to match the instruction times
of a legacy CPU in an FPGA.
My BabyBaby FPGA implementation of the SSEM is such a beast. It runs at
the same speed as the replica SSEM in MSI Manchester, but internally
it's a parallel implementation, whereas the real Baby is a serial
machine.
https://hackaday.com/2016/01/06/babybaby-a-1948-computer-on-an-fpga/
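A sketch of the usual trick in VHDL: divide the fast FPGA clock down to
a clock-enable at the legacy machine's cycle rate, and only let the
machine state advance on that enable. The 50-to-1 figure is illustrative,
not BabyBaby's actual ratio:

library ieee;
use ieee.std_logic_1164.all;

entity cycle_enable is
  -- e.g. a 50 MHz FPGA clock divided to a 1 MHz legacy machine cycle;
  -- the figures are invented for illustration
  generic (DIVIDE : positive := 50);
  port (
    fpga_clk : in  std_logic;
    cycle_en : out std_logic);  -- one-clock pulse per legacy machine cycle
end entity;

architecture rtl of cycle_enable is
  signal count : natural range 0 to DIVIDE - 1 := 0;
begin
  process (fpga_clk)
  begin
    if rising_edge(fpga_clk) then
      if count = DIVIDE - 1 then
        count    <= 0;
        cycle_en <= '1';  -- the rest of the design advances on this pulse
      else
        count    <= count + 1;
        cycle_en <= '0';
      end if;
    end if;
  end process;
end architecture;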
5. Gate Level Hardware Emulation
In gate level hardware emulation we try to re-implement the hardware
down to the logic gate level. This is hard because FPGAs may not be
designed to work this way, and a gate level design will also have some
dependencies on propagation delays, which on an FPGA will be much
smaller than on any real hardware. A couple of examples of these are:
Laurence Wilkinson's IBM 360/30
http://www.ljw.me.uk/ibm360/
Carl Claunch's IBM 1130
http://ibm1130.blogspot.co.uk/
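To see why this is awkward, here is a sketch (invented for illustration)
of a set/reset latch written exactly as an original schematic would draw
it, as two cross-coupled NAND gates. The FPGA tools will usually accept
it, but it is a combinational feedback loop whose behaviour hangs on
routing delays the original designers never had in mind:

library ieee;
use ieee.std_logic_1164.all;

entity sr_latch_gates is
  port (
    set_n   : in  std_logic;      -- active-low set
    reset_n : in  std_logic;      -- active-low reset
    q       : buffer std_logic;
    q_n     : buffer std_logic);
end entity;

architecture gate_level of sr_latch_gates is
begin
  q   <= set_n   nand q_n;  -- each gate's output feeds the other:
  q_n <= reset_n nand q;    -- a combinational loop in the FPGA fabric
end architecture;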
I hope this doesn't muddy the water too much...
Dave
Well, the waters are sufficiently muddy that I figured little harm would
be done if I threw my weeds in too... ;)
I like that you have clearly given this some thought, and have developed
a kind of taxonomy, so your comments are valuable because they are not
just off-the-cuff.
Looking online at Merriam-Webster, the conclusion one might reach is
that these are all simulators. But emulation is also a term of art when
it comes to computers, so I don't think we should shackle ourselves to M-W.
I have generally used the term emulator for software that attempts to
provide some level of functionality (up to 100%) of the original machine
such that software (including operating systems) will operate, without
worrying about HOW that is done. So, I would throw most, if not all, of
the SimH programs, and Hercules as well into that pot. I would also put
the IBM 1401 and 1410 emulators that appeared on early model IBM
System/360 machines (which were done using software with microcode
assist) into that same bag, as well as the FLEX implementation of the
IBM mainframe architectures. So, I am on the same page with you with
regards to #1.
I have generally used the term simulator for software that attempts to
replicate the actual operation of the original machine, regardless of
speed - I view speed as just one of several possible measures of the
accuracy/level of the simulation. I have written an IBM 1410 simulator
that simulates the operation of the original machine at the machine
cycle level, based on the IBM CE instructional materials - but it pays
no attention at all to the absolute cycle time, only to the relative
cycle time (so that peripherals, such as tape drives, run at about the
same speed relative to the CPU as in the original machine, in order that
diagnostics can be run). [It is convenient that it runs faster than the
original. ;) ].
When it comes to hardware, such as an FPGA or other hardware that
reproduces machine behavior, I think the judgement is different. I would
agree with your definition #3, to call these emulations, for those
implementations which pay little or no attention to the internal design
of the original machine. That said, however, I tend to use the word
"implementation" or
"model" here. Consider, for example, that the IBM 360 and 370
architecture, and the PDP-11 and VAX architectures were implemented by
their original manufacturers (or their competitors, e.g. Amdahl), using
very different hardware approaches - some hard wired, some micro-coded,
some mixed, for different models of the architecture.
But I can see using the term "simulation" in some cases, such as your
"BabyBaby".
But I would call IBM's P390 an "implementation" of S/390 (and S/370,
depending on which firmware you load).
With respect to your #5, I have some direct experience with that, and am
working on a tricky project to implement the IBM 1410 in an FPGA at the
gate level, based on the SMS Automated Logic Diagrams (ALD's). What I
have found so far is that a rule or two can be used to deal with the
speed and design technology differences. I don't think that the issues
pointed out make it "hard", really. The hard part, to me, is
deciphering the original design from drawings or other incomplete
engineering information. ;) The rules I have developed so far:
a. If the original implementation uses cross-connected gates (or
transistors), the FPGA model can replace those with a synchronous D
flip flop (see the first sketch after these rules). This usually
works because the FPGA clock is often 10 or more times faster than
the original machine clock. I have successfully used this technique
to implement an original design that was not all that great (see
"b." below for details) and that actually had some race conditions.
The information on this project can be found at:
https://drive.google.com/open?id=0B2v4WRwISEQRcFpNM0o2VDJiWFk
b. I did not come across delays in the one project I completed
this way (a re-implementation of a design done for a class in
college in 1973), but my next project will, and my plan is to use a
counter (or, I suppose, a small number of cascaded D flip flops
acting as a bucket brigade, as in the second sketch below) in cases
where that delay is needed for the implementation to work properly.
(In cases where the delay exists only to match propagation times
along different wire/cable lengths in the original implementation,
one might be able to turn the delay into a wire.)
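
A sketch of what rule "a." might look like in VHDL (invented for
illustration; not the actual 1410 code): the cross-coupled NAND latch
from the drawings re-expressed as a synchronous set/reset flip flop
sampled on the fast FPGA clock:

-- Because the FPGA clock runs 10x or more faster than the original
-- machine clock, the latch state settles within one legacy gate delay,
-- so the surrounding logic cannot tell the difference.
library ieee;
use ieee.std_logic_1164.all;

entity sr_latch_sync is
  port (
    fpga_clk : in  std_logic;
    set_n    : in  std_logic;  -- active-low, as on the original ALD
    reset_n  : in  std_logic;  -- active-low
    q        : out std_logic);
end entity;

architecture rtl of sr_latch_sync is
  signal q_r : std_logic := '0';
begin
  process (fpga_clk)
  begin
    if rising_edge(fpga_clk) then
      if set_n = '0' then   -- set wins here; which input dominates is a
        q_r <= '1';         -- design decision taken from the original logic
      elsif reset_n = '0' then
        q_r <= '0';
      end if;
    end if;
  end process;
  q <= q_r;
end architecture;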
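
And a sketch of rule "b." (again invented for illustration): a delay
element built as a short chain of D flip flops acting as a bucket
brigade, with the number of stages chosen to suit the delay being
replaced:

library ieee;
use ieee.std_logic_1164.all;

entity signal_delay is
  generic (STAGES : positive := 4);  -- illustrative figure
  port (
    fpga_clk : in  std_logic;
    d        : in  std_logic;
    q        : out std_logic);       -- d, delayed by STAGES clocks
end entity;

architecture rtl of signal_delay is
  signal chain : std_logic_vector(STAGES - 1 downto 0) := (others => '0');
begin
  process (fpga_clk)
  begin
    if rising_edge(fpga_clk) then
      -- shift: each flip flop hands its bucket to the next
      chain <= chain(STAGES - 2 downto 0) & d;
    end if;
  end process;
  q <= chain(STAGES - 1);
end architecture;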
JRJ