On Jan 1, 2019, at 2:00 PM, ben via cctalk <cctalk at classiccmp.org> wrote:
On 1/1/2019 8:58 AM, Carlo Pisani via cctalk wrote:
hi
on DTB we are designing a RISC-ish CPU, code name "Arise-v2"(1).
We are using the MIPS R2K and RISC-V as references.
In the end, it will be implemented in HDL -> FPGA.
The page on DTB is related to a software emulator (written in C) for
the whole system: CPU + RAM + ROM + UART, etc., so we can test our
ISA more comfortably.
As a second reference, I'd like to consider the first Motorola RISC:
the 88K, which has a very elegant and neat ISA; unfortunately, I have
difficulty finding user manuals and books about it.
If someone wants to sell me a copy, it will be appreciated!
Thanks and happy new year!
I was never a fan of RISC architecture, as it does not fit the standard high-level
language model. Everybody wants a 1-pass compiler, thus the RISC model. If you are
doing your own RISC design, you might consider a model that supports effective
addressing better, since we have got to the point where fetching the data takes
longer than processing it.
Huh? I don't understand the statement that a 1-pass compiler requires RISC. I was
doing 1-pass compilers in the mid-to-late '70s (well before RISC), so I'm not sure
what you're talking about. It also depends upon what you mean by "1 pass". Most
compilers nowadays make only one pass over the source but will make multiple passes
over the intermediate form before finally generating code (even then they may make
another pass over the resulting generated code for peephole optimizations).
RISC is actually nice for a compiler because it's simple and fairly regular (it's
hard to generate code automatically for complex instructions), and RISC has a large
number of registers. However, modern CPUs are all out-of-order execution with
register renaming and ridiculous numbers of registers (I think current Intel Core x
CPUs have 192+ registers for register renaming, where the visible number of
registers is 8). This also allows for speculative execution (following multiple
paths through the code until the data required for the various decision points is
finally available).
The other thought is that the pipeline clock seems too fast: what is the use of a
fast clock if you have only one or two gates of logic between clock edges?
Gate and line-driving speed ratios remind me of the vacuum-tube era of computing.
Deep pipelines are needed to get clock speeds up so that timing can be met. The
problem with deep pipelines is that when any sort of exception (interrupts, etc.)
happens, there's a lot of state that gets flushed and then restarted when the
exception handling completes.
Pipelines (especially if they're not a fixed depth for all operations) mean that
simple operations (those that require a minimum number of pipeline stages) can be
completed quickly, whereas complex operations that require either a lot of logic or
time to complete can be broken up into multiple stages. This allows a higher clock
rate and allows the simple operations to complete more quickly than if there were a
very shallow pipeline.
TTFN - Guy