On Jan 1, 2019, at 10:17 PM, Jim Manley via cctalk
<cctalk at classiccmp.org> wrote:
> RISC was never just about compiler and hardware simplification for improved
> performance of the most frequently-executed instructions. It's also been
> front-and-center in low-power (e.g., mobile) and embedded (now including
> Internet of Things) applications, ...
I think you may be mixing up where we ended up vs. where we started. Mobile applications
were science fiction when RISC started. Especially if you take the view that the first
RISC architecture machines were designed before the term was invented -- for example, it
would be easy to make the argument that the CDC 6600 is a RISC machine.
As for compiler simplification, I'm not so sure. A CISC machine like VAX makes
compilers easy because the instruction set is so regular and so nicely matches the
higher-level operations. RISC instructions, less so. Then again, most machines predating
the PDP-11 are more like RISC in their instruction set limits, and compilers coped with that.
The biggest difference I can see is that by the time RISC became a buzzword, optimizers
were good enough that hand-coding assembly language became uncommon. And for many RISC
architectures -- consider Alpha for example, never mind Itanium -- that is crucial because
hand-coding is just painfully hard.
...
> A Blue Screen of Death is truly fatal for a product that depends on an
> embedded device, like an ATM in the middle of dispensing over half a grand
> in cash, a DVR in a satellite TV receiver that requires upwards of ten
> minutes to restart and get back to where the viewer was (minus the
> permanently lost live recorded cache), or a self-driving vehicle at any
> speed above zero. ....
Certainly, but in almost all cases this is a question of software quality and the
designers' attitude to reliability and careful design. People have built reliable
systems on CISC machines (VMS for example) and on machines that predate the term (AGC).
They've also built unreliable systems on any of these architectures.
> The x86/x64 instruction set complexity hasn't been helpful in reducing the
> security vulnerability of software running on those architectures, either.
> The multiple parallel pipelines that make possible speculative execution of
> a number of branches before associated decisions are computed have
> resulted in a whole new class of security vulnerabilities such as
> Meltdown, Foreshadow, and Spectre. This isn't limited to x86/x64, however,
> as the most recent multicore ARM processors have also fallen victim to such
> issues; they've just been late to the game as the most advanced (and
> complex) features have been pursued (somewhat for me-too marketing
> purposes), so fewer families/generations have been affected.
Are you arguing that speculative execution is a marketing toy? I thought it was a
feature that delivers real performance gains, and it's widely implemented on high-end
machines of both flavors for that reason. I can believe it's more significant on x86
because of its more complex pipelines, but the RISC pipeline at this point is also so much
faster than memory that speculation is interesting there too. And it's done there, too.
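To make the performance point concrete, here's a minimal C sketch (the arrays and names
are mine, purely for illustration) of the pattern speculation is built to handle: a branch
whose condition may take hundreds of cycles to resolve if array1_size has to come from
memory, followed by work that depends on it:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t  array1_size = 16;
    uint8_t array2[256 * 512];

    /* If the load of array1_size misses in cache, an in-order CPU
     * stalls here for the full memory latency.  A speculating CPU
     * predicts the branch and runs the loads below immediately,
     * discarding the results if the prediction turns out wrong. */
    uint8_t lookup(size_t x)
    {
        if (x < array1_size)
            return array2[array1[x] * 512];
        return 0;
    }

Not coincidentally, this is also the shape of a Spectre variant 1 gadget: the
speculatively executed load leaves a footprint in the cache even after it's squashed.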
The issue doesn't appear on many other RISC architectures because those don't aim
at the top performance levels but rather at other market niches.
So Spectre is universal. Meltdown is not; that one comes from an Intel decision to delay
page access checking, a decision other CPU designers didn't make. There is no good reason
to have a Meltdown vulnerability in any CPU architecture. But Spectre is fundamental to
speculative execution. You can avoid it either with software workarounds in the kernel,
which are fairly cheap, or by adding hardware mechanisms that close off the timing
channels. The latter tends to be hard, though there may be other reasons why it's worth trying.
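For the curious, the cheap kernel-style workaround applied to the sketch above looks
something like this: clamp the index with branch-free arithmetic so that even a
mispredicted branch can't read out of bounds. It's the same idea as the generic
array_index_mask_nospec() in Linux; the helper name here is mine:

    #include <stddef.h>
    #include <stdint.h>

    extern uint8_t array1[];
    extern size_t  array1_size;
    extern uint8_t array2[];

    /* All ones if idx < size, all zeros otherwise, computed without
     * a branch.  Assumes idx and size fit in 63 bits and relies on
     * arithmetic right shift of signed values, as the Linux kernel's
     * generic version does (true on mainstream compilers). */
    static size_t index_mask(size_t idx, size_t size)
    {
        return (size_t)(~(int64_t)(idx | (size - 1 - idx)) >> 63);
    }

    uint8_t lookup_masked(size_t x)
    {
        if (x < array1_size) {
            x &= index_mask(x, array1_size);  /* 0 on the mispredicted path */
            return array2[array1[x] * 512];
        }
        return 0;
    }

The masked index collapses to 0 whenever x is out of range, so a speculative load can no
longer be steered at attacker-chosen memory.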
paul