On Aug 17, 2024, at 8:32 AM, Peter Corlett via cctalk
<cctalk@classiccmp.org> wrote:
...
The problem is that the native register width keeps changing with every
CPU. C was a quick-and-dirty language for the PDP-11, with 16-bit ints.
They never planned for UNIX, C, or the hardware to change the way they
did, so one gets a patched version of C. That reminds me: I use gets()
and have to get an older version of C.
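For what it's worth, later C standards addressed both gripes: C99's
<stdint.h> provides fixed-width types so code no longer leans on the
native register width, and C11 removed gets() in favour of bounded
reads such as fgets(). A minimal sketch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Plain int is whatever the implementation picks: 16 bits on
           the PDP-11, 32 on most machines today.  int32_t is pinned. */
        printf("int here: %zu bits; int32_t: always 32\n",
               sizeof(int) * 8);

        /* gets() is gone as of C11; fgets() takes an explicit bound. */
        char line[64];
        if (fgets(line, sizeof line, stdin))
            fputs(line, stdout);
        return 0;
    }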
They'd have had to be fairly blinkered not to notice the S/360 series,
which had been around for years before the PDP-11 came out. It doesn't
take a particularly large crystal ball to realise that computers got
smaller and cheaper over time, and that features from larger machines,
such as wider registers, would filter down into minicomputers and
microcomputers.
Not to mention that K&R had experience with the PDP-7, which is an
18-bit word-oriented machine. And a whole lot of other machines of that
era had word lengths different from 16; apart from the S/360 and the
Nova, most weren't powers of two.
But C also seems to ignore a lot of the stuff we already knew in the
1960s about how to design languages to keep programmers from making
various common mistakes, so those were quite large blinkers. They've
never been taken off either: when Rob and Ken went to work for Google,
they came up with a "new" C-like language which makes many of the same
mistakes, plus some new ones; it is also more bloated and can't even be
used to write bare-metal stuff, which is one of the few things one
might reasonably need C for in the first place.
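To make "common mistakes" concrete, here is a tiny example of the sort
of thing a bounds-checked 1960s design would have trapped but which C
happily compiles and runs; the off-by-one is deliberate:

    #include <stdio.h>

    int main(void)
    {
        int a[4] = { 1, 2, 3, 4 };

        /* i == 4 reads past the end of a[].  C performs no bounds
           check, and a typical compiler emits no warning by default;
           ALGOL-family systems trapped this at run time. */
        for (int i = 0; i <= 4; i++)
            printf("%d\n", a[i]);
        return 0;
    }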
C, especially in its early incarnations, could be called a
semi-assembly language. For example, you can tell that struct
declarations originally amounted simply to symbolic offsets: you could
use a field name declared for struct a in operations on objects of
struct b (see the sketch below). And yes, ALGOL showed the way with a
far cleaner design, and ALGOL extensions existed to do all sorts of
hard work with it. Consider the Burroughs 5500 series and their
software, all written in ALGOL or slightly tweaked extensions of same,
including the OS.
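The "symbolic offsets" point can still be seen in modern C through
offsetof; this sketch (hypothetical types, and assuming both structs
lay out their two ints identically, as essentially every ABI does)
applies a field offset from struct a to an object of struct b, much as
pre-ANSI C let you do directly by name:

    #include <stdio.h>
    #include <stddef.h>

    struct a { int x; int y; };   /* "y" names the second-int offset  */
    struct b { int p; int q; };   /* "q" happens to share that offset */

    int main(void)
    {
        struct b bee = { 1, 2 };

        /* Early C treated a field name as little more than an offset,
           so "bee.y" would simply have fetched bee.q.  Modern C makes
           you spell the arithmetic out: */
        int *q = (int *)((char *)&bee + offsetof(struct a, y));
        printf("%d\n", *q);       /* prints 2 */
        return 0;
    }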
...
Complex subroutine nesting can be done just fine on a CPU "optimised for"
running C. For example, you can synthesise an anonymous structure to hold
pointers to or copies of the outer variables used in the inner function, and
have the inner function take that as its first parameter. This is perfectly
doable in C itself, but nobody would bother because it's a lot of
error-prone boilerplate. But if the compiler does it automatically, it
suddenly opens up a lot more design options which result in cleaner code.
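To spell the transformation out in C (the names here are invented for
illustration): the synthesised structure is just an environment record,
and the inner function takes it explicitly:

    #include <stdio.h>

    /* What a compiler with nested functions would synthesise: an
       environment record holding the captured outer variables... */
    struct env {
        int *total;   /* pointer: the inner function mutates this  */
        int scale;    /* copy: read-only inside the inner function */
    };

    /* ...and the "inner function", taking the environment as its
       first parameter. */
    static void accumulate(struct env *e, int value)
    {
        *e->total += value * e->scale;
    }

    int main(void)
    {
        int total = 0;
        struct env e = { &total, 10 };   /* capture the outer scope */

        accumulate(&e, 1);
        accumulate(&e, 2);
        printf("%d\n", total);           /* prints 30 */
        return 0;
    }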
Absolutely. The first ALGOL compiler was written for the Electrologica
X1, a machine with two accumulators plus one index register, a
one-address instruction set, and no stack or complex addressing modes.
It worked just fine; it simply meant that you had to do some things in
software that other machines might implement in hardware (or, more
likely, in microcode). Or consider the CDC 6000 mainframes: RISC
machines with no stack and no addressing modes, yet with not just an
ALGOL 60 but even an ALGOL 68 compiler.
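To give the flavour of doing it in software: with no hardware stack,
the compiler's generated code simply manages its own. A toy model in C
(the real thing was X1 machine code, of course), replacing recursion
with an explicit, software-managed stack:

    #include <stdio.h>

    /* A software-managed stack: what would be recursive calls become
       explicit pushes and pops of the saved state. */
    static int stack[64];
    static int sp = 0;

    static int factorial(int n)
    {
        int result = 1;
        while (n > 1)               /* the "call" phase saves state   */
            stack[sp++] = n--;
        while (sp > 0)              /* the "return" phase restores it */
            result *= stack[--sp];
        return result;
    }

    int main(void)
    {
        printf("%d\n", factorial(5));   /* prints 120 */
        return 0;
    }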
On the other hand, there was the X1's successor, the X8, which added a
stack for both data and subroutine calls, as well as "display"
addressing modes to deal directly with variable references in blocks
nested up to 63 deep. Yes, that makes the code generated by the ALGOL
compiler shorter, but it doesn't necessarily make things any faster,
and I don't know that such features were ever seen again.
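For anyone who hasn't met a display: it is essentially an array of
frame pointers indexed by static nesting level, so a reference to a
variable in an enclosing block becomes a single indexed load. A rough
model in C, with invented names:

    #include <stdio.h>

    /* One frame pointer per static nesting level; the X8 allowed up
       to 63.  An ALGOL variable reference compiles down to
       display[level] + offset. */
    #define MAX_LEVEL 63

    static int *display[MAX_LEVEL + 1];

    static int outer_frame[4] = { 10, 20, 30, 40 };  /* outermost block */
    static int inner_frame[2] = { 5, 6 };            /* nested block    */

    static int load(int level, int offset)
    {
        return display[level][offset];   /* one indexed fetch */
    }

    int main(void)
    {
        display[0] = outer_frame;
        display[1] = inner_frame;

        /* From inside the nested block, reach an outer variable: */
        printf("%d\n", load(0, 2));      /* prints 30 */
        return 0;
    }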
paul