On Mon, Jan 13, 2025 at 01:57:44PM -0700, ben via cctalk wrote:
[...]
> Funny when the 8 and 16 bit micros hit the market, Algol seemed to vanish
> off the face of the earth. Was 64KB too small a address space?
It's likely to be the same reasons why C never really became a thing on
typical microcomputers of the era, although the specifics vary by
architecture:
The popular 8-bit CPUs such as the 6502 and 8080/Z80 lack useful
base-plus-index addressing modes. These are useful for accessing function
parameters and local variables on the stack, and for extracting fields from
a structure. The 6502 doesn't have these modes *at all* and requires
explicit futzing around to compute the effective address, whereas the Z80
does have IX/IY-relative operations, but they are very slow to use and
only support fixed 8-bit offsets, so if the index is variable, manual
address calculation like on the 6502 is required. Real-world code for these CPUs
tended to use hardwired (i.e. static/global) addresses, and/or used a
virtual machine (with a BASIC interpreter being an extreme case) which was
more capable than the base CPU.
The 6800/6809 and the 65802/65816 are somewhat better on this front, but
they were either too expensive or too late to be relevant. You might as well
just get a 16-bit system instead.
As for the 16-bit CPUs, the 8086/80286 used segmentation and so still
banged up against a 64 KiB address space, because once again there was a
lot of futzing around with segment registers to access more memory than
that. They were barely any better than an 8-bit system with bank
switching. Realistically, they are basically 8-bit CPUs with 16-bit busses.
The other popular 16-bit CPU was the 68000, which, with its PDP-11
heritage, couldn't fail to be a good C target. A fair chunk of AmigaOS was
written in C, for example. (One could argue that it's actually a 32-bit
CPU with a 16-bit bus.) Even so, C was not terribly popular on these
machines, largely because C compilers cost as much as the machine itself and
generated *much* worse code than handwritten assembler. So the only
customers were businesses for whom time-to-market was more important than
the cost of tools or the performance of the end product.
> I never liked the idea of dynamic arrays, who knows when the heap? will
> overflow. With static data it fits, or not at all.
The converse argument is that the developer has to guesstimate the
appropriate size for a static array, which may turn out to be the wrong size
at run time. Making it larger than required wastes space, and smaller than
required results in buffer overruns and data corruption.