On Feb 15, 2025, at 2:53 PM, ben via cctalk
<cctalk(a)classiccmp.org> wrote:
On 2025-02-15 11:27 a.m., Frank Leonhardt via cctalk wrote:
Running anything like Algol on a machine with
drum memory seems a bit optimistic!
Remove "Like Algol" and the statement is even more valid.
Oh? Certainly by today's standards a drum memory machine is quite slow. But in their
day they were used for serious work.
To pick one example, the ARMAC machine developed at CWI in Amsterdam (then called MC,
the Mathematical Center) had a drum main memory coupled with a one-entry cache holding the
most recently accessed drum track. That was the main machine at MC for a couple of years,
until the Electrologica X-1 (with instruction times in the 20-30 microsecond range)
replaced it.
Before the ARMAC came the ARRA 2, which had drum memory without the cache. It worked well
enough that Dutch aircraft manufacturer Fokker commissioned a copy, named FERTA. It was
used to do the design of at least one of their major commercially successful airliners.
I guess that was why the PDP-1 was successful: it had
early core memory.
Yes, it did, though commercial core memory machines appeared a number of years before
that. For example, the EL-X1, a fully transistorized core-memory computer, came out in
1958. It appears to be the first commercial computer with interrupts as a standard
feature, which prompted Dijkstra to write his Ph.D. thesis on the problem of how to create
reliable code with interrupts. He wrote the ROM BIOS for that machine.
I keep forgetting about drum memory on most early
machines.
Around what time did core memory drop in price enough that one had ample main memory to
compile with? I am guessing the late '60s.
1958. The X1 is where Dijkstra and Zonneveld created the first ALGOL compiler, in 1960.
Unlike a number of early compilers, that one was a full implementation, except for the one
or two small "features" that by then had already been recognized as mistakes.
That compiler ran in 4 kW of 27-bit words, requiring several passes. A later version that
took advantage of a memory upgrade -- to 16 kW -- was a full compile-load-execute system.
So "ample memory" is a debatable question. It depends a lot on who is
doing the work. For Dijkstra and Zonneveld, 4 kW was usable, and I suspect they would
have considered 8 kW "ample". For the creators of the DEC PDP-11 operating
systems DOS-11 and RT-11, 8 kW (16-bit words, so 16 kB) was ample; IBM engineers required
128 kW for something not as good (OS/360 PCP v19.6). Today's programmers often think in
terms of megabytes if not gigabytes as the minimum tolerable, while many of those who hang
out on this list enjoy the challenge of squeezing stuff into a small microcontroller, or
the boot block of some disk device.
For an extreme example, consider the "programmable I/O" (PIO) state machines in the
Raspberry Pi Pico microcontroller. It has two PIO blocks (three on the RP2350-based
Pico 2), each with 4 state machines that share the block's program memory -- 32 words of
16 bits. With care, you can do a lot with those. I'm thinking of doing a "bit
banging" implementation of Ethernet with that device...
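One reason bit-banging Ethernet with a PIO state machine is even thinkable is that
10 Mb/s Ethernet uses Manchester encoding: each data bit becomes a mid-bit transition,
which is exactly the kind of per-bit pattern a tiny state machine can emit. Here is a
sketch of the encoding in plain Python (illustrative only -- not Pico SDK or MicroPython
code, and the function name is mine):

```python
def manchester_encode(bits):
    """Expand each data bit into two half-bit line levels.

    Per the IEEE 802.3 convention, a 1 is a low-to-high transition
    at mid-bit and a 0 is a high-to-low transition. A PIO state
    machine bit-banging 10 Mb/s Ethernet would shift out these
    half-bit levels at 20 MHz.
    """
    levels = []
    for b in bits:
        # 1 -> low then high; 0 -> high then low
        levels.extend((0, 1) if b else (1, 0))
    return levels

# The Ethernet preamble is alternating 1s and 0s, which Manchester
# encoding turns into a steady square wave the receiver locks onto.
print(manchester_encode([1, 0, 1, 1]))  # -> [0, 1, 1, 0, 0, 1, 0, 1]
```

The real trick on the PIO is fitting the shifting and timing into its 32-instruction
program memory; the encoding itself, as the sketch shows, is trivial.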
The PDP-8/e, being simple, could have a fast cycle time:
1.2 µs.
Keep in mind that cycle time in those days wasn't necessarily constrained by the logic
but by the memory. Early core memories had cycle times of many microseconds (the X1, for
example, had 8-microsecond core, I think). CDC delivered 1-microsecond core memory cycle
time in 1964, but it took some rather mind-bending electronic circuitry to make that
possible.
...
I wonder how much progress computers would have made had ASCII
and Algol not kept changing standards every few years?
Algol-60 never changed after the Revised Report of 1962, which was really the
"V1.0" release. Algol-68 is an entirely different language, just as Pascal is
an entirely different language.
As for ASCII, that is exactly what it was originally. There have been lots of other
character set definitions since then, though the mind-bending variety of
language-specific sets was finally fixed once and for all with Unicode. Yes, Unicode
keeps growing to add more obscure character sets, but it's still the same standard.
Somehow 6-bit characters seem more standard, text
wise,
Which one? Electrologica had several, CDC had a bunch, as did every other vendor.
Is there anything an ALGOL compiler needs for good
code generation
other than ample index and GP registers?
Some things make life simpler, such as stack operations and subroutine call instructions
that support recursion. But none of those are required; the EL-X1 had none of them. Its
successor, the X8, did, because it was designed with knowledge of Algol in mind (the X1
predates Algol). As a result, the transformation from Algol source code to machine
code is in a number of places more straightforward. But you can compile anything you want
to any machine you want; if it's good enough to be considered a general-purpose
computer, it can be handled.
Another example is the CDC 6000 series mainframes, for which a bunch of compilers were
created including Algol 60 as well as Algol 68. That machine has no index registers, no
stack, no recursion; it only has 8 primary registers and its subroutine call overwrites
memory. Not very friendly, but it just means a bit more work for the compiler writer.
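To make "a bit more work for the compiler writer" concrete, here is a minimal sketch
(plain Python standing in for generated code; the names and layout are mine, not any
particular compiler's) of the standard trick: on a machine with no hardware stack and no
recursive call instruction, the compiler simply keeps an explicit stack of activation
records in ordinary memory.

```python
def run_factorial(n):
    """Compute n! the way generated code would on a stackless machine:
    no native recursion, just an explicit software stack of saved
    locals plus a simulated control state."""
    stack = []            # software stack of activation records
    pc = ("call", n)      # simulated control state: (operation, value)
    while True:
        op, val = pc
        if op == "call":
            if val <= 1:
                pc = ("return", 1)        # base case: "return" 1
            else:
                stack.append(val)         # save caller's local n
                pc = ("call", val - 1)    # recurse on n - 1
        else:  # "return"
            if not stack:
                return val                # outermost frame: done
            saved_n = stack.pop()         # restore caller's frame
            pc = ("return", saved_n * val)

print(run_factorial(5))  # -> 120
```

On a machine like the CDC 6000, where the call instruction plants the return jump in
memory at the subroutine's entry, the generated code would likewise copy that return
information into such a software stack before any recursive call -- more instructions,
but nothing the compiler can't arrange.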
paul