Wikipedia policy excludes what they call "original research". Unless your
article was published by some major mainstream outlet, you're toast, even if you are
literally the last person on Earth who knows the stuff.
________________________________
From: Paul Koning via cctalk <cctalk(a)classiccmp.org>
Sent: Sunday, February 16, 2025 10:08 AM
To: cctalk(a)classiccmp.org <cctalk(a)classiccmp.org>
Cc: Paul Koning <paulkoning(a)comcast.net>
Subject: [cctalk] Re: Elliott Algol
On Feb 16, 2025, at 11:56 AM, ben via cctalk
<cctalk(a)classiccmp.org> wrote:
> On 2025-02-16 7:32 a.m., Paul Koning via cctalk wrote:
>> A lot of early "ALGOL" compilers did major subsetting because it was
>> considered too hard to implement the real language. Those subsets may
>> not actually bear any real resemblance to the actual language. For
>> example, a "subset" that omits recursion is not ALGOL but rather a
>> mongrel joke.
> I disagree here; recursion is just one method of problem solving.
That's true but not my point. Yes, you can solve a problem in Fortran II
without recursion, even if the most natural solution is a recursive one.
My point is that support for recursion, and nested blocks, and nested scopes, is the
essence of ALGOL and what makes it different from FORTRAN II. So a language that omits
one of those elements cannot legitimately call itself a variant or subset of ALGOL, any
more than a language without pointers can legitimately call itself a subset of C.
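
To make that concrete, here is a minimal sketch in C (an invented example,
and C rather than ALGOL, but it shows the two features in question): each
recursive call gets its own stack-allocated locals, and nested blocks can
declare variables of their own.

    #include <stdio.h>

    /* Recursion with per-call stack locals: each activation of
       depth() gets its own copy of "half".  FORTRAN II, with its
       static storage, could not express this. */
    int depth(unsigned n)
    {
        if (n == 0)
            return 0;
        {   /* a nested block with a local of its own */
            unsigned half = n / 2;
            return 1 + depth(half);
        }
    }

    int main(void)
    {
        printf("%d\n", depth(100));   /* prints 7 */
        return 0;
    }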
> While I think function nesting is too complex for most uses, the use of
> stack-based local variables in blocks was an important step forward.
Function nesting is an important mechanism in some scenarios, but admittedly much work
doesn't need it. It's useful enough that GNU C added it as a compiler
extension.
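
For the curious, a minimal sketch of that GNU C extension (an invented
example; it builds with gcc but not with most other C compilers):

    #include <stdio.h>

    int sum_squares(int n)
    {
        int total = 0;
        /* GNU C extension: a function defined inside another
           function, with access to the enclosing scope ("total"). */
        void add(int x) { total += x * x; }
        for (int i = 1; i <= n; i++)
            add(i);
        return total;
    }

    int main(void)
    {
        printf("%d\n", sum_squares(3));   /* prints 14 */
        return 0;
    }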
> Playing around with META-II, a compiler compiler, I discovered it had no
> way of handling local variables and symbol tables, as it just moved text
> around. I could parse fine, but not generate code.
Parsing -- splitting text into tokens (lexing) and building parse trees -- is part of the
compiler's job but usually the easiest part. Not quite as easy if you want good error
messages or error recovery.
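
As a toy illustration of the lexing half (invented token names, no error
handling, which is exactly the part where the real work starts):

    #include <ctype.h>
    #include <stdio.h>

    /* A toy lexer: split input into numbers, identifiers, and
       single-character operator tokens. */
    void lex(const char *s)
    {
        while (*s) {
            if (isspace((unsigned char)*s)) {
                s++;
            } else if (isdigit((unsigned char)*s)) {
                printf("NUMBER ");
                while (isdigit((unsigned char)*s)) putchar(*s++);
                putchar('\n');
            } else if (isalpha((unsigned char)*s)) {
                printf("IDENT  ");
                while (isalnum((unsigned char)*s)) putchar(*s++);
                putchar('\n');
            } else {
                printf("OP     %c\n", *s++);
            }
        }
    }

    int main(void)
    {
        lex("x1 := y + 42");
        return 0;
    }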
Code generation is an independent problem, and something that parses programs without
generating code isn't a compiler, it's at best just a front end.
You might want to look at the GCC internals manual. GCC has an explicit
layering, with front-end processing steps that construct parse trees which
are then transformed in stages, until they reach the "target" stage, which
converts the final internal representation into actual machine code.
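
As a cartoon of that layering (not GCC's actual IR, which goes GENERIC to
GIMPLE to RTL; this just shows the tree-then-lower shape in miniature):

    #include <stdio.h>

    /* A parse tree for "a + b * c", lowered in one pass to a list
       of three-address instructions on made-up temporaries. */
    typedef struct Node {
        char op;              /* '+', '*', or 0 for a leaf */
        char name;            /* leaf only: variable name */
        struct Node *l, *r;
    } Node;

    static int temp = 0;

    int gen(const Node *n)    /* returns the temp holding n's value */
    {
        if (n->op == 0) {
            printf("  t%d = %c\n", ++temp, n->name);
            return temp;
        }
        int a = gen(n->l), b = gen(n->r);
        printf("  t%d = t%d %c t%d\n", ++temp, a, n->op, b);
        return temp;
    }

    int main(void)
    {
        Node a = {0, 'a'}, b = {0, 'b'}, c = {0, 'c'};
        Node mul = {'*', 0, &b, &c};
        Node add = {'+', 0, &a, &mul};
        gen(&add);            /* emits t1=a t2=b t3=c t4=t2*t3 t5=t1+t4 */
        return 0;
    }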
> Did many people believe back then that one could just shuffle text around
> to implement new programming languages like Algol, the way it might work
> for something like Fortran with some sort of macro processing language?
I doubt it. Compiler compilers (or more accurately, parser generators, like
the famous YACC) are a later development. By the time compiler-writing
textbooks like the "dragon book" (Aho and Ullman) came out, the relevant
theory was well understood. But writing early compilers required inventing
elements of
that theory along the way. Again, the Dijkstra/Zonneveld Algol compiler is an example,
and Gauthier van den Hove spells it out in detail. Dijkstra invented a
stack-based parser, somewhat like a recursive descent parser, which he
called the "shunting yard" algorithm after railroad yards, as a mechanism
for parsing Algol. He also invented "displays", which are a way to find the
stack frame for the current static nesting level of recursive function
calls.
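
For anyone who hasn't seen it, a minimal sketch of the shunting yard idea
(single-digit operands, four operators, parentheses; a simplification, not
Dijkstra's original):

    #include <stdio.h>

    static int prec(char op)           /* operator precedence */
    {
        return (op == '*' || op == '/') ? 2
             : (op == '+' || op == '-') ? 1 : 0;
    }

    /* Shunting yard: operands go straight to the output; operators
       wait on a stack (the "siding") until an incoming operator of
       no higher precedence, or a closing parenthesis, flushes them. */
    void to_postfix(const char *s)
    {
        char stack[64];
        int top = 0;

        for (; *s; s++) {
            if (*s >= '0' && *s <= '9') {
                putchar(*s);
            } else if (*s == '(') {
                stack[top++] = *s;
            } else if (*s == ')') {
                while (top > 0 && stack[top - 1] != '(')
                    putchar(stack[--top]);
                top--;                 /* discard the '(' */
            } else if (prec(*s)) {
                while (top > 0 && prec(stack[top - 1]) >= prec(*s))
                    putchar(stack[--top]);
                stack[top++] = *s;
            }
        }
        while (top > 0)
            putchar(stack[--top]);
        putchar('\n');
    }

    int main(void)
    {
        to_postfix("3+4*(2-1)");       /* prints 3421-*+ */
        return 0;
    }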
paul