On Sat, Feb 28, 2015 at 09:19:54AM -0800, Robert Ferguson wrote:
On Feb 28, 2015, at 3:35 AM, Peter Corlett <abuse at cabal.org.uk> wrote:
APL's cryptic syntax is not a feature; if you want that, you know where to
find Perl or Clojure.
Your homework assignment is to read Ken Iverson's Turing Award lecture, write a
book report, and bring it to class for next time.
Oh, alright then :)
I have skimmed quite a lot of it, as it's quite long and I don't actually care
to learn APL, but it has a few central points that would fit in a couple of
pages, padded out with fifty pages of exposition:
a) Mathematical notation is inconsistent and the meaning varies depending on
topic;
b) "Most programming languages are decidedly inferior to mathematical
notation";
c) It is possible to unify the two, this is actually desirable, and APL
delivers;
d) Similar problems should have similar solutions, and language features should
be orthogonal and composable in obvious ways; and
e) Brevity is of the essence.
It is of particular note that this lecture was given way back in 1979, a time
closer to the invention of the computer than the present day. It is thus
(rightly) criticising the state of the art in the early days of computers. The
state of the art has moved on somewhat since, even if most modern-day
programmers still seem to be stuck in the BASIC mindset.
Point a) is trivially supported: Newton and Leibniz invented different
notations for calculus, for example.
b) is utterly subjective, with no evidence presented, and is clearly a case of
[citation needed]. The author evidently believes that programming languages are
decidedly inferior for mathematics, which is very much a case of "well, duh,
that's not what they're designed for!"
c) is getting into the meat of it.
Where the author says "mathematical notation", what is really meant is the
subset of that notation which describes arithmetical expressions, and which can
therefore be computed by rote. Similarly, expressions are only a subset of a
programming language. It follows that where mathematics and programming
intersect, one can use the same notation.
It is certainly the case that expressions in many programming languages do
resemble mathematics: 1 + 2 means the same in both, * is used for
multiplication without incident, and the operations have the same rules for
associativity, distributivity, and so on. It is also the case that this is a
desirable trait since it reduces the learning curve when approaching a new
discipline. APL itself uses + to denote addition, but deviates immediately on
multiplication, using × rather than * and repurposing * for exponentiation,
before wandering off in its own direction entirely, no doubt much to the
bemusement of mathematicians and programmers alike.
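To make that divergence concrete: APL also discards the usual precedence rules
and evaluates every expression strictly right to left, so 2×3+4 means 2×(3+4).
A quick sketch of the two readings, in Python:

# Conventional notation, and most programming languages: multiplication
# binds tighter than addition.
print(2 * 3 + 4)    # 10

# APL has no operator precedence and evaluates right to left, so its
# 2x3+4 is read as "2 times (3 plus 4)":
print(2 * (3 + 4))  # 14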
APL fails as a programming language because it is quite unlike more familiar
programming languages, and doesn't offer anything new. It may have been novel
in 1979, but I doubt it was even then: APL\360 had already shipped back in
1966. It also requires special support from the environment due to its
non-ASCII character set, adding further friction.
APL fails as a mathematical notation because it is quite unlike more familiar
mathematical notation, and doesn't offer anything new. Indeed, it seems to fall
short in that it only concerns itself with arithmetical expressions and not
mathematics as a whole.
It's very much like Esperanto. The British and the French don't bother to learn
Esperanto because it's too much work for too little gain, so we just pick up
what we need of each other's language, and all is well.
d) is another case of "well, duh!", and is hardly unique to APL. And Lisp
predates APL by six years...
We finally get to e). The dense APL examples very much support my opinion that
the language is unnecessarily obtuse. However, I'll dig out a more contemporary
example:
Some programming languages (C++, Perl, Python, etc.) allow operator
overloading, i.e. they allow programmers to define functions on user-defined
types which mimic the arithmetical operators. For example, one could overload *
to perform matrix multiplication. Some go even further (Scala, PostgreSQL) and
allow more arbitrary sequences of characters as operators, not just those
combinations that are in the base language. And others (Java) just forbid it as
a bad idea.
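As a minimal sketch of the idea in Python (a toy 2x2 matrix type invented here
for illustration; a real program would use numpy rather than roll its own):

class Mat2:
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = a, b, c, d

    def __mul__(self, other):
        # Standard 2x2 matrix product, so m1 * m2 reads like mathematics.
        return Mat2(self.a * other.a + self.b * other.c,
                    self.a * other.b + self.b * other.d,
                    self.c * other.a + self.d * other.c,
                    self.c * other.b + self.d * other.d)

    def __repr__(self):
        return "Mat2(%r, %r, %r, %r)" % (self.a, self.b, self.c, self.d)

print(Mat2(1, 2, 3, 4) * Mat2(5, 6, 7, 8))   # Mat2(19, 22, 43, 50)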
When given operator overloading, library designers go hog wild and invent
Domain Specific Languages (DSLs) which contort the language by eliminating
parentheses, adding no-op noise words to make it more English-like, and
committing various other abuses that are somewhat against the spirit of the
language. This results in very pretty-looking portfolio examples for the
project's GitHub page, but also in an API that turns out to be somewhat brittle
and error-prone, as the human author and the compiler have different ideas
about what is valid code and what it means. Whenever I encounter this sort of
thing, I start to think that Java's designers may have been onto something
after all.
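A hypothetical sketch of the pattern in Python, with every name invented for
illustration: an "English-like" assertion DSL whose connective words are
no-ops, and where an incomplete chain silently checks nothing:

class Expectation:
    def __init__(self, value):
        self.value = value

    # "to" and "be" exist purely to make the call site read like English.
    @property
    def to(self):
        return self

    @property
    def be(self):
        return self

    def greater_than(self, n):
        assert self.value > n, "%r is not greater than %r" % (self.value, n)

def expect(value):
    return Expectation(value)

expect(6 * 7).to.be.greater_than(41)   # reads nicely, and does check...
expect(6 * 7).to.be                    # ...but this typo'd line checks nothing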
Two important traits of a programming language are that it be easy to describe
an algorithm to the computer without lots of trial and error, and that the
resulting program be readable and understandable by other people (or by the
original author months later, when they've forgotten what they did), with its
meaning obvious. APL fails spectacularly on both points. So do a lot of
languages popular today, but that doesn't forgive APL for treating that
obtuseness as a desirable trait.