> "This seems complex (or at least verbose!),
but when you get used to
> it you'll find that it's actually very easy to follow, and
> incredibly convenient in many ways. Here are some others to try out.
> See if you can figure out what they evaluate to before trying
> them::"
> Why should programmers *have* to get used to it?
They don't. They can use, for example, FORTRAN, which was designed for
numerical work and is actually pretty good at it. But if you're doing
something that plays to Lisp's strengths, the value of a uniform code
representation even for numerical code is high.
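To make that "uniform code representation" point concrete - this is a
toy of my own, not anything from the book being quoted or from anyone
upthread - consider that a numerical expression such as (+ (* n 3) k -7)
is just a list, so ordinary list-processing code can inspect and rewrite
it before it ever gets compiled:

    ;; Toy constant folder over ordinary s-expressions.  The function
    ;; name is made up for illustration.
    (defun fold-constants (form)
      (if (atom form)
          form
          (let ((args (mapcar #'fold-constants (rest form))))
            (if (every #'numberp args)
                (apply (first form) args)   ; every argument is a literal: compute it now
                (cons (first form) args)))))

    ;; (fold-constants '(+ (* 2 3) k (- 10 7)))  =>  (+ 6 K 3)

Try doing that to a FORTRAN expression without writing a parser first.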
The sticking point here seems to be that you assume that everything
*should* use algebraic notation, just because "everyone knows it."
It's not a _completely_ stupid stance. After all, this...
> The *job* of a high-level language is to make life easier for human
> beings, or we'd all write everything in machine code.
...is true. Using algebraic notation for things that are well-suited
to it is a significant win. And - not coincidentally, I believe - the
best Lisp I've used (on the Lisp Machines we had back when I worked in
academia) had a notation which did let the programmer write numerical
expressions in traditional infix notation. The triggering character
was one that doesn't exist in Latin-1, but, if I use @ for it instead,
one could write something like
    (let ((size @(n*3)+k-7@))
      ...)

instead of

    (let ((size (+ (* n 3) k -7)))
      ...)
which in a sense is the best of both worlds: the human can use
traditional infix notation for those fragments for which it produces
clearer code. (Note that this is independent of the representation
generated. This is an I/O issue, not an internal-representation
issue.)
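For anyone who hasn't seen how such a thing gets wired in, here is a
rough sketch, in portable Common Lisp, of the same idea: a reader macro
that turns a restricted infix form into an ordinary prefix s-expression.
It is emphatically not the Lisp Machine reader - I'm using #I(...)
instead of the @...@ delimiters, the names are ones I just made up, and
it handles only +, -, and *, left to right, with no precedence.  But it
shows the point above: everything happens at read time, and what comes
out is a plain s-expression.

    ;; Sketch only; #I and PARSE-SIMPLE-INFIX are invented for this post.
    (defun parse-simple-infix (tokens)
      ;; TOKENS is a flat list like (n * 3 + k - 7); fold it left to right.
      (reduce (lambda (acc op-arg)
                (destructuring-bind (op arg) op-arg
                  (list op acc arg)))
              (loop for (op arg) on (rest tokens) by #'cddr
                    collect (list op arg))
              :initial-value (first tokens)))

    (set-dispatch-macro-character
     #\# #\I
     (lambda (stream subchar arg)
       (declare (ignore subchar arg))
       (parse-simple-infix (read stream t nil t))))

    ;; #I(n * 3 + k - 7) now reads as (- (+ (* n 3) k) 7), so one can write
    ;;   (let ((size #I(n * 3 + k - 7)))
    ;;     ...)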
Yes, if you force everything into infix notation, you will end up with
a horrible botch. But not even FORTRAN does that. Forcing everything
into prefix notation is a little more tolerable, but it suffers from
the same problem (in a milder form).
> A language which needs people to learn new, difficult patterns and
> practices because they are more convenient for the interpreter or
> compiler, or are easier for the program itself to manipulate, is a
> language which is biased in favour of the computer and the
> implementer and against the user, and that is a very profoundly
> dangerous direction to take. I might go so far as to say it is a
> serious flaw.
Per se, I agree.
In the case of Lisp, though, those patterns and practices have proven,
through more years of experience than almost any other language has
survived, to have substantial value in their own right. I have seen it
written that knowing Lisp makes you a better programmer even if you
never write anything in Lisp, and I agree with it; the mental tools I
have because I know Lisp improve the code I write regardless of the
language I'm writing in.
To put it another way, you shouldn't learn those patterns and practices
because they are more convenient for the Lisp engine. You should learn
them because they are important, and Lisp is just an especially good
language in which to learn, and work with, them. You _can_ learn them,
most of them at least, without ever seeing a single s-expression. But
it's much easier in Lisp - and, once you have your head around them,
you mostly know Lisp anyway.
> If I have to go through programmatic gyrations to express problems as
> algebraic expressions, has the HLL made my life easier?
But, conversely, if you have to go through programmatic gyrations to
express algebraic formulae in a notation at odds with every other
algebraic context, has the language made your life easier?
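To pick a concrete (and entirely invented) instance of that trade-off:
the discriminant of a quadratic is b*b - 4*a*c in the notation every
mathematics text uses, and becomes

    ;; Purely illustrative; the function name is mine.
    (defun discriminant (a b c)
      (- (* b b) (* 4 a c)))

as an s-expression.  Neither form is hard to write, but only one of them
matches every other algebraic context the reader has ever seen.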
Dogmatic insistence on seeing everything the Lisp way is just as bad as
dogmatic insistence on seeing everything the FORTRAN way, or for that
matter the Prolog way or the C way.
/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML        mouse at rodents-montreal.org
/ \ Email!              7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B