On 09/29/2014 11:35 AM, Mouse wrote:
(Interesting.  I hadn't previously noticed that << was undefined but
>> was implementation-defined in the negative case.  I wonder why....)
Because there's more than one machine that has shift right as an
arithmetic (sign-extended) operation, with shift left being a logical
one (shift zeroes into the lsb and toss the msb).  The CDC 6000, since
we've mentioned it, was one such (the AXi Xj,Bk vs. LXi Xj,Bk
operations).
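For anyone who wants to see the rule Mouse noted from the C side,
here's a tiny sketch (mine, purely illustrative): the first printf is
implementation-defined, and the commented-out shift is the undefined
one.

    #include <stdio.h>

    int main(void)
    {
        int x = -8;

        /* Implementation-defined: an arithmetic (sign-extending)
         * shift prints -2; a logical (zero-filling) shift would
         * print a large positive number instead. */
        printf("-8 >> 2 = %d\n", x >> 2);

        /* Undefined: left-shifting a negative value.  Left
         * commented out rather than executed.
         *
         * printf("-8 << 2 = %d\n", x << 2);
         */

        /* Shifts on unsigned operands are fully defined both ways. */
        printf("(unsigned)-8 >> 2 = %u\n", (unsigned)x >> 2);

        return 0;
    }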
But K&R is much laxer in its definitions. I'd
have to go read it to be
sure, but you might be able to implement &, |, ^, and ~ as per-digit
operations rather than per-bit operations (my first cut would be & as
digit-by-digit minimum, | as digit-by-digit maximum, ^ as
digit-by-digit addition modulo the base, and ~ as ~x = MAXINT-1-x
(loosely speaking)), with shifts shifting by digits instead of bits,
and still conform to K&R. That certainly feels like a reasonable
approach to me.
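Purely as an illustration of that idea (a sketch, not anything K&R
actually blesses), per-digit versions of those operators, simulated in
ordinary C on a pretend 4-digit decimal word, might look roughly like
this:

    #include <stdio.h>

    #define NDIGITS 4          /* pretend word size: 4 decimal digits */
    #define POW     10000u     /* 10^NDIGITS */

    /* Apply op() to corresponding decimal digits of a and b. */
    static unsigned perdigit(unsigned a, unsigned b,
                             unsigned (*op)(unsigned, unsigned))
    {
        unsigned result = 0, place = 1;
        for (int i = 0; i < NDIGITS; i++) {
            result += op(a % 10, b % 10) * place;
            a /= 10;
            b /= 10;
            place *= 10;
        }
        return result;
    }

    static unsigned d_and(unsigned x, unsigned y) { return x < y ? x : y; } /* min          */
    static unsigned d_or (unsigned x, unsigned y) { return x > y ? x : y; } /* max          */
    static unsigned d_xor(unsigned x, unsigned y) { return (x + y) % 10;  } /* sum mod base */

    int main(void)
    {
        unsigned a = 1234, b = 5067;

        printf("a & b  -> %04u\n", perdigit(a, b, d_and));   /* 1034 */
        printf("a | b  -> %04u\n", perdigit(a, b, d_or));    /* 5267 */
        printf("a ^ b  -> %04u\n", perdigit(a, b, d_xor));   /* 6291 */

        /* Shifts move whole digits rather than bits. */
        printf("a << 1 -> %04u\n", (a * 10u) % POW);         /* 2340 */
        printf("a >> 1 -> %04u\n", a / 10u);                 /* 0123 */

        return 0;
    }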
There are some assumptions made in K&R that are significant on older
machines.
Dijkstra didn't mention it in his tirade against the 1620, but there
are characters/digits(?) on the 1620 that can't be compared (e.g. the
record and group marks), only tested for, and one (the numeric blank)
that can neither be tested for nor compared.  On the 1620, the compare
operation is a strictly arithmetic affair and special characters aren't
part of its vocabulary.  So trying to, say, compare a record mark with
a numeric blank will result in a check condition.  Indeed, there's no
way to tell the difference between a record mark encoded as 8-2 and one
encoded as 8-2-1, though it's certainly possible to read either into
memory.
I attribute these "shortcomings" to a shift in attitudes over the
years.  1620 programmers KNEW about the various issues and
inconsistencies and avoided them.  At worst, you had to restart the
machine.  It was sort of a "you know that you'll injure yourself if you
try to trim your nails with this oxyacetylene torch" attitude, whereas
modern architectures try to make sure that you never get the
opportunity.  There's a parallel in consumer and industrial protection
regulations, I think.
K&R C was designed as an OS implementation
language. As such, it is
expected that the coder knows the machine, with operations doing things
that are unsurprising in view of that. Modern C is a tricky balancing
act, on the one hand pulled towards that original stance by the desire
that it still be a useful OS implementation language, on the other hand
pulled towards precisely-specified and machine-independent semantics by
the desire that it be usable for cross-OS-portable programming, such as
for application-level code and utility libraries. The current ubiquity
of binary machines has meant that C could get away with mandating
binary for things like & and << without crippling it enough to bother
anyone with significant clout, in contrast to things like int size,
where they found it necessary to leave considerable leeway.
Hence a reference to PL/I. I think this page makes an interesting read:
http://www.uni-muenster.de/ZIV.EberhardSturm/PL1andC.html
One attraction of C was that it was a cheap close-to-assembler language
with a small vocabulary. One could become usefully fluent in K&R C over
the space of a weekend. With the latest incarnations of C++, I'm not
sure that I could become usefully fluent in six months.
Similarly, PL/I's biggest liability is that it's a very large language
and somewhat difficult to master in a short amount of time.
--Chuck