On 06/23/2016 07:31 AM, Paul Koning wrote:
I have a copy of 1948 (!) lecture notes on computer design. It
discusses one's complement and two's complement. It points out the
advantage of two's complement (no two zeroes) but also the
disadvantage that negating is harder (requiring two steps). In early
computers that was significant, which explains why you see one's
complement there.
There are also a few obscure bit-twiddling tricks that work in one's
complement but not in two's.
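A rough sketch of the two negation rules, using a 16-bit word purely for illustration (the actual machines of course differed in word size):

```python
WIDTH = 16
MASK = (1 << WIDTH) - 1  # 16-bit word, for illustration

def negate_ones(x):
    """One's-complement negation: a single bitwise inversion."""
    return ~x & MASK

def negate_twos(x):
    """Two's-complement negation: invert, then add 1 (two steps)."""
    return (~x + 1) & MASK

# +5 in a 16-bit word
assert negate_ones(5) == 0xFFFA   # -5 in one's complement
assert negate_twos(5) == 0xFFFB   # -5 in two's complement

# One's complement has two zeroes: 0x0000 (+0) and 0xFFFF (-0) ...
assert negate_ones(0) == 0xFFFF
# ... while two's complement has exactly one: negating 0 yields 0.
assert negate_twos(0) == 0x0000
```

The extra "+1" step is the cost the lecture notes mention; in one's complement, negation is a pure inversion the hardware can do in one pass.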
Another interesting aspect where people may not be aware of how much
variety existed is in the encoding of floating-point numbers. IEEE
is now the standard, but PDP-11 users will remember the DEC format,
which is a bit different.
And by the time you got to the VAX, the issue became *which* floating-point
format? (F, D, G, or H).
CDC and IBM were different still. The Dutch machine Electrologica X8
had a particularly interesting approach (parts of which were adopted,
many years later, by the IEEE standard).
IBM's S/360 FP format was a big weakness of that machine. Single
precision was a 32-bit word whose exponent indicated a power of 16
(not 2) to be applied to the mantissa; normalization therefore shifted
the mantissa in 4-bit steps rather than single bits, so up to three
leading bits of precision could be wasted.
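A small sketch of decoding that format, following the documented S/360 layout (1 sign bit, 7-bit excess-64 exponent as a power of 16, 24-bit fraction read as 0.fraction in base 16):

```python
def decode_s360_single(word):
    """Decode an IBM S/360 single-precision float from a 32-bit word.

    Layout: 1 sign bit, 7-bit excess-64 exponent (a power of 16),
    and a 24-bit fraction interpreted as 0.fraction in base 16.
    """
    sign = -1.0 if word >> 31 else 1.0
    exponent = ((word >> 24) & 0x7F) - 64       # excess-64 bias
    fraction = (word & 0xFFFFFF) / float(1 << 24)
    return sign * fraction * 16.0 ** exponent

# 0x41100000: 16**1 * (0x100000 / 2**24) = 16 * 0.0625 = 1.0
assert decode_s360_single(0x41100000) == 1.0
```

Note that 1.0's fraction begins 0001...: a "normalized" hex mantissa may still carry up to three leading zero bits, which is exactly the precision wobble described above.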
CDC, on the other hand, dedicated 48 bits to the mantissa of
single-precision numbers. In other words, CDC's single precision was
roughly the equivalent of IBM's double precision.
To the scientific community, this was a big selling point.
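A back-of-envelope comparison of the decimal digits each mantissa width carries (digits ≈ bits × log10(2); the 56-bit figure is the S/360 double-precision fraction):

```python
import math

cdc_single_bits = 48   # CDC single-precision coefficient
ibm_single_bits = 24   # S/360 single-precision fraction
ibm_double_bits = 56   # S/360 double-precision fraction

def decimal_digits(bits):
    """Approximate decimal digits represented by a binary mantissa."""
    return bits * math.log10(2)

assert round(decimal_digits(cdc_single_bits), 1) == 14.4
assert round(decimal_digits(ibm_single_bits), 1) == 7.2
assert round(decimal_digits(ibm_double_bits), 1) == 16.9
```

So CDC single precision (~14 digits) sits much closer to IBM double (~17 digits) than to IBM single (~7 digits, and effectively less given the hex-normalization wobble).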
Of course, there were also machines that used the floating-point
facility for all arithmetic. Integer computation was performed as a
subset of floating point. This has the ramification that an integer
does not occupy an entire word, but only part of it.
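The same limitation is easy to demonstrate with any float format: an integer held as a float is exact only while it fits in the mantissa. Using IEEE double's 53-bit mantissa as a modern stand-in for those designs:

```python
# An "integer" on such a machine is a float whose value is integral;
# it stays exact only up to the mantissa width (53 bits here).
max_exact = 2 ** 53

assert float(max_exact - 1) == max_exact - 1     # still representable exactly
assert float(max_exact + 1) == float(max_exact)  # 2**53 + 1 rounds away
```

In other words, the usable integer range is set by the mantissa, not by the word length.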
--Chuck