On Jun 22, 2016, at 11:05 PM, Swift Griggs
<swiftgriggs at gmail.com> wrote:
...
> Just some internet bungholes on reddit. Brother, just remember, *you*
> asked, and you can never get the time back:
> https://www.reddit.com/r/programming/comments/d92jj/why_computers_use_twos_…
Nice.
I have a copy of 1948 (!) lecture notes on computer design. They discuss one's
complement and two's complement, pointing out the advantage of two's complement
(no two zeroes) but also the disadvantage that negating is harder (it takes two steps:
complement, then add one). In early computers that cost was significant, which explains
why you see one's complement there.
Another consideration that may have played a role is that with one's complement you
need fewer instructions: the bitwise complement serves both as the Boolean NOT and as
arithmetic negate, for example.
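
A minimal sketch of that difference, assuming 16-bit words (the function names and
word width are mine, purely for illustration):

/* Two negation styles side by side. */
#include <stdint.h>
#include <stdio.h>

/* Two's complement: negate takes two steps, complement then add one. */
static uint16_t neg_twos(uint16_t x) { return (uint16_t)(~x + 1); }

/* One's complement: the single bitwise-complement instruction does double
   duty as Boolean NOT and as arithmetic negate. */
static uint16_t neg_ones(uint16_t x) { return (uint16_t)~x; }

int main(void)
{
    printf("two's complement -5: %04x\n", neg_twos(5));  /* fffb */
    printf("one's complement -5: %04x\n", neg_ones(5));  /* fffa */
    return 0;
}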
The "two zeroes" problem is handled best by using the CDC 6000 technique: it
doesn't use an adder, but rather a subtractor (so adding is done by subtracting the
complement). If you do that -- an exercise for the student to demonstrate why -- the
result will never be negative zero unless there were negative zeroes in the inputs. In
particular, adding x and -x will produce +0 for all x.
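
To see the effect, here is a small simulation of 16-bit one's complement addition done
both ways; the word width and the end-around carry/borrow handling are my
reconstruction for illustration, not the actual CDC hardware:

/* 16-bit one's complement addition, simulated two ways: a conventional
   adder with end-around carry, and "subtract the complement" with
   end-around borrow. */
#include <stdint.h>
#include <stdio.h>

#define MASK 0xFFFFu                       /* 16-bit word, for illustration */

/* Conventional adder with end-around carry. */
static uint16_t add_adder(uint16_t a, uint16_t b)
{
    uint32_t s = (uint32_t)a + b;
    if (s > MASK)                          /* carry out: bring it around */
        s = (s & MASK) + 1;
    return (uint16_t)(s & MASK);
}

/* Adding by subtracting the complement, with end-around borrow. */
static uint16_t add_subtractor(uint16_t a, uint16_t b)
{
    uint16_t c = (uint16_t)~b;             /* one's complement of the addend */
    uint32_t d = (uint32_t)a - c;
    if (a < c)                             /* borrow out: bring it around */
        d -= 1;
    return (uint16_t)(d & MASK);
}

int main(void)
{
    uint16_t x = 5, minus_x = (uint16_t)~x;             /* -5 in one's complement */
    printf("adder:      x + -x = %04x\n", add_adder(x, minus_x));       /* ffff (-0) */
    printf("subtractor: x + -x = %04x\n", add_subtractor(x, minus_x));  /* 0000 (+0) */
    return 0;
}

With the adder, x + -x comes out as all ones (negative zero); with the subtractor it
comes out as +0, which is the property described above.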
I just added a few more machines to the table in the Wikipedia article referenced by the
comments you mentioned.
https://en.wikipedia.org/wiki/Word_(computer_architecture)
Another area where people may not be aware of how much variety existed is the
encoding of floating point numbers. IEEE is now the standard, but PDP-11 users will
remember the DEC format, which is a bit different. CDC and IBM were different still. The
Dutch machine Electrologica X8 had a particularly interesting approach (parts of which
were adopted, many years later, by the IEEE standard).
paul