From: Peter Ekstrom <epekstrom(a)gmail.com>
I am tinkering with some C code that processes microcode. The
microcode is from a DG MV/10000 machine, and while working on it I
noticed it is stored little-endian. That's simple enough to work
around, but it got me wondering: why do we have big- and
little-endianness at all? What is the benefit of storing the
low-order byte first? Or was it simply an arbitrary decision made by
some hardware manufacturers?
Mostly because hardware support for dividing a word into smaller chunks
(and addressing them individually) was something manufacturers added at
different times, on their own initiative, and there was no agreed-upon
way to do it. And since there are two obvious ways to turn a sequence
of X Y-bit chunks into a word of X * Y bits, and neither one is
exactly "wrong," it ended up being a crapshoot whether any given
manufacturer would do it one way or the other.
(...or do something demented instead, like the PDP-11's "middle-endian"
approach to 32-bit values...)
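
For what it's worth, here's a small C sketch of all three layouts;
the function names and the example value 0x0A0B0C0D are mine, but
the byte orders themselves are the standard ones:

#include <stdint.h>
#include <stdio.h>

/* Assemble a 32-bit word from four bytes stored at increasing
   addresses, under each of the three conventions. */

/* Big-endian: byte 0 is the most significant. */
static uint32_t from_big_endian(const uint8_t b[4])
{
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* Little-endian: byte 0 is the least significant. */
static uint32_t from_little_endian(const uint8_t b[4])
{
    return ((uint32_t)b[3] << 24) | ((uint32_t)b[2] << 16) |
           ((uint32_t)b[1] << 8)  |  (uint32_t)b[0];
}

/* PDP-11 "middle-endian" 32-bit layout: high-order 16-bit word
   first, but each 16-bit word stored little-endian internally. */
static uint32_t from_pdp_endian(const uint8_t b[4])
{
    return ((uint32_t)b[1] << 24) | ((uint32_t)b[0] << 16) |
           ((uint32_t)b[3] << 8)  |  (uint32_t)b[2];
}

int main(void)
{
    const uint8_t bytes[4] = { 0x0A, 0x0B, 0x0C, 0x0D };

    printf("big:    0x%08X\n", from_big_endian(bytes));    /* 0x0A0B0C0D */
    printf("little: 0x%08X\n", from_little_endian(bytes)); /* 0x0D0C0B0A */
    printf("pdp:    0x%08X\n", from_pdp_endian(bytes));    /* 0x0B0A0D0C */
    return 0;
}

Same four bytes in memory, three different values, and none of the
three readings is any more "correct" than the others.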
And most of the debate probably came down to matters of taste; big-
endian is how we write things on paper, so it seems "natural" to most
people, while little-endian means that byte offset matches place
value (i.e. byte 0's value is multiplied by (256 ^ 0) = 1, byte 1's
value by (256 ^ 1) = 256, etc.), so it seems "natural" to math types.
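
To make that concrete, a minimal sketch (toy code of my own, not from
any particular machine) where the loop index i and the exponent in
256 ^ i are literally the same number:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 0x0A0B0C0D stored little-endian: low-order byte at offset 0. */
    const uint8_t bytes[4] = { 0x0D, 0x0C, 0x0B, 0x0A };

    uint32_t value = 0;
    for (int i = 0; i < 4; i++) {
        /* byte i contributes bytes[i] * 256^i, i.e. bytes[i] << (8*i) */
        value += (uint32_t)bytes[i] << (8 * i);
    }

    printf("0x%08X\n", value);  /* prints 0x0A0B0C0D */
    return 0;
}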
That said - and I have no idea whether this actually influenced
anyone's decision for any system anywhere ever - one hard advantage of
little-endian representation is that, if your CPU does arithmetic in
serial fashion, you don't have to "walk backwards" to do it in the
correct sequence.
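
Purely as an illustration (my own toy code, not anything from a real
serial ALU): since the carry has to start at the low-order byte, and
little-endian storage puts that byte at offset 0, a byte-at-a-time
add can just walk the buffers in increasing address order:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Add two little-endian multi-byte integers one byte at a time.
   Byte 0 is the least significant, so we can walk the buffers in
   increasing address order and let the carry ripple forward. */
static void add_le(const uint8_t *a, const uint8_t *b,
                   uint8_t *sum, size_t n)
{
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (uint8_t)t;   /* low 8 bits of the partial sum */
        carry  = t >> 8;       /* carry into the next byte up */
    }
}

int main(void)
{
    /* 0x000000FF + 0x00000001 = 0x00000100, as little-endian buffers. */
    const uint8_t a[4] = { 0xFF, 0x00, 0x00, 0x00 };
    const uint8_t b[4] = { 0x01, 0x00, 0x00, 0x00 };
    uint8_t s[4];

    add_le(a, b, s, 4);
    printf("%02X %02X %02X %02X\n", s[0], s[1], s[2], s[3]); /* 00 01 00 00 */
    return 0;
}

With a big-endian buffer you'd have to index from n-1 down to 0 (or
reverse the bytes first) so the carry still moves from the low-order
end upward.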