On Fri, Jan 29, 2021 at 01:12:55PM -0800, Chuck Guzis via cctalk wrote:
[...]
> Most old (pre S/360) digit/character-addressable architectures were
> big-endian (i.e. higher-order characters occupied lower addresses).
> Even PDP-11 isn't strictly little-endian, though Intel X86 definitely is.
I note that modern x86 and ARM have big-endian load and store operations, so
while both architectures are little-endian by default, there is no extra
overhead for handling big-endian data.
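For example (just a sketch, and the name load_be32 is my own), with GCC or
Clang the builtin __builtin_bswap32 typically compiles to a single BSWAP or
MOVBE on x86, or REV on ARM, so a big-endian load costs essentially nothing:

    #include <stdint.h>
    #include <string.h>

    /* Read a 32-bit big-endian value from memory on a little-endian host. */
    static uint32_t load_be32(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);      /* ordinary native-order load    */
        return __builtin_bswap32(v);  /* reinterpret the bytes as BE   */
    }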
Little-endian tends to be more useful when doing multi-word arithmetic.
Big-endian is handy for text and human-readable numbers. That there are
heated arguments over which endianness is best mainly tells us that there's
bugger all in it either way. After all, the word "endian" is a satirical
device in Gulliver's Travels.
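The multi-word-arithmetic point is easy to see in code: store an n-word
number with its least significant word at the lowest address (the natural
layout on a little-endian machine) and the carry propagates in the same
direction the addresses increase. A rough sketch (add_words is just an
illustrative name):

    #include <stdint.h>
    #include <stddef.h>

    /* Add two n-word numbers stored least-significant word first;
       the loop walks upward through memory as the carry propagates. */
    static uint32_t add_words(uint32_t *dst, const uint32_t *a,
                              const uint32_t *b, size_t n)
    {
        uint64_t carry = 0;
        for (size_t i = 0; i < n; i++) {
            uint64_t sum = (uint64_t)a[i] + b[i] + carry;
            dst[i] = (uint32_t)sum;   /* low 32 bits of the partial sum */
            carry  = sum >> 32;       /* 0 or 1                         */
        }
        return (uint32_t)carry;
    }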
> Numbering of bits in a word is also interesting. Is the high order bit in
> a 64 bit word, bit 0 or bit 63? Both conventions have been employed.
On every CPU I've used, LSB has always been bit 0. Unlike endianness, this
is clearly better than the other way round, since a bit's value is
2**bit_number and the bit number doesn't change if the value is converted
to a different word width.
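A trivial demonstration of why that's convenient: bit 5 is worth 32 whether
the word is 16 or 64 bits wide, so code like this (just an illustration)
never has to renumber anything:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned bit = 5;
        uint16_t w16 = (uint16_t)1 << bit;   /* 0x0020             */
        uint64_t w64 = (uint64_t)1 << bit;   /* 0x0000000000000020 */
        printf("bit %u = %u (16-bit) = %llu (64-bit)\n",
               bit, (unsigned)w16, (unsigned long long)w64);
        return 0;
    }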
When it comes to I/O devices, which don't do arithmetic, either convention
may appear. Hardware people rarely pick names or conventions that make sense
to software people.