On Jul 2, 2012, at 9:18 PM, Sean Conner wrote:
It was thus said that the Great David Riley once stated:
On Jul 2, 2012, at 6:53 PM, Sean Conner wrote:
AFAIK sizeof(unsigned short) is not defined anywhere :-).
I'm not sure I follow you here. In C, you can indeed do a
sizeof(unsigned short)
and get back the size (in characters) of a short int (with the size in bits
of a character defined by CHAR_BIT).
But it's not defined in a standard, which is a problem. For all you
know, it could be 128 bits on whatever platform it's compiled for.
Um .. it *IS* defined in a standard---the ANSI C standard:
... Their implementation-defined values shall be equal or greater in
magnitude (absolute value) to those shown, with the same sign.
CHAR_BIT - number of bits for smallest object that is not a
bit-field (byte)
CHAR_BIT 8
I should clarify. Yes, you can get sizeof(whatever) and it will be
the size of whatever in bytes; I meant that the exact value of
sizeof(unsigned short) is not fixed by a standard. Assuming that it
has even a fixed minimum (beyond the 1 char that sizeof guarantees)
is a fool's errand in a lot of cases. 32-bit architectures have
dominated computing since I started programming, but I've still seen
plenty of breakage (even in new code) caused by such assumptions.
I think actually the most obnoxious example I've seen is the
assumption that sizeof(int) == sizeof(void *). One of the bog-
standard codebases for MMCs on the ATCA architecture, written
by PigeonPoint, has some post-processing utilities written for
Linux in the source distribution. They made the boneheaded
moves of a) typedefing "u32" to "unsigned long" (which made
"u32" a 64-bit int on amd64) and b) assuming that "unsigned
int" could be used to hold pointer differences (which it
cannot on amd64). The resulting code crashed because the heap was
getting stomped, which is hard to debug; worming out the errors was
so difficult (especially because some of them just produced
slightly-wrong binaries instead of crashing) that I gave up and
forced compilation as 32-bit code.
- Dave