It was thus said that the Great Bryan Pope once stated:
> > In one of the top calling routines, it gets truncated to a BYTE-size
> > (so to speak) anyway.
>
> If you only want a byte sized data type, why not use a char type for the
> variable and return value?
Because a C compiler is free to treat an unspecified char declaration as
signed or unsigned (i.e., in the declaration ``char c;'' c can be either a
signed or an unsigned quantity).  You need to specify explicitly if you
want signed or unsigned characters (okay, some compilers let you set the
default signedness of chars, but not all).
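  For instance, this little program (a quick sketch of my own, just to show
the difference) is perfectly legal C, yet prints a different answer
depending on which way the compiler went:

#include <stdio.h>
#include <limits.h>

int main(void)
{
  char c = '\xFF';      /* all bits set (assuming 8-bit chars) */

  /* Prints -1 if plain char is signed here, 255 if it's unsigned;
     both are conforming.  CHAR_MIN from <limits.h> tells you the
     same thing at compile time: it's 0 if plain char is unsigned,
     negative if signed. */
  printf("c = %d, CHAR_MIN = %d\n", c, CHAR_MIN);

  return 0;
}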
To be safe, you probably want something like:
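/* For each width, pick an unsigned type whose maximum value matches
   exactly; the #error lines catch any platform where no such type
   exists. */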
#include <limits.h>
#if (UCHAR_MAX == 255U)
typedef unsigned char u8;
#else
# error No integral type is 8 bits on this platform
#endif
#if (UINT_MAX == 65535U)
typedef unsigned int u16;
#elif (USHRT_MAX == 65535U)
typedef unsigned short u16;
#elif (UCHAR_MAX == 65535U)
typedef unsigned char u16;
#else
# error No integral type is 16 bits on this platform
#endif
#if (ULONG_MAX == 4294967295UL)
typedef unsigned long u32;
#elif (UINT_MAX == 4294967295UL)
typedef unsigned int u32;
#elif (USHRT_MAX == 4294967295UL)
typedef unsigned short u32;
#elif (UCHAR_MAX == 4294967295UL)
typedef unsigned char u32;
#else
# error No integral type is 32 bits on this platform
#endif
This code should generate types of the appropriate bit sizes for most
general-purpose systems in use today, and yet alert you to any problems or
assumptions you might have when porting to a new system. A word is *NOT* a
16-bit quantity (well, except on 16-bit systems), despite what Microsoft
says.
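  If you want to double-check the result when porting, a quick sanity test
is easy enough; here I'm assuming the typedefs above were dropped into a
(hypothetical) header called ``types.h'':

#include <stdio.h>
#include <limits.h>
#include "types.h"   /* hypothetical header holding the u8/u16/u32 typedefs */

int main(void)
{
  /* sizeof counts chars, so multiply by CHAR_BIT to get storage bits.
     Note this reports storage size only; the #if tests above are what
     guarantee the value ranges. */
  printf("u8  is %lu bits\n", (unsigned long)(sizeof(u8)  * CHAR_BIT));
  printf("u16 is %lu bits\n", (unsigned long)(sizeof(u16) * CHAR_BIT));
  printf("u32 is %lu bits\n", (unsigned long)(sizeof(u32) * CHAR_BIT));

  return 0;
}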
-spc (Has had to deal with portability issues)