It was thus said that the Great Mouse once stated:
3) It's slower. Two reasons for this:
Even to the extent this is true, in most cases, "so what"?
Most executables are not performance-critical enough for dynamic-linker
overhead to matter. (For the few that are, or for the few cases where
lots are, yes, static linking can help.)
I keep telling myself that whenever I launch Firefox after a reboot ...
I use the
uintXX_t types for interoperability---known file formats
and network protocols, and the plain types (or known ones like size_t)
otherwise.
uintXX_t does not help much with "known file formats and network
protocols". You have to either still serialize and deserialize
manually - or blindly hope your compiler adds no padding (e.g., that
it lays out your structs exactly the way you hope it will).
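For reference, the manual route looks something like this (the record
layout here is made up, but decoding byte by byte works no matter what
the compiler pads or what the host byte order is):

#include <stdint.h>

/* Hypothetical two-field record: a 16-bit type and a 32-bit length,
 * both big-endian on the wire.  Decoding byte by byte sidesteps both
 * struct padding and host byte order. */
struct record {
    uint16_t type;
    uint32_t length;
};

static void decode_record(const uint8_t *buf, struct record *r)
{
    r->type   = (uint16_t)(((uint16_t)buf[0] << 8) | buf[1]);
    r->length = ((uint32_t)buf[2] << 24)
              | ((uint32_t)buf[3] << 16)
              | ((uint32_t)buf[4] <<  8)
              |  (uint32_t)buf[5];
}

That's the style of decode the article in [2] recommends.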
First off, the C standard mandates that the fields of a struct cannot be
reordered, so that just leaves padding and byte order to deal with. Now,
it may sound cavalier of me, but of the three compilers I use at work
(gcc, clang, the Solaris Sun Works thingy) I know how to get them to lay
out the structs exactly as I need them (and it doesn't hurt that the
files and protocols we deal with are generally properly aligned anyway
for those systems that can't handle misaligned reads (generally
everything *BUT* the x86), and that we keep everything in network byte
order). [1]
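In practice that boils down to something like the following sketch (the
header here is made up, and __attribute__((packed)) is the gcc/clang
spelling for "no padding"; the Sun compiler wants its own pragma):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>          /* ntohs(), ntohl() */

/* Hypothetical wire header, kept in network byte order on disk and on
 * the wire.  __attribute__((packed)) tells gcc/clang not to insert any
 * padding between the fields. */
struct wire_header {
    uint16_t type;
    uint16_t flags;
    uint32_t length;
} __attribute__((packed));

static void read_header(const unsigned char *buf,
                        uint16_t *type, uint32_t *length)
{
    struct wire_header h;

    memcpy(&h, buf, sizeof h);  /* avoid a possibly misaligned read */
    *type   = ntohs(h.type);    /* network to host byte order */
    *length = ntohl(h.length);
}

The memcpy() into a local struct keeps the non-x86 boxes happy about
alignment, and ntohs()/ntohl() take care of the byte order.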
-spc
[1] Sorry Rob Pike [2], but compilers aren't quite smart enough [3]
yet.
[2]
https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
[3]
https://news.ycombinator.com/item?id=3796432