Seems
excessive... I seem to recall you need log2(n) - 1 bits,
which would be 3 bits (32-bit ECC needs 4 bits).
Doesn't that assume the
'extra' bits are known to be correct?
Those can be in error too (even if the 'real' data bits are
correct).
No, it doesn't. See Wikipedia's Hamming code page (asking for
SECDED redirects to it) for a brief treatment of the subject, or
any of the many more detailed treatments of coding theory.
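The requirement for single-error correction is 2^r >= m + r + 1,
where m is the number of data bits and r the number of check bits;
SECDED adds one more overall parity bit on top of that. Here is a
quick C sketch of my own (not from that page) that tabulates the
minimum r for a few common widths:

#include <stdio.h>

/* Hamming bound for single-error correction: 2^r >= m + r + 1,
 * where m = data bits and r = check bits.  SECDED needs r + 1. */
static int sec_check_bits(int m)
{
    int r = 0;
    while ((1 << r) < m + r + 1)
        r++;
    return r;
}

int main(void)
{
    const int widths[] = { 8, 16, 32, 64 };
    for (int i = 0; i < 4; i++) {
        int r = sec_check_bits(widths[i]);
        printf("%2d data bits: %d check bits for SEC, %d for SECDED\n",
               widths[i], r, r + 1);
    }
    return 0;
}

It prints 5 (SEC) / 6 (SECDED) check bits for 16 data bits, and
6 / 7 for 32.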
I am missing something here... The OP says that adding 3 bits to a
16-bit word is enough to correct any single-bit error.
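A quick check on that claim: 3 check bits give only 2^3 = 8 distinct
syndromes, but correcting any single-bit error in a 16 + 3 = 19-bit
codeword means distinguishing 'no error' from an error in each of
the 19 positions, i.e. 20 cases, so 3 bits cannot be enough. The
smallest count that satisfies the bound above is 5 check bits for
SEC, plus one more parity bit for SECDED.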