Seems
excessive... I seem to recall you need log2(n) - 1 bits, which
would be 3 bits (32-bit ECC needs 4 bits).
Doesn't that assume the 'extra' bits are known to be correct? Those
can be in error too (even if the 'real' data bits are correct).
No, it doesn't. See Wikipedia's Hamming code page (asking for SECDED
redirects to it) for a brief treatment of the subject, or any of many
more detailed treatments of coding theory for more.
I am missing something here... The OP says that adding 3 bits to a 16 bit
word is enough to be able to correct any single-bit error.
Now, consider those 16 'real' data bits. If any single one is in error,
that generates a new 16 bit word, and each of these must give the same
output 16 bit word after error correction. So it would appear to me that
there have to be at least 17 possible input words (the 'correct' one, and
the 16 each with one bit flipped from the correct one) that give the
same 16 bit output -- that is what is meant by correcting single bit errors.
And yet adding 3 bits only gives you 8 times as many possible data words,
which doesn't seem enough.
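The counting argument above can be made explicit with a short sketch. Each valid codeword must "own" itself plus every single-bit flip of its full (data + check) pattern, and those sets must be disjoint for correction to be unambiguous:

```python
# Pigeonhole count for 16 data bits + 3 check bits: every valid codeword
# must account for itself plus all 19 single-bit flips of its 19-bit
# pattern (20 patterns each, disjoint per codeword), and there are 2**16
# codewords -- but only 2**19 total 19-bit patterns exist.
k, r = 16, 3
n = k + r
patterns_needed = (2 ** k) * (n + 1)     # 65536 codewords * 20 patterns
patterns_available = 2 ** n
print(patterns_needed, patterns_available)     # 1310720 524288
print(patterns_needed <= patterns_available)   # False: 3 check bits too few
```

So 3 check bits fall short by more than a factor of two, exactly as the paragraph suggests.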
The example I know is the MK11 memory box.
ECC there is done at the single-error correction, double-error detection level on 32-bit
words.
This takes 7 check bits.
I find the most satisfying illustration to be the MK11B print set: there's a
very nice 11x17 page in large type, illustrating how all this is done with
XOR gates. I find this much more digestible than the usual mathematical
equation stuff found in textbooks. That's a brilliant page.
It shows how, by eye, to read the XOR gate outputs to identify the error
syndrome uniquely, with just a few words. Many fewer words than I used in
this paragraph!
If you didn't want double-error detection it would take 6 check bits per 32 bit word.
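The XOR-gate syndrome idea can be sketched in miniature with a toy Hamming(7,4) code (4 data bits + 3 check bits); this is only an illustration of the principle, not the MK11's actual bit layout or check-bit assignments:

```python
# Toy Hamming(7,4): check bit p_i is the XOR of the bits whose 1-based
# position number has bit i set. On read, recomputing the parities gives
# a syndrome that is 0 (no error) or the position of the flipped bit.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    c = [0] * 8                      # index 0 unused; positions 1..7
    c[3], c[5], c[6], c[7] = d       # data bits in non-power-of-2 slots
    c[1] = c[3] ^ c[5] ^ c[7]        # covers positions with bit 0 set
    c[2] = c[3] ^ c[6] ^ c[7]        # covers positions with bit 1 set
    c[4] = c[5] ^ c[6] ^ c[7]        # covers positions with bit 2 set
    return c[1:]

def hamming74_correct(word):
    """word: 7 received bits -> corrected 7 bits (fixes one flipped bit)."""
    c = [0] + list(word)
    s = (c[1] ^ c[3] ^ c[5] ^ c[7]) \
        | (c[2] ^ c[3] ^ c[6] ^ c[7]) << 1 \
        | (c[4] ^ c[5] ^ c[6] ^ c[7]) << 2   # syndrome = error position
    if s:
        c[s] ^= 1
    return c[1:]

cw = hamming74_encode([1, 0, 1, 1])
bad = cw[:]
bad[4] ^= 1                          # flip one bit "in transit"
print(hamming74_correct(bad) == cw)  # True
```

Note that a flipped *check* bit corrects just as cleanly as a flipped data bit, which answers the earlier worry about the extra bits themselves being in error.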
Maybe the implication that there are 3 check bits per 16-bit word assumes
single-bit correction only, with the actual ECC logic working on 32-bit
words. If the ECC were working on 16-bit words, it would take 5 check bits.
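As a sanity check on those counts, here is a small sketch using the standard bound for single-error correction (the smallest r with 2**r >= k + r + 1), with one extra overall parity bit for SECDED:

```python
# Verify the check-bit counts quoted above for 16- and 32-bit words.
def sec_check_bits(k):
    """Smallest r with 2**r >= k + r + 1 (single-error correction)."""
    r = 1
    while 2 ** r < k + r + 1:
        r += 1
    return r

for k in (16, 32):
    print(f"{k}-bit word: SEC={sec_check_bits(k)}, SECDED={sec_check_bits(k) + 1}")
# 16-bit: SEC=5, SECDED=6; 32-bit: SEC=6, SECDED=7. So "3 check bits per
# 16 data bits" only works out when the ECC spans a full 32-bit word.
```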
Tim.