>> my rule of thumb is to expect 1MB/sec out of a nominally 10Mbit/sec
>> line.
> Considering that 1MB ([...]) is very close to the 10Mbit limit of
> 1.2Mbytes/s, that gives the view that there's approx 12.5% protocol
> overhead on network transmissions.
I've been juggling numbers and I can't see where you got 12.5%. 1MB is
16.666...% less than 1.2MB, regardless of whether your MB is metric or
binary, as long as it's the same MB for both figures. If the 10Mb is
metric and the 1MB is binary (which is how I'd measure it), 1MB is
16.11392% less than 10Mb. Using binary 10Mb and metric 1MB gives
23.706+%. I've been unable to find a combination that gets below 16%.
Those are measuring the difference as a percentage of the larger value
(which seems appropriate for saying "A is X% less than B"). Measuring
as percentage of the smaller value gives even larger percentages (20%,
19.2+%, 31+%, respectively).
Could you elucidate?
But yes, what I am saying is that, in practice, at least in my
experience, the various levels of overhead eat up approximately 16% of
the raw bandwidth. This is not a carefully measured figure; the
uncertainty is high enough that it's more scientifically accurate to
say "between 10% and 20%". And, of course, this is best-case.
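As a quick sanity check of that 10%-to-20% range (my sketch, assuming a metric 10Mbit/sec line and metric MB):

```python
# Effective throughput of a raw 10Mbit/sec line after subtracting
# a given fraction of protocol overhead, in metric MB/sec.

RAW_BITS_PER_SEC = 10e6  # nominal line rate

for overhead in (0.10, 0.16, 0.20):
    mb_per_sec = RAW_BITS_PER_SEC * (1 - overhead) / 8 / 1e6
    print(f"{overhead:.0%} overhead -> {mb_per_sec:.3f} MB/sec")
```

This gives 1.125, 1.050, and 1.000 MB/sec respectively, so the 1MB/sec rule of thumb sits right at the pessimistic (20% overhead) end of the range.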
/~\ The ASCII der Mouse
\ / Ribbon Campaign
X Against HTML mouse at rodents.montreal.qc.ca
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B