On Jan 23, 2014, at 11:41 PM, Eric Smith <spacewar at gmail.com> wrote:
On Jan 23, 2014 8:54 PM, "David Riley" <fraveydank at gmail.com> wrote:
When I've done UARTs in FPGAs, I typically only do the sampling
in the middle of the bit period.
I'm sure you are aware of it, but for the benefit of others, it should be
noted that this is the middle of the bit period from the receiving UART's
perspective, and may not match the transmitting UART's middle-of-stop-bit
time for two reasons:
1) There may be a timebase mismatch between the two ends. In the
1980s and 1990s, almost all async comms were locked to a crystal
oscillator or at least a ceramic resonator, so there was not much
mismatch. Before that there were mechanical async devices (e.g.,
Teletypes), and even some electronic devices that used poor
timebases such as RC oscillators (e.g., the early PDP-11/05). It
was considered acceptable to have a timebase error at each end of
more than +/- 1%. By the end of a 10-bit or 11-bit character, the
cumulative timebase error is significant. In recent years, a
growing number of async interfaces use various forms of trimmed
electronic oscillators, or even temperature-compensated RC
oscillators, so non-trivial rate mismatches are becoming more
common again.
Yup. In my line of work, most serial comms are running off the clock
of the host micro, so rate mismatches aren't super uncommon. Most
users shoot for just a few percent max, because you don't want to
accumulate more than half a bit time of error by the end of 10 bits
(assuming 8N1, which is what most people use in systems I work with).
If you're shaving the end of a stop bit, that tightens the
tolerance even further, but 1-2% shouldn't kill it.
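
For the curious, the arithmetic works out like this (a
back-of-the-envelope sketch in C of my own; the 9.5-bit figure
assumes 8N1 with mid-bit sampling, not anything from a datasheet):

    #include <stdio.h>

    int main(void)
    {
        /* With 8N1, the receiver's last sample lands mid-stop-bit,
         * 9.5 bit times after the start edge.  Cumulative slip must
         * stay under half a bit for that sample to land inside the
         * stop bit at all. */
        double last_sample_bits = 9.5;
        double budget_bits      = 0.5;

        double total = budget_bits / last_sample_bits;
        printf("max combined rate mismatch: %.2f%%\n", total * 100.0);
        printf("roughly per end:            %.2f%%\n", total * 50.0);
        return 0;
    }

That prints about 5.3% combined, or about 2.6% per end, which is
why "just a few percent max" is the usual target.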
I've had very little luck with bit rates of any significance
running off the built-in oscillators on micros, which generally
have clock tolerances in the single-digit percent range. I
generally have OK luck with 9600 baud, but since it's a matter of
percentages rather than absolute speed, that's probably just luck.
If I really NEED it to work, a crystal (or a clock synchronous
with the other end of the line, if I were to be so lucky) is the
way to go.
The 8530's digital PLL mode is quite nice, but of course it
doesn't work for async comms. Quite handy for SDLC, though
(which, for modern devices, I've had to work on more than you
might expect recently).
2) The receiving UART typically oversamples the receive data
signal at 16x the receive bit rate. The receiver treats the first
sample at which it detects a space (vs. mark) signal as the
leading edge of the start bit, and samples the bits at 8 + 16n
clock cycles thereafter. That introduces a 1/16 bit time
uncertainty in the timing of the samples, though that is
non-cumulative and thus usually of little concern.
Indeed, though it factors into the overall margin for byte time.
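
To make that concrete, here's roughly how I'd sketch the 16x
sample timing in C (a toy model of my own, not any particular
UART's internals; the state machine and struct are invented):

    #include <stdint.h>
    #include <stdbool.h>

    enum rx_state { HUNT, START, DATA, STOP };

    struct uart_rx {
        enum rx_state state;
        uint8_t tick;    /* 16x ticks since last sample point  */
        uint8_t bit;     /* data bit index, 0..7               */
        uint8_t shift;   /* assembled byte, LSB first          */
        bool    ready;   /* full byte (with good stop) seen    */
        uint8_t data;
        bool    framing_error;
    };

    /* Call once per 16x-baud tick with the current line level
     * (true = mark/idle, false = space). */
    void uart_rx_tick(struct uart_rx *rx, bool line)
    {
        switch (rx->state) {
        case HUNT:
            if (!line) {             /* first space = start edge */
                rx->state = START;
                rx->tick = 0;
            }
            break;
        case START:
            if (++rx->tick == 8) {   /* middle of the start bit  */
                if (line) {          /* glitch, not a start bit  */
                    rx->state = HUNT;
                } else {
                    rx->state = DATA;
                    rx->tick = 0;
                    rx->bit = 0;
                    rx->shift = 0;
                }
            }
            break;
        case DATA:
            if (++rx->tick == 16) {  /* 8 + 16n after the edge   */
                rx->tick = 0;
                rx->shift |= (line ? 1 : 0) << rx->bit;
                if (++rx->bit == 8)
                    rx->state = STOP;
            }
            break;
        case STOP:
            if (++rx->tick == 16) {  /* middle of the stop bit   */
                if (line) {
                    rx->data = rx->shift;
                    rx->ready = true;
                } else {
                    rx->framing_error = true;
                }
                rx->state = HUNT;    /* resume hunting at once; a
                                        new start edge can land in
                                        the back half of this stop
                                        bit */
                rx->tick = 0;
            }
            break;
        }
    }

The 1/16-bit slop Eric mentions shows up here as the gap between
where the edge really is and the tick on which HUNT catches it.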
I do check for a stop bit in order to detect framing errors, but
slicing off the last eighth of the stop bit would probably go
entirely unnoticed by any of my implementations, which seem to
follow most "best practices" as far as efficiency goes.
That's right. Normal UART receivers, once they've sampled the putative
middle of the stop bit, will look for a leading edge of a start bit at
every subsequent sample time, rather than delaying until the expected end
of that stop bit.
Exactly. The stop bit is just the idle state of the line, and if
all is going right, it's indistinguishable from an idle line. So
as soon as I see the stop bit(s) (sometimes 2 are required), I
start hunting for a start bit at the high sample rate, which means
it could be as soon as the middle of the last stop bit.
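
Building on the sketch above, here's a hypothetical driver (my
own numbers, nothing from real hardware) showing that early
re-arm in action: cut the stop bit to 12 of 16 samples and both
bytes still decode cleanly, since the stop-bit sample lands
before the cut and hunting resumes right away.

    /* Needs the uart_rx sketch above, plus <stdio.h>. */
    #include <stdio.h>

    static void send_byte(struct uart_rx *rx, uint8_t b,
                          int stop_samples)
    {
        for (int t = 0; t < 16; t++)
            uart_rx_tick(rx, false);            /* start (space)  */
        for (int i = 0; i < 8; i++)
            for (int t = 0; t < 16; t++)
                uart_rx_tick(rx, (b >> i) & 1); /* data, LSB first */
        for (int t = 0; t < stop_samples; t++)
            uart_rx_tick(rx, true);             /* shortened stop  */
    }

    int main(void)
    {
        struct uart_rx rx = { .state = HUNT };
        uint8_t msg[2] = { 0x55, 0xA3 };

        for (int i = 0; i < 2; i++) {
            send_byte(&rx, msg[i], 12);  /* stop cut to 12/16 */
            if (rx.ready)
                printf("got 0x%02X\n", rx.data);
            rx.ready = false;
        }
        printf("framing error: %s\n",
               rx.framing_error ? "yes" : "no");
        return 0;
    }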
Unless one were being particularly pedantic about framing, I
can't imagine why someone would design the chip like that, unless
they sampled at the end of the bit time (which seems like a bad
idea). I have no idea what the internal construction of the chip
is, of course, so I may be entirely off base in guessing how it's
doing the sampling. It just seems like a waste of silicon to be
doing that check.
- Dave