On 01/24/2014 10:30 AM, David Riley wrote:
On Jan 23, 2014, at 11:41 PM, Eric Smith <spacewar at gmail.com> wrote:
> On Jan 23, 2014 8:54 PM, "David Riley" <fraveydank at gmail.com>
wrote:
>> When I've done UARTs in FPGAs, I typically only do the
>> sampling in the middle of the bit period.
> I'm sure you are aware of it, but for the benefit of others, it should be
> noted that this is the middle of the bit period from the receiving UART's
> perspective, and may not match the transmitting UART's middle-of-stop-bit
> time for two reasons:
>
> 1) There may be a timebase mismatch between the two ends. In the 1980s and
> 1990s, almost all async comms were locked to a crystal oscillator or at
> least a ceramic resonator, so there was not much mismatch. Before that
> there were mechanical async devices (e.g., Teletypes), and even some
> electronic devices that used poor timebases such as RC oscillators (e.g.,
> the early PDP-11/05). It was considered acceptable to have a timebase error
> at each end of more than +/- 1%. By the end of a 10-bit or 11-bit character,
> the cumulative timebase error is significant. In recent years there are an
> increasing number of async interfaces using various forms of trimmed
> electronic oscillators, or even temperature-compensated RC oscillators, so
> non-trivial rate mismatches are becoming more common again.
Actually, the TTY was locked to the power line, and that held to at least
.02% over time; because of inertia, any error had only a long-term effect.
Yup. In my line of work, most serial comms are running off the clock
of the host micro, so rate mismatches aren't super uncommon. Most
users shoot for just a few percent max, because you don't want to
accumulate more than half a bit time of error by the end of 10 bits
(assuming 8N1, which is what most people use in systems I work with).
If you're shaving the end of a stop bit, you bring that tolerance in
even lower, but 1-2% shouldn't kill it.
Depends on whether you're shaving the sent stop bit or the received
version of it.
All parts sampled in the middle of the bit, as determined by the start bit.
Usually the clock was 16x the bit rate, so in 10 bit times (one transmitted
character) you had to be well off to miss by half a bit.
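To put a number on that half-bit budget: with 8N1 and mid-bit sampling, the
stop bit gets sampled roughly 9.5 bit times after the start edge, so the
combined transmit-plus-receive clock error has to stay under about 0.5/9.5.
A quick back-of-the-envelope check (just a sketch of the arithmetic, assuming
the 8N1 framing and mid-bit sampling described above):

    #include <stdio.h>

    /* Rough clock-mismatch budget for 8N1 with mid-bit sampling.  The stop
     * bit is sampled about 9.5 bit times after the start edge, and that
     * sample has to land within half a bit of the true center. */
    int main(void)
    {
        const double last_sample_bits = 9.5; /* start + 8 data + half the stop */
        const double budget_bits = 0.5;      /* allowable slip at that sample  */
        double total_mismatch = budget_bits / last_sample_bits;

        printf("max combined clock mismatch: %.1f%%\n", total_mismatch * 100.0);
        printf("per end, split evenly:       %.1f%%\n", total_mismatch * 50.0);
        return 0;
    }

That comes out to a little over 5% total, or roughly 2.5% per end if the
error is split evenly, which is why a couple of percent per side is about
the practical limit.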
I've had very little luck with bitrates of any significance running
off the built-in oscillators on micros, which generally have clock
tolerances in the significant single digits. I generally have OK
luck with 9600 baud, but since it's a matter of percentages more
than absolute speed, that's probably just luck. If I really NEED
it to work, a crystal (or a clock synchronous with the other end
of the line, if I were to be so lucky) is the way to go.
I ran up to 115200 with no issues, other than whether the system could keep
up with a character rate of 11520 bytes/sec. Most could, but not all.
The 8530's digital PLL mode is quite nice, but of course it doesn't
work for async comms. Quite handy for SDLC, though (on which I've
actually had to work more than you might expect for modern devices
recently).
Note that the 8530 was the SIO with DPLL and DMA, so if you had those things
in the system you had an SIO. The 8530 could go faster, and the DMA lightened
the processor load some, though the CPU still had to manage buffers and keep
up with status and errors.
> 2) The receiving UART typically oversamples the receive data signal at
> 16x the receive bit rate. The receiver uses the first sample at which it
> detects a space (vs. mark) signal as the leading edge of the start bit,
> and samples the bits at 8 + 16n clock cycles thereafter. That introduces
> a 1/16 bit time uncertainty in the timing of the samples, though that is
> non-cumulative and thus usually of little concern.
Indeed, though it factors into the overall margin for byte time.
The total error over a 10-bit character has to be less than 8 clocks out of
160, i.e., half a bit, which works out to 5 percent shared between both ends.
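In code terms that sampling schedule is just a fixed offset from the tick on
which the start edge was first seen; a minimal sketch (the tick counter and
bit numbering here are my own convention, not any particular UART's):

    /* Sample points for a 16x-oversampled receiver.  Tick 0 is the 16x clock
     * on which the space (start edge) was first detected.  Bit n (0 = start,
     * 1..8 = data, 9 = stop for 8N1) is sampled at tick 8 + 16*n, nominally
     * mid-bit.  Because the edge is only located to the nearest 16x clock,
     * each sample can sit up to 1/16 of a bit away from the true center,
     * and that error does not accumulate. */
    static unsigned sample_tick(unsigned bit_index)
    {
        return 8u + 16u * bit_index;
    }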
I do check for a stop bit in order to detect framing errors, but slicing
off the last eighth of the stop bit would probably go entirely unnoticed
by any of my implementations, which seem to follow most "best practices"
as far as efficiency goes.
> That's right. Normal UART receivers, once they've sampled the putative
> middle of the stop bit, will look for a leading edge of a start bit at
> every subsequent sample time, rather than delaying until the expected
> end of that stop bit.
Exactly. The stop bit is just the idle state of the line, and if all is
going right, it's indistinguishable from an idle line. So as soon as I see
the stop bit(s) (sometimes 2 are required), I start hunting for a start bit
at the high sample rate, which means it could be as soon as the middle of
the last stop bit.
The edge of the start bit starts the half-bit counter; then every bit center
is expected to be 16 clocks (one bit time) later.
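Putting those pieces together, a minimal receiver along those lines might
look like the sketch below (the names, including the byte_received callback,
are assumptions for illustration, not any specific chip's logic): count 8
ticks to the middle of the start bit, 16 more to each data-bit center, check
the line at the middle of the stop bit for a framing error, and then go
straight back to hunting for the next start edge, which is exactly why
shaving the tail of the stop bit goes unnoticed.

    #include <stdint.h>
    #include <stdbool.h>

    /* Minimal 8N1 receiver clocked at 16x the bit rate (a sketch, not any
     * specific chip).  Call uart_rx_tick() once per 16x clock with the
     * current RX line state (true = mark/idle, false = space). */
    enum rx_state { RX_IDLE, RX_BITS };

    static enum rx_state state = RX_IDLE;
    static unsigned tick;       /* 16x ticks since the detected start edge */
    static unsigned bit_index;  /* 0 = start, 1..8 = data, 9 = stop        */
    static uint8_t  shifter;

    /* Hypothetical callback; framing_ok is false if the stop bit read as
     * space (a framing error). */
    extern void byte_received(uint8_t byte, bool framing_ok);

    void uart_rx_tick(bool rx_line)
    {
        if (state == RX_IDLE) {
            if (!rx_line) {          /* first space sample = start edge */
                state = RX_BITS;
                tick = 0;
                bit_index = 0;
                shifter = 0;
            }
            return;
        }

        tick++;
        if (tick != 8u + 16u * bit_index)   /* only act at bit centers */
            return;

        if (bit_index == 0) {
            /* middle of the start bit; a noise check could go here
             * (one option is sketched further below) */
        } else if (bit_index <= 8) {
            shifter >>= 1;                  /* LSB arrives first */
            if (rx_line)
                shifter |= 0x80;
        } else {
            /* middle of the stop bit: report the byte, then immediately
             * resume hunting for the next start edge; the remainder of
             * the stop bit is never examined */
            byte_received(shifter, rx_line);
            state = RX_IDLE;
            return;
        }
        bit_index++;
    }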
Unless one were being particularly pedantic about framing, I can't
imagine why someone would design the chip like that, unless they
sampled at the end of the bit time (which seems like a bad idea).
I have no idea what the internal construction of the chip is, of
course, so I may be entirely off base in guessing how it's doing
the sampling. It just seems it's probably a waste of silicon to
be doing that check.
What if they sampled the falling edge and then the center, to ensure that
the start bit wasn't a false transition (noise)? Same with stop bits: if you
expect 2 stop bits and you see a transition before the end of the second,
there is an error there that might be worth flagging in some systems.
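For illustration, that falling-edge-plus-center qualification would drop
into the middle-of-start-bit branch of the receiver sketch above (again just
one plausible way to do it, not a description of any specific chip):

    /* At tick 8 (middle of the supposed start bit), confirm the line is
     * still at space.  A real start bit still reads low here; a noise
     * spike has usually returned to mark, so the receiver quietly goes
     * back to hunting with no character and no error reported. */
    if (bit_index == 0) {
        if (rx_line) {              /* line back at mark: false start */
            state = RX_IDLE;
            return;
        }
    }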
Allison