On 01/24/2014 12:36 PM, John Wilson wrote:
On Fri, Jan 24, 2014 at 11:03:36AM -0500, Paul Koning wrote:
I suspect this was done to provide ones density for the receiver to lock onto. The standard way to achieve that in sync communication is with non-zero idle characters (DDCMP and Bisync) or bit stuffing (SDLC, HDLC).
Almost definitely showing my own ignorance (I'm very new to serial comms), but my impression is that in the old days, synchronous ports *always* used a modem-supplied (etc.) external clock signal, and it's only newer fancy-pants ports like the Zilog Z85(2)30 that try to be cute about using a PLL to derive a clock from transitions in the bit stream.
It was a matter of necessity. It's very hard to generate long bit streams without clock regeneration or resynchronization. It only takes a minute with a calculator to see that if the two clocks, both crystal controlled, are off by +50 ppm and -50 ppm (decent crystals), over time they will drift apart.
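To make that arithmetic concrete, here is a minimal sketch in C. The +/-50 ppm figures come from above; the 9600 bit/s line rate and the half-bit-slip criterion are assumptions added for illustration only.

#include <stdio.h>

/* Two free-running crystals at +50 ppm and -50 ppm diverge at 100 ppm
 * relative.  Without resynchronization, the receiver's sampling point
 * slips by half a bit after 0.5 / 100e-6 = 5000 bits. */
int main(void)
{
    double rel_error = 100e-6;             /* +50 ppm vs. -50 ppm */
    double bits_to_half_bit = 0.5 / rel_error;
    double bit_rate = 9600.0;              /* assumed line rate   */

    printf("Sampling point slips half a bit after %.0f bits\n",
           bits_to_half_bit);
    printf("At %.0f bit/s that is only %.2f seconds of data\n",
           bit_rate, bits_to_half_bit / bit_rate);
    return 0;
}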
So my understanding (correct me!) was that sync characters exist for, well, sync (i.e. byte framing -- getting to a character boundary for sure, when it's possible the receiver wasn't listening or just started up), and idle characters are to maintain that byte framing when you don't have anything else to send, i.e. TX underrun (because if you just sent a variable amount of "mark" like an async line, you'd lose your byte framing when you started sending valid data again).
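That hunting-for-a-character-boundary step is easy to picture in code. The following is an illustrative sketch only (the SYN value 0x16 is the ASCII SYN used by Bisync; the struct and function names are invented for the example):

#include <stdint.h>
#include <stdio.h>

#define SYN 0x16  /* ASCII SYN, the Bisync sync character */

/* "Sync search" sketch: shift incoming bits (LSB first) through an
 * 8-bit register until it matches SYN, then count off 8 bits per
 * character from that boundary. */
struct rx {
    uint8_t shift;     /* last 8 bits received                     */
    int     in_sync;   /* nonzero once byte framing is established */
    int     bit_count; /* bits accumulated since framing was found */
};

static void rx_bit(struct rx *r, int bit)
{
    r->shift = (uint8_t)((r->shift >> 1) | (bit ? 0x80 : 0x00));

    if (!r->in_sync) {
        if (r->shift == SYN) {        /* hunting: found a character boundary */
            r->in_sync = 1;
            r->bit_count = 0;
        }
    } else if (++r->bit_count == 8) { /* framed: every 8 bits is one byte */
        printf("byte: 0x%02X\n", r->shift);
        r->bit_count = 0;
    }
}

int main(void)
{
    /* Feed two SYNs and then a data byte 0x41 ('A'), all LSB first. */
    uint8_t stream[] = { SYN, SYN, 0x41 };
    struct rx r = { 0, 0, 0 };
    for (int i = 0; i < 3; i++)
        for (int b = 0; b < 8; b++)
            rx_bit(&r, (stream[i] >> b) & 1);
    return 0;
}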
Several functions, including padding between packets, as almost all sync protocols are packet oriented rather than character oriented.
So the sync and fill chars serve to allow time for PLLs to sync, or to fill dead time in transmission between packets (and keep the PLL locked). In some cases the fill pattern (byte?) was used when the CPU could not get to the task fast enough.
And bit-stuffing in SDLC is just there to make sure that the 01111110 flag character that begins/ends packets can't possibly occur anywhere except in those places (this is a huge bug in DDCMP -- if a header gets garbled, the receiver can be fooled by a fake packet contained entirely within the data field, which is delimited only by a byte count in the header).
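For the curious, here is a rough sketch of that zero-insertion rule on the transmit side. It is not any particular chip's implementation; emit_bit() just prints where a real driver would clock the bit out the serial pin:

#include <stdio.h>

/* HDLC/SDLC zero insertion: after five consecutive 1 bits of frame
 * data, a 0 is stuffed, so the 01111110 flag can never appear inside
 * the frame. */
static int ones_run = 0;

static void emit_bit(int bit)
{
    putchar(bit ? '1' : '0');
}

static void tx_stuffed_bit(int bit)
{
    emit_bit(bit);
    if (bit) {
        if (++ones_run == 5) {  /* five 1s in a row: stuff a 0 */
            emit_bit(0);
            ones_run = 0;
        }
    } else {
        ones_run = 0;
    }
}

int main(void)
{
    /* A data byte of all 1s (0xFF) comes out with a stuffed 0 after
     * the fifth 1, so it can't be confused with the flag. */
    for (int i = 0; i < 8; i++)
        tx_stuffed_bit(1);
    putchar('\n');
    return 0;
}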
It's important to be aware of how the standards were evolving, and what they were then as compared to even a few months or a year later.
But through all of this there's a 1x clock coming in on the RxC/TxC pins on the DB25, either from the modem, or on a local connection, from a 1x BRG in one of the ports that has been strapped to drive it onto the connector.

ANYWAY so the point of the SYN character is not to have a certain # of guaranteed transitions, but to be intentionally lopsided so that no rotation of it can be mistaken for valid (e.g. 55h would be useless as a sync char), so that a receiver in "sync search" mode can click into position and be ready for the LSB of the real data.
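That lopsidedness is easy to check mechanically: in a continuous run of one repeated character, an out-of-position 8-bit window is just a rotation of that character. The small sketch below (function name invented for the example) counts the false lock positions for ASCII SYN (0x16) versus 55h:

#include <stdio.h>
#include <stdint.h>

/* Count how many of a byte's seven non-trivial rotations equal the byte
 * itself.  0x16 has none; 0x55 repeats after every 2-bit shift, so a
 * receiver hunting on it could lock to the wrong boundary. */
static int rotations_that_match(uint8_t c)
{
    int matches = 0;
    for (int r = 1; r < 8; r++) {
        uint8_t rot = (uint8_t)((c << r) | (c >> (8 - r)));
        if (rot == c)
            matches++;
    }
    return matches;
}

int main(void)
{
    printf("0x16: %d false lock positions\n", rotations_that_match(0x16));
    printf("0x55: %d false lock positions\n", rotations_that_match(0x55));
    return 0;
}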
OK rip me to pieces.
That last paragraph gets at the standards and how the protocols evolved. It drove me and others nuts at NEC back then, as what the part could do vs. the moving target of the standards was very fluid.
One of the big things emerging, though far from there yet, was that systems were only starting to talk to other systems, and there were as many ways to do it as Carter had liver pills.
Allison