On Jan 24, 2014, at 12:36 PM, John Wilson <wilson at dbit.com> wrote:

On Fri, Jan 24, 2014 at 11:03:36AM -0500, Paul Koning wrote:
I suspect this was done to provide ones density for the receiver to lock
onto. The standard way to achieve that in sync communication is with non-zero
idle characters (DDCMP and Bisync) or bit stuffing (SDLC, HDLC).
Almost definitely
showing my own ignorance (I'm very new to serial comms),
but my impression is that in the old days, synchronous ports *always* used
a modem-supplied (etc.) external clock signal, and it's only newer fancy-pants
ports like the Zilog Z85(2)30 that try to be cute about using a PLL to derive
a clock from transitions in the bit stream.
Yes, the modem would supply the clock,
but the modem has to get that clock from somewhere. It might be from the modulation, but
for simple modulation schemes about all you have is the bit transitions of the data
stream.
An extreme example is the Bell 202 modem: 1200 baud one way (or duplex only if you have
four wires). It uses simple FSK modulation, 1200 Hz and 2200 Hz to encode the bit values,
if I remember right. No clock. To send sync data over such a modem, you need to build
your own PLL. I've done exactly that, for amateur packet radio (the original AX.25
connection-oriented flavor, 1200 baud AFSK on a 2 meter FM radio).
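The "build your own PLL" step can be sketched in software. This is a minimal illustration (not Paul's actual packet-radio code) of the classic nudge-style digital PLL: assume the demodulator gives you a bit stream oversampled 8x, and pull the sampling phase toward the data transitions so you keep sampling mid-cell even when the far end's clock drifts. All names and the 8x rate are my assumptions for the sketch.

```python
OVERSAMPLE = 8  # samples per bit cell (assumed oversampling rate)

def dpll_recover(samples):
    """Recover bits from an 8x-oversampled 0/1 stream, no external clock.

    A phase counter cycles 0..OVERSAMPLE-1 per bit cell; we sample the
    line mid-cell.  A data transition is expected at phase 0 (the cell
    edge); if one shows up elsewhere, nudge the phase toward the edge.
    """
    bits = []
    phase = 0
    prev = samples[0]
    for s in samples:
        if s != prev and phase != 0:
            # Transition off the expected cell edge: adjust our phase.
            if phase < OVERSAMPLE // 2:
                phase -= 1   # transition came early; slow down
            else:
                phase += 1   # transition came late; speed up
        prev = s
        phase += 1
        if phase >= OVERSAMPLE:
            phase = 0
        if phase == OVERSAMPLE // 2:
            bits.append(s)   # sample in the middle of the bit cell
    return bits
```

Note that this only works if the data has enough transitions to keep nudging against, which is exactly why the transition-density tricks discussed above (non-zero idles, bit stuffing) matter on such a link.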
So my understanding (correct me!) was that sync
characters exist for, well,
sync (i.e. byte framing -- getting to a character boundary for sure, when
it's possible the receiver wasn't listening or just started up), and idle
characters are to maintain that byte framing when you don't have anything
else to send, i.e. TX underrun (because if you just sent a variable amount
of "mark" like an async line, you'd lose your byte framing when you
started
sending valid data again).
Sync characters certainly are for byte framing, but
(again, unless the modulation itself provides bit clocking) they may also be needed to
help with bit framing.
And bit-stuffing in SDLC is just there to make
sure that the 01111110 flag
character that begins/ends packets can't possibly occur anywhere except in
those places (this is a huge bug in DDCMP -- if a header gets garbled, the
receiver can be fooled by a fake packet contained entirely within the data
field, which is delimited only by a byte count in the header).
DDCMP doesn't have
that bug, because DDCMP headers have CRC. The only thing that comes close is that you can
get false framing if you construct a fake whole packet inside the payload, but ONLY if the
receiver has lost packet framing and is in frame search mode.
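Paul's point can be made concrete with a toy resync loop: a DDCMP-style receiver in frame search doesn't believe a SYN byte alone, it also requires the header CRC to check before declaring itself framed. The CRC-16 below (x^16 + x^15 + x^2 + 1, reflected, init 0) is the polynomial DDCMP uses, and 0x96 is the DDCMP SYN; the 6-byte "header" layout is simplified for illustration.

```python
def crc16(data):
    """CRC-16 (poly x^16 + x^15 + x^2 + 1), reflected form, init 0."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

SYN = 0x96  # the DDCMP sync character

def find_frame(stream):
    """Scan for SYN, then insist on a valid header CRC before locking on."""
    for i in range(len(stream) - 8):
        if stream[i] == SYN:
            header = stream[i + 1:i + 7]                   # 6 header bytes
            rx_crc = stream[i + 7] | (stream[i + 8] << 8)  # CRC, LSB first
            if crc16(header) == rx_crc:
                return i   # framed: SYN plus a CRC-verified header
    return None
```

A fake "packet" in the payload only fools this search if its bytes happen to form a SYN followed by a header whose CRC checks, and only while the receiver is hunting in the first place, which is the narrow case Paul describes.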
BTW, the SDLC flag is not a character, though it is often wrongly described that way. It's a
pattern on the encoded bit stream. It ties to the original encoding for SDLC, which is
NRZI: invert the current polarity if sending a zero bit, leave it alone if sending a 1
bit. Bit stuffing was done to ensure transition density: at least one per 6 bit clocks.
So the flag is a code stream pattern that doesn't appear in data: transition, 6 clocks
without transition, transition.
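The encoding Paul describes is small enough to sketch directly: NRZI (a 0 bit toggles the line, a 1 bit holds it) plus a 0 stuffed after every five consecutive 1s, so that only the unstuffed flag 01111110 can hold the line still for six bit clocks. This is an illustration of the rule, not any particular chip's implementation.

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five 1s (SDLC/HDLC zero stuffing)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b else 0
        if run == 5:
            out.append(0)  # forced transition: breaks up long runs of 1s
            run = 0
    return out

def nrzi(bits, level=0):
    """NRZI as described above: a 0 bit toggles the line, a 1 bit holds it."""
    out = []
    for b in bits:
        if b == 0:
            level ^= 1
        out.append(level)
    return out

FLAG = [0, 1, 1, 1, 1, 1, 1, 0]  # 01111110 -- sent unstuffed, by design
```

Run the flag through nrzi() and you get exactly the line pattern in the text: a transition, six clocks with no transition, then a transition; stuffed data can never sit still that long.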
SDLC was a BOP (bit-oriented protocol), and Bisync and others tended to
be COPs (character-oriented protocols).
That's at the physical level; then there were other layers of what to do...
By the way, SDLC/HDLC allow frames that are any number
of bits in length (not necessarily a multiple of 8), and USARTs typically support that.
(The NEC chip where this thread started does, for example.) I've never run into any
network protocols that used this fact, though. Perhaps CDC did, to send data encoded in
6-bit characters?
But through all of this there's a 1x clock
coming in the RxC/TxC pins on
the DB25, either from the modem, or on a local connection, from a 1x BRG
in one of the ports that has been strapped to drive it onto the connector.
ANYWAY so the point of the SYN character is not to have a certain # of
guaranteed transitions, but to be intentionally lopsided so that no rotation
of it can be mistaken for valid (e.g. 55h would be useless as a sync char),
so that a receiver in "sync search" mode can click into position and be
ready for the LSB of the real data.
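John's "no rotation" point is easy to check by machine: in a stream of repeated SYNs, an 8-bit window at a wrong bit offset sees a cyclic rotation of the character, so a good sync character has all eight rotations distinct. A quick sketch:

```python
def rotations(byte):
    """The set of all 8 cyclic rotations of an 8-bit value."""
    return {((byte << k) | (byte >> (8 - k))) & 0xFF for k in range(8)}
```

DDCMP's SYN, 0x96 (10010110), yields eight distinct rotations, so every misalignment looks different from true sync; 0x55 (01010101) collapses to just two patterns, which is exactly why it would be useless as a sync character.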
That "no rotation" property is true for
Bisync, which relies purely on the sync character for framing as far as I know. But for
DDCMP, it's helpful but by no means required; the main framing mechanism is sync character
search combined with header CRC validation.
The "no rotation" property does come back in later protocols; the JK character pair in
the 4b/5b code of FDDI and 100 Mb/s Ethernet has this property (indeed, it's sometimes
referred to as the "JK property"). Similarly, the start of packet character in the 8b/10b
code of Fibre Channel and 1 Gb/s Ethernet has this property. This is used in receiver
circuitry to do code word alignment: there's a shift register that accepts the incoming
bitstream and delivers it word-wide to the remaining logic so it doesn't have to run at
extreme speeds. For that alignment to be practical, it has to be very easy to recognize
the first bit of a packet, and these codes were designed to do that.
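The JK property Paul describes can be verified by brute force. Below is the standard 4b/5b data code-group table (as I recall it, so treat the table itself as an assumption), plus the J (11000) and K (10001) groups; since a 10-bit window spans at most three consecutive 5-bit groups, checking every triple of data/idle groups at every offset covers all cases.

```python
# Standard FDDI / 100BASE-X 4b/5b data code groups for nibbles 0x0..0xF.
DATA = ["11110", "01001", "10100", "10101", "01010", "01011", "01110",
        "01111", "10010", "10011", "10110", "10111", "11010", "11011",
        "11100", "11101"]
IDLE = "11111"
JK = "11000" + "10001"  # J, then K: the start-of-stream delimiter

def jk_is_unique():
    """True if the 10-bit JK pattern occurs at no bit offset inside any
    run of three data/idle code groups."""
    symbols = DATA + [IDLE]
    return not any(JK in a + b + c
                   for a in symbols for b in symbols for c in symbols)
```

If the table is right, jk_is_unique() comes back True: no data or idle stream can counterfeit JK at any bit position, which is what lets the alignment shift register lock on it cold.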
paul
Therein lies the key: there were codes, physical protocols, and packet
level protocols in the sync world.
Allison