On Feb 22, 2019, at 6:21 PM, Grant Taylor via cctalk
<cctalk at classiccmp.org> wrote:
On 02/21/2019 07:43 AM, Paul Koning via cctalk wrote:
...
The mapping from Ethernet to 802.2 SNAP is
trivial, but yes, you do need that mapping.
I'm still pondering how trivial the mapping between Ethernet II and 802.2 SNAP
is. I guess as long as you translate the Ethernet frame type to the SNAP protocol ID (and
vice versa) and the Ethernet frame payload fits in the upper-layer data area, things
would be okay. There might be some payloads that fit in an Ethernet II frame but
wouldn't fit in an 802.2 SNAP frame on Ethernet. Fortunately, Token Ring had bigger
MTUs.
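For concreteness, here is a rough sketch of that mapping in Python (the offsets are the
standard Ethernet II and 802.2/SNAP ones; the function names are just made up for
illustration):

    import struct

    LLC_SNAP = bytes([0xAA, 0xAA, 0x03])    # DSAP=AA, SSAP=AA, UI control field
    OUI_ENCAP = bytes([0x00, 0x00, 0x00])   # RFC 1042-style OUI used to carry EtherTypes

    def ethernet_ii_to_snap(frame):
        # frame = dst(6) + src(6) + EtherType(2) + payload
        dst, src = frame[0:6], frame[6:12]
        ethertype, payload = frame[12:14], frame[14:]
        llc_payload = LLC_SNAP + OUI_ENCAP + ethertype + payload
        length = struct.pack("!H", len(llc_payload))   # 802.3 length field
        return dst + src + length + llc_payload

    def snap_to_ethernet_ii(frame):
        # assumes DSAP/SSAP = AA/AA and OUI = 00-00-00
        dst, src = frame[0:6], frame[6:12]
        ethertype, payload = frame[20:22], frame[22:]  # PID follows the 3-byte LLC + 3-byte OUI
        return dst + src + ethertype + payload

The SNAP header costs 8 bytes, which is exactly why a full 1500-byte Ethernet II payload
won't fit in an 802.2 SNAP frame on the same Ethernet (only 1492 bytes of room), and why
the bigger Token Ring / FDDI MTUs make the problem go away.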
You would probably only be going between Ethernet II and 802.2 SNAP if you were going
from Ethernet (802.3) to a different 802 network, which means that other stations on that
non-Ethernet 802 network would already understand the same protocol(s) in 802.2 SNAP
frames.
SNAP as a way of encoding bridged Ethernet II frames applies only to non-Ethernet LANs,
all of which have larger MTUs.
The idea of using a mixture of Ethernet II and 802.2
SNAP on an Ethernet seems odd. (en0 vs et0 interfaces in AIX come to mind.)
SNAP covers more than encoding bridged Ethernet II. It was intended as a way to carry
protocols in 802 format for which you couldn't get an SSAP/DSAP code point (such as
private protocols). DEC did this in various places; it's perfectly straightforward.
Perhaps some implementations make it hard to support both simultaneously, but there is no
technical reason to make such a mistake.
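The two framings can coexist on one wire because the type/length field disambiguates them
on receive: values of 0x0600 and above are EtherTypes, values of 1500 and below are 802.3
lengths. Something like this sketch (illustrative only):

    ETHERTYPE_MIN = 0x0600   # type/length values >= 0x0600 mean Ethernet II

    def classify(frame):
        type_or_len = int.from_bytes(frame[12:14], "big")
        if type_or_len >= ETHERTYPE_MIN:
            return ("ethernet_ii", type_or_len)        # EtherType, payload follows immediately
        dsap, ssap, ctrl = frame[14], frame[15], frame[16]
        if dsap == 0xAA and ssap == 0xAA and ctrl == 0x03:
            oui = frame[17:20]
            pid = int.from_bytes(frame[20:22], "big")
            return ("802.2_snap", oui, pid)            # SNAP: demux on OUI + protocol ID
        return ("802.2_llc", dsap, ssap)               # plain LLC: demux on the SAPs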
Broadcast is just a special case of multicast.
...
Was one functional address used in lieu of the broadcast? Meaning that all
stations would receive it, look at it, and decide whether they needed to act on it or not,
much like traditional 10Base5 / 10Base2 / hubs?
The pretense that broadcast is different from multicast is just a confusion. The
description says that it is used for traffic that every station wants to get. If you take
that literally, no protocol should use broadcast, because there isn't any protocol
that every station on every LAN wants to see. For example, ARP should have used multicast
for the same reason DECnet does: it is traffic that is interesting to stations which speak
that protocol, and only to those stations.
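The hardware doesn't really distinguish them either: group (multicast) addresses are
marked by the low-order bit of the first address octet, and broadcast is just the all-ones
group address. A quick illustration (the DECnet all-routers address here is from memory):

    def is_multicast(mac):
        return bool(mac[0] & 0x01)      # I/G bit: 1 = group address

    def is_broadcast(mac):
        return mac == b"\xff" * 6       # all-ones is just a particular group address

    ALL_DECNET_ROUTERS = bytes.fromhex("AB0000030000")
    assert is_multicast(ALL_DECNET_ROUTERS) and not is_broadcast(ALL_DECNET_ROUTERS)
    assert is_multicast(b"\xff" * 6)    # broadcast passes the multicast test too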
The ring
passes through all stations of a LAN segment, just as the Ethernet bus (in the original
version) passes through all stations of the segment.
I think there's a minute difference. I don't know if it's germane or not.
To me, Ethernet (10Base5 / 10Base2 / hubs) is functionally a broadcast medium that every
station hears without any action on the other stations' part. Conversely, on Token Ring each
frame is actively passed from station to station. It's also my
understanding that a station will not pass the frame on to the next station in the ring
if the incoming frame was destined for the local station. Thus, not all stations would
necessarily hear every frame like they would on Ethernet.
I think that's right. For 802.5, that is. In FDDI the frames are
"stripped" by the sending station, which allows things like network monitors in
promiscuous mode to work just like on Ethernet.
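A toy model of source stripping, just to show why a promiscuous monitor on FDDI sees
everything (station names are made up):

    def circulate(ring, sender):
        # FDDI-style source stripping: the frame travels all the way around
        # the ring and the *sender* removes it, so every other station --
        # including a promiscuous monitor -- gets a chance to copy it.
        copied_by = []
        i = (ring.index(sender) + 1) % len(ring)
        while ring[i] != sender:
            copied_by.append(ring[i])   # downstream station repeats (and may copy) the frame
            i = (i + 1) % len(ring)
        return copied_by                # back at the sender, which strips the frame

    stations = ["A", "B", "C", "monitor"]
    assert circulate(stations, "A") == ["B", "C", "monitor"]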
...
One example of this is the behavior under high
load. At one time, Token Ring marketeers claimed it was better because it wouldn't
"collapse" under load "like Ethernet". That is actually false, but in
any event,
Please elaborate on why it's false.
The claim of collapse under load -- meaning throughput goes down beyond a certain load
level -- is valid for ALOHA and similar networks, but not on Ethernet because it uses
carrier sense and collision detect. Under overload it runs at close to full utilization.
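The curve that really does collapse is the ALOHA one. Back of the envelope, using the
classic pure-ALOHA result S = G * e^(-2G):

    import math

    def pure_aloha_throughput(G):
        return G * math.exp(-2 * G)     # classic pure-ALOHA result

    for G in (0.25, 0.5, 1.0, 2.0, 4.0):
        print(f"offered load {G:4.2f} -> throughput {pure_aloha_throughput(G):.3f}")
    # Peaks at about 0.184 near G = 0.5 and then falls toward zero as load
    # grows.  CSMA/CD doesn't behave this way: carrier sense keeps stations
    # from transmitting into an ongoing frame, and collision detect cuts the
    # collisions that do happen short, so throughput stays near capacity.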
802.5 worst-case
latency is incredibly large for large rings.
Ya, I can see that.
802.4 and FDDI with their "timed token
protocol" have far lower worst-case latency.
I effectively don't know anything about 802.4 or FDDI.
I'm trying to reconstruct what exactly the difference is, but I never knew 802.5 well
and my FDDI brain cells are all from around 1986...
paul