On Feb 23, 2019, at 3:01 AM, Grant Taylor via cctalk
<cctalk at classiccmp.org> wrote:
> On 2/22/19 6:15 PM, Paul Koning via cctalk wrote:
>> SNAP as a way of encoding bridged Ethernet II frames applies only to non-Ethernet
>> LANs, all of which have larger MTUs.
> Nope. I'm quite sure that NetBIOS used SNAP on Ethernet. I'm betting that 3174's
> Ethernet interfaces also used DLC / LLC2 via SNAP. IPX could run over SNAP on
> Ethernet if you wanted to.
Yes, but that's not what I was trying to say, apparently not very clearly.

There is a translation of Ethernet II frames into SNAP, using an OUI of 00-00-00 or
00-00-F8 followed by the Ethertype. Those particular SNAP values are meant to be used
only on LANs other than Ethernet, and bridges connecting those LANs to Ethernet look
for those SNAP values and convert the frames to the corresponding Ethernet II format.
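
To make that concrete, here's a rough sketch in Python (the helper name is mine, not
from any real bridge code):

    import struct

    LLC_SNAP = bytes([0xAA, 0xAA, 0x03])   # DSAP=AA, SSAP=AA, UI control field

    def bridged_snap(oui, ethertype):
        """LLC/SNAP header for a bridged Ethernet II frame: the 3-byte LLC
        header, a 3-byte OUI, then the original Ethertype as protocol ID."""
        assert len(oui) == 3
        return LLC_SNAP + oui + struct.pack("!H", ethertype)

    # IPv4 (Ethertype 08-00) under the usual 00-00-00 OUI:
    assert bridged_snap(b"\x00\x00\x00", 0x0800).hex() == "aaaa030000000800"
    # The 802.1H "bridge tunnel" variant substitutes OUI 00-00-F8.

A bridge forwarding onto real Ethernet looks for that prefix and emits a plain
Ethernet II frame with Ethertype 08-00 instead.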
>> SNAP covers more than encoding bridged Ethernet II. It was intended as a way to
>> carry protocols in 802 format for which you couldn't get an SSAP/DSAP code point
>> (such as private protocols). DEC did this in various places; it's perfectly
>> straightforward.
> *nod*
>> Perhaps some implementations make it hard to support both simultaneously, but there
>> is no technical reason to make such a mistake.
> I feel like putting TCP/IP in SNAP on Ethernet is a mistake in that most OSs will not
> know how to work with TCP in a SNAP frame, as they will be expecting Ethernet II
> frames.
>
> I don't know that there's a technical reason per se. I do think that there is a
> market reason.
A specific case of the general point above: on Ethernet you'd use Ethertypes 08-00 and
08-06 (IP and ARP); on non-Ethernet LANs you'd apply RFC 1042, which gives the SNAP
equivalents using the 00-00-00 prefix.
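
In code form (same caveat as the sketch above, the names are mine), a receiver on a
non-Ethernet LAN would recover the Ethertype like this:

    def ethertype_from_snap(llc):
        """Return the Ethertype if this LLC header is RFC 1042 SNAP
        (AA AA 03 plus OUI 00-00-00), else None."""
        if llc[:6] == bytes.fromhex("aaaa03000000"):
            return int.from_bytes(llc[6:8], "big")
        return None

    assert ethertype_from_snap(bytes.fromhex("aaaa030000000800")) == 0x0800  # IP
    assert ethertype_from_snap(bytes.fromhex("aaaa030000000806")) == 0x0806  # ARP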
>> The pretense that broadcast is different from multicast is just a confusion. The
>> description says that it is used for traffic that every station wants to get. If you
>> take that literally, no protocol should use broadcast, because there isn't any
>> protocol that every station on every LAN wants to see. For example, ARP should have
>> used multicast for the same reason DECnet does: it is traffic that is interesting to
>> stations which speak that protocol, and only to those stations.
> Flip things on their head. I think it's that the sender wants every receiving station
> to see it.
Yes, but no sender and no protocol has a valid expectation that this is the right thing.
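
The address format itself shows why the distinction is a confusion: broadcast is
nothing more than the all-ones group address. A quick sketch (the DECnet code point
is from memory):

    def is_group(mac):
        # On 802 LANs the low-order bit of the first octet marks a group
        # (multicast) destination; broadcast is just the special case where
        # all 48 bits are one.
        return bool(mac[0] & 0x01)

    assert is_group(bytes([0xFF] * 6))                  # broadcast
    assert is_group(bytes.fromhex("ab0000030000"))      # DECnet "all routers", from memory
    assert not is_group(bytes.fromhex("aa0004001234"))  # an ordinary unicast address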
>> I think that's right. For 802.5, that is.
> ACK
>> In FDDI the frames are "stripped" by the sending station, which allows things like
>> network monitors in promiscuous mode to work just like on Ethernet.
> Intriguing.
>> The claim of collapse under load -- meaning throughput goes down beyond a certain
>> load level -- is valid for ALOHA and similar networks, but not on Ethernet because
>> it uses carrier sense and collision detect. Under overload it runs at close to full
>> utilization.
> Okay. So you weren't saying that Token Ring had problems so much as you were saying
> that Ethernet can work at close to capacity.
>
> I remember seeing references saying that Ethernet would start to have problems, with
> increasing backoffs, as the number of stations wanting to transmit at the same time
> grew. Though that may just mean that the average throughput of a given station goes
> down while the network segment itself gets closer to saturation.
That's necessarily true for any sharing system. If you're not sharing you can get
up to 100%, give or take how well the scheduling works. Two equal clients each get 50%,
and so on. The merit of a sharing system is in how well it approaches 100% total
throughput, and how well it delivers the desired split of service among the competing
clients. Ethernet and 802.5 and FDDI all do it differently, and all do it pretty well.
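
For what it's worth, the adaptation mechanism on Ethernet is truncated binary
exponential backoff; a sketch, using the 10 Mb/s constants:

    import random

    SLOT_TIME_US = 51.2    # one slot time: 512 bit times at 10 Mb/s

    def backoff_us(collisions):
        """Delay after the n-th successive collision on one frame: a random
        count of slot times in 0 .. 2^min(n,10) - 1; the frame is dropped
        after 16 failed attempts."""
        if collisions > 16:
            raise RuntimeError("excessive collisions, frame dropped")
        return random.randrange(2 ** min(collisions, 10)) * SLOT_TIME_US

The delay adapts to load: as contention rises each station spreads its retries over a
wider window, so the segment keeps running near full utilization even though each
station's share of it shrinks.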
IBM once put out a marketing document full of FUD about Ethernet, and DEC, Intel, and 3Com
(I think) put out a joint rebuttal going point by point (I participated in that effort).
I have it stored away somewhere; I should look for it next time I'm in the right spot.
paul