IBM PC Connecting to DECNET
Paul Koning
paulkoning at comcast.net
Fri Jun 3 08:07:09 CDT 2022
> On Jun 3, 2022, at 8:55 AM, Bill Gunshannon via cctalk <cctalk at classiccmp.org> wrote:
>
> On 6/3/22 08:46, Antonio Carlini via cctalk wrote:
>> On 03/06/2022 03:09, Rick Murphy via cctalk wrote:
>>> On 6/1/2022 12:49 AM, Glen Slick via cctalk wrote:
>>>> No one ever called it a "Digital Ethernet Personal Computer Bus
>>>> Adapter", just a DEPCA. I never previously knew that there was any
>>>> meaning behind the DEPCA name.
>>>
>>> Yes, that's what it meant. "DELNI" - Digital Ethernet Local Network Interface. "DESTA" - Digital Ethernet Station Termination Adapter. DELQA - Digital Ethernet Local Q-Bus Adapter (this one probably means something else. Working?). DEMPR - Multi Port Repeater. DEREP - Repeater. And so forth. Yeah, nobody spelled it out, but those DExxx names usually meant what the device was. DEBNT, DEUNA, DEQNA. Same naming convention. I'm probably missing several.
>>> -Rick
>>>
>> The DEPCA manual (http://www.bitsavers.org/pdf/dec/ethernet/depca/EK-DEPCA-PR-001_Apr89.pdf) says "DIGITAL Ethernet Personal Computer Adapter", without "Bus".
>> DELQA was "DIGITAL Ethernet Local-Area-Network to Q-bus Adapter" according to its user guide.
>> Its predecessor, the DEQNA, was "Digital ETHERNET Q-Bus Network Adapter", according to its user guide, or "broken", according to most people :-)
>
> I see comments like this all the time but I used DEQNA, DELQA,
> DELUA and DEUNA for years with minimal problems. I think most
> of the complaints originate after more modern networking equipment
> showed up and people's expectations rose beyond the abilities
> of the technology. Like crashing systems by flooding network
> segments with traffic.
The QNA worked for some applications, but when Local Area VAXclusters appeared it became clear it wasn't good enough for that load. For that matter, one of the transceiver chips used at that time wasn't, either. DEC did a lot of work to try to make it right and could not.
Of course, it was always the rule that the devices and systems must not fail under load, any load. That's why DEC bridges were designed for worst-case loads. And when they couldn't actually forward under worst-case load (as for the DECbridge-900, which would top out at about 2/3 of full six-port wire rate), the design would ensure that critical control packets were always handled even under overload. We learned from the Cisco Bay Area meltdown, where those routers took down a chunk of the Internet by getting their high-load scheduling rules wrong.
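The design principle described here can be sketched as a strict-priority scheduler: control traffic gets its own queue that is always accepted and always drained first, so the control plane survives even when data queues overflow. This is a hypothetical illustration of the idea, not DEC's actual bridge firmware; the class and field names are invented for the sketch.

```python
from collections import deque

class Bridge:
    """Toy sketch of overload-safe forwarding (assumed design, not DEC code)."""

    def __init__(self, data_capacity):
        self.control_q = deque()        # control packets: never shed in this sketch
        self.data_q = deque()
        self.data_capacity = data_capacity
        self.dropped = 0

    def receive(self, pkt):
        if pkt["control"]:
            self.control_q.append(pkt)  # control traffic is always accepted
        elif len(self.data_q) < self.data_capacity:
            self.data_q.append(pkt)
        else:
            self.dropped += 1           # overload: shed data, keep control alive

    def forward_one(self):
        # Strict priority: drain control traffic before any data traffic.
        if self.control_q:
            return self.control_q.popleft()
        if self.data_q:
            return self.data_q.popleft()
        return None
```

Under a flood of data packets the data queue fills and excess traffic is dropped, but a control packet (say, a spanning-tree BPDU) still gets through first, which is the property that keeps a bridge from melting down under worst-case load.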
An extreme example of bad hardware was the 3C501, which had a single buffer so it could never deal with back-to-back packets. The DECnet architecture group was at one time asked for protocol changes to accommodate that. Our answer amounted to "use correctly designed hardware".
paul