Power supply power-up and power-down sequencing: how did it work?

Tom Stepleton stepleton at gmail.com
Fri Oct 22 13:35:21 CDT 2021


Hi cctalk,

I asked this question over on vcfed, but in the spirit of knowing more (and
avoiding silly, costly mistakes) I thought I'd ask again here. I hope this
is okay --- I think there must be a lot of community expertise to draw
from, and not everyone is in the same forums.

After seeing CuriousMarc's horror film about the killer transistor that
zapped the guts of his HP9825T, I've been working for some time on a
solid-state DC power supply monitor device that will chop all power to
logic if there's ever any excursion above or below critical voltage
thresholds on any power supply channel. I've been pretty successful in the
development so far and have a gizmo that accomplishes the basic goal, even
if it's not going to win any industrial design awards. It's not a crowbar
circuit: each voltage channel passes through a substantial driver IC that
can switch the power right off.
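For concreteness, the trip logic amounts to something like the following Python sketch. The rail names and the +/-10% windows here are purely illustrative, not my device's actual thresholds:

```python
# Hypothetical sketch of the monitor's trip logic: each rail has a
# window (undervoltage limit, overvoltage limit), and any excursion
# outside any window trips all channels at once.

# rail name -> (low limit, high limit) in volts; +/-10% is an assumption
WINDOWS = {
    "+12V": (10.8, 13.2),
    "+5V":  (4.5, 5.5),
    "-5V":  (-5.5, -4.5),
    "-12V": (-13.2, -10.8),
}

def should_trip(readings):
    """readings: rail name -> measured volts.
    Return True if any rail is outside its window."""
    for rail, volts in readings.items():
        lo, hi = WINDOWS[rail]
        if not (lo <= volts <= hi):
            return True
    return False
```

In the real hardware this is comparators rather than software, of course, but the decision rule is the same.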

My device can detect and react to anomalies very quickly. But then you
browse through old DRAM datasheets and see warnings like these:

"Vbb must be applied prior to Vcc and Vdd. Vbb must also be the last power
supply switched off."

"Forward biasing this supply [that is, Vbb] with respect to Vss will
destroy the memory device."
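To spell out the rule those quotes impose, here's a little Python check of an on/off event log against "Vbb first on, last off". The event format is something I made up for illustration, not anything from a datasheet:

```python
# Check a power-sequencing event log against the DRAM rule quoted
# above: Vbb must be applied before the other supplies and must be
# the last supply switched off.
# Events are (rail, action) tuples in time order; action is "on"/"off".

def vbb_sequencing_ok(events, vbb="Vbb"):
    """Return True if Vbb is the first rail on and the last rail off."""
    ons = [rail for rail, action in events if action == "on"]
    offs = [rail for rail, action in events if action == "off"]
    first_on_ok = not ons or ons[0] == vbb
    last_off_ok = not offs or offs[-1] == vbb
    return first_on_ok and last_off_ok
```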

And so even though my device is fast, it's possible that when it's slamming
the doors shut, there's a split second where -5V is off and +12V is still
on, or even the chance of a Vbb-Vss forward-bias "blip", who knows? Of
course you can measure whether this is happening, but it's difficult to
know how meaningful that will be: maybe a computer that loads the voltage
rails differently will have different behaviour, and remember, the case you
really care about is when a power supply behaves abnormally! System
characterisation is hard...

Anyway, my question is: what did hardware designers in the '70s do to
satisfy specified power supply requirements for the chips they were using?

The conversation so far on vcfed has drawn two remarks: one observing that a
lot of folks just didn't worry about it, and one pointing out an
anti-(Vbb > Vss) gimmick in the PSU for the Nascom kit computer: a ladder of
protective diodes between adjacent rails:

(-12V) ---->|---- (-5V) ---->|---- (0V) ---->|---- (+5V) ---->|---- (+12V)

I noticed a similar pattern in a few other PSU schematics, but usually only
for the negative channels.
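As I understand the ladder, each diode just clamps its rail to at most one forward drop (~0.7 V for silicon, my assumption) above the next rail up, so Vbb can never get more than ~0.7 V above Vss. A numerical sketch:

```python
# Sketch of the diode ladder's effect: each rail, from the top of the
# ladder down, is limited to at most one diode drop above the rail
# to its right in the diagram. 0.7 V silicon drop is an assumption.

V_F = 0.7  # diode forward drop, volts

def clamp_ladder(rails):
    """rails: voltages ordered most-negative first,
    e.g. [-12, -5, 0, 5, 12]. Returns the clamped voltages."""
    clamped = list(rails)
    for i in range(len(clamped) - 2, -1, -1):
        clamped[i] = min(clamped[i], clamped[i + 1] + V_F)
    return clamped
```

So if the -5V regulator fails and that rail floats up to, say, +3 V while 0V is solid, the ladder holds it at about +0.7 V relative to Vss: still a forward bias by the datasheet's definition, but a clamped one, with the fault current flowing in the diode instead of the DRAM substrate.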

Were there any other common tricks? How serious was the danger of getting
it wrong? How fast could you fry your DRAM if you did?

Thanks for any insight,
--Tom

