On Sep 6, 2014, at 12:01 PM, drlegendre . <drlegendre at gmail.com> wrote:
Brent,
On Sat, Sep 6, 2014 at 2:53 AM, Brent Hilpert <hilpert at cs.ubc.ca> wrote:
My own little treatise on the organization and electronics of core memory
for more depth:
http://www.cs.ubc.ca/~hilpert/e/coremem/index.html
I read your linked article - very enjoyable, informative and
clearly-written piece.
I'll second that.
On unusual memory designs: CDC mainframes used 5 wires: two inhibit wires instead of one,
and the inhibits were split in quarters. I suspect that was to keep the inductance seen
by all the various types of drivers reasonably close, and fairly low, for speed. 1
microsecond full cycle was a whole lot faster than usual for 1964, when those systems
first appeared.
One lingering question though.. why on earth were the big iron
manufacturers still building massive core memories as recently as the late
1970s? As early as the mid-1980s, 64K of (faster!) SRAM could be bought for
around a hundred dollars. Clearly these core modules were orders of
magnitude more costly.. so why were they still being produced? It can't
only be a matter of their non-volatility, can it?
Through at least the late 1970s, core memory was the modest-performance, low-cost option, with DRAM the up-and-coming new thing and SRAM the high-cost option when speed was required at any price. For example, the DEC PDP-11/45 shipped with core memory and the 11/55 with SRAM: same basic system, just different memories.
In some applications, the fact that core is basically impervious to radiation was a
consideration too.
paul