Chuck Guzis wrote: On 6/8/2006 at 7:59 PM Billy Pettit wrote:
Billy:
There were several reasons, but the biggest one was: they could do parallel
reads and writes. You could stripe them for 16-bit-wide data paths that
became incredibly fast, and with virtually no bit skew. Plus, access time
was never more than one revolution.
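To make "striping" concrete: one bit of every word goes to each of 16 fixed
heads, and because the heads share a single spindle the bits come back in
lockstep, which is why there was no skew to speak of. Here's a toy sketch of
the idea in modern Python (my own illustration, nothing like the real
hardware):

    # Toy bit-striping across 16 drum heads (illustrative only).
    NUM_HEADS = 16

    def stripe(word):
        """Split a 16-bit word into one bit per head, for a parallel write."""
        return [(word >> head) & 1 for head in range(NUM_HEADS)]

    def unstripe(bits):
        """Reassemble the bits read back in parallel from the 16 heads."""
        word = 0
        for head, bit in enumerate(bits):
            word |= bit << head
        return word

    assert unstripe(stripe(0xBEEF)) == 0xBEEF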
Yeah, but in a Star-100 SBU, there was enough memory that the drum hardly
got used past booting it up. There was just no point to
the speed. The Star paged to disk; there were some plans initially for a
huge drum, running something like 512 bits parallel, but I don't know if
the concept ever made it off paper.
The funny thing was that there were the same SBUs on the two Star-1Bs in
Sunnyvale--the SBUs were quite a bit faster than the 1Bs themselves.
(Those two systems eventually went into the dumpster per company policy).
During the 60's, drums gave disks a run for their money. I remember a
Univac 1108 with the very large FASTRAND II moving-head drum on it.
There were a number of late-50's to early-60's machines with drum as the main
memory, such as the IBM 650 and the LGP-30, with their "one plus one" addressing.
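For anyone who hasn't run into the term: each instruction carried the address
of its successor, and an optimizing assembler (SOAP on the 650, for instance)
placed that successor at the drum sector that would be passing under the heads
just as the current instruction finished, so you rarely waited a full
revolution. A toy sketch of the placement rule in Python (the sector count is
the 650's; the timing numbers are made up for illustration):

    # Hypothetical "one plus one" layout: pick where the NEXT instruction
    # should live so the drum doesn't have to spin all the way around.
    DRUM_SECTORS = 50   # the 650 stored 50 words per drum track

    def best_next_address(current, exec_word_times, used):
        """Find the first free sector at or after the ideal landing spot."""
        ideal = (current + 1 + exec_word_times) % DRUM_SECTORS
        for offset in range(DRUM_SECTORS):
            candidate = (ideal + offset) % DRUM_SECTORS
            if candidate not in used:
                return candidate
        raise RuntimeError("track full")

    # Lay out a toy five-instruction program, each taking 3 word-times:
    used, addr = set(), 0
    for _ in range(5):
        used.add(addr)
        nxt = best_next_address(addr, 3, used)
        print(f"instruction at sector {addr:2d} -> next at sector {nxt:2d}")
        addr = nxt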
Cheers,
Chuck
----------------------------------------------------------
Billy:
What a meaty message! All sorts of hooks. I'll try to stay in order.
1. The Star-100 was massive. And memory technology was in a transition, so
a lot of the plans changed mid-stream. That was part of what killed it. By
the time it was working, it was obsolete. The storage concept came from 7600
ideas. You are dead on with the word staging. What software wanted was to
stage ALL data transfers. The program would be loaded (including all
required data tapes) from a tape station. In turn, the tape station would
send the package to the storage station via memory to memory transfer. Then
the storage station would transfer to the main memory also via memory to
memory, except that this transfer was supposed to happen in 512-bit
SWords (SuperWords). It was sort of the ultimate of Seymour's philosophy:
use the CPU for calculating, and do everything else off line.
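In other words, nothing dribbled through byte by byte; every hop moved whole
superwords. A toy Python sketch of the staged, memory-to-memory flow (entirely
my own illustration; only the 512-bit width comes from the design):

    # Staged transfer in 512-bit "SWords": tape station -> storage
    # station -> main memory, each hop a memory-to-memory block copy.
    SWORD_BYTES = 512 // 8

    def stage(payload: bytes) -> bytes:
        """Pad the package out to a whole number of SWords."""
        return payload + bytes((-len(payload)) % SWORD_BYTES)

    def copy_swords(src: bytes, dst: bytearray) -> None:
        """Move one 64-byte SWord at a time, never smaller units."""
        for i in range(0, len(src), SWORD_BYTES):
            dst[i:i + SWORD_BYTES] = src[i:i + SWORD_BYTES]

    package = stage(b"program plus its required data tapes")
    storage_station = bytearray(len(package))
    main_memory = bytearray(len(package))
    copy_swords(package, storage_station)       # tape station -> storage
    copy_swords(storage_station, main_memory)   # storage -> main memory
    assert bytes(main_memory) == package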
2. The drum was an interim device. There was a point to it - the speed was
needed to debug the channels and the streaming busses. This program
required a lot of band-aids while waiting for the new technology to come
along. It made a lot of risky bets on new peripherals to keep up with the
speed. The 512 bit wide drum did get to the prototype stage. But it was
incredibly costly. To get the bit density and performance needed, the
entire drum was filled with helium under pressure. But it was fast. If I
remember the analysis correctly, the big drum would have cost more than the
mainframe.
3. The disks were never close to what was needed. Cray was using Ibis and
Fujitsu disks with parallel heads (4 bits at a time). The Star team funded a
parallel-head disk from Normandale, but I don't think it ever got off the
ground either. They also had a multiple-actuator drive in the lab that had
the same problems that caused all other attempts at this idea to fail.
4. The most exotic peripheral I saw was an 18"-wide tape unit that used a
helical-scan head bar. It was being developed by the Government Systems
Division. It made so much noise that you had to be in another room when it
was running.
5. So there were all sorts of experiments running side by side to find some
way to support the huge data rate required. The drums were more a proof of
concept. They allowed the channels to run at full speed even though the drums
themselves couldn't supply much data.
6. The Star-1Bs were never intended to be products or peripheral stations.
They were a totally microcoded pseudo-processor. Again, they were proof of
concept, to have something for the programmers to run on. Working Star
hardware was years away when they started the program. The OS needed
development. And the designers needed machines to debug the commands on.
The instruction set was pure IPL. Some of the instructions were very
complex. The Star-1B was frequently recoded as the instruction set was
refined. It wasn't pretty and it wasn't fast, but it was very easy to
reconfigure, which was the critical parameter at that time.
7. I was assigned to the Star-65 program being developed in Mississauga.
It had to run at least half the Star-100 speed using the same technology
where possible. We had a couple of Star-1Bs for the software folks. They used
the same memories and Star Stations. But the processor was a different
concept: the Star-100 was a massive brute-force parallel machine; the Star-65
was a hybrid, using some of the streaming units but with a microcoded main
processor.
Damn, those were exciting times! So many new ideas and concepts being
tried. Today, 30 years later, I see an occasional reminder of those halcyon
machines. For example, the streaming function in ATA-7. Or pipelines in
communication processors.
8. FASTRAND II and III lived on for decades in the Sabre airline scheduling
systems from Univac. I know some were still in operation until the
mid-1990s. Somebody on the list probably worked on the 490 Series. Anyone?
9. All of the early computers I worked on were drum memory based. In the
Army, it was the Pershing Fire Control Computer, and the Redstone jukebox
(actually a fixed-head sealed disk). But CDC trained me on the Bendix G-15,
LGP-30 and RPC-4000. I can't say I have fond memories of all of them. But
I did prefer drums over disks until the first Winchesters became reliable.
Billy