CDC 6600 - Why so awesome?
Chuck Guzis
cclist at sydex.com
Wed Jun 22 12:08:01 CDT 2016
On 06/22/2016 08:32 AM, Swift Griggs wrote:
> I was trying to find some video of one of those actually running. I wanted
> to see how the "calligraphic displays" painted the graphics. Do you happen
> to know why they went with two displays like that? Did the two have
> different purposes?
I think Paul's covered that pretty well. I'll add that the more complex
the display, the more flicker was present. Another odd effect was that
systems that made extensive use of ECS (extended core storage) could
make the display flicker something awful, as ECS transfers tended to be
block-oriented, due to the high startup time. (ECS used a wide word
that was disassembled into CM words. Once a transfer started, it was at
full central memory speed.) ECS transfers could also torpedo certain
types of tape I/O (e.g. 1LT, the long tape driver used to transfer
records longer than PP memory to CM).
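A rough way to see why: with symbolic figures (I'm not quoting real
timings here), a block move costs about t = t_startup + N * t_word, so
the per-word cost t/N only approaches full memory speed when N is
large. That arithmetic pushed software toward big block transfers--and
a long transfer presumably hogged central memory long enough to starve
the PP refreshing the display.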
>> Much of the architectural concept was shared with IBM 7030 STRETCH
>> (another system worth researching).
>
> Hmm, I've never heard of it. I'll check it out. Thanks.
Do check it out--there was some bleeding-edge technology in that system
that Cray later used in his 6600. One of those projects that could be
called a technical success but a financial fiasco.
> I tried to find some info on SCOPE, but it's very sparse. Did it have an
> interactive command line? What was your main "interface" to the OS?
Well, SCOPE had INTERCOM, an interactive facility, as well as
EXPORT/IMPORT, which was an RJE facility. But the system was targeted
primarily at batch jobs. Its illegitimate relative, KRONOS, made
extensive use of ECS for support of the PLATO system. Note that the
6000 series had no hardware memory management to speak of. An active
job had a relocation address (RA) and field length (FL), but memory
space belonging to a job was contiguous and present in physical memory
(no paging or segmentation). So jobs were moved or "rolled out" to mass
storage as needs for resources arose. That was for the standard
offerings--"special systems" (i.e. spooks) had their own adaptations,
probably still classified today.
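For a feel of just how simple that protection model was, here's a
sketch in C (my own illustration, nothing from CDC): every address a
job generates gets checked against FL and offset by RA.

/* Illustrative model of 6000-series RA/FL relocation -- not CDC code.
   A job's addresses are all relative to its field; the hardware adds
   RA and exits the job if an address reaches FL. */
#include <stdint.h>
#include <stdio.h>

struct field {
    uint32_t ra;   /* relocation address: the job's base in physical CM */
    uint32_t fl;   /* field length: size of its contiguous allocation */
};

/* Return the physical address, or -1 standing in for the hardware's
   address-out-of-range exit. */
long cm_reference(const struct field *job, uint32_t rel)
{
    if (rel >= job->fl)
        return -1;                /* outside the field: job aborts */
    return (long)job->ra + rel;
}

int main(void)
{
    struct field job = { 040000, 020000 };  /* octal, in the CDC spirit */
    printf("%lo\n", (unsigned long)cm_reference(&job, 017777)); /* ok */
    printf("%ld\n", cm_reference(&job, 020000));  /* out of bounds: -1 */
    return 0;
}

Since a job's space was one contiguous chunk, rolling it out or moving
it to compact memory was just a block copy of RA through RA+FL.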
However, there was more than one SCOPE--and when reading the bitsavers
stuff, you have to be careful.
The CDC 7600 used pretty much the same CPU instruction set as the 6000
series, so user programs were compatible. The PP setup was different,
however. In the 6000, any PP had unrestricted access to central memory
(CM). In the 7600, each PP was assigned a fixed memory frame in CM and
could not access anything outside of its hard-wired "buffer" area. The
implication is that you couldn't have a PP-resident operating system.
So SCOPE 2 was developed for the 7600. In it, PPs are relegated to I/O,
with one very special unit reserved for maintenance. The operating system
proper resides in a set of "nested" program units. That is, there
would, for example, be a Buffer Manager with an RA and FL that
encompassed the Record Manager program, which in turn would encompass
the Job Supervisor...and eventually the user program itself. A system
of "rings of protection" if you will, long before that was in vogue.
Although bulk core (LCM = large core memory; the 7600 term for 6000 ECS)
was used as program storage, the whole affair turned out to be more
cumbersome than originally envisioned. The SCOPE 2 folks were always a
little defensive about this result of necessity.
So, SCOPE 2 is not the same as SCOPE 3. SCOPE 3.4 was the last version
to be called that before it was renamed NOS/BE (Network Operating
System, Batch Environment) and eventually merged into NOS proper (which
had been KRONOS). CDC was sharply split in culture
as well as geography--the Minnesota clan was cut from different cloth
than the Palo Alto-then-Sunnyvale clan, so discussions from the West
Coast tend to be more SCOPE-oriented, while the pickled watermelon rind
clan talks fondly about KRONOS.
> I figured it was something like that, but I'm so used to 8-bit bytes and
> such. It takes a minute to adjust my thinking to a different base, but
> it's not that hard.
Working with full words and shifting and masking can be remarkably
efficient. For a time, CDC had one of the fastest COBOLs around, even
against IBM's big iron.
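To give a feel for why, here's a sketch (mine, not the COBOL runtime's)
of unpacking the ten 6-bit display-code characters in a 60-bit word,
held in the low bits of a 64-bit integer:

#include <stdint.h>
#include <stdio.h>

/* Each character costs one shift and one mask -- no byte addressing
   required, which is the whole trick on a word machine. Leftmost
   character comes out first. */
void unpack_word(uint64_t word, unsigned ch[10])
{
    for (int i = 0; i < 10; i++)
        ch[i] = (unsigned)(word >> (6 * (9 - i))) & 077;
}

int main(void)
{
    unsigned ch[10];
    unpack_word(012345670123456701234ULL, ch); /* 20 octal digits */
    for (int i = 0; i < 10; i++)
        printf("%02o ", ch[i]);  /* prints: 12 34 56 70 12 34 ... */
    putchar('\n');
    return 0;
}

Packing is the mirror image: OR each character into place after a
shift. Ten characters per word, no waste.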
I recall the "what do we do about more than 64 characters" discussion
was raging. One interesting alternative proposed was to fit 7.5 8-bit
characters to the word, with the odd 4-bit leftover being called a
"snaque" (snaque-byte; get it?). Instead what was done was to reserve
some of the lesser used characters of the 64-character set as "escape"
codes for what amounts to 12-bit characters. So, in theory, you get the
advantage of uppercase compatibility, while providing for an extended
character set. Very messy.
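The flavor of the escape scheme, sketched (the escape value here is a
placeholder I picked, not the real display-code assignment):

#include <stddef.h>

#define ESC 074   /* hypothetical: stands in for one of the little-used
                     6-bit codes reserved as an escape */

/* Decode a run of 6-bit codes: a plain code is one character, but the
   escape swallows the following 6-bit code too, yielding what amounts
   to a 12-bit character. Returns the character count. */
size_t decode(const unsigned *codes, size_t n, unsigned *out)
{
    size_t k = 0;
    for (size_t i = 0; i < n; i++) {
        if (codes[i] == ESC && i + 1 < n)
            out[k++] = (ESC << 6) | codes[++i];  /* 12-bit character */
        else
            out[k++] = codes[i];                 /* 6-bit character */
    }
    return k;
}

And there's the mess: a string's length in characters now depends on
its contents, so nothing can assume ten characters per word anymore.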
As an aside, take a look at the UNIVAC 1107/1108 instruction set from
roughly the same period. It has an instruction to define the byte size
(the machine used 36-bit words).
> Well, the sample code I could find was particularly well put together by
> someone who knew what they were doing. I'm a pretty poor ASM programmer, since
> the only one I ever put much effort into was for the M68k (which is really
> easy compared to some). I've got a big crush on MIPS ASM but I never was
> any good with it. C ruined me. :-)
Another cultural difference. CDC had coding standards. When a
programmer wrote code that either defined a new product or modified an
existing one, it had to pass peer review. Aside from some very old
code, everything followed the standard, so you quickly learned the system.
For example:
http://bitsavers.informatik.uni-stuttgart.de/pdf/cdc/cyber/nos/NOS_COMPASS_Coding_Standard_Jun83.pdf
You develop a disciplined style and you never forget it.
--Chuck