On 06/22/2016 06:15 PM, Paul Koning wrote:
> Slightly different. A rolled out job was a file, containing the
> whole job state, including stuff like currently attached files,
> memory content, exchange package (program registers). Like any other
> "local file" it would show up in memory as an entry in the file table
> -- just 2 60-bit words if I remember right. When selected by one of
> the scheduler components to be run again, it would be assigned a
> control point, memory, rolled back in, and execution resumed.
Yes, it was a file, but it still occupied a control point--at least it
did under SCOPE.
On CYBER 200 SOS, each controlee maintained a "drop file", which held
modified pages, the "invisible package" and file information, so that a
job could be stopped and restarted any time later by the user.
Of course, we also had memory-mapped files.
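
Roughly, and in modern C terms (the names and sizes below are invented for
illustration--this is not the actual SOS drop-file format), a drop file
amounted to a checkpoint record along these lines:

    #include <stdint.h>

    /* Sketch only -- field names and sizes are guesses, not the real layout. */
    #define NUM_REGS   16      /* registers in the "invisible package" (guess) */
    #define MAX_FILES  64      /* attached-file table size (guess)             */
    #define PAGE_WORDS 512     /* 64-bit words per saved page (guess)          */

    struct saved_page {
        uint64_t address;              /* where the page goes on restart */
        uint64_t words[PAGE_WORDS];    /* the modified contents          */
    };

    struct drop_file {
        uint64_t invisible_pkg[NUM_REGS]; /* register state needed to resume */
        uint64_t file_table[MAX_FILES];   /* attached-file information       */
        uint64_t page_count;              /* saved_page records follow       */
    };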
> Jobs could also be moved in memory without being rolled out; this
> could happen if they or some other job changed memory size, forcing
> something to move to make room. PPU programs would have to watch out
> for that to happen and "pause for storage relocation". Getting that
> wrong was a great way to wedge the OS; I've got that t-shirt...
That's the basic memory management I referred to. Initially, under SCOPE,
this was pretty much the only OS task that the CPU took part
in--"storage move"--as moving memory was much faster when done by the CPU
than by the PPUs. If you had a 6600, you could do it with a simple in-stack
loop that moved two words per iteration with no wasted cycles.
If you had a 6400/Cyber 73, you could use the CMU (Cyber) or ECS if
available. That was the only way to keep memory busy on the lower Cybers.
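
For what it's worth, the shape of that 6600 loop, rendered in C rather than
COMPASS (just an illustration of the two-words-per-pass idea, not the actual
code):

    #include <stddef.h>
    #include <stdint.h>

    /* The flavor of the "storage move" loop: copy count words, two per
       iteration.  A forward copy like this is safe for a move toward lower
       addresses (dst below src), the usual compaction case. */
    void storage_move(uint64_t *dst, const uint64_t *src, size_t count)
    {
        size_t i;
        for (i = 0; i + 1 < count; i += 2) {   /* two words per pass */
            dst[i]     = src[i];
            dst[i + 1] = src[i + 1];
        }
        if (i < count)                         /* odd word left over */
            dst[i] = src[i];
    }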
Performance was always an issue. When SCOPE 3.4 came out, a new CIO
request was introduced for the benefit of the loader. You presented CIO
with a "Read List String" request, which was nothing more than a linked
list of disk addresses (well, RBT numbers) that was passed to 1SP, and
1SP would do its best to keep the program's read buffer full. It made
for very fast loader operation. Unfortunately, some wiseacre decided
that he could keep adding to the list of addresses and keep 1SP busy
forever--which meant that any disk-resident PP code, such as 1EJ,
couldn't be loaded either. Fortunately, the fix was easy--simply have
1SP drop any too-long requests back into the queue.
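
In outline, the fix looked something like this (the names and the cutoff are
invented for illustration; the real code lived in 1SP, not in C):

    #define MAX_RBTS_PER_PASS 16   /* invented cutoff */

    struct rls_node {
        unsigned rbt;               /* disk address (RBT number) */
        struct rls_node *next;
    };

    /* Service up to MAX_RBTS_PER_PASS entries, then hand back whatever is
       left so it can be requeued behind other pending work. */
    struct rls_node *service_read_list(struct rls_node *list,
                                       void (*read_rbt)(unsigned))
    {
        int n = 0;
        while (list != NULL && n < MAX_RBTS_PER_PASS) {
            read_rbt(list->rbt);    /* keep the program's read buffer full */
            list = list->next;
            n++;
        }
        return list;                /* non-NULL means: put it back in the queue */
    }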
That business with a PP not being able to do I/O on a job with a storage
move pending was one reason that we had to write our own 844 servicing
program for Zodiac--DBD. All buffers were permanently allocated in CM,
and data was moved in and out of those.
I think that 1SP--the SCOPE "stack processor"--was one area where SCOPE
and KRONOS differed significantly. On SCOPE, pending requests were
sorted by priority, based on nearness to the current disk position and
the number of times each had been passed over for a more favorable
request. From my discussions with Greg, I seem to recall that KRONOS
processed disk requests on a first-come, first-served basis.
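
In rough terms, the SCOPE selection was something like the sketch below:
nearest position first, with an aging credit so a request couldn't be passed
over forever. The weighting is invented, just to show the idea:

    #include <limits.h>
    #include <stdlib.h>

    struct disk_req {
        long cylinder;         /* where the request wants the heads     */
        int  passed_over;      /* times skipped for a better-placed one */
    };

    /* Pick the request nearest the current position, discounted by how
       often it has already been passed over; age everything else. */
    int pick_next(struct disk_req *q, int n, long current_pos)
    {
        if (n <= 0)
            return -1;
        int best = 0;
        long best_score = LONG_MAX;
        for (int i = 0; i < n; i++) {
            long score = labs(q[i].cylinder - current_pos)
                       - 8L * q[i].passed_over;   /* aging credit, invented */
            if (score < best_score) {
                best_score = score;
                best = i;
            }
        }
        for (int i = 0; i < n; i++)    /* everyone not chosen ages */
            if (i != best)
                q[i].passed_over++;
        return best;
    }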
--Chuck