On 03/29/2018 03:48 PM, Alexander Schreiber via cctalk wrote:
> Also, AFS is built around volumes (think "virtual disks") and you
> have the concept of a r/w volume with (potentially) a pile of r/o
> volumes snapshotted from it. So one thing I did was that every (r/w)
> volume had a directory .backup in its root where there was mounted a
> r/o volume snapshotted from the r/w volume around midnight every day.
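For reference, in OpenAFS that nightly snapshot boils down to something
like the following (the volume and cell names here are invented for
illustration):

  # refresh the r/o snapshot volume home.alice.backup (run nightly from cron)
  vos backup home.alice
  # one-time setup: mount the snapshot at .backup in the volume root
  fs mkmount /afs/example.org/home/alice/.backup home.alice.backup
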
CDC 6000 SCOPE 3.3 and later implemented permanent files with file
"cycles". That is, earlier versions of a file were kept around.
The approach was a little different. You initially started a job or
session with no files except INPUT (wherever it came from), OUTPUT
(display or print output), and optionally PUNCH (obvious meaning). A
dayfile was also maintained, but the individual user could only add to
it, not otherwise manipulate it.
Doing real work on an ongoing project involved ATTACH-ing a permanent
file that had been CATALOG-ed. Passwords (up to 3) and permissions
needed to be specified to ATTACH a file. This, IIRC, created a local
copy of the file. If you mistakenly deleted the local copy, you still
had the permanent copy. If you saved the local copy after modifying it,
it was saved as a new cycle.
A user could PURGE old permanent file cycles.
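From memory, the statement sequence looked roughly like this; the exact
keywords and parameters varied between SCOPE releases and sites, and the
file names and password are made up, so treat the syntax below as a
reconstruction rather than gospel:

  COMMENT.  SYNTAX FROM MEMORY - TREAT AS APPROXIMATE.
  ATTACH(DATA,PROJFILE,ID=CHUCK,PW=SESAME)   LATEST CYCLE AS LOCAL FILE DATA
  ...job steps that modify the local file DATA...
  CATALOG(DATA,PROJFILE,ID=CHUCK)            SAVE DATA AS A NEW CYCLE
  PURGE(DATA)                                DROP AN ATTACHED CYCLE NO LONGER NEEDED

Each CATALOG against the same permanent file name produced a new cycle,
which is what kept the earlier versions retrievable.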
The beauty of this was that a user had access to the files that were
needed for a session. A user could, of course, create as many local
files as desired, but these were all disposed of at the end of the
job/session, so there wasn't a lot of garbage floating around in the system.
A side benefit was that permanent files could be archived to tape, so
when an ATTACH was issued for an archived file, the job was suspended
until the relevant tape was located and read.
I suspect that modern users would consider the system to be too
restrictive for today's tastes, but it was fine back then.
--Chuck