On Oct 24, 2025, at 1:43 PM, ben via cctalk <cctalk(a)classiccmp.org> wrote:
On 2025-10-24 7:33 a.m., Paul Koning via cctalk wrote:
On Oct 24, 2025, at 8:00 AM, cz via cctalk <cctalk(a)classiccmp.org> wrote:
OS-level caching of disk devices is a gennable (sysgen) option in M+ with split I/D.
It's actually pretty impressive; the system can do read-aheads, and writes can also be
deferred (which makes turning the system off hot, without an orderly shutdown, very bad).
I think it can also cache the directory entry table, and even a small cache of 256 KB
or so makes a nice performance difference.
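
To make "read-ahead" and "deferred writes" concrete, here is a toy write-back block
cache in C. It is only a sketch of the general idea, not M+ or RSTS code; the block
size, slot count, and round-robin eviction are all invented for illustration.

/* Toy write-back block cache: illustrates read-ahead and deferred writes.
 * Everything here (sizes, eviction policy, the in-memory "disk") is made up. */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE  512
#define DISK_BLOCKS 64
#define CACHE_SLOTS 8

static char disk[DISK_BLOCKS][BLOCK_SIZE];      /* stand-in for the drive */

struct slot {
    int  block;                                 /* -1 = slot empty           */
    int  dirty;                                 /* nonzero = write deferred  */
    char data[BLOCK_SIZE];
};
static struct slot cache[CACHE_SLOTS];
static int next_victim;                         /* crude round-robin eviction */

static void flush_slot(struct slot *s)
{
    if (s->block >= 0 && s->dirty) {
        memcpy(disk[s->block], s->data, BLOCK_SIZE);  /* deferred write lands */
        s->dirty = 0;
    }
}

static struct slot *find_or_load(int block)
{
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].block == block)
            return &cache[i];                   /* cache hit */
    struct slot *s = &cache[next_victim];       /* miss: pick a slot to reuse */
    next_victim = (next_victim + 1) % CACHE_SLOTS;
    flush_slot(s);                              /* write back before reuse */
    s->block = block;
    s->dirty = 0;
    memcpy(s->data, disk[block], BLOCK_SIZE);
    return s;
}

/* Read a block; also pull in the next one, imitating read-ahead. */
static void cache_read(int block, char *out)
{
    memcpy(out, find_or_load(block)->data, BLOCK_SIZE);
    if (block + 1 < DISK_BLOCKS)
        find_or_load(block + 1);
}

/* A write only touches the cache; the disk copy is updated later, at flush. */
static void cache_write(int block, const char *in)
{
    struct slot *s = find_or_load(block);
    memcpy(s->data, in, BLOCK_SIZE);
    s->dirty = 1;
}

int main(void)
{
    char buf[BLOCK_SIZE] = "hello", out[BLOCK_SIZE];
    for (int i = 0; i < CACHE_SLOTS; i++)
        cache[i].block = -1;

    cache_write(3, buf);                        /* lives only in the cache...  */
    cache_read(3, out);                         /* hit; also pre-loads block 4 */
    printf("from cache: '%s', on disk before flush: '%s'\n", out, disk[3]);
    for (int i = 0; i < CACHE_SLOTS; i++)       /* ...until an orderly flush;  */
        flush_slot(&cache[i]);                  /* lose power first, it's gone */
    printf("on disk after flush: '%s'\n", disk[3]);
    return 0;
}

The two printfs show the hazard mentioned above: a dirty block exists only in memory
until it is flushed, which is why powering off with deferred writes pending is so bad.
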
It's also a standard feature in RSTS/E, which can allocate up to 496 kW to the storage
cache (plus additional memory, if you want, for a ramdisk). I haven't tried to see what
the performance numbers look like with and without it; that would be an interesting
experiment.
paul
I really wonder how much swapping of memory to disc is done internally, or how many
buffers it has for files. Time sharing was created for use with 110 baud TTYs, so the
first benchmark needs to be at that speed.
Yes, RSTS-11 in 1973 would likely have ASR33 terminals. Certainly our college system
did, on an 11/20 with 16 terminal lines. But when it was upgraded to RSTS/E on an 11/45
with better interfaces (a DH11 instead of 16 KL11 or DL11 cards), its terminals moved
to 300 baud or better.
The RSTS systems at DEC had 9600 baud terminals throughout, typically on 11/70 machines.
I don't remember how many lines; 32 seems likely, if not more. With 4 MB (2 MW) of
memory there wasn't that much need for swapping, even after subtracting out up to
496 kW for the cache. Perhaps less was used; as I recall, RSTS would do file data
caching only for files tagged for caching, which on development systems would not be
usual practice.
paul