On Nov 1, 2022, at 1:05 PM, Wayne S
<wayne.sudol(a)hotmail.com> wrote:
...
> On Nov 1, 2022, at 09:01, Paul Koning via cctalk <cctalk(a)classiccmp.org> wrote:
>
>
>
>>> On Oct 30, 2022, at 2:49 PM, Wayne S via cctalk <cctalk(a)classiccmp.org> wrote:
>>
>> The difference between DZ and DH interfaces is that the DH used DMA instead of
>> interrupts to get characters to the CPU. It would be transparent to any software.
>
> No, it doesn't. I was confused about this but was recently corrected.
>
> The DH11 does DMA output, but not DMA input. I don't know of any DEC serial port
> devices that have DMA input; it would make very little sense to do that, since input
> generally arrives one character at a time. Block-mode terminals do exist in DEC's world,
> but they are rare, and even those are hard to operate with simple DMA.
>
> DZ is programmed I/O in both directions, which is the difference. In typical
> usage, the bulk of the terminal traffic is output, so doing that with DMA is a big win.
>
> paul
>
Also, can you define what the phrase “programmed I/O” refers to?
AFAIK, pretty much everything does that, so a clarification would help.
Yes, in the sense that all I/O happens under control of some program. The term
"programmed I/O" normally means I/O where the entire job is done in software, as
opposed to DMA or similar schemes where the software initiates the process but the I/O
device then does a lot of the detail work autonomously, without bothering the CPU.
Take terminal interfaces. With interactive terminals (the standard usage on DEC systems)
it's unavoidable that the software has to do work for each character, so programmed
I/O is normal. The interface typically has a FIFO to deal with interrupt latency, and as a
result it also tends to do interrupt coalescing: under high load, each interrupt results in
several characters being taken from the FIFO and acted on.
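To make the coalescing concrete, here is a rough C sketch of a receive interrupt handler
for a DZ11-style silo. The register layout (valid bit, line field, character field)
follows the usual DZ11 description but should be treated as schematic, and dz_rbuf and
deliver_char are invented names:

    #include <stdint.h>

    #define RBUF_VALID   0100000u           /* bit 15: silo entry is valid */
    #define RBUF_LINE(w) (((w) >> 8) & 7)   /* which line the char came from */
    #define RBUF_CHAR(w) ((w) & 0377)       /* the character itself */

    extern volatile uint16_t *dz_rbuf;            /* receiver buffer CSR */
    extern void deliver_char(int line, int c);    /* per-line input handling */

    void dz_rx_interrupt(void)
    {
        uint16_t w;

        /* Drain the FIFO until it reports empty; under high load this
         * loop runs several times, so one interrupt covers several
         * characters. That is the coalescing. */
        while ((w = *dz_rbuf) & RBUF_VALID)
            deliver_char(RBUF_LINE(w), RBUF_CHAR(w));
    }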
Terminal output often comes in bursts; for example, an application may write a line of text
or even a larger chunk of data. If so, the OS can do block transfers for those bursts.
Even if it has to copy from user to kernel buffers, it can fill such a buffer and then
start a DMA transfer of the entire buffer contents. The result is that the OS only has to
deal with the device every 30 characters or so (the RSTS case), or even less often if the
buffer size is larger.
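A sketch of that output side, assuming a DH11-style transmitter with a current-address
register and a two's-complement count register; the CSR names and the GO bit here are
illustrative, not the real DH11 layout:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define OBUF_SIZE 30                   /* burst buffer, RSTS-ish size */

    extern volatile uint16_t *dh_car;      /* current address register */
    extern volatile uint16_t *dh_bcr;      /* count register (two's complement) */
    extern volatile uint16_t *dh_csr;      /* control/status register */
    #define CSR_TX_GO 01                   /* assumed "start transmit" bit */

    static char obuf[OBUF_SIZE];

    void start_output(const char *data, size_t len)
    {
        if (len > OBUF_SIZE)
            len = OBUF_SIZE;               /* one burst per buffer fill */
        memcpy(obuf, data, len);           /* user-to-kernel copy */

        *dh_car = (uint16_t)(uintptr_t)obuf;   /* low 16 bits of buffer address */
        *dh_bcr = (uint16_t)-(int16_t)len;     /* negated count, DEC convention */
        *dh_csr |= CSR_TX_GO;              /* device takes over from here */
    }

    /* The transmit-done interrupt fires once per burst, not per character. */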
Consider disk I/O for another example. With very rare exceptions, disks do DMA: the OS points
to the place where the data lives, supplies a transfer length and starting disk position,
and says "go do it and tell me when it's finished". Newer devices like the
MSCP controllers support a queue of requests, but even simple devices like the RK05 will
do a full transfer without CPU involvement.
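For the RK05/RK11 case, that boils down to loading three registers and setting GO. A
hedged sketch in C; the octal addresses and function code follow the common RK11 register
summary, but check them against the manual before believing them:

    #include <stdint.h>

    #define RKCS ((volatile uint16_t *)0777404)  /* control and status */
    #define RKWC ((volatile uint16_t *)0777406)  /* word count */
    #define RKBA ((volatile uint16_t *)0777410)  /* bus (memory) address */
    #define RKDA ((volatile uint16_t *)0777412)  /* disk address */

    #define RK_READ (02 << 1)   /* function code 2 = read */
    #define RK_IDE  0100        /* interrupt on done enable */
    #define RK_GO   01

    void rk_read(uint16_t bus_addr, uint16_t words, uint16_t disk_addr)
    {
        *RKBA = bus_addr;                   /* where the data should land */
        *RKWC = (uint16_t)-(int16_t)words;  /* two's complement word count */
        *RKDA = disk_addr;                  /* drive / cylinder / head / sector */
        *RKCS = RK_READ | RK_IDE | RK_GO;   /* "go do it, tell me when done" */
        /* The CPU is now free until the completion interrupt. */
    }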
The one notorious exception is the Pro, where the disk controllers use programmed I/O: the
OS has to move every word, one at a time, to or from a controller CSR. So the CPU overhead
of disk I/O on that system is much higher than on any other DEC machine, and partly for
that reason the Pro is utterly incapable of transferring consecutive sectors. Instead, it
is forced to use interleaving, where logically adjacent sectors are actually 5 sectors
apart ("4:1 interleave"). That too contributes to its pathetic performance.
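For contrast, the Pro-style programmed-I/O path looks roughly like this. The CSR names and
ready bit are invented for illustration, since the point is just the per-word CPU cost:

    #include <stdint.h>

    extern volatile uint16_t *ctlr_status;   /* hypothetical status CSR */
    extern volatile uint16_t *ctlr_data;     /* hypothetical data CSR */
    #define STS_WORD_READY 01                /* assumed "word available" bit */

    void pio_read_sector(uint16_t *buf, int words)
    {
        int i;

        for (i = 0; i < words; i++) {
            while (!(*ctlr_status & STS_WORD_READY))
                ;                      /* CPU busy-waits on every word */
            buf[i] = *ctlr_data;       /* one word per CSR access */
        }
        /* With no CPU time to spare between sectors, consecutive
         * logical sectors have to be interleaved on the disk. */
    }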
paul