Speed now & then
Boris Gimbarzevsky
boris at summitclinic.com
Sat Apr 14 20:40:09 CDT 2018
Thanks for that link, which fits with my
measurements (nowhere near as detailed) of one's
actual ability to do things with "modern"
hardware. In the 1980s I was used to being able
to measure events with 0.2 microsecond precision
using a PDP-11, and my expectation was that the
accuracy was only going to improve as processors
got faster.
I ported a program I wrote on the PDP-11 to a
Commodore 64 in 1988 and was using it to measure
finger tapping with a switch array to 1 msec
accuracy. This was done through the simple
expedient of speeding up the sample rate for the
keyboard to 1 kHz and adding in my 4 external
switches as "keys". I used a 512 K Mac to get the
serial data and display results. To do the same
now would require custom hardware to do the
timing and a USB link to a "modern" CPU, or
implementation on a microprocessor.
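In outline, the scan loop was something like this
(a C sketch of the technique, not the original
C64 code; read_switches() and wait_for_1khz_tick()
stand in for the real hardware access):

    #include <stdint.h>

    extern uint8_t read_switches(void);    /* low 4 bits = 4 switches */
    extern void wait_for_1khz_tick(void);  /* block until next 1 ms tick */

    void scan_loop(void (*log_event)(uint32_t ms, uint8_t state))
    {
        uint32_t ms = 0;
        uint8_t prev = read_switches();
        for (;;) {
            wait_for_1khz_tick();          /* 1 kHz sample rate */
            ms++;
            uint8_t cur = read_switches();
            if (cur != prev) {             /* a switch changed state: */
                log_event(ms, cur);        /* timestamp to 1 msec */
                prev = cur;
            }
        }
    }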
When I attempted to get this same type of timing
accuracy from a PC, I found that it was no longer
easy to get at interrupts, and keyboard latency
was longer because keystrokes were now detected
by an on-board microprocessor and sent out as a
series of packets for each keystroke. In DOS and
W95, where one could still easily get at
interrupts, a serial port could be used to do
msec timing.
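The serial port trick amounted to hooking the
COM1 interrupt and timestamping each received
byte. A minimal Turbo C style sketch of the idea
(assuming COM1 at 0x3F8 on IRQ4; this is not the
actual code I used):

    #include <dos.h>

    #define COM1_BASE 0x3F8
    #define COM1_VEC  0x0C  /* IRQ4 -> interrupt vector 0x0C */

    static void interrupt (*old_isr)(void);
    volatile unsigned last_count;   /* 8253 ch.0 count at last byte */
    volatile unsigned char last_byte;

    static void interrupt com1_isr(void)
    {
        outportb(0x43, 0x00);              /* latch timer channel 0 */
        last_count  = inportb(0x40);
        last_count |= inportb(0x40) << 8;  /* 1.193 MHz down-counter */
        last_byte = inportb(COM1_BASE);    /* read byte, clear UART int */
        outportb(0x20, 0x20);              /* EOI to the PIC */
    }

    void install(void)
    {
        old_isr = getvect(COM1_VEC);
        setvect(COM1_VEC, com1_isr);
        outportb(COM1_BASE + 1, 0x01);     /* IER: rx-data interrupt */
        outportb(COM1_BASE + 4, 0x08);     /* MCR: OUT2 gates the IRQ */
        outportb(0x21, inportb(0x21) & ~0x10);  /* unmask IRQ4 */
    }

    void restore(void)
    {
        outportb(0x21, inportb(0x21) | 0x10);   /* mask IRQ4 again */
        setvect(COM1_VEC, old_isr);
    }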
Once XP and beyond arrived, the best temporal
precision one can expect from a 3 GHz machine is
15 msec.
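That granularity is easy to see for yourself with
a small sketch like this one (stock Win32
GetTickCount(); its default tick is about 15.6
msec):

    #include <windows.h>
    #include <stdio.h>

    /* Spin until GetTickCount() advances and print the
       step size; on a stock setup each step is ~15-16 ms. */
    int main(void)
    {
        DWORD prev = GetTickCount();
        int i;
        for (i = 0; i < 10; i++) {
            DWORD now;
            while ((now = GetTickCount()) == prev)
                ;                /* busy-wait for the next tick */
            printf("tick step: %lu ms\n",
                   (unsigned long)(now - prev));
            prev = now;
        }
        return 0;
    }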
I suspect the same holds for Macs, and I haven't
tried running real-time Linux, as I either pull
out my trusty C64 from time to time and use it
for precision timing (unfortunately I have only
one copy of the code on cassette tape, so when
that goes I can't do this anymore) or I use
various microprocessors to do the job. I have a
nice microsecond precision timer that I wrote for
a Propeller chip and feel much more comfortable
programming for it than the latest windoze
bloatware system. The Propeller has the same
amount of RAM as the PDP-11s I started on, runs
20x faster per core, and is fun to program. The
microsecond timer is attached to a Geiger counter
to generate random bytes for OTP encryption.
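The byte generation works along these lines (a C
sketch; pulse_time_us() stands in for the
Propeller timer, and the interval comparison
debiasing shown is one common method, not
necessarily exactly what I do):

    #include <stdint.h>

    /* Inter-pulse intervals of radioactive decay are
       independent and exponentially distributed, so
       comparing two successive intervals gives a fair
       bit; ties are discarded. */
    extern uint32_t pulse_time_us(void);  /* block until next pulse */

    uint8_t next_random_byte(void)
    {
        uint8_t byte = 0;
        int bits = 0;
        while (bits < 8) {
            uint32_t a = pulse_time_us();
            uint32_t b = pulse_time_us();
            uint32_t c = pulse_time_us();
            uint32_t i1 = b - a;          /* first interval */
            uint32_t i2 = c - b;          /* second interval */
            if (i1 != i2) {               /* discard ties */
                byte = (uint8_t)((byte << 1) | (i1 < i2));
                bits++;
            }
        }
        return byte;
    }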
Boris Gimbarzevsky
>On 29 March 2018 at 19:53, Paul Koning via
>cctalk <cctalk at classiccmp.org> wrote:
> >
> > It would be fun to do a "generalized Moore's
> Law" chart, showing not just transistor count
> growth (Moore's subject) but also the many
> other scaling changes of computing: disk
> capacity, recording density, disk IOPS, disk
> bandwidth, ditto those for tape, CPU MIPS,
> memory size, memory bandwidth, network bandwidth...
>
>This is the most telling thing I've seen in a long time...
>
>https://danluu.com/input-lag/
>
>--
>Liam Proven Profile: https://about.me/liamproven
>Email: lproven at cix.co.uk Google Mail/Hangouts/Plus: lproven at gmail.com
>Twitter/Facebook/Flickr: lproven Skype/LinkedIn: liamproven
>UK: +44 7939-087884 ČR (+ WhatsApp/Telegram/Signal): +420 7002 829 053