Reminds me of a challenge I had in the early '80s. The place I worked
made IC test and evaluation systems; the starting price in 1980 was around
$300K, and many were close to $1.5 million. This one was for IBM. They
were designing a 288K-bit RAM, and one thing they wanted was to be able
to 'see' failed bits as parameters such as supply voltage were
changed. If you looked at the die, it was 9 'squares' of (I think) 128 x
256 cells or bits each. The 9th was for parity. The memory was read by the
system, and a 0 or 1 was stored in a buffer in the system. The system was
run by a PDP-11/44; the display was a Tektronix GMA125 with option 42/43.
The GMA125 was the OEM display used in the 4116, a 25" DVST terminal.
Option 42/43 was fed from a DR11. The 42/43 could be driven in Tek 401x
format (the same format you still see today when you put your X11 display
into Tek mode), which had a point-plotting set of commands.
So one had to read this external memory in a loop (it came back in some
long-forgotten 16-bits-per-something mode), calculate the position of each
'bit' (memory block, plus an x and y position within the block), and
display a dot or a plus or something at the respective location on the
CRT. And it all had to be done within the time IBM specified: loops within
loops within loops, and finally a test for 1 or 0 and out the DR11W.
The only way I could get the code to meet IBM's speed requirement was to
do the unthinkable. At startup, the innermost test, 'display if zero' or
'display if one', was modified in place to become a branch-if-true or a
branch-if-false.
Sometimes you just have to violate all the rules. Other fun things were
using bit shifts and indexing for some of the coordinate translations, as
in the sketch below.
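For flavor, here is a minimal C sketch of the shape of that inner loop.
To be clear, this is my reconstruction, not Pete's code: the original was
hand-tuned PDP-11 assembly, the names and the 3 x 3 block layout are
guesses, and where the original patched the branch instruction itself at
startup, portable C can only hoist the 0-vs-1 decision into a mask
computed once outside the loops:

    #include <stdint.h>
    #include <stdio.h>

    enum { BLK_W = 128, BLK_H = 256 };      /* cell-array size, per the story */

    static void plot_point(int x, int y)    /* stand-in for Tek 401x output   */
    {
        printf("plot %d,%d\n", x, y);
    }

    /* Scan one block of captured bits (packed 16 per word) and plot either
     * the 0s or the 1s.  The mode test happens once, up front, instead of
     * once per bit -- that is the spirit of the self-modifying trick. */
    void show_block(const uint16_t *bits, int blk, int show_ones)
    {
        uint16_t flip = show_ones ? 0x0000 : 0xFFFF;  /* decided once      */
        int x0 = (blk % 3) * BLK_W;         /* block origin on the screen, */
        int y0 = (blk / 3) * BLK_H;         /* found with index arithmetic */

        for (int y = 0; y < BLK_H; y++) {
            const uint16_t *row = bits + y * (BLK_W / 16);
            for (int w = 0; w < BLK_W / 16; w++) {
                uint16_t word = row[w] ^ flip;   /* "set" now means "plot" */
                for (int b = 0; b < 16; b++)
                    if ((word >> b) & 1)
                        plot_point(x0 + w * 16 + b, y0 + y);
            }
        }
    }

The XOR mask is the tame cousin of rewriting the branch: both move the
display-zeros-or-ones decision out of the 9 x 128 x 256 iteration space
and into a single setup step.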
Oh well; that was back when every instruction's time made a difference.
It was challenging, but fun.
-pete
On Wed, May 4, 2016 at 9:56 AM, Bill Sudbrink <wh.sudbrink at verizon.net> wrote:
> I took a peek at the access logs for the Cromemco Dazzler
> files that I recently put up on my web server. I'm
> gratified to see that a lot of people are taking advantage
> of the availability of these documents, which have not
> recently (if ever) been easily available on the web. I
> also see that a lot of people took the Dazzlemation HEX
> file and the Magenta Martini paper tape image, presumably
> to run on Udo Monk's great Windows Cromemco Z1 simulator.
>
> Also, thanks to everyone that generated pdf files for me!
>
> One thing I noticed is that not many people looked at the
> disassembly of Dazzlemation. If you are an 8080 or Z80
> programmer (or any 8-bitter for that matter) I really
> recommend that you take a look; it's a real treat. I'm
> reliably informed that Mr. Dompier hand wrote that program
> LITERALLY (hand, pencil, paper), no editor, no assembler.
> He then toggled it in (or maybe raw keyed it in with a
> primitive ROM monitor) and went through a few iterations of:
>
> 1) store to paper tape
> 2) modify in memory
> 3) test
> 4) go to 1
>
> It's neat to see some of the "tricks" he used and also the
> level of sophistication of the code. It does a lot of
> stuff in not a lot of bytes. Also, here and there, in
> "dead" areas, you can see the debris of ideas that he
> started and then abandoned.
>
> Bill S.
>
>
>
Back in the early '90s, I remember that many times I'd see a print
advertisement for a Video Toaster or a new genlock card, and it would say
things like "features you'd have to pay thousands for in a professional
paintbox or titler!" I always wondered what they were talking about, since
I'd never seen how broadcast was done back then (and still don't know). So
I'm really talking about the tech of the '80s (since that's what the
marketing folks were referring to, I assume).
Here's what I could find that I'm speculating were the "competition" of
the time:
The Quantel Paintbox:
https://en.wikipedia.org/wiki/Quantel_Paintbox
Superpaint, running on a DG Nova 800:
https://en.wikipedia.org/wiki/Superpaint
The Bosch FGS 4000:
https://www.youtube.com/watch?v=9oyGaEu7D7s
These are about the only ones I could find. Does anyone know of any
others?
Also, here are my favorite paint and 2D animation programs of yore. If you
guys have others that you loved and remember, what were they?
DOS
1. Deluxe Paint II Enhanced
2. PC Paintbrush
3. Autodesk Animator
4. Paul Mace's GRASP
5. Deluxe Paint Animation
Amiga
1. Photogenics
2. Photon Paint
3. TVPaint
4. Brilliance
5. Disney Animation Studio
Sorry, I didn't use the Mac enough to form any favorites, though I did
love Fractal Design Painter (now Corel Painter).
-Swift
> I have a Visual Basic 4 application that I need to run on modern 64-bit
> hardware. I can do this in a VM, but I really need this VM to be wicked
> small, like under a gig. The smallest XP VM I've seen is 600MB (which
> might be good), but XP is becoming very hard to source these days.
VB4 was a bridge between 16-bit Windows 3.1 applications and 32-bit
everything later (such as the DOS-based Win-95, -98, and -ME, and all of the
NT-based operating systems, which is everything else through Win-10 64-bit).
As such, the package included both a 16-bit and a 32-bit compiler. If your
application was compiled using the 16-bit version, you're pretty much stuck
with XP-32 or earlier (in a VM, if necessary), as it will automatically
spawn a 16-bit virtual environment (ntvdm.exe) to run the 16-bit
applications. Win7 and beyond, and all 64-bit versions, do not support this
feature (I supported a VB3 application for 20 years; Win7 was what finally
broke it for good.)
If it was compiled to 32-bit, then you should be pretty much good to go; you
may run into a few insurmountable problems with some now-unsupported OCXs.
Other than those, all of the 32-bit code should run fine on anything
current.
If you have the source, you're also in pretty good shape. VB4 is very easy
to port to VB6; there were almost no backward-incompatible features of the
later Visual Basic classic languages. Find an old copy of VB6 SP6 and
recompile the application (perhaps replacing some of the failed OCXs with
others that will work; a common one was DBGrid, which is quite easy to
replace with FlexGrid), and you're golden. I currently support just such an application,
and although the development environment requires a couple of tricks to get
working smoothly, the compiled application works just fine on Win10-64.
Drop me a note off-line if you'd like any additional or more specific help
with this; I have a reasonable amount of experience with just this problem.
Of course, there are always older versions of Wine...
~~
Mark Moulding
Does anyone here do transformer specification or design, and would you be
interested in some consulting dollars to help me source/create a weird
transformer option?
What I need is a 12V primary and a 12V:12V center-tapped secondary that can
support 12VA of power. Higher voltages are OK, but not needed. I am
struggling with the Xformer details, but I know it needs to be center
tapped, 1:1:1 @ 12VA.
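For reference, the worst-case winding current that spec implies, assuming
(my assumption, not Jim's) that the full 12 VA may be drawn through any
single 12V winding, is a simple ratio; in LaTeX:

    I_{\max} = \frac{S}{V} = \frac{12\ \mathrm{VA}}{12\ \mathrm{V}} = 1\ \mathrm{A}

so each of the three windings would need to be good for roughly 1 A at the
1:1:1 turns ratio.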
Jim
--
Jim Brain
brain at jbrain.com
www.jbrain.com
Yet another nice color brochure.
https://dl.dropboxusercontent.com/u/96935524/Datormusuem/lab11.pdf
Has anyone seen a VR20 in real life? It's rather interesting to be able to
do a red and green X/Y screen based on different energy levels. Would
someone care to explain how that works?
If I read the fine print on the back correctly (and compare it with the
others), I would guess that this brochure is from 1971.
For those of you running DECnet/E on simulators...
paul
> Begin forwarded message:
>
> From: Paul Koning <paulkoning at comcast.net>
> Subject: Re: RSTS and slow DECnet operation in SIMH
> Date: May 2, 2016 at 1:37:45 PM EDT
> To: SIMH <simh at trailing-edge.com>
>
>>
>> On Apr 19, 2016, at 2:46 PM, Paul Koning <paulkoning at comcast.net> wrote:
>>
>> With help from Mark Pizzolato, I've been looking at why RSTS (DECnet/E) operates so slowly when it's dealing with one way transfers. This is independent of protocol and datalink type; it shows up very clearly in NFT (any kind of file transfer or directory listing) and also in NET (Set Host). The symptom is that data comes across in fairly short bursts, separated by about a second of pause.
>>
>> This turns out to be an interaction between the DECnet/E queueing rules and the very fast operation of SIMH on modern hosts. DECnet/E will queue up to some small number of NSP segments for any given connection, set by the executor parameter "data transmit queue max". The default value is 4 or 5, but it can be set higher, and that helps some.
>>
>> The trouble is this: if you have a one way data flow, for example NFT or FAL doing a copy, the sending program simply fires off a sequence of send-packet operations until it gets a "queue full" reject from the kernel. At that point it delays, but the delay is one second since sleep operations have one second granularity. The other end acks all that data quite promptly, but since the emulation runs so fast, the whole transmit queue can fill up before the ack from the other end arrives, so the queue full condition occurs, then a one second delay, then the process starts over.
>>
>> This sort of thing doesn't happen on request/response exchanges; for example the NCP command LOOP NODE runs at top speed because traffic is going both ways.
>>
>> I tried fiddling with the data queue limit to see if increasing it would help. It seems to, but it's not sufficient. What does work is a larger queue limit (32 looks good) combined with CPU throttling to slow things down a bit. I used "set throttle 2000/1" (which produces a 1 ms delay every 2000 instructions, i.e., roughly 2 MIPS processing speed which is at the high end of what real PDP-11s can do). Those two changes combined make file transfer run smoothly and fast.
>>
>> Ideally DECnet/E should cancel the program sleep when the queue transitions from full to not-full, but that's not part of the existing logic (at least not unless the program explicitly asks for "link status notifications"). I could probably add that; the question is how large a change it is -- does it exceed what's feasible for a patch. I may still do that, but at least for now the above should be helpful.
>
> Followup: I created a patch that implements the "wake up when the queue goes not-full". Or more precisely, it wakes up the process whenever an ack is received; that covers the problem case and probably doesn't create many other wakeups since the program is unlikely to be sleeping otherwise.
>
> The attached patch script does the job. This is for RSTS V10.1. I will take a look at RSTS 9.6; the patch is unlikely to apply there (offsets probably don't match) but the concept will apply there too. I don't have other DECnet/E versions, let alone source listings which is what's needed to create the patch.
>
> With this patch, you can run at full emulation speed, with the default queue limit (5). In fact, I would recommend setting that limit; if you make the queue limit significantly larger, the patch doesn't help and things are still slow. I suspect that comes from overrunning the queue limits at the receiving end. (Note that DECnet/E leaves the flow control choice to the application, and most use "no" flow control, i.e., on/off only which isn't effective if the sender can overrun the buffer pool of the receiver.)
>
> To apply the patch, give it to ONLPAT and select the monitor SIL (just <CR> will give you the installed one). Or you can do it with the PATCH option at boot time, in that case enter the information manually. The manual will spell this out some more, I expect.
>
> I have no idea if this issue can appear on real PDP-11 systems. Possibly, if you have a fast CPU, a fast network (Ethernet) and enough latency to make the issue visible (more than a few milliseconds but way under a second). In any case, it's unlikely to hurt, and it clearly helps a great deal in emulated systems.
>
> paul
>
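To make the timing interaction above concrete, here is a toy model in C.
It is purely illustrative (not RSTS or SIMH code); the queue limit of 5 and
the one-second sleep granularity are taken from Paul's description:

    #include <stdio.h>

    int main(void)
    {
        const int queue_limit = 5;  /* "data transmit queue max" default  */
        int queued = 0, sent = 0;
        int clock = 0;              /* simulated seconds                  */

        while (clock < 5) {
            if (queued < queue_limit) {
                queued++;           /* send-packet succeeds instantly;    */
                sent++;             /* emulation far outruns the acks     */
            } else {
                clock++;            /* queue full: sleep a whole second,  */
                queued = 0;         /* during which everything is acked   */
                printf("t=%ds: %d segments sent so far\n", clock, sent);
            }
        }
        return 0;
    }

Throughput collapses to roughly queue_limit segments per second no matter
how fast the emulated CPU runs, which is exactly the burst-pause-burst
behavior described; waking the sender on each ack, as the patch does,
removes the one-second stalls.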
On Thu, 4/28/16, Liam Proven <lproven at gmail.com> wrote:
>>> The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten.
>
> It is, true, but it's a sideline now. And the steps made by Inferno
> seem to have had even less impact. I'd like to see the 2 merged back
> into 1.
Actually, it's best not to think of Inferno as a successor to Plan 9, but
as an offshoot. The real story has more to do with Lucent internal
dynamics than with attempting to develop a better research
platform. Plan 9 has always been a good platform for research, and
the fact that it's the most pleasant development environment I've
ever used is a nice plus. However, Inferno was created to be a
platform for products. The Inferno kernel was basically forked from
the 2nd Edition Plan 9 kernel, and naturally there are some places
that differ from the current 4th Edition Plan 9 kernel. However, a
number of the differences have been resolved over the years, and
the same guy does most of the maintenance of the compiler suite that's
used for native Inferno builds and for Plan 9. Although you usually
can't just drop driver code from one kernel into the other, the differences
are not so great as to make the port difficult. So both still exist and
both still get some development as people who care decide to make
changes, but they've never really been in a position to merge.
And BTW, if you like the objectives of the Limbo language in Inferno,
you'll find a lot of the ideas and lessons learned from it in Go. After
all, Rob Pike and Ken Thompson were two of the main people behind
Go and, of course, they had been at the labs, primarily working on
Plan 9, before moving to Google.
BLS