>
>Subject: RE: 8251 troubles
> From: dwight elvey <dkelvey at hotmail.com>
> Date: Sun, 27 Apr 2008 18:02:19 -0700
> To: "General Discussion: On-Topic and Off-Topic Posts" <cctalk at classiccmp.org>
>> From: cclist at sydex.com
>
>> Date: Sun, 27 Apr 2008 08:23:09 -0700
>> From: dwight elvey
>>
>> I'm going to assume that you have an 8251A, not an 8251. If the
>> latter, either sell it to a collector or have it bronzed and made
>> into a tieclasp. There are substantial differences between the -A
>> and non-A parts, all annoying.
>
>Hi Chuck
> The chip is a NEC 8251C.
The NEC part numbers are D8251C for the Intel 8251 and D8251AC for the
Intel 8251A, and the NEC parts have the same bugs, for compatibility (really!).
>>
>> Glancing at your code, I'm a bit puzzled by the final initialization
>> byte of 0x10. Why isn't this, say, 0x37? Why would you disable the
>> receiver? 8251A commands are bit-inclusive; that is, ALL bits in the
>> command register are interpreted independently of one another. Thus,
>> 0x10 sent to the command register doesn't just reset the error flags,
>> it also disables the transmitter and receiver and drops DTR and RTS.
>
> I'd originally done a 37 but, looking at some example code,
>I thought I'd try separating out that bit. No change in results.
>The data sheet seems to indicate that the flags will not affect
>or stop operation.
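For readers following along, the command-register layout under discussion can be sketched in C. This is an editorial illustration, not code from the thread; the bit names follow the Intel 8251A data sheet.

```c
/* 8251A command-register bit layout, per the Intel data sheet.  A
   command write replaces ALL of these bits at once; they are not
   individual set/clear operations. */
#define CMD_TXEN 0x01   /* transmitter enable     */
#define CMD_DTR  0x02   /* assert /DTR            */
#define CMD_RXEN 0x04   /* receiver enable        */
#define CMD_SBRK 0x08   /* send break             */
#define CMD_ER   0x10   /* reset error flags      */
#define CMD_RTS  0x20   /* assert /RTS            */
#define CMD_IR   0x40   /* internal (soft) reset  */
#define CMD_EH   0x80   /* enter hunt mode (sync) */

/* After any command write, the enables are whatever was just written. */
int tx_enabled(unsigned char cmd) { return (cmd & CMD_TXEN) != 0; }
int rx_enabled(unsigned char cmd) { return (cmd & CMD_RXEN) != 0; }
```

So 0x37 (CMD_ER|CMD_RTS|CMD_RXEN|CMD_DTR|CMD_TXEN) clears the error flags while keeping both directions running, whereas a bare 0x10 clears the errors but zeroes everything else, disabling the transmitter and receiver and dropping DTR/RTS.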
Be very careful with a soft reset, as that means sending the whole command
string again; most users of the 8251 reset it several times (up to 3),
then write the commands and read the data input a few times to clear it.
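That defensive bring-up ritual boils down to a fixed byte sequence; a C sketch (editorial illustration only -- the routine just builds the bytes, and actually sending them to the control port, plus the follow-up reads of the data port, is left to the caller):

```c
#include <stddef.h>

/* The defensive 8251 bring-up sequence: dummy command writes to walk
   the chip out of any half-completed mode/sync-character sequence,
   an internal-reset command, then the real mode and command bytes.
   buf must hold at least 6 bytes; returns the byte count. */
size_t usart_init_sequence(unsigned char mode, unsigned char command,
                           unsigned char *buf)
{
    size_t n = 0;
    buf[n++] = 0x00;     /* dummy writes: flush any state where   */
    buf[n++] = 0x00;     /* the chip still expects a mode or      */
    buf[n++] = 0x00;     /* sync-character byte                   */
    buf[n++] = 0x40;     /* IR bit set: internal (soft) reset     */
    buf[n++] = mode;     /* mode byte, e.g. $7A                   */
    buf[n++] = command;  /* command byte, e.g. $37                */
    return n;
}
```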
>
>>
>> The implication is that since the command register's write-only, you
>> have to remember the last command you sent if you want to reset the
>> error flag. One of the minor annoyances of a few early Intel
>> peripherals.
>>
>> Anent that last one--make certain that your handshake lines
>> (RTS/CTS/DTR) are set to the proper levels--an inactive CTS will
>> prevent the 8251A from transmitting.
>
> At least at the port, there is no change. There could be something
>at the chip. Since I've not even gotten to the sending serial, yet, CTS
>isn't yet an issue.
First assure yourself that the RS-232 drivers/receivers (1488/1489)
aren't cooked, that data is reaching the RXD pin, and that the system
clock and baud-rate clock are present at the correct rates.
Some 8080 code known to work:

INIT:   ;...
        LXI  H,$7A37    ; USART 7 BITS, NO PARITY, HIGH-SPEED
        CALL SETURT     ; INITIALIZE 8251
        ;...
*
* INITIALIZES 8251 USART TO VALUE PASSED IN H-L
*
SETURT: MVI  A,3        ; VALUE TO RESET UART
        OUT  UCTL       ; MAKE SURE.
        OUT  UCTL       ; UART IS RESET
        MVI  A,$77      ; VALUE TO ENTER COMMAND MODE
        OUT  UCTL       ; ENTER COMMAND MODE
        MOV  A,H        ; GET HIGH BYTE OF NEW COMMAND WORD
        OUT  UCTL       ; WRITE TO CONTROL PORT
        MOV  A,L        ; GET LOW BYTE OF NEW COMMAND WORD
        OUT  UCTL       ; WRITE TO CONTROL PORT
        RET
Allison
Date: Sun, 27 Apr 2008 08:23:09 -0700
From: dwight elvey <dkelvey at hotmail.com>
I'm going to assume that you have an 8251A, not an 8251. If the
latter, either sell it to a collector or have it bronzed and made
into a tieclasp. There are substantial differences between the -A
and non-A parts, all annoying.
Glancing at your code, I'm a bit puzzled by the final initialization
byte of 0x10. Why isn't this, say, 0x37? Why would you disable the
receiver? 8251A commands are bit-inclusive; that is, ALL bits in the
command register are interpreted independently of one another. Thus,
0x10 sent to the command register doesn't just reset the error flags,
it also disables the transmitter and receiver and drops DTR and RTS.
The implication is that since the command register's write-only, you
have to remember the last command you sent if you want to reset the
error flag. One of the minor annoyances of a few early Intel
peripherals.
Anent that last one--make certain that your handshake lines
(RTS/CTS/DTR) are set to the proper levels--an inactive CTS will
prevent the 8251A from transmitting.
Hope this helps,
Chuck
> Date: Thu, 24 Apr 2008 22:25:49 -0500
> From: Jim Leonard
> Wow, I can't see it being useful at launch with 16K -- the DOS 1.0 took up
> about 11K if memory serves, and command.com 5K all by itself... I guess
> that left 1.5K to run BASIC.COM and that was all she wrote...
But with the 5150, the diskette drive (160K format) was optional.
All you really needed was a CGA card to get going. You had BASIC in
ROM and a cassette tape interface. The CGA card could be hooked to a
modulator and make pretty pictures on your TV. IIRC, that's one of
the configurations described in the IBM literature.
As another poster has mentioned, it really did seem like the target
was the Apple II and TI 99/4 type of market, rather than the higher-
end "office" machines like the Morrow or Eagle. I almost bought a
NEC APC after I first saw a 5150, thinking that I'd completely
misunderstood IBM's marketing objectives.
Those "pick your poison" expansion slots are what saved that box,
which succeeded, sometimes it seems, in spite of IBM's worst efforts.
Maybe someone remembers that the US 120v 60Hz models were available
quite a bit before the 220v 50Hz models were--and IBM charged a
premium for the 220v models. I remember going to the local sales
office on Arques and trying to place an order for a dozen of the 220v
models. "Not available yet" was the answer. So, says I, "How about
buying some US models and running them at 120v 50Hz via a
transformer?" "That would void the warranty" was the response.
Apparently IBM also refused to support US systems sent overseas to
50Hz land. It was very strange.
We bought a bunch of US 5150s and shipped them anyway. They ran fine
on 50Hz.
Cheers,
Chuck
>
>Subject: RE: 8251 troubles
> From: "Chuck Guzis" <cclist at sydex.com>
> Date: Sun, 27 Apr 2008 11:12:06 -0700
> To: cctalk at classiccmp.org
>
>Date: Sun, 27 Apr 2008 08:23:09 -0700
>From: dwight elvey <dkelvey at hotmail.com>
>
>I'm going to assume that you have an 8251A, not an 8251. If the
>latter, either sell it to a collector or have it bronzed and made
>into a tieclasp. There are substantial differences between the -A
>and non-A parts, all annoying.
I have both in quantity. Generally it's the quirks of the
"improved part" that are annoying, but for simple async serial I/O
they are identical enough.
>The implication is that since the command register's write-only, you
>have to remember the last command you sent if you want to reset the
>error flag. One of the minor annoyances of a few early Intel
>peripherals.
Yep, helps to have a table of set up variables.
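A minimal C sketch of that "table of set up variables" idea -- keep a shadow copy of the write-only command register, so the error-reset bit can be OR'd in without disturbing the rest. (Editorial illustration: uart_cmd_write is a hypothetical stand-in for the real OUT instruction, and last_bus_write just records what would have reached the chip.)

```c
#define CMD_ER 0x10                    /* error-reset bit */

unsigned char cmd_shadow;              /* last command written */
unsigned char last_bus_write;          /* stand-in for the hardware port */

/* Hypothetical port write; real code would be OUT UCTL / outb(). */
void uart_cmd_write(unsigned char v)
{
    last_bus_write = v;
    cmd_shadow = v;                    /* remember it -- can't read it back */
}

/* Reset the error flags without dropping RTS/DTR or disabling
   the transmitter/receiver. */
void uart_clear_errors(void)
{
    uart_cmd_write(cmd_shadow | CMD_ER);
}
```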
>Anent that last one--make certain that your handshake lines
>(RTS/CTS/DTR) are set to the proper levels--an inactive CTS will
>prevent the 8251A from transmitting.
Yep,
Allison
Date: Sat, 26 Apr 2008 20:58:36 +0100 (BST)
From: Tony Duell
> Another silly thing is that refresh was controlled by a DMA channel. I'm
> sure it saved a couple chips, but it meant that errant, or
>
> And, indeed, using the 8237 DMA chip with a paging register (and not even
> doing that as elegantly as the FTS-88 did, which at least had one paging
> register per DMA channel) rather than using the 8089 'I/O processor'.
The 5150 has 4 page registers, one for each DMA channel (an LS670 4x4
RAM). Of course that limits one to doing DMA inside of 64K physical
blocks, but that's not too awful. The DMA-driven RAM refresh earned
some writers beer money as they could write articles on how to alter
the refresh rate to squeeze a bit more performance out of the basic
box. On the PC-AT, the big design flaw in the DMA circuitry to me
was the omission of handshaking for DMA memory-to-memory transfers.
It would have made the memory above 1024K much more useful on the
system without having to switch into protected mode.
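The 64K restriction falls straight out of how the page register and the 8237's 16-bit address counter combine; a C sketch (editorial illustration, names mine):

```c
#include <stdint.h>

/* 5150-style DMA addressing: the page register (the LS670) supplies
   A16-A19 while the 8237 itself counts only A0-A15, so a transfer
   can never cross a 64K physical boundary. */
uint32_t dma_phys_addr(uint8_t page, uint16_t offset)
{
    return ((uint32_t)(page & 0x0F) << 16) | offset;
}

/* Returns 1 if 'count' bytes starting at 'offset' would run past
   the 64K page; the 8237 would silently wrap within the page. */
int dma_crosses_64k(uint16_t offset, uint32_t count)
{
    return (uint32_t)offset + count > 0x10000u;
}
```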
The "use DMA for refresh" wasn't an awful compromise. At least the
video adapters had their own private memory and didn't load the
system down refreshing out of system RAM (i.e., IBM didn't use an
8275 CRTC, thankfully). Given that the 5150 used 16K DRAMs on the
planar, the Intel memory controller would have been the 8202, a
miserable piece of silicon. It would also have made upgrading to
64K DRAMs a bit more of a chore (requires an 8203).
The 8089 was pretty much of a dead-end product; limited to 20 bits of
addressability, expensive, with only 2 DMA channels. I never did
figure out why Intel introduced it. Our Intel sales engineer didn't
even want to talk about it.
I only vaguely remember the tamper-proof stuff on the 5150 PSU since
I dug into it only a few days after I had the system. I reversed the
fan--again, it was incredible that the case interior was kept at
negative pressure, sucking all sorts of crud in through the floppy
slots. I added a filter over the fan port on the case.
I was still doing the same thing 20+ years later. In an odd twist of
events, I also like to replace the cheap Chinese DC fans with nice
Japanese AC line-powered fans. I've never had one of the latter
develop fan noise--they're just better built.
Cheers,
Chuck
Date: Fri, 25 Apr 2008 22:18:59 +0100 (BST)
From: (Tony Duell)
> I was referring to critical timing between the hardware control lines. For
> example I've used an interface (not QIC anything) where one device
> asserted a signal, then the other device had to acknowledge within a
> certain (short) time (1us or so), otherwise there would be big problems.
The best known example of this in spades is the Pertec tape
interface. "Here's the data, catch it!" type of interface. On read,
there's a strobe asserted when data's ready, but no handshake or
other means of throttling the flow. Same for the write side--a "data
accepted" sort of signal, but the data must have been presented on
the host side. Lost data conditions aren't diagnosed, unless the
host decides to incorporate logic to do it (i.e. detect strobe before
data accepted/ready). Given that there's no standard on tape block
length, most Pertec controllers have a bunch of RAM or at least a
good-sized FIFO to deal with the condition that the host may not be
able to keep up with events. Data errors are detected by the drive,
but presented during the course of a transfer, so again, the
controller must be there to latch them when they occur.
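The buffering described here -- catch strobed bytes with no way to throttle the drive, and flag lost data yourself -- can be sketched as a simple ring buffer in C (editorial illustration; size and names are mine):

```c
#include <stddef.h>

/* Controller-side FIFO for a Pertec-style interface: the drive
   strobes data with no flow control, so the controller must buffer
   and detect overruns itself. */
#define FIFO_SIZE 4096

unsigned char fifo[FIFO_SIZE];
size_t fifo_head, fifo_tail;
int fifo_overrun;

/* Called on each read strobe from the drive: catch the byte or
   record a lost-data condition -- there is no way to slow the drive. */
void on_read_strobe(unsigned char byte)
{
    size_t next = (fifo_head + 1) % FIFO_SIZE;
    if (next == fifo_tail) { fifo_overrun = 1; return; }  /* host fell behind */
    fifo[fifo_head] = byte;
    fifo_head = next;
}

/* Host side drains at its own pace; returns -1 when empty. */
int fifo_pop(void)
{
    unsigned char b;
    if (fifo_tail == fifo_head) return -1;
    b = fifo[fifo_tail];
    fifo_tail = (fifo_tail + 1) % FIFO_SIZE;
    return b;
}
```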
IIRC, QIC-02 uses handshaking; QIC-36 does not.
> The QIC-36 ISA card I have in my junk box appears to be essentially a
> QIC-02 host interface and a QIC-02-to-QIC-36 bridge on the same PCB. I
> keep it because the ASICs on the board are the same as those on a separate
> QIC-02-to-QIC-36 bridge that I sometimes use with my PERQ, and thus the ISA
> card is a source of spares...
The QIC-36 ISA cards I have are Wangteks--they use an 8085 and a fair
number of house-labeled ICs to do their dirty work. They're
integrated units, though I suspect there's a fair amount of shared
logic with the Wangtek QIC-36-to-02 controller, as it also uses an
8085 and has some of the same house-labeled packages on it.
My QIC-02 cards are by Alliance Technology; nothing to write home
about--just some LSTTL logic and maybe a PAL for address decoding.
About what you'd expect--and apparently clones of the Wangtek PC-02.
Cheers,
Chuck
Date: Thu, 24 Apr 2008 13:45:10 -0500
From: Jules Richardson
> Indeed - in the context of the discussion I got involved in (which was
> actually about memory prices, not the PC specifically), we were just
> interested in what could be done with a 5150 *when it was new* - and I
> think all that IBM offered then was the 64K boards (and of course third
> parties didn't exist!)
Yup, but vendors like Quadram and Everex came along pretty quickly.
Lots of folks realized that the 64K limitation was a huge one.
> You know, I had a thought - I wonder if those 64K boards can't be
> jumpered beyond the 256KB boundary? Maybe that's why I'm remembering a
> 256KB limit on the original machines (and using original IBM expansion
> boards). Getting around that would mean physically hacking the address
> lines/decoding of the boards...
There were also some hacks, since 64K DRAMs were available when the
5150 was launched (why IBM didn't design the planar with jumpers to
select memory type is beyond me). If you were handy with a soldering
iron and an Xacto knife, you could cut-and-jumper your way to 256K
planar memory. The big pain was the soldered-in first row of 16K
DRAM.
I have a booklet from an outfit called "Purple Computing" that marketed
a little piggyback board that allowed one to leave the first row of
DRAM in. It was basically 4 rows of sockets--you still had to do the
cutting and jumpering, but without removing the 16K DRAM. Anyone who
wants the booklet can have it for postage.
I was happy to retire my 5150 and get a genuine Taiwanese clone mobo
with 256K and 8 slots.
Still, PC-DOS would run in 64K.
Cheers,
Chuck
Hello,
I'm trying to find someone who can print an old mag card for a Selectric
typewriter, ca. 1973. I can't seem to find anyone around who has got or
seen one of these in person (lately). Wondering if you can point me in
the right direction?
Thanks for your help,
Jon Walkwitz
I just remembered that I'd wanted to mention this, but things have
been busy and I forgot. My mother knows of my fondness for antique
computers, and she keeps her eyes open at thrift stores and related
places. She picked up a Texas Instruments Compact Computer 40 for me
and presented it to me for my birthday a month ago.
I had never heard of this machine, but now I've done some reading
and I've played with it a bit. It's pretty neat! Has anyone else
here messed with one?
-Dave
--
Dave McGuire
Port Charlotte, FL
Re: "I'm not aware of any other mainstream microcomputer introduced in or
before 1981 that could support contiguous, directly addressable RAM
configurations from 16KB to over 512KB."
The Heathkit (Zenith Data Systems) Z-100 supported 768k of addressable
contiguous RAM with full color bit mapped graphics video. Generally
considered contemporaneous with the PC, but was actually introduced slightly
later.