Hi All;
For those poor misinformed folk -- the Epson QX-10 was not a laptop.
Many of the common features in use on keyboards today first
appeared on the QX-10 (Help key, Undo key and others).
Jerry Pournelle (held stock in IBM ---- ick!!!!!!) was a major influence in
the demise of the QX series. He spent three days with Chris Rutkowski
reviewing the Valdocs III integrated software package. He saw a beta
version with some bugs in it. In the course of those three days he kept
saying what a great piece of software Valdocs was. Chris explained
several times that it was a beta version and not quite ready for release.
Jerry was at that time a reviewer for the three major computer mags
(Personal Computing, Byte, and Popular Computing). Epson was just
approaching the big time with 100,000 QX-10s sold in the USA, and Chris
was counting on Jerry's reviews as a big market boost. Well, Jerry
released his reviews in all three mags at the same time, and the market
for the Epson QX-10 dropped overnight. He never said the version he saw
was a beta, and proceeded to rip the QX-10 apart with much erroneous
information; it was obvious that he hated anything that wasn't IBM
(especially if it was better, which the Epson was). It was ten times the computer.
I know much more about it if anyone is interested.
I have a QX-10 complete that I bought in 1986 or 1987 (don't remember).
It still works somewhat. I have the Valdocs software, CP/M 80 and much other
software. The reset button was bypassed when it failed, and the keyboard
cord has been spliced together. It also has a 300 baud modem and the
beta version of an optical mouse, which works with ValDraw only. I was a
beta tester for the Epson QX-10. I know first hand most of its history
and what came with it. I had a complete collection of all mags written
for it. Don't know what I did with them.
I am interested in selling it if someone wants it. It also needs a new
clock battery (NiCd). I am not sure the floppies are still readable.
doug
kreation(a)juno.com
From: Pete Turnbull <pete(a)dunnington.u-net.com>
>> I used to sell upD8080AF for NEC and I had to know my competition.
>
>Ah, then you'll know what the difference(s) was/were. While looking up
>8080A and 8080 (except all my 1976 and 1979 Intel Data Books say is that
>they're functionally and electrically compatible) I discovered that NEC
>made two versions, both called 8080A, but one with some enhancements. I
>assume that this was rather like the idea they used in the V20. The note
>said that 8080A's from authorised second-sources were completely code
>compatible but the enhanced NEC version was not, and wouldn't always run
>certain Intel code. What was the difference, and what made it not run
>certain programs?
Ah no, not a V20 thing. The first version of the NEC 8080A was not fully
compatible at the hardware level. It was the interrupt/hold thing. It
originated with the fact that NEC did not use Intel masks but reverse
engineered from working Intel parts. The D8080AF was fully compatible
with the Intel 8080A, and I mean fully. FYI: only one part was the
8080AF; the other was the 8080A.
The program error was mostly invisible but impacted those programs that
used both interrupts and DMA. The specifics are centered on the hold
state and the DAD instruction: Intel treated it like a write and the
NEC 8080A treated it as a read.
Generally speaking, the halt/hold/interrupt interactions and timing made
designing complex systems much more difficult than would first appear.
It was a reason for the rapid adoption of the 8085 and Z80 even though
they were more expensive early on.
Allison
This weekend was good for finding vintage HP items, but I'm
looking for additional documentation and parts:
1. HP 9836C
Picked up the CPU and monitor, but there were no manuals or media.
The machine boots and displays:
9836C 2250A02013
Copyright 1982,
Hewlett-Packard Company
All Rights Reserved
BOOTROM 3.0
Keyboard
Color Graphics
2 Flexible Discs
HP-IB
HP 98628- at 20
917344 Bytes
SEARCHING FOR A SYSTEM (ENTER to Pause)
RESET To Power-Up
At which point it hangs, presumably waiting for an operating system.
Any leads on documentation, operating systems, and additional options
for this computer would be most welcome.
2. HP 98241-67901 I/O Extender
Appears to be for the 9825 series and has a test point for each of the
interface lines.
3. HP ROM Drawers
These were found loose in a parts bin and they appear to be ROM drawers
for the HP 9825 series of desktop computers. I have three drawers, one with
six slots (all of 'em empty) and two drawers with four slots. Each of the
four-slot drawers has a single ROM labeled 98338A Assembly Execution 1 (the
other slots are empty). Pictures can be found on my "Items Needing Help" web
page at www.decodesystems.com/help-wanted/index.html .
Again, any leads on documentation or additional ROMs for these drawers would be
appreciated.
Thanks!
Cheers,
Dan
www.decodesystems.com
Okay, so I've been sitting here trolling the newsgroup ever since my last
post. I've read almost every post that drops into my mail-box - a lot of
questions I can't answer.
Anyway, I'd asked about an IBM PowerStation 530, and got a lot of responses,
and eventually got a cable (Thanks Peter, your check's going in the mail on
Monday) to make the 3 micro-BNC in DB housing output into a 3 BNC RGB
hookup. I went to the local place to look at crap, and found only one
monitor that has the three BNC hookups - it's an HDS ViewStation, with no
other real info on it... Will this work with the system? Anyone heard of
this thing before. Also, it's $50 - is that really a reasonable price or
should I try to talk him down?
If it helps, the diagrams next to the RGB hookups on the back of the monitor
showed a vertical arrow next to the R and a horizontal one next to the B,
and nothing next to the green. This would lead me to believe that it syncs
vertical on Red and horizontal on Blue, which obviously won't work, but I
just want to make sure.
Thanks again for any and all help. If this monitor won't work, I'm gonna go
look again and see what sort of 5 BNC models I can find... I was told some
will work (Thanks Bennett).
Blair
On Sep 30, 14:41, Derek Peschel wrote:
> On Sun, Sep 30, 2001 at 01:38:13PM +0000, Pete Turnbull wrote:
> > confusing, try comparing the carry flags implemented in a Z80 and a 6502
> > (they do different things for subtractions!).
>
> And (on the 6502) if you want to get a given result using addition vs.
> subtraction, the carry flag must be set differently. i.e., if you have
>
> LDA #$FF LDA #$FF
> ADC #$01 vs. SBC #$FF
>
> and you want A to end up with $00, then you must put before the LDA:
>
> CLC vs. SEC
>
> I might never have known this except that I wanted to check my post
> before posting it.
Yes, I found that confusing when I got my first 6502 machine (I had a Z80
before that).
> > The other problem you have is with the overflow. It's not a problem with
> > signed vs unsigned numbers as some people have implied, it's with the order
>
> Well, as one of those people I may as well ask you how much of my post is
> correct and how much is junk.
I don't think much was junk :-) You just didn't quite solve the puzzle.
You suggested the flags might change if you change the system (from signed
to unsigned, I think you meant), and they don't. In general, a processor
doesn't know whether you're thinking of signed or unsigned numbers when you
write the code. However, the meanings may change. Normally you don't pay
much attention to the carry for signed arithmetic; it's the overflow that
tells you useful things like whether the sign bit is correct or the answer
is meaningful. You still use the carry for multi-byte arithmetic, of
course, but it's automagically correct and you don't have to think about it
(other than a preliminary SEC or CCF or whatever). On the other hand, you
don't normally have any interest at all in the overflow for unsigned
arithmetic, though it's still there; it just doesn't mean anything useful.
You also suggested the flags might change between addition and subtraction,
and in some processors, yes they do. Some processors complement the carry
flag after a subtraction (ones that don't want a SEC before a subtraction,
for example :-))
The example you chose was perhaps what confused you (1 - (-2) = 3 vs. 1 + (-2) = -1);
the problem being that -2 is its own 2's-complement in a 2-bit signed
system (0 is the other one that causes trouble, this time with the carry).
You'd get the same problem with -8 in a 4-bit system, or -128 in an 8-bit
system.
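The flag behaviour described above can be sketched in a few lines of Python (a hypothetical model, not from the thread; `adc` and `sbc` are simplified and ignore decimal mode):

```python
# A simplified model of the 6502's ADC and SBC (decimal mode ignored).
def adc(a, operand, carry):
    """ADC: A + M + C -> (result, carry_out, overflow)."""
    total = a + operand + carry
    result = total & 0xFF
    carry_out = 1 if total > 0xFF else 0
    # Overflow is set when both inputs share a sign bit the result lacks.
    overflow = ((a ^ result) & (operand ^ result) & 0x80) != 0
    return result, carry_out, overflow

def sbc(a, operand, carry):
    """SBC: A - M - (1 - C), i.e. ADC of the one's complement of M."""
    return adc(a, operand ^ 0xFF, carry)

# Derek's example: CLC; LDA #$FF; ADC #$01 vs. SEC; LDA #$FF; SBC #$FF.
print(adc(0xFF, 0x01, 0))  # (0, 1, False): A ends up $00 either way
print(sbc(0xFF, 0xFF, 1))  # (0, 1, False)

# The "own 2's-complement" trouble case: 1 - (-128) should be 129,
# which doesn't fit in signed 8 bits, so the overflow flag is set.
print(sbc(0x01, 0x80, 1))  # (0x81, 0, True)
```

Note the carry going in differently (0 for the add, 1 for the subtract) yet both paths leaving the same result and flags, which is the point of the CLC/SEC pairing above.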
--
Pete Peter Turnbull
Network Manager
University of York
On Sep 30, 13:54, Cameron Kaiser wrote:
> > Maybe part of the confusion arises because many processors (including the
> > 8080) complement the carry flag at the end of a subtraction, so that it can
> > be used directly as a "borrow" flag in multibyte subtractions. Others
> > (like the 6502) don't do that.
>
> I'm not sure what you mean by this, but on my C128,
>
> 1300 ad 01 04 lda $0401
> 1303 38 sec
That's my point: you have to SEC first. You would clear it on most other
processors. Also, after a subtraction, the 6502 sets the carry to 0 if a
borrow was necessary, or to 1 if not; a 6800, 8080, or Z80 does the
opposite.
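The two borrow conventions can be illustrated with a small Python sketch (hypothetical helper names, not from the posts):

```python
# Subtraction-carry conventions for A - B on 8-bit values.
def sub_6502(a, b):
    # 6502: carry is left SET when no borrow was needed (hence SEC first).
    return (a - b) & 0xFF, (0 if b > a else 1)

def sub_8080(a, b):
    # 6800/8080/Z80: carry is SET when a borrow WAS needed.
    return (a - b) & 0xFF, (1 if b > a else 0)

print(sub_6502(5, 3))  # (2, 1)   no borrow -> carry set on 6502
print(sub_8080(5, 3))  # (2, 0)   no borrow -> carry clear on 8080/Z80
print(sub_6502(3, 5))  # (254, 0) borrow -> carry clear on 6502
print(sub_8080(3, 5))  # (254, 1) borrow -> carry set on 8080/Z80
```

Same result byte every time; only the carry convention flips, which is why code ported between the two families needs its CLC/SEC and conditional branches on carry reviewed.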
--
Pete Peter Turnbull
Network Manager
University of York
On Sep 30, 19:12, ajp166 wrote:
> no, it was 2 MHz.
>
> using the 8224 the usual crystal was 18.435 MHz (2.0483333*9).
> there was a -1, -2 and -3 version of the part but the fastest was 3 MHz.
>
> I used to sell upD8080AF for NEC and I had to know my competition.
Ah, then you'll know what the difference(s) was/were. While looking up
8080A and 8080 (except all my 1976 and 1979 Intel Data Books say is that
they're functionally and electrically compatible) I discovered that NEC
made two versions, both called 8080A, but one with some enhancements. I
assume that this was rather like the idea they used in the V20. The note
>said that 8080A's from authorised second-sources were completely code
compatible but the enhanced NEC version was not, and wouldn't always run
certain Intel code. What was the difference, and what made it not run
certain programs?
--
Pete Peter Turnbull
Network Manager
University of York
no, it was 2 MHz.
using the 8224 the usual crystal was 18.435 MHz (2.0483333*9).
there was a -1, -2 and -3 version of the part but the fastest was 3 MHz.
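As a quick sketch of the divide-by-9 arithmetic above (the crystal figure is the one quoted in the post):

```python
# The 8224 clock generator divides its crystal frequency by 9
# to produce the 8080 clock.
crystal_mhz = 18.435            # value quoted above
clock_mhz = crystal_mhz / 9
print(round(clock_mhz, 4))      # ~2.0483, i.e. the "2 MHz" 8080 clock
```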
I used to sell upD8080AF for NEC and I had to know my competition.
Allison
-----Original Message-----
From: Richard Erlacher <edick(a)idcomm.com>
To: classiccmp(a)classiccmp.org <classiccmp(a)classiccmp.org>
Date: Sunday, September 30, 2001 6:23 PM
Subject: Re: 8080 vs. 8080A
>BTW, the 8080 was a 2.5 MHz part, wasn't it? I've got a couple Intel
>app-notes where they generate a baud-rate clock from 24.576 MHz and
>generate the CPU clock from that, at 2.4576 MHz for the CPU. That's on
>an i8080-2.
>
>Dick
>
>----- Original Message -----
>From: "ajp166" <ajp166(a)bellatlantic.net>
>To: <classiccmp(a)classiccmp.org>
>Sent: Sunday, September 30, 2001 2:31 PM
>Subject: Re: 8080 vs. 8080A
>
>
>> Wrong!
>>
>> The I8080A is AS fast as the i8080. The i8080A-1 is faster, but not twice
>> as fast; the fastest 8080[A] was only 3 MHz and the standard part was 2 MHz.
>>
>> Allison
>>
>> -----Original Message-----
>> From: John Galt <gmphillips(a)earthlink.net>
>> To: classiccmp(a)classiccmp.org <classiccmp(a)classiccmp.org>
>> Date: Sunday, September 30, 2001 3:57 PM
>> Subject: Re: 8080 vs. 8080A
>>
>>
>> >"The i8080A is essentially twice as fast as the standard i8080 and COULD
>> >be used more easily with low-power logic since its demands aren't as
>> >stringent".
>> >
>> >Ok, that's a good start.
>> >
>> >But, I don't think "low power" TTL (transistor transistor logic) had
>> >anything to do with the complexity of the code being executed on the
>> >chip. True? I had assumed
>> >that the references to the 8080 only being compatible
>> >with "low-power TTL" and the 8080A being compatible
>> >with "standard TTL" had something to do with the support chips (RAM,
>> >clock, etc) that could be used with the 8080 vs. the 8080A.
>> >
>> >Since I'm new to this mail list, let me explain why I would
>> >show up here and ask such a question to begin with.
>> >
>> >I'm a chip collector. I am trying to document the differences between
>> >the different early Intel microprocessors. Not worried about massive
>> >detail, just the major differences (PMOS vs. NMOS vs. HMOS, clock
>> >speed, transistor count, etc).
>> >
>> >The only microprocessor that I don't have a good handle
>> >on is the 8080 and the difference between the 8080 and 8080A.
>> >
>> >I also know that the 8080 was introduced sometime
>> >around April 1974. I have not been able to find an
>> >introduction date for the 8080A. Was it introduced at
>> >the same time? Does anyone know?
>> >
>> >I also need an Intel C8080 or C8080-8 for my
>> >collection. If you have one, I want it. I have been looking
>> >for one for months and have not been able to find one.
>> >If you have either of these chips in good condition
>> >(no desoldered parts wanted), I'm offering $400.00
>> >for the C8080-8 and $500.00 for a C8080.
>> >
>> >If you need a replacement for the C8080 or C8080-8 you sell me, I'll
>> >GIVE you a D8080A free as part of the deal.
>> >
>> >----- Original Message -----
>> >From: "Richard Erlacher" <edick(a)idcomm.com>
>> >To: <classiccmp(a)classiccmp.org>
>> >Sent: Sunday, September 30, 2001 1:21 PM
>> >Subject: Re: 8080 vs. 8080A
>> >
>> >
>> >> This makes no sense at all, though it may be because I'm
>> >> misinterpreting the way in which you've put it.
>> >>
>> >> I have Intel boards that come in versions with the i8080 and also,
>> >> optionally, with the i8080A, and, aside from the clock frequency and
>> >> memory access times, they're identical. The i8080A is essentially
>> >> twice as fast as the standard i8080 and COULD be used more easily
>> >> with low-power logic since its demands aren't as stringent.
>> >>
>> >> The i8080A will, AFAIK, replace the i8080 in all applications
>> >> without ill effects.
>> >>
>> >> BTW, please turn off "rich-text" mode in your email editor when you
>> >> compose messages for this group, as some folks' mail readers can't
>> >> interpret the rich-text/HTML format.
>> >>
>> >> Dick
>> >> ++++++++++++++++++++++++++++++++++++++
>> >> ----- Original Message -----
>> >> From: John Galt
>> >> To: classiccmp(a)classiccmp.org
>> >> Sent: Sunday, September 30, 2001 10:17 AM
>> >> Subject: 8080 vs. 8080A
>> >>
>> >>
>> >> Can anyone here describe the technical differences between
>> >> an Intel 8080 and Intel 8080A CPU?
>> >>
>> >> The ONLY ref. I have been able to find seems to indicate that there
>> >> was a bug in the 8080 and as a result it would only work with low
>> >> power TTL?
>> >>
>> >> The problem was fixed in the 8080A and it would work with standard
>> >> TTL?
>> >>
>> >> Does this make sense to anyone?
>> >>
>> >> Could anyone put this into layman's terms for me?
>> >>
>> >> Thanks,
>> >>
>> >> George Phillips - gmphillips(a)earthlink.net
>> >>
>> >
>>
>>
>
Can anyone here describe the technical differences between
an Intel 8080 and Intel 8080A CPU?
The ONLY ref. I have been able to find seems to indicate that there was a bug in the 8080 and as a result it would only work with low power TTL?
The problem was fixed in the 8080A and it would work with
standard TTL?
Does this make sense to anyone?
Could anyone put this into layman's terms for me?
Thanks,
George Phillips - gmphillips(a)earthlink.net
That agrees with my 1976 and 1978 intel data books.
Allison
-----Original Message-----
From: Michael Holley <swtpc6800(a)home.com>
To: classiccmp(a)classiccmp.org <classiccmp(a)classiccmp.org>
Date: Sunday, September 30, 2001 7:02 PM
Subject: Re: 8080 vs. 8080A
>Intel did not refer to clock frequency but to Instruction Cycle to
>indicate the speed of the chip. From Intel Component Data Catalog 1978,
>page 11-11:
>
>8080A 2 us
>8080A-1 1.3 us
>8080A-2 1.5 us
>
>My SDK-80 Users Guide has schematics dated July 1975 that show an 8080A.
>
>-----------------------------------------------
>Michael Holley
>holley(a)hyperlynx.com
>-----------------------------------------------
>
>
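Tying the quoted figures back to the clock rates mentioned earlier in the thread (a rough cross-check, not from the posts; it assumes Intel's instruction-cycle spec is for the shortest 8080 instruction, which takes 4 clock states):

```python
# Convert Intel's quoted instruction-cycle times (taken as 4 clock
# states each) back into approximate clock frequencies.
for part, cycle_us in [("8080A", 2.0), ("8080A-1", 1.3), ("8080A-2", 1.5)]:
    clock_mhz = 4 / cycle_us
    print(part, round(clock_mhz, 2), "MHz")
# 8080A ~2 MHz, 8080A-1 ~3.08 MHz, 8080A-2 ~2.67 MHz
```

The ~2 MHz and ~3 MHz results line up with the "standard part was 2 MHz, fastest was 3 MHz" figures given earlier in the thread.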