Fact is, the serial protocol for communicating with your AT keyboard is
widely understood and well documented. I'm sure anyone who could program an
older 8-bit micro could program a PIC or other single-chipper, like the
87C42, which I believe is still made, to do what the old 8042 does. If you
get an 8742, I don't think those even have a code protection bit.
Given that you have too much principle, and perhaps not enough interest, to
replicate the 8042 (a clever choice of chips), you could simply decode the
binary you do get from the keyboard with a lookup table.
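For what it's worth, the lookup-table idea really is about as simple as it
sounds. A rough sketch in C follows; the scan codes are the usual set-2 values
as I remember them, so check them against a published scan code chart, and the
function name is just made up for the example.

static const char set2_to_ascii[256] = {
    [0x1C] = 'a',  [0x32] = 'b',  [0x21] = 'c',  [0x23] = 'd',
    [0x24] = 'e',  [0x16] = '1',  [0x1E] = '2',  [0x29] = ' ',
    [0x5A] = '\n', /* Enter */
    [0x76] = 27,   /* Esc   */
};

/* Feed raw bytes from the keyboard in here one at a time. Returns the
   decoded ASCII character, or 0 for prefix bytes, releases, and keys
   this partial table doesn't cover.                                   */
int decode_byte(unsigned char code)
{
    static int breaking = 0;       /* saw 0xF0: next byte is a key release */

    if (code == 0xF0) {            /* break (key-up) prefix */
        breaking = 1;
        return 0;
    }
    if (code == 0xE0)              /* extended-key prefix; ignored here */
        return 0;
    if (breaking) {                /* this byte names the key being released */
        breaking = 0;
        return 0;
    }
    return set2_to_ascii[code];
}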
Dick
-----Original Message-----
From: Tony Duell <ard(a)p850ug1.demon.co.uk>
To: Discussion re-collecting of classic computers
<classiccmp(a)u.washington.edu>
Date: Sunday, April 04, 1999 6:31 PM
Subject: Re: homemade computer for fun and experience...
>>
>> > True. But AFAIK the AT keyboard host interface was never implemented in
>> > TTL (it always used a programmed 8042 microcontroller), so it's a little
>> > harder to build from scratch.
>>
>> If what you're trying to do is interface the AT keyboard to some custom
>> controller that doesn't need to be otherwise AT-compatible, there's no
>> reason why you need the 8042. The AT keyboard interface is not particularly
>> harder to implement than the XT interface was. I've written code for several
>> products that bit-banged it on a microcontroller.
>
>Absolutely. BUT : if you are making a homebrew machine, the last things
>you need are (a) I/O that's timing critical (at least not for the
>keyboard) or (b) a microcontroller that you have to program and debug.
>
>And then, as you said below, the AT keyboard protocol is not that well
>documented. The XT is a little better documented, if only because there's
>a circuit using standard chips (plain TTL chips) that accepts XT keyboard
>input. You can work out any odd bits of the protocol from that.
>
>Alas IBM never published the 8042 ROM source, so you can't use that as a
>reference.
>
>>
>> The AT keyboard interface protocol is really a pathetic design, though. It's
>
>I'll not argue with that.
>
>> a pain in the ass to deal with, and it's not well documented anywhere (even
>
>The documentation is not brilliant, but you can figure out how to talk to an
>AT keyboard from the info in the TechRef if you have to. Not an ideal
>first project, though.
>
>-tony
>
> The problem now is, if I change the file that contains foo(), I have to
>apply my patch again. Or in other words, once I patch the output from the
>compiler, I can no longer use the compiler. If this is a one time shot and
>I will only work with the output from then on, then no problem. But
>otherwise ...
>
> -spc (Although from the discussion it seems that the deal was a one
> shot anyway ... )
This sort of situation (compiler doesn't quite do what the writer
wants) is actually widely encountered in some classic Unix kernels. There
are parts of the kernel that need interlocking, running at a different
priority, etc. The "classic" way of doing this is to compile the C code
into assembly code, run a program that massages the assembly code to
change the details of how some actions are done, and then assemble the
modified code.
As the old fortune cookie program says, "I'd rather write programs that
write programs than write programs" :-).
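To make that concrete, the massaging step can be as dumb as a line filter.
Here is a minimal sketch in C, with a made-up pattern and replacement standing
in for whatever instruction sequence the real kernels rewrote:

#include <stdio.h>
#include <string.h>

/* Toy "assembly massager": copy compiler output from stdin to stdout,
   rewriting one pattern along the way. PATTERN and REPLACEMENT are
   invented for illustration, not what any particular kernel edited.  */
#define PATTERN     "\tjsr\tpc,_raise_ipl\n"
#define REPLACEMENT "\tbis\t$340,PS\n"

int main(void)
{
    char line[512];

    while (fgets(line, sizeof line, stdin) != NULL) {
        if (strcmp(line, PATTERN) == 0)
            fputs(REPLACEMENT, stdout);   /* swap in the hand-tuned form */
        else
            fputs(line, stdout);          /* everything else passes through */
    }
    return 0;
}

The build step then has the shape Tim describes: compile with -S, pipe the .s
file through the filter, and assemble whatever comes out.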
--
Tim Shoppa Email: shoppa(a)trailing-edge.com
Trailing Edge Technology WWW: http://www.trailing-edge.com/
7328 Bradley Blvd Voice: 301-767-5917
Bethesda, MD, USA 20817 Fax: 301-767-5927
>> Your argument, Eric, was that the microcode compiler generated code
>> that is equally as efficient as that you, or someone else, could have
>> constructed by hand. Megan in no way implies the use of assembly code.
>> The microcode compiler would generate an object file, which by your
>> own admission above, generated more code than could fit in the
>> memory space available. You accepted her argument that the human
>> was required to generate code more efficient than that produced by
>> the microcode compiler. You protest _too loudly_ my friend.
>
Again, you used the word *assembly* and that implies my point.
>No, I accepted her argument that for conventional machine code compiled from
>a conventional high-level language, a human can fairly easily generate
>better code. But if you had read my posting *carefully*, I specifically
>protested that this is *not* the same problem as compiling horizontal
>microcode from a specialty source language.
>
>I *still* stand by my statement. The compiler produced better code
>in minutes than I could have produced in three months. Your argument seems
>to be that a compiler can't produce better code than a human with an infinite
>amount of time could. I'll concede you that point. Or maybe I won't. A
>compiler with an infinite amount of time could have simply tried every possible
>combination of control store bits (for the 512*72 example, 2^36864
>possibilities), and run a set of test vectors against each candidate to
>determine which ones meet the specifications, and of those which yields
>the highest overall performance. And by applying some relatively simple
>heuristics, the number could be reduced from 2^36864 down to a number that,
>while still huge, could at least be done during the remaining life of the
>universe. But this is irrelevant, because neither the human nor the computer
>has an infinite amount of time available.
Halting problem (P vs NP) difficulties aside, I have never seen the situation
in which the resultant output of a language translator could not be further
optimised, with the exception of trivial cases. The value that you ascribe
to your time notwithstanding.
>If my job had depended on finishing the project in question without using
>the compiler, the only way to do it would have been to expand the control
>store to 768 or 1024 words, because after spending a lot of time writing
>microcode by hand, it would probably have been larger than 512 words.
It is always easier for the human to find a wasteful application of resources
to facilitate job completion than to hunker down and produce a quality,
efficient product. Witness the ubiquitous supremacy of Windows, i.e. NT.
>It was the use of the compiler that allowed me the luxury of shrinking it to
>fit in the 512 words available. Without using the compiler, there is no way
>in hell that I would have had time to do such a thing.
Your argument, again, is the value that you place on your time, and not the
quality of your intellect. I maintain that the computer, no matter the skill
of the algorithm, will always fall short of human productivity. In this, I agree
with such notable researchers as Roger Penrose and Douglas R. Hofstadter.
Have you read Godel, Escher, Bach: An Eternal Golden Braid?
>It is instructive to note that when I was trying to squeeze the 514 words
>down to 512, I discovered that the compiler had succeeded in combining
>several things that I wouldn't have easily found,
Here, again, you base your argument on your lack of skill and capacity, not
on the limitations of algorithms.
> because the compiler is
>actually *better* at doing data flow analysis than I am. That's not because
>the compiler is inherently more clever than I am, but because it is not
>subject to the Hrair (sp?) limit as I am. It's not more clever, but it's
>more tolerant
With regard to self-deprecation, you seem to hold the decathlon in tolerance.
>of doing tedious recordkeeping and matching. Of course, if I
>had the time to meticulously do the same thing, I obviously could do at least
>as good a job of data flow analysis as the compiler. But in practice that's
>simply not going to happen. Life's too short.
I do accept that life is, indeed, too short. I, too, would not want to spend my
life on a single problem, a single implementation of an algorithm to the limits
of optimality. That, however, is not the point.
>Most everyone in this discussion is just parroting the conventional wisdom
>that compilers don't generate code as compact or efficient as humans can,
>without considering the possibility that for specific problems and under
>specific constraints, they actually can be *better*. I'm absolutely willing
>(and eager) to concede that in the general and unconstrained case, the
>conventional wisdom holds true.
Demonstrate a case where an algorithm provides a better solution to a
translation problem, and I'll show you a case where the algorithm provides
exactly the solution obtainable by a human, but no better a solution than that
obtainable by a human.
Your argument is that no human can perform the act of a computer, and this
is sheer lunacy.
William R. Buckley
According to the jumper settings on the 5150, it appears that four floppy
drives can be connected to the computer. How is this possible? I'm
guessing two internal and two external, but there's only one connector for an
external drive, so that would only allow three drives.
Or is there a special controller that has dual external ports?
Any suggestions?
ThAnX,
--
-Jason Willgruber
(roblwill(a)usaor.net)
ICQ#: 1730318
<http://members.tripod.com/general_1>
-----Original Message-----
From: Bill Sudbrink <bill(a)chipware.com>
To: Discussion re-collecting of classic computers
<classiccmp(a)u.washington.edu>
Date: Tuesday, 6 April 1999 00:55
Subject: RE: bringing up an 8f...
>Not to mention that the Acrobat user interface _SUCKS_!
Agreed. I'd be a darn sight happier with a simple text file, or even
HTML. I can read that even on a VAX with a VT100 and Lynx. Acrobat is
somewhat tedious to manipulate. And I hate having to zoom on text
that's too small to read.
Just my $0.02 worth as well.
Cheers
Geoff Roberts
-----Original Message-----
From: Eric Smith <eric(a)brouhaha.com>
To: Discussion re-collecting of classic computers
<classiccmp(a)u.washington.edu>
Date: Sunday, April 04, 1999 8:03 PM
Subject: Re: microcode, compilers, and supercomputer architecture
>Megan wrote:
>> well put... I've yet to find a compiler which can produce code which
>> could not then be further optimized in some way by a person well
>> versed in that machine's architecture...
>
>Yes, but if you paid attention to the original claim, you would see that
>I asserted that it was true for horizontal microcode with large amounts
>of data dependency. This is *very* different than trying to compile C
>(or Pascal, or Bliss, or whatever) for a typical architecture (which more
>closely resembles vertical microcode).
>
>One of the systems I microcoded had 512 words of control store (of about
>72 bits each), and running my microprogram source code through the compiler
>produced 514 words of microinstructions. With about two weeks of
>concentrated effort, I was able to eventually squeeze out two
>microinstructions. Total development time: 6 weeks.
>
>If I had tried to write all of the microcode in "assembly", it would have taken
>me longer to write, and it probably would have been *bigger* on the first
>pass. And I still would have had to spend a lot of time on hand optimization.
>I think this would have taken at least 12 weeks of development time, although
>since I didn't do it that way I'll never know.
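For readers wondering what there is to "squeeze" in horizontal microcode: the
compiler is mostly packing independent micro-operations into the same wide word
whenever their control fields don't collide and neither needs the other's
result within that word. A very rough sketch of that legality test in C, with
invented field masks and register bitmaps rather than anything from Eric's
actual tools:

#include <stdint.h>

/* One micro-operation before packing into a wide word. field_mask says
   which control fields of the word it needs; reads/writes are register
   bitmaps. The structure is invented for illustration.                 */
struct uop {
    uint64_t field_mask;
    uint32_t reads;
    uint32_t writes;
};

/* Simplified test: can b share a microword with a? */
int can_pack(const struct uop *a, const struct uop *b)
{
    if (a->field_mask & b->field_mask)
        return 0;               /* they claim the same control fields      */
    if (a->writes & b->reads)
        return 0;               /* b consumes a result a produces this word */
    if (a->writes & b->writes)
        return 0;               /* both target the same register           */
    return 1;
}

Doing that check, by hand, over every pair of operations in 512-plus words is
exactly the tedious recordkeeping the compiler doesn't mind.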
And, from your most recent posting:
>> Again, you used the word *assembly* and that implies my point.
>
>Now you've lost me completely. You were quoting your own writing, not
>mine. I didn't even *mention* "assembly" in my posting, except in quoting
>you.
>
>> Halting problem (P vs NP) difficulties aside,
Is this use of the word "assembly" not yours? I, sir, am quoting you, not me!
>
>I've never seen the situation in which human-generated code could not be
>further optimized, with the exception of trivial cases. Your assertion
>does not contradict my claims. Of course, this brings up the issue that
>"trivial" is not objectively quantifiable. One could perhaps credibly
argue
>that a trivial code sequence is one for which no further optimization is
>possible. I'm not taking that position, but simply pointing out the
>difficulty in basing arguments on non-objective statements.
>
>In point of fact, I've seen huge amounts of human-generated code that was
>nowhere near as optimal as code that a compiler would generate.
>
>All this proves is that neither humans nor compilers tend to produce
>optimal code. It says nothing about which tends to produce more optimal
>code.
>
The fact that an individual program is incapable of producing superior code,
relative to optimality, only serves to indicate that humans suffer a greater
degree of fallibility vis-a-vis the computer, which as you said is quite happy
to act on tedium. That says nothing about the general case that humans
have superior intellectual capacity vis-a-vis the computer. After all, who
invented what?
This discussion is founded upon your statement:
>> Maximisation of processor throughput, and minimization of
>> microinstruction count, is at least half the purpose of microprogramming.
>
>Sure. And the microcode compilers I've written and used are much better
>at optimizing horizontal microcode than I have the time or patience to do
>by hand.
>
>> For such optimisation to be effected, one must necessarily write directly
>> in microcode, either bit and byte streams, or coded as in assembly
>> languages.
>
>No, it doesn't. Microcode almost always has a lot of data dependencies,
>which means that a compiler can often do as well as a human at optimizing
>it.
>
And yet, you argue against yourself with:
>... when I was trying to squeeze the 514 words down to 512, I ...
Herein, you admit that your personal skills quite outweighed those of the
algorithm that you constructed for the purpose of compiling a high-level
code into a particular microcode. Recall:
>Sure. And the microcode compilers I've written and used are much better
>at optimizing horizontal microcode than I have the time or patience to do
>by hand.
Also, recall:
>Therefore if I can use four weeks of my time to write a compiler and two weeks
>to slightly tweak the output of that compiler ...
So, we are agreed that a human has greater capacity for the preparation of
optimal code. I concede the notion of sufficient time to complete a task.
What you have failed to address is that the human intellect is not limited by
the capacity to algorithmise a solution. Hence, P vs. NP, GEB, and in
particular, the notions of Godel: that within any axiomatic system, the answers
to some positable questions are indeterminable.
Humans have the capacity to make judgements by means outside of those
mathematical and logical, hence the reference to Penrose.
For all the nit-picky details of the works of these masters, the points they make
are far grander. The real value of their works is not kept solely within the realm
from which their conclusions emerge, but within which such conclusions find
additional value.
William R. Buckley
At 12:46 AM 4/5/99 -0700, you wrote:
>
>Are you insane? The excruciatingly slow and bloated Microsoft Word
>screams compared to Acrobat. I get so antsy waiting for Acrobat to update
>a fricken PDF page on the screen that my head wants to explode.
I suspect it's like PostScript, or metafiles, or executable code
in general: it all depends on what's generating the PDF file.
Some PDFs are apparently just bitmaps, others a mix of text and
bitmap, others just text. The existence of a PDF print driver
doesn't mean that whatever goes through it will come out well.
- John
<IF you can stick the XT keyboard (are keyboards that talk that protocol
<still being made?) then look at the circuit of the PC or XT. The keyboard
<interface is a few TTL chips hung off an 8255.
AT keyboards can be used as well, as they are similar (not the same). You'll
have to make an interface, since the serial protocol is not compatible with
UARTs, and you will also have to convert the key-down/key-up codes to something
more human.
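The receive side isn't hard to bit-bang, for what it's worth. The keyboard
drives the clock; the host samples data on each falling clock edge, and a byte
arrives as an 11-bit frame: start, eight data bits LSB first, odd parity, stop.
A polling sketch in C, where KBD_CLOCK(), KBD_DATA() and read_port() are
placeholders for whatever your hardware actually provides:

/* Hypothetical port-read macros: replace with reads of the two input
   pins the keyboard's clock and data lines are wired to.             */
#define KBD_CLOCK()  (read_port(0) & 0x01)   /* nonzero = clock line high */
#define KBD_DATA()   (read_port(0) & 0x02)   /* nonzero = data line high  */

extern int read_port(int port);              /* board-specific, assumed */

/* Wait for a falling clock edge, then return the data line state. */
static int clock_in_bit(void)
{
    while (!KBD_CLOCK()) ;                   /* wait for clock high      */
    while (KBD_CLOCK()) ;                    /* ...then the falling edge */
    return KBD_DATA() ? 1 : 0;
}

/* Receive one scan-code byte: start(0), 8 data bits LSB first,
   odd parity, stop(1). Returns the byte, or -1 on a framing or
   parity error.                                                 */
int kbd_read_byte(void)
{
    int i, bit, byte = 0, ones = 0;

    if (clock_in_bit() != 0)                 /* start bit must be 0 */
        return -1;
    for (i = 0; i < 8; i++) {
        bit = clock_in_bit();
        byte |= bit << i;                    /* LSB arrives first */
        ones += bit;
    }
    bit = clock_in_bit();                    /* parity bit */
    if (((ones + bit) & 1) == 0)             /* total count of 1s must be odd */
        return -1;
    if (clock_in_bit() != 1)                 /* stop bit must be 1 */
        return -1;
    return byte;
}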
<I'd make it modular (in that I'd have expansion slots), but I'd probably
<put the CPU + RAM + basic I/O on the 'motherboard'. For prototyping,
<DIN41612 connectors are easier than edge connectors because you don't
<need special boards with the connector fingers on them.
An acceptable bus is 8-bit ISA, and there are plenty of FDC, video, and HDC
cards for that bus that could easily interface to a Z80.
<SRAM is a _lot_ easier. And now that 64K SRAM is 2 chips at most (62256's
<are cheap now), I'd use that. DRAM is not too hard until you realise that
<layout and decoupling are critical if you want to avoid random errors.
Same comment, with one proviso: if you're doing over 256K, consider DRAM and
an MMU. A good article on that is at the TCJ site.
<[For the hardware wizards here : Yes you can homebrew with DRAM - I've
<done it. But not as my first real project].
For a Z80 system of 64K or 128K, static RAM is far easier. Also, 128Kx8 parts
are cheap, so even 256K or 512K RAM systems are modest.
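One common way to hang a 128Kx8 SRAM off a Z80, sketched as address
arithmetic: keep the low 32K of the CPU map fixed and switch which 32K bank of
the chip appears in the top half via a latch loaded with an OUT instruction.
The port number and bank layout below are just assumptions for illustration:

#include <stdint.h>

/* Hypothetical banking scheme: CPU addresses 0x0000-0x7FFF always map to
   the first 32K of the chip; 0x8000-0xFFFF map to one of four 32K banks
   chosen by a latch the CPU loads with an OUT to (say) port 0x70.        */
#define BANK_SIZE   0x8000u
#define BANK_PORT   0x70

static uint8_t bank_latch = 0;        /* whatever the last OUT wrote */

/* Translate a 16-bit CPU address to a 17-bit SRAM address. */
uint32_t map_address(uint16_t cpu_addr)
{
    if (cpu_addr < BANK_SIZE)
        return cpu_addr;                              /* fixed low half  */
    return (uint32_t)(bank_latch & 0x03) * BANK_SIZE
           + (cpu_addr - BANK_SIZE);                  /* switched window */
}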
Allison
>I was unclear.
>
>I meant that the microcode/assembly language words _corresponding to a
>particular LISP program_ were created from that program and then executed.
>If you define a function (like the ever-popular factorial function)
>something has to be stored in memory as the definition; presumably it is
>some sort of primitive (as in not-easily-decomposed) machine language, and
>presumably there is a program that converts source text into object code.
>
>So wouldn't that converter be a compiler? I believe that a number of subtle
>details happen during the conversion process, so you couldn't even say the
>compiler is a simple compiler.
>
>-- Derek
You were quite clear. The answer is no.
Consider the instruction set of the x86. The MOV instruction is actually
implemented as a small sequence of microinstructions. There is, in fact,
no dedicated series of gates and other electronic apparatus which
implements the operation of MOV. Instead, it is implemented as a
series (or sequence) of smaller operations, such as LOAD REGISTER,
ADD REGISTERS, etc. If you are not familiar with the processes of
microprogramming, then you should become so. Microprograms are
not stored in RAM. Instead, they are stored in ROM.
Also, the only processors which today are founded upon the operation
of dedicated electronics (that is, electronic circuits which implement
fully and singly the operation of a machine instruction for a computer)
are the RISC machines. This is why they are so bloody fast. All CISC
machines are microprogrammed.
For those who are aware of the operations of the HP 21MX processors,
these are microprogrammed machines. As it happens, the user of
such a computer can alter the microprogramming. This is the computer
upon which I obtained my experience as a microprogrammer.
I do not mean to say that the factorial function is microprogrammed. It
is not. However, the operators CAR, CDR, CONS, etc. are implemented
in microcode. Hence, there is no need for translation - they are executed
directly.
For confirmation of this, contact my friend, Chuck Fry at
chucko(a)ptolemy.arc.nasa.gov
Now, it is true that the printed text of the program must be converted to
the instruction set of the computer, but the process is like this.
"CAR" corresponds to the instruction with byte code 0x01
"CDR" corresponds to the instruction with byte code 0x02
and so on. Of course, the byte values I give are only examples. The
true translations are not known to me. However, each operator of
the Lisp language will correspond to a single instruction code of the
Lisp machine.
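Put as code, the translation being described is little more than a table
lookup per operator. The opcode bytes below are the placeholder values from
the text plus one invented one, not real Lisp machine encodings:

#include <string.h>

/* One-operator-to-one-opcode translation, as described above.
   Opcode values are illustrative placeholders only.            */
struct op_entry {
    const char   *name;
    unsigned char opcode;
};

static const struct op_entry lisp_ops[] = {
    { "CAR",  0x01 },
    { "CDR",  0x02 },
    { "CONS", 0x03 },   /* invented value, not from the text */
};

/* Return the single instruction byte for a Lisp operator, or -1. */
int lisp_opcode(const char *name)
{
    size_t i;
    for (i = 0; i < sizeof lisp_ops / sizeof lisp_ops[0]; i++)
        if (strcmp(name, lisp_ops[i].name) == 0)
            return lisp_ops[i].opcode;
    return -1;          /* not a primitive of this (hypothetical) machine */
}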
This is a far cry from the result one usually obtains from a compiler.
If a compiler were used by a Lisp machine, then the operation of
CAR would involve the production of dozens of machine
instructions, just as the call of a subroutine in C involves dozens of
machine instructions. Heck, a simple addition in C results in an
instruction sequence like
MOV AX, address of data1
ADD AX, address of data2
MOV address of result, AX
For the Lisp machine, CAR would result in an instruction like
CAR address of source operand list, address of result operand
list
William R. Buckley
>> Consider the PDP 11/44 in my living room. It is constructed using the
>> AMD 2900 series of bit-slice microprocessor chips. In this case, the
>
>Well, I've never seen a PDP11 processor (as opposed to a floating point
>processor or a VAX) that uses 2900 series. IIRC the 11/44 uses 74S181
>ALUs and a sequencer built from TTL (and maybe some 82S100 PLAs)
>
>-tony
>
Not mine. I just pulled the processor card and it contains 16 of the
AM2901BDC chips, copyright 1978. The card has the designation M7093
imprinted in the PCB metallisation layer. Well, upon closer inspection
this seems to be the FPP. The card designated M7094 does have
four of the 74181-type chips, and this is probably the general-purpose
CPU component.
Anyway, the point that I was trying to make is that the control code
for the 2901 was contained in ROM, and not so much that the CPU
was implemented via the 2901. Let's concentrate on the issue, not
the errors associated with making the point.
William R. Buckley