Chuck Guzis wrote:
On 4 Feb 2007 at 0:56, Tony Duell wrote:
And another oddity. The whole design of the
Apple ][ seems to have been
to save a chip if at all possible (provided the machine still works --
just). And yet the keyboard was encoded in hardware. Why? It meant you
couldn't implement a lower-case keyboard in software (there are the
well-known shift-key mods where you run a wire from the shift keyswitch
to one of the single-bit inputs on the games connector, which shouldn't
have been necessary).
Thank you for absolving me of being the first to use the term
"gutless wonder". :)
In a way, I suppose the disk controller was a clever design. But it
locked the CPU into 2 MHz operation. The use of a simple arithmetic
checksum for each sector was not perhaps the most reliable solution
either. But the biggest problem is that disk reading and writing
required 100% attention from the CPU. On most other computers that
used dedicated LSI controllers, the possibility existed for
overlapped computation/disk access.
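Chuck's point about the arithmetic checksum can be made concrete: a one-byte sum is order-insensitive, so two transposed bytes in a sector verify as clean, while a CRC catches the same error. (Apple DOS's actual sector check was, as I recall, a running XOR, which has the same order-blindness.) A minimal sketch, not Apple's on-disk format:

```python
import zlib

def sum_checksum(data):
    # One-byte additive checksum: since addition commutes, any
    # reordering of the sector's bytes goes completely undetected.
    return sum(data) & 0xFF

good    = bytes([0x12, 0x34, 0x56])
swapped = bytes([0x34, 0x12, 0x56])   # two bytes transposed

assert sum_checksum(good) == sum_checksum(swapped)   # error missed
assert zlib.crc32(good) != zlib.crc32(swapped)       # CRC catches it
```

The LSI floppy controllers of the era (and the later 177x parts) used CRCs for exactly this reason.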
Cheers,
Chuck
The North Star disk controller and many similar ones required the
processor to suck in each byte anyway, so there was no chance there of
the CPU doing something productive. Now look at the chip count of the
North Star controller board:
http://www.sol20.org/manuals/nstar_MDS-A-D.pdf
vs. the apple II disk controller (scroll down a bit):
http://www.8bit-museum.de/docs/apple3b.htm#
By comparison, the Apple II controller is brilliant. Yes, there existed
disk controllers with DMA, but those were more complex and cost a whole
lot more in 1977. Besides, the Apple II had a great expansion bus, and
some enterprising company could have plopped a 177x disk controller into
the machine at the point it made sense, but apparently no one ever did,
as the market was happy with what Apple offered.
The disk controller didn't lock the CPU into 2 MHz operation -- the CPU
was 1 MHz. :-) Besides, the Apple disk controller was a point solution
for a point in time. Later Apples had higher-speed clocks and could
still read/write the old disks. Obviously this inexpensive solution
didn't preclude decoupling the CPU speed later, once VLSI made it
practical to do so.
As for the Apple II minimalism -- saving a chip here and there --
indeed, it is a toy compared to, say, the way HP would have done it. But
then the HP would have cost 10x as much.
Let's look at contemporaries (1977 micros). The Sol-20 had 64x16 text,
and unless you polled the hblank signal and limited yourself to poking
one character per 50 us, you'd get snow on the screen. There were
others that had the snow problem as well (e.g., the Exidy Sorcerer). The
TRS-80 didn't have the snow problem, but it too had a 64x16 text
display. The PET (which I never used) also had character graphics of
the most pathetic sort.
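The snow-avoidance discipline on those machines looked roughly like this: poll a blanking status bit, then poke at most one character per blanking interval. The port model below is invented for illustration (the real Sol-20 exposed a status bit you read from hardware; your routine would be 8080 assembly, not Python):

```python
class FakeDisplay:
    """Stand-in for memory-mapped video with a blanking status bit."""
    def __init__(self):
        self._tick = 0
        self.vram = []

    def in_blanking(self):
        # Pretend blanking is active one tick in every four.
        self._tick += 1
        return self._tick % 4 == 0

    def poke(self, ch):
        self.vram.append(ch)

def write_string_snow_free(display, text):
    # Busy-wait for blanking, then store exactly one character --
    # roughly one byte per ~50 us scan interval on the Sol-20.
    for ch in text:
        while not display.in_blanking():
            pass
        display.poke(ch)
```

The cost is obvious: the CPU spins doing nothing between characters, which is why screen updates on those machines were so slow.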
Now look at the Apple. Yeah, a crappy 40x24 text mode, but it was in
color, and the bitmapped graphics modes were, for the time, unheard of
in a microcomputer. By running the DRAM at 2x the CPU cycle time and
locking it to the video rate, all graphics could be written in real time
with no snow and no wait states. Other than the DRAM and the CPU, all
cheapo TTL.
Tony said:
> And another oddity. The whole design of the Apple ][ seems to have
> been to save a chip if at all possible
Yes and no. They spent some chips to give the machine great
expandability and color, bitmapped graphics. On the other hand,
wherever Woz could save a chip (or ten), he did, even if it meant an
oddball mapping of memory to screen -- all to save cost and keep the
board size down. It is fantasy to think that cost wasn't one of the
most important factors in a successful home computer.
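That oddball screen mapping is easy to show: text page 1 starts at $400, but successive rows are interleaved in three groups of eight rather than stored linearly, which (as I understand it) let Woz reuse the video counters he already needed for DRAM refresh instead of adding address-arithmetic chips. A sketch of the row-address computation:

```python
def text_row_base(row):
    # Apple II 40x24 text page 1: base $400, rows interleaved in
    # three groups of eight. Row r starts at:
    #   $400 + (r mod 8) * $80 + (r div 8) * $28
    assert 0 <= row < 24
    return 0x400 + ((row & 7) << 7) + ((row >> 3) * 40)

# Rows 0 and 1 land $80 apart; row 8 tucks in just after row 0.
assert [hex(text_row_base(r)) for r in (0, 1, 8)] == ['0x400', '0x480', '0x428']
```

The leftover 8 bytes at the end of each $80 block are the famous "screen holes" that peripheral cards used as scratch RAM.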
I admire the design greatly for what it was and for the time it was
done. The fact that the machine came standard with full schematics and a
commented disassembly of the ROM should not go unappreciated by this
group. The machine has character, the good kind, in spades.