< It looks like the only characteristic that a multi-chip implementation
< partially breaks here is "highly integrated". Then again, a two-chip
< implementation is not necessarily much less integrated than a single-chip one.
< Now I wonder why this level of integration matters. Is there something
< that a two-chip implementation can't do, and a single-chip can? Did
< people really care about this level of space-savings to the extent that
< it was worth introducing a new word into the language?
The key is the limit of IC gate density at the time. Now we can have
literally millions of gates, so complexity is very high. Back then
(1970-71) the semi houses were hitting the ceiling at about 1000
gates/2500 raw devices to a chunk of silicon. So splitting a function
across two chips was not an unreasonable idea. It was a reflection of
*manufacturability*.
< Actually, until ten minutes ago, I would have had trouble calling the
< two-chip thing a microprocessor because it broke the definition I learned
< as a kid: single-chip. But even the characteristic of being similar to
< 4004 is relevant to the extent that you are careful in choosing which way
< it has to be similar. The first 4004's were probably in ceramic; should
< that be part of the definition? Probably not. Why did we care about the
< 4004? Is being implemented on a single chip really the important bit? Or
< was it cost, ease of use, small size, ...? A two-chip implementation
< could very well have been important to us for exactly the same reasons
< that the 4004 was.
The 4004 was significant at several levels. It was a relatively low cost
commercial product. It had a return address stack. It had a fairly
large number of registers (for that time it was a very large number).
There were other chips to facilitate low cost construction of dedicated
systems. Being few in packages and low in number of pins made PC board
construction cheap. The PMOS process used was low power compared to TTL
or DTL of the time. Each one of those elements was significant relative
to computer systems of the day regardless of the type!
< So, when is it useful to distinguish single-chip from, say, dual-chip?
When talking at the architecture level or when interfacing.
< What kind of practical decision would someone make based on that?
COST, number of pins, flexibility. The LSI-11 for example was the WD13
chip set; with different MICROMs it was the Alpha Micro or the WD
MicroEngine. Same chips, some containing different microcode. If you have
an LSI-11 and the rare but manufactured WCS (writable control store) you
could actually add instructions to your LSI-11 to suit specialized needs.
This is not doable with most single chip implementations.
< BTW, was the 4004 really the first in the Intel series of 4004, 4040,
< 8008, and 8080? I seem to remember that something in this sequence
< actually happened in non-ascending order, like maybe the 8008 preceded
< the 4004, or the 4040 came out last, or ...? It could make sense; you
< could imagine scaling back an existing design to penetrate some niche
< market with a cheaper part.