On Mon, 16 Mar 1998, Doug Yowza wrote:
> On Mon, 16 Mar 1998, David Wollmann wrote:
>
> If you believe Turing, there's nothing an analog computer can compute
> that a digital one can't. A brain is many things: it's wet, it's
> analog, and it's massively parallel. I don't think anybody believes
> that it's wetness or analogness that matters, but clearly a high
> degree of parallelism does.

There was one engineer on the program who *did* believe just that. He
was making insects, 100% analog, and insisted that we could never
emulate that which is analog (brain activity) in the digital realm.
His focus is on creating a will-to-survive instinct in the machines
and then tricking them into working for us. He likened it to putting
blinders on an ox to make it plow.

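For what it's worth, Turing's side of that argument is easy to
demonstrate on a small scale: any analog behaviour you can write down
as a differential equation can be approximated digitally to whatever
precision you like, just by shrinking the time step. A minimal sketch
in Python (the RC circuit and step size are my own invented example,
nothing more):

    # Digital approximation of an analog RC low-pass filter,
    #   dv/dt = (v_in - v) / (R*C),
    # by plain Euler integration.  Shrink dt and the digital answer
    # gets arbitrarily close to the analog one.
    import math

    R, C = 1e3, 1e-6         # 1 kOhm, 1 uF -> time constant of 1 ms
    dt = 1e-5                # 10 us step; smaller = more accurate
    v_in, v = 5.0, 0.0       # 5 V step input, capacitor starts empty

    for step in range(200):  # simulate 2 ms
        v += dt * (v_in - v) / (R * C)

    exact = v_in * (1 - math.exp(-200 * dt / (R * C)))
    print("simulated: %.4f V   exact: %.4f V" % (v, exact))

Whether you could ever afford the step size and node count for a
brain-sized system is a fair objection, but it's an engineering
objection, not a computability one.
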
> The early AI folk thought they could do brainstuff using
> straightforward algorithms. Maybe you can, but there's no biological
> analog that anybody can find to support the idea that humans work
> that way.

Who was it that mentioned Hofstadter's GEB the other day? He talks a
lot about the rules of reasoning in there, illuminating the seeming
paradox of writing a structured program that allows a machine to think
freely. His explanation of the language/metalanguage/metametalanguage
hierarchy makes one leap the technology chasm and just assume that it
will happen. I have read that book three times and can still find new
things in it.

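For anyone who hasn't read it, the level-crossing is concrete enough
to put in code. Here's my from-memory transcription of the book's MIU
system in Python: cranking the rewrite rules is working *inside* the
formal system, while the assertion at the end is a statement *about*
the system -- metalanguage. (The three-generation cutoff is arbitrary,
just to keep the output small.)

    # Hofstadter's MIU system: strings over {M, I, U}, start at "MI".
    def successors(s):
        out = set()
        if s.endswith("I"):            # rule 1: xI  -> xIU
            out.add(s + "U")
        if s.startswith("M"):          # rule 2: Mx  -> Mxx
            out.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):    # rule 3: III -> U
            if s[i:i+3] == "III":
                out.add(s[:i] + "U" + s[i+3:])
        for i in range(len(s) - 1):    # rule 4: drop a UU
            if s[i:i+2] == "UU":
                out.add(s[:i] + s[i+2:])
        return out

    frontier, theorems = {"MI"}, {"MI"}
    for _ in range(3):                 # object level: apply the rules
        frontier = set.union(*(successors(s) for s in frontier)) - theorems
        theorems |= frontier
    print(sorted(theorems))

    # Meta level: every rule preserves "I-count mod 3 is never 0",
    # which is how you *prove* that "MU" can never be derived.
    assert all(t.count("I") % 3 != 0 for t in theorems)

That last comment is exactly the kind of jump between levels the book
keeps pointing at.
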
> So, the connectionists tried to create machines and structures
> modeled after the brain, but they didn't get too far. Let's say that
> you build an OS and a programming language that allows you to
> accurately model a brain. If you stick a human brain in front of a
> newspaper, you get nothing. So add some input devices and actuators.
> Now you stick a baby in front of a newspaper -- still nothing. So let
> the baby run for a while, experience a variety of sensations, make a
> whole bunch of associations, stick it in front of many newspapers and
> many non-newspapers for many years, and finally you get a pretty good
> character recognizer.

Well, sure. That's the learning curve. But isn't that the point of
replicating a machine? That it could duplicate its RAM/ROM contents as
well? In the human world it would be like having direct access to a
million years of evolutionary experience! What couldn't we do if we
knew everything that came before? What won't these machines be able to
do?

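That replication point is worth making concrete, because it is the one
place where machines clearly beat biology: the learning is slow, but
what gets learned is just numbers, and numbers copy for free. A
bare-bones sketch, with invented four-pixel "glyphs" standing in for
the newspapers:

    # A toy perceptron "character recognizer": 4-pixel glyphs, label 1
    # for the one we care about, 0 for the rest.  The recognizer is
    # trivial; the interesting part is the copy at the end.
    target_glyph = [1, 1, 0, 1]                    # made-up pattern
    others = [[0, 0, 1, 0], [1, 0, 0, 0], [0, 1, 1, 1]]
    examples = [(target_glyph, 1)] + [(g, 0) for g in others]

    def classify(w, b, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

    def train(examples, epochs=50, lr=0.1):
        w, b = [0.0] * 4, 0.0
        for _ in range(epochs):          # the slow part: "experience"
            for x, label in examples:
                err = label - classify(w, b, x)
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    w, b = train(examples)

    # Replication: the second machine never sees a single example.
    # It inherits the first one's "RAM/ROM contents" and is done.
    w2, b2 = list(w), b
    print([classify(w2, b2, x) for x, _ in examples])  # -> [1, 0, 0, 0]

The baby has to relearn from scratch every generation; the copy just
gets handed w and b.
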
> We're still a long way from being able to put a multi-billion node
> connection machine onto a sturdy frame with millions of intricate
> wires, actuators, and sensors, let it experience and manipulate the
> environment for many years, and then get the thing to demonstrate
> emergent properties that make the whole greater than the sum of the
> parts. But I don't see any reason why it won't or can't happen.

I don't think we're that far off, in terms of evolutionary speed. If
computer evolution is occurring, by some estimates, at 5-10 million
times the speed of human evolution, we could expect a fully-evolved
human replicant within our lifetime! The technology is coming around,
with microelectronics advancing and nano-technology promising to
deliver.

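To put numbers on that (both figures are the rough guesses already
quoted in this thread, not measurements):

    # Back-of-the-envelope: how long does a million years of
    # evolutionary "experience" take at a 5-10 million-fold speedup?
    human_years = 1000000
    for speedup in (5000000, 10000000):
        days = 365.0 * human_years / speedup
        print("at %8dx: about %.1f days" % (speedup, days))
    # -> at  5000000x: about 73.0 days
    # -> at 10000000x: about 36.5 days

Weeks, in other words -- which is why "within our lifetime" doesn't
sound so wild, if you buy the premises.
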
> And if you think that the prospect of human cloning is causing moral
> pangs, wait until somebody creates the first artificial life form!

One of the things I really liked about the Discovery program was the
quote about the state of the technology, something to the effect of:
"We are a lot closer to being able to create an artificial human than
we are to being able to comprehend the consequences of creating an
artificial human."

I realise that this is not *directly* about 10+ year old computer
systems, but it does directly relate to them and their role in the
history of this field (which is what I originally asked the list
about). Does anyone on the list want to take it outside to a temporary
list to discuss the moral/ethical/probability issues of artificial
life? Let me know by email and I'll set one up.

Aaron