[I'd suggested that we take this discussion off the
list. I'm continuing
to reply here in this case only because I don't want people to get the
incorrect idea that Godel's Incompleteness Theorem can be used to magically
explain away any philosophical problem regarding computers.]
> Is this use of the word "assembly" not yours? I, sir, am quoting you,
> not me!
OK, that one was mine. It wasn't in the context you originally quoted, or
even from the same message you quoted
(<19990405060635.29296.qmail@brouhaha.com>). I had used it three
hours earlier in the discussion
(<19990405030452.28640.qmail@brouhaha.com>).
So perhaps you see why I didn't understand what you
were complaining about.
It is customary to include a brief quote of the actual context you are
referring to.
The quote was passed down several layers of reply. I expect one to
remember one's own words. Your failure to do so imposes no
obligation on my part.
That says nothing about the general case:
that humans have superior intellectual capacity vis-a-vis the computer.
In the general case, I've never claimed that they do. I've only claimed
that in a sufficiently limited problem domain with a time limit (i.e., the
solution value vs. time curve is flat with a sharp drop to zero), a
computer may reach a better solution than a human would. I also claim that
this is true for other common solution value vs. time curves; if the
solution is worth $x today but only $x/2 tomorrow, the computer may
produce a more valuable solution than would a human.
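The two value-vs-time curves described above can be sketched numerically.
The curve shapes, deadline, and dollar amounts below are illustrative
assumptions, not figures from this discussion:

```python
# Illustrative solution-value vs. time curves (all numbers are assumed).

def value_hard_deadline(t, deadline=10.0, worth=100.0):
    """Flat value with a sharp drop to zero at the deadline."""
    return worth if t <= deadline else 0.0

def value_daily_halving(t, worth=100.0):
    """Worth $x today, $x/2 tomorrow, $x/4 the day after, and so on."""
    return worth / (2 ** int(t))  # t measured in days

# A fast solver finishing at t=1 beats a better but slower one at t=12
# under the hard deadline, and retains more value under daily halving.
print(value_hard_deadline(1.0), value_hard_deadline(12.0))  # 100.0 0.0
print(value_daily_halving(0.5), value_daily_halving(1.5))   # 100.0 50.0
```

The point of the sketch: once value decays with time, a faster, merely
adequate solution can be worth more than a slower, better one.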
Time limits accepted, but that is not my concern. I am referring to an
ultimate issue, which is that humans have intelligence and computers
do not. Any high-speed moron has the opportunity to surpass a
considered intellect. Witness the ability of Deep Blue to challenge
the best chess player. Yet, ultimately, a human can decide by means
not algorithmic.
> What you have failed to address is that the human intellect is not
> limited by the capacity to algorithmatise a solution.
[and later:]
> Humans have the capacity to make judgements by means outside of those
> mathematical and logical, hence the reference to Penrose.
Sure. A human may proceed in a manner that is not based upon logical
deduction or any (obvious) deterministic algorithm.
It is yet to be proven that this human ability (as manifested in complex
problem-solving) is not equivalent to a non-deterministic algorithm,
or even to a sufficiently complex deterministic system. Penrose claims
that quantum uncertainty is necessary to intelligence. While he provides
insufficient proof of this claim (really just anecdotal evidence), as an
argument against machine intelligence it is a red herring, since it is
not especially difficult to build a system that uses quantum uncertainty
to influence nondeterministic algorithms.
This begs the question, for proof is necessarily mathematical (I, for one,
do not agree with judicial notions of proof, such as a preponderance of
the evidence). That you hinge your argument upon the lack of a proof
of the means of some human ability simply points to flaws therein.
In particular,
the notions of Godel: that within any axiomatic system, the
answers to some positable questions are indeterminable.
You know, since you mentioned the book GEB, I thought you might have been
trying to bring Godel's Incompleteness Theorem into the discussion. But
since you didn't specifically state that, I wanted to give you the benefit
of the doubt.
The Incompleteness Theorem is very useful for certain lines of reasoning.
And it might be relevant to the strong AI problem. But it has no relevance
to the compiler problem we've been discussing.
It is relevant to the notion that humans must use methods not algorithmic.
In the compiler's "axiomatic system", it is not possible even to construct
the kind of questions to which GIT refers.
The compiler is not burdened with proving that it is correct, or that its
own output is correct. At most we are asking it to select the more
efficient of several proposed solutions. This in some sense does involve
a "proof", but the required proof is not of the validity of the axioms
(i.e., the compiler algorithm), nor is it a proof that the "system" is
self-consistent.
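The kind of selection meant above can be sketched as a toy cost model
choosing the cheapest of several candidate instruction sequences for the
same computation. The mnemonics and cycle counts are invented for
illustration:

```python
# Toy cost model: pick the cheapest candidate instruction sequence.
# Mnemonics and per-op cycle counts are invented for illustration.
COST = {"mul": 4, "shl": 1, "add": 1}

def cost(seq):
    return sum(COST[op] for op, *_ in seq)

# Two ways to compute x * 10: a multiply, or shift/add strength reduction.
candidates = [
    [("mul", "r0", "r0", "10")],       # one 4-cycle multiply
    [("shl", "r1", "r0", "3"),         # r1 = x << 3  (8x)
     ("shl", "r2", "r0", "1"),         # r2 = x << 1  (2x)
     ("add", "r0", "r1", "r2")],       # r0 = 8x + 2x = 10x
]

best = min(candidates, key=cost)
# Selecting the minimum needs no proof that the cost model itself is
# self-consistent -- only a comparison between the proposed solutions.
print(cost(candidates[0]), cost(candidates[1]))  # 4 3
```

Comparing candidate costs is an ordinary computation, not a statement a
formal system makes about itself, which is why GIT-style questions do not
arise here.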
> For all the nit-picky details of the works of these masters, the points
> they make are far grander. The real value of their works is not kept
> solely within the realm from which their conclusions emerge, but within
> which such conclusions find additional value.
If you know where to apply them. You can't just willy-nilly claim that
GIT applies to any random problem.
This is one of the wonders of human intelligence: to make leaps of logic
and application.
If you are going to maintain that GIT precludes compilers from generating
code as efficient as the best human-generated code, you'd best be
prepared to present a logical argument as to why GIT applies. It's not a
magic wand, and I'm not going to concede your point at the mere mention
of it.
I am not applying GIT to the operation of compilers. Instead, I am applying
it to the operation of human intelligence. Whether you concede the point
makes no difference to me. My purpose is to refute your claims of the
superiority of software versus human intelligence, and that is all.
William R. Buckley