[Replying to multiple messages here.]
["William R. Buckley" <wrb(a)wrbuckley.com>]
[me, der Mouse]
["William R. Buckley"
<wrb(a)wrbuckley.com>]
>>> There are several architectures for computers, with all being the
>>> equal of the Turing machine.

>> That's interesting, because not one of the devices I have called
>> computers, used as computers, or heard/seen called computers, is
>> even theoretically equivalent to a Turing machine. In particular,
>> they all have finite storage.

> Consult von Neumann. When you do, remember that he devised an
> architecture which is neither the classic von Neumann nor the
> classic Harvard. It is, however, a true data flow architecture, and
> highly parallel.

This has nothing to do with my point, which is that defining "computer"
in a way that makes all computers equal to Turing machines is not a
useful definition, since there are then no "computer"s.

>>> Nevertheless, for all systems of computation, there is the
>>> possibility of self-modification of code which is accessible
>>> through other means, typically in the form of a file.

>> So I suppose a dedicated microcontroller (eg, code in
>> mask-programmed ROM) is not a "system of computation"?

> The fact is that dedicated microcontrollers define a computing
> environment, and the location of the code is immaterial.

The location of code is not immaterial when it is in an inherently
read-only medium, since you are specifying the possibility of
self-modification of code as part of your definition.

>> Even if the application that's in ROM is, say, a pocket calculator?
>> A programmable pocket calculator? The microcode-and-microcpu view
>> of a CISC CPU chip? Where do you draw the line?

> This is the value of the Turing machine. I encourage you, and
> like-minded list members, to consult your local computer science
> faculty of any legitimate university you like, and ask them if the
> computers they use are equivalents of the Turing machine. I
> seriously doubt that you will find a single professor who will deny
> the equivalence.

Perhaps you are right; perhaps not - but any such claim of equivalence
either springs from ignorance or from linguistic shorthand; anyone,
professor or not, who persists in claiming such equivalence after
having had it pointed out that real computers have finite storage is
simply _wrong_.
And in any case, I don't see how what you wrote has anything to do with
my point, which is that what looks like a Harvard architecture when
considered from one point of view may look like a von Neumann
architecture when considered from another, and that restricting
yourself to points of view which include the possibility of
code-which-writes-code (I hesitate to call it *self*-modifying code) is
unnecessarily restrictive, not to mention contrary to the usual uses of
the words to the point of being confusing.
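
To make that concrete, here is a minimal sketch in C (my own
illustration, not anything proposed in the thread): a toy byte-code
interpreter. The interpreter's code never changes - it could sit in
mask-programmed ROM, a strict Harvard machine - yet the byte-code
program it runs lives in one writable array together with its data,
and even patches one of its own operands. Whether you see Harvard or
von Neumann depends entirely on which level you look at.

#include <stdio.h>

/* Toy byte-code machine. The interpreter could live in ROM (strict
 * Harvard); the *interpreted* program lives in ordinary writable
 * memory alongside its data (von Neumann from the byte-code point
 * of view) - it even rewrites one of its own operands before
 * executing it. */

enum { HALT, PUSH, PRINT, POKE };

static unsigned char mem[256] = {
    PUSH, 42,   /* 0: value to patch in                        */
    PUSH, 6,    /* 2: address of the operand byte at mem[6]    */
    POKE,       /* 4: mem[6] = 42 - code writing "code"        */
    PUSH, 0,    /* 5: written as "push 0", executes as push 42 */
    PRINT,      /* 7: prints 42, not 0                         */
    HALT,       /* 8 */
};

int main(void)
{
    unsigned char stack[64];
    int sp = 0, pc = 0;

    for (;;) {
        switch (mem[pc++]) {
        case HALT:  return 0;
        case PUSH:  stack[sp++] = mem[pc++]; break;
        case PRINT: printf("%d\n", stack[--sp]); break;
        case POKE: {                 /* pop address, then value */
            int addr = stack[--sp];
            int val = stack[--sp];
            mem[addr] = (unsigned char)val;
            break;
        }
        }
    }
}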

>>> Also, remember the features of some languages, like APL, which
>>> will operate on von Neumann and Harvard architectures, and which
>>> APL code might include the execute operator, which facilitates
>>> run-time code generation and execution.

In the sense of "code" for which the
underlying CPU is a von Neumann
or Harvard machine, the execute operator does not necessarily imply
run-time code generation.
> It is disingenuous, and intellectually dishonest, to use a lack of
> implication to assert a lack of occurrence.

That's not what I was trying to do at all. I read your statement about
APL both (a) running on both von Neumann and Harvard architectures and
(b) including the execute operator as implying that (c) both vN and H
architectures can do run-time code generation. My point was that this
implication is false: execute does not require run-time code
generation, so (c) does not necessarily follow from (a) and (b).
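
To illustrate (a sketch of my own, not of any real APL
implementation): an execute-style operator can be pure
interpretation. Here eval() understands single digits joined by '+',
and "executing" a string built at run time just means feeding it back
through the same fixed evaluator - nothing generates or modifies code
anywhere.

#include <stdio.h>

/* eval() interprets a trivial expression language: single decimal
 * digits joined by '+'. It is the whole "machine"; its code never
 * changes. */
static int eval(const char *expr)
{
    int v = *expr++ - '0';      /* first operand */
    while (*expr == '+') {
        expr++;                 /* skip the '+'  */
        v += *expr++ - '0';     /* next operand  */
    }
    return v;
}

int main(void)
{
    char buf[16];

    /* Build an "expression" at run time, as data...           */
    snprintf(buf, sizeof buf, "%d+%d+%d", 1, 2, 3);
    /* ...and execute it: interpretation, not code generation. */
    printf("%s = %d\n", buf, eval(buf));
    return 0;
}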

> In truth, there is no logical difference between a hardware-based
> mechanism of computational processing, and a software-based
> mechanism of computational processing.

There is when you are drawing distinctions based on whether the
hardware can modify its own code or not. For example, it is likely
that the machine I'm typing this on is microprogrammed with
mask-programmed microcode ROM, in which case the real hardware is a
strict Harvard architecture, with no possibility whatever of
code-that-writes-code. Based on what you've written, it seems to me
that this implies you do not think it is reasonable to call it a
computer.
[more "William R. Buckley" <wrb(a)wrbuckley.com>om>, another message]
> Neither can you alter the microcode of the x86, yet it is the
> essence of a computer.

This makes it appear that you do _not_ think that being able to alter
code is necessary for something to be considered a computer. I'm
becoming confused about what your stance actually is here.
Perhaps your position is that it is necessary for there to exist some
level of abstraction at which code-that-writes-code is possible for
something to be considered a computer? If so, it's not at all clear
from what you've written.
["Brian L. Stuart" <blstuart(a)bellsouth.net>]
> (Almost no authors highlight the distinction between unbounded and
> infinite. [...])

Well, not when writing about computation, at least. :-) This is
probably because it's a very subtle distinction, one which I'm not
convinced even exists for some models of infinity, and one which makes
no difference whatever for (almost?) all uses.

> Now here I do agree with the desire to define a computer in terms
> of machines that can compute functions that are computable in a
> Church-Turing sense.

Hm, so you consider "analog computer" to be an oxymoron?

> So if physical realizations of the computing ideal will always come
> up short of the models of the Turing machine or the lambda calculus,

(or most other models of computation)

> then why do we ever study these models and why do we make
> distinctions between recursive and recursively enumerable?

(a) Because we're theoreticians.
(b) Because they're useful idealizations.

> I'd say that if you can implement a simulation of a universal
> Turing machine so that the range of its computations is limited
> only by the amount of addressable memory, then you have a real
> computer.

Very nicely put - and aside from leaving analog computers out in the
cold, I think I basically agree with it.

> Notice that with the definition above, any computer can implement a
> universal Turing machine and [...] such a machine can modify its
> own programming.

Only after you make the shift from the universal Turing machine itself
(which has fixed code) to the specific Turing machine being simulated
by the uTm - and not then, even, if the Turing machine simulator is
simulating the sort of Turing machine I studied, which has fixed code,
set when its state transition table is designed. (The simulated
machine's code (= state transition table) may change from one run of
the simulator to the next, but not during a run.)
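
A minimal sketch, in C, of the kind of machine I mean (my own
illustration, deliberately trivial): the simulator's code is fixed,
the simulated machine's transition table is fixed for the whole run,
and the only thing that ever changes is the tape - data, not code.

#include <stdio.h>

enum { RIGHT = 1, LEFT = -1, STAY = 0 };
enum { HALT_STATE = -1 };

struct rule { char write; int move; int next; };

/* The simulated machine's "code": its state transition table, set
 * when the machine is designed and constant for the whole run.
 * This one-state machine turns every symbol into '1', moving right,
 * and halts on blank ('_'). */
static const struct rule table[1][3] = {
    /* state 0:  on '0'            on '1'            on '_'        */
    { { '1', RIGHT, 0 }, { '1', RIGHT, 0 }, { '_', STAY, HALT_STATE } },
};

static int sym_index(char c) { return c == '0' ? 0 : c == '1' ? 1 : 2; }

int main(void)
{
    char tape[] = "0100110_";   /* finite stand-in for the tape    */
    int head = 0, state = 0;

    while (state != HALT_STATE) {
        const struct rule *r = &table[state][sym_index(tape[head])];
        tape[head] = r->write;  /* only the tape - data - changes  */
        head += r->move;
        state = r->next;
    }
    printf("%s\n", tape);       /* prints 1111111_                 */
    return 0;
}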

> (Remember that a universal Turing machine interprets instructions
> read from the tape.)

Yes, and the tape may change - but that doesn't constitute changes in
the code being executed by the simulated Turing machine.
[Back to "William R. Buckley" <wrb(a)wrbuckley.com>]
> I did not contradict myself. I admit fully that the ideal TM has
> infinite memory. I also note that typical, contemporary computers
> are not exactly a TM. Yet, they are computationally equivalent,

Not if there are computations that can be performed on one but not the
other.
And there are computations that can be performed on a Turing machine
with infinite - or unbounded - tape which cannot be performed on any
existing, or foreseeable, computer. (As an example, consider computing
Ackermann's function with arguments 100,100.) Therefore they are not
computationally equivalent.
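
To put a number on it (my own worked example): here is Ackermann's
function, transcribed directly from its usual definition. It is
total, so a machine with unbounded storage always finishes; but even
A(4,2) already has 19,729 decimal digits, and A(100,100) dwarfs any
storage the physical universe could supply.

#include <stdio.h>

/* Ackermann's function:
 *   A(0, n) = n + 1
 *   A(m, 0) = A(m-1, 1)
 *   A(m, n) = A(m-1, A(m, n-1))    otherwise                     */
static unsigned long ack(unsigned long m, unsigned long n)
{
    if (m == 0) return n + 1;
    if (n == 0) return ack(m - 1, 1);
    return ack(m - 1, ack(m, n - 1));
}

int main(void)
{
    /* Feasible on real hardware:                                 */
    printf("A(2,3) = %lu\n", ack(2, 3));  /* 9  */
    printf("A(3,3) = %lu\n", ack(3, 3));  /* 61 */
    /* ack(100, 100) would exhaust any real machine's storage.    */
    return 0;
}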
[more "William R. Buckley" <wrb(a)wrbuckley.com>]
>> The important point for computation is closure, [...]

> In the case of Turing closure, the notion is much broader. Turing
> closure refers to the ability of a system to perform any and all
> computations that can be expressed.

This is not a particularly interesting notion, as what computations
can be expressed depends on your expressive notation.
For example, if I choose the notation of real analysis and calculus,
there are computations that can be expressed fairly easily which cannot
be carried out by a Turing machine - because Turing machines cannot
work with real numbers, only discrete approximations to them. (Any
numerically unstable iterated computation, applied to a transcendental
number like pi, will do fine.)
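
For a concrete instance (my own example; it only illustrates the
flavour of the problem): iterate the unstable map x -> 4x(1-x)
starting from pi/4. Over the reals that defines one exact sequence;
below, two different discrete approximations of it - single and
double precision - agree at first and then part company completely.

#include <stdio.h>

int main(void)
{
    /* pi/4, as closely as each format can represent it.        */
    float  xf = 3.14159265358979323846f / 4.0f;
    double xd = 3.14159265358979323846 / 4.0;
    int i;

    for (i = 1; i <= 40; i++) {
        xf = 4.0f * xf * (1.0f - xf);  /* numerically unstable:  */
        xd = 4.0 * xd * (1.0 - xd);    /* errors roughly double  */
                                       /* every iteration        */
        if (i % 10 == 0)
            printf("step %2d: float %.6f  double %.6f\n",
                   i, (double)xf, xd);
    }
    return 0;
}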

/~\ The ASCII                             der Mouse
\ / Ribbon Campaign
 X  Against HTML               mouse(a)rodents.montreal.qc.ca
/ \ Email!         7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B