On Thu, Apr 03, 2025 at 12:44:47AM -0600, ben via cctalk wrote:
On 2025-04-03 12:16 a.m., Johan Helsingius via cctalk wrote:
Indeed. As I wrote it has no actual
understanding. It just combines
things based on statistics.
It is like Mark V. Shaney, the chatbot that Rob Pike and
Bruce Ellis did in the 1980s, but with enormous amounts
of computing power (and source material from the net)
thrown at it.
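(For concreteness: Mark V. Shaney worked by chaining word
pairs from its source material. A minimal sketch of that
kind of statistical text generator, assuming a second-order
word chain, not Pike and Ellis's actual code:)

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        prefix = tuple(words[i:i + order])
        chain[prefix].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Walk the chain from a random prefix, picking each
    next word at random from the recorded followers."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    while len(out) < length:
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Everything it emits is locally plausible (each word really did
follow the previous pair somewhere in the source), with no model
of meaning anywhere; the point of the comparison is that an LLM
does the same kind of thing at vastly greater scale.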
Other than in Sci-Fi novels and movies, what use is an AI?
The issue arises in that in the popular conception AI is
a thing where a grammatical construction like "an AI"
makes sense. However, that's not the way to look at it.
AI is a subject of study, like Physics or Electrical
Engineering. One doesn't talk about "a physics" or
"an electrical engineering." So what is the subject
of AI? It studies the phenomenon of intelligence
and the mechanisms of the brain, using the computer
as a laboratory, in ways we can't with ourselves.
What then is an LLM in that context? It's an experiment
in the subfield of AI called Natural Language Processing.
Is it possible for such an experiment to produce a
useful tool? Maybe, but so far I haven't seen anything
to suggest that this one has. All of the output
I've seen triggers a recognition in my brain of a
student who doesn't understand the question and is
throwing information at it, hoping something
will stick. The fact that a significant part of the
world seems to see that behavior as an example of
intelligence is the depressing part.
BLS