That is not a solution either: it just locks the AI into a circa-2015
time period where it can't adapt to changing writing or speaking
styles. All the output is going to sound like an outdated person.
I'm already seeing this in "appliance repair" articles that are just AI
bots reading other AI bots' info and turning the results into complete
garbage. When you have one site written by a human and 100 sites that
are AI copies, guess what the probability is that your new "repair
site" is going to get accurate data rather than garbage.
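
Back-of-the-envelope, here's a minimal sketch of that, assuming the
scraper picks a source uniformly at random and each AI re-copy
preserves only some fraction of the facts (the 90% fidelity figure is
my assumption, not a measurement):

    # Sketch of the "one human site, 100 AI copies" scenario.
    # Assumed model: a scraper samples one source uniformly at random;
    # each AI re-copy keeps only FIDELITY_PER_COPY of the facts intact.
    HUMAN_SITES = 1
    AI_COPIES = 100
    FIDELITY_PER_COPY = 0.9  # assumed, not measured

    def expected_accuracy(generations: int) -> float:
        """Chance a randomly sampled source still has an accurate fact."""
        p_human = HUMAN_SITES / (HUMAN_SITES + AI_COPIES)
        # AI copies are `generations` steps removed from the human original.
        return p_human + (1 - p_human) * FIDELITY_PER_COPY ** generations

    for gen in (1, 3, 5, 10):
        print(f"{gen:2d} copy generations: "
              f"{expected_accuracy(gen):.1%} accurate")

Even with a generous 90% per-copy fidelity, that new "repair site" is
mostly pulling garbage once the copies start copying each other.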
What's needed is to enslave humans to write new content to feed the AI
systems. :-)
On 1/20/2025 4:49 AM, Adrian Godwin via cctalk wrote:
They guard against that now by using outdated training data. Not an
approach that will work more than once.
I think we'd all be happier if AI output had to be marked as such.
On Mon, Jan 20, 2025 at 7:02 AM Christopher Zach via cctalk
<cctalk(a)classiccmp.org> wrote:
> As you get more AI systems running you just run into the xerographic
> effect and it all falls down.
>
>> On Jan 19, 2025, at 1:11 PM, Chuck Guzis via cctalk
>> <cctalk(a)classiccmp.org> wrote:
>>
>> On 1/19/25 09:39, Carlos E Murillo-Sanchez via cctalk wrote:
>>
>>> What happened to them? They're everywhere in academic papers. They've
>>> become an easy way to get published. Nowadays they make up so many
>>> bio-inspired variations of these algorithms that I think they are
>>> going to run out of animal names to assign to them. Most of these
>>> publications are trash, but there are a handful of genuine applications.
>>
>> I recall reading that behavior predictions made by genetic algorithms
>> would enable one to corner the US stock market. Of course, that was
>> before 2008.
>>
>> --Chuck
>>
>