On 1/18/23 09:44, Chuck Guzis via cctalk wrote:
> On 1/18/23 04:16, Peter Coghlan via cctalk wrote:
>> If these AI weapons are implemented with current computing
>> technology, it's hard to imagine them managing to take over the
>> world before they crash/panic/BSOD/bomb etc depending on their
>> specific flavour and/or get bricked by malware and/or fail due to
>> leaking/dried up electrolytic capacitors and/or batteries. It's
>> hard enough to keep systems going without interruption when this
>> is what people are actively trying to do.
>>
>> In any case, how are they going to prevent the humans from
>> cutting off their power?
>
> Just imagine, instead of a soldier or airman sitting at a display
> picking out targets for an airborne drone, doing away with the
> psychological stress and uncertainty
My apologies, this is the kind of crap that gets sent when I try to
trim a post. I've now set my email client to disable HTML rendering,
so maybe the following will come through okay:
Just imagine, instead of a soldier or airman sitting at a display
picking out targets for an airborne drone, doing away with the
psychological stress and uncertainty and letting an AI select and
attack targets. As far as I can tell, this doesn't violate any
conventions.
How about artillery using the same system? The AI will be distant
from the actual weapon, so there's no concern about cutting off its
power.
Of course, I'm stating the obvious--I would be very surprised if various
governments weren't already developing platforms based on AI.
--Chuck