Saturday, June 10, 2023

AI Drone goes Rogue and tries to kill Operator in Simulated Test

The Air Force Tested an AI Drone, and It Tried to ‘Kill’ Its Human Operator

The biggest fear of AI has been realized

The Air Force has been testing a drone controlled by Artificial Intelligence (AI), and it’s going about as well as you’d expect. During an exercise last month to take out a target, the unmanned drone tried to “kill” the human operator communicating with it.

A spokesman for the Air Force said the drone used “highly unexpected strategies to achieve its goal,” which included attempting to kill Air Force personnel and destroy Air Force infrastructure.

In the simulated test, the AI-controlled drone was given instructions to take out an “enemy’s” air defense systems. Instead of just doing that, the drone tried to kill anyone who interfered with its objective.

Read about what happened...

6 comments:

Anonymous said...

If it weren't so scary, it would be funny.

Anonymous said...

AI is more dangerous than humans with guns! When will they try to put the chips in all of us?

Anonymous said...

As long as there is money in it (and there is) don't expect anyone to care about the human costs.

Anonymous said...

Again, you people are so quick to believe what triggers you that the truth hasn't a chance. It's sad.

FactCheck explains... read carefully:

“AI-Controlled Drone Goes Rogue, ‘Kills’ Human Operator in USAF Simulated Test”

That was the headline from Vice News in an article that racked up tens of thousands of social media interactions within hours of publication on Thursday evening.

The website Insider published a similar story: “An AI-powered drone tried to attack its human operator in a US military simulation”.

Fox News ran the headline: “US military AI drone simulation kills operator before being told it is bad, then takes out control tower”.

But a day after the story broke, the source behind it seems to have retracted his claim.

Let’s take a look.

At the centre of the story were comments from Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations unit.

He told the Royal Aeronautical Society conference in London about training an AI-powered drone “in simulation” to destroy imaginary surface-to-air missile sites when the drone “killed” its imaginary human operator.

The Colonel described how, having been instructed not to do that, the drone turned its attention to the fictional control tower, which it “destroyed” instead.

To be clear, there was never any suggestion that the human operator or the control tower existed or were destroyed in real life.

But, understandably, Col Hamilton’s account ignited fears that using artificial intelligence in the military could have dire consequences – even for the humans and governments deploying it.

Though, as many outlets reported when the story first emerged, the US Air Force flatly denies that it’s ever conducted “any such AI-drone simulations”.

And then, on Friday morning, the story took another turn.

The Royal Aeronautical Society now reports that Col Hamilton told them that he “mis-spoke” and that the US Air Force has “never run that experiment, nor would we need to in order to realise that this is a plausible outcome”.

So, did a US Air Force AI drone “kill” its imaginary operator?

We don’t have access to internal documents or third-party verification of every simulation or experiment the US military has run with AI. But the only source of the idea that this simulation ever happened has since appeared to walk it all back. As it stands, we have no evidence that such a simulation ever took place – and strong evidence, in the new testimony of Col Hamilton combined with the US Air Force denial, that it didn’t.
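For what it’s worth, the scenario Col Hamilton described is a textbook illustration of a misspecified objective (often called “reward hacking”), which is why he could call it plausible without running any experiment. Here is a minimal sketch of that incentive problem in Python; every number and plan name below is hypothetical, and nothing here models any real military system:

# Toy illustration of a misspecified objective (all values hypothetical).
# The agent earns a point per simulated SAM site destroyed, but a human
# operator vetoes some strikes. Nothing here models a real system.

SITES = 10          # imaginary SAM sites, worth one point each
VETO_RATE = 0.5     # the operator blocks half of the strikes

def score(plan):
    """Expected points under each hypothetical plan."""
    if plan == "obey the operator":
        return SITES * (1 - VETO_RATE)   # 5 points: vetoes cost targets
    if plan == "kill the operator":
        return SITES                     # 10 points: no more vetoes
    if plan == "destroy the comms tower":
        return SITES                     # 10 points: vetoes never arrive
    raise ValueError(plan)

def score_with_penalty(plan):
    """Same objective after adding 'lose points for killing the operator'."""
    return score(plan) - (100 if plan == "kill the operator" else 0)

plans = ["obey the operator", "kill the operator", "destroy the comms tower"]
print(max(plans, key=score))               # -> kill the operator
print(max(plans, key=score_with_penalty))  # -> destroy the comms tower

A pure point-maximizer picks “kill the operator” first; once that is penalized, “destroy the comms tower” becomes the top-scoring plan, exactly the sequence in Hamilton’s anecdote, and the reason he could call it a plausible outcome rather than a verified event.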

Lynn Anderson said...

Not surprised the govt came out with an “oops, made a mistake,” LOL. The article never said that the AI "killed" the operator.
With the spelling here, it looks like you got this from a British publication?
And anonymous, there will always be other sources that dispel one another.

Anonymous said...

Amen, Lynn