TERMINATOR? Gen Mark Milley warns US against dangers of military AI going rogue

Chairman of the Joint Chiefs of Staff General Mark Milley testifies during a Senate Armed Services Committee hearing on Capitol Hill in Washington, DC, April 7, 2022 (L) and an image of an AI-powered futuristic robot from the movie Terminator (R).—AFP/File

US Army General and former chairman of the Joint Chiefs of Staff Mark Milley warned the US to “be prepared” as artificial intelligence (AI), an advanced technology every nation has access to, makes its way into the military and drastically alters the nature of war in the near future.

In a recent interview for 60 Minutes, Milley shared these predictions while sitting aboard the USS Constitution, the oldest commissioned warship still afloat.

“Our military is going to have to change if we are going to continue to be superior to every other military on Earth,” Milley said during the interview.

According to Milley, AI will speed up and automate the so-called OODA loop (observe, orient, decide, and act), the decision cycle meant to outwit an adversary, CBS News reported.

More than two centuries ago, this strategy looked like Napoleon getting up in the middle of the night to issue orders before the British woke up for tea, Milley explained.

Soon, it will be computers automatically analyzing information to help make decisions about where to move troops and when.

“Artificial intelligence is extremely powerful,” Milley said. “It’s coming at us. I suspect it will be probably optimized for command and control of military operations within maybe ten to 15 years, max.”

Currently, all decisions must pass through a human OODA loop under Department of Defense (DoD) policy, which requires that even fully autonomous weapons systems “allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

Deputy Secretary of Defense Kathleen Hicks says this requirement will apply to a new DoD program called “Replicator,” a Pentagon project designed to counter the sheer size of China’s military.

The program aims to produce thousands of autonomous weapons systems powered by AI.

“Our policy for autonomy in weapon systems is clear and well-established: There is always a human responsible for the use of force. Full stop,” Hicks said last month. “Anything we do through this initiative or any other, must and will adhere to that policy.”

The International Committee of the Red Cross says autonomous weapons — including those that use AI — could lead to unintended consequences, like civilian casualties or an escalation of conflict.

In response to the question of whether AI will make war more likely, Milley said: “It could. It actually could. Artificial intelligence has a huge amount of legal, ethical, and moral implications that we’re just beginning to start to come to grips with.”