Friday, January 24, 2020

In the very near future, soldiers on the battlefield may depend more on artificial intelligence than their own comrades. In fact, it doesn’t take wild conjecture to believe that soldiers won’t need to be anywhere near the battlefield but could instead remotely operate sophisticated, intelligent and sensor-laden weapons of war possessing innate problem-solving abilities.

It would be a completely different type of warfare—and it could be coming in 15 years or less.

While Army leaders are determined that humans will always be in the loop when the fighting turns deadly, they are equally determined to take full advantage of what science and innovation can provide to guarantee U.S. advantages in future warfare. And as weapons grow more lethal and future battlefields are widely expected to become deadlier, a soldierless battlefield holds considerable appeal. “Only a human being can bring context to a decision,” Secretary of the Army Ryan McCarthy said while speaking at the American Enterprise Institute in November. “As this technology gets more and more mature and gets fed into a weapons system, those are very challenging times ahead.”

Be Prepared

The most significant challenge in artificial intelligence (AI), as with any new technology, will be understanding its strengths and how it can best be applied in a military setting, said Paul Scharre, a defense expert and former Army Ranger who served in Iraq and Afghanistan.

“Technology has changed how we do business, and we see throughout the history of warfare these periods of very disruptive change where the way of fighting changes,” Scharre said. “We need to be prepared for the reality that the ways of fighting on the ground may radically change in the coming years.”

The key to success might be finding the right tasks in the right settings, he said, also warning that “uncontrolled environments” can pose challenges.


Army researchers oversee a robot acting as a forward observer that has identified a possible enemy position.
(Credit: U.S. Army/Tamara Williams)

The right system does not yet exist, Scharre said. “None of the AI systems we’re talking about today exhibit the kind of general-purpose intelligence that people have, where people can flexibly adapt on the fly to new circumstances,” he said. Instead, AI will be used in more narrow ways to augment human warfighters, he said.

Machines currently face limitations in being able to learn new things like humans can, he said, so AI systems are used in combination with people.

“The idea that we’re just going to build some AI system and then send it on its own and it’s just going to do its own thing, never checks in with a person, is not realistic,” Scharre said.

Instead, Scharre said, DoD is looking at a “centaur model” of human-machine teaming in which controlled, specific tasks, such as an airliner’s autopilot or a car’s cruise control, are handed to machines and overseen by humans.

“The person’s always responsible for getting the job done, but there may be situations where it makes sense to hand over controlled-specific tasks to the machine,” Scharre said. “There are simply going to be places where we can use AI to advantage, and the challenge is figuring out how to do that.”
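The centaur model Scharre describes can be sketched in a few lines of code. This is purely an illustrative toy, not any Army or DoD system: a machine handles one narrow, well-bounded task (holding a set speed, like cruise control) while the human operator retains override authority at every step.

```python
# Illustrative sketch only: a "centaur" control loop in which a machine
# performs a narrow task -- here, nudging speed toward a target, like
# cruise control -- while the human can take over at any tick.

def machine_step(current_speed: float, target_speed: float) -> float:
    """Machine's narrow task: a toy controller that closes on the target."""
    return current_speed + 0.5 * (target_speed - current_speed)

def centaur_loop(events):
    """The human is always responsible; the machine acts only between overrides."""
    speed, target = 50.0, 65.0
    log = []
    for human_override in events:  # each tick, the human may seize control
        if human_override is not None:
            speed = human_override       # human command wins immediately
            log.append(("human", speed))
        else:
            speed = machine_step(speed, target)
            log.append(("machine", speed))
    return log

# The machine closes on 65 mph until the human brakes to 30 on the third tick.
history = centaur_loop([None, None, 30.0, None])
```

The design point the sketch makes is Scharre’s: the machine never owns the mission, it only executes a bounded task between human decisions.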

An Enabling Technology

Much of the Army’s current AI capabilities are operated or assisted by humans, but senior leaders are envisioning a future when these systems can be optionally manned, making soldier-robot teaming a possibility on the battlefield. 

Brig. Gen. Matthew Easley, director of the Army Artificial Intelligence Task Force at the U.S. Army Futures Command, said the Army sees AI as an enabling technology for all modernization priorities, from Future Vertical Lift and long-range precision fires to soldier lethality.

“It’s critically important that we enhance our current capabilities with AI,” said Easley while speaking at the Association of the U.S. Army Annual Meeting and Exposition in October. Current unmanned aerial vehicles that fly with attack helicopters, for example, have low-level capabilities for autonomous work, he said.

“[We’re] attempting to advance those systems to make them much more friendly, where these soldiers don’t need to spend all their time focusing on remotely piloting these systems and instead can do other tasks,” Easley said.

From Boston Dynamics’ robotic dog to the Army’s Robotic Manipulator, systems are being built to become humans’ counterparts and teammates, ready to assist.

Part of decade-long research led by the U.S. Army Combat Capabilities Development Command Army Research Laboratory and its partners, “RoMan” was designed with arms and hands to clear heavy objects and other road debris from military vehicles’ paths, and it was recently put through field exercises.

According to the Army, RoMan was able to clear debris, drag heavy objects and open a container, and its soldier teammates were able to give verbal commands to the robot using natural language.

“That is a big win for the Army wanting to deploy robots, because now robots can be used to … help reduce obstacles in a way that they hadn’t previously been able to do,” said Stuart Young, a division chief at the Army Research Laboratory (ARL).
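Taking a verbal command like “drag that log off the road” and turning it into a robot action is a language-grounding problem. The ARL work uses learned grounding; the toy sketch below only illustrates the basic mapping from an utterance to an action and an object, and every name in it is hypothetical, not part of the RoMan system.

```python
# Toy illustration of command grounding (not the ARL implementation):
# map a natural-language utterance to a (robot action, target object) pair
# with simple keyword matching. Vocabulary here is invented for the example.

ACTIONS = {
    "clear": "CLEAR_DEBRIS",
    "drag": "DRAG_OBJECT",
    "open": "OPEN_CONTAINER",
}
KNOWN_OBJECTS = {"debris", "log", "container", "crate"}

def ground_command(utterance: str):
    """Return (action, target) parsed from a verbal command, or None if unknown."""
    words = utterance.lower().split()
    action = next((ACTIONS[w] for w in words if w in ACTIONS), None)
    target = next((w for w in words if w in KNOWN_OBJECTS), None)
    return action, target

# e.g. "RoMan, drag that log off the road" -> ("DRAG_OBJECT", "log")
```

A learned grounding model replaces the keyword lookup with models trained on paired language and sensor data, which is what lets soldiers speak to the robot the way they would to a teammate.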

Building Trust

Ethan Stump, a robotics scientist at ARL, said robots and soldiers training together can be one way to build trust.

“We’re imagining a future where robots will actually be a much more organic asset,” Stump said. While robots may not be issued to every soldier, Stump said he can see a future where every squad will have a robot and soldiers will train with it—not only to learn how the robot will react, but also to learn how to react to and understand the robot.

“As a squad sort of develops their own unique tactics of how they like to operate together, the robot is also picking up on that, and it’s training with the squad,” said Stump, adding that the robot would learn not just how to be part of the squad but also how it can best operate within it.


A soldier tests a robotic vehicle on an expeditionary air bridge in Hawaii.
(Credit: U.S. Air Force/Staff Sgt. Christopher Hubenthal)

“Training becomes kind of a two-way street,” Stump said.

In the future, robots will be able to learn by example in ways similar to humans without humans necessarily needing to program the systems to understand new rules, he said.

“We’re going to be able to learn very complicated sorts of activities and maneuvers and behaviors simply from demonstration,” Stump said. Researchers are using natural language, deep learning and language grounding to help robots understand their surroundings. 
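Learning from demonstration, at its simplest, means recording (state, action) pairs while a human operates and then having the robot imitate the recorded behavior in new states. The ARL research uses deep learning for this; the sketch below swaps in a nearest-neighbor lookup, the most minimal form of behavior cloning, just to show the idea. The states and actions are invented for illustration.

```python
# Minimal behavior-cloning sketch (illustrative only; ARL uses deep learning):
# the robot imitates whichever demonstrated state is closest to its current one.

def nearest_demo_action(demos, state):
    """demos: list of ((x, y), action); return the action of the nearest state."""
    def dist2(s):
        return (s[0] - state[0]) ** 2 + (s[1] - state[1]) ** 2
    closest = min(demos, key=lambda d: dist2(d[0]))
    return closest[1]

# Demonstrations: near an obstacle at x=1 the human steered left,
# elsewhere the human drove forward.
demos = [((0.0, 0.0), "forward"),
         ((1.0, 0.2), "steer_left"),
         ((2.0, 0.0), "forward")]

# The robot generalizes to an unseen state by imitating the nearest demo.
action = nearest_demo_action(demos, (0.9, 0.1))   # near the obstacle
```

No rules were programmed in; the behavior comes entirely from the demonstrations, which is the point Stump is making about learning “simply from demonstration.”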

This framework, Young said, “allows the soldiers to interact with the robot in a way that they would naturally interact with a teammate.”

Us Versus Them

Trust between soldiers and machines is a complex issue, as AI capabilities are often still viewed with an us-versus-them mentality—and trust can be impacted by a robot’s transparency and predictable behavior, Young said.

“If you’re going hunting with a hunting dog, you train the dog to help you hunt and do things, and its behaviors may not be perfect, but they’re predictable,” Young said. “That really helps to engender trust in the human or the soldier in the system.”

New Army-led research shows confidence in a robot decreases after it makes a mistake, even if it’s able to explain its reasoning process, but transparency from the robot can help.


A soldier works with a robot in the field.
(Credit: U.S. Army /Pat Molnar)


Robots are controlled remotely by a South Carolina National Guard soldier during route clearance training.
(Credit: Army National Guard/2nd Lt. Jorge Intriago)

In this study, a small ground robot that typically interacts and communicates with an infantry squad navigated a training course while teamed with a human. The team responded to situations along the course and, while the human always acted correctly, the robot made occasional errors.

The study showed that errors damaged the human’s trust in the robot and perception of its reliability, and the human’s confidence didn’t fully recover even when the robot made no further errors.
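The asymmetry the study found, trust falling sharply after a mistake and climbing back only slowly, can be captured in a toy model. This is not the study’s actual analysis, and the parameters are invented; it only illustrates why trust may never regain its pre-error level within a short mission.

```python
# Toy trust-dynamics model (illustrative, not the Army study's analysis):
# a mistake multiplies trust down sharply; each success recovers only a
# small fraction of the remaining gap, so recovery lags the loss.

def update_trust(trust, robot_erred, gain=0.05, penalty=0.4):
    if robot_erred:
        return trust * (1 - penalty)        # sharp loss on a mistake
    return trust + gain * (1.0 - trust)     # slow recovery on each success

trust = 0.8
history = [trust]
for erred in [False, False, True, False, False, False, False]:
    trust = update_trust(trust, erred)
    history.append(trust)
# After one error, four consecutive successes still leave trust well
# below its pre-error level.
```

With a large penalty and a small gain, one error undoes many successes, mirroring the study’s finding that confidence does not fully rebound.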

The us-versus-them mentality in human-robot teaming can affect how soldiers approach these systems in real-life scenarios.

According to another study by ARL, humans tend to treat machines like outsiders through social categorization—the process of classifying people into groups based on characteristics—and that can lead to reduced cooperation between them. Research showed that bias can be overcome when the machine uses situational cues, such as cooperative or competitive emotions, and its intent is clear.

Into the Future

How exactly the Army plans to incorporate robots into its formations and how it will address inevitable challenges—soldiers dealing with glitches in a combat zone, protecting systems from cyberattacks, and the consequences of reduced soldier trust in robotic companions, to name a few—remains unknown for now.

While AI is vital to helping the Army save lives, augment soldier performance and maintain its competitive advantage on a global scale, McCarthy said it will change the future in more ways than Americans are prepared for.

“We’ll have a human in the loop, but that’s something that policymakers will face in the years to come,” McCarthy said. “Will China and Russia behave the same way?”

* * *

AI Ethics

Artificial intelligence is vital to national security and maintaining a competitive lead, but success takes more than technical advantage—it takes leadership, too. The Defense Innovation Board released recommendations for ethical use of artificial intelligence by DoD. According to the board, DoD’s AI-centered goals should be:

  • Responsible
  • Equitable
  • Traceable
  • Reliable
  • Governable