Understanding Is Key to Trusting Robots
Soldiers fighting alongside robots on future battlefields will build trust with their unmanned comrades only if they understand why the machine is doing what it’s doing.
Before trust in artificial intelligence can be established, soldiers must first feel they have ownership of their robots. The technology must also be transparent enough "to make sure that humans can understand and interpret why a machine is making the decision it's making," Ken Fleischmann, associate professor at the University of Texas at Austin's School of Information, said recently at a Mad Scientist conference in Austin.
Fleischmann pointed out that while the development of artificial intelligence has yielded brilliant outcomes with chess and other games, it’s quite another thing to have a robot that will “behave ethically on the battlefield.”
"There will be plenty of cases that will be literally kill or be killed, in which case the choice of whether to shoot before knowing [if] someone is a threat" will come into play with potentially dire consequences, he said. Fleischmann offered a hypothetical scenario in which robots fighting on the same side as soldiers "could cause their human comrades to be injured."
The rules of engagement on a battlefield, he said, “are far more complex” than the current laws governing technology. Understanding why AI chooses to do one thing versus another “is critical to forming an informed trust judgment,” he said.
"If you don't understand why something is telling you to do something, you are forced to arbitrarily choose one way or the other with no insights into what's going on. That's going from transparency to trust to human agency," Fleischmann said.
The Mad Scientist Conference on future operating environments is part of a series of events organized by the U.S. Army Training and Doctrine Command and the U.S. Army Futures Command.