Mad Scientists Focus on Ethical Use of AI

Automated combat vehicles. (Photo: U.S. Army/Jerome Aliotta)

As the U.S. military continues to explore the use of artificial intelligence in weapons, efforts are underway to establish ethical principles that would govern how such systems are used.

A basic framework has been created that requires a human in the decision-making chain for future weapons, experts said Feb. 9 during a webinar that is part of Army Futures Command’s Mad Scientist series.

Following ethical principles is key to the effort, said Alka Patel, head of artificial intelligence ethics policy at the Defense Department’s Joint Artificial Intelligence Center. The military needs to trust both the technology and the people operating it, she said, warning that there is always a fear of moving too fast with a technology that is still evolving and not well understood.

The training process for the soldiers who will operate these systems carries long-term implications, she said. “The end user really needs to understand these technologies,” Patel said. Congress has passed a risk-assessment framework for the military’s use of artificial intelligence tools, she added.

It can be difficult to separate friendly from unfriendly combatants, which complicates the development of autonomous weapons, said Philip Root of the Defense Advanced Research Projects Agency. That is already a problem for soldiers dealing with civilians on the battlefield, and it becomes even more complicated when an autonomous system is in combat, said Root, a retired Army lieutenant colonel who is deputy director of DARPA’s Defense Sciences Office.

Sophisticated automated weapons will need the ability to tailor their decisions to different situations, Root said. Commanders and troops using the systems must understand the decision-making process, even when it happens far faster than a human can think, and must trust the decisions that result, he said.

Machine-made decisions must be transparent so they can be understood in after-action reviews, and the machines must operate under controls, in both training and combat, that ensure they follow guiding ethical principles and behaviors. There must also be a way to order them to disengage or deactivate in an instant, he said.