Panel: Army Shouldn’t Rely on AI Models Like ChatGPT

As the U.S. Army works to become a more data-centric organization, the one thing it should not do, according to a panel of experts, is rely too heavily on large language models, a form of artificial intelligence, to get there.

“Don’t do it,” said Stephen Riley, a customer engineer on Google’s Army engineering team, in a discussion July 2 at the Association of the U.S. Army’s Hot Topic on cyber and information advantage.

“It’s the easy button,” he said, but using large language models, or LLMs, such as ChatGPT or Google’s Gemini is akin to “boiling the ocean to make yourself a cup of coffee.” The main obstacle, he said, is the lack of sufficient computing power at the tactical level.

Instead of relying on LLMs, Google built a “knowledge graph” that mapped what Riley called “commonly accepted truths.” Comparing information in the knowledge graph to the search engine results that users found most useful enabled Google to fine-tune its search algorithm to deliver useful answers that “conform with generally accepted truth,” Riley said. Doing this required serious computing power but not the use of LLMs, he said.
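Riley’s description matches the common picture of a knowledge graph as a store of subject-predicate-object facts that candidate answers can be checked against. The Python sketch below is a hypothetical, deliberately minimal illustration of that idea, not Google’s implementation; the facts, names and exact-match check are invented for illustration, and a production system would add entity resolution, confidence scoring and graph traversal at scale.

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """One fact in the graph: subject, predicate, object."""
    subject: str
    predicate: str
    obj: str

# Hypothetical "commonly accepted truths" encoded as triples.
KNOWLEDGE_GRAPH: set[Triple] = {
    Triple("Earth", "orbits", "Sun"),
    Triple("Water", "boils_at_sea_level_c", "100"),
}

def conforms(candidate: Triple) -> bool:
    """Return True if a candidate fact matches an accepted triple.

    Exact matching keeps the idea visible; a real system would
    resolve entities and score partial matches instead.
    """
    return candidate in KNOWLEDGE_GRAPH

# A fact asserted by a search result can be scored against the graph:
print(conforms(Triple("Earth", "orbits", "Sun")))   # True
print(conforms(Triple("Earth", "orbits", "Moon")))  # False
```

In this toy framing, results asserting facts that conform to the graph would be ranked up, which is the fine-tuning loop Riley described at a conceptual level.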

Building an Army-centered knowledge graph would require significantly less computing power than trying to build an Army-specific LLM, Riley said.

Young Bang, the Army’s principal deputy assistant secretary for acquisition, logistics and technology, and the panel’s moderator, asked how the Army could prevent foreign malign influence from overwhelming that knowledge graph and artificially shifting the window of accepted truths.

Riley emphasized the importance of human oversight. Google has “humans in the loop,” he said, who watch for trends and potential shifts in what society accepts as true. “We cannot abdicate human reasoning to the machines,” he said.

Edward Kao, an intelligence and decision technologies researcher at the Massachusetts Institute of Technology’s Lincoln Laboratory, said a human analyst offers more than a check on whether content is accurate. Human analysts provide “the coloring, the context and the mission interpretation” that cannot be expected of machines, Kao said. “Machines are really good at aggregating information,” he said, but humans provide interpretation “within the context of the overall information landscape.”

“The Army has always been forward-thinking,” said Michael DiOrio, senior vice president of global operations at Dataminr. The key to success, he said, is increased collaboration between the Army and industry.

“There’s nothing that we want to do with Army information that the commercial world hasn’t already done,” Riley said.