Harnessing the Algorithm: Shaping the Future of AI-Enabled Staff

by MAJ Matt Tetreau, USA
Harding Paper 25-2, April 2025
In Brief
- The value of decisionmaking processes is in experiential learning for staff, rather than any output or product. These processes are critical to fostering shared understanding and adaptability.
- Artificial intelligence (AI) tools promise increased efficiency for commanders and staff as well as accelerated decisionmaking.
- Operational staff of the near future must retain some traditional staff functions to balance the value of the process with the AI-enabled speed of decision demanded by modern combat.
Introduction
Artificial intelligence (AI) promises to enable commanders and staff to make better informed, faster decisions. Integrating and analyzing the vast quantities of data available to commanders today all but necessitates reliance on AI tools. What’s more, as adversaries adopt AI planning tools, the speed of decision required to fight and win is likely to outpace the capabilities of human cognition. Despite the advantages afforded by AI, I propose that human staff should retain certain functions, even at some cost in efficiency. Just as AI is too critical for the future of mission command to ignore, some tasks are too critical for generating understanding to delegate to an algorithm. My arguments against AI performing specific functions focus primarily on the value that staff derive by executing those processes and the imperative to solve the right problems.
President Dwight D. Eisenhower’s dictum that “plans are worthless, but planning is everything,” underscores the value of process over output. While AI tools will likely produce serviceable plans, military professionals must grapple with the advantages in speed and manpower afforded by AI relative to the understanding and adaptability that result from deliberate planning processes. Decisions made by the Army in the coming years will set institutional norms, standards and approaches to harnessing this critical technology. Integrating AI into operational headquarters requires a deliberate and nuanced approach that synergizes the unique benefits afforded by both AI and humans. I propose a framework for an AI-enabled staff that retains and amplifies distinctly human competencies.
The Future Is Now
Over the past few years, developers have released multiple AI tools built for, or in conjunction with, the DoD. These programs span a spectrum of applications, from productivity tools (like ChatGPT) to those with the potential to support operational decisionmaking. Some, such as CamoGPT and NIPRGPT, are military analogs to popular chat-based text-generation programs. In another promising application, Marine Corps University uses large language models (LLMs) to support educational wargames and simulations for field-grade officers’ professional military education.[i]
AI-enabled planning is not a science fiction pipe dream, but a feature of the current state of the professional art for military planners. Perhaps the best example of an AI operational planning tool in development today is Course of Action GPT (COA-GPT). According to the algorithm’s developers, “COA-GPT leverages LLMs to swiftly develop valid COAs . . . Commanders can input mission specifics and force descriptions—in both text and image formats—receiving multiple, strategically aligned COAs in a matter of seconds.”[ii] If adequately matured, the benefits of such a tool are evident. The promises of accelerated decisionmaking processes and smaller headquarters footprints could prove advantageous, if not decisive, in future high-intensity conflicts.
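To make the interaction pattern concrete, the sketch below is a minimal, hypothetical illustration of how a planner might query a general-purpose LLM for candidate COAs. It uses the publicly available openai Python client rather than COA-GPT itself; the model name, prompt wording and mission details are invented for illustration and are not drawn from any fielded or developmental system.

```python
from openai import OpenAI  # generic chat-completion client; not the COA-GPT API

client = OpenAI()  # assumes an API key is configured in the environment

# Human-defined problem parameters: mission, friendly forces, enemy situation.
# All details below are notional and invented for illustration.
mission = (
    "Seize OBJ EAGLE NLT 0600 to enable follow-on forces to cross the river. "
    "Friendly: one armored brigade combat team. Enemy: a defending mechanized "
    "battalion with prepared obstacles along the main avenue of approach."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a military planning assistant. Given a mission and "
                "force description, propose three distinct courses of action, "
                "each with a scheme of maneuver and the main risks."
            ),
        },
        {"role": "user", "content": mission},
    ],
)

print(response.choices[0].message.content)
```

Even in this toy example, the human still frames the problem: the mission statement, the constraints, and the number and form of the requested COAs are all supplied by the planner, a point taken up in the next section.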
Too Important to Be Left to the Algorithms
AI can produce plans, but can it replicate the benefits associated with the planning process? Planning is, after all, more than COA development or orders production. The value of decisionmaking processes, such as the Army Design Methodology or the Military Decisionmaking Process, is in experiential learning for staff, rather than any output or product. These processes are critical to fostering shared understanding and adaptability. While AI may soon be able to perform many of the functions associated with military staff, I propose that, at least in the near term, humans should retain those processes that foster shared understanding and facilitate adaptability.
Delegating higher-order cognitive tasks to AI risks sacrificing the shared understanding that results from rigorous collective analysis. In most cases, the processes associated with generating understanding are more important than any tangible outputs. For example, problem framing typically involves grappling with the complexities of the political environment, policy nuance and the nature of the military problem.[iii] While problem framing outputs, such as a problem statement or lines of effort, may aid in summarizing relevant points about current and desired environments, the process creates a level of shared understanding among staff that is unlikely to be achieved by reviewing any algorithmic output. Likewise, the process of mission analysis, rather than any product or output, arms staff to assess risk, contextualize emergent events and information, and evaluate the effects of stimuli (enemy, friendly, weather, etc.) on the environment.
Problem framing processes often illuminate previously unperceived problems, relevant factors and relationships to a degree of complexity and nuance that a product, however thorough, is unlikely to replicate. In other words, staff learn by doing. As a result, planning staff can effectively adapt to emerging threats and dynamic events. As Williamson Murray, military historian and author of The Dynamics of Military Revolution, 1300–2050, suggests, “the most important attribute of military effectiveness is the ability to adapt to the actual conditions of combat and the conflict.”[iv] Adaptation requires not only response to dynamic battlefield conditions, but also reorientation of one’s understanding, assumptions and approach. This kind of reorientation is a weakness of many current AI models. While praising the speed and novel approaches of AI tools, Army of None author Paul Scharre warns that, even in relatively bounded environments, AI models “struggle to adapt to even modest change.” He concludes, “brittleness is likely to be a major detriment in real-world settings where the space of possible enemy actions is open-ended and the environment is not highly constrained.”[v]
Of equal importance, problem framing defines the problem facing staff. The criticality of tackling the right problem, and articulating that problem as clearly as possible, is evident to anyone with staff experience. An algorithm can surely come up with the “right answer to the wrong problem” as well as or better than a human, but can the algorithm frame problems to identify the essence of the military problem? Likewise, can it define a causal chain from the military operations it proposes to higher headquarters’ intent and, ultimately, to political objectives?
It appears more than likely that in the near future AI tools will be able to tell us how to maneuver from A to B as effectively as or better than a human planner. The question, however, is whether AI can tell us if we should go to B, why we should go to B, or how going to B nests within our operational and strategic framework. Indeed, explaining how a model arrives at its recommendations tends to stymie even AI researchers—a phenomenon known as the “black box problem.”[vi] It is often unclear how an AI model reached a conclusion in response to a given problem.[vii] Without understanding the rationale behind a particular course of action, commanders may find it difficult to evaluate and trust AI-generated recommendations.
An AI-Enabled Staff Model
If we believe that the value of staff work and planning follows strictly from outputs, then we should hand these functions to AI programs as soon as the quality of the AI output exceeds that of staff. However, if we accept that there is value in the process of meaning-making that occurs naturally during quality planning, then we must determine how best to leverage AI to support that process. At one end of the spectrum of possible futures, AI programs replace most of the staff, while a few handlers remain to manage data or maintain hardware. At the other end, we eschew integrating AI into the planning process, regardless of technological advancement or advantages adversaries derive from the technology. Of course, between these extremes lies an approach that harnesses AI to enable staff.
Humans create understanding through the application of judgment to solve complex problems. AI, on the other hand, should accomplish tasks associated with well-defined problems, particularly when those problems are tedious, time-consuming or involve more data than humans can reliably process. For example, AI-enabled staff may gain efficiency at little risk to the analytical process by relying on AI tools to construct graphical products, such as a Modified Combined Obstacle Overlay. Staff inundated with sensor data may gain efficiency and prepare better decision-support tools by relying on AI to identify relevant data from a large data set.[viii] Likewise, commands may improve their COA analysis by using AI to wargame each COA iteratively, illustrating a range of potential outcomes rather than a single point prediction.
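A minimal sketch of that last idea follows, assuming a deliberately simplified, hypothetical outcome model written only for illustration. The COA names, combat-power values and weather penalties are invented; the point is the pattern of running many randomized excursions of a wargame that staff have already framed.

```python
import random
import statistics

# Hypothetical, simplified model: each iteration varies enemy strength and
# weather, then scores the outcome of a single course of action (COA).
def simulate_once(coa_combat_power: float) -> float:
    enemy_strength = random.uniform(0.6, 1.2)          # fraction of templated enemy strength
    weather_penalty = random.choice([0.0, 0.1, 0.25])   # clear, rain, fog
    effectiveness = coa_combat_power * (1 - weather_penalty)
    return effectiveness - enemy_strength                # > 0 suggests a favorable outcome

def wargame(coa_name: str, coa_combat_power: float, iterations: int = 1000) -> None:
    outcomes = [simulate_once(coa_combat_power) for _ in range(iterations)]
    favorable = sum(1 for o in outcomes if o > 0) / iterations
    print(f"{coa_name}: favorable in {favorable:.0%} of runs, "
          f"mean margin {statistics.mean(outcomes):+.2f}, "
          f"worst case {min(outcomes):+.2f}")

if __name__ == "__main__":
    # Notional COAs with human-assigned relative combat power.
    wargame("COA 1 (envelopment)", coa_combat_power=1.1)
    wargame("COA 2 (frontal attack)", coa_combat_power=1.3)
```

The value of such a tool lies not in the numbers, which depend entirely on the human-supplied model and assumptions, but in the speed with which staff can explore excursions and sensitivities once the problem has been framed.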
Military planning and decisionmaking processes often become exercises in generating outputs to answer mission-specific questions. To some extent, AI tools should perform this function to enable faster decisionmaking. Particularly at higher echelons, however, asking the right question is at least as important as answering it. Because the output of any algorithm is unique to the problem it is tasked to solve, AI planning tools rely on humans to define the parameters of the problem. As AI researchers are fond of saying, AI models are only as good as their algorithms and the data they are fed.[ix]
In the not-so-distant future, commanders will confront the decision of which tasks to delegate to AI. Factors including individual leader background and education, program maturity, departmental and service policy, and individual and institutional trust in the technology will influence these decisions. For the foreseeable future, however, I suggest that humans maintain a firm grasp on the responsibility to define and analyze problems. Only in this way can we harness the benefits of the planning process to build shared understanding, promote adaptability and ensure that we are solving the correct problem.
Conclusion
Our use of AI should be bounded not by the state of the technology, but by the points at which the tool no longer facilitates analytically sound human decisionmaking. War is a fundamentally human endeavor, and our use of AI should augment, rather than replace, the analytical force of our human commanders and staff.
AI is undoubtedly a critical tool for achieving decision advantage and will serve an important role in operational planning in the years to come. However, AI must not supplant deliberate planning at the risk of eroding shared understanding and adaptability. Integrating AI into our planning and decisionmaking processes requires conscientious recognition of the value of sometimes messy and time-consuming processes, just as it requires gaining efficiency by relinquishing some tasks to an AI model.
★ ★ ★ ★
Author Biography
MAJ Matthew Tetreau is an Army Strategist serving on the Army Forces Command Staff. He holds an MA from the Georgetown University Walsh School of Foreign Service and is a LTG (Ret) James M. Dubik Writing Fellow.
Acknowledgements
The author thanks Dr. JP Clark at the U.S. Army War College for encouraging him to write this article.
Notes
- [i] Brandi Vincent, “How Marine Corps University is Experimenting with Generative AI in Simulations and Wargaming,” Defense Scoop, 28 June 2023.
- [ii] Vinicius G. Goecks and Nicholas Waytowich, “COA-GPT: Generative Pre-Trained Transformers for Accelerated Course of Action Development in Military Operations,” in 2024 International Conference on Military Communication and Information Systems (Koblenz, Germany: 2024), 1–10.
- [iii] Department of the Army, Field Manual (FM) 5-0, Planning and Orders Production (Washington, DC: U.S. Government Printing Office, November 2024), 4-53–4-56.
- [iv] Williamson Murray, Military Adaptation in War: With Fear of Change (New York: Cambridge University Press, 2011), 362.
- [v] Paul Scharre, “AI’s Inhuman Advantage,” War on the Rocks, 10 April 2023.
- [vi] Lou Blouin, “AI’s Mysterious ‘Black Box’ Problem, Explained,” University of Michigan-Dearborn News, 6 March 2023.
- [vii] Raman V. Yampolskiy, “Unexplainability and Incomprehensibility of Artificial Intelligence,” Journal of Artificial Intelligence and Consciousness 7, no. 2 (2019): 4.
- [viii] “Sensor Proliferation Is Changing How We Wage War,” Stratfor Worldview, 11 April 2019.
- [ix] Katharine Miller, “Data-Centric AI: AI Models Are Only as Good as Their Data Pipeline,” Stanford Institute for Human Centered Artificial Intelligence, 25 January 2022.
The views and opinions of our authors do not necessarily reflect those of the Association of the United States Army. An article selected for publication represents research by the author(s) which, in the opinion of the Association, will contribute to the discussion of a particular defense or national security issue. These articles should not be taken to represent the views of the Department of the Army, the Department of Defense, the United States government, the Association of the United States Army or its members.