Army Modernization and Spatial Computing
by MAJ Daniel Eerhart, USA
Land Warfare Paper 161, May 2024
In Brief
The Army Modernization Strategy does not adequately address spatial computing, despite its significant role in achieving modernization priorities, and research limitations are directly impacting major projects like the Integrated Visual Augmentation System.
Modern ubiquitous technologies, such as smartphones, self-driving cars and virtual meeting technology, rely upon spatial computing, yet civilian adoption and innovation outpace the Army’s willingness and ability to integrate it.
Each modernization and enabling priority relies upon spatial computing as a technology that achieves desired outcomes, such as object detection, spatial mapping and sensor fusion for autonomous or remotely controlled vehicles.
Introduction
In 2018, the Army submitted its Army Modernization Strategy (AMS) report to Congress, establishing the Army’s six materiel modernization priorities and envisioning the endstate for the future Army of 2035.1 To achieve the Army’s modernization goals, then Secretary of the Army Mark Esper announced the establishment of Army Futures Command (AFC) in Austin, Texas.2 The modernization approach integrates elements of doctrine, organization, training, materiel, leader development and education, personnel, facilities and policy (DOTMLPF-P) and aligns cross-functional teams (CFTs) within AFC to compress acquisition timelines from capability gap identification through operational experimentation.3 This paper contends that the Army is putting its modernization mission at risk by overlooking spatial computing research, the study of integrating the digital and physical worlds. The Army should therefore designate spatial computing research as its tenth priority research area and allocate additional resources to bridge this overlooked gap within the AMS.
As the Army looks to future conflict in 2035 and beyond, its modernization strategy outlines signature efforts for each CFT. However, as each CFT works toward its signature efforts, the AMS does not address the spatial computing on which those efforts depend. In multi-domain environments, where the lines between the digital and physical aspects of conflict become increasingly blurred, the Army must make a concerted effort to invest in technological research that brings those environments together. From eye-tracking technology in future synthetic training environments to object detection in autonomous future combat vehicles, spatial computing plays an essential role in military-technology integration; a lack of knowledge in this field has already brought some signature efforts, such as the Army’s augmented reality goggles, to a halt.
What is Spatial Computing?
Spatial computing is the “digitization of activities involving machines, people, objects and the environments in which they take place to enable and optimize actions and interactions.”4 The technology utilizes a variety of sensors, such as light detection and ranging (LIDAR), radar, or photogrammetry, to generate a digital three-dimensional (3D) model of a surrounding area.5 Spatial computing is ubiquitous among Internet of Things technologies and any technology that deals with physical space. Self-driving cars, smartphones and virtual meeting technologies are all built upon spatial computing platforms to enable users to operate seamlessly in their environments and have their behaviors translate into the digital world. But few within the Army understand the technology that drives spatial computing, or how essential it is for Army modernization. The four key components of spatial computing are computer vision, sensor fusion, spatial mapping and spatial user interface.
Computer Vision
Computer vision is the computer’s ability to process and analyze visual information from the sensors and cameras linked to the technology.6 In other words, it is simply a computer’s ability to see and interpret the physical world. It enables interpreting objects, faces, movement and proximity while developing a 3D environment model.7 The concept of computer vision began in 1960 when a PhD student at the Massachusetts Institute of Technology, Larry Roberts, envisioned extracting 3D geometric information from a two-dimensional (2D) perspective.8 In 1982, David Marr defined computer vision as “proceeding from a two-dimensional vision to a three-dimensional visual recognition.”9 Marr applied low-level image processing algorithms to 2D images to obtain a 2.5D sketch, then used high-level techniques to construct a 3D model.10 Today, the most ubiquitous computer vision application occurs in smartphones that use it to identify and authenticate users based on facial features.11 Devices can generate a 3D map of a user’s face and match it with stored data through infrared and visible light.12
Through cameras and sensors, today’s computers can capture images of their environment and enhance or manipulate those images to improve the quality of subsequent algorithmic processing.13 After image processing, identifying distinctive points in the images reduces their complexity and simplifies matching and recognition.14 The computer then matches the features identified in the captured images with those within its system, producing an estimate of the scene it is trying to interpret.15 Each sensor and camera continuously uses machine learning, deep learning and neural networks to analyze and process visual data.16 Each sensor operates as part of a whole model, and this is where sensor fusion comes in.
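To make that pipeline concrete, the following minimal sketch (in Python, using the open-source OpenCV library) detects distinctive keypoints in two images of the same scene and matches them. The file names are placeholders, and the snippet is illustrative rather than any Army system’s implementation.

```python
# A minimal sketch of feature detection and matching with OpenCV;
# "view_a.jpg" and "view_b.jpg" are hypothetical input images.
import cv2

# Load two views of the same scene in grayscale.
img_a = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and compute descriptors that summarize
# the local image patch around each keypoint.
orb = cv2.ORB_create(nfeatures=1000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors between the two images; good matches indicate the same
# physical point seen from two perspectives, which downstream algorithms
# use to estimate the geometry of the scene.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
print(f"Found {len(matches)} candidate correspondences")
```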
Three computer vision techniques are image classification, object detection and object tracking.17 Image classification involves “classifying pixel and vector groups in an image by applying specific rules.”18 While image classification is among the most well-known techniques, research advances are still required to solve problems related to deformation, lighting conditions and changing perspectives, especially in a dynamic and asymmetric Army operating environment.19 Object detection is another computer vision technique; it “allows us to determine the positions or movements of said objects in the scene and draws them with the bounding box.”20 Object tracking builds on detection: the computer first detects objects, then applies deep learning to monitor their movement patterns over time.21
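As an illustration of object detection, the sketch below runs a pretrained, general-purpose detector from the open-source torchvision library (version 0.13 or later is assumed for the weights argument). The image path is a placeholder, and the model is a generic research model, not one the Army fields.

```python
# A hedged sketch of object detection with a pretrained Faster R-CNN model.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Read an image as a uint8 tensor and scale it to floats in [0, 1].
img = convert_image_dtype(read_image("scene.jpg"), torch.float)

with torch.no_grad():
    # The model returns bounding boxes, class labels and confidence
    # scores for every object it detects in the image.
    predictions = model([img])[0]

for box, label, score in zip(predictions["boxes"],
                             predictions["labels"],
                             predictions["scores"]):
    if score > 0.8:  # keep only confident detections
        print(label.item(), round(score.item(), 2), box.tolist())
```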
Sensor Fusion
Sensor fusion is a process where software algorithms, such as Kalman filters or Bayesian networks, amalgamate data from multiple sensing modalities to mitigate detection uncertainties and enhance the capabilities of individual sensors operating independently.22 Autonomous vehicles are the best context for explaining sensor fusion. In autonomous vehicles, the accelerometer measures the vehicle’s acceleration, cameras provide a visual of surrounding areas, radars, LIDAR, and ultrasonic sensors measure the distance to objects, and GPS uses satellite signals to determine location and speed.23 Each sensor has an individual job, but integrating each sensor’s data occurs through sensor fusion.
Sensor fusion utilizes three primary strategies: high-level fusion (HLF), low-level fusion (LLF) and mid-level fusion (MLF).24 HLF involves independent object detection or tracking by each sensor before fusion, while LLF integrates raw data from each sensor at the lowest level of abstraction. MLF, positioned between HLF and LLF, fuses multi-target features from raw sensor data for recognition and classification.25 During LLF, data from each sensor are integrated at the raw data level, potentially improving the accuracy of object detection.26 However, while LLF offers the most precise object detection, the difficulty of implementing precise extrinsic calibration of sensors and of compensating for the 3D motion of the system within its environment indicates that significant additional research is needed before it can be widely implemented.27 Research investments in deep learning and reinforcement learning approaches that enhance sensor fusion algorithms and enable more reliable object detection are essential future developments.
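As a concrete, if simplified, illustration of sensor fusion, the sketch below uses a linear Kalman filter, one of the algorithm families mentioned above, to fuse noisy GPS position fixes with accelerometer readings along a single axis. The time step and noise covariances are illustrative assumptions, not values from any Army system.

```python
# A minimal one-dimensional Kalman filter fusing GPS position and acceleration.
import numpy as np

dt = 0.1                                   # update interval in seconds (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition for (position, velocity)
B = np.array([[0.5 * dt**2], [dt]])        # how acceleration enters the state
H = np.array([[1.0, 0.0]])                 # GPS measures position only
Q = np.eye(2) * 0.01                       # process noise covariance (assumed)
R = np.array([[4.0]])                      # GPS measurement noise covariance (assumed)

x = np.zeros((2, 1))                       # initial state estimate
P = np.eye(2)                              # initial estimate covariance

def fuse(accel, gps_pos):
    """One predict/update cycle: predict with the accelerometer,
    then correct with the GPS measurement."""
    global x, P
    # Predict: propagate the state using acceleration as a control input.
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    # Update: weigh the GPS measurement against the prediction.
    z = np.array([[gps_pos]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x.flatten()                     # fused position and velocity

print(fuse(accel=0.2, gps_pos=1.1))
```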
Spatial Mapping
Spatial mapping is sensing and interpreting the physical environment to develop a 3D representation.28 Through techniques such as stereo vision, time-of-flight, structured light and LIDAR, spatial mapping allows technologies such as augmented reality headsets to determine the placement of objects in the surroundings.29 Performing spatial mapping requires a device to have sensors that can capture the geometry and depth of its surroundings and a processor that can interpret the sensor data into a representation such as a point cloud or a mesh.30 Depending on the purpose of the spatial mapping, the device may also need substantial memory and storage capabilities to store and update 3D model data.31 A ubiquitous example of spatial mapping is the Apple iPhone’s Face ID technology, which uses camera sensor data and LIDAR to create a 3D representation of the user’s face.32 By combining depth information from the LIDAR with color and texture information obtained by the camera, the iPhone can verify the user’s identity (which is why holding up a photo in front of an iPhone will not unlock it).33
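The sketch below illustrates one common spatial mapping step: back-projecting a depth image into a 3D point cloud with the pinhole camera model. The camera intrinsics and the randomly generated depth map are placeholders standing in for real sensor output.

```python
# A minimal sketch of converting a depth image into a point cloud.
import numpy as np

fx, fy = 500.0, 500.0        # assumed focal lengths in pixels
cx, cy = 320.0, 240.0        # assumed principal point (image center)

# Stand-in depth map in meters; a real device would read this from a sensor.
depth = np.random.uniform(0.5, 4.0, size=(480, 640))

# Back-project every pixel (u, v) with depth z to a 3D point (x, y, z).
v, u = np.indices(depth.shape)
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
point_cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)

print(point_cloud.shape)     # one 3D point per pixel
```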
Spatial User Interface
Users must rely upon a spatial user interface when interacting with digital content in the physical world.34 Spatial user interfaces are typically associated with virtual or augmented reality or a combination of the two, recently dubbed “XR,” that is, “extended reality.” They allow users to interact with and manipulate their digital environment while being more immersive than a traditional digital device.35 The device can use algorithms and sensors to track the user’s movements, voice commands or physical gestures and to interpret the desired function.36 Haptic feedback equipment, such as gloves, adds a tactile layer, allowing users to feel the objects they interact with in the digital environment.37 Spatial user interfaces integrate eye-tracking technologies to monitor the user’s gaze and enable interaction in the virtual world.38 For example, in the Apple Vision Pro, high-speed cameras and LEDs project invisible light patterns onto the user’s eyes for intuitive input.39
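A simple way to picture how a spatial user interface turns gaze into input is ray casting: the sketch below intersects a gaze ray with hypothetical virtual objects and selects the nearest one it hits. The object layout and sizes are invented for illustration.

```python
# A minimal sketch of gaze-based selection via ray-sphere intersection.
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along a unit-direction ray to a sphere, or None if it misses."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b**2 - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

eye = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, 1.0])            # unit gaze direction from eye tracking
objects = {"menu": (np.array([0.1, 0.0, 2.0]), 0.3),   # hypothetical virtual objects
           "map":  (np.array([1.5, 0.0, 3.0]), 0.3)}

# Keep only the objects the gaze ray actually intersects, then pick the nearest.
hits = {name: t for name, (center, r) in objects.items()
        if (t := ray_hits_sphere(eye, gaze, center, r)) is not None}
selected = min(hits, key=hits.get) if hits else None
print("gaze selects:", selected)
```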
Computer vision, sensor fusion, spatial mapping and spatial user interface all work in conjunction to provide a comprehensive digital environment for user immersion. However, the ability of current technology to provide rapid processing and precise interpretations varies. As the Army concentrates on its modernization strategy, it is essential to address these technology gaps and to ensure that the integration of technology augments and improves Army organizations. Civilian companies have little incentive to adapt spatial mapping technologies to unique Army requirements, such as those of long-range precision fires. It therefore becomes incumbent upon the Army to invest in the research domains that will pay the greatest dividends toward its modernization goals.
Army Modernization Strategy: Modernization and Enabling Priorities
The AMS was published in 2019 to describe how the Total Army will transform into a multi-domain force by 2035.40 It outlined six materiel modernization priorities and two enabling priorities.41 The six materiel modernization priorities are:42
- Long-Range Precision Fires
- Next-Generation Combat Vehicles
- Future Vertical Lift Platforms
- Network Technologies
- Air and Missile Defense
- Soldier Lethality
The two enabling priorities are:43
- Assured Positioning, Navigation and Timing
- Synthetic Training Environment
In response to the materiel modernization priorities, AFC established the CFT framework, in which each priority has a CFT dedicated to achieving a multi-domain ready force in its domain.44 Each CFT established signature efforts to advance its domain in support of the Army strategy.45 Spatial computing advancements have the most significant impact within the Soldier Lethality and Synthetic Training Environment CFTs, where the pace of technological advancement outstrips the willingness to adopt it.
Beyond the materiel efforts, the AMS outlines relevant, transformative research priorities.46 Unfortunately, spatial computing is absent from the Army Priority Research Areas. While “Autonomy” may support objectives for autonomous vehicles, it does not adequately address the object detection and spatial mapping required to meet the requirements of the Air and Missile Defense CFT, nor the spatial mapping and sensor fusion required for augmented reality systems in the Soldier Lethality CFT. Each CFT relies upon an aspect of spatial computing, and establishing a tenth Army Priority Research Area that concentrates on spatial computing would rapidly advance each CFT toward its materiel objectives.
Long-Range Precision Fires
Long-range precision fires remains the Army’s top tactical modernization priority, and it relies heavily upon significant advances in spatial computing technologies.47 The AFC Long-Range Precision Fires CFT works closely with the Army Combat Capabilities Development Command to establish prototype systems that support the AMS.48 Precision Strike Missiles and Autonomous Multi-Domain Launchers are two modernization platforms that rely heavily on spatial computing.
The updated Precision Strike Missiles have performed flight tests of 499 kilometers, 199 kilometers farther than the current Army Tactical Missile System.49 Current Precision Strike Missiles have demonstrated accuracy against stationary targets, but the Army desires missiles that can strike moving targets.50 Updated Precision Strike Missiles that can strike moving targets are not currently an option due to the limitations of spatial computing technologies.51 For missiles to strike static targets, GPS coordinates or inertial navigation is required; with either in hand, the missile can strike the pre-designated target.52 However, for moving targets, radar or infrared seekers are required.53 The missiles will likely utilize active radar homing as a guidance system to achieve the Army’s desired anti-ship capabilities. During active radar homing, the missile carries a radar transceiver and can track targets autonomously.54 Active radar-homing missiles must be capable of performing spatial computing calculations, such as sensor fusion and spatial mapping. If the missile uses infrared, it will likely utilize imaging infrared (IIR), in which the infrared/ultraviolet sensor produces an infrared image.55 Charge-coupled devices in digital cameras function in much the same way. The missile will need the ability to perform rapid sensor fusion calculations to make in-flight adjustments and accurately strike the target. As jamming and missile defense technologies continue to advance, the need for missiles to perform rapid spatial mapping calculations also increases. To achieve the Army’s desired endstate of an anti-ship capable Precision Strike Missile, spatial computing advances must ensure that this capability fits within the relatively small package of a long-range missile.
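The AMS and public reporting do not specify a guidance law for the missile; one widely used approach for engaging moving targets is proportional navigation, sketched below in two dimensions with illustrative values. It is not a description of the Precision Strike Missile’s actual guidance.

```python
# A hedged, two-dimensional sketch of proportional navigation guidance.
import numpy as np

N = 3.0                                     # navigation constant (typically 3-5)

def pn_acceleration(missile_pos, missile_vel, target_pos, target_vel):
    """Commanded lateral acceleration from the line-of-sight rotation rate."""
    r = target_pos - missile_pos            # relative position
    v = target_vel - missile_vel            # relative velocity
    # Line-of-sight rotation rate (a scalar in 2D) and closing speed.
    los_rate = (r[0] * v[1] - r[1] * v[0]) / np.dot(r, r)
    closing_speed = -np.dot(r, v) / np.linalg.norm(r)
    # Command acceleration proportional to both, applied perpendicular to the velocity.
    return N * closing_speed * los_rate

accel = pn_acceleration(np.array([0.0, 0.0]), np.array([300.0, 0.0]),
                        np.array([10000.0, 2000.0]), np.array([-15.0, 0.0]))
print(f"commanded lateral acceleration: {accel:.2f} m/s^2")
```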
The Army’s autonomous multi-domain launcher (AMDL) provides the most obvious example of the necessity of spatial computing research.56 The vehicle is an unmanned launcher that performs autonomous waypoint navigation and allows for leader-follower autonomy or drive-by-wire operation.57 The vehicle fuses data from its single and triple situational awareness cameras, GPS, LIDAR sensors, autonomous steering module, blind spot radars and cameras and its position navigation unit.58 The AMDL will face all the same spatial computing problems that major civilian car companies face, with the added disadvantage that the vehicle needs the spatial computing power to navigate cross-country movements in areas that may not have been pre-mapped by the onboard computers. While civilian vehicles may be able to get by with 2D object detection, a heavy military vehicle carrying missiles needs 3D object detection and must be capable of rapidly mapping the environment to detect moving objects.
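Low-level fusion of the AMDL’s LIDAR and camera data would depend on the precise extrinsic calibration noted earlier. The sketch below shows why: LIDAR returns must be rotated and translated into the camera frame before they can be projected onto the image and fused with pixel data. The calibration matrices and example points are placeholder values, not AMDL parameters.

```python
# A minimal sketch of projecting LIDAR returns onto a camera image.
import numpy as np

# Assumed extrinsics: an axis-convention swap (LIDAR x-forward to camera z-forward)
# plus a small translation; in practice these come from a calibration procedure.
R = np.array([[0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0]])
t = np.array([0.1, -0.05, 0.0])

# Assumed pinhole camera intrinsics.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

lidar_points = np.array([[5.0, 1.0, 0.2],      # a few example returns (meters)
                         [12.0, -2.0, 0.5]])

# Transform each return into the camera frame, then project it onto the image.
cam_points = (R @ lidar_points.T).T + t
pixels_h = (K @ cam_points.T).T
pixels = pixels_h[:, :2] / pixels_h[:, 2:3]    # divide by depth
print(pixels)  # pixel coordinates where each LIDAR return lands in the image
```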
Next-Generation Combat Vehicles
While the Army’s Next-Generation Combat Vehicle efforts concentrate on four vehicle variations, the Robotic Combat Vehicle (RCV) program relies entirely upon advances in spatial computing to solve autonomous ground navigation problems.59 The RCV will function as a combat vehicle that can operate semi-autonomously or with operators controlling it remotely.60 The RCV program projects three variants: a light variant (RCV-L) which weighs under 10 tons, a medium variant (RCV-M) which weighs between 10 and 20 tons, and a heavy variant (RCV-H) which weighs between 20 and 30 tons.61 Each platform integrates onboard direct-fire weapon systems.62 Fixed-wing aircraft can transport all variants, and rotary-wing aircraft can transport the RCV-L.63
The RCV program faces issues similar to those of the AMDL, where a military vehicle with onboard weapons must accomplish tasks that even civilian vehicles are having difficulty accomplishing. Within the United States, autonomous vehicles are currently at level two of the Society of Automotive Engineers' (SAE) six levels, meaning that a vehicle can steer, accelerate and brake independently, but, for overall functioning, it still requires an engaged driver.64 The RCV proposes removing a driver and placing the vehicle in a dynamic tactical environment with asymmetric threats and unmapped terrain. The vehicle would need fast and precise spatial mapping technologies and comprehensive sensor fusion abilities. The additional integration of a remote operator means the vehicle must have a usable interface to communicate with the operator in the fully mapped environment. In essence, the vehicle itself would become part of the human-machine interaction.
Future Vertical Lift
AFC’s Future Vertical Lift (FVL) CFT concentrates on modernizing Army aviation as outlined in the AMS.65 The Future Tactical Unmanned Aircraft System (FTUAS) illustrates the need for spatial computing research to carry over into the aviation space.66 Like the autonomous and unmanned vehicles developed by the Long-Range Precision Fires and Next-Generation Combat Vehicles CFTs, the FTUAS relies upon the spatial computing concepts of computer vision, spatial mapping and sensor fusion. It relies upon sensor fusion of electro-optical/infra-red (EO/IR) sensors, infra-red/laser pointer/laser designator/laser range finders and vehicle cameras and sensors to operate in an autonomous or remotely controlled manner.67
Like the other autonomous vehicles in the Army’s development pipeline, the FTUAS must overcome several research and technology hurdles before distribution across the Army. First, the vehicle needs advanced and robust object detection and the ability to interpret its surroundings. It needs to operate in a variety of weather and light conditions if it is going to be helpful in supporting combat operations. It not only needs to navigate itself undamaged to a target area but also to identify potential enemy combatants and to employ the equipped payload. If the FTUAS is to operate in autonomous mode, it also needs to respond to unexpected encounters with precision. Dynamic combat environments require more than rule-based programming because every scenario cannot be predicted in advance; the FTUAS therefore needs onboard systems that can accurately model their environment and adjust as needed. Considering that the FTUAS will likely lack detailed pre-loaded maps of the areas in which it will operate, it is even more essential that the vehicle have accurate localization algorithms and be able to map its surroundings.
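One simple way to picture mapping on the fly, without pre-loaded terrain data, is an occupancy grid updated from range returns. The sketch below uses an invented grid size, resolution and sensor model and is illustrative only, not a representation of the FTUAS's software.

```python
# A minimal occupancy-grid mapping sketch built from simulated range returns.
import numpy as np

resolution = 0.5                      # meters per cell (assumed)
grid = np.zeros((200, 200))           # log-odds style occupancy, 100 m x 100 m
origin = np.array([100, 100])         # vehicle starts at the grid center

def integrate_return(bearing_rad, range_m):
    """Mark cells along the beam as free and the endpoint as occupied."""
    steps = int(range_m / resolution)
    for i in range(steps + 1):
        d = i * resolution
        cell = origin + np.array([np.cos(bearing_rad), np.sin(bearing_rad)]) * d / resolution
        r, c = int(cell[0]), int(cell[1])
        if 0 <= r < grid.shape[0] and 0 <= c < grid.shape[1]:
            grid[r, c] += 0.9 if i == steps else -0.4   # occupied vs. free evidence

# Integrate one full sweep of simulated LIDAR-style returns at 20 meters.
for bearing in np.linspace(0, 2 * np.pi, 360, endpoint=False):
    integrate_return(bearing, range_m=20.0)

print("cells marked occupied:", int((grid > 0).sum()))
```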
Network Technology
AFC’s Network CFT has four primary signature efforts: Unified Network, Command Post Common Environment, Joint Interoperability/Coalition Accessible and Command Post Mobility/Survivability.68 While spatial computing does not play an apparent direct role in the Network CFT, it likely plays a tertiary role relevant to the Tactical Network Testbed (TNT). A TNT allows for evaluating a military or tactical communication network’s performance, reliability and security.69 By examining the network’s bandwidth utilization, responsiveness, encryption capabilities and resilience, units can assess the network’s ability to effectively conduct data transfers, support communication and coordinate among military units.70 As the Army increasingly relies upon autonomous and semi-autonomous vehicles to support combat operations, the ability to evaluate data transfer rates, latency and performance of wireless communication is critical for mission success. Additionally, as advanced technologies utilize sensors and cameras to execute spatial mapping and computer vision protocols, protecting the security of the data generated is paramount, and the application of TNTs provides a controlled environment for testing and improving security measures.
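A minimal example of the kind of measurement a TNT supports is round-trip latency to a node on the network. The sketch below times repeated TCP connection attempts to a placeholder address and reports the median; it stands in for far more capable testbed tooling and is not drawn from any Army system.

```python
# A minimal sketch of a latency measurement against a hypothetical network node.
import socket
import statistics
import time

HOST, PORT = "192.0.2.10", 9000      # placeholder node under test
samples = []

for _ in range(20):
    start = time.perf_counter()
    try:
        # Opening and closing a TCP connection gives a simple round-trip measurement.
        with socket.create_connection((HOST, PORT), timeout=2.0):
            pass
        samples.append((time.perf_counter() - start) * 1000.0)
    except OSError:
        pass                          # dropped attempts count against reliability

if samples:
    print(f"median latency: {statistics.median(samples):.1f} ms "
          f"({len(samples)}/20 attempts succeeded)")
```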
Air and Missile Defense
The Air and Missile Defense CFT within AFC has been working on a variety of projects, including Army Integrated Air and Missile Defense (AIAMD) and its materiel solution, the Integrated Air and Missile Defense Battle Command System (IBCS); Maneuver Short-Range Air Defense (M-SHORAD); Integrated Fire Protection Capabilities (IFPC); and the Lower Tier Air and Missile Defense Sensor (LTAMDS).71 All of the air and missile defense systems developed by the AMD CFT integrate advanced object detection capabilities that have no easy civilian analog. The systems need unique sensor fusion capabilities to develop a single air defense picture for commanders and must simultaneously interpret sensors, weapons and mission command systems. The speed of sensor fusion must be commensurate with the speed of combat, a standard that current technology strains to meet.
Assured Positioning, Navigation and Timing
The Assured Positioning, Navigation and Timing/Space CFT within AFC works on three signature efforts, one of which is the Tactical Space Layer (TSL).72 The TSL integrates data from a variety of sensors through sensor fusion to establish real-time data on the location and movement of objects on the ground.73 Commanders can use the data provided by the TSL to inform tactical decisionmaking and develop a more complete battlefield awareness. The TSL will integrate with the Tactical Intelligence Targeting Access Node and access aerial and terrestrial sensors while simultaneously enabling assured access to national and commercial sensors.74 The integration of so many sensors simultaneously and rapidly presents questions about what level of sensor fusion will occur to provide the real-time battlefield picture. Due to the wide range of data input sources, low-level fusion at the raw data level would be challenging to incorporate; however, it would provide a more accurate picture of the battlefield.
Soldier Lethality
The Soldier Lethality CFT’s Integrated Visual Augmentation System (IVAS) signature effort relies entirely on the system’s ability to integrate all spatial computing components precisely.75 The IVAS is an augmented reality headset being developed for the Army by Microsoft to improve situational awareness by overlaying sensor information in a spatial user interface, ensuring that Soldiers have enhanced operational knowledge.76 The current version of the IVAS has experienced a spatial mapping issue called “dynamic occlusion limitations,” in which the device cannot replicate how physical objects should block the view of virtual objects.77 The spatial mapping limitation has severely disrupted IVAS production and distribution, and there is currently no solution to the problem.78 An IVAS performance report published in January 2023 cited numerous technical difficulties with the system.79 Successful implementation of augmented reality goggles for Soldiers in combat environments requires active employment below the threshold of combat training environments. Early adoption and adaptation into Troop Leading Procedures, enabling robust rehearsals and thorough orders production, will ensure that adoption in combat environments faces fewer hurdles.
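The dynamic occlusion problem can be stated as a per-pixel depth comparison: a virtual pixel should be drawn only where the virtual object is closer to the viewer than the real surface measured by spatial mapping. The sketch below illustrates that test with randomly generated stand-in depth maps; it is a conceptual illustration, not the IVAS rendering pipeline.

```python
# A minimal sketch of a per-pixel occlusion test between real and virtual depth.
import numpy as np

real_depth = np.random.uniform(1.0, 5.0, size=(480, 640))     # stand-in spatial mapping output
virtual_depth = np.full((480, 640), np.inf)                    # rendered virtual scene depth
virtual_depth[200:280, 300:400] = 3.0                          # a virtual object 3 m away

# Occlusion mask: true where the real world sits in front of the virtual object,
# so those virtual pixels must be hidden from the wearer's view.
occluded = real_depth < virtual_depth
visible_virtual = (virtual_depth < np.inf) & ~occluded
print("virtual pixels drawn:", int(visible_virtual.sum()))
```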
Synthetic Training Environment
The Synthetic Training Environment CFT within AFC concentrates on five primary signature efforts: Synthetic Training Environment Information System, Reconfigurable Virtual Collective Trainers, Squad Immersive Virtual Trainer, Squad/Soldier Virtual Trainers and One World Terrain.80 Establishing synthetic training environments relies heavily upon advances in spatial computing technologies because the digital world needs to provide a realistic representation of the physical world while establishing a usable spatial user interface. Regardless of the platform used for integrating a synthetic training environment, the system must be capable of providing Soldiers with the comprehensive scope required for realistic training. The Army’s One World Terrain program seeks to provide a fully accessible virtual representation of the physical Earth within the Army network.81 Achieving a complete virtual representation of the Earth requires the digital environment to expand upon existing 3D layers and to integrate accurate spatial mapping technologies, converting content from the physical world and ensuring it transfers into a high-resolution representation within the digital environment.
Conclusion
As the Army pursues a transformation toward multi-domain forces by 2035, as outlined in the AMS, it faces obstacles to implementation. Signature efforts within the AFC CFTs have faced insurmountable hurdles due to their inability to solve complex spatial computing problems, such as the disrupted development of the Integrated Visual Augmentation System. As the development and production of autonomous and semi-autonomous combat vehicles expand, there needs to be a concerted effort to understand and solve problems unique to the mission set. The physical and digital realms will continue to blur as the Army transitions toward its modernization goals, and the ability to address the complex spatial user interface problems inherent in developing synthetic training environments and augmented reality combat support tools will be decisive in the Army’s ability to adapt and compete in the global power competition. Therefore, the Army must expand its research priorities and concentrate on the aspects of spatial computing that will ensure success as it strives toward a modernized Army capable of multi-domain operations.
★ ★ ★ ★
Major Daniel Eerhart is an Army Psychological Operations Officer currently serving as a Cyber Policy, Law and Strategy Research Scientist at the Army Cyber Institute. He previously served as a graduate student at the University of California, Los Angeles (UCLA), where he earned a Master of Public Policy degree specializing in Technology and Cyber Policy. He holds professional and graduate level certifications in Cybersecurity and Data Analytics.
- Department of the Army, 2019 Army Modernization Strategy (Washington, DC: U.S. Government Printing Office).
- Joe Lacdan, “Establishment of Army Futures Command Marks a Culture Shift,” Army News Service, 27 August 2018.
- 2019 Army Modernization Strategy.
- PTC Reality Lab, “What is Spatial Computing,” PTC Industry Insights, n.d.
- Dweepna Garg, Bhavika Patel, Radhika Patel and Ritika Jani, “Spatial Computing: Next Big Thing of Physical and Digital World,” in ICDSMLA 2021, ed. Amit Kumar, Sabrina Senatore and Vinit Kumar Gunjan, Lecture Notes in Electrical Engineering (Singapore: Springer Nature, 2023), 211–219.
- Richard Szeliski, Computer Vision: Algorithms and Applications (New York City, NY: Springer Nature Publishing, 2022).
- Richard Szeliski, Computer Vision.
- Kemal Gökhan Nalbant and Şevval Uyanik, “Computer Vision in the Metaverse,” Journal of Metaverse 1, no. 1 (2021): 9–12.
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- Yong Hwan Lee, Woori Han, Youngseop Kim and Bonam Kim, “Facial Feature Extraction Using an Active Appearance Model on the iPhone,” in 2014 Eighth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (Birmingham, United Kingdom: IEEE, 2014), 196–201.
- Yong Hwan Lee et al., “Facial Feature Extraction.”
- J.R. Parker, Algorithms for Image Processing and Computer Vision (Hoboken, NJ: John Wiley & Sons, 2010).
- Roberto Brunelli, Template Matching Techniques in Computer Vision: Theory and Practice (Hoboken, NJ: John Wiley & Sons, 2009).
- Roberto Brunelli, Template Matching Techniques in Computer Vision.
- Roberto Brunelli, Template Matching Techniques in Computer Vision.
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- Nalbant and Uyanik, “Computer Vision in the Metaverse.”
- J.Z. Sasiadek, “Sensor Fusion,” Annual Reviews in Control 26, no. 2 (January 1, 2002): 203–228.
- De Jong Yeong, Gustavo Velasco-Hernandez, John Barry and Joseph Walsh, “Sensor and Sensor Fusion Technology in Autonomous Vehicles: A Review,” Sensors 21, no. 6 (January 2021): 2140.
- De Jong Yeong et al., “Sensor and Sensor Fusion Technology in Autonomous Vehicles.”
- De Jong Yeong et al., “Sensor and Sensor Fusion Technology in Autonomous Vehicles.”
- De Jong Yeong et al., “Sensor and Sensor Fusion Technology in Autonomous Vehicles.”
- Yong Zhou, Yanyan Dong, Fujin Hou and Jianqing Wu, “Review on Millimeter-Wave Radar and Camera Fusion Technology,” Sustainability 14, no. 9 (January 2022): 5114.
- Coursera Staff, “What Is Spatial Computing? Definition, Applications, and Careers,” Coursera, 11 December 2023.
- D. Scharstein and R. Szeliski, “High-Accuracy Stereo Depth Maps Using Structured Light,” in 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2003, Proceedings, CVPR 2003: Computer Vision and Pattern Recognition Conference (Madison, WI, USA: IEEE Computer Society, 2003), I-195–I-202.
- Haekyung Park and Dongkun Lee, “Comparison between Point Cloud and Mesh Models Using Images from an Unmanned Aerial Vehicle,” Measurement 138 (1 May 2019): 461–466.
- Scharstein and Szeliski, “High-Accuracy Stereo Depth Maps Using Structured Light.”
- Y. H. Lee, W. Han, Y. Kim and B. Kim, “Facial Feature Extraction Using an Active Appearance Model on the iPhone,” 2014 Eighth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing (Birmingham, UK: 2014), 196–201.
- Y. H. Lee et al., "Facial Feature Extraction."
- Arun Kulshreshth, Kevin Pfeil and Joseph J. LaViola, “Enhancing the Gaming Experience Using 3D Spatial User Interface Technologies,” IEEE Computer Graphics and Applications 37, no. 3 (May 2017): 16–23.
- Kulshreshth, Pfeil and LaViola, “Enhancing the Gaming Experience.”
- Khadidja Chaoui, Sabrina Bouzidi-Hassini and Yacine Bellik, “SUIL: A Modeling Language for Spatial User Interaction,” Journal of Reliable Intelligent Environments 9, no. 2 (1 June 2023): 161–181.
- Severin Engert, Konstantin Klamka, Andreas Peetz and Raimund Dachselt, “STRAIDE: A Research Platform for Shape-Changing Spatial Displays Based on Actuated Strings,” in Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New York: Association for Computing Machinery, 2022), 1–16.
- Tolegen Akhmetov, “Industrial Safety Using Augmented Reality and Artificial Intelligence,” Nazarbayev University, 10 November 2023.
- Stephen Warwick, “Apple Vision Pro Privacy: Optic ID, Eye Tracking, and EyeSight Protections Explained,” iMore, 18 January 2024.
- 2019 Army Modernization Strategy.
- 2019 Army Modernization Strategy.
- 2019 Army Modernization Strategy.
- 2019 Army Modernization Strategy.
- 2019 Army Modernization Strategy.
- 2019 Army Modernization Strategy.
- 2019 Army Modernization Strategy.
- Sydney J. Freedberg, Jr., “Army Will Field 100 Km Cannon, 500 Km Missiles: LRPF CFT,” Breaking Defense, 23 March 2018.
- Sydney J. Freedberg, Jr., “Army Will Field 100 Km Cannon, 500 Km Missiles: LRPF CFT.”
- Maureena Thompson, “Army Programs Promote Strength, Agility of Long Range Precision Fires,” Army News Service, 1 June 2022.
- Sydney J. Freedberg, Jr., “New Army Long-Range Missile Might Kill Ships, Too: LRPF,” Breaking Defense, 13 October 2016.
- Sydney J. Freedberg, Jr., “New Army Long-Range Missile Might Kill Ships, Too: LRPF.”
- Sydney J. Freedberg, Jr., “New Army Long-Range Missile Might Kill Ships, Too: LRPF.”
- Mehmet Cem Demirci, “How Do Missiles Locate Their Target?” Naval Post, 9 April 2021.
- Mehmet Cem Demirci, “How Do Missiles Locate Their Target?”
- Mehmet Cem Demirci, “How Do Missiles Locate Their Target?”
- Matthew Beinart, “Army Demonstrates Autonomous Multi-Domain Launcher Concept,” Defense Daily, 17 June 2021.
- “Soldier Touchpoints Guide Successful Autonomous Launcher Demo,” Army News Service, 19 January 2024.
- “Soldier Touchpoints Guide Successful Autonomous Launcher Demo.”
- Latashia Bates, Army Readiness and Modernization in 2022, Association of the United States Army, Land Warfare Paper 146, 15 June 2022.
- Congressional Research Service, The Army’s Robotic Combat Vehicle (RCV) Program, Congressional Research Service Reports, 14 July 2021.
- Congressional Research Service, The Army’s Robotic Combat Vehicle (RCV) Program.
- Congressional Research Service, The Army’s Robotic Combat Vehicle (RCV) Program.
- Congressional Research Service, The Army’s Robotic Combat Vehicle (RCV) Program.
- SAE International, “SAE Levels of Driving Automation Refined for Clarity and International Audience,” SAE International Blog, 3 May 2021.
- Maureena Thompson, “Army’s Future Vertical Lift Eyes Future Skies,” Army News Service, 15 August 2022.
- Aviation Program Executive Office, “Future Tactical Unmanned Aircraft System (FTUAS),” Program Executive Office, Aviation, 3 September 2020.
- Aviation Program Executive Office, “Future Tactical Unmanned Aircraft System (FTUAS).”
- 2019 Army Modernization Strategy.
- Alex Bordetsky, “Testbed for Tactical Networking and Collaboration,” International C2 Journal 4, no. 3 (2010).
- Scott Gourley, “SOF Tactical Network Testbed Highlighted at SOFIC,” Defense Media Network, 9 June 2013.
- Maureena Thompson, “AFC Cross-Functional Team Tackles Modernization of Air and Missile Defense,” Army News Service, 13 July 2022.
- Jaspreet Gill, “Army Approves Tactical Space Layer A-CDD,” InsideDefense, 19 April 2021.
- Jaspreet Gill, “Army Approves Tactical Space Layer A-CDD.”
- Jaspreet Gill, “Army Approves Tactical Space Layer A-CDD.”
- 2019 Army Modernization Strategy.
- Fredrick Shear, “Army Accepts Prototypes of the Most Advanced Version of IVAS,” Army News Service, 1 August 2023.
- Ashley Roque, “Army’s Pricey IVAS Goggles Meet a Training Obstacle: Doors,” Breaking Defense, 3 August 2023.
- Joaquim Jorge, Rafael Kuffner Dos Anjos and Ricardo Silva, “Dynamic Occlusion Handling for Real-Time AR Applications,” in Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry, VRCAI ‘19 (New York: Association for Computing Machinery, 2019), 1–9.
- Nickolas Guertin, FY 2022 Annual Report from the Director, Operational Test & Evaluation, January 2023.
- 2019 Army Modernization Strategy.
- Kristin Cody, “Maxar’s Vricon Awarded Phase 2 of U.S. Army’s One World Terrain Contract,” Business Wire, 24 February 2021.
The views and opinions of our authors do not necessarily reflect those of the Association of the United States Army. An article selected for publication represents research by the author(s) which, in the opinion of the Association, will contribute to the discussion of a particular defense or national security issue. These articles should not be taken to represent the views of the Department of the Army, the Department of Defense, the United States government, the Association of the United States Army or its members.