
How AI Hit 15 Targets in One Hour During an Army Test, and What It Means for the Future of War

Alex Carter · 11 min read
U.S. Army soldiers testing the Advanced Targeting and Lethality Aided System during Project Convergence at Fort Irwin
Alex Carter

Modern Warfare & Defense Technology Contributor

Alex Carter writes about modern warfare, emerging military technology, and how doctrine adapts to new tools. His work focuses on what changes in practice -- command, control, targeting, and risk -- when systems like drones and autonomous platforms become routine.

During a U.S. Army test in the California desert, an artificial intelligence system identified, prioritized, and generated firing solutions for 15 separate targets in a single hour. Fifteen targets. One hour. The same process, performed by human intelligence analysts working through the traditional kill chain, would have taken somewhere between 12 and 24 hours, and required a staff of dozens.

That gap, between what a machine can process and what a human team can manage, is the single most important number in modern military planning. Not because the AI was perfect. Not because it eliminated the need for human judgment. But because it demonstrated something military leaders have suspected and adversaries have feared: the side that can compress the kill chain from hours to minutes will have an overwhelming advantage in the next major conflict, and artificial intelligence is the only technology capable of achieving that compression at scale.

What Happened at Project Convergence

The Army's Project Convergence series, which ran from 2020 through 2024, was the most ambitious test of AI-enabled warfare the Pentagon had ever conducted. The experiments took place primarily at Fort Irwin's National Training Center and at other test ranges across the southwestern United States, bringing together soldiers, prototype technologies, and AI systems in scenarios designed to simulate high-intensity combat against a peer adversary.

The headline capability tested was what the military calls "sensor-to-shooter" integration: the ability to detect a target with one sensor, process that detection through an AI system, match it to an available weapon, and generate a firing solution, all within minutes rather than hours. The AI system at the center of this process was built around the Army's Advanced Targeting and Lethality Aided System (ATLAS) and integrated with the broader Joint All-Domain Command and Control (JADC2) architecture designed to connect sensors and shooters across all military branches.

U.S. Marine Corps officers reviewing tactical information on multiple screens inside a Tactical Air Control Center during operations
Officers review tactical data inside a Tactical Air Control Center during a command and control exercise. AI systems are designed to process the same information that fills these screens -- radar tracks, signals intelligence, imagery -- in seconds rather than the hours human analysts require.

In the 15-target demonstration, the AI system ingested data from multiple sensor types simultaneously: satellite imagery, drone video feeds, signals intelligence intercepts, and ground-based radar tracks. It fused this data into a single coherent picture, identified individual targets within that picture, classified each one by type and threat priority, cross-referenced the target locations against available weapons systems, calculated engagement solutions, and presented completed firing packages to human operators for approval, all within minutes per target.
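
In software terms, that flow can be pictured as a pipeline: raw detections come in from every sensor, get fused into a single track list, and come out the other end as prioritized firing packages waiting for a human signature. The sketch below is purely illustrative -- the data structures, labels, and pairing rules are invented for this article and are not drawn from ATLAS or any fielded system.

```python
# Illustrative sketch only: hypothetical structures for a sensor-to-shooter pipeline.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    source: str          # "satellite", "drone_video", "sigint", "radar"
    lat: float
    lon: float
    label: str           # e.g. "mobile_launcher"
    confidence: float    # 0.0 - 1.0

@dataclass
class FiringPackage:
    target_label: str
    lat: float
    lon: float
    threat_priority: int     # 1 = highest
    assigned_weapon: str
    approved: bool = False   # stays False until a human operator signs off

def fuse(detections: List[Detection]) -> List[Detection]:
    """Merge detections from different sensors that refer to the same object
    (here, naively, anything reported within roughly 100 m of an existing track)."""
    fused: List[Detection] = []
    for d in sorted(detections, key=lambda d: -d.confidence):
        if not any(abs(d.lat - f.lat) < 0.001 and abs(d.lon - f.lon) < 0.001 for f in fused):
            fused.append(d)
    return fused

def build_packages(detections: List[Detection]) -> List[FiringPackage]:
    """Classify, prioritize, and pair each fused target with a weapon,
    producing packages that still require human approval."""
    priorities = {"mobile_launcher": 1, "air_defense": 2, "supply_truck": 3}
    packages = []
    for d in fuse(detections):
        packages.append(FiringPackage(
            target_label=d.label, lat=d.lat, lon=d.lon,
            threat_priority=priorities.get(d.label, 5),
            assigned_weapon="standoff_missile" if d.label == "air_defense" else "artillery",
        ))
    return sorted(packages, key=lambda p: p.threat_priority)

# Example: two sensors report the same launcher; one fused package awaits approval.
hits = [Detection("satellite", 35.262, -116.685, "mobile_launcher", 0.7),
        Detection("drone_video", 35.2622, -116.6848, "mobile_launcher", 0.9)]
print(build_packages(hits))
```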

The human operators retained the authority to approve or reject each engagement. But the analytical work that preceded that decision, the hours of intelligence fusion, target development, and weapons-target pairing that traditionally require entire staff sections, was compressed into a process measured in minutes. The AI didn't replace the humans. It replaced the bottleneck.

How the Kill Chain Compression Works

To understand why AI changes warfare so fundamentally, you need to understand how the traditional kill chain functions, and where it breaks down.

The conventional targeting process follows a sequence known as F2T2EA: Find, Fix, Track, Target, Engage, Assess. Each step requires different intelligence disciplines, different analysts, and different communication channels. A satellite detects something that might be a mobile missile launcher. An imagery analyst confirms what it is. A signals intelligence team cross-references the detection with intercepted communications. A targeting officer assigns it to a weapon system. A fire direction center calculates the firing solution. A commander authorizes the strike. A battle damage assessment team evaluates the results. The entire process can take 24 to 72 hours, during which the target may have moved, rendering the effort worthless.
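
The arithmetic behind that delay is simple: because each step waits on the one before it, the latencies add. The stage times below are notional numbers invented only to show how a sequential chain accumulates into a day or more.

```python
# Notional, invented stage latencies (in hours) for a traditional F2T2EA cycle.
stage_hours = {
    "find (initial detection)":        2,
    "fix (imagery confirmation)":      6,
    "track (SIGINT cross-reference)":  8,
    "target (weapons pairing)":        4,
    "engage (authorization + firing)": 3,
    "assess (battle damage review)":   5,
}
total = sum(stage_hours.values())
print(f"Sequential total: {total} hours")   # 28 hours -- inside the 24-to-72-hour range cited above
```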

Soldiers and engineers testing the Advanced Targeting and Lethality Aided System during Project Convergence 2022 at Fort Irwin
Engineers and soldiers work together during Project Convergence 2022 to test the Advanced Targeting and Lethality Aided System. ATLAS integrates AI-driven target recognition with existing Army fire control systems, compressing the kill chain from hours to minutes.

AI compresses this sequence by performing multiple steps simultaneously. Instead of sequential human analysis -- one analyst looks at the imagery, passes it to another who checks the signals intelligence, who passes it to a third who does the weapons pairing -- an AI system ingests all available data at once, runs classification algorithms against the combined dataset, and produces a targeting recommendation in seconds. The find, fix, track, and target steps that once took a human staff section hours happen in the time it takes the system to complete a single processing pass.
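
Structurally, the change is from a relay race to parallel processing. The toy example below (placeholder feed names, simulated timings) shows the shape of it: every feed is analyzed at the same time, so total latency tracks the slowest single feed rather than the sum of all of them.

```python
# Rough structural sketch: sensor feeds analyzed concurrently instead of handed
# from analyst to analyst. Feed names and processing stubs are placeholders.
import asyncio

async def analyze_feed(name: str) -> list:
    await asyncio.sleep(0.1)          # stand-in for running a classifier over one stream
    return [f"candidate target from {name}"]

async def compressed_kill_chain() -> list:
    feeds = ["satellite_imagery", "drone_video", "sigint", "ground_radar"]
    # All feeds processed at once; elapsed time is roughly the slowest feed, not the sum.
    results = await asyncio.gather(*(analyze_feed(f) for f in feeds))
    return [candidate for feed_result in results for candidate in feed_result]

print(asyncio.run(compressed_kill_chain()))
```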

The Army's Maven Smart System, which evolved from the controversial Project Maven that originally used AI to analyze drone footage, now integrates machine learning algorithms that can process multiple intelligence streams simultaneously. The system identifies patterns that human analysts might miss -- the correlation between a particular signals signature and a specific type of vehicle movement, for instance -- and flags high-priority targets for human review. During Project Convergence exercises, Maven-derived systems demonstrated the ability to detect and classify targets from raw sensor data with accuracy rates exceeding 90 percent, and to do so thousands of times faster than human analysts working the same data.
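
That kind of cross-source correlation can be pictured as a simple confidence boost: an imagery detection becomes a high-priority flag when a matching signals intercept is geolocated nearby. The code below is a hypothetical illustration -- the thresholds, emitter names, and weights are invented, not Maven's.

```python
# Hypothetical cross-source correlation: a nearby matching emitter raises
# confidence in an imagery detection. All values are invented for illustration.
from math import hypot

def correlate(imagery_hits, sigint_hits, max_km=2.0):
    flagged = []
    for img in imagery_hits:
        for sig in sigint_hits:
            close = hypot(img["x_km"] - sig["x_km"], img["y_km"] - sig["y_km"]) <= max_km
            if close and sig["emitter"] == "launcher_radio" and img["label"] == "mobile_launcher":
                img = {**img, "confidence": min(1.0, img["confidence"] + 0.3)}
        if img["confidence"] >= 0.8:
            flagged.append(img)          # queued for human review
    return flagged

imagery = [{"label": "mobile_launcher", "x_km": 10.2, "y_km": 4.1, "confidence": 0.6}]
sigint  = [{"emitter": "launcher_radio", "x_km": 10.9, "y_km": 4.4}]
print(correlate(imagery, sigint))        # confidence rises to 0.9, so the detection is flagged
```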

JADC2: The Network That Makes AI Targeting Possible

Kill chain compression only matters if the targeting data can reach the right weapon system in time. That's the purpose of JADC2, Joint All-Domain Command and Control, the Pentagon's overarching architecture for connecting every sensor, every command node, and every weapons platform in the American military into a single networked system.

JADC2 is not a single program. It's a concept that each service is implementing through its own systems: the Army's Project Convergence, the Air Force's Advanced Battle Management System (ABMS), and the Navy's Project Overmatch. The goal is interoperability: an Army radar detecting a target that an Air Force fighter engages using targeting data processed by a Navy AI system. AI makes this possible by translating between different data formats, prioritizing across different targeting queues, and optimizing weapons-target pairings across service boundaries that have historically been bureaucratic dead zones. The F-35's sensor fusion capabilities make it a natural centerpiece of the resulting architecture.
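
The translation problem is easier to see in code than in prose: each service's message arrives in a different shape, and the network's job is to normalize them all into one common target record that any shooter can act on. The formats below are invented for illustration and bear no relation to the actual message standards.

```python
# Invented message formats, normalized into a single shared target record.
def normalize(message: dict) -> dict:
    """Map a service-specific message into one common target record."""
    if message["source"] == "army_radar":
        return {"lat": message["grid_lat"], "lon": message["grid_lon"],
                "kind": message["track_type"], "service": "army"}
    if message["source"] == "navy_ai":
        lat, lon = message["position"]
        return {"lat": lat, "lon": lon, "kind": message["classification"], "service": "navy"}
    if message["source"] == "air_force_abms":
        return {"lat": message["latitude"], "lon": message["longitude"],
                "kind": message["id_category"], "service": "air_force"}
    raise ValueError(f"unknown source: {message['source']}")

incoming = [
    {"source": "army_radar", "grid_lat": 35.26, "grid_lon": -116.68, "track_type": "rotary_wing"},
    {"source": "navy_ai", "position": (35.30, -116.70), "classification": "surface_radar"},
]
common_picture = [normalize(m) for m in incoming]   # one shared queue any service can act on
print(common_picture)
```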

F-35 Lightning II aircraft from the U.S. Air Force, U.S. Navy, and Republic of Korea Air Force flying in formation over the aircraft carrier USS Carl Vinson
F-35 Lightning IIs from the U.S. Air Force, U.S. Navy, and Republic of Korea Air Force fly in formation over the USS Carl Vinson during Freedom Shield 2025. The F-35's sensor fusion capabilities make it a key node in AI-enabled targeting networks, sharing data across platforms and services in real time.

During the Army's testing, AI systems demonstrated the ability to match a detected target to the optimal available weapon automatically, routing a time-sensitive target to a nearby artillery battery rather than a distant aircraft, or selecting a precision munition over an area weapon based on the proximity of civilian structures identified in the imagery data. These decisions, which traditionally required a targeting board of officers discussing options around a map, were made by algorithms in seconds.
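
A weapons-target pairing decision of that kind reduces to a scoring problem: rule out weapons that are too slow for a fleeting target or too indiscriminate near civilians, then prefer whatever can respond fastest from closest by. The sketch below uses made-up weights and weapon attributes to show the logic, not any fielded system's rules.

```python
# Illustrative weapons-target pairing. Weights, names, and the civilian-proximity
# rule are assumptions made up for this example, not doctrine.
def score_pairing(weapon, target):
    if target["time_sensitive"] and weapon["response_minutes"] > 10:
        return 0.0                                   # too slow to catch a fleeting target
    if target["civilians_within_m"] < 200 and not weapon["precision"]:
        return 0.0                                   # area weapon ruled out near civilians
    # Otherwise prefer faster response and shorter range to target.
    return 1.0 / (weapon["response_minutes"] + weapon["range_to_target_km"] / 10)

def best_weapon(weapons, target):
    return max(weapons, key=lambda w: score_pairing(w, target))

weapons = [
    {"name": "artillery_battery", "response_minutes": 4,  "range_to_target_km": 18,  "precision": True},
    {"name": "strike_aircraft",   "response_minutes": 35, "range_to_target_km": 220, "precision": True},
]
target = {"time_sensitive": True, "civilians_within_m": 600}
print(best_weapon(weapons, target)["name"])   # artillery_battery: nearby and fast enough
```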

Ukraine: The Real-World Accelerant

While the Pentagon was conducting carefully controlled experiments in the California desert, the war in Ukraine was providing a messier but equally important laboratory for military AI. Ukrainian forces, outmanned and outgunned in conventional terms, turned to AI-enabled systems out of necessity, and the results accelerated global military AI development by years.

The most visible application has been AI-guided first-person-view drones. Ukrainian developers built machine learning systems that help FPV drones identify and track targets autonomously during the final seconds of their attack run, the phase where electronic jamming is most likely to sever the operator's control link. If the drone loses its video feed in the last moments before impact, the AI takes over, guiding the weapon to its target using onboard image recognition. The success rate of these AI-assisted attacks has reportedly doubled compared to purely manual FPV drone operations.
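
The fallback logic is conceptually simple, even if the image tracking behind it is not: obey the operator while the link holds, hand off to the onboard tracker when it drops, and break off if there is no lock. The snippet below is a simplified, hypothetical rendering of that decision, not any manufacturer's guidance code.

```python
# Simplified, hypothetical terminal-guidance fallback for an FPV drone.
def terminal_guidance_step(link_alive, operator_cmd, onboard_track):
    """One guidance tick: human command while the link holds, tracker fallback if it drops."""
    if link_alive:
        return operator_cmd                       # operator steers as normal
    if onboard_track is not None:
        # Steer to re-center the tracked object in the camera frame.
        return "yaw_right" if onboard_track["pixel_offset_x"] > 0 else "yaw_left"
    return "abort"                                # no link and no lock: break off the run

print(terminal_guidance_step(False, "hold_course", {"pixel_offset_x": -14}))   # yaw_left
```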

Ukraine also deployed AI for intelligence processing at scale. With thousands of drone sorties generating hundreds of hours of video footage daily, Ukrainian intelligence units couldn't manually review even a fraction of the available imagery. AI systems trained to identify Russian military equipment -- specific vehicle types, defensive positions, logistics nodes -- automated the initial screening, flagging footage that contained high-value targets for human review. What would have required hundreds of imagery analysts was accomplished by a handful of operators supervising machine learning algorithms.
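
That screening step amounts to a filter in front of the human analyst: run a detector over every frame, and pass along only the clips that contain high-value classes above a confidence threshold. The stub below sketches the idea with invented class names and thresholds.

```python
# Sketch of automated footage screening; the detector is a stub and the
# class list and threshold are invented for illustration.
HIGH_VALUE = {"air_defense", "command_post", "artillery", "logistics_node"}

def detect(frame):
    # Stand-in for a trained object detector; returns (label, confidence) pairs.
    return frame.get("detections", [])

def screen(footage, min_conf=0.7):
    flagged = []
    for clip in footage:
        hits = [d for frame in clip["frames"] for d in detect(frame)
                if d[0] in HIGH_VALUE and d[1] >= min_conf]
        if hits:
            flagged.append({"clip_id": clip["id"], "hits": hits})
    return flagged   # only these clips reach a human analyst

clips = [{"id": "sortie_0412", "frames": [{"detections": [("artillery", 0.83)]}]},
         {"id": "sortie_0413", "frames": [{"detections": [("civilian_car", 0.91)]}]}]
print(screen(clips))   # only sortie_0412 is flagged for review
```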

The defense technology companies that have emerged from the Ukraine experience, alongside established American firms like Palantir, Anduril, and Shield AI, are now feeding lessons learned directly into Pentagon programs. Palantir's AI Platform, which was used by Ukrainian forces for battlefield management, has been adopted by the U.S. Army for JADC2 integration. Anduril's Lattice operating system, which connects autonomous drones with ground-based command systems, won a significant Army contract based partly on its demonstrated performance with autonomous targeting in field conditions.

The Ethics Problem That Won't Go Away

The Pentagon's 2012 directive on autonomous weapons (DoD Directive 3000.09, updated in 2023) establishes that "autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." The operative words, "appropriate levels", leave enormous room for interpretation, and that ambiguity is intentional.

Sailors conducting maintenance on an F-35C Lightning II aboard the aircraft carrier USS George Washington in the Pacific Ocean
Sailors maintain an F-35C aboard the USS George Washington. The F-35's advanced avionics generate massive volumes of data that AI systems are designed to process, from sensor readings to maintenance diagnostics, far faster than human technicians working alone.

In practice, "human in the loop" can mean very different things depending on the tempo of operations. When an AI system presents a completed firing solution and a human operator has 30 seconds to approve or reject it before the target moves, how meaningful is that human oversight? The operator doesn't have time to independently verify the AI's analysis. They can't personally review the raw intelligence. They're effectively rubber-stamping a machine's recommendation under time pressure, a dynamic that computer scientists call "automation bias," the documented human tendency to trust machine output over one's own judgment.

Critics, including prominent AI researchers and some retired military officers, argue that kill chain compression is gradually eroding meaningful human control. As the time between detection and engagement shrinks from hours to minutes to seconds, the human's role inevitably shifts from decision-maker to supervisor, and eventually to spectator. The fear is not that militaries will deliberately remove humans from targeting decisions, but that the operational tempo enabled by AI will make human involvement a bottleneck that commanders choose to bypass.

Defenders of military AI counter that the alternative is worse. In a conflict against China or Russia, the adversary will use AI to compress their own kill chains. If American forces maintain human-speed decision cycles while the enemy operates at machine speed, the result isn't ethical warfare, it's defeat. The argument is pragmatic: AI-enabled targeting isn't a choice the military is making, it's a requirement the threat environment is imposing.

What Comes Next

The Army's 15-targets-in-one-hour demonstration was a proof of concept, not an operational capability. Turning that experiment into a deployable system that works reliably in the chaos and ambiguity of real combat, where sensor data is incomplete, targets are mixed with civilians, communications are degraded, and the AI must function under conditions it wasn't specifically trained for, remains an enormous engineering and institutional challenge.

But the trajectory is clear. The Army's exploration of systems capable of processing up to 1,000 targeting objectives per hour represents the next increment, not a thousand lethal strikes, but a thousand individual detections, classifications, and threat assessments performed by AI, with humans approving engagements at the speed the machine enables. The kill chain that once measured days is compressing toward minutes, and the only question is how fast militaries around the world will push that compression, and who will set the boundaries.

The 15 targets were a demonstration. The implications are a transformation. What the Army proved in the California desert is that artificial intelligence doesn't just accelerate existing military processes, it fundamentally changes the calculus of combat by making the most time-consuming element of warfare, the human analysis of information, faster by orders of magnitude. Whether that transformation makes warfare more precise and controlled, or more rapid and uncontrollable, depends entirely on the decisions humans make about how much decision-making they're willing to hand over to the machines.
