How Ukraine Became the World’s Most Recorded War—and a Laboratory for AI-Driven Combat
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): International Policy Digest
Publication Date (yyyy/mm/dd): 2025/12/04
Samuel Bendett is a leading analyst of Russian military technology, with a focus on drones, robotic and autonomous systems, and artificial intelligence. He serves as an adviser in CNA’s Russia Studies Program, is an adjunct senior fellow with the Technology and National Security Program at the Center for a New American Security, and is a nonresident senior associate with the Eurasia Program at the Center for Strategic and International Studies. His work is frequently cited in defense reporting and policy discussions. Bendett holds an MA in Law and Diplomacy from the Fletcher School at Tufts University and a BA in Politics and English from Brandeis University.
In this interview, he outlines how the war in Ukraine is speeding up the development and use of unmanned systems and battlefield AI. Russia introduced fibre-optic-guided UAVs in 2024, and Ukraine quickly adopted the technology to counter jamming and strike at longer ranges. Interceptor drones, signal-relay UAVs, and uncrewed ground vehicles have advanced rapidly, even as most systems still rely on human operators to function amid heavy electronic interference. Ukraine has amassed an estimated two million hours of frontline footage, which feeds training pipelines for target recognition and tactical analysis. Early forms of AI-enabled autonomy are in use, Bendett notes, but they adapt poorly to fast-changing conditions. He also points to the global supply chains that help Russia work around sanctions, to shifting procurement patterns, and to the ethical risks that arise as targeting decisions move closer to machine control in conflicts where civilians and combatants increasingly intermingle.
Scott Douglas Jacobsen: One of the clearest shifts in the Russian-Ukrainian war has been the rapid evolution of both the software and the hardware that drive unmanned and drone systems. From your vantage point watching these changes unfold, what stands out to you as the most significant advances on each front—technological and mechanical?
Samuel Bendett: In the war in Ukraine, technology has evolved to fit the current battlefield. Both sides are trying to break out of essentially positional fighting across much of the front, identify incremental advances, and interdict the opponent’s gains. Both are also working to deny the other the ability to conduct intelligence, surveillance, and reconnaissance.
This has driven large-scale use of fibre-optic-controlled UAVs—first fielded by Russia in 2024 and then adopted by Ukraine—which are resistant to electronic jamming and now strike far into the rear. It also spurred the rapid evolution of interceptor UAVs that hunt fixed-wing drones and heavy combat UAVs, along with continued scaling of signal-repeater UAVs and other repeaters to extend control range. Meanwhile, uncrewed ground vehicles have taken on logistics, resupply, and casualty-evacuation roles once handled by foot soldiers or conventional vehicles.
On the software side, Ukraine has publicly claimed programs that quickly sift through battlefield data—drone and satellite imagery, ground footage, and open-source posts—and feed it into decision-making. There have also been reports of limited, small-scale uses of AI-enabled UAVs by both sides, with drones able to execute parts of missions based on preloaded data or onboard processing, while maintaining human oversight over engagement decisions.
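The details of those programs have not been made public. Purely as an illustration, the “sifting and prioritizing” step often reduces to scoring incoming items and surfacing the highest-priority ones first; the short Python sketch below shows that pattern with a priority queue, and every name, source type, and weight in it is hypothetical rather than drawn from any fielded Ukrainian or Russian system:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class FeedItem:
    """One incoming piece of battlefield data, ranked for analyst review."""
    priority: float
    source: str = field(compare=False)  # e.g. "uav_video", "satellite", "osint"
    detector_confidence: float = field(compare=False)

def score(source: str, detector_confidence: float) -> float:
    # Hypothetical weighting: trust direct sensor feeds over open-source posts.
    weights = {"uav_video": 1.0, "satellite": 0.8, "osint": 0.5}
    # heapq is a min-heap, so negate the score to pop the highest priority first.
    return -(weights.get(source, 0.3) * detector_confidence)

queue: list[FeedItem] = []
for src, conf in [("uav_video", 0.91), ("osint", 0.97), ("satellite", 0.74)]:
    heapq.heappush(queue, FeedItem(score(src, conf), src, conf))

while queue:
    item = heapq.heappop(queue)
    print(f"review {item.source} (detector confidence {item.detector_confidence:.2f})")
```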
Jacobsen: With such an enormous volume of data now available—not just online, but generated directly by the conflict and fed into military systems—how extensively are artificial intelligence tools being used to sort, filter, and prioritize that information, and to guide autonomous navigation based on those priorities? You’ve mentioned reports of limited use, but across a 1,200-kilometre front, how much AI is actually involved in the sifting itself?
Bendett: As of December 2024, Ukraine publicly stated that systems aggregating frontline drone feeds had collected roughly two million hours—about 228 years—of battlefield video, a number that has likely grown since. That trove supports the training of AI models for tasks such as target identification, tactical analysis, and assessing weapons effectiveness.
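The conversion is easy to verify. A minimal check in Python, assuming an average year of 365.25 days:

```python
# Sanity check: convert the cited two million hours of footage into years.
HOURS_PER_YEAR = 24 * 365.25  # assumes an average year of 365.25 days

total_hours = 2_000_000
years = total_hours / HOURS_PER_YEAR
print(f"{total_hours:,} hours ≈ {years:.0f} years of continuous video")
# prints: 2,000,000 hours ≈ 228 years of continuous video
```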
The battlefield data—hours upon hours of imagery, video, audio, and other content—is publicly acknowledged by Ukraine. You can look it up online. Any artificial intelligence or algorithm that requires training must have data for that training.
Imagine training your drones to navigate a highly complex battlespace like the Ukrainian war zone. You would want as varied a dataset as possible, and nothing is more varied than hundreds of years’ worth of data from the most active battlefield in the world today.
This does not mean that the mechanism being trained to behave or operate in a certain way will always do so correctly. Even humans, with their natural intuition, are often unable to orient themselves in this environment or act in a way that maximizes their outcomes. A machine may not perform better, but its chances of success—whether it’s a UAV or a UGV—likely increase with the amount of data used in its training.
That was as of December of last year. We are now nearly a year further into the conflict, so there are undoubtedly additional datasets. Ukraine also announced, at the end of last year, that it intended to begin fielding AI-enabled UAVs. There has been limited deployment of technologies such as Swarmer, but Ukrainian reports also indicate concern among operators: AI-enabled systems do not always adapt quickly to even minor, rapid changes on the battlefield. Human intuition can often guide a better outcome or a more appropriate set of actions, while an onboard computer may not respond optimally.
The majority of UAVs operated in Ukraine—and by the Russian military as well—are still human-controlled. However, the more data that exists, the greater the potential advantage. The Ukrainian battlefield is full of countermeasures—both physical and electronic—designed to target adversarial assets. Minimizing the connection between a machine and its operator is currently the most viable approach in a conflict where operators themselves are targets, and where piloting systems in a countermeasure-rich environment is complicated.
Both sides are likely using battlefield data to train their systems. The fact that we have not yet seen fully autonomous UAV swarms or widespread AI-enabled systems operating independently reflects the harsh and complex nature of this environment. Both the Ukrainian and Russian militaries operate in a battlespace saturated with countermeasures of every type.
Jacobsen: In text-based artificial intelligence, especially with large language models, once the available body of human-produced text is exhausted, these systems begin generating synthetic data to continue training themselves—producing new material, assessing it, and refining their performance in a self-reinforcing loop. We’ve seen versions of this process before, from chess engines to early machine-learning systems.
In the context of warfare, is there a plausible path toward something similar with battlefield data? If roughly 200 years’ worth of real combat information is now available, could it be expanded through synthetic generation into something on the order of 2,000 years—two centuries of real data supplemented by many more centuries of simulated material? And if so, would that kind of synthetic expansion help produce more resilient autonomous systems?
Bendett: I would not discount anything, but that question is better posed to someone who works directly with AI technology. Can extrapolations be made from that 200-year dataset for better training? Absolutely. How that’s done, I don’t know—and I haven’t seen it firsthand.
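In the abstract, the loop described in the question (generate new material, score it, keep the convincing samples, retrain) can be sketched in a few lines. Everything below is a stand-in, assuming toy data; it reflects no real model, dataset, or military system:

```python
import random

def generate_synthetic(real_sample: float) -> float:
    """Perturb a real sample to create a plausible synthetic variant."""
    return real_sample + random.gauss(0.0, 0.1)

def quality_score(sample: float) -> float:
    """Stand-in for a critic model scoring realism in [0, 1]."""
    return max(0.0, 1.0 - abs(sample - 0.5))

def train(dataset: list[float]) -> float:
    """Stand-in for model training; returns a dummy skill metric."""
    return sum(dataset) / len(dataset)

real_data = [random.random() for _ in range(1_000)]
dataset = list(real_data)

for generation in range(3):
    candidates = [generate_synthetic(random.choice(real_data)) for _ in range(500)]
    # Keep only the synthetic samples the critic judges realistic enough.
    dataset += [s for s in candidates if quality_score(s) > 0.8]
    print(f"generation {generation}: {len(dataset):,} samples, skill={train(dataset):.3f}")
```

The point of the toy loop is only that synthetic expansion multiplies volume, not variety: the filtered samples are still derived from the real data, which is why the resilience gains Jacobsen asks about remain an open question.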
Once Ukraine announced that it had developed such systems, there were a few incremental updates about their progress. However, neither side is publicly promoting its developments as much as one might expect, mainly due to operational security and the desire not to reveal capabilities to the adversary.
Before 2022, I conducted extensive research on Russian AI and autonomy, and there was a significant amount of publicly available information. Now, there is far less—especially when it comes to military AI. In 2022, Russia’s Ministry of Defence announced the creation of a Military AI Center intended to serve as a clearinghouse for data and initiatives related to artificial intelligence in the armed forces. Since then, we’ve heard nothing more about it.
You really have to search far and wide, sometimes even using VPNs or accessing semi-restricted sources, to find new material. Before 2022, Russian military publications were far more open in discussing how they viewed and intended to use AI. That’s not to say there aren’t any current Russian academic papers on the topic, but they’re generally superficial.
These days, Russian military articles tend to focus on the adversary—the West. For example, a paper about military AI might devote three-quarters of its content to summarizing or translating Western sources on U.S. or NATO AI programs, leaving only a small portion for Russian developments. As a researcher, that’s frustrating, though some valid data can still be found.
We have to extrapolate and assume that if Ukraine is pursuing these initiatives, Russia is certainly doing so as well. That’s consistent with their own prior statements. Before 2022, Russian defence officials repeatedly said they intended to use AI for data analysis, decision-making, and the orientation and operation of unmanned systems. They also said AI would support more effective weapons development and deployment.
As long as Russia has enough soldiers to continue assaults, it will do so. But if it reaches a breaking point where that manpower becomes unsustainable, it will likely shift toward a higher-tech approach. What that will look like, we can’t say.
And I say that because before 2022, no one anticipated that this is the kind of war Russia and Ukraine would be fighting. Before then, FPV (First-Person View) drones were barely discussed—they weren’t a factor in weapons testing or battlefield planning.
The threat of thousands of UAVs anywhere on the front at any second of the day is real. Looking ahead to how AI might be used, there are many unknowns—just as many unknowns as when analyzing Russian military intent in 2020 or 2021. Make sense?
Jacobsen: We know that North Korean soldiers have fought alongside Russian forces, and reporting suggests that some Indian nationals were misled or coerced into serving as well—a story that appears far more complicated than initial accounts suggested. On the hardware side, Chinese components remain widely embedded in Russian drones. During a site visit in August–September 2024, I saw firsthand that many of Russia’s current systems still rely heavily on Chinese parts.
Bendett: Regarding the Russian AI-enabled UAV commonly discussed in Ukrainian reporting—the V2U—the Russians are not publicly acknowledging it; what we know comes mainly from Ukrainian sources. Reportedly, it contains U.S. components and would not operate without NVIDIA microchips, which complicates the sanctions and supply-chain picture. There are also Iranian Shahed drones in the mix.
So, how many countries are feeding into the Russian military production line? There are official and unofficial relationships. Officially, Iran is a major supplier, with China playing a significant role as well—there were mil-to-mil contacts and acquisitions before and shortly after the full-scale invasion. Unofficially, Russia has built a diverse, robust set of networks and supply chains to replace what it lost after being cut off from Western trade in 2022.
Russia’s ability to acquire microchips, microprocessors, microelectronics, and other components has been quite resilient despite Western sanctions. Many willing partners continue trading with Russia. Turkey figures prominently on that list: officially a NATO member that has aided Ukraine in some ways, Turkey is also a transit hub and destination for Russian tourists, and a lot of sanctioned goods and services have transited or been laundered through Turkish routes.
Other former Soviet states, including countries in the Caucasus and Central Asia, along with Gulf states such as the UAE, plus India and several African, South Asian, and Southeast Asian countries, have been implicated—willingly or unwittingly—in trade that feeds Russian supply chains. That partly underpins the Russian narrative that it is “impossible” to fully isolate Russia: many covert or semi-covert partners remain willing to cooperate.
The West has tried to disrupt these networks, but it has not identified the full scale and scope of the legal and illegal channels involved, let alone enforced against them. Even when networks are recognized, not every actor is willing to implement sanctions, because some depend economically on illicit trade. There were many sanctions-evasion busts in 2022–24, and ongoing reporting points to expanding transport infrastructure—Georgia’s highways and connections north into Russia, for instance—that has increased north–south trade flows.
Jacobsen: How should Western military procurement adjust to the speed of innovation emerging from the battlefield? What we’re seeing in this war reflects a broader shift linked to post–Moore’s-law scaling in AI, with Ukraine and Russia effectively serving as an accelerated test case for that trend.
Bendett: Western procurement needs to move faster: cut through bureaucracy, get technology into the hands of warfighters, and let those warfighters have real input on requirements and rapid iterations. The Hegseth memo on drone dominance captures that idea: streamline acquisition, empower users, and prioritize adaptability.
That means giving warfighters a direct say in what they need and how weapons and systems are selected, and allowing many more small, nimble companies to compete and collaborate with the military. This is true globally—large defence contractors often develop systems agreed upon years earlier, which go through extensive certification and testing that naturally take time. That’s normal, and it’s the same for the West and for Russia; it’s one of Russia’s biggest complaints.
But in Ukraine, systems can change rapidly. Soldiers on the ground are directly communicating what needs to change and how. Real-time iteration has become essential. The United States isn’t fighting a war for survival—Ukraine is. If our back were against the wall, we’d innovate and iterate much faster.
A good example is the rapid development of the P-51 Mustang: it went from blueprint to flying prototype in about five months in the 1940s, an unprecedented feat at the time. That aircraft became one of the most successful of World War II and beyond. But that was a war for survival. The U.S. today is not in that psychological, financial, or existential state. As a result, there’s still inertia from before 2022 in how we think about force design and procurement.
Some in the U.S. military look at Ukraine and say, “That’s not how we’ll fight. Our war with China, or with non-state actors, will look different.” But wars rarely unfold as expected. When we entered Afghanistan, we had an overwhelming advantage—on paper and in practice—and yet twenty years later, the Taliban and its networks remained.
So we don’t know how future wars will unfold. And adversaries get a vote. Non-state actors and criminal organizations are already learning from the Ukraine war, arming themselves with FPV drones, quadcopters, VTOL drones, and fibre-optic UAVs. The technology and know-how are spreading quickly. The U.S. military’s overwhelming advantage before 2022 may look quite different in the future—whether facing a peer like China or decentralized armed groups.
The knowledge and experience from Ukraine have proliferated globally. Militaries and irregular forces alike are absorbing lessons, experimenting, and iterating. That’s not to understate U.S. power—America can still bring overwhelming force to bear—but close-quarters, urban combat may look very different from Iraq or Afghanistan.
Jacobsen: What are the ethical and legal challenges of autonomous or semi-autonomous warfare?
Bendett: The main ethical challenge is allowing a system to decide what is or isn’t a target. In a gray zone like Ukraine, where the distinction between combatants and civilians isn’t always clear, a military AI could make the wrong choice.
There are also security concerns about the independence of such systems—whether they could be infiltrated, corrupted, or hacked by an adversary, even to the point of turning against their own operators.
Many countries, including the United States, as well as international organizations and NGOs, are actively debating the ethics of military AI. The central issue always comes down to this question: if a system independently decides to engage a target in an environment where civilian and military roles are blurred, who bears responsibility if the strike is wrong?
Jacobsen: Samuel, thank you very much for your time today. I appreciate your expertise and the discussion. It was very nice to meet you.
