
At a secret US military base about 50 miles from the Mexican border (exact location: classified), the defense contractor Anduril is testing a remarkable new use for a large language model. I attended one of the first demonstrations last year. From a sun-bleached landing strip, I watched as four jet aircraft, codenamed Mustang, appeared on the horizon to the west and soared over a desolate landscape of boulders and brush. The prototypes, miniaturized for the demo, fell into formation, their engines buzzing as they drew near.
The sun burned my eyes, so I turned to a nearby computer monitor under a dusty tarp. With a few keyboard clacks, a fifth aircraft appeared on the edge of the screen, its outline looking suspiciously like that of a Chinese J-20 stealth fighter. A young man named Colby, wearing a black baseball hat and sunglasses, gave the order to deal with the computer-simulated bogey: “Mustang intercept.” That’s when AI stepped in. A model similar to the one that powers ChatGPT parsed the command, spoke with the drones, and then responded in a dispassionate female voice: “Mustang collapsing.” Within about a minute, the drones had converged on the target and then, with minimal fuss—and virtual missiles—destroyed it.
Anduril’s demo illustrates how eagerly the defense industry is experimenting with new forms of AI. The startup is also developing Fury, a larger autonomous fighter for the US Air Force designed to fly alongside crewed jets. Many of these systems are already autonomous, thanks to older AI tech, but the idea is to incorporate aspects of LLMs into the chain of command, to relay orders and surface useful information to pilots. Sergeant Chatbot at your service.
It’s kind of weird. But then, defense tech always is. We spend and spend, on good stuff and a lot of crap. Here, the promise is efficiency: Kill chains are complicated, and AI, in theory, streamlines them (a euphemism for “makes them deadlier”). And whoever controls that technology, the four-star American strategists say, will dominate the world. That mantra is why the United States is so keen to curb China’s access to cutting-edge AI, and why the Pentagon intends to ramp up spending on it in the coming years. The plan is striking but unsurprising. The war in Ukraine, with its ubiquitous, low-cost, computer-vision-equipped drones, has demonstrated the value of autonomy on the battlefield.
The generative AI boom, meanwhile, has multiplied interest. A 2024 Brookings report shows that funding for AI-related federal contracts grew 1,200 percent from August 2022 through August 2023, with the vast majority of that money coming from the Department of Defense. That was before President Trump’s return to office. His administration is now pushing for even more military AI: Its trillion-dollar 2026 defense (or rather, “war”) budget includes the first-ever dedicated allocation for AI and autonomy, at $13.4 billion.
This means that AI companies themselves have a lot to gain by making big promises about what they can do at war. This year, Anthropic, Google, OpenAI, and xAI were all awarded AI-related military contracts worth up to $200 million each. It’s a major about-face from 2018, when Google famously pulled out of Project Maven, an effort to use AI to analyze aerial imagery. Emelia Probasco, who studies military uses of AI at Georgetown University, says Project Maven, now run by Palantir, has become, in the form of the Maven Smart System, one of the military’s most widely used AI tools. It makes sense, she says: Large language models are good at intelligence gathering because they excel at parsing large quantities of information. They’re also well suited to cyber offense because of their ability to write and analyze code. “The ambition that is a bit scary is that AI is so smart that it can prevent war or just fight and win it,” Probasco says. “Like some sort of magical fairy dust.” For now, today’s models are still too unreliable, error-prone, and inscrutable to make battlefield decisions or be given direct control of any hardware.
A key challenge for these players, then, is how to deploy AI in ways that both play to its strengths and minimize the risks. In September, Anduril and Meta together won a US Army contract worth up to $159 million to develop yet another AI-infused application: a rugged augmented-reality helmet display for soldiers. Anduril says the system, which will deliver mission-critical information to warfighters while also sensing their surroundings, will use a new generation of more capable AI models that are better able to interpret the physical world in real time.
And what about fully roboticized soldiers? I called up Michael Stewart, a former fighter pilot who led the US Navy’s Disruptive Capabilities Office and was involved in pushing for AI experimentation by the Fifth Fleet back in 2022. Stewart now runs a consulting firm and talks to military planners around the world. He expects the future of war will be heavily automated. “In 10, 15, or 20 years, you’re going to have robots that are pretty autonomous,” he says. “That’s where you’re going.” And assuming these systems have LLMs for brains, they won’t just become a new kind of witness to the horrors of war. They’ll be able to explain, in their own words, which actions they took, and why.