How Project Maven Put A.I. Into the Kill Chain
In February, reports emerged that the operation to capture the Venezuelan President, Nicolás Maduro, had not been a strictly human affair. The extrajudicial caper had somehow involved Claude, Anthropic’s large language model. The military had recourse to Claude via a drop-down menu in a workflow package, the Maven Smart System, which gathers, synthesizes, and streamlines intelligence. The government procures M.S.S., as it is called, from Palantir, the sphinxlike defense-tech contractor co-founded by Peter Thiel and an eccentrically jingoistic philosopher named Alex Karp. Claude’s deployment seemed to come as something of a surprise to its parent company, and an Anthropic executive reportedly reached out to a Palantir counterpart to clarify what, exactly, Claude had done in Caracas. When this inquiry was relayed to the Trump Administration, one Administration official told me last month, it was interpreted as a signal that Anthropic, which was then renegotiating its own contract with the federal government, was perhaps a faithless partner. (Anthropic disputed that characterization of events.) This suspicion was confirmed when Anthropic, citing fears of domestic mass surveillance and autonomous weaponry, refused to allow the Pentagon “all lawful uses” of its products. The dispute culminated in Secretary of Defense Pete Hegseth’s designation, by outraged tweet, of Anthropic as a supply-chain risk—a standing peril to national security.
This ban, however, was not effective immediately. The Pentagon apparently needed Claude for one last job. Twelve hours later, the White House began to bomb Iran. Among the casualties of Operation Epic Fury’s first day were more than a hundred and seventy-five people, most of them little girls, at the Shajareh Tayyebeh primary school, in the southern city of Minab. Claude’s potential culpability in this and other suspected war crimes was a subject of widespread speculation, not only in the media but in Washington. Congressional Democrats sent a letter to Hegseth demanding a detailed account of how A.I. was being used in the Iran campaign. In an essay for his Substack that was republished, in slightly different form, by the Guardian, the technology scholar Kevin Baker wrote that almost none of the attendant coverage (including mine) “had any relationship to reality.” Maven had only recently added L.L.M.-based functionality, but the program had been around for a decade. Claude, in Baker’s view, was a MacGuffin. It served only to draw attention away from the centrality of Maven as an automated targeting system. He continued, “The real question, the question almost nobody was asking, is not about Claude or any language model. It is a bureaucratic question about what happened to the kill chain, and the answer is Palantir.”
The veteran journalist Katrina Manson, who now covers defense tech for Bloomberg, spent much of the past few years asking precisely that question. Her new book, “Project Maven: A Marine Colonel, His Team, and the Dawn of AI Warfare,” is an unflaggingly well-reported and well-sourced account of the ongoing reconfiguration of the U.S. armed forces for a new technological era. The book was completed months before Anthropic’s redlines generated new interest in autonomous-drone swarms and killer robots, but even then the writing was on the wall. Dystopian carnage isn’t coming, she warns at the end of her introduction. It is “already here.”
“Project Maven” is structured as an intellectual and professional biography of Drew Cukor, a Marine Corps intelligence officer largely responsible for the eventual “success” of this military transformation. The narrative begins shortly after September 11th, when Cukor finds himself among the first troops on the ground in Afghanistan. His first mission, as part of an expeditionary unit sent to seize Kandahar Airport from the Taliban, finds him inside a blacked-out helicopter where the place of a lance corporal has been taken by a bulky P.C.—the paleolithic version of Claude en route to Venezuela. The computer was loaded with state-of-the-art tools to assist Cukor and his unit in target assessment, threat detection, mission planning, and commander briefings: “Excel, Word, Google Earth, and PowerPoint, and some in-house military software none of them liked.”
The problem, as Cukor saw it, was not that American forces lacked facts. They were drowning in intel about hideout caves, weapons stashes, and enemy movements, some of which was culled from surveillance or signals intelligence and some of which came from detainee interrogations. But the Marines had no way to put it all together. Al Qaeda targets were listed in Excel. PowerPoint was for mapping network connections. Word was for writing things up. Google Earth was for zooming in and out. This wasn’t wholly ineffective—as one artillery officer later told Manson, “We’ve killed more people on Office than you’d ever imagine”—but precision munitions were only precise if you knew precisely where to point them. Cukor had no methodical way to “divine the patterns of war.” Over the ensuing decade, he watched soldiers and civilians die over and over because of a lack of organized, integrated information. The military, he’d long thought, required “something ‘vastly different’ from the status quo”: he dreamed of a “single digital grid” that gave a “highly accurate battlespace picture” in real time, a vision of white dots that moved legibly across an “aspirational single pane of glass to clear up the fog of war.”
The realization of this digital pane, which ultimately manifested itself as Project Maven, is one of two stories that Manson tells. The first is the halting development of the substance of what Cukor wanted. In parallel, she recounts the procedural, against-all-odds-ish story of how he went about achieving it: Cukor’s private war against a stodgy Pentagon bureaucracy. Cukor, as she portrays him, is a cartoonishly gruff, ball-breaking pain in the ass who overworks himself and mistreats his subordinates and alienates his superiors. He patterns himself after Hyman Rickover, the notoriously bullheaded admiral who single-handedly called into being the Navy’s nuclear-submarine fleet. At the same time, he’s something of an intellectual romantic: his favorite novel, Manson discovers, is “Don Quixote,” which provides her with a ready-made narrative template in which a “tragicomic and misunderstood hero pursues a doomed quest for an idealized version of the world that does not exist, forever trying and failing to save the world and right wrongs.”
Manson, despite herself and to her credit, clearly comes to like Cukor, or at least begrudgingly admire him. Her personal sympathy clears the space for her to take seriously his passion for a world made better and safer by A.I. warfare. In this alternate future, flesh-and-blood soldiers are replaced by drones and robots (and, much later, militarized unmanned Jet Skis); the lives of innocent civilians are spared by reliable systems with instantaneous and total information awareness; and A.I. superiority provides an even more effective deterrent than nuclear capabilities. Manson points out that there are precedents for this fantasy of war’s obsolescence: in the years before the First World War, one contemporaneous observer wondered if the mass-produced rifle would lead to such unfathomable carnage that no commander in his right mind would be willing to risk combat.
But Cukor insists that Maven was never supposed to be a weapon. He frequently defends the project as nothing more than an integrated data platform, which will afford its human users a dramatically increased capacity to make wise and careful decisions. With this positive vision in mind, Manson makes it at least intermittently possible to root for Cukor—as one roots for the insouciant Maverick in the “Top Gun” films—as he struggles with computer-vision models that don’t work, colleagues who jealously hoard their data, users who prefer the systems they know, a top brass set in its old kludgy ways, and peacenik tech workers. In 2018, Google employees mounted a mass protest against the company’s work on a primitive iteration of the project.
In the aftermath of the Google fiasco, Cukor turns to Palantir (in addition to Microsoft and Amazon) to make Maven a reality. The contract, Manson notes, almost certainly rescued an otherwise ailing Palantir from corporate oblivion. It also may have rescued Maven, which ultimately overcame the bitter skepticism of the defense establishment. Manson’s story culminates with the war in Ukraine, in which Maven has helped mitigate Russia’s advantages; the conflict became an inflection point for comprehensive national adoption. The Pentagon’s current contract ceiling for Maven is $1.3 billion. Former Mavenites have assumed positions of great power and influence in both the Trump Administration and a closely allied faction of the tech sector, which has grown bored with mindless consumer apps and embraced a muscular military-industrial complex. Our allies, too, have been convinced: NATO now has its own Maven contract with Palantir, which prompted ten member nations to pursue contracts of their own. At any given time, thousands of people are logged in, monitoring thousands of information flows distilled into a clean user interface that recalls the cinematic touchscreens of “Minority Report.”
The Maven Smart System has become a global surveillance apparatus—it can keep track of forty-nine thousand airfields all over the world—but its current work is hardly limited to intelligence provision and analysis. A “single click,” Manson reports, “could send coordinates through a tactical data link to a specific weapons platform so that it could fire at the target.” The entire process, from target identification to target destruction, is four clicks. In 2023, one source told her that he could sign off on eighty targets in an hour: “Accept. Accept. Accept.” The old system could hit fewer than a hundred targets a day; the new system can hit a thousand, and with the recent integration of L.L.M.s that number has risen to five thousand. It was crucial in the “precision” mass-bombing in Iran. Officials told Manson that Maven was “accelerating operations and ‘enabling lethality’ at combat headquarters around the world.” It is also, predictably, being repurposed for border control and drug policing at home.
And Maven is only one part of the A.I. tool kit. Manson uncovers evidence of two clandestine killer-robot programs, one aerial and the other aquatic, which are being developed in haste. Should China make a move against Taiwan, the straits between them will resemble, as one U.S. commander had it, a “hellscape” of armed automata. For the first time, the Pentagon’s proposed budget contained a line item for comprehensively self-directing systems, requesting an allocation of more than thirteen billion dollars. A machine can shoot, Manson reports, up to “ten times faster than an assassin.” This gives the “autonomy hawks” something like an erotic frisson: one source says that “there’s really nothing quite like seeing a machine aim,” explaining their sense of “an alien aspect, some otherworld[ly] feeling, I don’t want to say ‘religious,’ that’s not the right word.”
But Cukor, who hit his thirty-year up-or-out deadline without getting a star, had long since been removed to lucrative work in the private sector. Manson catches up with him at the beach, near his home in Los Angeles. “He always foresaw a union between human and machine, not a machine takeover,” she writes. He’d once told her that the problem with war was that humans are “materially corrupt, inefficient, and they get tired.” Their weaknesses could be balanced with machine strengths. “ ‘If you get these things tuned up the right way, they can perform better than humans,’ he insisted. AI might help assail the inevitable problem: ‘War is fraught with human error.’ ”
“So was America,” she writes. “We’re flawed,” he says.
Cukor, too, is flawed. He might prefer to believe that Maven was only ever supposed to provide reliable intelligence to inform human decision-making, but Manson repeatedly points out that this was always somewhere between wishful thinking and deliberate obfuscation. Cukor’s interest in operations was such an open secret that it scarcely counted as a secret. Alex Karp, the C.E.O. of Palantir, once described him as the “founding father of A.I. targeting.”
In an important sense, neither Project Maven nor the book that it inspired was ever about A.I. per se. Cukor may have been the crew-cutted colonel who bulldozed the project into existence, but he wasn’t the one who set it in motion. In 2014, halfway through the second Obama Administration, the Secretary of Defense, Chuck Hagel, and his deputy, Robert Work, proposed what they called the “third offset strategy.” An “offset,” as Kevin Baker, the technology scholar, describes it, “is a bet that a technological advantage can compensate for a strategic weakness the country cannot fix directly.” The first offset was the development of nuclear weapons, which secured American dominance over a Soviet Union that could rely on mass mobilization. When the Soviets developed their own atomic bombs, the U.S. staked its superiority on precision munitions, like long-range guided missiles, and stealth technology.
The “third offset,” once Russia and now China had caught up, had less to do with a particular technology than with the effort to revamp the military for sheer speed and agility. What we now call “A.I.” was, at the time, still an obscure cat-identification device. Autonomy was nevertheless a bedrock component. At a public gathering in 2015, Work said, “I’m telling you right now, ten years from now if the first person through a breach isn’t a fricking robot, shame on us.” As he put it to Manson, “I do not want it to just be about intelligence” but about “some type of direct warfighting applications.” Cukor pitched Work on a demo to prove that drone feeds could be better monitored by algorithms than distractible airmen; according to one of Manson’s sources, Work was “super psyched,” and dispatched him to Silicon Valley. Cukor visited Tesla, Waymo, and Uber.
In the spring of 2017, Work inaugurated the secretive Project Maven and appointed Cukor its chief. Their work was only ever couched as an intelligence program, not a munitions or weapons platform. When Manson asked an early Mavenite if targeting and offensive strikes were an unspoken component, he said, “Yah, of course. It’s not like we’re doing it for kicks. The goal of the intel is to take out high-value targets.” Manson continues, “Speaking to me years later, Cukor made no bones about it either.” What was the point of all this speed if you needed to wait for cumbersome human supervision? If the machines could identify the targets, couldn’t they also pull the trigger to rain death from all angles?
From this perspective, Cukor wasn’t exactly waging a war on a definitionally bad thing called bureaucracy. What he identified as sclerosis might more properly have been described as the deliberative process by which our rashest impulses were kept in check. One could certainly “optimize” the decision-making apparatus by ridding it of any opportunities for individuals or committees to exercise discretion. But, Kevin Baker writes, this “friction is also where judgment forms. Clausewitz observed that most intelligence is false, that reports contradict each other. The commander who has worked through this learns to see the way an eye adjusts to darkness, not by getting better light but by staying long enough to use what light there is.” He continues, “This ‘staying’ is what takes time. Compress the time and the friction does not disappear. You just stop noticing it.” Humans are in the loop for a reason. We are there to slow things down.
Manson can’t quite make up her mind about the value proposition of institutional inertia. When she’s in a credulous mood, and disposed to accept Cukor’s appeal to A.I. warfare as an enhancement that will save precious lives, bureaucracy is like an old brick wall for Cukor to bust through like the Kool-Aid Man. When she instead assesses Cukor as a squirrelly pitchman and an all-around bad-faith actor, bureaucratic regulation looks more like Chesterton’s fence—something you don’t demolish unless you know precisely why someone put it there in the first place. Baker, for his part, sees no real distinction between the starchy, old-school Pentagon and its new A.I.-disrupted iteration. They are rather points along a continuum of increasing proceduralism, structures designed to limit the scope of independent action and accountability. Cukor and his ilk might think they’re furnishing service members with new means to rise to the occasion, but what they’re really doing is usurping human flexibility and freedom: “Karp thinks he is destroying bureaucracy,” Baker writes. “He is encoding it.” With Maven, he continues, “what Karp eliminated was the discretion the institution could never admit it depended on. What remains is a bureaucracy that can execute its rules but with no one left to interpret them. Bureaucracy encoded in software does not bend. It shatters.”
One argument in favor of the machines tends to pit the omniscience, mathematical rationality, and tirelessness of A.I. at its best against the weakness, hypocrisy, delusion, and bias of humans at their worst. The flip side of this line of thinking pits the best of humanity—situations in which humble, reflective, and wise people model meaningful discretion—against the worst of A.I.’s routinized brutishness. Neither of these is particularly satisfying, but then again this is just another version of the dilemma that the German sociologist Max Weber pointed out more than a century ago: legalistic bureaucracies, in which everyone follows the same rules for the same reasons, seem like the fairest and most even-handed way to arrange a collective in pursuit of shared values and goals. They might, in fact, be the only way to do so. But insuring that everyone hews to a common procedure is never going to help us hash out what our values and goals ought to be in the first place. Bureaucracies are efficient, but they cannot determine what ends our efficiency ought to serve. Baker has a point when he says that comprehensive automation is the final consolidation of the bureaucratic spirit. But that doesn’t mean we have no choice.
Neither Manson nor Baker, understandably, seems to have much patience with this argument. The A.I. boosters—especially in warfare, but in general—use it cynically, to evade responsibility: we are in the simple business of fulfilling objectives, they proclaim, and if you don’t like those objectives you’re free to take it up with policymakers. We build the tools; it’s up to all of us to decide to use them wisely. Setting aside the fact that these same people have done everything within their power to stifle regulation, this is self-evidently true. It’s also not much consolation. It seems absurd to expect prudence and restraint from figures like Pete Hegseth, who has written, of the Geneva Conventions, “Our boys should not fight by rules written by dignified men in mahogany rooms eighty years ago.”
At the end of the book, Manson tells Cukor that when all is said and done she just doesn’t buy the idea that A.I. will ever be contained by cautious oversight. In the context of an exchange about Israel’s reliance on near-indiscriminate A.I.-enabled killing in Gaza, she says that “the AI targeting machine makes possible the policy decision, enabling operational speed and volume.” Cukor, who has made the policy argument in the past, now concedes: “This is correct.” Still, he affirms, “I’d do it again, in the same way.” ♦