
Welcome back to the AI Policy Podcast. I'm Gregory Allen and today we've got something
that is a genuine delight and special moment for me. We're talking with Katrina Manson,
who is the author of the new book, Project Maven: A Marine Colonel, His Team, and the Dawn
of AI Warfare. Now I'm just going to skip right to the end, which is if you listen to
this podcast and you do not buy this book, I think less of you. Moreover, if you want
to talk about military AI in the year 2026 and you either have not worked for the military
or you have not worked for a military contractor working on military AI and you have not read
this book, I'm taking away your permission to talk about military AI and claim that
you're doing so intelligently. If this book does not win Financial Times best book of the
year, or become one of the finalists for best book of the year, I will not think less of this book,
I will think less of the Financial Times. So with that, Katrina Manson, thank you so much
for coming on the podcast. Thank you for having me. I guess you read it, then that's good.
I did. I did. I would have read it in one sitting, but I have children and they're
feisty, but okay, so let's just start a little bit at the beginning, which is with you.
So you're a journalist. Who are you? How'd you get interested in this story?
I am a journalist. I am with Bloomberg now for the last three, four years covering AI
and national security, cyber, tech, emerging tech, defense, those kinds of things. Before
that, I was with the Financial Times for 11 years. I was the US foreign policy and defense
correspondent. So I was a Pentagon correspondent from 2017 to 2022 and covering the intelligence
community and state department and national security council. So I got to kind of see that
Nexus. Before that, I was based in Kenya and I spent five years as East Africa correspondent
for the Financial Times. And before that, I was reporting in Congo as a Reuters correspondent,
and in Sierra Leone and Burkina Faso. So I got to America eventually.
Well, wonderful and we're glad we did. So as you say in the subtitle, this book is about the
dawn of AI warfare. So how'd you get interested in this story? What made you decide it was worth a book?
I think I just want to know what's happening next. And when I found out there was this project
about the future of war and this claim that was made that it would involve AI and then that it
shut down and couldn't be FOIA'd, meaning under the Freedom of Information Act there was no obligation
for the Pentagon to respond to journalist requests. That made me, from that moment on, think: well,
what's happening? And why do they care so much and why do the Google workers care so much who
are protesting against this? What does it actually mean? And I became frustrated with the debate
because the debate about AI warfare is fascinating, important, powerful. So is the debate about
how you protect military operators at war and civilians. So that is so much to care about.
Add in this unpredictable black box technology, I just wanted to understand what everyone was thinking
and anyone who is animated by passions. As I discovered it, everyone to do with Project Maven
is animated by passion, whether they're for it or against it. Really, I wanted to get under the
skin and understand their motivations. One part and the second part was, well, what has it done? What
are the concrete uses of AI in warfare and do they work or do they go wrong? And nothing really
about Project Maven, as much as has been written about it, talks about the actual quality of the
algorithm, the way it was put into the workflow, whether operators wanted it or didn't. And that
really changed over time. And how it's changed over time. Yes, yes, and where it's got to. And
you know, those things that the campaigners were worried about at the beginning, they said,
you know, Google protesters said we don't want to be involved in the business of war. Okay,
that's a kind of ethical position. But the human rights campaigners who were really concerned
about Project Maven said, we think bringing in AI at this point could help lead to lethal
autonomous weapon systems, which is not what it was even said to be. And so I wanted to pull on that
thread and see, were they right to be worried about that? Were they overegging it? Did people inside
the Pentagon always plan that? And so I think a lot of it for me was also about a little bit
of accountability and transparency, if I could get there. It was not simple. So as you said,
and as I've intimated, you know, this really does go from the dawn of Project Maven, including
some of the eureka moments that gave some of the key protagonists in the story, the desire for
something like project maven through its years of struggle all the way to middle of 2025,
where it is easily the most impactful AI capability in the US military. One source you interviewed
described Maven Smart System, sort of the current incarnation of where we are in the story, as, quote,
"the Microsoft Windows of warfighting," which I think is a lovely quote. So I want to give our
audience, I want to start our discussion by giving the audience a sense of just how impactful
Maven has been. And I think you've already done the perfect job of it, which is on page seven
of your book. So could I actually invite you now to read from your book this passage on page seven?
Sure. Um, 10 years since Cukor started his effort, the AI decision-making systems developed
under Maven and some of the Pentagon's 800 other AI projects are used on the battlefield.
Maven smart system, MSS, a software platform that develops targets with the help of AI
is now deployed in every branch of the US military and all over the world, incorporating more than
150 data feeds and the work of more than 50 companies. NATO started using a version of the system
in the spring of 2025 and I would learn in October 2025 that 10 NATO members were lining up to use it
for their own militaries. Maven has already sped up the pace of war. I learned from an official
at the National Geospatial Intelligence Agency that with the help of computer vision,
the US went from being able to hit under a hundred targets a day to being able to hit a thousand.
In combination with large language models, LLMs, integrated into the Maven platform,
that number has risen fivefold to five thousand targets a day. The AI algorithms developed
under Maven now deploy in submarines and in space operations. They're in subsea
sonar systems belonging to America and two of its closest intelligence allies, the UK and Australia,
designed for nuclear deterrence. They're fielded on autonomous drone boats. I learned the AI
targeting systems live inside two highly secretive systems, one aerial and one aquatic,
that could surveil, select, and kill targets entirely on their own, intended for the defense of Taiwan.
Okay, so anyone who heard that and doesn't understand why this is a vitally important story to
tell is clearly clueless. That's where Maven ended up in the middle of 2025 and actually at the
end of this, I do want to talk about where we are in terms of the war in Iran, but its origins,
which you tell in this book, are much more humble. It begins with the trauma of one marine intelligence
officer, Drew Cukor, serving in Afghanistan in a series of deployments that began only two months
after September 11th. So who is Drew Cukor, and what was it about his experience in Afghanistan that
planted the seeds for what would ultimately become Project Maven? He's the Marine colonel
who is chief of Project Maven from its inception until late 2021, I think I've got that right,
maybe early 2022, and he is really described to me by many people before I ever meet him as the
driving force of Maven, and also something of a difficult boss in that he is very...
You described him as a noun, a verb, and an adjective all in one, which is amazing.
Many people talk about getting Cukored and to Cukor someone. At some points, if it means working hard,
sleeping not very much, and being very exacting of everyone who works for you, that certainly was
one way he was described. So he is sent into Afghanistan in October 2001, after 9/11,
as part of the first set of expeditionary Marines, and he sets up, carrying this hefty computer.
Along with them, it was actually Dave Spirk, his colleague, who was carrying it, and the two of them
are part of the intelligence cell for those early operations, and he's very angry and frustrated
by the inability to really bring intelligence to the front lines. So information exists to some
extent, maybe written down about past Taliban hideouts that the US might have known. Of course,
the US was very underpowered in terms of intelligence at the beginning of that war, and some would argue
throughout, just because it was so different. It was a very different place to operate in different
language, all sorts of different community networks, something that Flynn later makes extremely
public, and it unleashes this evisceration of the failure of US intelligence.
And Drew Cukor is really one of those people operating in this intelligence vacuum as an
intelligence officer, and he simply just wants data. He wants to, the Americans start facing
terrible, improvised explosive device attacks, and he wants to start logging where they've been,
how often they come, who makes them, under what weather conditions, and all this sort of
information. Eventually, he gets to meet Palantir, and he works with them and the US military
bureaucracy to get that Palantir system delivered to the Marines in 2011. So a lot later; by
then he's gone up the Pentagon bureaucracy. And this is a system that he argues saved lives,
almost overnight, and becomes a hit because the previous systems, which exist, it's not that none
of this has been thought about, but the previous data integration systems are clunky. Maybe they
just simply don't work very well. The data feeds aren't working. The access to them isn't there,
and of course, I'm sure your listeners, at least, know the big fight that Palantir has with the
Army over trying to become a program of record. So he is really part of that effort to get Palantir
to US troops from the outset. And that initial experience of his bringing software and modern
software engineering techniques forward deployed with end users who can complain and actually have
a company take that feedback and make changes to try and make the software better. Contrast with
his experience, basically using Microsoft Word and Microsoft Excel to log intelligence information
and having terrible analytic tools, having terrible dissemination tools, and his experience of
bringing technology and improving intelligence is a formative one for him that he then brings to
later in his life as he's thinking about artificial intelligence. So I want to now jump to the,
you know, that's Kukor solving one flavor of the intelligence problem. But if you fast forward
to the later era of those wars, there's a different flavor of intelligence problem,
which is about having too much data compared with the analytical capacity. And this is especially
acute in the challenge of processing, exploiting, and disseminating information. So can you talk a
little bit about that flavor of intelligence challenge and what was going on at the time?
This really becomes the launching pad for Project Maven. This is the pilot project, if you will,
that most people are familiar with. Project Maven ends up being much more than this. But it starts with
this problem that they believe they can tackle using AI, which is that the drones the US has
unleashed in the global war on terror, the GWOT, are collecting information that is not actually
being looked at. They're running so many videos the entire time, and they have screeners looking
at it or not looking at it as the case may be. And there were complaints as high military,
as senior military officials would go out to Iraq and elsewhere and realize that no one was actually
then observing, analyzing, taking this data from the drone feeds for any kind of actionable
intelligence. Because they're collecting years and years and years' worth of video footage per hour,
or whatever the case may be. And so the analyst community is just overwhelmed by the volume.
And so Cukor, and I guess also Will Roper, have this insight that maybe AI could be
helpful in solving this problem. What sparks that insight for them? So Will Roper already
has a project by the time we kind of joined him in the story, looking at satellite imagery and
trying to use AI on satellite imagery. And that gets through Congress and gets called
Project Maven, and he gets some money for that. Cukor has been attending these breakfasts in
the Pentagon that really reignite that kind of cold war effort to try and get one up on Russia.
This time it's reignited under Bob Work, then Deputy Defense Secretary to look at how the US
can catch up with what it perceives as a risk that it will fall behind China's technology.
And so they're trying to come up with ways of knitting together emerging technology, the commercial
sector, and what the Pentagon perceives as the China challenge. And Cukor comes up with this idea
to bring AI to drone footage rather than satellite footage at this time. And he pitches it, and the
Deputy Defense Secretary loves it. And they then develop over a series of months, a pilot, and
very interestingly, Dave Spirk again works with some early algorithmic companies and says,
can you detect this? Can we run an algorithm over an image and just say what's in it?
This was already solved and existed in the commercial world, but they wanted to see if they could
do it with military objects for which there was much less data available commercially. But of course,
the Pentagon inside had lots of data on these objects, but it wasn't in the right place,
it wasn't in the right format. And they just didn't know if it could work. And together they take
Will Roper's name, Project Maven, and they apply it to this. They take his money too, don't forget that.
That is in the story. Yes, Cukor was always very pleased about that, he told me. And they go to
Congress and they start getting more money as well. And Will Roper was a supporter of Project Maven.
Yeah. Okay, so they have this challenge of basically infinite data, very finite analytical
capacity, and it's costing lives, effectively, in the war on terror. And they say, can we make the life
of these analysts easier? And I think the original technical capabilities and even the original
technical aspirations were quite humble, counting the number of people in an image, counting the
number of cars in an image. These are pretty humble tasks, but they actually are on the job jar
of these analysts. And so if you can make their life easier, you can increase the analytical
capacity of the entire workforce. But as you were just hinting, Cukor really wants to bring
leading AI companies, Google, Palantir, Amazon to work with the Pentagon. This was a time when
tech workers were pretty skeptical of the Department of Defense. How did Cukor persuade them to
come work with the Department of Defense? Two things. Just to pick you up on "quite humble":
the reality was quite humble, but the ambition of Cukor from the outset was enormous.
He had this thesis that talked about the need for white dots that you can click on and send
as a target. He really had spent years developing this idea. So although analyzing an image just
for what's in it, and the effort to do that, went kind of almost comedically wrong instantly because
it was so hard, they got better at it. His scope, his vision was always for something that could
automate or could bring intelligence to the front lines. Yeah, it's a little bit like Jeff
Bezos. He's like, I'm going to start with selling books online, but my dream is to ultimately sell
everything online. And this is the logical starting point based on the maturity of the technology
and infrastructure where we are right now. So you're totally right, his ambitions were much broader.
So one of Cukor's priorities was getting, well, actually, sorry, finish answering. I interrupted
you before you got to the second part of answering the question. The second part is, how did he get the
commercial sector on board? So he studied the papers. He looked at who he wanted. Research papers
that were published online? Yeah, research papers. He was reading about Microsoft. He
was doing a lot. It wasn't just him. There was DIU, the Defense Innovation Unit, as your listeners
will know. I got that right? And various other parts. But eventually, companies started
gathering. They also had a prime contractor ECS who was also kind of understanding who was at
the cutting edge, but he wanted to go to the cutting edge. And one of the companies Cukor
wanted was Google. They used Google Earth already, whether Google Earth knew it or not;
they were using Google Earth for their military operations to help. To make that legitimate,
to bring that in, to make it as a kind of platform, that was part of the vision. And then he needed
really good algorithms. So he wanted DeepMind. He didn't get them. He wanted
another part of Google, Google Brain. He didn't get them, but he did get Google Cloud.
And then at the same time, he was trying to find the algorithm makers. And there was this one startup
called Clarifai, who really was just getting going, with an award-winning leader who had become
very interested in computer vision and had started winning these contests for how many images...
Let me just jump in here. Folks on this podcast will have heard about the 2012 ImageNet
competition, which was basically the marriage of modern GPU technology with some neural network
approaches to computer vision. And they blew everyone out of the water. And that was the aha moment
for Silicon Valley that, like, the time is ripe for an AI revolution. Was that the ImageNet
competition? And that one of the Clarifai founders was on the winning team of that 2012 competition?
So when Cukor got him, he probably felt pretty good about himself.
Yeah, and it took, it took a few visits. So Matt Zeiler, the boss, he began to waver. He wasn't quite
sure if this is what he wanted to do. At the time, the way he was making income was he was using
computer vision for wedding blogs. So his algorithms were really good at identifying a bridal veil
or a groom's suit or the tiers of a wedding cake. So to repurpose that, and he was based in New York,
he knew that some of his staff might not be comfortable with that. He didn't have a background
in the military. He told me, you know, the most he knew about the military was watching old
war movies with his grandpa. He had one friend in the military. He grew up in Canada.
So it really wasn't part of his, it wasn't, it wasn't on his bingo card. And Cukor made these
visits to see him. Zeiler was kind of, at least, entranced a little bit by just how much of a
kind of military cliché Cukor seemed. You know, he was meeting a real-life Marine colonel.
But they also had these long discussions. And Cukor put to him, according to Zeiler, Cukor
did not confirm this account, that he kind of laid out a scenario where US military operators
were somewhere in Africa and were concerned that they were getting attacked and
were potentially getting roughed up. And it turned out it was by cattle. And he put to
Zeiler that if they just had AI that could be analyzing and observing what those concerns were,
AI would help. And for Matt Zeiler, this was a very compelling potential use case that he then
used to put to his staff and say, we can help save lives rather than take lives with AI, and
this is a direction I want to go in. And he did lose some people over that. And then subsequently
after the Google protests, he had further, even more public protests. But he has continued
down this road. Yeah. And I'll just reveal my bias here: to his credit, he has stuck with it.
And I'm grateful to him. So Google was one important relationship. It obviously blows up, with
Google now basically exiting the contract and the work on Project Maven. At the time, it was seen
as a devastating blow to the military's AI efforts. But ultimately,
Palantir and other companies, Microsoft, AWS, come in. So what actually was the aftermath of Google's
exit? If you ask Cukor, he says everything was fine, and all it did was bring other companies
coming forward to say, we're here to help. I think that is, I do think it's fair to frame it
as a big fault line, a big rupture. And the way you trace this history is you look at the reaction.
The claim that AI was needed to face off against China becomes a much more substantial part of
the argument from here on in. You see in Congress a discussion, a public discussion for public
consumption that AI is needed. And if China has AI, the US will fail in any potential conflict.
You see the emergence of Eric Schmidt's group, the attempt to really try and analyze what role
AI can play in war at a time that publicly, it seems quite unpalatable. You also have
this outreach to, and interest from, AWS and Microsoft. And then Cukor himself rings Palantir and says,
hey, do you want in on it? And he goes and pitches what would have been his Google Earth for war
to Palantir, which is not necessarily on board at the time. And they want the contracts,
but they don't believe in AI at the time that he pitches it. And they also don't see themselves as
a user interface company. And in the book, I report that they, some of them were put out by this
idea. They want to do the data management, the data integration. They don't want to be a fancy
user interface. And also, that's not what AI is. So Cukor begins to encounter problems even from his own
team about the idea of trying to create a kind of everything app as it were, rather than develop
the algorithms to be good enough, which at that point really weren't delivering. But he does get
on board other parts of the commercial world. But you also get this fracture. So you have Google
workers saying, we don't want to be part of this. You have this kind of moral backlash that I think
does make the Pentagon very wary. And of course, you have the formation of the JAIC, which, you know,
I want to hear from you about your experience in the JAIC. But you have Jack Shanahan, who was director
of Project Maven, so Cukor's boss, moving over to then lead the JAIC. And he goes on a listening
tour. And I learn from enough people in this book that listening tour means, you know,
going to people who are a bit upset, softening the blows, perhaps even apologizing. Palantir at one point is
counseled to go on a listening tour. They call it a listening tour; others call it an apology tour.
And really, this is Shanahan reaching out to try and calm the waters on AI, to deliver the public
acceptability of AI in warfare. And that's, I think, one reason why the JAIC focuses very
much on ethics: that development of this ethics code, the development of responsible AI, which
annoys some people in the Pentagon because they want to go faster. And the use of AI for non-lethal
military applications, such as warfighter health, predictive maintenance, humanitarian assistance,
disaster relief. There were all of these workflows that were very much intended to say, like, look,
there's a lot of ways companies can help the Department of Defense other than things that go boom
or things that are one step removed from going boom. Yes, they talk about wildfires as well.
But if you look at the statements of the JAIC and Jack Shanahan from the very outset, he is talking
about designing AI into weapons and saying no weapon should be designed in the future without AI
in it. So he somehow was trying to straddle both, moving into this much more operational
field than, in public at least, Maven was. And then meantime, the people in Maven were not
focused on ethics in any way. They were developing testing and evaluation, trying very hard to do
that. But at the time, they were judging algorithms sometimes just by eye by looking at the screen
and trying to work out which algorithm was better at identifying things than the others. They then
embark on this kind of growing up process of trying to improve that. But Maven was more operational
at the time and less focused on the ethics and responsibility. And the JAIC took on this very
public focus to say, we need this because China is going to get it. And we will be responsible
for it. But there actually wasn't very much integration between the two efforts.
You're telling me about integration. I mean, number one, I was the policy guy at the JAIC. And that ethics
portfolio fell to me. And for me, it was perfectly natural that Maven didn't have a big ethics
component because they were in the intelligence, under the intelligence directorate, they're not
a policy shop. We were a policy shop. So we wrote ethical guidance that applied to them, as opposed
to them writing ethical guidance that applies to us. And there was some sharing of relevant infrastructure.
So for example, the computer development environment Sunnet, which was a mechanism for labeling
images: that infrastructure was actually important for our humanitarian assistance and disaster
relief efforts, even though it had its origins not in Project Maven, but had an important growth phase
under Project Maven. So, okay. But one little, one little tweak on that. So, yes, Maven was in the
intelligence directorate, but it was focused on getting it out to operations. And I think that's
the thing the public didn't see at the time. The field-to-learn philosophy, right? Yeah.
Yeah. And well, let's talk about that field-to-learn philosophy, because you have, you know,
these stories of Colin Carroll. And I think it's Brian Ward, you know, going off to these far
flung places, trying to persuade people to plug in these boxes to try out Project Maven. And
oftentimes in the early stages, being really disappointed. So how did we go from this story of, like,
the AI is super broken and junky, to, like, AI is this world-conquering colossus flavor of capability,
in such a short period of time? Like, what is your take on the field-to-learn hypothesis?
First of all, the jury is still out on whether it is this world-conquering, you know, capability. And
I think there's still a lot of debate inside the Pentagon about how good computer vision really is,
for example, and the way in which LLMs should reliably be called upon. So we can maybe get to that
later. But the effort to get operators to try this out was highly fraught. Many of the services
that Project Maven tried to seduce, saying, hey, try out this algorithm, didn't want to play.
And it ended up falling back on past relationships. Colin Carroll had a previous relationship with
a commander in Somalia. And so they were able, partly through that relationship, to field the first
algorithm in Somalia. It didn't go very well. That was even in the first year. So they worked
incredibly hard to get the algorithm out there to have some system to integrate it into. And then
the operators put it on and it flashed a lot. It was identifying too many things. The boxes were
flashing up a lot. And if you imagine, a second of video footage comprises multiple still frames.
And if the AI detects it on some frames and not on others, which can happen because it's not
consistent, you just get a flickering. And so the operators turned it off. Sometimes the detection
boxes, I think at one point they were using blobs rather than boxes and the blobs would obscure
what was in someone's hands. And you couldn't even then, you know, there were lots of sort of
learning efforts going on during the first two, three years that they could get the software
updated much more quickly than hardware. But still operators complained that it wasn't being
updated quick enough. And they then relied on sending out people to the operational zones to
essentially coach the operators. Hey, please try this out. We've got to think about China.
Okay, it doesn't work now. I know that, but we're trying to get it ready. And even Cukor
described the AI as a bag of potato chips, which took me quite a lot of time to try and unpack.
But I was told in the end it meant, of course, the algorithms are no good. We don't care about
the algorithm so much as the system that this is going to sit inside the way we're going to change
operations. And of course, it's competing with existing systems. So NRO has its own system
that does intelligence. Cukor told me that he tried to work with them. But in the end,
Maven went faster, but you have to imagine every single system has its competitive spirit.
Its cheerleaders. It has its own budget line. And so Maven was beginning to,
I suppose, threaten other efforts, in their view. And also it was leaning
further towards operations. And Cukor, I reported in the book, was advising people to say, don't talk
about targeting. That gets into operations. We don't have buy-in for that yet. Even though it was
in his mind. Yeah. So at least for me in the book, one of the key stories that you tell that
made me think of as like AI went from mostly doesn't work to is actually delivering some pretty
extraordinary capabilities was the story you tell of project Maven in Ukraine. So when Russia
invaded Ukraine in February 2022, Maven had already been around for about five years. What state
was the program in at that point? And then what happened in the weeks and months after the
invasion when the US military decided to bring Maven to bear in support of Ukraine? Two things,
bureaucratically, Maven was actually in a place of flux. It was supposed to move to NGA. There had
been this kind of almost catastrophic fights within the department about whether it went to NGA
or to the JAIC or to the JAIC's successor. Yeah. And that had been very fraught. What Congress said was
it was meant to go to NGA in the end, but it hadn't happened yet. So it was in a period of transition.
And Cukor had left by February 2022 as well. So you had Joe Larson leading it, and also Cameron
Stanley, who is at today's CDAO. So both of them playing a very prominent role. Now it had been tried
out in Afghanistan during the withdrawal as a means to see what was happening at the airport
just to give greater clarity at a time when obviously the US had very little clarity and things
went very wrong. But it was at a moment where they could see Christopher Donahue at the time,
General Donahue, walking around on base. And they could try and identify, count people in the crowd.
That moment when Kabul Airport was really being deluged by people trying to leave,
Maven became a tool for working out how many people were there so that people could advocate
internally to say, hey, there's a situation developing or already developed. At that point,
senior people in the Pentagon started getting accounts to Maven. And so by the time it was
time for Ukraine, the 18th Airborne Corps had been for two years running experiments and
exercises with Maven, to try and link up a computer vision detection, potentially, to a weapon
through Link 16 and be able to fire against that target. So really a massive shift into quite
explicitly AI targeting, the one that Cukor had always envisioned. It was the 18th Airborne
Corps that took that forward. They were deployed to Wiesbaden in support of Ukraine. They didn't
really know what support to Ukraine would look like. So some of the first efforts were to count
people leaving Ukraine to get a sense of refugee flows. Some of them were to try and assess the
state of the Russian capabilities and where they were, but very quickly after the war started,
the invasion started, they started trying to use computer vision to assess where were the Russian
things that the Ukrainians might be able to hit. And again, it didn't go well. The algorithms had
been trained on the Middle East. They had been trained on desert. Ukraine, there was snow,
there were tanks. What they did was they moved in satellites. They took new pictures and remember
all those tanks were lined up on the road to Kyiv. They took that new data. They fed it back to
the algorithm makers. And overnight they would be retraining the models trying to get them better.
And then they developed this sort of, for some, it was a gray area. For others, it was clearly
allowed. Still others were terrified, quite frankly, of breaking their own protocols. But they
started developing points of interest that they could share with Ukrainians. Almost everyone
I spoke to at some point slips and calls it a target. But they were making sure that they weren't
calling it a legal target. They weren't telling the Ukrainians what to hit. They weren't sharing
America's classified intelligence data with the Ukrainians. But they were drawing on all of that,
using computer vision to identify where objects were and then share it with the Ukrainians.
Now that computer vision improved really only because it was then cross-referenced with
signals intelligence and other forms of intelligence. And so they could get a hold on what things might
be. But they got really good in that first year, in their view. They began to spot things that they
could get to the Ukrainians within a matter of seconds, which the Ukrainians could hit. And the trust,
which is the other big part of what it is to experiment with AI and develop it, became such that
the Ukrainians, in one case I was told about, couldn't tell what they would be hitting from their
own intelligence sources. And the Americans could say, trust us, hit it. And so it might just look
like a rectangle or a truck to the Ukrainians, but to the Americans, they were able to see that it
was, I think, in this case, a TEL. Yeah, it was a TEL. Yeah.
So, I mean, to me, the story that you're telling as somebody with a background in the
Department of Defense, what you're saying in kind of a blasé way, or a matter-of-fact way is
probably the more accurate phrase, is just miracle after miracle after miracle, right?
So the traditional story of military technology over the past 20 years is they make a bunch of
promises. It never works. The program is canceled. That's the default outcome. Then the next,
you know, the next tier up is something finally shows up, but it's super duper broken and they promised
they'll fix it. And then they never do. And then if you get above that, it's like, okay,
a few years later, something that's halfway useful shows up. So the alternative story that you
just told where Maven shows up. And initially, because their algorithms have training data from
the Middle East, which has sand as a background and is going after the types of activities that go on
in Iraq and Afghanistan, not the types of activities that were going on in Russia, Ukraine,
doesn't work very well. But the fact that they were able to collect an enormous amount of
training data, the fact that they were able to label an enormous amount of training data,
and the fact that they were able to then retrain those algorithms and then redeploy them
relatively quickly and ultimately get to something really, really powerful. And you say, you know,
mostly when combined with signals intelligence, but I think it's, you know, to me, you can't
overstate how valuable directing the attention of the analyst really is in that first part of
the story, right? Because the satellite image might cover hundreds of square miles or more, right?
And so telling the analyst look at this spot first is a very, very, very powerful thing. Now,
ultimately, you know, the computer vision model may have gotten it right and say, hey, I think this
is bad guys. And then the computer analyst, sorry, the imagery analyst looks at it and says,
yes, it is or no, it isn't. Or it probably is, but I'd love to have other data points to
corroborate this information. You know, you're shrinking the workflow of assessing, you know,
what is actually out there by half, by three quarters, by 90%. This
is a huge, huge increase in productivity. And you're right. It wouldn't be that valuable if there
weren't other parts of the system that are important, you know, to corroborate what the AI is
doing. But as sort of a first directing of the attention of the analyst, it's really powerful.
And that's why when you wrote that the peak in terms of passing targets to the Ukrainians
was 267 in a single day, that is like an unprecedented story in intelligence sharing. You know,
other military campaigns where the US has tried to support allies don't look like that. And,
you know, just based on the feedback that you cite from Ukrainian officers in terms of their own
sense of just how good the support they were getting from the Americans is, to me, it looks like
something really special was going on. I think it's really interesting because it slows down
in some subsequent years. But you make it sound like a political story as opposed to a technology
story. I think it is, I think it is about policy rather than politics. In this case, or certainly
the way it was put to me, that and also trust and bonding, you know, that first team
developed that relationship with the Ukrainians to the extent that they could say, trust us,
hit this, which might not pass muster by policy, but they did it anyway under pressure of war
and experience. The new team that came in didn't have the same affinity. It was
debatable for some about whether that passing of points of interest was okay or not. And whether
the two systems were becoming meshed in such a way that America's classified systems would be
put at risk. Now, it is very hard for me to get super close to that. But for what I've, you know,
there's a debate. There was a debate at the time. And obviously, I spoke to lots of people who said,
no, we were fine. We did exactly the right thing. But how that intelligence sharing happens
for other partners around the world becomes key. And of course, the Project Maven team said,
we could do this for Taiwan. You know, let us brief you on how we can support a partner to share
without being a direct participant. And then with Iran today, obviously the US is a direct
participant because it's a US-led operation. But when it slowed down, even under the new team,
they became confident. Ukrainians told me, a Ukrainian officer told me, that they were good at
getting dynamic targets, which is one of the difficult things to get because that's something that
isn't already pre-vetted. It might be on the move. They were very happy with it. But then the
permission for that relationship to continue in some way or something changed and it didn't
continue. Ukrainians have found other ways now. They've got their own drones that are getting
passed. But at the time, they couldn't see, you know, past six kilometers. Yeah. So, you know, by
early 2024, Ukraine had destroyed more than 2,600 Russian tanks and nearly 5,000 armored vehicles.
You describe a transporter erector launcher, which is like a mobile missile platform being destroyed
just 18 minutes after detection by American AI. So, I have two questions for you here. How much
credit do you think Project Maven deserves, especially in the early phase of the war for the success
of Ukrainian resistance? And then I have a follow-up question, which is, to what extent do you think
the magic of Maven is in the AI versus to what extent is the magic of Maven just in the graphical
user interface and the integration of all of these data feeds, whether or not AI is a part of
analyzing that data? I don't think anyone wants to hear from me about credit or magic. That's just
not, you know, my analysis on that is not worthy. But I think as a reporter, the data integration
and the role it would play was very contested within the Project Maven team. And several people
told me, we could change out this user interface tomorrow. We don't need Palantir. We're agnostic on
whose interface we use, but we do need an interface. Other people told me it's Palantir that's
delivering this. So, I think, you know, it wasn't clear to me that anyone won on
that, except that Palantir and Maven Smart System are getting all the contracts, so they're
expanding. Clearly, when you've seen other countries, you know, NATO is using it. I do have an
example where I think another country tries it out. It is dependent on the data feeds. If you don't
have the good data coming in, there's nothing to analyze. And even in some of the early examples
of Project Maven where it's not going very well, there are examples of data that's been sabotaged.
There's a kind of sabotage, not by foreign adversaries, but by disgruntled training data
labelers. Yeah. Exactly. Who got fed up and decided to swear and curse all over the training data
and just didn't play ball. So, that, of course, has an impact or would have an impact if they'd
continued to use that on the quality of the algorithm. So, getting the data in such a way that it is
well labeled and entering the system and then being able to cross check against others,
that clearly is the complicated thing that they've got to grapple with. And Maven is kind of
increasingly at that stage. But even in EUCOM, I discovered that Maven wasn't loading because the
networks it was based on were delayed. So, the network was so complex. It was an amazing thing to
hear. So, in the first few weeks and months, they started complaining that they needed more cloud
in EUCOM. And so, the Project Maven team tried to respond to this and unravel it. AWS got involved.
I discovered, and it turned out, that packets of data were criss-crossing the Atlantic
twice or even four times. And so, they could lose packets of data that way and certainly slow things
down. And then you have, in order to have the classified systems running, you need to use
encryptors. For some, they argued that that created a bottleneck. So, just to get the information
through, you needed a bigger encryptor. And to get a bigger encryptor, you need to call someone
very, very senior within the intelligence community to get permission to move an encryptor,
because they're certified by the NSA. So, you have all these things that had to come together
and weren't coming together. And so, people... A way to think about that is, on the entire intelligence
workflow, on the entire digital infrastructure workflow, Maven is trying to dramatically increase
the analytical capacity by using a great deal more data, by using a great deal more computation.
And so, as they eliminate bottlenecks in one area, they start encountering all other bottlenecks
in all the other areas. And so, they basically start demanding that the entire rest of the
military digital ecosystem reform itself to be compatible with their needs.
To your point about credit and magic, the thing that Cukor had always wanted was to deliver
intelligence to the people who actually fight wars, to bring it down to that battlefield level.
And really, although his whole life is now about AI, it wasn't about AI itself. And some people
have argued to me, through that Ukraine phase, that it showed that computer vision doesn't work.
So, you really can continue to encounter vastly different perspectives on what is happening.
And I think that's partly because the US still hasn't gotten to JADC2, this effort to
integrate sensors and shooters. Maven is almost a stepping stone in this effort. And it was quite
late. I'd already found it out by the time I wrote the book, but it was only in, I think, 2023-24
that I realized that Maven was very key to this idea of creating JADC2. It was that
they had decided they should pick a platform and run with it. And you get into all the kind of
bureaucracy of, well, do we have vendor lock-in, then can we change? How are we going to deal with that?
Should we have a consortium? That is all the kind of Congress bureaucracy, money controversy,
and just that effort to deliver those contracts looks very different from the effort to deliver
usefulness to people on the ground. And to deliver one product brings with it its own
group of people who will disagree. Now I want to bring us to the present moment,
which is the war in Iran. And this includes not just the work you've done in this book,
but also your reporting at Bloomberg as a correspondent covering these types of issues.
So what is your sense of how Project Maven is being used in Iran and what type of impact it's having?
I've reported that, number one, Maven Smart System is being used, and
that Claude, Anthropic's AI tool, the LLM, is part of that. And the piece that you asked me to read
out at the beginning reflects that LLMs can help with speeding up processes. So it doesn't get to
the legal decision or the commander's decision to fire, but it helps with those processes, I've
been told, and it also can help crunch some of those data feeds and overlay them.
Do you have a sense of the mechanism of the LLMs' impact? I mean, you wrote that
they can increase the rate of targets per day from 1,000 to 5,000. And the LLMs were a part of
that acceleration. But what actually is it that the LLMs are doing and what part of the life of
these war fighters does it make easier that accelerates that? I was told that it was speeding up
the processes. So the process by which you would need to get permissions, not the actual permission
itself, but almost that ferrying of paper backwards and forwards. It's the admin side of the
targeting cycle that it was helping with, not actually in that sense, the identifying side.
I think things have moved on because you also now have reasoning
over the different data feeds. You'll have to just give me a bit more time to get
to that. Well, I'll look forward to your next column. And I'll look forward to your next podcast
on that. But I think what I've definitely determined is, Claude is being used. It's still being
used. Maven Smart System is being used. And then what CENTCOM has publicly said, the spokesperson
said to me, is they're using a variety of AI tools and that these are helping to generate points of
interest and help make smaller decisions faster. And then the commander has publicly said that
it is helping, AI is helping, bring down processes from days and hours, sometimes to
as little as seconds. And he's taking time to say that in the middle of this war. So their focus,
their belief in leaning forward into AI is fascinating. Don't forget that senior
CENTCOM people in 2023 were awarding Maven a grade C in public. So it was a tool they wanted
but were still frustrated by. And I don't know what public grade they would
award Maven today. It's a question I'd love to ask. But just because this tool exists, I'm sure
we would find people who are still frustrated with it who want much more. But it is a tool
clearly that I think we can confidently say CENTCOM says it is helping them speed up.
Yeah. Well, I certainly see a pretty dramatic impact and understand this as a pretty remarkable
effort. Now, I'm obviously biased, but I want you to not worry about hurting my feelings.
So Maven is the original AI Pathfinder in the Department of Defense. There are these other
initiatives going on. We've talked about DIU, the Defense Innovation Unit. We've talked about the
Joint AI Center. Obviously, the NRO has AI initiatives underway, the various services
have AI initiatives underway. Sometimes they overlap with Project Maven, sometimes they don't.
It seems, from your choosing to write this book, that you thought that Maven was a special story. One reason
why you might think that is that it was more successful and more impactful than those other
organizations. Maybe you just thought the personalities were more interesting than those other
organizations. But I'm curious to get your sense of was Project Maven unique and special compared to
those other organizations. Was it more impactful than those other organizations? And if so, why do
you think that was? And again, don't worry about hurting my feelings. I'm made of steel.
Nobody's made of steel. I think really what drove me to pick it was that it was so public
in one way, but so not public in another. And so in that sense... So the most fun to be an investigator
of? Well, I had so many questions I wanted the answers to. I wanted to know how algorithms
fare. And actually, a lot of the examples in the book are examples where algorithms are not
successful, but it's part of a learning curve. I was trying to get inside it. And in Silicon
Valley, they often advise startups to fake it till you make it. And I think that's a little
bit of the story of Project Maven, which is they talked a big game, and eventually they became
the big game, though there was a long time where they were talking more than they could accomplish.
I think in terms of kind of success from the Pentagon's perspective, or from Project Maven's
perspective, obviously, you have this platform that is named after Maven that is used by every
command, that has more than 25,000... Yeah, which has more than 25,000 accounts.
Palantir has this big role in it. The licenses keep flowing. So in that sense, Project Maven has
created a product that is a success for Project Maven. Now, the key debate inside Project Maven
was should they be creating that platform or should they be developing AI? And someone like Colin
Carroll, who was on the team, felt very differently. He thought that the digital interface
was a distraction from the pursuit of AI, and that actually AI should be on the machines themselves.
And so in some ways, that was a failure. And when he writes to Cukor, I find his letter,
he's very disappointed in what Project Maven has achieved. And yet he loves Project Maven,
in the sense that he loved the hard work, he loved the dedication, all of those things. He loved
how unleashed they were as a team to go up against other people. But you know, you are definitely
offering a particular perspective to say Maven is this big success story. It has created one tool
that still has competitors, you know, the NRO still has its own intelligence platform.
I've also heard that Maven is not succeeding at the operator level, which is exactly the level
it was hoped for because of bandwidth problems. And I spoke to Alex Miller, the CTO of the army
about this. He believes in Maven. He wants more AI. He's someone who's leaning forward. He was
described to me as a frenemy of Project Maven in the early days. But now in his current role,
obviously the army is leaning very heavily into Maven. But it is still trying to untangle
problems. And this effort to link up sensor and shooter brings with it all sorts of questions
of accountability, data flow, all the rest of it. So to me, it isn't as clear a picture as the one
you're presenting. And then of course, we get onto some of the very explicit problems of Maven
that I recount in terms of AI getting onto drone platforms. This is still, I mean, the very
thing that campaigners are worried about is the thing that I uncover in the book that Maven tried to do:
get AI onto drone platforms to be used for automatic target recognition. And that process
stumbled. It was fantastically successful at collecting data of Chinese vessels at sea.
And so it created algorithms that could try to identify those, if you could imagine AI sitting
onboard a drone boat or an aerial drone. But what happened is the integration was difficult.
The algorithm makers often don't get the kind of feedback that they would really welcome,
because they're at arm's length from the operators. Numerous times I spoke to algorithm vendors
who said, we want the same level of access as Palantir. Now, I'm told that's a common complaint.
Everyone wants the same level of access as Palantir. But the argument from the AI perspective
was, unless we're sitting with the users, we don't know how to tweak our algorithm to be
quite what they want. And it gets lost in translation. So I think there are still, for those who support
this technology, a huge number of lessons to be learned about how to get that technology
out to people and develop it in a way that it can be relied on. And then there were just
technical problems like a splash from the ocean could interrupt the tracking ability of the algorithm.
Now that's not AI's fault, but there are all sorts of logistical hurdles that stand in the way
of smooth-running AI that can select, you know, a target. So one person whose story you tell
in the book is Admiral Whitworth, who starts out as, I think it's fair to say,
one of the biggest skeptics of Project Maven and ends up in charge of it and one of its greatest
true believers. So what do you make of that evolution and how it happened? It's such a fascinating
history. I mean, he's someone who was so involved in the targeting cycle, and on the intelligence
side. So exactly the kind of person that Drew Cukor saw himself as
going up against, the J2s really, to deliver intelligence to operations rather than to intelligence.
He argued that if Maven had stuck with intelligence, it would just be another failed
intelligence project. Admiral Whitworth was very concerned about accountability, about who would
defend this in front of Congress if AI contributed to a targeting error. And also whether it was going
around taking shortcuts around the targeting cycle. He had already been briefed on Maven and had given
it quite a tough examination before it was clear that he would run NGA and that he would get
Project Maven. So when he did get it, many of the Maven folks were concerned that he might even
kill it off. What he told me about his conversion is that he found that Maven was able to update
and respond to the realities of war quicker than anything he'd ever seen. And it was that
that pliability of the software that ability to respond to what US operators needed that really
converted him. And he also worked hard, I think, at NGA to develop a way of assessing models.
So by then, especially as it's spreading, the reliability of a model in this technology that
makes mistakes and hallucinates and all the rest of it, he wanted to sort of characterize the ways
in which these models may be successful or may fail and make sure users understood that. So he
did some of that, I suppose others might call it the growing up of Maven. I must say there are
also complaints that then Maven slowed down, but the proliferation of it certainly expanded wildly
under his watch. And he regularly spoke to commanders to say, here's what I've got new for
you this month. What would you like? And finally, Maven had had terrible difficulty cracking
INDOPACOM, which is a great irony given that the whole point of Maven was to
provide AI that could be useful in a fight against China, should there ever be one.
Admiral Whitworth, I think under his watch, had the most success in breaking into INDOPACOM.
And now INDOPACOM is a great cheerleader of Maven, and Admiral Paparo, who leads it, hosts AI
summits and wants more from industry. And is a big cheerleader of the use of LLM
technology. Yes, yes, as well. Yeah, I'm told he's very fond of Claude. I'm told,
yeah. Well, Katrina, I could easily keep you on for another hour, but maybe that would be unfair
to you. And hopefully not to our audience, hopefully they would eat it up. I think they would.
But what they should really do is go buy your book, because this is by far the most
exhaustive form of the story. I think one thing that's just worth calling out is that at multiple times
Cukor and other people on the Project Maven team, who were famously reluctant to talk to the press,
do say, oh, this will all end up in a book someday. And here is the book. Here is the book. So
congratulations to you on writing it. You have, I think you say, more than 200 interviews, or
people you interviewed who were a part of this story at various stages. There's a ton of details.
And as I said, the minimum bedrock of knowledge for anybody to intelligently participate in the
military AI conversation going forward. So Katrina, thank you so much for coming on the AI
policy podcast. Thanks for having me. Thanks for listening to this episode of the AI
policy podcast. If you like what you heard, there's an easy way for you to help us. Please give us a
five star review on your favorite podcast platform and subscribe and tell your friends.
It really helps when you spread the word. This podcast was produced by Sarah Baker,
Sadie McCullough, and Matt Mann. See you next time.