
This message comes from Total Wine and More.
College Basketball's biggest tournament is here,
and watch parties are underway.
Shop Total Wine and More in-store or online to get ready today.
Spirits not sold in Virginia and North Carolina.
Drink responsibly, be 21.
This message comes from Carvana,
who makes car selling easy.
Enter your license plate or VIN, get a real offer in minutes,
and have your car picked up from your door.
Sell your car the easy way with Carvana.
Pickup fees may apply.
This is Fresh Air, I'm Tanya Mosley.
America's first AI field war is unfolding right now.
Over the last three weeks,
the US and Israel have launched strikes against Iran,
hitting 1,000 targets in the first 24 hours alone,
nearly double the scale of the 2003 shock
and awe campaign in Iraq.
The system helping to enable much of this
is called the Maven Smart System,
and running inside of it is Claude
from the company Anthropic,
an AI model that millions of people interact with every single day.
On the very first day of the war,
a US Tomahawk missile struck a girls' elementary school
in southern Iran,
killing more than 165 people,
most of them schoolgirls.
A preliminary military investigation
found the strike likely resulted from outdated intelligence.
And while the role of AI has not been confirmed,
the Pentagon is still investigating
whether Maven played any part.
At the center of this story,
is a little-known Marine colonel named Drew Kukor,
who spent decades fighting to bring AI to the battlefield
and whose obsession has quietly changed the future of war.
My guest today has been reporting on Kukor for years
and how we got here.
Katrina Manson is an award-winning Bloomberg reporter
who covers cyber, emerging tech, and national security.
Her new book is Project Maven,
A Marine Colonel, His Team, and the Dawn of AI Warfare.
Katrina Manson, welcome to Fresh Air.
Thanks for having me.
You have been reporting on this Maven Smart System for
a couple of years now,
and now you're watching it used in a real-time war.
Take us a little bit into how the Maven Smart System actually
works and specifically what Claude's role is inside of it.
How do those two things work together?
If you imagine looking at something like Google Earth,
you begin to have an idea of the display
that US military operators will be looking at.
Some people have described this to me as Windows for War
or an operating system for war.
It's essentially a digital map.
What makes that map special from the US military point of view
is the number of intelligence feeds that are coming into it.
At one public event, it was made clear that is more than 160
separate intelligence feeds.
Now, to crunch that data,
they're using digital data analytics,
but they are also using a few other tools that rely on AI.
There's computer vision to analyze some of the objects
that are showing up on the maps that could be potential targets,
also where US forces are.
And then Claude is doing something different,
that is not computer vision,
that is an AI tool based on a large language model
that can crunch data.
And what I've been told before is that Claude and LLMs
inside Maven Smart System help speed processes.
So the sorts of processes you need to get sign off on a target,
everything short of sign off Claude can help with.
And it can also help plan courses of action,
help pair weapons to targets.
It can assist everything that the US military needs to do
when it comes to making a decision
short of actually making the decision.
On the very first day of the war,
this missile struck the girls' school,
and there is some reporting about this case
that the United States was likely responsible.
There is no indication yet that AI had a role to play in here,
but the coordinates they used were more than a decade out of date.
What does that specific incident tell us about some of the lapses
in data keeping and potentially what could be
a challenge for AI models
as they are used more often in war?
Adherents of AI warfare regularly emphasized to me
how important accountability is.
In every war, there are bad strikes.
Whether the US is prepared to investigate it
and make public what has gone wrong in this case
if the US is responsible will be a real test
for those claims of accountability.
AI is meant to make warfare more auditable.
Now, whether this is a case that the school was on a targeting list
that predates AI and wasn't updated
and whether AI drew from that targeting list,
all of that will be important to reveal.
Any system, particularly one that uses AI,
will only ever be as good as the data that feeds it.
And if they are drawing on a database that is old,
the AI, if it's set up that way, can't do anything about that.
And in numerous occasions, I've found examples
of poor, weak, or flagrantly erroneous data
that have fed systems.
If this is a US attack, it won't be the first one
against a mistaken civilian target.
In 1999, the US struck the Chinese Embassy in Belgrade.
In that case, the CIA came out in public
and said, we had the map labeled wrong.
We don't yet know if a wrongly labeled map
is the final analysis of what happened here.
But if that girls' school was wrong in a database,
no AI can overcome that unless you start using AI
in other places.
If Google Maps, for example, showed that it was a girls' school,
it would be quite simple to draw from that information, potentially.
If there were a way to analyze other location data
that might indicate there were children in the area,
and an additional factor will be where are the checks
and balances on an old database?
And what role could AI play in checking work
and in cross-referencing other data if, indeed,
the girls' school is labeled on something
as accessible as Google Maps?
I want to talk about some news this week that is coming
to bear because of a court case.
The Pentagon blacklisted Anthropic for refusing
to allow Claude to be used in autonomous weapons.
And within hours, OpenAI stepped in;
they publicly then announced the same exact restrictions
Anthropic was punished for holding.
Is that an accurate way to describe this?
It's one way it's been described, but not in my reporting.
The OpenAI deal I've reported is slightly different.
It's not clear if it maintains exactly the same safeguards
as Anthropic's.
And Anthropic also, it's really important to frame,
really leaned in to working on classified cloud
for the Pentagon.
They were the first AI company to decide to offer AI
on a classified platform.
And from my reporting, it is not possible for them
to know every use case, every specific example
of the way their AI tool is used in classified operations.
And the classified level is where America fights its wars.
So that decision to lean in to what the American military calls
war fighting was already a very significant decision.
OpenAI had not taken that decision.
It was not on classified cloud.
It now will be.
It does seem to have allowed more open acceptance
of how its tool could be used.
But I think we'll have to see because it's
a very politicized divide.
When you have the president calling Anthropic left-wing
nut jobs, calling them a radical left company,
even though they were working on classified cloud,
clearly there's a technical debate, there's a policy debate,
but there is also a political flavor to this falling out.
Can you explain, maybe in layman's terms,
how a classified cloud actually works?
Almost.
If you imagine the cloud that we all use for, let's say,
our email or for documents that are loaded up into the cloud,
the same can be done for military data
and it can be accessed and shared.
Now, for the US military or for the intelligence services,
they don't want that information to get hacked.
And so there are a number of safeguards
that are introduced that can uphold a higher classification.
So information that the US system deems secret
or top secret, or compartmentalized information
that only a few people can access,
even at that top secret level.
And each has its own network that can, in theory,
secure that information so that it can't be hacked,
penetrated, ruined in some other way.
Of course, multiple times in history that's gone wrong.
All the time, those systems are under strain from hackers,
potentially also from insider threats.
So the US is constantly trying to safeguard its information.
I was reading about some researchers at King's College London
who recently put Claude and ChatGPT and Gemini
into simulated nuclear crisis scenarios.
And 95% of the time, the AI reached for tactical nuclear weapons
as a strategic option.
You have spent years inside of this world
of these people who are building these systems for war.
And I just am curious, what do you think
when you hear a finding like that?
Well, I also reach for the word terrifying.
Clearly, that kind of tool is one that you really need
to put safeguards around.
So the US has said it doesn't want to put AI
into the nuclear controls.
So that's one step.
But there will be pressure on that system
and decision making is already speeding up.
But I've certainly spoken to US military advisors
who've brought me similar information.
They emphasize that AI can be escalatory, as you just described.
And also almost a more problematic issue, sycophantic.
There is a tendency to agree with the person
asking the question, so shall I go to war?
Would it be a good idea to launch this missile?
If the question is asked in that way,
assuming an intent or an action,
there is a tendency within AI also to buttress that opinion.
So as a check on opinion forming,
you need to consider AI in a really careful way.
Now, the US military knows this.
This was a very advanced computer scientist telling me this.
And he had been an advisor to US Central Command,
the very command that is now using these chatbots.
What he and others have told me
at the National Geospatial Intelligence Agency
is that they're aware of these risks
and they are trying to add in checks and safeguards
what they call underneath the hood.
So if a commander said,
shall I strike this now, is it a good idea?
Even if they were to prompt the chatbot in that way,
the claim was made to me
that the chatbot runs through a very fast series of checks.
It red teams the question, which is to say,
it pretends it is an attacker.
It checks for escalation bias.
It checks for a number of different things
and by the time it spits out the answer,
all of those potential problems have been factored in.
Now, I haven't seen that happen in real life
and I've certainly come across a lot of people
who are very frustrated by the answers that chatbots give
even within the military,
sometimes fabricating attacks that haven't even happened.
And if you can imagine the US needs to respond to attacks,
if they're responding to an attack that was fabricated,
there is constantly this risk for escalation.
And in that sense, it's always about that critical thinking,
that framing, what question are they asking of AI?
Can I win quickly if I start a war with Iran
or what are the risks that this could proliferate,
that US service members will be harmed,
that civilians will get hit?
What are the chances of achieving regime change
if I seek a quick war?
How many quick wars become medium term wars and long wars?
Is there still that human hubris? Wherever AI is put,
it will only ever be as good as the data and the question.
And there, there may still be a gap.
And all of this testing is happening
during an active war right now.
A lot of this testing that I've reported on happened before then,
but even at the time in February 2024,
I was able to report that the US did use this system
to narrow down some of the 85 targets
that the US military struck in Iraq and Syria.
This was in reprisal for the death of three US military personnel.
And that is the first large-scale case I know of,
up until today's operations,
of US Central Command using this system
to try and bring speed and scale to war.
It had been used before to assist others.
It had been used on a piecemeal scale
for US Special Operations Command,
but they tend to be much smaller.
Getting into the big army, the big military formations,
this really is war at a very joined up
and connected scale involving every service.
And as we speak today,
CENTCOM has hit more than 9,000 targets.
And that certainly has relied on the system, Maven Smart System.
Katrina, there's a man at the center of your book
in this story that most people have never heard of,
a Marine colonel named Drew Kukor.
Tell us who he is and why this moment basically exists
because of him.
Drew Kukor is this very absorbing retired marine
who I met, who was chief of this project called Project Maven.
He wasn't the director, he was the doer,
the leader of this effort to bring AI to the way
that America makes war.
And it started publicly at least as a very narrow effort.
The idea was to bring AI to rifling through drone footage,
copious video that the US was taking
in various countries around the world
as part of what many military operators
called their GWOT, the global war on terror.
Now, Drew Kukor had a long and frustrating career
inside the Marine Corps as an intelligence officer.
And he was repeatedly fed up with the tools
that he had to go into battle with and to support
other military operators.
He was in Afghanistan in October 2001, after 9-11,
lugging around a large computer.
He felt that he couldn't support the US military operators
that intelligence was meant to keep safe.
And they simply weren't able to get frontline troops,
the kind of information they needed,
as these very rudimentary, unsophisticated,
improvised explosive devices started to maim
and kill American service members.
And so there was a constant frustration
that the US could bring to bear enormous firepower,
precision firepower, but couldn't put it in the right place.
And you see, as in all wars,
what's known as friendly fire, allied fire,
so the US mistakenly harming their own
harming partners and allies,
and also harming and killing civilians by mistake.
And these problems, he began to feel,
could be solved with better intelligence.
And he wondered if there was a way to reduce that loss.
When he was in Afghanistan, there were Marines dying
the whole time; when he was in Iraq,
there were hundreds of Marines dying.
And he simply felt not that AI so much was the solution,
but better information.
And in the modern world, better information
has come to mean AI.
And in 2011, he worked on an effort to bring
technology from a company named Palantir Technologies
to Afghanistan to start to track
where these improvised explosive devices had been before.
So we're 10 years into this 20-year project
that Kukor envisions.
He has always said that he feels the Department of War,
which during the time of your reporting
was the Defense Department,
needed to function more like a software company
than a weapons factory.
But looking at Iran right now,
at the scale and the speed, is this the war he envisioned?
There's no doubt that this is an AI-infused war.
And the other element, alongside safety, accuracy, scope, and scale,
is that people are claiming AI makes war more efficient.
Often, what happens when things are more efficient
is you can simply do more of it.
And to hit a thousand targets in the first day,
now 9,000 targets, and not yet have finished the war,
the Iranians are still continuing.
The Strait of Hormuz is closed.
There is a question about overconfidence,
about how much you can rely on these systems,
and if expanding the pace of war gets you there.
And this is a long-term debate.
If you go back to 1899,
there was a Polish banker, Ivan Bloch,
who brought out a work called Is War Now Impossible?,
because he looked at these claims for mass-produced rifles,
that now the ways of killing were so industrialized,
at such scale, that no one would dare declare war
against someone else.
And instead, he argued long before World War I started,
that actually the mass production of weaponry
would lead to stalemate, human harm,
long wars.
And it raises this idea of, is there ever a way
to deliver palatable killing?
Our guest today is Bloomberg journalist Katrina Manson.
We'll be right back after a short break.
I'm Tanya Mosley, and this is Fresh Air.
This message comes from Total Wine and More.
College Basketball's biggest tournament is here,
and watch parties are underway.
Total Wine and More has options that appeal to everyone,
plus a few choices for cocktails,
with low prices and a wide selection,
everything can be found in one place,
whether hosting or bringing something to share,
shop Total Wine and More in-store or online
to get ready for the big tournament today.
Spirits are not sold in Virginia and North Carolina,
drink responsibly, be 21.
This message comes from Comcast.
Nothing brings people together quite like Team USA
at the Olympic Winter Games,
from NBC Universal's iconic storytelling,
to the innovative technology across Xfinity and Peacock,
Comcast brings the Olympic Games home to America,
sharing every moment with millions.
When Team USA steps on to the world stage,
people are not just watching, they're cheering together.
This winter, everyone's on the same team.
Comcast, proud partner of Team USA.
This message comes from Jerry.
Are you tired of your car insurance rate going up,
even with a clean driving record?
That's why there's Jerry,
your proactive insurance assistant.
Jerry compares rates side by side from over 50 top insurers
and helps you switch with ease.
Jerry even tracks market rates
and alerts you when it's best to shop.
No spam calls, no hidden fees.
Drivers who save with Jerry could save over $1,300 a year.
Switch with confidence.
Download the Jerry app or visit jerry.ai slash NPR today.
This message comes from BetterHelp.
As a dad, BetterHelp president Fernando Madera
relates to needing flexibility
when it comes to scheduling therapy.
I have kids under 18, so time is very limited.
That's why at BetterHelp, our therapists try to have sessions
sometimes at night, depending on the therapist,
or during the weekend.
So I think that's what we need to tell the parents,
you are not alone, we can help you out.
If a flexible schedule would help you,
visit betterhelp.com slash NPR
for 10% off your first month of online therapy.
This is Fresh Air, I'm Tanya Mosley,
and my guest today is Bloomberg journalist Katrina Manson.
She's written a new book titled
Project Maven: A Marine Colonel, His Team,
and the Dawn of AI Warfare.
The book traces how Marine colonel Drew Kukor
became instrumental in the decade-long creation
of America's AI warfare capabilities,
which are now being used in the active war in Iran.
I wanna talk to you a little bit about Kukor's relationship
with Palantir.
It seems to be one of the most complicated threads
in your book, and Palantir for those who aren't familiar
is a data analytics company.
It helps organizations make sense of massive amounts
of information, and Kukor became one of the most powerful
internal advocates for Palantir.
How did that relationship begin?
And why was it so controversial?
Kukor learned about Palantir in the late 2000s
when it was really quite a young company,
and he was looking for this data analytics solution
that could bring data together
and deliver him a picture of war.
As he said to me, it's just a very hard question
to know where the enemy is and where your own people are.
And this for him became a tool that he really believed in.
And others in the defense tech world
who in their military service relied on Palantir
have spoken favorably of it as a tool to me.
He continues this relationship, and he flies over to see them
and he explains his entire vision
for what becomes Maven Smart System,
a digital map and operating system with white dots,
with coordinates that ultimately can pair a target
to a weapon and shoot it.
And at the time Palantir doesn't really want to do this
because he's asking them to do two things
they don't see themselves as.
One, AI, and two, to create a user interface,
and they didn't see themselves as creating pretty user interfaces.
They saw themselves as the data analytics,
the crunching of that aspect.
But they went along with it and a very senior person
at Palantir, Akash Jain, told me that it really is Kukor himself
who convinced him to, as he said, revisit his priors.
He had a bias against AI, so did all of Palantir.
And they begin to listen to Kukor to understand
how AI might support their data analytics.
In addition to that, Palantir was already controversial
within the Pentagon.
They had actually sued the army in 2016
to gain access to a contract.
This is a time where you really have young, hungry companies
beginning to say give us a contract.
There's this sense that contract awards in the Pentagon
are very old-fashioned and function too slowly.
So Palantir has succeeded in getting a foothold in the Pentagon
but was seen as very arrogant by many
because it had sued and it continued to claim
its tech was the best.
Whether that was true or not,
the manner in which they said this
irked several people.
And Kukor himself guided them not only on AI
and what he wanted, but also on the manner
in which they should conduct themselves.
He said, we think you're great, but you need to turn it down.
How would you characterize Palantir in this story?
Is it an honest actor in it?
I think it's really fair to see it as a very divisive company.
You have got people who cheerlead for it with great passion
who feel that Palantir's tech saved their lives.
You also have people who think they are arrogant,
risk being monopolistic, charging too much
and simply make tech that is good,
but not as good as everyone makes out.
Even as late as 2023, a senior commander
who was using Maven Smart System
awarded it a grade of C+.
So right the way through, you have problems with Palantir,
and multiple members of the military lined up to tell me,
okay, we're using Palantir,
but if something else better comes along, we'll switch.
I want to talk a little bit about some other active wars,
particularly the war in Ukraine.
It seems the way that you've been writing about this,
that that's where AI warfare kind of became real at scale.
When Russia invaded back in 2022,
the US deployed Maven in support of Ukrainian forces,
but it almost immediately fell apart.
What happened and how did they fix it?
The computer vision had been trained on the Middle East,
think hot, think of sand,
and suddenly it was being asked to identify Russian tanks
in the snow in Ukraine.
So it wasn't delivering the detections
that the US wanted to rely on.
Secondly, the system wasn't loading.
I found out there were often eight second delays,
which in a war is a lifetime.
And that was because after a lot of investigation,
it turned out that the networking just wasn't up to it.
It was in fact criss-crossing the Atlantic,
sometimes as much as four times.
So that created delays
and sometimes even packets of data
could fall off the network
and you might miss crucial information.
So they really needed to work on the networking,
the sort of arteries of information.
And they also needed to very quickly gather up imagery
of Russian equipment and retrain the algorithms.
And that was going on at a very fast pace.
People complained about getting phone calls
two in the morning, others welcomed them
in order to be part of this effort to support Ukraine.
You know, in reading about that from you,
one of the malleable legal lines in warfare
is kind of this difference between supporting an ally
and fighting their war for them.
And you report that the US, as you said,
was passing targeting coordinates directly to Ukraine,
sometimes through Signal,
sometimes literally printed paper and walking it across.
By that measure, how close was the United States
to actually being in that war?
I suppose that becomes a diplomatic question.
And certainly the US wanted to frame itself
as being a supporter, but not a direct participant.
And that knife edge is really in the eye of the beholder.
Does Russia choose to see it that way?
Or does Russia say you've gone too far?
And so the US was very, very, very sensitive to that.
And the actual Project Maven operators
and those in the army who were using this system
were even more sensitive because some people
among their groups said we are going too far
and others said we have to help Ukraine
with everything we have.
And at the time, that debate was not public.
There's also some elegant language,
which is this term point of interest.
So rather than saying we're sharing targets,
we're passing targets to Ukraine,
they settled on this language of we're passing
points of interest to Ukraine.
Everything short of the decision to target,
which was Ukraine's own decision.
But as even some of the people I spoke to for the book framed it,
it was almost a sort of Pinocchio-like relationship,
The Americans potentially pulling the strings
on Ukrainian decisions.
And it got tighter and tighter and tighter.
One reason the Pinocchio metaphor isn't fully fair
is because also both sides have emphasized
to me in interviews that they really developed trust.
And so the Americans ultimately were finding
pieces of military equipment
that on Ukrainian information just looked like a truck.
But on US information, they were able to say, trust us.
Hit it.
And it was, in fact, a transporter erector launcher,
essentially a mobile missile launcher.
And that relationship got faster and faster and faster
until at one point, the US identified a target
in one example I'm told about.
And 18 minutes later, the Ukrainians were able to hit it.
Let's take a short break.
If you're just joining us, I'm talking with Bloomberg
journalist Katrina Manson,
whose new book, Project Maven,
traces how the United States built its AI warfare capabilities,
and how those capabilities are being used right now
in the active war in Iran.
We'll be back after a break.
This is Fresh Air.
This message comes from an NPR sponsor, Total Wine and More.
With so many great bottles to choose from,
it's easy to find your favorite Cabernet
or a new single barrel bourbon to try
with some help from one of their friendly guides.
And with every bottle comes the confidence
of knowing you just found something amazing.
Find what you love and love what you find,
only at Total Wine and More.
Curbside pickup and delivery available in most areas
visit TotalWine.com to learn more.
Spirits not sold in Virginia and North Carolina,
drink responsibly, be 21.
This message comes from Bombas.
When you're playing sports, you're focused.
Your socks should be too.
Bombas engineer socks to fight sweat
and cushion impact for every sport.
Visit bombas.com slash NPR and use code NPR
for 20% off your first purchase.
This is Fresh Air, I'm Tanya Mosley,
and my guest is Katrina Manson,
a Bloomberg journalist and author of Project Maven.
We've been talking about how the United States
built its AI warfare system.
Let's talk about Gaza for a moment.
Israel reportedly used AI targeting systems,
Gospel and Lavender, in its campaign there.
What does Gaza tell us about where the guard rails
on AI warfare actually are?
Some defend the AI saying the way it is used
is down solely to policy.
And others have suggested that the way the IDF
was prepared to potentially accept collateral damage,
meaning civilian harm,
and that speed would not be palatable to the US.
It just isn't the way that the US currently operates.
And I should say the IDF defends its action
saying they have not broken the law of war.
They have been proportionate and discriminate.
That's their position.
There are also these very stark numbers of 70,000 dead.
For me, a key question was to understand
whether this defense of AI held up:
was it fair to try and separate AI from policy?
So for those who've expressed concern
at the way the IDF pursued targets and civilian harm,
they've blamed policy rather than AI.
So several of the experts I've spoken to
make the distinction that they're totally separate,
the tech and the policy.
Many others argue that the more you have
an AI-infused killing machine, the more you can use it.
Which brings up something else for me.
You report that the US has already built weapons
that can fly and select their own targets and kill
without a human making the final call.
So autonomous weapons.
And you name these two classified programs in the book,
Goalkeeper and Whiplash.
Can you tell me briefly what they are
and what does it mean that they already exist?
These are efforts to bring drones in the air
and on the sea to life.
And this is for a very different conflict scenario.
This is the US thinking about the defense of Taiwan.
So if China were ever to attempt an invasion of Taiwan
and if, another big if, the US were to decide
to help defend Taiwan, there could be a very different
scenario from the one that Ukraine is facing with Russia,
because of jamming.
So the fear is that China would disrupt US satellite
communications such that it couldn't control its own drones
and the drones that would protect and defend Taiwan
against a maritime onslaught would need to be able
to function autonomously without any internet connection.
And so the US has been developing these drones
in a pursuit of autonomy for several years.
Whiplash is an effort to put weapons on a jet ski
that can move autonomously.
And Goalkeeper is an effort to weaponize drones
and have them fly about and be able to select
and hit a target under their own steam.
It's exactly what campaigners from Human Rights Watch
argued against at the dawn of Project Maven,
and what the UN Secretary-General has called the pursuit
of something morally repugnant and politically unacceptable.
That is the pursuit of lethal autonomous weapon systems.
Well, I mean, what is standing in the way
then of any meaningful international regulation?
Because what does it actually mean
that we're already at war while these particular conversations
are still happening?
That's such a fascinating tension.
There have been discussions at the UN, a UN body
for more than 10 years now.
And they are still trying to define
what is an autonomous weapon system.
And the US position has been let's make it first
and then let's work out what we need to regulate.
That of course speaks to a fear that China might get there first.
The US has wanted to dominate this technology
and to be the ones who could deliver it
in a way that they felt they could use it and win.
But there is a push now to turn some of that work into a treaty.
And a treaty would, by all accounts, not include
the likes of the US or China or Israel or Russia.
Katrina, tell me if I'm right in this.
I mean, everything that we've been discussing, Maven,
the autonomous weapons, this arms race between tech companies
to supply the Pentagon.
I mean, all of this exists in large part
because the US is preparing for a potential conflict
with China over Taiwan.
So what does this moment tell us about
whether we're actually ready for that?
The US has assessed that China wants to be capable
of taking Taiwan by 2027.
So, next year. This date has become this sort of drumbeat
for the US to make sure that if it wanted to,
it could fend off a Chinese invasion of Taiwan
as soon as next year, but any time after that.
And there's been an increasing focus since 2018
on the prospect of China being a potential adversary,
not just a competitor on the global stage,
but also a military adversary.
And you see now senior US military commanders
saying quite clearly, China is rehearsing
for an invasion of Taiwan.
And how the US could prevent that
or help partners and allies prevent that
is a subject of some anguish within those
quite tight military circles that look at this.
There's a group that has really pushed for autonomy
to say there's no way we can defend Taiwan
without it, we need to do much more.
And I was told that often Pentagon officials reassure allies
and say, look, there is nothing inevitable
or imminent about a Chinese invasion of Taiwan.
And if there is, we'll make sure we're ready.
But then they drop their voice in the corridors
of the Pentagon and whisper, we're not ready.
And so there is this constant concern
that the US needs to go faster in developing autonomy
that could withstand the sort of onslaught
that might be involved in an attempt to take Taiwan.
One of the other things we're all kind of asking
is whether we are the best custodians of this technology.
And after everything that you've reported,
what is your feeling?
What do you come down to?
I know you're a journalist, but you're also greatly informed
and you have all of these facts in front of you.
When you meet people whose business is the business of war,
your perspective changes
because there is so much risk
and there is such a long tail of experience
of these forever wars.
Many of the people involved in Project Maven
were involved in the forever wars in Afghanistan and Iraq,
and saw their friends die.
And they put this trust and belief in AI,
that it could save their friends,
that it could save them, that it could save America,
and that it could prevent, if AI were big enough and bad enough,
China from ever daring to go to war with America.
So there's this deep belief in AI
as some kind of panacea.
I think for me, it raises the question of,
what is this idea of a costless war?
If you can make killing more remote,
is that more palatable?
We know that drone operators
and drone screeners, drone analysts
also experience post-traumatic stress.
And AI won't have those same reactions to watching the gore.
So there is that argument that you can protect operators.
I question whether you also can protect civilians
by pursuing that notion of remote war.
And the bigger question I have
is, does remote war make war more possible, more likely?
Does it mean that someone will press play
on that war option, not understanding
the long, deep impacts?
So for me, there is a lot more to be done
by the people who advocate for AI
to show that it can be used in the way they claim,
to deliver a better outcome.
Katrina Manson, thank you so much for your reporting
and thank you for this book.
Thank you.
Katrina Manson is a reporter for Bloomberg.
Her new book is Project Maven:
A Marine Colonel, His Team, and the Dawn of AI Warfare.
This is Fresh Air.
This message comes from an NPR sponsor,
Total Wine & More, with so many great bottles to choose from.
It's easy to find your favorite Cabernet
or a new single barrel bourbon to try,
with some help from one of their friendly guides.
And with every bottle comes the confidence
of knowing you just found something amazing.
Find what you love and love what you find,
only at Total Wine and More.
Curbside pickup and delivery available in most areas
visit TotalWine.com to learn more.
Spirits not sold in Virginia and North Carolina,
drink responsibly, be 21.
This message comes from Jerry.
Many people are overpaying on car insurance.
Why?
Switching providers can be a pain.
Jerry helps make the process painless.
Jerry is the only app that compares rates
from over 50 insurers in minutes
and helps you switch fast with no spam calls or hidden fees.
Drivers who save with Jerry could save over $1,300 a year.
Before you renew your car insurance policy,
download the Jerry app or head to jerry.ai slash NPR.
This message comes from Bombas.
You need better socks and slippers
and underwear because you should love what you wear every day.
One purchased equals one donated.
Go to bombas.com slash NPR and use code NPR for 20% off.
This is Fresh Air.
The rise of AI has had seismic implications for Hollywood.
Movie scripts can be written by bots
and one AI company has even created
a computer-generated actor.
But amid this transformation,
one director has created an art installation
that harkens to the old days of cinema.
In 2000, an unknown Mexican filmmaker made waves at Cannes
with a film about a car crash titled Amores Perros.
The director, Alejandro Iñárritu,
has now turned the film's extra footage
into an art installation.
Contributor Carolina Miranda reviews the show.
Walk into the first floor gallery
at the Los Angeles County Museum of Art
and you'll be forgiven for thinking
that you've wandered into the building's machine room.
The clatter of industrial appliances
makes normal conversation a challenge
and the room is hot, even a bit steamy.
But move deeper into the space
and you'll find that you're actually
in the middle of a movie.
Large projectors display looped scenes
on six screens staged around the room,
all featuring snippets from director Alejandro Iñárritu's
first film, Amores Perros,
which debuted to much acclaim in 2000.
On one screen, you catch a piece
of one of the movie's brutal dog fights.
On another, a hand reaches up a woman's skirt,
a car chase ensues, and a brutal crash.
Then that same crash plays again from another angle.
This is Sueño Perro, devised by Iñárritu
with the help of a robust production team.
The installation takes the unused scraps
of his groundbreaking film
and transforms them into an environment
that not only plunges the viewer right into the movie,
but into the act of filmmaking.
You see slates marking the beginning of the action.
You see takes and retakes.
Occasionally, the strips of colored film
at the end of a reel come into view,
casting an orange light on the room.
Sueño Perro, in Spanish, translates to dog dream.
And Iñárritu's installation certainly feels
like a dream of the original movie.
Fragmented, chaotic, out of order.
At times you hear the convulsive explosion
of the film's climactic car wreck,
sometimes that same crash occurs in eerie silence.
Like an actual dream, it's then up to the viewer
to make sense of what the bits might mean.
Like any movie, the images also function
as a timestamp of the past.
The old sedans look dated.
One of the film's stars, Gael García Bernal, is still a teenager.
And the Mexico City of the film is one that
has not yet been gentrified by the digital nomads
of the 21st century.
As Iñárritu writes in a book about the project,
a film is made of time and light.
But what makes Sueño Perro truly remarkable
is its analog nature.
Amores Perros was made before digital cameras
had completely transformed moviemaking.
Iñárritu, a storyteller who embraces excess,
shot a million feet of film to make the movie.
But the final cut, which clocks in at about two hours
and 30 minutes, used only about 13,000 feet of that footage.
That left about 187 miles of film on the cutting-room floor.
In an era when the word movie has come to mean a video
you can shoot and edit on your phone,
Sueño Perro is a reminder that films once carried physical weight,
a 35-millimeter reel weighs about five pounds,
and the average film was about two reels long.
The use of celluloid film also involves photochemical processing
and displaying the work requires large projectors
that generate heat and noise.
Making a movie is a creative process.
It used to be an industrial one, too.
Sueño Perro makes this industrial nature visible and visceral.
In the gallery, massive reels rotate on the large format projectors
typically used in old movie houses.
Long strips of 35-millimeter film travel through elaborate looping systems
that reach a height of more than six feet.
In addition, the designers have pumped a small amount of fog into the gallery,
making visible the beams of light projected onto each screen.
To enter the space isn't simply to be surrounded by the images of Iñárritu's movie,
but the mechanics that make it possible.
It's a reminder of all the physical things that have been lost to the immaterial pixel.
Vinyl records have given way to streaming, newspapers to websites and apps.
Directors used to haul around heavy reels to display at film festivals.
Now at most, they carry a small hard drive.
And as acts of creation have been turned over to artificial intelligence,
Sueño Perro stands as a reminder of what could go missing when you take out the human touch.
The physical world, full of love and pain, can be a really enthralling place.
Carolina Miranda reviewed Sueño Perro on view at the Los Angeles County Museum of Art through July 26th.
If you'd like to catch up on interviews you've missed,
like our conversation with Riz Ahmed on starring in the new series Bate,
as a British Pakistani actor whose audition to play James Bond
sends his life into a spiral, or with human rights lawyer Bryan Stevenson
about reflecting on the harsh truths of our nation's history.
Check out our podcast.
You'll find lots of Fresh Air interviews. And to find out what's happening behind the scenes of our show
and get our producers' recommendations on what to watch, read, and listen to,
subscribe to our free newsletter at whyy.org/freshair.
Fresh air's executive producer is Sam Briger.
Our technical director and engineer is Audrey Bentham.
Our engineer today is Adam Staniszewski.
Our interviews and reviews are produced and edited by Phyllis Myers,
Roberta Shorrock, Ann Marie Baldonado, Lauren Krenzel, Therese Madden, Monique Nazareth,
Susan Nyakundi, Anna Bauman, and Nico Gonzales Whistler.
Our digital media producer is Molly Seavy-Nesper.
Thea Chaloner directed today's show.
With Terry Gross, I'm Tonya Mosley.
Support for this podcast and the following message come from Green Chef.
Tired of locking in for another New Year's trend?
Take control of your health without the stress.
Green Chef's meal planning is low prep, low mess meals for every lifestyle.
Right now, go to greenchef.com slash fresh grasa and use code fresh grasa to get started with
50% off green chef and free grasa olive oil set in your second and third boxes.
This 50% off offer is only available for a limited time.
Greenchef.com slash fresh grasa.
This message comes from an NPR sponsor, Charles Schwab.
When is the right time to sell a stock?
How do you protect against inflation?
Financial decisions can be tricky, and often your own cognitive and emotional biases
can lead you astray.
Financial Decoder, an original podcast from Charles Schwab, can help.
Join host Mark Riepe as he offers practical solutions to help overcome the
cognitive and emotional biases that may affect your investing decisions.
Download the latest episode and subscribe at Schwab.com slash financial decoder.
This message comes from Alexa Plus.
The all-new Alexa Plus will do more so you don't have to.
Chat naturally while Alexa learns your style,
anticipates what's next, and makes it happen.
Free with prime on your Amazon devices.
Learn more at amazon.com slash Alexa Plus.