
This is what President Trump had to say about why the United States is at war with Iran.
We sought repeatedly to make a deal.
We tried.
They wanted to do it.
They didn't want to do it.
Again, they wanted to do it.
They didn't want to do it.
They didn't know what was happening.
Not the best explanation for a war of choice, sir.
I'm personally a do my own research kind of guy, but let's ask AI why we're at war with
Iran.
The United States attacked Iran in 2026 because it claimed Iran posed an imminent threat,
particularly due to Iran's advancing nuclear program and missile capabilities, and aimed
to reduce Iran's ability to project power in the region.
Wow, that was a better explanation.
Thanks, chat.
Fitting that AI was more clear than the president of the United States because it turns out the
United States is using AI to fight the war in Iran.
The future of war is AI, and that future is now here.
I'm Sean Rameswaram, and that's coming up on Today, Explained from Vox.
You're going to prison one time, and suddenly it's all the jokes.
RJ Jekker, a series premiere, Tuesdays on ABC, and stream on Hulu.
Kayak gets my flight hotel and rental car right, so I can tune out travel advice that's
just plain wrong.
Stop taking bad travel advice.
Start comparing hundreds of sites with Kayak, and get your trip right.
Kayak got that right.
Paul Scharre knows a lot about AI and how our military is using it.
He's the author of Four Battlegrounds: Power in the Age of Artificial Intelligence.
We've seen a trajectory of the military adopting AI tools over the last decade, as AI has
continued to progress.
What's newer are large language models, like ChatGPT and Anthropic's Claude, which it's been
reported the military is using in operations in Iran, and so that's a pretty significant
development that we're seeing.
The people want to know how Claude or ChatGPT might be fighting this war. Do we know?
War in Iran?
That's a great idea.
Let me help you with that.
Well, we don't know yet.
We can make some educated guesses based on what the technology could do.
AI technology's really great at processing large amounts of information.
I literally love processing.
The US military's hit over 1,000 targets in Iran.
As you see very well, they have no Navy, it's been knocked out.
They have no Air Force, it's been knocked out.
They have no Air Detection, it's been knocked out, their radar has been knocked out.
They need to then find ways to process information about those targets.
So satellite imagery, for example, of the targets they've hit, to assess whether everything's
been knocked out.
Looking at new potential targets, prioritizing those, processing information, and using
AI to do that at machine speed, rather than human speed.
Humans so slow. Mwa-ha-ha. Cheers.
Do we know any more about how the military may have used AI in, say, Venezuela on the attack
that brought Nicolás Maduro to Brooklyn, of all places?
Because we've recently found out that AI was used there too.
So what we do know is that Anthropic's AI tools have been integrated into the US military's
classified networks.
And so they can process classified information, they can process intelligence to help plan operations.
From writing emails to raiding enemy capital cities: the Wall Street Journal reports that
the Pentagon used Anthropic's AI model Claude as part of its operation to capture Venezuelan
President Nicolás Maduro.
There's no suggestion that Claude was actually firing any of the missiles or manning any
of the machine guns.
Yeah, we've had these sort of tantalizing details, okay, that these tools were used in
the Maduro raid.
We don't know exactly how.
So we've seen AI technology in a broad sense used in other conflicts as well, in Ukraine,
in Israel's operations in Gaza, to do a couple of different things.
One of the ways that AI is being used in Ukraine in a different kind of context is putting
autonomy on to drones themselves.
The drone now flies on autopilot mode using our software.
We assigned it with a mission and it built its own flying route, giving the munition instructions
on where it needs to go and what it needs to look for.
And so when I was in Ukraine, one of the things that I saw Ukrainian drone operators and engineers
demonstrate is a little box like the size of a pack of cigarettes that you could put
on to a small drone that would enable that once the human locks on to a target, the drone
can then carry out the attack all on its own.
And that has been used in small ways, not necessarily widespread use in Ukraine
today.
So we're seeing AI begin to creep into all of these aspects of military operations in intelligence,
in planning, in logistics, but also right at the edge in terms of being used where drones
are completing attacks.
Okay, so we know a little bit more about how this technology was used in Ukraine.
How about with Israel and Gaza?
So there's been some reporting about how the Israel Defense Forces have used AI in Gaza.
Not necessarily large language models, but machine learning systems that can synthesize
and fuse large amounts of information: geolocation data, cell phone data, connection data,
social media data. They bring this together and process all of the information very quickly
to develop targeting packages, particularly in the early phases of Israel's operations.
These suggest specific possible targets, possible munitions, warnings.
This system produces targets in Gaza faster than a human can.
So it raises thorny questions about human involvement in these decisions.
And one of the criticisms that had come up was that humans were still approving these
targets, but that the volume of strikes and the amount of information that needed to be
processed was such that maybe human oversight, in some cases, was a little bit more of a rubber
stamp.
The question is where does this go?
And are we headed on a trajectory where, over time, humans get pushed out of the loop and
we see down the road fully autonomous weapons that are making their own decisions about
whom to kill on the battlefield?
That's the direction things are headed.
So no one's unleashing the swarm of killer robots today, but the trajectory is in that direction.
And maybe I'll make a comparison here to self-driving cars where car companies can map the environment
down to the centimeter.
They know the height of the curbs.
They know where the stoplights are.
They can test self-driving cars in the actual environment they're going to be in.
And when they're doing something weird that doesn't work, they can update the algorithm.
We don't know where future wars are going to be fought.
It's an adversarial environment.
We don't know what the enemy's going to do.
I mean, the US military is finding this out right now in its operations against Iran.
They're retaliating against US bases, against Gulf states, against Israel, using drones and
missiles.
And now we're in a phase in the Iran conflict where things become super unpredictable.
People do an okay job of adapting to unpredictability.
AI is not so great and sometimes does some strange things.
Because you drew a parallel to self-driving cars, we've made an episode about self-driving
cars before in which I think our guests said something like, well, if you're worried about
self-driving cars, you know what you should really be worried about is humans.
Speaking about Iran, we saw reports that a school was bombed in Iran where maybe 160 people
were killed, a lot of them young girls, children.
Presumably that was a mistake made by a human.
Do we think that autonomous weapons will be capable of making that same mistake or will
they be better at war than we are?
This question of will autonomous weapons be better than humans or not is like one of
the core issues of the debates surrounding the technology because the proponents of autonomous
weapons will say, look, people make mistakes all the time.
And machines might be able to do better.
Part of that depends on how much the militaries that are using this technology are trying really
hard to avoid mistakes.
If militaries don't care about civilian casualties, then AI can allow militaries to simply
strike targets faster, in some cases even commit atrocities faster.
But if avoiding mistakes is what militaries are trying to do, I think there is this really
important potential here to use the technology to be more precise.
And if you look at the long arc of precision-guided weapons, let's say over the last century
or so, it's pointed towards much more precision in warfare.
So if you look at the example of the US strikes in Iran right now, it's worth contrasting
this with the widespread aerial bombing campaigns against cities that we saw in World War
II for example.
Where whole cities were devastated in Europe and Asia because the bombs just weren't precise
at all.
And so air forces dropped just massive amounts of ordnance to try to hit even a single
factory.
The possibility here is that AI could make it easier over time for militaries to
hit military targets and avoid civilian casualties.
Now if the data is wrong and they've got the wrong target on the list, they're going
to hit the wrong thing very precisely and AI is not necessarily going to fix that.
On the other hand, I saw a piece of reporting in New Scientist that was rather alarming.
The headline was: AIs can't stop recommending nuclear strikes in war game simulations.
I don't know if you saw that one.
They wrote about a study in which OpenAI, Anthropic, and Google models opted to use nuclear
weapons in simulated war games in 95% of cases, which I think is slightly more often than we
humans typically resort to nuclear weapons.
Should that be freaking us out?
It's a little concerning.
It's a little concerning.
Like, I think, happily, as near as we can tell, no one is connecting large language models
to decisions about using nuclear weapons.
But I think it points to some of the strange failure modes of AI systems.
So they tend towards sycophancy.
They tend to simply just agree with everything that you say.
I think anyone that's interacting with some of these models has seen this.
They can do it to the point of absurdity sometimes, where, you know, the model says,
oh, that's brilliant.
That's a genius thing.
War in Iran.
That's a great idea.
Let me help you.
You know, you're like, I don't think so, and that's a real problem when you're talking
about intelligence analysis.
Do we think, like, ChatGPT is telling Pete Hegseth that right now?
I mean, I don't know.
I hope not.
But, you know, his people might be telling him that.
So you've got this ultimate Yes Men phenomenon with these tools, where it's not just that
they're prone to hallucinations, which is just a fancy way of saying they make things up sometimes.
But also the models could really be used in ways that either reinforce existing human biases,
reinforce biases in the data, or that people just trust them because there's sort
of this veneer of: oh, the AI said this.
So it must be the right thing to do.
People put faith in it and, you know, we really shouldn't.
We should be more skeptical.
Be more skeptical, says Paul Scharre.
He's the executive vice president at the Center for a New American Security.
There are two big stories right now in the world of AI and war.
One is the one we just talked about.
The other is the drama between Claude and Pete.
That drama is forthcoming on Today, Explained.
Work, school, chores, bills, those are just a few of the things that act like energy
vampires throughout your day.
It can be hard to try and get everything done when you're running on empty.
That's why there's IM8 Daily Ultimate Essentials.
It's an all in one wellness drink that gives your body the support it needs without juggling
a bunch of different supplements.
IM8 Daily Ultimate Essentials is the go-to for getting the benefits of 16 different
supplements in one tasty drink, co-founded by David Beckham and crafted with insight
from experts at Mayo Clinic, Cedars-Sinai, and a former NASA chief scientist.
It simplifies your wellness routine and is loaded with 92 nutrient-rich ingredients
such as vitamins, minerals, adaptogens, CoQ10, MSM, and pre-, pro-, and postbiotics.
Plus, it's vegan, gluten-free, and non-GMO.
Feel your best self every day with IM8.
Go to im8health.com/explained and use code explained for a free welcome
kit, five free travel sachets, plus 10% off your order.
That's I-M, the number 8, H-E-A-L-T-H dot com slash explained, code explained, for a free
welcome kit, five free travel sachets, plus 10% off your order.
im8health.com/explained, code explained.
These statements have not been evaluated by the Food and Drug Administration.
This product is not intended to diagnose, treat, cure, or prevent any disease.
Support for Today, Explained comes from Rippling.
No one likes running a bunch of disconnected tools to do simple tasks.
So if your company is using an all-in-one platform, it should actually be able to do it
all.
Rippling says that their platform can do it all.
It's a unified platform for global HR, payroll, IT, and finance.
With Rippling, they say, workflows that normally bounce across multiple tools and departments
can all just happen in one place, automatically.
Say an employee gets promoted or moves.
Rippling can update payroll taxes, hand out new app permissions, ship a new laptop,
issue a new corporate card, and assign required manager training, all in one place,
without you having to put in the legwork.
With Rippling, you can run your entire HR, IT, and finance operations as one,
or pick and choose the products that best fill the gaps in your software stack.
So if you or your company want to run the backbone of your business on one unified platform
with people at the center, you can go to Rippling.com/explained and sign up today.
That's R-I-P-P-L-I-N-G dot com slash explained to sign up.
Support for today explained comes from Bombas.
Perhaps you want to get in shape this year.
Bombas wants to tell you about the all-new Bombas sports socks engineered with sport
specific comfort for running, golf, hiking, skiing, snowboarding, and all sport.
Meanwhile, for the loungers among us, Bombas has non-sport footwear available.
But Bombas doesn't just offer sport and non-sport socks.
They also offer super-soft base layers that they claim will have you rethinking your whole
wardrobe: underwear and t-shirts that are flexible, breathable, buttery-smooth premium
everyday go-tos they say you won't want to leave the house without.
Here's Nisha Chital.
I've been wearing Bombas for several years now.
I have several pairs.
My whole family loves to wear Bombas.
I have several pairs of Bombas ankle socks.
And I have some no-show socks as well that are great for things like loafers and ballet flats.
For every item you purchase, Bombas says an essential clothing item is donated to someone facing housing insecurity.
One purchased, one donated, over 150 million donations and counting.
I'm told you can go to Bombas.com/explained and use code explained for 20% off your first purchase.
That's B-O-M-B-A-S dot com slash explained, code explained, at checkout.
The all new 2026 Toyota RAV4 is here, building on everything drivers know and love about Toyota.
With a redesigned look and modern tech that makes life behind the wheel easier than ever,
the new RAV4 comes standard as a hybrid,
providing smooth, efficient performance for both city streets and longer journeys.
Enjoy the legendary reliability Toyota is known for in the all new 2026 RAV4.
Learn and shop more at toyota.com. Toyota.
Let's go places.
This is today explained.
Pete Hegseth, our Secretary of Defense, and Claude, Anthropic's large language model,
got in a big fight last week. We asked Axios tech policy reporter Maria Curi what happened.
So this actually goes back to before the Pentagon-related dispute. You know,
you have the CEO of Anthropic, Dario Amodei, really positioning himself as the safety-first CEO.
One way to think about Anthropic is that it's a little bit trying to put bumpers or guardrails
on that experiment, right? Because if we don't, then you could end up in the world of like the
cigarette companies or the opioid companies where they knew there were dangers and they didn't
talk about them and certainly did not prevent them. And he has been very vocal. He's posted on
X and talked a lot about how he does think there has to be a federal standard to regulate artificial
intelligence and that kind of put him at odds with David Sachs, the guy that's running AI for
President Trump in the White House. They've gotten into, you know, Twitter spats before.
And so it was kind of a long time coming before this Pentagon thing blew up.
This is essentially a situation where the Pentagon for a while has been trying to negotiate terms
with all of the AI labs to bring them into their classified systems.
The standard is "all lawful purposes," and Anthropic had kind of said, you know,
there are two specific scenarios in which we are not comfortable with the all-lawful-purposes
standard. The first one is this issue of domestic mass surveillance, and the second one is
autonomous weapons. It doesn't show the judgment that a human soldier would show: friendly fire,
or shooting a civilian, or just the wrong kind of things. We don't want to sell something that we don't think is
reliable and we don't want to sell something that could get our own people killed or that could
get innocent people killed. That was not taken well by the Pentagon. Defense Secretary Pete Hegseth
is demanding that San Francisco-based Anthropic drop a number of safeguards or risk losing its
$200 million contract. We do have a statement from the Pentagon, and they're telling us that they
are currently, quote, reviewing its relationship with Anthropic, saying, quote, our nation requires
that our partners be willing to help our warfighters win in any fight. We've been talking to senior
officials throughout this reporting process and they really view it as a private company telling
the government how to protect the country and how to do national security and conduct operations
and essentially, what we know is that there were phone calls happening between the Pentagon and
Anthropic, nailing down final language around this contract, when all of a sudden Pete Hegseth tweeted
that he would be designating Anthropic a supply-chain risk. Effective immediately, no contractor,
supplier or partner that does business with the United States military may conduct any commercial
activity with Anthropic. President Trump posted on Truth Social.
The left-wing nut jobs at Anthropic have made a disastrous mistake trying to
strong arm the Department of War and force them to obey their terms of service instead of our
constitution. Their selfishness is putting American lives at risk. Our troops in danger and our
national security in jeopardy. And the entire federal government was going to have to get rid of
Anthropic. Essentially, Anthropic had been asking, for commercially acquired information and data,
for there to be a prohibition on that collection in the Pentagon contract, and this goes to the
concern around domestic mass surveillance. The idea here is that, according to Anthropic, the law
has not caught up to artificial intelligence, and you could have a situation where it's perfectly
legal for the Pentagon to collect commercially acquired information. That could include, you know,
financial information purchased from data brokers, web browsing data, and beyond that, voter registration
rolls, social media posts, whether or not you attended a protest, concealed carry permits.
There's all sorts of data out there that the government can collect in a perfectly legal way
and you could see how artificial intelligence could make it much quicker much more efficient
to have a continuous collection of that data to really pinpoint and target individuals.
That was a concern and so they were asking for this specific language and they thought they were
about to get it when all of a sudden Pete Hegseth posted on X. Why did they think they were going to
get it? Well, you know, they thought that this was going to be the language: commercially
acquired information coupled with "all lawful purposes." They thought that was just going
to be enough, but the Pentagon actually came back and said, no, that's not something that we are
comfortable doing. Which raises the question: how did this OpenAI deal then pass muster?
Oh, that's a spoiler, because what happens is the Pentagon drops Anthropic on Friday evening,
and then within, what, like, minutes, they pick up OpenAI? That's right. So they pick up OpenAI
for a contract that very quickly, like, you know, everybody was poking holes in on X.
I don't see this as a meaningful improvement to the contract; there still seem to be some
big shortcomings slash loopholes. I agree it's better, but I think the government can drive a
truck through the intentionality language. And we heard from, you know, people familiar with the
negotiations too: this isn't going to actually prevent domestic mass surveillance from happening;
it's still too risky. So you had Sam Altman on X trying to field all of this criticism. He, you know,
he did an Ask Me Anything on Saturday night, where he had thousands and thousands of questions from
people trying to get answers: how did you go from a tool for the betterment of the human race to
let's work with the Department of War? If the government comes back with a memo saying that, in their
view, mass domestic surveillance is legal, do you do that? Were the terms that you accepted the same
ones Anthropic rejected? And so you fast forward to Monday, and you have Sam Altman saying, okay, we've
gone back to the drawing board. We shouldn't have rushed to get this out on Friday; we were genuinely
trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic
and sloppy. We need to essentially add some language to this contract to give people more
assurances that we are not going to conduct domestic mass surveillance. And what they added was
that commercially acquired information cannot be collected, that it is prohibited, which is the
exact wording that Anthropic was looking to have in their contract. So, like so many other things with
this administration, this ends up feeling rather confusing and inconsistent, because they bail on
Anthropic because Anthropic has these ideals, these standards; they bounce to OpenAI; but OpenAI is
trying to work out a deal with the same exact standards, basically. Well, now that we have the specific
language and the legalese, it's looking like it's the exact same standards. You know, we've also heard
from the Pentagon, from Pentagon officials, saying, like, we were able to do this with Sam Altman,
he's reasonable, this was a reasonable negotiation, and Anthropic has personal vendettas. And so, to
your point about inconsistencies, absolutely, personalities are a factor here, and it's not all just
going to come down to legalese and these two standards. Did the Pentagon just go exclusive with
OpenAI and Sam Altman? Because there's been reporting that Anthropic was actually used in these
attacks on Iran that followed this drama we had last Friday. Yeah, so Anthropic is the longest-standing
AI model being used in the Pentagon for classified purposes. We've established that
it was used in the Maduro raid. We've established that it was used in the Iran raid. They're very
useful to the Pentagon. You know, you have senior defense officials describing how much of a
pain in the ass it would be to actually get rid of Anthropic. And yet, reportedly, they didn't.
No, they haven't yet. They were given this six-month off-ramp for Anthropic to be phased out
and for another AI lab to be phased in. I think right now people are having these questions of:
was this all just Sam Altman trying to elbow out his competitor from the Pentagon? I think it's too soon to
tell. So I think what this tells us is that in the absence of a law that actually contemplates
artificial intelligence, we are left, as a broader country and society, relying on either
Pete Hegseth's Department of War deciding how this technology is going to be used,
or any one individual company. And Anthropic, at the end of the day, is a company.
And so you have all of these different parties, all these companies, also saying: we
actually do think that a law should be passed; we would love for Congress to actually just set the
rules of the road, because we have our own competitive pressures that we're also dealing with.
Now, whether or not Congress is going to pass a law around this, I don't know. They've been
asleep at the wheel on almost everything. So Congress has been asleep at the wheel on almost
everything, says Maria Curi from Axios. Peter Balonon-Rosen and Hady Mawajdeh produced our
show today. Jolie Myers edited. Patrick Boyd and David Tatashore mixed. Andrea López-Cruzado was
on the fact check. I'm Sean Rameswaram. This is Today, Explained.
Rinse knows that greatness takes time, but so does laundry. So Rinse will take your laundry and
hand-deliver it to your door, expertly cleaned, and you can take that time pursuing your passions.
Time once spent sorting and waiting, folding and queuing, now spent challenging and innovating
and pushing your way to greatness. So pick up the Irish flute, or those calligraphy pens, or that
daunting beef Wellington recipe card, and leave the laundry to us. Rinse. It's time to be great.



