
OpenAI CEO Sam Altman recently announced a formal partnership to integrate the company’s artificial intelligence into the Department of Defense's classified systems. This strategic move occurs as the Trump administration actively removes Anthropic’s technology from federal use following a dispute over service terms.
The new agreement includes specific safety protocols, such as maintaining human oversight for lethal force and banning domestic surveillance. Altman positioned this deal as a way to standardize military AI regulations and reduce legal friction between tech firms and the government. Ultimately, the shift establishes OpenAI as a primary technological partner for national security operations while marginalizing its competitors.
The United States government has officially designated
a leading American artificial intelligence company
a supply chain risk to national security.
Which is a label they usually save for foreign adversaries.
We are talking about the exact same classification
used for Huawei and ZTE.
The Department of War has effectively placed
a do-not-touch order on Anthropic.
That sounds like a lot more than just a canceled contract.
Oh, it is.
It's an absolute blacklist.
It prohibits military contractors, suppliers,
basically any partners from conducting commercial activity
with the company.
If the government can blacklist a domestic company
for refusing to remove safety guardrails from its software,
does the private sector actually retain any control
over how its technology is used in warfare?
That is the multi-billion dollar question here.
And this whole thing didn't start in some corporate boardroom.
It started with a specific military operation,
specifically the raid in Caracas
to capture Nicholas Maduro.
Right. Operation Absolute Resolve.
Exactly. According to reports,
the US military utilized Anthropic's model
Claude during the planning and execution of this operation.
Now Anthropic doesn't have a direct contract
to sit in the situation room.
The integration happened through Palantir.
We should probably clarify Palantir's role here,
because they aren't just a simple database company.
No, no. Think of Palantir as the operating system
for modern warfare.
They have this platform called the Maven Smart System.
It ingests satellite imagery, drone feeds,
intercepted communications,
basically all the messy data of war.
And it synthesizes it into a coherent picture
for the commanders on the ground.
Right. And they were running Claude inside that ecosystem
to help process all that information.
The mission itself was technically a success.
I mean, the administration got what they wanted.
But the friction actually started afterward.
Yeah. It started because Anthropic asked a question.
Following the operation,
company leadership allegedly inquired
whether their technology had been utilized in the raid.
And that inquiry triggered an immediate alarm at the Pentagon.
Why, though?
I mean, if I sell a product,
asking how it was used seems pretty standard.
Not when you're dealing with classified operations.
But Anthropic felt they had to ask
because of constitutional AI.
Let's use plain English for that.
What exactly is constitutional AI?
Sure. So most AI models are trained
on massive amounts of internet text
and then just fine-tuned to be helpful.
Anthropic takes a totally different approach.
They give the model a constitution,
a specific set of ethical principles
that it has to follow.
Like rules against generating harm or things like that.
Exactly. Rules like do not help kill people,
do not violate privacy, do not create hate speech.
The model essentially checks its own output
against this constitution before it even responds.
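For readers who want to see the pattern concretely, the self-check loop described here can be sketched in a few lines of Python. This is a toy illustration of the general "check the draft against the principles, then refuse or answer" idea only, not Anthropic's actual training pipeline; the `violates` keyword heuristic and the principle list are hypothetical stand-ins.

```python
# Toy sketch of a constitutional self-check loop (illustrative only).
# A real system learns these principles during training; this just shows
# the "draft -> check against constitution -> respond or refuse" pattern.

CONSTITUTION = [
    "do not help kill people",
    "do not violate privacy",
    "do not create hate speech",
]

def violates(response: str, principle: str) -> bool:
    """Hypothetical check: does the draft response break a principle?
    A naive keyword match stands in for a learned critique model."""
    keyword = principle.split()[-1]        # e.g. "people", "privacy"
    return f"[{keyword}]" in response      # placeholder heuristic

def constitutional_respond(draft: str) -> str:
    """Return the draft only if it passes every principle; else refuse."""
    for principle in CONSTITUTION:
        if violates(draft, principle):
            return f"Refused: response conflicts with '{principle}'."
    return draft

print(constitutional_respond("Here is the weather forecast."))
print(constitutional_respond("Targeting data on [privacy] subjects."))
```

The point of the sketch is the control flow: the model's own output is filtered through its principles before anything reaches the user, which is exactly why the company later needed visibility into how the output was used.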
So to know if the model actually
followed its own constitution,
the company needs to know what it was doing.
Yes, they needed a degree of transparency
to ensure their ethical guidelines aren't being violated.
And the Pentagon viewed this request
for transparency as a massive liability.
Because it's an operational security issue for them.
Right. They saw a private vendor in San Francisco
claiming they had the right
to audit a military operation.
To the generals, that just looks like a vendor
trying to exercise veto power over national security decisions.
They see a tool they bought
and the toolmaker wants to verify
if it was used to hurt someone.
And that friction led directly to the ultimatum.
The Pentagon, led by Secretary Pete Hegseth,
issued a directive to all AI vendors.
The rule was very simple.
AI vendors must allow their tools to be used
for all lawful purposes.
All lawful purposes.
That phrase does a lot of heavy lifting.
But Anthropic refused to agree to that broad language.
They did. They insisted on maintaining two
very specific red lines in their contract,
regardless of whether the government called it lawful or not.
What were the two lines?
First, the AI cannot be used
for mass domestic surveillance of American citizens.
Okay. That seems straightforward.
And the second.
Second, it cannot be used for fully autonomous weapons.
Whoa, hold on. Let me just make sure I have this straight.
What is the specific definition of an autonomous weapon
in this context?
We are talking about systems that select
and engage targets without human intervention.
In military terminology,
it's lethal autonomous weapons systems.
Meaning the software processes the sensor data,
identifies a human being as a target,
and authorizes the strike.
Yes.
All without a person ever pressing a button.
So Anthropic basically said,
we will not build the software that decides who dies.
Correct.
Yeah.
The government's counterargument is that lawful use
already covers those concerns.
They argue that if an action is legal under the constitution,
a private company has no right to block the government
from using its procured tools to execute it.
But wait, if the Pentagon says lawful use
is the only standard they will accept,
and they refuse to sign a ban on autonomous weapons,
are they admitting they plan to use them?
That is the real gray area.
The Pentagon spokesperson stated
they have no interest in mass surveillance
or autonomous weapons without human oversight right now.
They actually called those fears fake.
However, they refuse to sign a contract
that explicitly forbade it.
Exactly.
Their position is entirely about operational flexibility.
They argue they cannot have a terms of service agreement
overriding a commander's decision
in the field five or 10 years from now.
And Anthropic is saying the technology
just isn't safe enough for that.
That is their core argument.
They point out that current, large language models
hallucinate, they make up facts.
They misinterpret context entirely.
Which is a known issue across the industry.
Yes.
So Anthropic believes relying on them
for autonomous killing is reckless.
And mass surveillance violates fundamental rights.
They view these strictly as safety issues.
While the administration argues
that lawful is the only standard that matters,
and woke corporate policies cannot constrain the military.
Exactly.
Okay, let's pause here to reset the pace.
Usually when a company says no to the government,
they just lose the contract.
They don't get the money.
Right, you walk away, you lose the revenue,
maybe your stock dips a bit,
but you just continue doing business with everyone else.
But the administration didn't just cancel the contract
in this case.
They went much further.
They absolutely did.
President Trump ordered all federal agencies
to immediately cease using Anthropic's technology.
And then Secretary Hegseth followed up
by designating the company a supply chain risk
under 10 U.S. Code Section 3252.
I want to focus on that specific statute.
10 U.S. Code Section 3252.
What does that actually mean?
This is a statute designed specifically
to prevent espionage and sabotage by foreign enemies.
It is the exact legal tool used to ban Huawei and ZTE
because of fears they were funneling data
straight to the Chinese Communist Party.
So it implies the company is an active threat
to the integrity of the nation's defense.
Yes.
And applying it to an American firm for a contract dispute
is entirely unprecedented.
It acts as a corporate death penalty in the federal sector.
Because it's not just the Pentagon
that has to stop using them now.
Exactly.
It effectively forces any company
that wants to work with the military, companies
like Boeing, Lockheed Martin, or Palantir,
to strip Anthropic out of their own internal workflows.
So if I am an engineer at Lockheed Martin,
and I use Claude to write code or summarize
technical documents in my daily work,
you are now a liability.
Lockheed Martin cannot risk its massive government contracts
by harboring a designated supply chain risk
in its software stack.
They have to rip it completely out.
There's a massive contradiction in the order itself, though.
It claims Anthropic is a security risk,
but simultaneously mandates they continue providing services
for a six-month transition period.
Which really highlights that this isn't about espionage.
Think about it.
If Huawei was actively spying on the Pentagon,
you wouldn't say, OK, keep the routers plugged in
for six more months while we find a replacement.
Right, you would cut the line immediately.
You would sever it that second.
So that tells us they need the tech.
They absolutely need the tech.
They just don't want the rules that come with it.
Keeping them on for six months
admits that the government is highly dependent on these models.
So this designation is just a power play.
Completely.
It threatens Anthropic's entire enterprise ecosystem
and its projected IPO.
It signals to the rest of Silicon Valley
that non-compliance comes with an existential cost.
And almost immediately, another company
stepped in to take advantage.
Yes.
Hours after the ban on Anthropic was announced,
OpenAI CEO Sam Altman announced a new agreement.
The pivot was instant.
OpenAI stepped right into the vacuum.
They announced an agreement to deploy OpenAI models
on the Department of War's classified networks.
Here's the part that is really confusing to me.
OpenAI claims they have the exact same red lines
as Anthropic regarding surveillance and autonomous weapons.
Ostensibly, yes.
Their safety guidelines prohibit the exact same things.
So if they have the same red lines,
why did the Pentagon ban one and sign the other?
Why is Anthropic a security risk
and OpenAI a trusted partner?
It comes down to the mechanism of enforcement.
Anthropic wanted strict contractual prohibitions.
They wanted a written veto in the deal
that they could enforce legally.
OpenAI adopted a layered approach instead.
What does a layered approach actually mean in this context?
It means they are relying on a cloud-only deployment
structure and the government's own interpretation of the law.
OpenAI agreed to the all lawful use standard.
So they are effectively betting that the government
won't define mass surveillance or autonomous weapons as lawful.
Correct.
Altman stated that the department
agrees domestic surveillance is illegal,
so they didn't need to fight over it.
OpenAI integrated their safety stack into the deployment
rather than demanding external oversight of specific missions.
So Anthropic wanted the government to sign a paper
saying we promise not to do X.
OpenAI said we know X is illegal,
so we don't need you to sign a paper.
Essentially, yes.
They gave the Pentagon the lawful use language they demanded
while assuring the public that their technical architecture
prevents the bad stuff.
It sounds like OpenAI gave the Pentagon an optical win.
And a contractual one.
The Pentagon gets to say we don't bow to terms of service
and OpenAI gets the massive contract.
During this entire fallout, the administration publicly
attacked Anthropic as a radical left and woke company.
The record really contradicts that characterization, though.
Anthropic is funded by major corporate players
like Amazon and Google.
Their CEO, Dario Amodei, has publicly stated
that AI is existentially important for national defense.
He's not exactly a pacifist.
Far from it, he has explicitly stated
he supports helping the US military defeat
autocratic adversaries.
He even admitted in interviews that autonomous weapons
might eventually be necessary.
Wait, really?
So what is his actual objection?
His objection is entirely that the current technology
isn't safe enough yet.
He argues that LLMs hallucinate and make unpredictable errors
and therefore shouldn't be making kill decisions today.
That seems like a technical argument, not a political one.
Exactly.
It suggests the woke label is just political cover.
The real issue here is the sovereign AI doctrine.
Meaning what?
The administration is establishing that the state,
not private labs, is the final arbiter of how technology is deployed.
It is a fundamental shift from a safety first model
to a deployment first mandate.
They are worried about falling behind.
They are looking straight at China.
The argument is that if American companies are permitted
to constrain the military with safety guardrails,
the United States will lose the advantage
to an adversary that operates without any such constraints.
They want the tech to be totally subservient
to national security objectives.
This supply chain risk designation.
Does it stop at the Pentagon?
No.
The General Services Administration
has already terminated Anthropic's OneGov deal.
That means agencies totally outside the military
like the Department of Energy or the EPA
might have to stop using Claude.
It effectively chills the entire market for them.
It forces every company to make a hard choice.
If you want to do business with the US government in any capacity,
you cannot use the risk-designated software.
It consolidates power strictly around the vendors
who agree to the government's terms.
To wrap this up, Anthropic held the line on their terms of service
and was designated a national security risk.
OpenAI aligned with the government's legal framework
and became the primary partner for the military.
The main takeaway here is that the Silicon Valley consensus,
this era where companies felt they could dictate ethical terms
to the government, is effectively dead.
The government has demonstrated it is entirely
willing to use the full weight of executive power
to ensure AI companies are subservient
to national security objectives.
If you're not subscribed yet, take a second and hit Follow
on whatever app you're using.
It helps us keep making this.
We appreciate you being here.



