
On Friday, President Trump ordered federal agencies to stop using Anthropic’s A.I. systems and Defense Secretary Pete Hegseth designated the company a “supply chain risk.” Then, just a few hours later, the OpenAI chief executive, Sam Altman, announced that his company reached an agreement with the Pentagon. The deal ensures its technology won’t be used for the same two safety concerns Anthropic raised: domestic mass surveillance or autonomous weapons. So what is going on? Is this a political vendetta between the Pentagon and Anthropic? Or are there substantive differences between the agreement Anthropic was offered and the one OpenAI signed? We cut through the confusion.
Additional Reading:
We want to hear from you. Email us at [email protected]. Find “Hard Fork” on YouTube and TikTok.
Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Casey, where are you?
That beautiful background does not look like your house.
I'm in a ski chalet, in keeping with the Hard Fork tradition of recording bonus episodes in the strangest places possible.
But here's the good news, Casey, because while I was invited on a ski trip, I've never in my life had any intention to ski.
And so my plans for this morning were either to talk about AI with my fiancé or talk about AI with you.
And we flipped the coin, and it's you.
How are you doing this morning?
Wow.
I feel so honored.
Well, we have a lot to talk about today, because it has been a very crazy 48-hour period in the AI industry, and this dispute between the Pentagon and Anthropic. And now OpenAI, which sort of came out of nowhere at the 11th hour, is involved too.
It has been truly an insane day and a half in my life.
How has it been for you?
Well, let me put it this way.
Listeners, Kevin, imagine you get engaged, and then one week later your fiancé is declared a supply chain risk.
So yeah, it's been a really, really crazy few hours over here as well.
And just because we are going to talk about Anthropic and OpenAI and all of this today, we should make our AI disclosures.
Mine is that I work for The New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.
Yes.
And if you missed the other big breaking Anthropic story from over the past week: the man that I am now engaged to works there.
Well, where should we start, Casey?
Well, look, I think if you're tuning in, maybe you've heard the biggest headlines,
but I think it's worth hitting you with maybe just a few key bullet points.
One is that the story that we've been covering over the past couple of episodes has come to a point of crisis, where Anthropic said it had two red lines that it would not cross.
The Pentagon said that it was going to move to declare the company a supply chain risk.
And then somehow, within 24 hours of that happening, Sam Altman and OpenAI swooped in and signed a deal that they say will observe those safeguards.
And so it was just a truly chaotic 24 hours and we should dig into it.
Yes.
And none of this has been happening through like normal diplomatic channels, basically
as far as I can tell, the entirety of this conflict has been contained in like a handful
of posts on X and a handful of blog posts and some stuff that has been leaking out from
either side.
So I have been making calls for the last two days to the people who are involved in this
situation, trying to get some information.
And I've gotten a little bit and I'll happily share that with you.
But I would say confusion reigns. Even the people who are directly involved in this situation are confused about the details here.
And so I think we should also just say up front that like there is still a lot that
is unknown about what's going on right now.
Absolutely.
Maybe to start, Kevin, we could go back to a part of the story that I think is pretty well known, which is just sort of what happened between Anthropic and the Pentagon, particularly in those final hours, when the Pentagon finally said, hey, this isn't going to work.
We're not going to give you what you want.
And time ran out and they did not come to an agreement.
Yeah.
This escalation started on Thursday, February 26th, when basically there was a day left until this deadline that the Pentagon had given Anthropic.
And Dario Amodei, the CEO of Anthropic, put out a statement on Anthropic's website basically saying: we are not going to compromise, no matter what, on these two exceptions that we want, mass domestic surveillance and fully autonomous weapons.
He explained why they weren't going to compromise on those.
And then he said, in the line that a lot of people have been quoting, quote: these threats do not change our position. We cannot in good conscience accede to their request.
Basically, we have been trying to work out a deal while preserving these exceptions that
are very important to us, but we have not been able to do so.
And probably worth saying, Kevin, that I think a reason that quote stood out so much was
that I cannot remember any tech leader invoking conscience as a reason not to do something
since Trump has been reelected.
So it felt like a shift in tone for the whole discussion around tech and power and just
something we have not seen from Silicon Valley in a while.
Yes.
And what I understand from talking with folks close to the situation is that even after this post from Dario Amodei, there were discussions happening between the Pentagon and people from Anthropic.
They were trying to work out the contours of a deal.
There was some sort of willingness to at least change some of the language around these
exceptions.
But while these discussions are happening in the back channels between the officials at the Pentagon and the people at Anthropic, President Trump posts a statement on Truth Social late Friday afternoon, just before this deadline that the Pentagon had given Anthropic.
He said, quote, the United States of America will never allow a radical left woke company
to dictate how our great military fights and wins wars.
He also said that he was directing every federal agency in the United States government to immediately cease all use of Anthropic's technology, with a six-month phase-out period, basically, for federal agencies to switch from using Claude to other models.
One thing the president did not mention is this idea of declaring Anthropic a supply chain
risk, right?
This is something that we talked about on the last show.
Basically this is a much stricter designation, something that we don't think has ever been
applied to a major American company before.
It's usually used for Chinese chip suppliers or things like Kaspersky Lab.
But Trump did not say that he was going to designate the company a risk to the supply chain.
So I think some folks at Anthropic and elsewhere thought, okay, this is like a deal that we
can live with.
We are going to lose our government contracts, but we're not going to be declared essentially
like an enemy of the state.
And more than that, Kevin, he also did not invoke the Defense Production Act, right?
Which, like, to me was the true worst-case scenario here, where the United States government would effectively have nationalized, or partly nationalized, Anthropic and forced it to make a version of Claude that did its bidding.
So when I saw the Truth Social post, my initial thought was like, okay, maybe they're just going to walk away from this whole debacle and try to save some face.
Yes, it did look like that.
And then a little over an hour after Trump's Truth Social post, Pete Hegseth, the defense secretary, posted his own take on the matter on X, in which he said that he was directing the department to designate Anthropic a supply chain risk.
He said, quote, effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.
So this was a pretty severe escalation.
And the people who thought, okay, maybe Anthropic is going to, you know, get away here without being declared a supply chain risk thought, maybe they're not after all.
Yeah.
Now, at the moment of this recording, so far, the only evidence that we have that the
Pentagon plans to declare Anthropic a supply chain risk is this social media post, right?
Like my understanding is that Anthropic has not been informed of any new proceeding against
the company.
Anthropic says they would fight it in court.
So while this may happen, and we should talk about what it would mean if it does for
the moment, it also appears like it could just be a threat.
So meanwhile, while all of this is going on between Anthropic and the Pentagon, OpenAI has been working on its own deal with the Pentagon to use its models inside the government's classified networks.
There has been some reporting on a leaked message that Sam Altman had sent to OpenAI employees on Thursday, basically indicating that they were standing in solidarity with Anthropic, which is very unusual, because these companies do not like each other and their leaders have a long, contentious history with each other.
But basically, he was saying to OpenAI's employees: we are not going to sort of cave on these exceptions either; we are committed to not having our models used for mass domestic surveillance or fully autonomous weapons. And he was actually saying some sort of supportive things about Anthropic.
But a day later, on Friday night, after this whole deal between Anthropic and the Pentagon had blown up in spectacular fashion, Sam Altman went on X and posted that OpenAI had reached an agreement with the Pentagon to deploy, quote, our models in their classified network, basically saying: we have confidence that our models will not be used for domestic mass surveillance and autonomous weapons systems, and that the Pentagon had agreed with those principles and put them into the deal.
So those were the events of the past couple of days, and I think when I summarize them, it sounds insane, because what we effectively have are two companies, OpenAI and Anthropic, that claim to have identical red lines when it comes to the use of their products by the military: mass domestic surveillance and fully autonomous weapons.
One of them, Anthropic, has been declared a supply chain risk, which is a very punitive, harsh measure that basically requires them to cut off all business with the US military and the federal government.
The other, OpenAI, just announced a deal with the Pentagon to use its systems in classified networks, with the same two red lines that Anthropic had objected over.
There's some nuance there, there's some details that I'm sure we'll get into, but I think
if you just sort of zoom out and look at the facts of the case, it is a truly insane series
of events.
It is.
And I think we should just talk now, Kevin, about this nuance that you bring up.
You know, we said at the top of the show, there is some uncertainty here.
Kevin and I have not been allowed to review the contracts that Anthropic and OpenAI have with the military, although we would love to. We're hardfork@nytimes.com.
But I think what we can tell you is that it appears that this conflict comes down to this all-lawful-use standard, right?
Keep in mind, the Pentagon signed a deal with Anthropic that had in place the red lines
that it is now freaking out about.
It went back to its AI labs and it said, hey, we want to change this.
We want you to say we can use this for anything that is legal.
On paper, that sounds great.
Here's the problem.
We don't meaningfully regulate the use of AI in this country.
And as we talked about on the show in the past, we do not have a national privacy
law.
These are among the reasons that Anthropic has become very concerned about what powerful
AI systems might do if they were given to the military in a country where there are
not actually laws around how this powerful new technology can be used.
And I think the domestic surveillance one is a really interesting one, Kevin.
You know, the Pentagon has said, well, you know, we're not going to domestically surveil
people.
That's illegal.
Hmm.
There are other federal agencies right now that have mounted what amounts to a social
media dragnet looking through the social media posts of people trying to immigrate to
this country, trying to find posts that are critical of the administration and then using
that as a pretext not to allow them to immigrate, right?
Now, maybe the Pentagon will say, well, you know, that's not surveillance.
You know, that's just part of our immigration process.
But I think folks at Anthropic would say, well, no, no, no: if we hand over powerful tools that can go through every social media post in real time, that might be an area that we are uncomfortable getting into, right?
And so this is where I think we start to understand what is different between Anthropic
and OpenAI here, right?
It's that Anthropic has said, we're serious about this stuff. And I'm sure it's possible to write into a contract a little bit of legalese that gives a company enough cover to go back to its employees and say, hey, don't worry, we're not going to do anything untoward, while at the same time doing a little wink, wink, nudge, nudge to the Pentagon.
And the Pentagon could use these tools to do exactly what it's already doing with the social media accounts of would-be immigrants, right?
And so to me, that is what I see happening here and it seems like a significant part of
the conflict.
Kevin, I know you've been on the phone like all weekend.
What do you make of that analysis?
Yeah, I think that's largely my understanding. When he announced the agreement that they had made with the Pentagon, Sam Altman did put out a statement that left some room for interpretation, I think, on what OpenAI had actually agreed to.
So I will be very curious to see the actual language of these contracts if that ever makes
it out into public.
Again, we're hardfork@nytimes.com.
But what I can tell you from talking with folks on all sides of this over the past couple
of days is that OpenAI is framing this as essentially an identical set of constraints,
right?
They don't believe that they have agreed to anything that would require them to use their
models for mass domestic surveillance or for autonomous weapons.
But in his statement, Altman said that the Pentagon, quote, agrees with these principles,
reflects them in law and policy and we put them into our agreement.
So basically, if you kind of parse that very carefully, he is just saying sort of what
the Pentagon has been saying, which is that they're not going to do mass domestic surveillance
because it is illegal.
And what Anthropic has been insisting on this whole time is that actually there are forms
of mass domestic surveillance that are not illegal as the law is currently written.
And so we want to prohibit the use of our systems for that stuff too.
More than that, Amodei has also said that during their negotiations, Anthropic was offered similar concessions, but the Pentagon accompanied those proposed concessions with, quote, legalese that would have made them ineffective, which is entirely consistent with what the undersecretaries at this agency are saying on X, which is that they were not going to let any private company dictate how they wage war, right?
So I just think that's very important to say: Anthropic is telling us, hey, we were offered a very similar deal, and it did not protect you, as an American, in the way that OpenAI is now telling you that you are being protected.
Yeah, I mean, I think when you boil it all down, there are basically two options here.
One is that the administration and the Pentagon just have a political vendetta against Anthropic.
There's a bunch of language in the statements coming out of Pentagon officials' X accounts about how these are all, you know, a bunch of woke liberals who are unpatriotic.
And I think there is some sort of sense in which this is just about style and tone and personality. Emil Michael, one of the undersecretaries at the Pentagon who's been negotiating this deal, just clearly does not like Dario Amodei at all.
And I've heard that from multiple people actually that there's like particularly bad blood
between those two.
And so I think that's option one is like this is purely a political vendetta.
OpenAI has been chosen for this contract because the administration likes them more, and there are sort of no substantive differences between what these two companies have agreed to do.
The other option is that OpenAI has actually agreed to things that Anthropic didn't, that there are substantive differences between these agreements, and that OpenAI is sort of using this legalese, as you put it, to frame this as a victory when really they have conceded to the thing that Anthropic objected to.
I'm not sure yet which of those two is more true, but I don't think anyone in this situation
except maybe the secretary of defense knows.
Yeah.
You know, I mean, there are two really important things about what you just said, Kevin.
One is the idea that the federal government is trying to commit what Dean Ball, who was a member of the Trump administration and helped to write its current AI policy, called an attempted corporate murder just based on ideology.
Man, if you lived through the bias and censorship debates on social media of the early 2020s, it's really crazy to hear elected officials saying that because we have a different ideology than you, we are going to take your contract away, designate you a supply chain risk, and try to prevent other people from working with you, right?
So that is, honestly, Kevin, that is how the Chinese government regulates its tech companies.
Either you get on board with the party or they crush you, right?
So that I think is really chilling.
And again, not just to me, to former members of the Trump administration, okay?
That feels really important to say.
What do you think about that?
Yeah, I've been looking back through sort of historical examples of the US government taking
punitive actions against American companies.
And I think it's safe to say that this fight with Anthropic and the Pentagon is by a fairly
wide margin, the most punitive action that the US government has taken against a major
American company, at least this century and possibly ever.
We have seen this administration bully and strongarm and jawbone companies in the tech sector before. We have even seen them try to block certain companies from doing business with the government. But we have not seen them try to kill a company for what, as far as I can tell, are contractual disputes and ideological differences.
It's really crazy, but of course, this is why almost all of Silicon Valley has lurched
to the right over the past two years.
It's why Tim Cook is giving golden trophies to President Trump.
It's why Greg Brockman at OpenAI is donating $25 million to Trump's political action committee,
right?
There is this sense that you have to be in line with these people or they're going to try and crush you. Until now, though, we hadn't actually seen the Trump administration try to crush a company.
But now we have, and I just sort of can't imagine what kind of chilling effect that is
going to have across Silicon Valley.
Casey, I want to get your take on the employee activism that we've seen over the last
couple of days.
There was an open letter petition, whatever you want to call it, going around that was
signed by some employees of OpenAI and Google DeepMind and other leading AI companies basically
saying like we stand with Anthropic, we also do not want to make tools for mass domestic
surveillance and autonomous killing, and sort of expressing solidarity with the stance that Dario Amodei has taken.
Do you think that's meaningful?
Do you think that's part of what is fueling some of the decisions that these companies
are making?
Because that has been true in the past: employees at these companies have had a lot of leverage over things like military contracts.
I do think it is very meaningful.
There are a lot of very well-meaning people at OpenAI, at Google, at DeepMind, as well
as Anthropic who truly do not want to see the most dystopian possible AI scenarios come
to pass.
And so it matters that they are going to their leadership and saying we are not going
to participate in this.
I hope that those employees get a hold of the contracts that their employers are signing
and really scrutinize them.
I hope that they take note if they find out that their technology actually is being used
for something that looks pretty domestic surveillance-like, that they would blow the whistle.
We really are going to need to rely on these employees in the coming years as the technology
improves.
And as the Pentagon potentially does the thing that it is telling us today that it is
not going to do.
Yeah.
I think one other important thing to note here is that Sam Altman and OpenAI are trying
to very carefully explain this to their employees in a way that does not suggest that they are
just capitulating to the demands of the Pentagon.
OpenAI is saying to its own employees that they believe they got actually a stronger deal than the one Anthropic had, in terms of protecting against mass domestic surveillance and the use of their systems for autonomous weapons.
Several people pointed me to this sort of line in Sam Altman's post about this, about
how they were going to create what he called a safety stack.
Basically, a set of protections built into the model itself that the Pentagon is going to be using in classified situations, which would essentially prevent the use of ChatGPT, presumably, for the things that they're worried about.
Yeah.
By the way, this is the same company that told us it was going to build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston, Kevin.
So I'm just going to suggest that sometimes when OpenAI tells you it's going to build guard
rails, they don't actually show up on time.
Yeah.
I've also talked to people who say that this is basically security theater: that, you know, if you dump a bunch of data that you've collected on Americans or purchased from a data broker into an AI model, it is not going to be able to tell whether that information was legally gathered; it is not going to be able to tell where that information came from.
And so this is not really a meaningful change.
Yeah.
Let me underscore that point, Kevin, because it is so important.
It is legal for data broker companies to buy up data on millions of Americans.
And it is also legal for federal agencies to buy that data.
Now that does not constitute domestic surveillance to a legal standard, but it is functionally
equivalent, right?
So this is the whole ball game here, right?
The Pentagon already has all of the tools it needs to do what is practically domestic
surveillance.
It's just not called that because it's legal to buy data about Americans from data brokers.
So I understand we are so deep in the weeds here, but the reason we wanted to do this episode today is to try to persuade you that this is very high-stakes stuff, and it is being done in the shadows, and the nuances really, really matter.
Yeah.
I think the details and nuances are where the whole story lies right now and it's hugely
high stakes.
And so I think on the surface, this might look like some kind of boring contractual debate
between AI companies.
But this is really about the sort of fundamental question of who controls technology.
Is it the people who build the technology or is it the militaries and the governments
of the countries where that technology is built?
And I think that is sort of the high level question under debate here and it's one where
the Pentagon and Anthropic did not see eye to eye.
I mean, this story, Kevin, is the whole reason that you and I have just never been on the side of "AI is all hype and it's fake and it's a bubble that's about to collapse," right?
We saw these systems improving in real time.
We knew that very soon they would be in a position where they could do the sort of instant
analysis of things like social media data, geolocation data and other data that could
just potentially create massive new systems of oppression and we are now on the precipice
of those systems being potentially rolled out under the guise of a policy that is called
all lawful use because there is no law to regulate them.
So it really just could not be more serious and I'm glad we're getting a chance to talk
about it today.
I want to bring up one more thing though, which is the limb that Sam Altman may have just
crawled out on, right?
As I'm reading through his statement, I'm trying to square it with what I know, with what you were talking about earlier in this show. It's like, okay, so you're telling me that the same day the Pentagon tries to kick one company out for saying there are two things it will never do, it signs a deal with another company and agrees that it will never do those two things?
It's so hard to square that, right?
And yet you and I have both covered Sam for a long time, and we know that a criticism he has gotten from his former coworkers is that he tells people what they want to hear, right?
This was at the root of him being fired in 2023: his coworkers saying, this guy is telling me what I want to hear, he's not being consistently candid, and he's just sort of leaving me in this state of perpetual confusion.
And so now we fast forward to a moment that is so much higher stakes than that, right?
Because we have to take Sam Altman's word that he has signed a deal that will not enable
mass domestic surveillance of Americans in the short term and maybe autonomous murder
bots in the medium term, which is what?
I don't know, three years, five years, who knows.
So the reason that I note that, though, Kevin, is that in every case, it has always come out in the end what the truth was, right?
And I hope the truth here is that Sam got his red lines.
I hope the truth is that somehow he arm wrestled Pete Hegseth down and Pete Hegseth said,
okay, you got me, Altman, we're not going to do any domestic surveillance for real and
we're not going to do any autonomous murder bots for real.
My fear is though that either through naivete or deception, he has misled us and we are
going to find out sooner or later that in fact those two use cases are not only legal,
but they're happening.
Right.
I think that's still a big TBD and I would also like to know, Sam, if you're listening,
please come on and talk to us about this because I think there's still a lot of unknowns
here.
I would also bring up another point, which is that one of the big criticisms of Anthropic over the years has been about this idea of regulatory capture, right?
There are many people, including some very high up in the Trump administration, who believe that all of Anthropic's warnings and statements about the risks of powerful AI systems, the speed with which they're accelerating, the things that they could potentially do, have been kind of a pretext, right? That they're not actually sincere about this, that they're just trying to get a bunch of onerous regulation passed so that they can sort of enshrine their status as an incumbent and prevent smaller startups and others from competing with them.
So we've heard that term a lot, regulatory capture.
This, to me, is an example of regulatory capture, right?
This is a company, OpenAI, coming into a very hot dispute between its biggest rival and the United States government and effectively using what seem to be vibes, charm, and possibly some better political instincts to get a deal done through its relationships with the government.
So call it what you want: call it savvy politicking or negotiating, call it hair-splitting over the details of this contract. But this is effectively a company realizing that if it wants to do business with the US government, it has to essentially abide by the terms that the US government has set.
That is regulatory capture, as textbook an example as you're ever going to see.
Yeah.
So, where do we go from here, Kevin? I think there are a bunch of unresolved questions that I'm going to be looking at over the next few weeks and months.
One of them is like, what actually happens to this supply chain risk designation?
This is something that the Pentagon has said it's going to do to Anthropic, but we have not actually seen any formal language about that, other than Pete Hegseth's posts.
And we have also not fully understood what that actually would mean for Anthropic or
what kinds of relationships it would be forced to sever with various other government contractors.
So that's sort of one bucket of unknowns is like all the legal and contractual details
of this supply chain risk designation for Anthropic.
We also still have a lot to learn about what the other AI companies are being asked to agree to that Anthropic wouldn't, and what companies like OpenAI may have done to get their deal through while Anthropic's was being rejected.
And then I think there's a third bucket, which is like, what does this do to the popularity
of these companies with consumers?
I think we are starting to see very early signs that some consumers who are very upset
about the Pentagon's demands here are switching from ChatGPT to Claude.
One of those users appears to have been Katy Perry, the pop star, who posted a screenshot on X of her newly purchased Claude Pro plan, circled with a little red heart.
So Katy Perry really said to the Anthropic employees: those are my California girls, and they're unforgettable.
I should also say, I have to underscore that this is exactly the kind of moral conflict that Dario Amodei has been preparing for his entire life.
One of Dario's favorite books, a book that he used to buy for all Anthropic employees, is called The Making of the Atomic Bomb.
It's a very long history of the Manhattan Project during World War II.
And the reason that he wanted Anthropic employees to read this book is that he believed that
eventually what they were building, the AI models, the chatbots would become as important
to national security, to the government, to the future of the global order as nuclear
weapons.
And he wanted to sort of instill in them the idea that like they were doing something
with profound moral and ethical consequences.
He understood that it's not just about building technology; if you build something that is powerful enough, the government is going to want to use it, and they're going to want to use it on their terms.
And so I think this is exactly the shape of conflict that he was envisioning when he
was telling people to read this book about the Manhattan Project.
I think you're exactly right.
It has been so amazing, honestly, to watch how many predictions that were made by, like, the rationalists and the LessWrong community in the early 2010s have started to come true.
These sorts of conflicts between the government and the big AI labs were not predicted with any degree of specificity, but there was still a thought that we were going to get here.
And now it sort of seems like that moment has arrived.
I'm sure it must be extremely surreal to Dario as well as many other people who have been
working on this for a long time.
I just hope that we can navigate out of it safely.
Yeah.
Well, truly unprecedented 48 hours or so.
I'm sure a lot more is going to unfold in the days ahead.
And I'm sure we'll be returning to the subject here on Hard Fork.
But perhaps by then, I'll be out of this ski chalet.
Yeah, I hope you make it down safely.
And I think you should go skiing.
I know you're not a fan, but I think you should do it.
If you knew where my center of gravity was, you would know that Kevin Roose just tried to kill me live on air.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited by Viren Povitch.
Today's show is engineered by Katie McMurran.
Our executive producer is Jen Poyant.
Original music by Alyssa Moxley and Dan Powell.
Video production by Sawyer Roque, Pat Gunther, Jake Nicol, and Chris Schott.
You can watch this whole episode on YouTube at youtube.com/hardfork.
Special thanks to Paula Szuchman, Pui-Wing Tam, and Dalia Haddad.
You can email us at hardfork@nytimes.com with your AI headlines.
